Science.gov

Sample records for 3d face representations

  1. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
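
    A minimal illustrative sketch of the matching stage described above (in Python, not the authors' released Matlab code): each probe descriptor is sparsely coded over the pooled gallery dictionary with an l1 solver and votes for the class with the smallest reconstruction residual, a simple per-descriptor approximation of the multitask SRC. The array layouts, the scikit-learn Lasso solver, and the parameter values are assumptions.

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_identify(probe_desc, gallery_desc, gallery_labels, alpha=0.01):
            """probe_desc: (m, d) descriptors of the probe scan.
            gallery_desc: (N, d) descriptors pooled over all gallery scans.
            gallery_labels: (N,) subject id of each gallery descriptor."""
            D = gallery_desc.T                       # dictionary, one atom per column
            classes = np.unique(gallery_labels)
            votes = np.zeros(len(classes))
            for y in probe_desc:                     # one sparse-coding task per descriptor
                x = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, y).coef_
                # class-wise reconstruction residuals
                res = [np.linalg.norm(y - D[:, gallery_labels == c] @ x[gallery_labels == c])
                       for c in classes]
                votes[int(np.argmin(res))] += 1      # vote for the best-reconstructing class
            return classes[int(np.argmax(votes))]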

  2. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm.

  3. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
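
    The ICP alignment step referred to above can be sketched as a generic point-to-point ICP with an SVD (Kabsch) pose update; this Python sketch is an illustration, not an excerpt from the distributed MATLAB/C++ code, and the iteration count is an assumption.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_align(face_pts, ref_pts, iters=30):
            """Rigidly align an Nx3 face point set to an Mx3 reference point set."""
            tree = cKDTree(ref_pts)
            cur = face_pts.copy()
            for _ in range(iters):
                _, idx = tree.query(cur)                 # closest reference vertex per point
                matched = ref_pts[idx]
                mu_f, mu_r = cur.mean(axis=0), matched.mean(axis=0)
                H = (cur - mu_f).T @ (matched - mu_r)    # 3x3 cross-covariance
                U, _, Vt = np.linalg.svd(H)
                R = Vt.T @ U.T
                if np.linalg.det(R) < 0:                 # guard against reflections
                    Vt[-1] *= -1
                    R = Vt.T @ U.T
                t = mu_r - R @ mu_f
                cur = cur @ R.T + t                      # apply the rigid update
            return cur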

  4. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  5. Cooperativity and 3-D Representation

    DTIC Science & Technology

    1993-02-28

    image, to simplified mechanisms for understanding shadows and shading and to renewed interest in "isophot" models of shading. Visual search studies have...reversals of contrast. One such representation is the isophots of the images, the lines of equal luminance. They capture the flow field of the brightness...shading as an oriented field of isophots (or at least short oriented segments) is still at an exploratory stage. We will digitize live scenes in our

  6. Virtual Representations in 3D Learning Environments

    ERIC Educational Resources Information Center

    Shonfeld, Miri; Kritz, Miki

    2013-01-01

    This research explores the extent to which virtual worlds can serve as online collaborative learning environments for students by increasing social presence and engagement. 3D environments enable learning, which simulates face-to-face encounters while retaining the advantages of online learning. Students in Education departments created avatars…

  7. 3D face recognition by projection-based methods

    NASA Astrophysics Data System (ADS)

    Dutagaci, Helin; Sankur, Bülent; Yemez, Yücel

    2006-02-01

    In this paper, we investigate the recognition performance of various projection-based features applied to registered 3D scans of faces. Some features are data-driven, such as ICA-based or NNMF-based features. Other features are obtained using DFT- or DCT-based schemes. We apply the feature extraction techniques to three different representations of registered faces, namely, 3D point clouds, 2D depth images, and 3D voxel representations. We consider both global and local features. Global features are extracted from the whole face data, whereas local features are computed over blocks partitioned from the 2D depth images. The block-based local features are fused both at the feature level and at the decision level. The resulting feature vectors are matched using Linear Discriminant Analysis. Experiments using different combinations of representation types and feature vectors are conducted on the 3D-RMA dataset.
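
    A minimal sketch of one of the feature types mentioned above, block-based DCT features computed over a registered 2D depth image; the block size, the number of retained coefficients, and the simple raster ordering (rather than a zig-zag scan) are illustrative assumptions.

        import numpy as np
        from scipy.fft import dctn

        def block_dct_features(depth_img, block=16, n_coeffs=15):
            """Concatenate low-order DCT coefficients from non-overlapping blocks."""
            h, w = depth_img.shape
            feats = []
            for i in range(0, h - block + 1, block):
                for j in range(0, w - block + 1, block):
                    patch = depth_img[i:i + block, j:j + block]
                    c = dctn(patch, norm='ortho')
                    # keep the first low-order coefficients (raster order for simplicity)
                    feats.append(c.flatten()[:n_coeffs])
            return np.concatenate(feats)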

  8. Fabricating 3D figurines with personalized faces.

    PubMed

    Tena, J Rafael; Mahler, Moshe; Beeler, Thabo; Grosse, Max; Hengchin Yeh; Matthews, Iain

    2013-01-01

    We present a semi-automated system for fabricating figurines with faces that are personalised to the individual likeness of the customer. The efficacy of the system has been demonstrated by commercial deployments at Walt Disney World Resort and Star Wars Celebration VI in Orlando Florida. Although the system is semi automated, human intervention is limited to a few simple tasks to maintain the high throughput and consistent quality required for commercial application. In contrast to existing systems that fabricate custom heads that are assembled to pre-fabricated plastic bodies, our system seamlessly integrates 3D facial data with a predefined figurine body into a unique and continuous object that is fabricated as a single piece. The combination of state-of-the-art 3D capture, modelling, and printing that are the core of our system provide the flexibility to fabricate figurines whose complexity is only limited by the creativity of the designer.

  9. 3D Modeling Engine Representation Summary Report

    SciTech Connect

    Steven Prescott; Ramprasad Sampath; Curtis Smith; Timothy Yang

    2014-09-01

    Computers have been used for 3D modeling and simulation, but only recently have computational resources been able to give realistic results in a reasonable time frame for large complex models. This summary report addressed the methods, techniques, and resources used to develop a 3D modeling engine to represent risk analysis simulation for advanced small modular reactor structures and components. The simulations done for this evaluation were focused on external events, specifically tsunami floods, for a hypothetical nuclear power facility on a coastline.

  10. 3D Face modeling using the multi-deformable method.

    PubMed

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

    In this paper, we focus on the accuracy of 3D face modeling techniques that use corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper.
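
    The gradient-domain blending step can be illustrated with OpenCV's Poisson-based seamless cloning; the file names, full-image mask, and insertion point below are placeholders, and the snippet stands in for, rather than reproduces, the paper's texture-mapping pipeline.

        import cv2
        import numpy as np

        src = cv2.imread('mirror_view_texture.png')    # hypothetical source texture
        dst = cv2.imread('frontal_texture.png')        # hypothetical target texture
        mask = 255 * np.ones(src.shape[:2], dtype=np.uint8)
        center = (dst.shape[1] // 2, dst.shape[0] // 2)
        # Poisson (gradient-domain) blending of src into dst around 'center'
        blended = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
        cv2.imwrite('blended_texture.png', blended)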

  11. Representation and classification of 3-D objects.

    PubMed

    Csakany, P; Wallace, A M

    2003-01-01

    This paper addresses the problem of generic object classification from three-dimensional depth or meshed data. First, surface patches are segmented on the basis of differential geometry and quadratic surface fitting. These are represented by a modified Gaussian image that includes the well-known shape index. Learning is an interactive process in which a human teacher indicates corresponding patches, but the formation of generic classes is unaided. Classification of unknown objects is based on the measurement of similarities between feature sets of the objects and the generic classes. The process is demonstrated on a group of three-dimensional (3-D) objects built from both CAD and laser-scanned depth data.
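
    For reference, the shape index mentioned above can be computed from the principal curvatures; the sketch below uses the common [0, 1] convention (Dorai and Jain) and assumes k1 and k2 have already been estimated from the quadratic surface fit, with a curvature sign convention fixed by that fitting step.

        import numpy as np

        def shape_index(k1, k2, eps=1e-12):
            """Koenderink shape index rescaled to [0, 1] (Dorai & Jain convention)."""
            k_max, k_min = np.maximum(k1, k2), np.minimum(k1, k2)
            return 0.5 - (1.0 / np.pi) * np.arctan((k_max + k_min) / (k_max - k_min + eps))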

  12. 3D face database for human pattern recognition

    NASA Astrophysics Data System (ADS)

    Song, LiMei; Lu, Lu

    2008-10-01

    Face recognition is essential for ensuring human safety and is also an important task in biomedical engineering. 2D images are not sufficient for precise face recognition; 3D face data includes more exact information, such as the precise size of the eyes, mouth, etc. A 3D face database is an important part of human pattern recognition. There are many methods for acquiring 3D data, such as 3D laser scanning systems, 3D phase measurement, shape from shading, shape from motion, etc. This paper introduces a non-orbit, non-contact, non-laser 3D measurement system whose main idea comes from the shape-from-stereo technique. Two cameras are used at different angles, and a sequence of light patterns is projected onto the face. The human face, head, teeth, and body can all be measured by the system. The visualization data of each person can form a large 3D face database, which can be used for human recognition. The 3D data provide a vivid copy of a face, so the recognition accuracy can reach 100%. Although 3D data are larger than 2D images, they can be used in settings that involve only a few people, such as recognition within a family or a small company.

  13. Detailed 3D representations for object recognition and modeling.

    PubMed

    Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad

    2013-11-01

    Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.

  14. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

    In recent years, 3D shape has emerged in face recognition because of its robustness to pose and illumination changes. These attractive benefits do not by themselves guarantee a satisfactory recognition rate; other challenges, such as facial expressions and the computing time of matching algorithms, remain to be explored. In this context, we propose a 3D face recognition approach using 3D wavelet networks. Our approach contains two stages: a learning stage and a recognition stage. For the learning stage, we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x, y, z), we voxelize the data to obtain a 3D volume, which is decomposed by the 3D fast wavelet transform and then modeled with a wavelet network; the associated weights are taken as a feature vector representing each training face. For the recognition stage, a face of unknown identity is projected onto all the training wavelet networks to obtain a new feature vector after each projection, and a similarity score is computed between the stored and the newly obtained feature vectors. To show the efficiency of our approach, experiments were performed on the full FRGC v.2 benchmark.
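
    A minimal sketch of the volume decomposition step, using PyWavelets as a stand-in for the authors' 3D fast wavelet transform and wavelet-network modelling; the file name, the wavelet choice, and the use of the low-pass subband as a crude feature vector are assumptions.

        import numpy as np
        import pywt

        volume = np.load('face_voxels.npy')      # hypothetical (X, Y, Z) occupancy grid
        coeffs = pywt.dwtn(volume, 'haar')       # single-level 3D DWT: 8 subbands 'aaa'...'ddd'
        approx = coeffs['aaa']                   # low-pass subband, one eighth of the voxels
        feature_vector = approx.ravel()          # crude feature vector for illustration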

  15. Formal representation of 3D structural geological models

    NASA Astrophysics Data System (ADS)

    Wang, Zhangang; Qu, Honggang; Wu, Zixing; Yang, Hongjun; Du, Qunle

    2016-05-01

    The development and widespread application of geological modeling methods has increased demands for the integration and sharing services of three dimensional (3D) geological data. However, theoretical research in the field of geological information sciences is limited despite the widespread use of Geographic Information Systems (GIS) in geology. In particular, fundamental research on the formal representations and standardized spatial descriptions of 3D structural models is required. This is necessary for accurate understanding and further applications of geological data in 3D space. In this paper, we propose a formal representation method for 3D structural models using the theory of point set topology, which produces a mathematical definition for the major types of geological objects. The spatial relationships between geologic boundaries, structures, and units are explained in detail using the 9-intersection model. Reasonable conditions for describing the topological space of 3D structural models are also provided. The results from this study can be used as potential support for the standardized representation and spatial quality evaluation of 3D structural models, as well as for specific needs related to model-based management, query, and analysis.

  16. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics based personal authentication is an effective way for automatically recognizing, with a high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the subsequent stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm.

  17. Use of 3D faces facilitates facial expression recognition in children

    PubMed Central

    Wang, Lamei; Chen, Wenfeng; Li, Hong

    2017-01-01

    This study assessed whether presenting 3D face stimuli could facilitate children’s facial expression recognition. Seventy-one children aged between 3 and 6 participated in the study. Their task was to judge whether a face presented in each trial showed a happy or fearful expression. Half of the face stimuli were shown with 3D representations, whereas the other half of the images were shown as 2D pictures. We compared expression recognition under these conditions. The results showed that the use of 3D faces improved the speed of facial expression recognition in both boys and girls. Moreover, 3D faces improved boys’ recognition accuracy for fearful expressions. Since fear is the most difficult facial expression for children to recognize, the facilitation effect of 3D faces has important practical implications for children with difficulties in facial expression recognition. The potential benefits of 3D representation for other expressions also have implications for developing more realistic assessments of children’s expression recognition. PMID:28368008

  18. Appearance-based color face recognition with 3D model

    NASA Astrophysics Data System (ADS)

    Wang, Chengzhang; Bai, Xiaoming

    2013-03-01

    Appearance-based face recognition approaches explore the color cues of face images, i.e., grey-level or color information, for the recognition task. They first encode color face images and then extract facial features for classification. Similar to the conventional singular value decomposition, hypercomplex matrices also admit a singular value decomposition over the hypercomplex field. In this paper, a novel color face recognition approach based on hypercomplex singular value decomposition is proposed. The approach employs hypercomplex numbers to encode the color face information of different channels simultaneously. Hypercomplex singular value decomposition is then used to compute the basis vectors of the color face subspace. To improve the learning efficiency of the algorithm, a 3D active deformable model is exploited to generate virtual face images. Color face samples are projected onto the subspace and the projection coefficients are used as facial features. Experimental results on the CMU PIE face database verify the effectiveness of the proposed approach.

  19. 3D face recognition under expressions, occlusions, and pose variations.

    PubMed

    Drira, Hassen; Ben Amor, Boulbaba; Srivastava, Anuj; Daoudi, Mohamed; Slama, Rim

    2013-09-01

    We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.
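
    The radial-curve representation can be sketched on a registered depth image as follows: depth profiles are sampled along rays leaving the nose tip at evenly spaced angles. The nose-tip location, the number of curves, and the sampling step are assumptions for illustration, not the authors' settings, and the elastic (Riemannian) comparison of the curves is not shown.

        import numpy as np

        def radial_curves(depth, nose_rc, n_curves=40, n_samples=50, step=2.0):
            """Sample depth profiles along rays from the nose tip (row, col)."""
            r0, c0 = nose_rc
            curves = []
            for a in np.linspace(0.0, 2 * np.pi, n_curves, endpoint=False):
                rs = r0 + step * np.arange(n_samples) * np.sin(a)
                cs = c0 + step * np.arange(n_samples) * np.cos(a)
                rs = np.clip(np.round(rs).astype(int), 0, depth.shape[0] - 1)
                cs = np.clip(np.round(cs).astype(int), 0, depth.shape[1] - 1)
                curves.append(depth[rs, cs])          # one depth profile per angle
            return np.stack(curves)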

  20. Establishing point correspondence of 3D faces via sparse facial deformable model.

    PubMed

    Pan, Gang; Zhang, Xiaobo; Wang, Yueming; Hu, Zhenfang; Zheng, Xiaoxiang; Wu, Zhaohui

    2013-11-01

    Establishing a dense vertex-to-vertex anthropometric correspondence between 3D faces is an important and fundamental problem in 3D face research, which can contribute to most applications of 3D faces. This paper proposes a sparse facial deformable model to achieve this task automatically. For an input 3D face, the basic idea is to generate a new 3D face that (1) has the same mesh topology as a reference face, (2) has a shape highly similar to the input face, and (3) has vertices that correspond to those of the reference face in an anthropometric sense. Two constraints, the shape constraint and the correspondence constraint, are modeled in our method to satisfy these three requirements. The shape constraint is solved by a novel face deformation approach in which a normal-ray scheme is integrated with the closest-vertex scheme to preserve high-curvature shapes during deformation. The correspondence constraint is based on the assumption that if the vertices on 3D faces are in correspondence, their shape signals lie on a manifold and each face signal can be represented sparsely by a few typical items in a dictionary. The dictionary can be learnt well and contains the distribution information of the corresponded vertices. The correspondence information can be conveyed to the sparse representation of the generated 3D face. Thus, a patch-based sparse representation is proposed as the correspondence constraint. By solving the correspondence constraint iteratively, the vertices of the generated face are adjusted gradually toward their corresponding positions. At the early iteration steps, smaller sparsity thresholds are set, which yield larger representation errors but better globally corresponded vertices. At the later steps, relatively larger sparsity thresholds are used to encode local shapes. In this way, the vertices of the new face approach the right positions progressively until the final global correspondence is reached. Our method is automatic; manual work is needed only in the training procedure.

  1. 3D Ear Identification Based on Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics based personal authentication is an effective way for automatically recognizing, with a high confidence, a person’s identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the subsequent stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247

  2. Adaptive 3D Face Reconstruction from Unconstrained Photo Collections.

    PubMed

    Roth, Joseph; Tong, Yiying; Liu, Xiaoming

    2016-12-07

    Given a photo collection of "unconstrained" face images of one individual captured under a variety of unknown pose, expression, and illumination conditions, this paper presents a method for reconstructing a 3D face surface model of the individual along with albedo information. Unlike prior work on face reconstruction that requires large photo collections, we formulate an approach to adapt to photo collections with a high diversity in both the number of images and the image quality. To achieve this, we incorporate prior knowledge about face shape by fitting a 3D morphable model to form a personalized template, followed by a novel photometric stereo formulation to complete the fine details, under a coarse-to-fine scheme. Our scheme incorporates a structural similarity-based local selection step to help identify a common expression for reconstruction while discarding occluded portions of faces. The evaluation of reconstruction performance is through a novel quality measure, in the absence of ground truth 3D scans. Superior large-scale experimental results are reported on synthetic, Internet, and personal photo collections.

  3. 3D Multi-Spectrum Sensor System with Face Recognition

    PubMed Central

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system, which we will refer to as a 3D multi-spectrum sensor system, which comprises three types of sensors, visible, thermal-IR and time-of-flight (ToF), is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensors information. To demonstrate the effectiveness of the proposed system, a face recognition system with light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new effectively fused features for a face recognition system, is obtained. PMID:24072025

  4. 3D Face Hallucination from a Single Depth Frame

    PubMed Central

    Liang, Shu; Kemelmacher-Shlizerman, Ira; Shapiro, Linda G.

    2015-01-01

    We present an algorithm that takes a single frame of a person’s face from a depth camera, e.g., Kinect, and produces a high-resolution 3D mesh of the input face. We leverage a dataset of 3D face meshes of 1204 distinct individuals ranging from age 3 to 40, captured in a neutral expression. We divide the input depth frame into semantically significant regions (eyes, nose, mouth, cheeks) and search the database for the best matching shape per region. We further combine the input depth frame with the matched database shapes into a single mesh that results in a high-resolution shape of the input person. Our system is fully automatic and uses only depth data for matching, making it invariant to imaging conditions. We evaluate our results using ground truth shapes, as well as compare to state-of-the-art shape estimation methods. We demonstrate the robustness of our local matching approach with high-quality reconstruction of faces that fall outside of the dataset span, e.g., faces older than 40 years old, facial expressions, and different ethnicities. PMID:26280021

  5. Pose invariant face recognition: 3D model from single photo

    NASA Astrophysics Data System (ADS)

    Napoléon, Thibault; Alfalou, Ayman

    2017-02-01

    Face recognition is widely studied in the literature for its possibilities in surveillance and security. In this paper, we report a novel algorithm for the identification task. This technique is based on an optimized 3D modeling method that allows faces to be reconstructed in different poses from a limited number of references (i.e., one image per class/person). In particular, we propose to use an active shape model to detect a set of keypoints on the face, which are needed to deform our synthetic model with our optimized finite element method. Indeed, in order to improve the deformation, we propose a regularization based on graph distances. To perform the identification, we use the VanderLugt correlator, well known to address this task effectively. We also add a difference-of-Gaussians filtering step to highlight the edges, and a description step based on local binary patterns. The experiments are performed on the PHPID database enhanced with our 3D reconstructed faces of each person, with an azimuth and an elevation ranging from -30° to +30°. The obtained results prove the robustness of our new method, with 88.76% correct identification, whereas the classic 2D approach (based on the VLC) obtains just 44.97%.
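
    A small sketch of the filtering and description steps mentioned above, a difference of Gaussians followed by local binary patterns; the sigma values and LBP parameters are assumptions, and the identification itself (VanderLugt correlation) is not shown.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage.feature import local_binary_pattern

        def dog_lbp(gray, sigma_low=1.0, sigma_high=2.0, P=8, R=1):
            """Edge-enhancing DoG filter followed by a uniform-LBP histogram."""
            dog = gaussian_filter(gray, sigma_low) - gaussian_filter(gray, sigma_high)
            lbp = local_binary_pattern(dog, P, R, method='uniform')
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            return hist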

  6. 3D face recognition based on the hierarchical score-level fusion classifiers

    NASA Astrophysics Data System (ADS)

    Mráček, Štěpán; Váša, Jan; Lankašová, Karolína; Drahanský, Martin; Doležel, Michal

    2014-05-01

    This paper describes a 3D face recognition algorithm based on hierarchical score-level fusion classifiers. In a simple (unimodal) biometric pipeline, a feature vector is extracted from the input data and subsequently compared with the template stored in the database. In our approach, we utilize several feature extraction algorithms. We use 6 different image representations of the input 3D face data. Moreover, we apply Gabor and Gauss-Laguerre filter banks to the input image data, which yield 12 resulting feature vectors. Each representation is compared with its corresponding counterpart from the biometric database. We also add recognition based on iso-geodesic curves. The final score-level fusion is performed on 13 comparison scores using a Support Vector Machine (SVM) classifier.
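
    The final fusion step can be sketched as a binary SVM over the 13 comparison scores of a probe-gallery pair; the file names and data layout below are hypothetical, and the per-representation comparison algorithms are assumed to have been run already.

        import numpy as np
        from sklearn.svm import SVC

        scores_train = np.load('train_scores.npy')   # hypothetical (n_pairs, 13) score matrix
        labels_train = np.load('train_labels.npy')   # 1 = same person, 0 = different person
        fusion = SVC(kernel='rbf', probability=True).fit(scores_train, labels_train)

        scores_probe = np.load('probe_scores.npy')   # hypothetical (n_probe_pairs, 13)
        fused = fusion.predict_proba(scores_probe)[:, 1]   # fused match score per pair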

  7. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. This described method provides much better data coverage and accuracy in feature areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  8. Modeling 3D faces from samplings via compressive sensing

    NASA Astrophysics Data System (ADS)

    Sun, Qi; Tang, Yanlong; Hu, Ping

    2013-07-01

    3D data is easier to acquire for family entertainment purposes today because of the mass production, low cost, and portability of domestic RGBD sensors, e.g., the Microsoft Kinect. However, the accuracy of facial modeling is affected by the roughness and instability of the raw input data from such sensors. To overcome this problem, we introduce a compressive sensing (CS) method to build a novel 3D super-resolution scheme that reconstructs high-resolution facial models from rough samples captured by Kinect. Unlike simple frame-fusion super-resolution methods, this approach acquires compressed samples for storage before a high-resolution image is produced. In this scheme, depth frames are first captured, and then each of them is measured into compressed samples using sparse coding. Next, the samples are fused to produce an optimal one, and finally a high-resolution image is recovered from the fused sample. This framework is able to recover the 3D facial model of a given user from compressed samples, which can reduce storage space as well as measurement cost in future devices, e.g., single-pixel depth cameras. Hence, this work can potentially be applied to future applications, such as access control systems using face recognition, and smart phones with depth cameras, which need high resolution and little measurement time.

  9. A prescreener for 3D face recognition using radial symmetry and the Hausdorff fraction.

    SciTech Connect

    Koudelka, Melissa L.; Koch, Mark William; Russ, Trina Denise

    2005-04-01

    Face recognition systems require the ability to efficiently scan an existing database of faces to locate a match for a newly acquired face. The large number of faces in real world databases makes computationally intensive algorithms impractical for scanning entire databases. We propose the use of more efficient algorithms to 'prescreen' face databases, determining a limited set of likely matches that can be processed further to identify a match. We use both radial symmetry and shape to extract five features of interest on 3D range images of faces. These facial features determine a very small subset of discriminating points which serve as input to a prescreening algorithm based on a Hausdorff fraction. We show how to compute the Hausdorff fraction in linear O(n) time using a range image representation. Our feature extraction and prescreening algorithms are verified using the FRGC v1.0 3D face scan data. Results show 97% of the extracted facial features are within 10 mm or less of manually marked ground truth, and the prescreener has a rank 6 recognition rate of 100%.
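
    The linear-time idea can be sketched on a range-image grid: a distance transform of the gallery's occupied pixels is computed once, after which each probe point costs a constant-time lookup and the Hausdorff fraction is the share of probe points within a threshold. The 2D pixel grid, the threshold, and the array layout are illustrative assumptions, not the paper's implementation.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def hausdorff_fraction(probe_pixels, gallery_mask, tau=3.0):
            """probe_pixels: (n, 2) integer row/col locations of probe feature points.
            gallery_mask: boolean image, True where the gallery face has data."""
            dist_to_gallery = distance_transform_edt(~gallery_mask)   # O(pixels), done once
            d = dist_to_gallery[probe_pixels[:, 0], probe_pixels[:, 1]]
            return float(np.mean(d <= tau))           # fraction of probe points within tau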

  10. The Fermion Representation of Quantum Toroidal Algebra on 3D Young Diagrams

    NASA Astrophysics Data System (ADS)

    Cai, Li-Qiang; Wang, Li-Fang; Wu, Ke; Yang, Jie

    2014-07-01

    We develop an equivalence between the diagonal slices and the perpendicular slices of 3D Young diagrams via Maya diagrams. Furthermore, we construct the fermion representation of quantum toroidal algebra on the 3D Young diagrams perpendicularly sliced.

  11. Face recognition based on matching of local features on 3D dynamic range sequences

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, B. A.; Kober, Vitaly

    2016-09-01

    3D face recognition has attracted attention in the last decade due to improvement of technology of 3D image acquisition and its wide range of applications such as access control, surveillance, human-computer interaction and biometric identification systems. Most research on 3D face recognition has focused on analysis of 3D still data. In this work, a new method for face recognition using dynamic 3D range sequences is proposed. Experimental results are presented and discussed using 3D sequences in the presence of pose variation. The performance of the proposed method is compared with that of conventional face recognition algorithms based on descriptors.

  12. Bodies adapt orientation-independent face representations

    PubMed Central

    Kessler, Ellyanna; Walls, Shawn A.; Ghuman, Avniel S.

    2013-01-01

    Faces and bodies share a great number of semantic attributes, such as gender, emotional expressiveness, and identity. Recent studies demonstrate that bodies can activate and modulate face perception. However, the nature of the face representation that is activated by bodies remains unknown. In particular, face and body representations have previously been shown to have a degree of orientation specificity. Here we use body-face adaptation aftereffects to test whether bodies activate face representations in an orientation-dependent manner. Specifically, we used a two-by-two design to examine the magnitude of the body-face aftereffect using upright and inverted body adaptors and upright and inverted face targets. All four conditions showed significant body-face adaptation. We found neither a main effect of body orientation nor an interaction between body and face orientation. There was a main effect of target face orientation, with inverted target faces showing larger aftereffects than upright target faces, consistent with traditional face-face adaptation. Taken together, these results suggest that bodies adapt and activate a relatively orientation-independent representation of faces. PMID:23874311

  13. Creating 3D realistic head: from two orthogonal photos to multiview face contents

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Lin, Qian; Tang, Feng; Tang, Liang; Lim, Sukhwan; Wang, Shengjin

    2011-03-01

    3D head models have many applications, such as virtual conferencing, 3D web games, and so on. The several existing web-based face modeling solutions that can create a 3D face model from one or two user-uploaded face images are limited to generating a 3D model of the face region only. The accuracy of such reconstructions is very limited for side views, as well as for hair regions. The goal of our research is to develop a framework for reconstructing a realistic 3D human head based on two approximately orthogonal views. Our framework takes two images and goes through segmentation, feature point detection, 3D bald head reconstruction, 3D hair reconstruction, and texture mapping to create a 3D head model. The main contribution of the paper is that the processing steps are applied to the hair region as well as the face region.

  14. Consistent representations of and conversions between 3D rotations

    NASA Astrophysics Data System (ADS)

    Rowenhorst, D.; Rollett, A. D.; Rohrer, G. S.; Groeber, M.; Jackson, M.; Konijnenberg, P. J.; De Graef, M.

    2015-12-01

    In materials science the orientation of a crystal lattice is described by means of a rotation relative to an external reference frame. A number of rotation representations are in use, including Euler angles, rotation matrices, unit quaternions, Rodrigues-Frank vectors and homochoric vectors. Each representation has distinct advantages and disadvantages with respect to the ease of use for calculations and data visualization. It is therefore convenient to be able to easily convert from one representation to another. However, historically, each representation has been implemented using a set of often tacit conventions; separate research groups would implement different sets of conventions, thereby making the comparison of methods and results difficult and confusing. This tutorial article aims to resolve these ambiguities and provide a consistent set of conventions and conversions between common rotational representations, complete with worked examples and a discussion of the trade-offs necessary to resolve all ambiguities. Additionally, an open source Fortran-90 library of conversion routines for the different representations is made available to the community.
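
    As an example of the kind of conversion the article standardizes, the following sketch turns a unit quaternion (w, x, y, z) into a 3x3 rotation matrix; the scalar-first ordering and active-rotation convention are stated assumptions here, which is exactly the sort of ambiguity the article's conventions are meant to remove.

        import numpy as np

        def quat_to_matrix(q):
            """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix."""
            q = np.asarray(q, dtype=float)
            w, x, y, z = q / np.linalg.norm(q)       # normalize to a unit quaternion
            return np.array([
                [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
            ])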

  15. Description and recognition of faces from 3D data

    NASA Astrophysics Data System (ADS)

    Coombes, Anne M.; Richards, Robin; Linney, Alfred D.; Bruce, Vicki; Fright, Rick

    1992-12-01

    A method based on differential geometry is presented for mathematically describing the shape of the facial surface. Three-dimensional data for the face are collected by optical surface scanning. The method allows the segmentation of the face into regions of a particular 'surface type,' according to the surface curvature. Eight different surface types are produced, all of which have perceptually meaningful interpretations. The correspondence of the surface type regions to the facial features is easily visualized, allowing a qualitative assessment of the face. A quantitative description of the face in terms of the surface type regions can be produced, and the variation of the description between faces is demonstrated. A set of optical surface scans can be registered together and averaged to produce an average male and average female face. Thus an assessment of how individuals vary from the average can be made, as well as a general statement about the differences between male and female faces. This method will enable an investigation of how reliably faces can be individuated by their surface shape, which, if feasible, may be the basis of an automatic system for recognizing faces. It also has applications in physical anthropology, for classification of the face; in facial reconstructive surgery, to quantify the changes in a face altered by reconstructive surgery and growth; and in visual perception, to assess the recognizability of faces. Examples of some of these applications are presented.
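
    Surface-type segmentation of this kind is often implemented as an HK sign map, labelling each point by the signs of its mean (H) and Gaussian (K) curvature; the sketch below shows one common eight-class labelling (Besl-Jain style) with an assumed tolerance and curvature sign convention, as an illustration rather than the authors' exact scheme.

        SURFACE_TYPES = {(-1, 1): 'peak',  (-1, 0): 'ridge',  (-1, -1): 'saddle ridge',
                         (0, 0): 'flat',   (0, -1): 'minimal',
                         (1, 1): 'pit',    (1, 0): 'valley',  (1, -1): 'saddle valley'}

        def sign(v, tol=1e-4):
            """Three-way sign with a dead zone of width tol around zero."""
            return 0 if abs(v) <= tol else (1 if v > 0 else -1)

        def surface_type(H, K, tol=1e-4):
            """Label one surface point from the signs of its mean and Gaussian curvature."""
            return SURFACE_TYPES.get((sign(H, tol), sign(K, tol)), 'undefined')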

  16. Methodologies for digital 3D acquisition and representation of mosaics

    NASA Astrophysics Data System (ADS)

    Manferdini, Anna Maria; Cipriani, Luca; Kniffitz, Linda

    2011-07-01

    Despite the recent improvements and widespread adoption of digital technologies and their applications in the field of Cultural Heritage, museums and institutions are still not encouraged to adopt digital procedures as a standard practice for collecting data on the heritage they are called to preserve and promote. One of the main reasons is the high cost of these procedures, which increases further with the difficulty of digitally surveying artifacts and artworks that present intrinsic complexities and peculiarities that cannot be reduced to recurring patterns. The aim of this paper is to present the results of research conducted to find the most suitable digital methodology and procedure for collecting geometric and radiometric data on mosaics, one that supports both the preservation of consistent geometric information and the management of large amounts of data. One of the most immediate applications of digital 3D surveys of mosaics is to replace the plaster casts that are usually made to add the third dimension to pictorial or photographic surveys before restoration interventions, in order to document conservation conditions and ease reconstruction procedures. Moreover, digital 3D surveys of mosaics allow restoration interventions to be reproduced in a digital environment for reliable preliminary evaluation; in addition, 3D reality-based models of mosaics can be used in digital catalogues or for digital exhibitions and reconstruction purposes.

  17. Multiscale 3-D shape representation and segmentation using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2007-04-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of

  18. Eye Tracking to Explore the Impacts of Photorealistic 3D Representations in Pedestrian Navigation Performance

    NASA Astrophysics Data System (ADS)

    Dong, Weihua; Liao, Hua

    2016-06-01

    Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload than traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representations can facilitate map reading and navigation in digital environments, using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective, less efficient, and required a higher cognitive workload than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. The empirical results can help improve the usability of pedestrian navigation maps in future designs.

  19. Challenges Facing 3-D Audio Display Design for Multimedia

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    The challenges facing successful multimedia presentation depend largely on the expectations of the designer and end user for a given application. Perceptual limitations in simulating sound-source distance, elevation, and azimuth differ significantly between headphone and cross-talk-cancellation loudspeaker listening and therefore must be considered. Simulation of an environmental context is desirable, but the quality depends on processing resources and on the lack of interaction with the host acoustical environment. While techniques such as data reduction of head-related transfer functions have been used widely to improve simulation fidelity, another approach involves determining thresholds for environmental acoustic events. Psychoacoustic studies relevant to this approach are reviewed in consideration of multimedia applications.

  20. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We believe that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples; the noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work provides a simple and feasible way to obtain virtual face samples: Gaussian noise (or other types of noise) is imposed on the original training samples to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
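
    Generating virtual training samples as described above can be as simple as adding noise to the originals; in the sketch below the noise level, the number of copies, and the array layout are assumptions, and the subsequent kernel collaborative representation is not shown.

        import numpy as np

        def make_virtual_samples(train_imgs, sigma=0.05, copies=1, seed=0):
            """train_imgs: (n_samples, n_pixels) face images scaled to [0, 1].
            Returns the originals stacked with noisy virtual copies."""
            rng = np.random.default_rng(seed)
            virtual = [train_imgs + sigma * rng.standard_normal(train_imgs.shape)
                       for _ in range(copies)]
            return np.clip(np.vstack([train_imgs] + virtual), 0.0, 1.0)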

  1. Introduction to the special section on 3D representation, compression, and rendering.

    PubMed

    Vetro, Anthony; Frossard, Pascal; Lee, Sanghoon; Mueller, Karsten; Ohm, Jens-Rainer; Sullivan, Gary

    2013-09-01

    A new set of three-dimensional (3D) data formats and associated compression technologies are emerging with the aim of achieving more flexible representation and higher compression of 3D and multiview video content. These new tools will facilitate the generation of multiview output (e.g., as needed for multiview auto-stereoscopic displays), provide richer immersive multimedia experiences, and allow new interactive applications. This special section includes a timely set of papers covering the most recent technical developments in this area, addressing the different aspects of 3D systems, from representation and compression algorithms to rendering techniques and quality assessment. This special section includes a good balance of topics that are of interest to the academic, industrial, and standardization communities. We believe that this collection of papers represents the most recent advances in the representation, compression, rendering, and quality assessment of 3D scenes.

  2. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  3. A framework for the recognition of 3D faces and expressions

    NASA Astrophysics Data System (ADS)

    Li, Chao; Barreto, Armando

    2006-04-01

    Face recognition technology has been a focus both in academia and industry for the last couple of years because of its wide potential applications and its importance to meet the security needs of today's world. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent with 2D face recognition, i.e. sensitivity to illumination conditions and orientation positioning of the subject. But 3D face recognition still needs to tackle the problem of deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and neutral faces was implemented and tested on a database of 30 subjects. The results proved the feasibility of this framework.

  4. Geoinformation techniques for the 3D visualisation of historic buildings and representation of a building's pathology

    NASA Astrophysics Data System (ADS)

    Tsilimantou, Elisavet; Delegou, Ekaterini; Ioannidis, Charalabos; Moropoulou, Antonia

    2016-08-01

    In this paper, the documentation of a historic building registered as a Cultural Heritage asset is presented. The aim of the survey is to create a 3D geometric representation of the historic building and, in combination with a multidisciplinary study, to extract useful information regarding the extent of degradation, the construction's durability, etc. For the implementation of the survey, a combination of different acquisition technologies is used. The project focuses on the study of Villa Klonaridi in Athens, Greece. For the complete documentation of the building, conventional topographic, photogrammetric, and laser scanning techniques are combined. Close-range photogrammetric techniques are used for the acquisition of the façades and architectural details. One of the main objectives is the development of an accurate 3D model in which a photorealistic representation of the building is achieved, along with its decay pathology, historical phases, and architectural components. In order to achieve a graphical representation suitable for the study of the material and decay patterns beyond the 2D representation, 3D modelling and additional information modelling are performed for comparative analysis. The study provides various conclusions regarding the scale of deterioration obtained from the 2D and 3D analyses, respectively. Considering the variation in material and decay patterns, comparative results are obtained regarding the degradation of the building. Overall, the paper describes a process performed on a historic building in which the 3D digital acquisition of the monument's structure is realized with a combination of close-range surveying and laser scanning methods.

  5. A Novel Multi-Purpose Matching Representation of Local 3D Surfaces: A Rotationally Invariant, Efficient, and Highly Discriminative Approach With an Adjustable Sensitivity.

    PubMed

    Al-Osaimi, Faisal R

    2016-02-01

    In this paper, a novel approach to local 3D surface matching representation suitable for a range of 3D vision applications is introduced. Local 3D surface patches around key points on the 3D surface are represented by 2D images such that the representing 2D images enjoy certain characteristics which positively impact matching accuracy, robustness, and speed. First, the proposed representation is complete, in the sense that there is no information loss during its computation. Second, the 2D representations are strictly invariant to rotations in all three degrees of freedom. To make optimal use of surface information, the sensitivity of the representations to surface information is adjustable. This also provides the proposed matching representation with the means to adjust optimally to a particular class of problems/applications or to an acquisition technology. Each 2D matching representation is a sequence of adjustable integral kernels, where each kernel is efficiently computed from a triple of precise 3D curves (profiles) formed by intersecting three concentric spheres with the 3D surface. Robust techniques for sampling the profiles and establishing correspondences among them were devised. Based on the proposed matching representation, two techniques for the detection of key points are presented. The first is suitable for static images, while the second is suitable for 3D videos. The approach was tested on the Face Recognition Grand Challenge v2.0, the 3D Twins Expression Challenge, and the Bosphorus data sets, and a superior face recognition performance was achieved. In addition, the proposed approach was used in object class recognition and tested on a Kinect data set.

  6. Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy

    NASA Astrophysics Data System (ADS)

    Naaz, Farah

    Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two dimensional (2D) representations that are sampled from the three dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups: Whole then Sections, and Integrated 2D3D. Both groups learned whole anatomy (3D neuroanatomy) before learning sectional anatomy (2D neuroanatomy). The Whole then Sections group then learned sectional anatomy using 2D representations only. The Integrated 2D3D group learned sectional anatomy from a graphically integrated 3D and 2D model. A set of tests for generalization of knowledge to interpreting biomedical images was conducted immediately after learning was completed. The order of presentation of the tests of generalization of knowledge was counterbalanced across participants to explore a secondary hypothesis of the study: preparation for future learning. If the computer-based instruction programs used in this study are effective tools for teaching anatomy, the participants should continue learning neuroanatomy with exposure to new representations. A test of long-term retention of sectional anatomy was conducted 4-8 weeks after learning was completed. The Integrated 2D3D group was better than the Whole then Sections

  7. A 2D range Hausdorff approach for 3D face recognition.

    SciTech Connect

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2005-04-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
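
    The sketch below illustrates why pixel-wise correspondence on a 2D range image makes the matching cost linear in the number of valid pixels; it is a generic, quantile-trimmed distance written for this summary (the helper name range_image_distance and the 0.9 quantile are illustrative), not the paper's exact Hausdorff formulation.

```python
import numpy as np

def range_image_distance(probe_z, template_z, quantile=0.9):
    """Robust per-pixel distance between two aligned range images.

    A crude stand-in for Hausdorff-style matching on 2D range data:
    with pixel-wise correspondence, the cost is linear in the number of
    valid pixels. The quantile trades off robustness to outliers (the
    classic partial-Hausdorff idea). NaN marks missing range values.
    """
    valid = ~np.isnan(probe_z) & ~np.isnan(template_z)
    residuals = np.abs(probe_z[valid] - template_z[valid])
    return np.quantile(residuals, quantile)

probe = np.random.rand(120, 100)
template = probe + 0.01 * np.random.randn(120, 100)
template[0:5, 0:5] = np.nan            # simulate self-occlusion / missing data
print(range_image_distance(probe, template))
```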

  8. Cp-curve, a Novel 3-D Graphical Representation of Proteins

    NASA Astrophysics Data System (ADS)

    Bai, Haihua; Li, Chun; Agula, Hasi; Jirimutu, Jirimutu; Wang, Jun; Xing, Lili

    2007-12-01

    Based on a five-letter model of the 20 amino acids, we propose a novel 3-D graphical representation of proteins. The method is illustrated on the mutant exon 1 of EDA gene of a Mongolian family with X-linked congenital anodontia/wavy hair.

  9. The impact of specular highlights on 3D-2D face recognition

    NASA Astrophysics Data System (ADS)

    Christlein, Vincent; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis

    2013-05-01

    One of the most popular forms of biometrics is face recognition. Face recognition techniques typically assume that a face exhibits Lambertian reflectance. However, a face often exhibits prominent specularities, especially in outdoor environments. These specular highlights can compromise identity authentication. In this work, we analyze the impact of such highlights on a 3D-2D face recognition system. First, we investigate three different specularity removal methods as preprocessing steps for face recognition. Then, we explicitly model facial specularities within the face detection system using the Cook-Torrance reflectance model. In our experiments, specularity removal increases the recognition rate on an outdoor face database by about 5% at a false alarm rate of 10⁻³. The integration of the Cook-Torrance model further improves these results, increasing the verification rate by 19% at a FAR of 10⁻³.
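
    For reference, the Cook-Torrance specular term mentioned above combines a microfacet distribution D, a Fresnel term F, and a geometric attenuation term G as D·F·G / (4 (n·v)(n·l)). The sketch below is a textbook single-vector evaluation (Beckmann distribution, Schlick Fresnel) with illustrative parameter values; it is not the specific fitting procedure used in the cited recognition pipeline.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def cook_torrance_specular(n, v, l, roughness=0.3, f0=0.04):
    """Cook-Torrance specular term for single unit vectors (textbook form).

    n, v, l : unit surface normal, view direction, and light direction.
    roughness : Beckmann roughness parameter m.
    f0 : Fresnel reflectance at normal incidence (Schlick approximation).
    """
    h = normalize(v + l)                       # half-way vector
    nh, nv, nl, vh = (max(np.dot(n, h), 1e-6), max(np.dot(n, v), 1e-6),
                      max(np.dot(n, l), 1e-6), max(np.dot(v, h), 1e-6))
    m2 = roughness ** 2
    # Beckmann microfacet distribution D
    d = np.exp((nh ** 2 - 1.0) / (m2 * nh ** 2)) / (np.pi * m2 * nh ** 4)
    # Schlick Fresnel approximation F
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5
    # Cook-Torrance geometric attenuation G
    g = min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)
    return d * f * g / (4.0 * nv * nl)

n = normalize(np.array([0.0, 0.0, 1.0]))
v = normalize(np.array([0.0, 0.3, 1.0]))
l = normalize(np.array([0.2, -0.1, 1.0]))
print(cook_torrance_specular(n, v, l))
```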

  10. Profile of students' comprehension of 3D molecule representation and its interconversion on chirality

    NASA Astrophysics Data System (ADS)

    Setyarini, M.; Liliasari, Kadarohman, Asep; Martoprawiro, Muhamad A.

    2016-02-01

    This study aims to describe (1) students' level of comprehension and (2) the factors causing difficulties in comprehending 3D molecule representations and their interconversion in the context of chirality. Data were collected using a multiple-choice test consisting of eight questions. The participants were required to give answers along with their reasoning. The test was developed based on indicators of concept comprehension. The study was conducted with 161 college students enrolled in a stereochemistry topic in the odd semester (2014/2015) from two LPTK (teacher training institutes) in Bandar Lampung and Gorontalo, and one public university in Bandung. The results indicate that college students' comprehension of 3D molecule representations and their interconversion was high for 5% of students, moderate for 22%, and low for 73%. The dominant factors identified as causing difficulties in comprehending 3D molecule representations and their interconversion were (i) a lack of spatial awareness, (ii) violation of the rules for determining absolute configuration, (iii) imprecise placement of observers, (iv) a lack of rotation operations, and (v) a lack of understanding of the correlation between the representations. This study recommends that instruction include more rigorous spatial-awareness training tasks accompanied by dynamic visualization media of the molecules involved; learning with static molecular models can also help students overcome the difficulties they encounter.

  11. Robust Representations for Face Recognition: The Power of Averages

    ERIC Educational Resources Information Center

    Burton, A. Mike; Jenkins, Rob; Hancock, Peter J. B.; White, David

    2005-01-01

    We are able to recognise familiar faces easily across large variations in image quality, though our ability to match unfamiliar faces is strikingly poor. Here we ask how the representation of a face changes as we become familiar with it. We use a simple image-averaging technique to derive abstract representations of known faces. Using Principal…

  12. Segmentation of Blood Vessels and 3D Representation of CMR Image

    NASA Astrophysics Data System (ADS)

    Jiji, G. W.

    2013-06-01

    Current cardiac magnetic resonance imaging (CMR) technology allows the determination of patient-individual coronary tree structure, the detection of infarctions, and the assessment of myocardial perfusion. The purpose of this work is to segment the heart's blood vessels and visualize them in 3D. The 3D visualisation of the vessels was performed in four phases. The first step is to detect the tubular structures using a multiscale medialness function, which distinguishes tube-like structures from other structures. The second step is to extract the centrelines of the tubes; from the centreline radius, the cylindrical tube model is constructed. The third step is the segmentation of the tubular structures, in which the cylindrical tube model is used. The fourth step is the 3D representation of the tubular structure using Volume … The proposed approach was applied to 10 patient datasets from the clinical routine, and the results were evaluated with radiologists.

  13. Depth representation of moving 3-D objects in apparent-motion path.

    PubMed

    Hidaka, Souta; Kawachi, Yousuke; Gyoba, Jiro

    2008-01-01

    Apparent motion is perceived when two objects are presented alternately at different positions. The internal representations of apparently moving objects are formed along an apparent-motion path that lacks physical input. We investigated the depth information contained in the representation of 3-D moving objects in an apparent-motion path. We examined how probe objects, briefly placed in the motion path, affected the perceived smoothness of apparent motion. The probe objects were either 3-D objects, defined by shading or by disparity (convex/concave), or 2-D (flat) objects, while the moving objects were convex/concave objects. We found that flat probe objects induced significantly smoother motion perception than concave probe objects only in the case of convex moving objects. However, convex probe objects did not lead to smoother motion as the flat objects did, even though the convex probe objects contained the same depth information as the moving objects. Moreover, the difference between probe objects was reduced when the moving objects were concave. These counterintuitive results were consistent across conditions in which both depth cues were used. The results suggest that the internal representations contain incomplete depth information that is intermediate between that of 2-D and 3-D objects.

  14. Superpose3D: A Local Structural Comparison Program That Allows for User-Defined Structure Representations

    PubMed Central

    Gherardini, Pier Federico; Ausiello, Gabriele; Helmer-Citterich, Manuela

    2010-01-01

    Local structural comparison methods can be used to find structural similarities involving functional protein patches such as enzyme active sites and ligand binding sites. The outcome of such analyses is critically dependent on the representation used to describe the structure. Indeed different categories of functional sites may require the comparison program to focus on different characteristics of the protein residues. We have therefore developed superpose3D, a novel structural comparison software that lets users specify, with a powerful and flexible syntax, the structure description most suited to the requirements of their analysis. Input proteins are processed according to the user's directives and the program identifies sets of residues (or groups of atoms) that have a similar 3D position in the two structures. The advantages of using such a general purpose program are demonstrated with several examples. These test cases show that no single representation is appropriate for every analysis, hence the usefulness of having a flexible program that can be tailored to different needs. Moreover we also discuss how to interpret the results of a database screening where a known structural motif is searched against a large ensemble of structures. The software is written in C++ and is released under the open source GPL license. Superpose3D does not require any external library, runs on Linux, Mac OSX, Windows and is available at http://cbm.bio.uniroma2.it/superpose3D. PMID:20700534

  15. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. Its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach can achieve effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.
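
    The paper's 3D SR operates on volumetric patches of the CCT data; as a rough, simplified illustration of the patch-based sparse coding idea, the sketch below denoises a 2D image with scikit-learn's dictionary learning and orthogonal matching pursuit. Patch size, dictionary size, and sparsity level are illustrative, and the function name sparse_denoise_2d is an invented helper, not part of the paper's pipeline.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def sparse_denoise_2d(noisy, patch_size=(8, 8), n_atoms=64, n_nonzero=4, seed=0):
    """Patch-based sparse-representation denoising (2D analogue of 3D SR).

    Learns a dictionary from the noisy image itself, codes every patch with
    orthogonal matching pursuit, and averages the reconstructed patches.
    """
    patches = extract_patches_2d(noisy, patch_size)
    flat = patches.reshape(patches.shape[0], -1)
    means = flat.mean(axis=1, keepdims=True)
    flat = flat - means                          # code the structure, not the DC level
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=seed)
    codes = dico.fit(flat).transform(flat)
    recon = (codes @ dico.components_ + means).reshape(patches.shape)
    return reconstruct_from_patches_2d(recon, noisy.shape)

clean = np.tile(np.linspace(0, 1, 64), (64, 1))              # toy smooth image
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(clean.shape)
denoised = sparse_denoise_2d(noisy)
```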

  16. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    PubMed Central

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-01-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. Its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach can achieve effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images. PMID:26980176

  17. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation.

    PubMed

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-16

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. Its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach can achieve effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  18. Representation of 3-D surface orientation by velocity and disparity gradient cues in area MT.

    PubMed

    Sanada, Takahisa M; Nguyenkim, Jerry D; Deangelis, Gregory C

    2012-04-01

    Neural coding of the three-dimensional (3-D) orientation of planar surface patches may be an important intermediate step in constructing representations of complex 3-D surface structure. Spatial gradients of binocular disparity, image velocity, and texture provide potent cues to the 3-D orientation (tilt and slant) of planar surfaces. Previous studies have described neurons in both dorsal and ventral stream areas that are selective for surface tilt based on one or more of these gradient cues. However, relatively little is known about whether single neurons provide consistent information about surface orientation from multiple gradient cues. Moreover, it is unclear how neural responses to combinations of surface orientation cues are related to responses to the individual cues. We measured responses of middle temporal (MT) neurons to random dot stimuli that simulated planar surfaces at a variety of tilts and slants. Four cue conditions were tested: disparity, velocity, and texture gradients alone, as well as all three gradient cues combined. Many neurons showed robust tuning for surface tilt based on disparity and velocity gradients, with relatively little selectivity for texture gradients. Some neurons showed consistent tilt preferences for disparity and velocity cues, whereas others showed large discrepancies. Responses to the combined stimulus were generally well described as a weighted linear sum of responses to the individual cues, even when disparity and velocity preferences were discrepant. These findings suggest that area MT contains a rudimentary representation of 3-D surface orientation based on multiple cues, with single neurons implementing a simple cue integration rule.

  19. Face recognition using 3D facial shape and color map information: comparison and combination

    NASA Astrophysics Data System (ADS)

    Godil, Afzal; Ressler, Sandy; Grother, Patrick

    2004-08-01

    In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.
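
    As a concrete illustration of the kind of pipeline summarized above, the sketch below projects each modality (3D surface vectors and color-map vectors) into a PCA subspace, computes nearest-neighbour match scores per modality, and fuses them with a simple min-max-normalized sum rule. The dimensions, the sum-rule choice, and the helper names are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def match_scores(gallery, probes, n_components=20):
    """Nearest-neighbour distances in a PCA subspace (one modality)."""
    pca = PCA(n_components=n_components).fit(gallery)
    g, p = pca.transform(gallery), pca.transform(probes)
    # distance from every probe to every gallery entry
    return np.linalg.norm(p[:, None, :] - g[None, :, :], axis=2)

def fuse_min_max(score_a, score_b):
    """Simple sum-rule fusion after min-max normalisation of each modality."""
    norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-12)
    return norm(score_a) + norm(score_b)

rng = np.random.default_rng(0)
shape_gallery, shape_probe = rng.random((200, 3000)), rng.random((50, 3000))
color_gallery, color_probe = rng.random((200, 3000)), rng.random((50, 3000))
fused = fuse_min_max(match_scores(shape_gallery, shape_probe),
                     match_scores(color_gallery, color_probe))
predicted_ids = fused.argmin(axis=1)   # identity = gallery entry with the lowest fused score
```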

  20. Shape representation for efficient landmark-based segmentation in 3-d.

    PubMed

    Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2014-04-01

    In this paper, we propose a novel approach to landmark-based shape representation that is based on transportation theory, where landmarks are considered as sources and destinations, all possible landmark connections as roads, and established landmark connections as goods transported via these roads. Landmark connections, which are selectively established, are identified through their statistical properties describing the shape of the object of interest, and indicate the least costly roads for transporting goods from sources to destinations. From such a perspective, we introduce three novel shape representations that are combined with an existing landmark detection algorithm based on game theory. To reduce the computational complexity that results from the extension from 2-D to 3-D segmentation, landmark detection is augmented by a concept known in game theory as strategy dominance. The novel shape representations, game-theoretic landmark detection, and strategy dominance are combined into a segmentation framework that was evaluated on 3-D computed tomography images of lumbar vertebrae and femoral heads. The best shape representation yielded symmetric surface distances of 0.75 mm and 1.11 mm, and Dice coefficients of 93.6% and 96.2%, for lumbar vertebrae and femoral heads, respectively. By applying strategy dominance, the computational costs were further reduced by up to a factor of three.

  1. Production of 3D consistent image representation of outdoor scenery for multimedia ambiance communication from multiviewpoint range data measured with a 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Saito, Takahiro; Imamura, Hiroshi; Sunaga, Shin-ichi; Komatsu, Takashi

    2002-03-01

    Toward future 3D image communication, we have started studying the Multimedia Ambiance Communication, a kind of shared-space communication, and adopted an approach to design the 3D-image space using actual images of outdoor scenery, by introducing the concept of the three-layer model of long-, mid- and short-range views. The long- and mid-range views do not require precise representation of their 3D structure, and hence we employ the setting representation like stage settings to approximate their 3D structure according to the slanting-plane-model. We deal with an approach to produce the consistent setting representation for describing long- and mid-range views from range and texture data measured with a laser scanner and a digital camera located at multiple viewpoints. The production of such a representation requires the development of several techniques: nonlinear smoothing of raw range data, plane segmentation of range data, registration of multi-viewpoint range data, integration of multi-viewpoint setting representations and texture mapping onto each setting plane. In this paper, we concentrate on the plane segmentation and the multi-viewpoint data registration. Our plane segmentation method is based on the concept of the region competition, and can precisely extract fitting planes from the range data. Our registration method uses the equations of the segmented planes corresponding between two different viewpoints to determine the 3D Euclidean transformation between them. A unifying consistent setting representation can be constructed by integrating multiple setting representations for multiple viewpoints.
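
    The slanting-plane model above rests on fitting planes to range data; a minimal least-squares plane fit via SVD is sketched below. The function names and the synthetic data are illustrative; the region-competition segmentation and the multi-viewpoint registration described in the abstract are not reproduced.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) array of range points.

    Returns (unit normal, centroid); the plane satisfies n . (x - c) = 0.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                      # direction of smallest variance
    return normal, centroid

def point_plane_distance(points, normal, centroid):
    """Absolute distance of each point from the fitted plane."""
    return np.abs((points - centroid) @ normal)

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(500, 2))
z = 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + 0.01 * rng.standard_normal(500)
pts = np.column_stack([xy, z])          # noisy samples of a slanted plane
n, c = fit_plane(pts)
print(point_plane_distance(pts, n, c).mean())
```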

  2. Comparison of low cost 3D structured light scanners for face modeling.

    PubMed

    Bakirman, Tolga; Gumusay, Mustafa Umit; Reis, Hatice Catal; Selbesoglu, Mahmut Oguz; Yosmaoglu, Serra; Yaras, Mehmet Cem; Seker, Dursun Zafer; Bayram, Bulent

    2017-02-01

    This study compares three different structured light scanner systems for generating accurate 3D human face models. Among these systems, the densest and most expensive one was taken as the reference, and the other two, which were low cost and low resolution, were compared against it. One female face and one male face were scanned with the three structured light scanner systems. Point-cloud filtering, mesh generation, and hole-filling steps were carried out using a trial version of commercial software, and the data evaluation process was performed using the open-source software CloudCompare. Various filtering and mesh smoothing levels were applied to the reference data for comparison with the other low-cost systems, and the optimum reduction level of the reference data was determined before further processing. The outcome of the presented study shows that low-cost structured light scanners have great potential for 3D object modeling, including the human face. A considerably cheaper structured light system was used owing to its capacity to obtain spatial and morphological information in the 3D human face modeling case study. This study also discusses the benefits and accuracy of low-cost structured light systems.

  3. Representation and coding of large-scale 3D dynamic maps

    NASA Astrophysics Data System (ADS)

    Cohen, Robert A.; Tian, Dong; Krivokuća, Maja; Sugimoto, Kazuo; Vetro, Anthony; Wakimoto, Koji; Sekiguchi, Shunichi

    2016-09-01

    combined with depth and color measurements of the surrounding environment. Localization could be achieved with GPS, inertial measurement units (IMU), cameras, or combinations of these and other devices, while the depth measurements could be achieved with time-of-flight, radar or laser scanning systems. The resulting 3D maps, which are composed of 3D point clouds with various attributes, could be used for a variety of applications, including finding your way around indoor spaces, navigating vehicles around a city, space planning, topographical surveying or public surveying of infrastructure and roads, augmented reality, immersive online experiences, and much more. This paper discusses application requirements related to the representation and coding of large-scale 3D dynamic maps. In particular, we address requirements related to different types of acquisition environments, scalability in terms of progressive transmission and efficiently rendering different levels of details, as well as key attributes to be included in the representation. Additionally, an overview of recently developed coding techniques is presented, including an assessment of current performance. Finally, technical challenges and needs for future standardization are discussed.

  4. Surveying, Modeling and 3d Representation of a wreck for Diving Purposes: Cargo Ship "vera"

    NASA Astrophysics Data System (ADS)

    Ktistis, A.; Tokmakidis, P.; Papadimitriou, K.

    2017-02-01

    This paper presents the results from an underwater recording of the stern part of a contemporary cargo-ship wreck. The aim of this survey was to create 3D representations of this wreck mainly for recreational diving purposes. The key points of this paper are: a) the implementation of the underwater recording at a diving site; b) the reconstruction of a 3d model from data that have been captured by recreational divers; and c) the development of a set of products to be used by the general public for the ex situ presentation or for the in situ navigation. The idea behind this project is to define a simple and low cost procedure for the surveying, modeling and 3D representation of a diving site. The perspective of our team is to repeat the proposed methodology for the documentation and the promotion of other diving sites with cultural features, as well as to train recreational divers in underwater surveying procedures towards public awareness and community engagement in the maritime heritage.

  5. 3D Face Generation Tool Candide for Better Face Matching in Surveillance Video

    DTIC Science & Technology

    2014-07-01

    Keywords: watch-list screening, biometrics, reliability, performance evaluation. Community of Practice: Biometrics and Identity Management. Canada Safety and… Related publications: Dmitry Gorodnichy, Eric Granger, “PROVE-IT(FRiV): framework and results”, also published in Proceedings of NIST International Biometrics …; Granger, “Evaluation of Face Recognition for Video Surveillance”, also published in Proceedings of NIST International Biometric Performance Conference.

  6. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

    An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest that the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface and that, across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  7. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
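
    A bilinear model of the kind described above synthesizes a face mesh by contracting a rank-3 core tensor with an identity weight vector and an expression weight vector. The sketch below shows that contraction with NumPy's einsum on a randomly generated core of illustrative size; the actual FaceWarehouse core is learned from the fitted meshes (e.g., via tensor decomposition), which is not reproduced here.

```python
import numpy as np

# Toy dimensions: 3 * n_vertices mesh coordinates, 150 identities, 20 expressions.
n_coords, n_id, n_exp = 3 * 1000, 150, 20

rng = np.random.default_rng(0)
core = rng.standard_normal((n_coords, n_id, n_exp))   # stand-in for a learned core tensor

def synthesize_face(core, w_id, w_exp):
    """Contract the bilinear (rank-3) model with identity and expression weights.

    core : (coords, identities, expressions) tensor.
    w_id, w_exp : weight vectors; the result is a flattened vertex array.
    """
    return np.einsum('cie,i,e->c', core, w_id, w_exp)

w_identity = rng.standard_normal(n_id)
w_expression = np.zeros(n_exp)
w_expression[0] = 1.0                                  # e.g. select one expression mode
vertices = synthesize_face(core, w_identity, w_expression).reshape(-1, 3)
```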

  8. Progressive Shape-Distribution-Encoder for Learning 3D Shape Representation.

    PubMed

    Xie, Jin; Zhu, Fan; Dai, Guoxian; Shao, Ling; Fang, Yi

    2017-03-01

    Since 3D shapes exhibit complex geometric variations, extracting efficient 3D shape features is one of the most challenging tasks in shape matching and retrieval. In this paper, we propose a deep shape descriptor obtained by learning shape distributions at different diffusion times via a progressive shape-distribution-encoder (PSDE). First, we develop a shape distribution representation with a kernel density estimator to characterize the intrinsic geometric structures of 3D shapes. Then, we propose to learn a deep shape feature through an unsupervised PSDE. Specifically, the unsupervised PSDE aims at modeling the complex nonlinear transform of the estimated shape distributions between consecutive diffusion times. In order to characterize the intrinsic structures of 3D shapes more efficiently, we stack multiple PSDEs to form a network structure. Finally, we concatenate all neurons in the middle hidden layers of the unsupervised PSDE network to form an unsupervised shape descriptor for retrieval. Furthermore, by imposing an additional constraint on the outputs of all hidden layers, we propose a supervised PSDE to form a supervised shape descriptor: for each hidden layer, the similarity between a pair of outputs from the same class is as large as possible and the similarity between a pair of outputs from different classes is as small as possible. The proposed method is evaluated on three benchmark 3D shape data sets with large geometric variations, i.e., the McGill, SHREC'10 ShapeGoogle, and SHREC'14 Human data sets, and the experimental results demonstrate the superiority of the proposed method over existing approaches.

  9. 3D planar representation of stereo depth images for 3DTV applications.

    PubMed

    Özkalaycı, Burak O; Alatan, A Aydın

    2014-12-01

    The depth modality of the multiview video plus depth (MVD) format is an active research area whose main objective is to develop efficient compression methods that are friendly to depth-image-based rendering. As part of this research, a novel 3D planar-based depth representation is proposed. The planar approximation of multiple depth images is formulated as an energy-based co-segmentation problem using a Markov random field model. The energy terms of this problem are designed to mimic the rate-distortion tradeoff of a depth compression application. A novel algorithm is developed for the practical utilization of the proposed planar approximations in stereo depth compression. The co-segmented regions are also represented as layered planar structures, forming a novel single-reference MVD format. The ability of the proposed layered planar MVD representation to decouple texture and geometric distortions makes it a promising approach. The proposed 3D planar depth compression approaches are compared against state-of-the-art image/video coding standards through objective and visual evaluation and yield competitive performance.

  10. Realistic texture extraction for 3D face models robust to self-occlusion

    NASA Astrophysics Data System (ADS)

    Qu, Chengchao; Monari, Eduardo; Schuchert, Tobias; Beyerer, Jürgen

    2015-02-01

    In the context of face modeling, probably the most well-known approach to represent 3D faces is the 3D Morphable Model (3DMM). When 3DMM is fitted to a 2D image, the shape as well as the texture and illumination parameters are simultaneously estimated. However, if real facial texture is needed, texture extraction from the 2D image is necessary. This paper addresses the possible problems in texture extraction of a single image caused by self-occlusion. Unlike common approaches that leverage the symmetric property of the face by mirroring the visible facial part, which is sensitive to inhomogeneous illumination, this work first generates a virtual texture map for the skin area iteratively by averaging the color of neighbored vertices. Although this step creates unrealistic, overly smoothed texture, illumination stays constant between the real and virtual texture. In the second pass, the mirrored texture is gradually blended with the real or generated texture according to the visibility. This scheme ensures a gentle handling of illumination and yet yields realistic texture. Because the blending area only relates to non-informative area, main facial features still have unique appearance in different face halves. Evaluation results reveal realistic rendering in novel poses robust to challenging illumination conditions and small registration errors.
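
    The first pass described above, filling self-occluded texture by repeatedly averaging the colors of visible neighbours, can be sketched on a regular texture grid as below. This is a per-pixel approximation of the per-vertex averaging in the abstract (with an arbitrary 4-neighbourhood and wrap-around borders), and the later visibility-weighted blending with the mirrored texture is not shown.

```python
import numpy as np

def fill_by_neighbor_averaging(texture, valid, n_iters=200):
    """Iteratively fill invisible texels with the mean of their valid 4-neighbours.

    texture : (H, W, 3) float texture map; valid : (H, W) boolean visibility mask.
    """
    tex = texture.copy()
    known = valid.copy()
    for _ in range(n_iters):
        if known.all():
            break
        # Shifted copies of the texture and of the visibility mask (4-neighbourhood).
        shifts = ((0, 1), (0, -1), (1, 1), (1, -1))
        shifted = [np.roll(tex, s, axis=a) for a, s in shifts]
        shifted_mask = [np.roll(known, s, axis=a) for a, s in shifts]
        weight = np.sum(shifted_mask, axis=0).astype(float)
        avg = np.sum([m[..., None] * t for m, t in zip(shifted_mask, shifted)], axis=0)
        fill = (~known) & (weight > 0)            # unknown texels with at least one known neighbour
        tex[fill] = avg[fill] / weight[fill, None]
        known |= fill
    return tex

tex = np.random.rand(64, 64, 3)
mask = np.ones((64, 64), dtype=bool)
mask[20:40, 20:40] = False                        # simulate a self-occluded region
filled = fill_by_neighbor_averaging(tex, mask)
```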

  11. Template protection and its implementation in 3D face recognition systems

    NASA Astrophysics Data System (ADS)

    Zhou, Xuebing

    2007-04-01

    As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding, and biometrics. The key component of the algorithm is the conversion of biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent, and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical character of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance similar to that of the original 3D face templates, and a high security level is achieved with reasonable false acceptance and false rejection rates, based on an efficient statistical analysis. The algorithm estimates the statistical character of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, our tests show only a small difference in robustness and discriminative power between the classification results obtained under the assumption of uniformly distributed templates and those obtained under the assumption of Gaussian-distributed templates.

  12. Separating the Representation from the Science: Training Students in Comprehending 3D Diagrams

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Silver, D.; Chiang, J.; Halpern, D.; Oh, K.; Tremaine, M.

    2011-12-01

    Studies of students taking first year geology and earth science courses at universities find that a remarkable number of them are confused by the three-dimensional representations used to explain the science [1]. Comprehension of these 3D representations has been found to be related to an individual's spatial ability [2]. A variety of interactive programs and animations have been created to help explain the diagrams to beginning students [3, 4]. This work has demonstrated comprehension improvement and removed a gender gap between male (high spatial) and female (low spatial) students [5]. However, not much research has examined what makes the 3D diagrams so hard to understand or attempted to build a theory for creating training designed to remove these difficulties. Our work has separated the science labeling and comprehension of the diagrams from the visualizations to examine how individuals mentally see the visualizations alone. In particular, we asked subjects to create a cross-sectional drawing of the internal structure of various 3D diagrams. We found that viewing planes (the coordinate system the designer applies to the diagram), cutting planes (the planes formed by the requested cross sections) and visual property planes (the planes formed by the prominent features of the diagram, e.g., a layer at an angle of 30 degrees to the top surface of the diagram) that deviated from a Cartesian coordinate system imposed by the viewer caused significant problems for subjects, in part because these deviations forced them to mentally re-orient their viewing perspective. Problems with deviations in all three types of plane were significantly harder than those deviating on one or two planes. Our results suggest training that does not focus on showing how the components of various 3D geologic formations are put together but rather training that guides students in re-orienting themselves to deviations that differ from their right-angle view of the world, e.g., by showing how

  13. Staining and embedding of human chromosomes for 3-d serial block-face scanning electron microscopy.

    PubMed

    Yusuf, Mohammed; Chen, Bo; Hashimoto, Teruo; Estandarte, Ana Katrina; Thompson, George; Robinson, Ian

    2014-12-01

    The high-order structure of human chromosomes is an important biological question that is still under investigation. Studies have been done on imaging human mitotic chromosomes using mostly 2-D microscopy methods. To image micron-sized human chromosomes in 3-D, we developed a procedure for preparing samples for serial block-face scanning electron microscopy (SBFSEM). Polyamine chromosomes are first separated using a simple filtration method and then stained with heavy metal. We show that the DNA-specific platinum blue provides higher contrast than osmium tetroxide. A two-step procedure for embedding chromosomes in resin is then used to concentrate the chromosome samples. After stacking the SBFSEM images, a familiar X-shaped chromosome was observed in 3-D.

  14. Robust and Blind 3D Mesh Watermarking in Spatial Domain Based on Faces Categorization and Sorting

    NASA Astrophysics Data System (ADS)

    Molaei, Amir Masoud; Ebrahimnezhad, Hossein; Sedaaghi, Mohammad Hossein

    2016-06-01

    In this paper, a 3D watermarking algorithm in the spatial domain with blind detection is presented. With the proposed method, only negligible visual distortion is observed in the host model. Initially, a preprocessing step is applied to the 3D model to make it robust against geometric transformation attacks. Then, a number of triangle faces are determined as mark triangles using a novel systematic approach in which faces are categorized and sorted robustly. In order to enhance the capability of recovering the information after attacks, block watermarks are encoded using a Reed-Solomon block error-correcting code before being embedded into the mark triangles. Next, the encoded watermarks are embedded in spherical coordinates. The proposed method is robust against additive noise, mesh smoothing, and quantization attacks. It is also robust against geometric transformation and vertex/face reordering attacks. Moreover, the proposed algorithm is designed to be robust against the cropping attack. Simulation results confirm that the watermarked models exhibit very low distortion if the control parameters are selected properly. Comparison with other methods demonstrates that the proposed method performs well against mesh smoothing attacks.

  15. Three-dimensional recording of the human face with a 3D laser scanner.

    PubMed

    Kovacs, L; Zimmermann, A; Brockmann, G; Gühring, M; Baurecht, H; Papadopulos, N A; Schwenzer-Zimmerer, K; Sader, R; Biemer, E; Zeilhofer, H F

    2006-01-01

    Three-dimensional recording of the surface of the human body or of certain anatomical areas has gained ever increasing importance in recent years. When recording living surfaces, such as the human face, not only does a varying degree of surface complexity have to be accounted for, but also a variety of other factors, such as motion artefacts. It is important to establish standards for the recording procedure that optimise results and allow better comparison and validation. In the study presented here, the faces of five male test persons were scanned in different experimental settings using non-contact 3D digitisers (Minolta Vivid 910). Among other factors, the influence of the number of scanners used, the angle of recording, the head position of the test person, the impact of the examiner, and the examination time on the accuracy and precision of the virtual face models generated from the scanner data with specialised software was investigated. Computed data derived from the virtual models were compared to corresponding reference measurements carried out manually between defined landmarks on the test persons' faces. We describe experimental conditions that were of benefit in optimising the quality of the scanner recordings and the reliability of three-dimensional surface imaging. However, almost 50% of the distances between landmarks derived from the virtual models deviated by more than 2 mm from the reference manual measurements on the volunteers' faces.

  16. Familiar face + novel face = familiar face? Representational bias in the perception of morphed faces in chimpanzees

    PubMed Central

    Myowa-Yamakoshi, Masako

    2016-01-01

    Highly social animals possess a well-developed ability to distinguish the faces of familiar from novel conspecifics and to produce correspondingly distinct behaviors for maintaining their society. However, how animals behave when they encounter ambiguous faces of familiar yet novel conspecifics, e.g., strangers with faces resembling known individuals, has not been well characterised. Using a morphing technique and a preferential-looking paradigm, we address this question via the chimpanzee's face-recognition abilities. We presented eight subjects with three types of stimuli: (1) familiar faces, (2) novel faces, and (3) intermediate morphed faces that were 50% familiar and 50% novel faces of conspecifics. We found that chimpanzees spent more time looking at novel faces and scanned novel faces more extensively than familiar or intermediate faces. Interestingly, chimpanzees looked at intermediate faces in a manner similar to familiar faces with regard to fixation duration, fixation count, and saccade length for facial scanning, even though they were encountering the intermediate faces for the first time. We excluded the possibility that subjects merely detected and avoided traces of morphing in the intermediate faces. These findings suggest a feeling-of-familiarity bias: chimpanzees perceive an intermediate face as familiar by detecting traces of a known individual, as a 50% alteration is still sufficient for familiarity to be perceived. PMID:27602275

  17. Using Facial Symmetry to Handle Pose Variations in Real-World 3D Face Recognition.

    PubMed

    Passalis, Georgios; Perakis, Panagiotis; Theoharis, Theoharis; Kakadiaris, Ioannis A

    2011-10-01

    The uncontrolled conditions of real-world biometric applications pose a great challenge to any face recognition approach. The unconstrained acquisition of data from uncooperative subjects may result in facial scans with significant pose variations along the yaw axis. Such pose variations can cause extensive occlusions, resulting in missing data. In this paper, a novel 3D face recognition method is proposed that uses facial symmetry to handle pose variations. It employs an automatic landmark detector that estimates pose and detects occluded areas for each facial scan. Subsequently, an Annotated Face Model is registered and fitted to the scan. During fitting, facial symmetry is used to overcome the challenges of missing data. The result is a pose invariant geometry image. Unlike existing methods that require frontal scans, the proposed method performs comparisons among interpose scans using a wavelet-based biometric signature. It is suitable for real-world applications as it only requires half of the face to be visible to the sensor. The proposed method was evaluated using databases from the University of Notre Dame and the University of Houston that, to the best of our knowledge, include the most challenging pose variations publicly available. The average rank-one recognition rate of the proposed method in these databases was 83.7 percent.

  18. Supervised Filter Learning for Representation Based Face Recognition

    PubMed Central

    Bi, Chao; Zhang, Lei; Qi, Miao; Zheng, Caixia; Yi, Yugen; Wang, Jianzhong; Zhang, Baoxue

    2016-01-01

    Representation-based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been developed successfully for the face recognition problem. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performance may be affected by problematic factors (such as illumination and expression variations) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation-based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation-based classifiers. Furthermore, we also extend our algorithm to the heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm. PMID:27416030
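
    For readers unfamiliar with the LBP features mentioned above, the sketch below computes basic 3x3 Local Binary Pattern codes and a normalized code histogram for a grayscale image; the learned filter that the paper applies before LBP extraction is not reproduced, and the helper names are illustrative.

```python
import numpy as np

def lbp_8neighbour(image):
    """Basic 3x3 Local Binary Pattern codes for a grayscale image.

    Each interior pixel is compared with its 8 neighbours; the comparison
    bits are packed into a code in [0, 255].
    """
    c = image[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy:image.shape[0] - 1 + dy, 1 + dx:image.shape[1] - 1 + dx]
        code |= (neighbour >= c).astype(np.int32) << bit
    return code

def lbp_histogram(image, n_bins=256):
    """Normalized histogram of LBP codes, usable as a simple face feature."""
    codes = lbp_8neighbour(image)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()

face = np.random.rand(64, 64)           # stand-in for a (filtered) face image
feature = lbp_histogram(face)
```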

  19. Depth-based representations: Which coding format for 3D video broadcast applications?

    NASA Astrophysics Data System (ADS)

    Kerbiriou, Paul; Boisson, Guillaume; Sidibé, Korian; Huynh-Thu, Quan

    2011-03-01

    3D Video (3DV) delivery standardization is currently ongoing in MPEG, and the time has come to choose the 3DV data representation format. What is at stake is the final quality for end users, i.e., the visual quality of synthesized views. We focus on two major rival depth-based formats, namely Multiview Video plus Depth (MVD) and Layered Depth Video (LDV). MVD can be considered the basic depth-based 3DV format, generated by disparity estimation from multiview sequences. LDV is more sophisticated, compacting multiview data into color and depth occlusion layers. We compare final view quality using MVD2 and LDV (both containing two color channels plus two depth components) coded with MVC at various compression ratios. Depending on the format, the appropriate synthesis process is performed to generate the final stereoscopic pairs. Comparisons are provided in terms of SSIM and PSNR with respect to the original views and to synthesized references (obtained without compression). Ultimately, LDV significantly outperforms MVD when state-of-the-art reference synthesis algorithms are used. Managing occlusions before encoding is advantageous compared with handling redundant signals at the decoder side. In addition, we observe that depth quantization does not induce much loss in final view quality until a significant degradation level is reached. Improvements in disparity estimation and view synthesis algorithms are therefore still expected during the remaining standardization steps.

  20. Scale Space Graph Representation and Kernel Matching for Non Rigid and Textured 3D Shape Retrieval.

    PubMed

    Garro, Valeria; Giachetti, Andrea

    2016-06-01

    In this paper we introduce a novel framework for 3D object retrieval that relies on tree-based shape representations (TreeSha) derived from the analysis of the scale-space of the Auto Diffusion Function (ADF) and on specialized graph kernels designed for their comparison. By coupling maxima of the Auto Diffusion Function with the related basins of attraction, we can link the information at different scales encoding spatial relationships in a graph description that is isometry invariant and can easily incorporate texture and additional geometrical information as node and edge features. Using custom graph kernels it is then possible to estimate shape dissimilarities adapted to different specific tasks and on different categories of models, making the procedure a powerful and flexible tool for shape recognition and retrieval. Experimental results demonstrate that the method can provide retrieval scores similar or better than state-of-the-art on textured and non textured shape retrieval benchmarks and give interesting insights on effectiveness of different shape descriptors and graph kernels.

  1. Low dimensional representation of face space by face-selective inferior temporal neurons.

    PubMed

    Salehi, Sina; Dehaqani, Mohammad-Reza A; Esteky, Hossein

    2017-03-07

    Representation of visual objects in primate brain is distributed and multiple neurons are involved in encoding each object. One way to understand the neural basis of object representation is to estimate the number of neural dimensions that are needed for veridical representation of object categories. In this study, the characteristics of the match between physical-shape and neural representational spaces in monkey inferior temporal (IT) cortex have been evaluated. Specifically, we examined how the number of neural dimensions, stimulus behavioral saliency and stimulus category selectivity of neurons affect the correlation between shape and neural representational spaces in IT cortex. Single unit recordings from monkey IT revealed that there was a significant match between face space and its neural representation at lower neural dimensions while the optimal match for the non-face objects was observed at higher neural dimensions. There was a statistically significant match between the face and neural spaces only in the face selective neurons while a significant match was observed for non-face objects in all neurons regardless of their category selectivity. Interestingly, the face neurons showed higher match for the non-face objects than for the faces at higher neural dimensions. The optimal representation of face space in the responses of the face neurons was a low dimensional map that emerged early (~150 ms post stimulus onset) and was followed by a high dimensional and relatively late (~300 ms) map for the non-face stimuli. These results support a multiplexing function for the face neurons in the representation of highly similar shape spaces, but with different dimensionality and timing scales.

  2. Identity from Variation: Representations of Faces Derived from Multiple Instances

    ERIC Educational Resources Information Center

    Burton, A. Mike; Kramer, Robin S. S.; Ritchie, Kay L.; Jenkins, Rob

    2016-01-01

    Research in face recognition has tended to focus on discriminating between individuals, or "telling people apart." It has recently become clear that it is also necessary to understand how images of the same person can vary, or "telling people together." Learning a new face, and tracking its representation as it changes from…

  3. Anticipatory Spatial Representation of 3D Regions Explored by Sighted Observers and a Deaf-and-Blind-Observer

    ERIC Educational Resources Information Center

    Intraub, Helene

    2004-01-01

    Viewers who study photographs of scenes tend to remember having seen beyond the boundaries of the view ["boundary extension"; J. Exp. Psychol. Learn. Mem. Cogn. 15 (1989) 179]. Is this a fundamental aspect of scene representation? Forty undergraduates explored bounded regions of six common (3D) scenes, visually or haptically (while blindfolded)…

  4. Combination of direct matching and collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Chongyang

    2013-06-01

    It has been proved that representation-based classification (RBC) can achieve high accuracy in face recognition. However, conventional RBC has a very high computational cost. Collaborative representation, proposed in [1], not only has the advantages of RBC but is also computationally very efficient. In this paper, a combination of direct matching of images and collaborative representation is proposed for face recognition. Experimental results show that the proposed method consistently classifies more accurately than collaborative representation alone. The underlying reason is that direct matching of images and collaborative representation use different ways to calculate the dissimilarity between the test sample and a training sample. As a result, the score obtained using direct matching of images is very complementary to the score obtained using collaborative representation. Indeed, the analysis shows that the matching scores generated by direct matching of images and collaborative representation always have a low correlation. This allows the proposed method to exploit more information for face recognition and to produce a better result.
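
    As a rough illustration of how such a score-level combination can work, the sketch below fuses a collaborative-representation (ridge-regression) residual with a nearest-training-sample distance per class. It is a minimal sketch assuming vectorised grayscale images; the regularisation weight lam, the fusion weight w and the min-max normalisation are illustrative choices, not the paper's exact settings.

        import numpy as np

        def crc_scores(D, y, labels, lam=0.01):
            # Collaborative representation: ridge-regression coding of y over all training
            # faces (columns of D), scored by the class-wise reconstruction residual.
            coef = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
            return {c: np.linalg.norm(y - D[:, labels == c] @ coef[labels == c])
                    for c in np.unique(labels)}

        def direct_scores(D, y, labels):
            # Direct matching: smallest Euclidean distance to any training face of each class.
            dists = np.linalg.norm(D - y[:, None], axis=0)
            return {c: dists[labels == c].min() for c in np.unique(labels)}

        def fused_label(D, y, labels, w=0.5):
            # Min-max normalise both score sets, then pick the class with the lowest fused score.
            def norm(s):
                lo, hi = min(s.values()), max(s.values())
                return {c: (v - lo) / (hi - lo + 1e-12) for c, v in s.items()}
            n1, n2 = norm(crc_scores(D, y, labels)), norm(direct_scores(D, y, labels))
            return min(n1, key=lambda c: w * n1[c] + (1.0 - w) * n2[c])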

  5. Does My Face FIT?: A Face Image Task Reveals Structure and Distortions of Facial Feature Representation

    PubMed Central

    Fuentes, Christina T.; Runa, Catarina; Blanco, Xenxo Alvarez; Orvalho, Verónica; Haggard, Patrick

    2013-01-01

    Despite extensive research on face perception, few studies have investigated individuals’ knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual’s features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one’s own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one’s own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness. PMID:24130790

  6. Deformable modeling using a 3D boundary representation with quadratic constraints on the branching structure of the Blum skeleton.

    PubMed

    Yushkevich, Paul A; Zhang, Hui Gary

    2013-01-01

    We propose a new approach for statistical shape analysis of 3D anatomical objects based on features extracted from skeletons. Like prior work on medial representations, the approach involves deforming a template to target shapes in a way that preserves the branching structure of the skeleton and provides intersubject correspondence. However, unlike medial representations, which parameterize the skeleton surfaces explicitly, our representation is boundary-centric, and the skeleton is implicit. Similar to prior constrained modeling methods developed for 2D objects or tube-like 3D objects, we impose symmetry constraints on tuples of boundary points in a way that guarantees the preservation of the skeleton's topology under deformation. Once discretized, the problem of deforming a template to a target shape is formulated as a quadratically constrained quadratic programming problem. The new technique is evaluated in terms of its ability to capture the shape of the corpus callosum tract extracted from diffusion-weighted MRI.

  7. EEG evidence of face-specific visual self-representation.

    PubMed

    Miyakoshi, Makoto; Kanayama, Noriaki; Iidaka, Tetsuya; Ohira, Hideki

    2010-05-01

    Cognitive science has regarded an individual's face as a form of representative stimuli to engage self-representation. The domain-generality of self-representation has been assumed in several reports, but was recently refuted in a functional magnetic resonance imaging study (Sugiura et al., 2008). The general validity of this study's criticism should be tested by other measures to compensate for the limitation of the time resolution of the blood-oxygen-level-dependent (BOLD) signal. In this article, we report an EEG study on the domain-generality of visual self-representation. Domain-general self-representation was operationally defined as the self-relevance common to one's own Face and Cup; three levels of familiarity, Self, Familiar, and Unfamiliar, were prepared for each. There was another condition, Visual Field, that manipulated visual hemifield during stimulus presentation, but it was collapsed because it produced no interaction with stimulus familiarity. Our results confirmed comparable phase resetting in both domains in response to familiarity manipulation, which occurred within the medial frontal area during 270-390 ms poststimulus and in the theta band. However, self-specific dissociation was observed only for Face. The results here support the conclusion that visual self-representation is domain-specific and that the oscillatory responses observed suggest evidence of face-specific visual self-representation. Results also revealed an inter-trial phase coherency decrease specifically for Self-Face within the right fusiform area during 170-290 ms poststimulus and in the alpha and theta band, suggesting reduced functional demand for Self-Face represented by sharpened networks.

  8. Three dimensional surface analyses of pubic symphyseal faces of contemporary Japanese reconstructed with 3D digitized scanner.

    PubMed

    Biwasaka, Hitoshi; Sato, Kei; Aoki, Yasuhiro; Kato, Hideaki; Maeno, Yoshitaka; Tanijiri, Toyohisa; Fujita, Sachiko; Dewa, Koji

    2013-09-01

    Three dimensional pubic bone images were analyzed to quantify some age-dependent morphological changes of the symphyseal faces of contemporary Japanese residents. The images were synthesized from 145 bone specimens with a 3D measuring device. Phases of the Suchey-Brooks system were determined on the 3D pubic symphyseal images without discrepancy from those determined on the real bones, because of the high fidelity of the images. Subsequently, mean curvatures of the pubic symphyseal faces were analyzed on the 3D images to examine the concavo-convex condition of the surfaces. Average values of the absolute mean curvatures of the phase 1 and 2 groups were higher than those of the phase 3-6 groups, whereas the values were approximately constant beyond phase 3, presumably reflecting the inactivation of the pubic faces beyond phase 3. The ratio of concave areas increased gradually with progressing phase or age class, although convex areas were predominant in every phase.

  9. Study on Information Management for the Conservation of Traditional Chinese Architectural Heritage - 3d Modelling and Metadata Representation

    NASA Astrophysics Data System (ADS)

    Yen, Y. N.; Weng, K. H.; Huang, H. Y.

    2013-07-01

    After over 30 years of practice and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. To document and digitize these unique heritages over their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, octagonal with 333 elements in 8 types, as a case study for digitization practice. The application of metadata representation and 3D modelling are the two key issues discussed. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitations of the Revit database, the final 3D models were built with SketchUp. The research found that, firstly, cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations help the general understanding of architectural heritage, software such as Revit and SketchUp could, at this stage, only be used to model basic visual representations, and is ineffective in documenting additional critical data of individually unique elements. Secondly, when establishing conservation lifecycle information for application in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. The research recommends SketchUp as a tool for present modelling needs, and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.

  10. A real-time 3D end-to-end augmented reality system (and its representation transformations)

    NASA Astrophysics Data System (ADS)

    Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois

    2016-09-01

    The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.

  11. Similar Representations of Emotions Across Faces and Voices.

    PubMed

    Kuhn, Lisa Katharina; Wydell, Taeko; Lavan, Nadine; McGettigan, Carolyn; Garrido, Lúcia

    2017-03-02

    Emotions are a vital component of social communication, carried across a range of modalities and via different perceptual signals such as specific muscle contractions in the face and in the upper respiratory system. Previous studies have found that emotion recognition impairments after brain damage depend on the modality of presentation: recognition from faces may be impaired whereas recognition from voices remains preserved, and vice versa. On the other hand, there is also evidence for shared neural activation during emotion processing in both modalities. In a behavioral study, we investigated whether there are shared representations in the recognition of emotions from faces and voices. We used a within-subjects design in which participants rated the intensity of facial expressions and nonverbal vocalizations for each of the 6 basic emotion labels. For each participant and each modality, we then computed a representation matrix with the intensity ratings of each emotion. These matrices allowed us to examine the patterns of confusions between emotions and to characterize the representations of emotions within each modality. We then compared the representations across modalities by computing the correlations of the representation matrices across faces and voices. We found highly correlated matrices across modalities, which suggests similar representations of emotions across faces and voices. We also showed that these results could not be explained by commonalities between low-level visual and acoustic properties of the stimuli. We thus propose that there are similar or shared coding mechanisms for emotions which may act independently of modality, despite their distinct perceptual inputs.
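
    A minimal sketch of the analysis logic, assuming per-trial intensity ratings are stored as arrays indexed by stimulus emotion, rated emotion label and trial (the array shapes, the illustrative random data and the Pearson correlation are assumptions, not the authors' exact pipeline):

        import numpy as np

        def representation_matrix(ratings):
            # ratings[i, j, k]: rating on emotion label j for the k-th stimulus of emotion i.
            # The representation matrix is the mean intensity rating per (stimulus, label) pair.
            return ratings.mean(axis=2)

        def cross_modal_similarity(face_ratings, voice_ratings):
            # Correlate the flattened face and voice representation matrices.
            f = representation_matrix(face_ratings).ravel()
            v = representation_matrix(voice_ratings).ravel()
            return np.corrcoef(f, v)[0, 1]

        # Illustrative random data: 6 stimulus emotions x 6 rating scales x 20 trials each.
        rng = np.random.default_rng(0)
        faces = rng.uniform(1, 7, size=(6, 6, 20))
        voices = rng.uniform(1, 7, size=(6, 6, 20))
        print(cross_modal_similarity(faces, voices))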

  12. Multiple Representations-Based Face Sketch-Photo Synthesis.

    PubMed

    Peng, Chunlei; Gao, Xinbo; Wang, Nannan; Tao, Dacheng; Li, Xuelong; Li, Jie

    2016-11-01

    Face sketch-photo synthesis plays an important role in law enforcement and digital entertainment. Most of the existing methods use only pixel intensities as the feature. Since face images can be described using features from multiple aspects, this paper presents a novel multiple representations-based face sketch-photo synthesis method that adaptively combines multiple representations to represent an image patch. In particular, it combines multiple features from face images processed using multiple filters and deploys Markov networks to exploit the interacting relationships between neighboring image patches. The proposed framework can be solved using an alternating optimization strategy, and it normally converges in only five outer iterations in the experiments. Our experimental results on the Chinese University of Hong Kong (CUHK) face sketch database, celebrity photos, the CUHK Face Sketch FERET Database, the IIIT-D Viewed Sketch Database, and forensic sketches demonstrate the effectiveness of our method for face sketch-photo synthesis. In addition, cross-database and database-dependent style-synthesis evaluations demonstrate the generalizability of this novel method and suggest promising solutions for face identification in forensic science.

  13. Adaptive optimal quantization for 3D mesh representation in the spherical coordinate system

    NASA Astrophysics Data System (ADS)

    Ahn, Jeong-Hwan; Ho, Yo-Sung

    1998-12-01

    In recent years, applications using 3D models have been increasing. Since a 3D model contains a huge amount of information, compression of the 3D model data is necessary for efficient storage or transmission. In this paper, we propose an adaptive encoding scheme to compress the geometry information of the 3D model. Using the Levinson-Durbin algorithm, the encoder first predicts vertex positions along a vertex spanning tree. After each prediction error is normalized, the prediction error vector of each vertex point is represented in the spherical coordinate system (r, θ, φ). Each r is then quantized by an optimal uniform quantizer. Each pair (θ, φ) is then successively encoded by partitioning the surface of the sphere according to the quantized value of r. The proposed scheme demonstrates improved coding efficiency by exploiting the statistical properties of r and (θ, φ).
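
    The geometric part of the scheme, converting a normalized prediction-error vector to spherical coordinates and uniformly quantizing the radius, can be sketched as follows; the entropy coding and the r-dependent partitioning of the sphere are omitted, and the number of quantization levels is an illustrative assumption.

        import numpy as np

        def to_spherical(err):
            # Convert a 3D prediction-error vector to (r, theta, phi).
            x, y, z = err
            r = float(np.sqrt(x * x + y * y + z * z))
            theta = float(np.arccos(z / r)) if r > 0 else 0.0   # polar angle in [0, pi]
            phi = float(np.arctan2(y, x))                        # azimuth in (-pi, pi]
            return r, theta, phi

        def quantize_radius(r, r_max, levels=256):
            # Uniform scalar quantization of the radial component.
            step = r_max / levels
            idx = min(int(r / step), levels - 1)
            return idx, (idx + 0.5) * step                       # index and reconstructed value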

  14. Robust face representation using hybrid spatial feature interdependence matrix.

    PubMed

    Yao, Anbang; Yu, Shan

    2013-08-01

    A key issue in face recognition is to seek an effective descriptor for representing face appearance. In the context of considering the face image as a set of small facial regions, this paper presents a new face representation approach coined spatial feature interdependence matrix (SFIM). Unlike classical face descriptors which usually use a hierarchically organized or a sequentially concatenated structure to describe the spatial layout features extracted from local regions, SFIM is attributed to the exploitation of the underlying feature interdependences regarding local region pairs inside a class specific face. According to SFIM, the face image is projected onto an undirected connected graph in a manner that explicitly encodes feature interdependence-based relationships between local regions. We calculate the pair-wise interdependence strength as the weighted discrepancy between two feature sets extracted in a hybrid feature space fusing histograms of intensity, local binary pattern and oriented gradients. To achieve the goal of face recognition, our SFIM-based face descriptor is embedded in three different recognition frameworks, namely nearest neighbor search, subspace-based classification, and linear optimization-based classification. Extensive experimental results on four well-known face databases and comprehensive comparisons with the state-of-the-art results are provided to demonstrate the efficacy of the proposed SFIM-based descriptor.
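
    The construction of the interdependence matrix itself is short to write down. The sketch below is a simplified illustration: it uses only an intensity histogram per region (the paper also fuses LBP and HOG histograms into the descriptor) and assumes a weighted Euclidean discrepancy in place of the paper's exact measure.

        import numpy as np

        def region_descriptor(region):
            # Descriptor of one local face region; here just a normalised intensity histogram.
            hist, _ = np.histogram(region, bins=32, range=(0, 256), density=True)
            return hist

        def sfim(regions, weights=None):
            # Spatial feature interdependence matrix: pairwise discrepancy between the
            # descriptors of every pair of local regions of one face image.
            feats = [region_descriptor(r) for r in regions]
            n = len(feats)
            w = np.ones(len(feats[0])) if weights is None else weights
            M = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    M[i, j] = M[j, i] = np.sqrt(np.sum(w * (feats[i] - feats[j]) ** 2))
            return M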

  15. Uncertainty analysis in 3D global models: Aerosol representation in MOZART-4

    NASA Astrophysics Data System (ADS)

    Gasore, J.; Prinn, R. G.

    2012-12-01

    The Probabilistic Collocation Method (PCM) has been proven to be an efficient general method of uncertainty analysis in atmospheric models (Tatang et al. 1997, Cohen & Prinn 2011). However, its application has been mainly limited to urban- and regional-scale models and chemical source-sink models, because of the drastic increase in computational cost when the dimension of the uncertain parameters increases. Moreover, the high-dimensional output of global models has to be reduced to allow a computationally reasonable number of polynomials to be generated. This dimensional reduction has been mainly achieved by grouping the model grids into a few regions based on prior knowledge and expectations; urban versus rural for instance. As the model output is used to estimate the coefficients of the polynomial chaos expansion (PCE), the arbitrariness in the regional aggregation can generate problems in estimating uncertainties. To address these issues in a complex model, we apply the probabilistic collocation method of uncertainty analysis to the aerosol representation in MOZART-4, which is a 3D global chemical transport model (Emmons et al., 2010). Thereafter, we deterministically delineate the model output surface into regions of homogeneous response using the method of Principal Component Analysis. This allows the quantification of the uncertainty associated with the dimensional reduction. Because only a bulk mass is calculated online in MOZART-4, a lognormal number distribution is assumed with a priori fixed scale and location parameters, to calculate the surface area for heterogeneous reactions involving tropospheric oxidants. We have applied the PCM to the six parameters of the lognormal number distributions of Black Carbon, Organic Carbon and Sulfate. We have carried out a Monte-Carlo sampling from the probability density functions of the six uncertain parameters, using the reduced PCE model. The global mean concentration of major tropospheric oxidants did not show a

  16. Face recognition using tridiagonal matrix enhanced multivariance products representation

    NASA Astrophysics Data System (ADS)

    Özay, Evrim Korkmaz

    2017-01-01

    This study aims to retrieve face images from a database according to a target face image. For this purpose, the Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR) is taken into consideration. TMEMPR is a recursive algorithm based on Enhanced Multivariance Products Representation (EMPR). TMEMPR decomposes a matrix into three components: a matrix of left support terms, a tridiagonal matrix of weight parameters for each recursion, and a matrix of right support terms. In this sense, there is an analogy between Singular Value Decomposition (SVD) and TMEMPR. However, TMEMPR is a more flexible algorithm since its initial support terms (or vectors) can be chosen as desired. Low computational complexity is another advantage of TMEMPR because the algorithm is constructed from recursions of certain arithmetic operations without requiring any iteration. The algorithm has been trained and tested on the ORL face image database, with 400 different grayscale images of 40 different people. TMEMPR's performance is compared with that of SVD.

  17. Pose-invariant face-head identification using a bank of neural networks and the 3D neck reference point

    NASA Astrophysics Data System (ADS)

    Hild, Michael; Yoshida, Kazunobu; Hashimoto, Motonobu

    2003-03-01

    A method for recognizing faces in relatively unconstrained environments, such as offices, is described. It can recognize faces occurring over an extended range of orientations and distances relative to the camera. As the pattern recognition mechanism, a bank of small neural networks of the multilayer perceptron type is used, where each perceptron has the task of recognizing only a single person's face. The perceptrons are trained with a set of nine face images representing the nine main facial orientations of the person to be identified, and a set of face images from various other persons. The center of the neck is determined as the reference point for face position unification. Geometric normalization and reference point determination utilize 3D data point measurements obtained with a stereo camera. The system achieves a recognition rate of about 95%.

  18. Cosine series representation of 3D curves and its application to white matter fiber bundles in diffusion tensor imaging

    PubMed Central

    Adluru, Nagesh; Lee, Jee Eun; Lazar, Mariana; Lainhart, Janet E.; Alexander, Andrew L.

    2011-01-01

    We present a novel cosine series representation for encoding fiber bundles consisting of multiple 3D curves. The coordinates of curves are parameterized as coefficients of cosine series expansion. We address the issue of registration, averaging and statistical inference on curves in a unified Hilbert space framework. Unlike traditional splines, the proposed method does not have internal knots and explicitly represents curves as a linear combination of cosine basis. This simplicity in the representation enables us to design statistical models, register curves and perform subsequent analysis in a more unified statistical framework than splines. The proposed representation is applied in characterizing abnormal shape of white matter fiber tracts passing through the splenium of the corpus callosum in autistic subjects. For an arbitrary tract, a 19 degree expansion is usually found to be sufficient to reconstruct the tract with 60 parameters. PMID:23316267
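
    A least-squares fit of such a cosine series to a sampled 3D curve is straightforward. The sketch below is a minimal illustration assuming the curve has been re-parameterised on an arc-length grid over [0, 1]; with degree 19 it yields 20 coefficients per coordinate, i.e. 60 parameters per tract, matching the figure quoted above.

        import numpy as np

        def cosine_design(n_points, degree):
            # Cosine basis psi_0(t) = 1, psi_k(t) = sqrt(2) * cos(k*pi*t) evaluated on [0, 1].
            t = np.linspace(0.0, 1.0, n_points)
            cols = [np.ones(n_points)] + [np.sqrt(2) * np.cos(k * np.pi * t)
                                          for k in range(1, degree + 1)]
            return np.column_stack(cols)

        def fit_cosine_series(curve, degree=19):
            # curve: (n_points, 3) array of x, y, z samples along one fiber tract.
            Psi = cosine_design(curve.shape[0], degree)
            coeffs, *_ = np.linalg.lstsq(Psi, curve, rcond=None)
            return coeffs                                   # shape (degree + 1, 3)

        def reconstruct(coeffs, n_points=200):
            # Evaluate the fitted curve on a regular grid.
            return cosine_design(n_points, coeffs.shape[0] - 1) @ coeffs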

  19. 3D Exploration of Meteorological Data: Facing the challenges of operational forecasters

    NASA Astrophysics Data System (ADS)

    Koutek, Michal; Debie, Frans; van der Neut, Ian

    2016-04-01

    In the past years the Royal Netherlands Meteorological Institute (KNMI) has been working on innovation in the field of meteorological data visualization. We are dealing with Numerical Weather Prediction (NWP) model data and observational data, i.e. satellite images, precipitation radar, and ground and air-borne measurements. These multidimensional, multivariate data are geo-referenced and can be combined in 3D space to provide more intuitive views of atmospheric phenomena. We developed the Weather3DeXplorer (W3DX), a framework for processing, interactive exploration and visualization of these data using Virtual Reality (VR) technology. We have had great success with research studies of extreme weather situations. In this paper we elaborate on what we have learned from the application of interactive 3D visualization in the operational weather room. We explain how important it is to control the degrees of freedom given to the users, forecasters and scientists, during interaction (3D camera and 3D slicing-plane navigation appear to be rather difficult for the users when not implemented properly). We present a novel approach to operational 3D visualization user interfaces (UIs) that to a great extent eliminates the obstacle and the time it usually takes to set up the visualization parameters and an appropriate camera view on a certain atmospheric phenomenon. We found our inspiration in the way our operational forecasters work in the weather room. We decided to form a bridge between 2D visualization images and interactive 3D exploration. Our method combines WEB-based 2D UIs and a pre-rendered 3D visualization catalog for the latest NWP model runs with immediate entry into an interactive 3D session for a selected visualization setting. Finally, we present the first user experiences with this approach.

  20. Generating virtual training samples for sparse representation of face images and face recognition

    NASA Astrophysics Data System (ADS)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions to improve the face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more human essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and new generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
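
    A minimal sketch of the two stages, assuming vectorised images scaled to [0, 1]; the value of K and the plain least-squares representation are illustrative simplifications of the devised classifier, not the paper's exact implementation.

        import numpy as np
        from itertools import combinations

        def expand_with_virtual_samples(X, y):
            # Add the element-wise product of every pair of same-subject images
            # as a virtual training sample.
            X_new, y_new = list(X), list(y)
            for c in np.unique(y):
                for i, j in combinations(np.where(y == c)[0], 2):
                    X_new.append(X[i] * X[j])
                    y_new.append(c)
            return np.array(X_new), np.array(y_new)

        def classify(X, y, test, k=10):
            # Pick the K nearest training samples, represent the test sample as their
            # linear combination, and assign the class with the smallest residual.
            nn = np.argsort(np.linalg.norm(X - test, axis=1))[:k]
            D, labels = X[nn].T, y[nn]
            coef, *_ = np.linalg.lstsq(D, test, rcond=None)
            residuals = {c: np.linalg.norm(test - D[:, labels == c] @ coef[labels == c])
                         for c in np.unique(labels)}
            return min(residuals, key=residuals.get)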

  1. Face recognition under variable illumination via sparse representation of patches

    NASA Astrophysics Data System (ADS)

    Fan, Shouke; Liu, Rui; Feng, Weiguo; Zhu, Ming

    2013-10-01

    The objective of this work is to recognize faces under variations in illumination. Previous works have indicated that variations in illumination can dramatically reduce the performance of face recognition. To this end, an efficient method for face recognition which is robust under variable illumination is proposed in this paper. First, a discrete cosine transform (DCT) in the logarithm domain is employed to preprocess the images, removing the illumination variations by discarding an appropriate number of low-frequency DCT coefficients. Then, a face image is partitioned into several patches, and each patch is classified using Sparse Representation-based Classification. Finally, the identity of a test image is determined from the classification results of its patches. Experimental results on the Yale B database and the CMU PIE database show that excellent recognition rates can be achieved by the proposed method.
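
    The illumination-normalisation step, a DCT in the logarithm domain with the lowest-frequency coefficients discarded, can be sketched with SciPy as below; the number of discarded coefficients and the choice to keep the DC term are illustrative assumptions rather than the paper's tuned values.

        import numpy as np
        from scipy.fft import dctn, idctn

        def normalize_illumination(img, n_discard=20):
            # Log-domain DCT normalisation: slowly varying illumination lives in the
            # low-frequency coefficients, so zero them out (keeping the DC term) and invert.
            log_img = np.log1p(img.astype(np.float64))
            C = dctn(log_img, norm="ortho")
            for u in range(C.shape[0]):
                for v in range(C.shape[1]):
                    if 0 < u + v <= n_discard:
                        C[u, v] = 0.0
            return idctn(C, norm="ortho")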

  2. The Representation of Cultural Heritage from Traditional Drawing to 3d Survey: the Case Study of Casamary's Abbey

    NASA Astrophysics Data System (ADS)

    Canciani, M.; Saccone, M.

    2016-06-01

    In 3D survey the aspects most discussed in the scientific community are those related to the acquisition of data from integrated survey (laser scanner, photogrammetric, topographic and traditional direct survey), rather than those relating to the interpretation of the data. Yet in traditional methods of representation, the interpretation of the data, such as that of the philological reconstruction, constitutes the most important aspect. It is therefore essential in modern systems of survey and representation to filter the information acquired. In the system based on integrated survey that we have adopted, the 3D object, characterized as a cloud of georeferenced points defined by their color values, forms the core of the elaboration. It allows targeted analyses to be carried out, using section planes as a tool for selecting and filtering data, comparable with those of traditional drawings. In the case study of the Abbey of Casamari (Veroli), one of the most important Cistercian settlements in Italy, the survey was made under an agreement between the Ministry of Cultural Heritage and Activities and Tourism (MiBACT) and the University of Roma Tre, within the project "Assessment of the seismic safety of the state museum". The reference 3D model, consisting of the superposition of geo-referenced data from the various surveys, is the tool with which to develop representative models comparable to traditional ones. It provides the necessary spatial environment for drawing up plans and sections with a definition sufficient to develop thematic analyses related to phases of construction, state of deterioration and structural features.

  3. Representation and visualization of variability in a 3D anatomical atlas using the kidney as an example

    NASA Astrophysics Data System (ADS)

    Hacker, Silke; Handels, Heinz

    2006-03-01

    Computer-based 3D atlases allow an interactive exploration of the human body. However, in most cases such 3D atlases are derived from one single individual, and therefore do not regard the variability of anatomical structures concerning their shape and size. Since the geometric variability across humans plays an important role in many medical applications, our goal is to develop a framework of an anatomical atlas for representation and visualization of the variability of selected anatomical structures. The basis of the project presented is the VOXEL-MAN atlas of inner organs that was created from the Visible Human data set. For modeling anatomical shapes and their variability we utilize "m-reps" which allow a compact representation of anatomical objects on the basis of their skeletons. As an example we used a statistical model of the kidney that is based on 48 different variants. With the integration of a shape description into the VOXEL-MAN atlas it is now possible to query and visualize different shape variations of an organ, e.g. by specifying a person's age or gender. In addition to the representation of individual shape variants, the average shape of a population can be displayed. Besides a surface representation, a volume-based representation of the kidney's shape variants is also possible. It results from the deformation of the reference kidney of the volume-based model using the m-rep shape description. In this way a realistic visualization of the shape variants becomes possible, as well as the visualization of the organ's internal structures.

  4. Dynamic shape modeling of the mitral valve from real-time 3D ultrasound images using continuous medial representation

    NASA Astrophysics Data System (ADS)

    Pouch, Alison M.; Yushkevich, Paul A.; Jackson, Benjamin M.; Gorman, Joseph H., III; Gorman, Robert C.; Sehgal, Chandra M.

    2012-03-01

    Purpose: Patient-specific shape analysis of the mitral valve from real-time 3D ultrasound (rt-3DUS) has broad application to the assessment and surgical treatment of mitral valve disease. Our goal is to demonstrate that continuous medial representation (cm-rep) is an accurate valve shape representation that can be used for statistical shape modeling over the cardiac cycle from rt-3DUS images. Methods: Transesophageal rt-3DUS data acquired from 15 subjects with a range of mitral valve pathology were analyzed. User-initialized segmentation with level sets and symmetric diffeomorphic normalization delineated the mitral leaflets at each time point in the rt-3DUS data series. A deformable cm-rep was fitted to each segmented image of the mitral leaflets in the time series, producing a 4D parametric representation of valve shape in a single cardiac cycle. Model fitting accuracy was evaluated by the Dice overlap, and shape interpolation and principal component analysis (PCA) of 4D valve shape were performed. Results: Of the 289 3D images analyzed, the average Dice overlap between each fitted cm-rep and its target segmentation was 0.880 ± 0.018 (max = 0.912, min = 0.819). The results of PCA represented variability in valve morphology and localized leaflet thickness across subjects. Conclusion: Deformable medial modeling accurately captures valve geometry in rt-3DUS images over the entire cardiac cycle and enables statistical shape analysis of the mitral valve.

  5. Average Cross-Sectional Area of DebriSat Fragments Using Volumetrically Constructed 3D Representations

    NASA Technical Reports Server (NTRS)

    Scruggs, T.; Moraguez, M.; Patankar, K.; Fitz-Coy, N.; Liou, J.-C.; Sorge, M.; Huynh, T.

    2016-01-01

    Debris fragments from the hypervelocity impact testing of DebriSat are being collected and characterized for use in updating existing satellite breakup models. One of the key parameters utilized in these models is the ballistic coefficient of the fragment which is directly related to its area-to-mass ratio. However, since the attitude of fragments varies during their orbital lifetime, it is customary to use the average cross-sectional area in the calculation of the area-to-mass ratio. The average cross-sectional area is defined as the average of the projected surface areas perpendicular to the direction of motion and has been shown to be equal to one-fourth of the total surface area of a convex object. Unfortunately, numerous fragments obtained from the DebriSat experiment show significant concavity (i.e., shadowing) and thus we have explored alternate methods for computing the average cross-sectional area of the fragments. An imaging system based on the volumetric reconstruction of a 3D object from multiple 2D photographs of the object was developed for use in determining the size characteristic (i.e., characteristics length) of the DebriSat fragments. For each fragment, the imaging system generates N number of images from varied azimuth and elevation angles and processes them using a space-carving algorithm to construct a 3D point cloud of the fragment. This paper describes two approaches for calculating the average cross-sectional area of debris fragments based on the 3D imager. Approach A utilizes the constructed 3D object to generate equally distributed cross-sectional area projections and then averages them to determine the average cross-sectional area. Approach B utilizes a weighted average of the area of the 2D photographs to directly compute the average cross-sectional area. A comparison of the accuracy and computational needs of each approach is described as well as preliminary results of an analysis to determine the "optimal" number of images needed for
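
    A simple way to approximate the projection-averaging idea with open-source tools is to project a reconstructed triangle mesh along many random directions and average the silhouette areas, estimating each silhouette by Monte Carlo point-in-triangle tests. This is a generic sketch, not the DebriSat imaging system's actual code; the sample and direction counts are illustrative assumptions.

        import numpy as np

        def _plane_basis(direction):
            # Two orthonormal vectors spanning the plane perpendicular to the view direction.
            d = direction / np.linalg.norm(direction)
            a = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
            u = np.cross(d, a); u /= np.linalg.norm(u)
            return u, np.cross(d, u)

        def projected_area(vertices, faces, direction, n_samples=20000, rng=None):
            # Silhouette area along one direction: sample points in the bounding box of the
            # projected mesh and count those covered by at least one projected triangle.
            rng = np.random.default_rng() if rng is None else rng
            u, v = _plane_basis(direction)
            pts = np.column_stack([vertices @ u, vertices @ v])
            lo, hi = pts.min(axis=0), pts.max(axis=0)
            samples = lo + rng.random((n_samples, 2)) * (hi - lo)
            covered = np.zeros(n_samples, dtype=bool)
            for tri in faces:
                a, b, c = pts[tri]
                d0, d1 = b - a, c - a
                den = d0[0] * d1[1] - d1[0] * d0[1]
                if abs(den) < 1e-12:
                    continue                                  # degenerate (edge-on) triangle
                d2 = samples - a
                s = (d2[:, 0] * d1[1] - d1[0] * d2[:, 1]) / den
                t = (d0[0] * d2[:, 1] - d2[:, 0] * d0[1]) / den
                covered |= (s >= 0) & (t >= 0) & (s + t <= 1)
            return covered.mean() * np.prod(hi - lo)

        def average_cross_section(vertices, faces, n_dirs=100, seed=0):
            # Average the silhouette area over (approximately) uniform random directions.
            rng = np.random.default_rng(seed)
            dirs = rng.normal(size=(n_dirs, 3))
            return float(np.mean([projected_area(vertices, faces, d, rng=rng) for d in dirs]))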

  6. Is principal component analysis an effective tool to predict face attractiveness? A contribution based on real 3D faces of highly selected attractive women, scanned with stereophotogrammetry.

    PubMed

    Galantucci, Luigi Maria; Di Gioia, Eliana; Lavecchia, Fulvio; Percoco, Gianluca

    2014-05-01

    In the literature, several papers report studies on mathematical models used to describe facial features and to predict female facial beauty based on 3D human face data. Many authors have proposed the principal component analysis (PCA) method, which permits modeling of the entire human face using a limited number of parameters. In some cases, these models have been correlated with beauty classifications, obtaining good attractiveness predictability using wrapped 2D or 3D models. To verify these results, in this paper the authors conducted a three-dimensional digitization study of 66 very attractive female subjects using a computerized noninvasive tool known as 3D digital photogrammetry. The sample consisted of the 64 contestants of the final phase of the Miss Italy 2010 beauty contest, plus the two highest-ranked contestants in the 2009 competition. PCA was conducted on this sample of real faces to verify whether there is a correlation between ranking and the principal components of the face models. There was no correlation, and therefore this hypothesis is not confirmed for our sample. Considering that the results of the contest are not solely a function of facial attractiveness, but are undoubtedly significantly impacted by it, the authors conclude, based on their experience and on real faces, that PCA is not a valid prediction tool for attractiveness. The database of the features belonging to the analyzed sample is downloadable online, and further contributions are welcome.
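
    A sketch of the kind of check performed, assuming the scanned faces have been put into dense point correspondence and flattened into one row per subject; the component count and the use of Spearman rank correlation are assumptions for illustration, not the authors' exact analysis.

        import numpy as np
        from scipy.stats import spearmanr

        def pca_scores(faces, n_components=10):
            # faces: (n_subjects, 3 * n_points) matrix of corresponding 3D surface points.
            X = faces - faces.mean(axis=0)
            _, _, Vt = np.linalg.svd(X, full_matrices=False)
            return X @ Vt[:n_components].T        # per-subject principal component scores

        def rank_correlations(faces, ranking, n_components=10):
            # Spearman correlation between contest rank and each principal component score;
            # values near zero would indicate no monotone relation, as reported above.
            scores = pca_scores(faces, n_components)
            out = []
            for k in range(n_components):
                rho, _ = spearmanr(ranking, scores[:, k])
                out.append(rho)
            return out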

  7. A new-generation 3D ozone FACE (Free Air Controlled Exposure).

    PubMed

    Paoletti, Elena; Materassi, Alessandro; Fasano, Gianni; Hoshika, Yasutomo; Carriero, Giulia; Silaghi, Diana; Badea, Ovidiu

    2017-01-01

    To artificially simulate the impacts of ground-level ozone (O3) on vegetation, ozone FACE (Free Air Controlled Exposure) systems are increasingly recommended. We describe here a new-generation, three-dimensional ozone FACE, with O3 diffusion through laser-generated micro-holes, pre-mixing of air and O3, O3 generator with integral oxygen generator, continuous (day/night) exposure and full replication. Based on three O3 levels and assumptions on the pre-industrial O3 levels, we describe principles to calculate relative yield/biomass and estimate impacts even at lower-than-ambient O3 levels. The case study is called FO3X, and is at present the only ozone FACE in Mediterranean climate and one of the very few ozone FACEs investigating more than one stressor at a time. The results presented here will give further impulse to the research on O3 impacts on vegetation all over the world.

  8. Research on the Dynamic Problems of 3D Cross Coupling Quantum Harmonic Oscillator by Virtue of Intermediate Representation $|x\rangle_{\lambda,\nu}$

    NASA Astrophysics Data System (ADS)

    Xu, Shi-Min; Xu, Xing-Lei; Li, Hong-Qi

    2008-06-01

    The intermediate representation (namely, the intermediate coordinate-momentum representation) $|x\rangle_{\lambda,\nu}$ is introduced and employed to derive the expression of the operator $\tau\hat{p}+\sigma\hat{x}$ in the intermediate representation $|x\rangle_{\lambda,\nu}$. The Hamiltonian $\hat{H}$ of the 3D cross-coupling quantum harmonic oscillator is diagonalized by virtue of quadratic form theory, and the quantities λ, ν, τ and σ are determined. The dynamic problems of the 3D cross-coupling quantum harmonic oscillator are then studied in the intermediate representation, and the energy eigenvalues and eigen-wavefunctions of the oscillator are obtained in this representation. The importance of the intermediate representation is discussed. The results show that the Radon transformation of the Wigner operator is just the projection operator $|x\rangle_{\lambda,\nu}\,{}_{\lambda,\nu}\langle x|$, and the Radon transformation of the Wigner function is just a marginal distribution.

  9. A 3D sequence-independent representation of the protein data bank.

    PubMed

    Fischer, D; Tsai, C J; Nussinov, R; Wolfson, H

    1995-10-01

    Here we address the following questions. How many structurally different entries are there in the Protein Data Bank (PDB)? How do the proteins populate the structural universe? To investigate these questions a structurally non-redundant set of representative entries was selected from the PDB. Construction of such a dataset is not trivial: (i) the considerable size of the PDB requires a large number of comparisons (there were more than 3250 structures of protein chains available in May 1994); (ii) the PDB is highly redundant, containing many structurally similar entries, not necessarily with significant sequence homology; and (iii) there is no clear-cut definition of structural similarity, which depends on the criteria and methods used. Here, we analyze structural similarity ignoring protein topology. To date, representative sets have been selected either by hand, by sequence comparison techniques which ignore the three-dimensional (3D) structures of the proteins, or by using sequence comparisons followed by linear structural comparison (i.e. the topology, or the sequential order of the chains, is enforced in the structural comparison). Here we describe a 3D sequence-independent, automated and efficient method to obtain a representative set of protein molecules from the PDB which contains all unique structures and which is structurally non-redundant. The method has two novel features. The first is the use of strictly structural criteria in the selection process, without taking into account the sequence information. To this end we employ a fast structural comparison algorithm which requires on average approximately 2 s per pairwise comparison on a workstation. The second novel feature is the iterative application of a heuristic clustering algorithm that greatly reduces the number of comparisons required. We obtain a representative set of 220 chains with resolution better than 3.0 Å, or 268 chains including lower-resolution entries, NMR entries and models. The

  10. Status of the phenomena representation, 3D modeling, and cloud-based software architecture development

    SciTech Connect

    Smith, Curtis L.; Prescott, Steven; Kvarfordt, Kellie; Sampath, Ram; Larson, Katie

    2015-09-01

    Early in 2013, researchers at the Idaho National Laboratory outlined a technical framework to support the implementation of state-of-the-art probabilistic risk assessment to predict the safety performance of advanced small modular reactors. From that vision of the advanced framework for risk analysis, specific tasks have been underway in order to implement the framework. This report discusses the current development of several tasks related to the framework implementation, including a 3D physics engine that represents the motion of objects (including collision and debris modeling), cloud-based analysis tools such as a Bayesian-inference engine, and scenario simulations. These tasks were performed during 2015 as part of the technical work associated with the Advanced Reactor Technologies Program.

  11. Cognitive/emotional models for human behavior representation in 3D avatar simulations

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    Simplified models of human cognition and emotional response are presented which are based on models of auditory/ visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python etc) and efficient use of legacy code.

  12. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    NASA Astrophysics Data System (ADS)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for obtaining 3D images of the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is used also for geothermal exploration, inspections of nuclear waste sites and scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D data from other geophysical methods. 3D seismic data can be displayed in different ways to give a spatial impression of the subsurface: as a combination of individual vertical cuts, possibly linked to a cubical portion of the data volume, and as a stereoscopic view of the seismic data. By these methods, the spatial perception of the structures, and thus of the processes in the subsurface, should be increased. Stereoscopic techniques are implemented, e.g., in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is the stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • fast rendering of large amounts of data, so that a continuous view of the data when changing the viewing angle and the data section is possible; • defining areas in the stereoscopic view to translate the spatial impression directly into an interpretation; • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom; • the possibility of collaboration, i.e. teamwork and idea exchange with simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow. Rather they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to

  13. High Resolution Ultrasonic Method for 3D Fingerprint Representation in Biometrics

    NASA Astrophysics Data System (ADS)

    Maev, R. Gr.; Bakulin, E. Y.; Maeva, E. Y.; Severin, F. M.

    Biometrics is an important field which studies different possible ways of personal identification. Among a number of existing biometric techniques, fingerprint recognition stands alone, because a very large database of fingerprints has already been acquired. Also, fingerprints are important evidence that can be collected at a crime scene. Therefore, of all automated biometric techniques, especially in the field of law enforcement, fingerprint identification seems to be the most promising. The ultrasonic method of fingerprint imaging was originally introduced over a decade ago as the mapping of the reflection coefficient at the interface between the finger and a covering plate, and has shown very good reliability and freedom from the imperfections of the previous two methods. This work introduces a newer development of ultrasonic fingerprint imaging, focusing on the imaging of the internal structures of fingerprints (including sweat pores) with a raw acoustic resolution of about 500 dpi (0.05 mm), using a scanning acoustic microscope to obtain images and acoustic data in the form of a 3D data array. C-scans from different depths inside the fingerprint area of the fingers of several volunteers were obtained and showed good contrast of ridges-and-valleys patterns and practically exact correspondence to the standard ink-and-paper prints of the same areas. An important feature revealed in the acoustic images was the clear appearance of the sweat pores, which could provide additional means of identification.

  14. Numerical simulation of turbulent heat transfer past a backward-facing step: 2D/3D RANS versus IDDES solutions

    NASA Astrophysics Data System (ADS)

    Smirnov, E. M.; Smirnovsky, A. A.; Schur, N. A.; Zaitsev, D. K.; Smirnov, P. E.

    2016-09-01

    The contribution covers results of a numerical study of air flow and heat transfer past a backward-facing step at a Reynolds number of 28,000. The numerical simulation was carried out under the conditions of the experiments of Vogel & Eaton (1985), where nominally 2D fluid dynamics and heat transfer in a channel with an expansion ratio of 1.25 were investigated. Two approaches were used for turbulence modelling. First, the Menter SST turbulence model was used to perform refined 2D and 3D RANS steady-state computations. The 3D analysis was undertaken to evaluate the effects of boundary layers developing on the sidewalls of the experimental channel. Then, 3D time-dependent computations were carried out using the vortex-resolving IDDES method and applying spanwise-periodicity conditions. Comparative computations were performed using an in-house finite-volume code, SINF/Flag-S, and ANSYS Fluent. The codes produced practically identical RANS solutions, showing in particular a difference of 4% in the centerline peak Stanton number calculated in the 2D and 3D cases. The IDDES results obtained with the two codes are in satisfactory agreement. Compared with the experimental data, the IDDES produces the best agreement for the wall friction, whereas the RANS solutions show superiority in predictions of the local Stanton number distribution.

  15. 3D face recognition system using cylindrical hidden-layer neural network: spatial domain and its eigenspace domain

    NASA Astrophysics Data System (ADS)

    Kusumoputro, Benyamin; Pangabean, Martha Y.; Rachman, Leila F.

    2001-09-01

    In this paper, a 3D face recognition system is developed using a modified neural network. This modified neural network is constructed by substituting each neuron in the hidden layer of a conventional multilayer perceptron with a circular structure of neurons. The system is then called a cylindrical-structure hidden-layer neural network (CHL-NN). It is applied to a real 3D face image database consisting of 5 Indonesian persons. The images are taken under four different expressions: neutral, smile, laugh and free expression. The 2D images are taken from the human faces by gradually changing the viewpoint, successively varying the camera position from -90 to +90 degrees at intervals of 15 degrees. The experimental results show that an average recognition rate of 60% could be achieved when the images were used in the spatial domain. The system is then improved by transforming the images from the spatial domain into an eigenspace domain. The Karhunen-Loeve transformation is used, and each image in the spatial domain is represented as a point in the eigenspace domain. The Fisherface method is then utilized for feature extraction in the eigenspace domain, and using the same database and experimental procedure, the recognition rate of the system increases to 84% on average.

  16. Possible use of small UAV to create high resolution 3D model of vertical rock faces

    NASA Astrophysics Data System (ADS)

    Mészáros, János; Kerkovits, Krisztian

    2014-05-01

    One of the newest and most rapidly emerging acquisition technologies is the use of small unmanned aerial vehicles (UAVs) for photogrammetry and remote sensing. Several successful research projects and industrial uses can be found worldwide (mine investigation, precision agriculture, mapping etc.), but those surveys focus mainly on horizontal areas. In our research a mixed acquisition method was developed and tested to create a dense 3D model of a columnar outcrop close to Kő-hegy (Pest County). Our primary goal was to create a model in which the pattern of the different layers is clearly visible and measurable, as well as to test the robustness of our idea. Our method uses a consumer-grade camera to take digital photographs of the outcrop. A small, custom-made tricopter was built to carry the camera above the middle and top parts of the rock; the bottom part can be photographed only from several ground positions. During the field survey, ground control points were installed and measured using GPS with kinematic correction. These latter data were used for georeferencing the generated point cloud. Free online services built on Structure from Motion (SfM) algorithms and desktop software were also tested to generate the relative point cloud and for further processing and analysis.

  17. Exact asymptotic statistics of the n-edged face in a 3D Poisson-Voronoi tessellation

    NASA Astrophysics Data System (ADS)

    Hilhorst, H. J.

    2016-05-01

    This work considers the 3D Poisson-Voronoi tessellation. It investigates the joint probability distribution $\pi_n(L)$ for an arbitrarily selected cell face to be n-edged and for the distance between the seeds of the two adjacent cells to be equal to $2L$. For this quantity an exact expression is derived, valid in the limit $n \to \infty$ with $n^{1/6}L$ fixed. The leading order correction term is determined. Good agreement with earlier Monte Carlo data is obtained. The cell face is shown to be surrounded by a three-dimensional domain that is empty of seeds and is the union of $n$ balls; it is pumpkin-shaped and analogous to the flower of the 2D Voronoi cell. For $n \to \infty$ this domain tends towards a torus of equal major and minor radii. The radii scale as $n^{1/3}$, in agreement with earlier heuristic work. A detailed understanding is achieved of several other statistical properties of the n-edged cell face.

  18. 3D-front-face fluorescence spectroscopy and independent components analysis: A new way to monitor bread dough development.

    PubMed

    Garcia, Rebeca; Boussard, Aline; Rakotozafy, Lalatiana; Nicolas, Jacques; Potus, Jacques; Rutledge, Douglas N; Cordella, Christophe B Y

    2016-01-15

    Following bread dough development can be a hard task as no reliable method exists to give the optimal mixing time. Dough development is linked to the evolution of gluten proteins, carbohydrates and lipids which can result in modifications in the spectral properties of the various fluorophores naturally present in the system. In this paper, we propose to use 3-D-front-face-fluorescence (3D-FFF) spectroscopy in the 250-550 nm domain to follow the dough development as influenced by formulation (addition or not of glucose, glucose oxidase and ferulic acid in the dough recipe) and mixing time (2, 4, 6 and 8 min). In all the 32 dough samples as well as in flour, three regions of maximum fluorescence intensities have been observed at 320 nm after excitation at 295 nm (Region 1), at 420 nm after excitation at 360 nm (Region 2) and 450 nm after excitation at 390 nm (Region 3). The principal components analysis (PCA) of the evolution of these maxima shows that the formulations with and without ferulic acid are clearly separated since the presence of ferulic acid induces a decrease of fluorescence in Region 1 and an increase in Regions 2 and 3. In addition, a kinetic effect of the mixing time can be observed (decrease of fluorescence in the Regions 1 and 2) mainly in the absence of ferulic acid. The analysis of variance (ANOVA) on these maximum values statistically confirms these observations. Independent components analysis (ICA) is also applied to the complete 3-D-FFF spectra in order to extract interpretable signals from spectral data which reflect the complex contribution of several fluorophores as influenced by their environment. In all cases, 3 signals can be clearly separated matching the 3 regions of maximal fluorescence. The signals corresponding to regions 1 and 2 can be ascribed to proteins and ferulic acid respectively, whereas the fluorophores associated with the 3rd signal (corresponding to region 3) remain unidentified. Good correlations are obtained between the IC

  19. Evaluating the Effectiveness of Organic Chemistry Textbooks in Promoting Representational Fluency and Understanding of 2D-3D Diagrammatic Relationships

    ERIC Educational Resources Information Center

    Kumi, Bryna C.; Olimpo, Jeffrey T.; Bartlett, Felicia; Dixon, Bonnie L.

    2013-01-01

    The use of two-dimensional (2D) representations to communicate and reason about micromolecular phenomena is common practice in chemistry. While experts are adept at using such representations, research suggests that novices often exhibit great difficulty in understanding, manipulating, and translating between various representational forms. When…

  20. Comparison of 3D representations depicting micro folds: overlapping imagery vs. time-of-flight laser scanner

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, Aristidis D.; Georgopoulos, Andreas; Lozios, Stylianos G.

    2012-10-01

    A relatively new field of interest, which continues to gain ground, is digital 3D modeling. However, the methodologies, the accuracy and the time and effort required to produce a high quality 3D model have been changing drastically over the last few years. Whereas in the early days of digital 3D modeling, 3D models were only accessible to computer experts in animation, working many hours in expensive sophisticated software, today 3D modeling has become reasonably fast and convenient. On top of that, with online 3D modeling software, such as 123D Catch, nearly everyone can produce 3D models with minimum effort and at no cost. The only requirement is a set of overlapping panoramic images of the (still) objects the user wishes to model. This approach, however, has limitations in the accuracy of the model. An objective of the study is to examine these limitations by assessing the accuracy of this 3D modeling methodology against a Terrestrial Laser Scanner (TLS). Therefore, the scope of this study is to present and compare 3D models produced with two different methods: 1) the traditional TLS method with the Leica ScanStation 2 instrument and 2) overlapping panoramic images obtained with a DSLR camera and processed with the free 123D Catch software. The main objective of the study is to evaluate the advantages and disadvantages of the two 3D model producing methodologies. The area represented with the 3D models features multi-scale folding in a cipollino marble formation. The most interesting part, and the most challenging to capture accurately, is an outcrop which includes vertically oriented micro folds. These micro folds have dimensions of a few centimeters while a relatively strong relief is evident between them (perhaps due to different material composition). The area of interest is located on Mt. Hymittos, Greece.

  1. The Role of Familiarity for Representations in Norm-Based Face Space

    PubMed Central

    Faerber, Stella J.; Kaufmann, Jürgen M.; Leder, Helmut; Martin, Eva Maria; Schweinberger, Stefan R.

    2016-01-01

    According to the norm-based version of the multidimensional face space model (nMDFS, Valentine, 1991), any given face and its corresponding anti-face (which deviates from the norm in exactly the opposite direction to the original face) should be equidistant to a hypothetical prototype face (norm), such that by definition face and anti-face should bear the same level of perceived typicality. However, it has been argued that familiarity affects perceived typicality and that representations of familiar faces are qualitatively different (e.g., more robust and image-independent) from those for unfamiliar faces. Here we investigated the role of face familiarity for rated typicality, using two frequently used operationalisations: typicality (deviation-based: DEV) and distinctiveness (face in the crowd: FITC), for faces of celebrities and their corresponding anti-faces. We further assessed attractiveness, likeability and trustworthiness ratings of the stimuli, which are potentially related to typicality. For unfamiliar faces and their corresponding anti-faces, in line with the predictions of the nMDFS, our results demonstrate comparable levels of perceived typicality (DEV). In contrast, familiar faces were perceived as much less typical than their anti-faces. Furthermore, familiar faces were rated higher than their anti-faces in distinctiveness, attractiveness, likeability and trustworthiness. These findings suggest that familiarity strongly affects the distribution of facial representations in norm-based face space. Overall, our study suggests (1) that familiarity needs to be considered in studies of mental representations of faces, and (2) that familiarity, general distance-to-norm and more specific vector directions in face space make different and interactive contributions to different types of facial evaluations. PMID:27168323

  2. Children's Face Identity Representations Are No More View Specific than Those of Adults

    ERIC Educational Resources Information Center

    Jeffery, Linda; Rathbone, Cameron; Read, Ainsley; Rhodes, Gillian

    2013-01-01

    Face recognition performance improves during childhood, not reaching adult levels until late adolescence, yet the source of this improvement is unclear. Recognition of faces across changes in viewpoint appears particularly slow to develop. Poor cross-view recognition suggests that children's face representations may be more view specific than…

  3. Social and emotional attachment in the neural representation of faces.

    PubMed

    Gobbini, M Ida; Leibenluft, Ellen; Santiago, Neil; Haxby, James V

    2004-08-01

    To dissociate the role of visual familiarity from the role of social and emotional factors in recognizing familiar individuals, we measured neural activity using functional magnetic resonance imaging (fMRI) while subjects viewed (1) faces of personally familiar individuals (i.e. friends and family), (2) faces of famous individuals, and (3) faces of strangers. Personally familiar faces evoked a stronger response than did famous familiar faces and unfamiliar faces in areas that have been associated with 'theory of mind', and a weaker response in the amygdala. These response modulations may reflect the spontaneous activation of social knowledge about the personality and attitudes of close friends and relatives and the less guarded attitude one has around these people. These results suggest that familiarity causes changes in neural response that extend beyond a visual memory for a face.

  4. Graphics to H.264 video encoding for 3D scene representation and interaction on mobile devices using region of interest

    NASA Astrophysics Data System (ADS)

    Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang

    2007-12-01

    In this paper, we propose a 3D-graphics-to-video encoding and streaming method that is embedded into a remote interactive 3D visualization system for rapidly representing a 3D scene on mobile devices without having to download it from the server. In particular, a 3D-graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) in the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system allows users to navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation of network bandwidth. Results show that with ROI mode selection the PSNR of the test samples changes only slightly, while the visual quality of the objects of interest increases markedly.
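
    As a rough illustration of the bit-allocation idea described above (not the paper's actual encoder integration), the sketch below maps a projected ROI mask to per-macroblock quantization parameters, lowering the QP (and thus spending more bits) on macroblocks that overlap the ROI; the mask, threshold and QP values are hypothetical.

    ```python
    import numpy as np

    def roi_qp_map(roi_mask, base_qp=30, roi_qp_offset=-6, mb_size=16):
        """Assign a quantization parameter (QP) to each 16x16 macroblock.

        Macroblocks that substantially overlap the projected region of interest
        get a lower QP (finer quantization, more bits); background macroblocks
        keep the base QP. This only illustrates the bit-allocation idea, not an
        actual H.264 encoder interface.
        """
        h, w = roi_mask.shape
        mb_rows, mb_cols = h // mb_size, w // mb_size
        qp = np.full((mb_rows, mb_cols), base_qp, dtype=int)
        for r in range(mb_rows):
            for c in range(mb_cols):
                block = roi_mask[r*mb_size:(r+1)*mb_size, c*mb_size:(c+1)*mb_size]
                if block.mean() > 0.25:          # macroblock substantially inside the ROI
                    qp[r, c] = base_qp + roi_qp_offset
        return np.clip(qp, 0, 51)                # H.264 QP range is 0..51

    # Example: ROI obtained by projecting a 3D object of interest onto the image plane
    mask = np.zeros((144, 176))                  # QCIF-sized frame (hypothetical)
    mask[40:100, 60:120] = 1.0                   # hypothetical projected object footprint
    print(roi_qp_map(mask))
    ```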

  5. Segmentation of Textures Defined on Flat vs. Layered Surfaces using Neural Networks: Comparison of 2D vs. 3D Representations.

    PubMed

    Oh, Sejong; Choe, Yoonsuck

    2007-08-01

    Texture boundary detection (or segmentation) is an important capability in human vision. Usually, texture segmentation is viewed as a 2D problem, as the definition of the problem itself assumes a 2D substrate. However, an interesting hypothesis emerges when we ask a question regarding the nature of textures: What are textures, and why did the ability to discriminate texture evolve or develop? A possible answer to this question is that textures naturally define physically distinct (i.e., occluded) surfaces. Hence, we can hypothesize that 2D texture segmentation may be an outgrowth of the ability to discriminate surfaces in 3D. In this paper, we conducted computational experiments with artificial neural networks to investigate the relative difficulty of learning to segment textures defined on flat 2D surfaces vs. those in 3D configurations where the boundaries are defined by occluding surfaces and their change over time due to the observer's motion. It turns out that learning is faster and more accurate in 3D, very much in line with our expectation. Furthermore, our results showed that the neural network's learned ability to segment texture in 3D transfers well into 2D texture segmentation, bolstering our initial hypothesis, and providing insights on the possible developmental origin of 2D texture segmentation function in human vision.

  6. Face sketch synthesis via sparse representation-based greedy search.

    PubMed

    Shengchuan Zhang; Xinbo Gao; Nannan Wang; Jie Li; Mingjin Zhang

    2015-08-01

    Face sketch synthesis has wide applications in digital entertainment and law enforcement. Although there is much research on face sketch synthesis, most existing algorithms cannot handle some nonfacial factors, such as hair style, hairpins, and glasses, if these factors are absent from the training set. In addition, previous methods work only under well-controlled conditions and fail on images whose backgrounds and sizes differ from those of the training set. To this end, this paper presents a novel method that combines both the similarity between different image patches and prior knowledge to synthesize face sketches. Given training photo-sketch pairs, the proposed method learns a photo patch feature dictionary from the training photo patches and replaces the photo patches with their sparse coefficients during the search process. For a test photo patch, we first obtain its sparse coefficient via the learnt dictionary and then search for its nearest neighbors (candidate patches) among all training photo patches using their sparse coefficients. After purifying the nearest neighbors with prior knowledge, the final sketch corresponding to the test photo can be obtained by Bayesian inference. The contributions of this paper are as follows: 1) we extend the nearest neighbor search area from a local region to the whole image without excessive computational cost and 2) our method can produce nonfacial factors that are not contained in the training set, is robust against image backgrounds, and can even disregard the alignment and image size of test photos. Our experimental results show that the proposed method outperforms several state-of-the-art methods in terms of perceptual and objective metrics.
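
    A minimal sketch of the patch-dictionary step described above, using off-the-shelf scikit-learn components as stand-ins for the authors' pipeline: training photo patches are sparse-coded over a learnt dictionary, and a test patch's nearest neighbours are searched in coefficient space. Patch sizes, dictionary size and the random data are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.neighbors import NearestNeighbors

    # Hypothetical patch data: rows are vectorized training photo patches.
    rng = np.random.default_rng(0)
    train_patches = rng.random((2000, 100))     # 2000 patches of 10x10 pixels (illustrative)

    # Learn a photo-patch feature dictionary and replace patches by their sparse codes.
    dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
    train_codes = dico.fit_transform(train_patches)

    # For a test patch, compute its sparse code over the same dictionary and search
    # for its nearest neighbours among all training patches in coefficient space;
    # the matched patches' sketch counterparts would then feed the Bayesian inference step.
    test_patch = rng.random((1, 100))
    test_code = dico.transform(test_patch)
    nn = NearestNeighbors(n_neighbors=10).fit(train_codes)
    _, candidate_idx = nn.kneighbors(test_code)
    ```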

  7. The red face: art, history and medical representations.

    PubMed

    Cribier, B

    2011-11-01

    For millennia, a red face has been a handicap in social relations, mainly because of the associated bias against alcoholics. The color red is also the color of emotion, betraying the person who blushes. Since the color red is one of the main characteristics of rosacea, it contributes to the bad reputation this disorder has, which is therefore the subject of a pressing therapeutic demand, principally in women. Nineteenth-century French novelists such as Balzac and, later, Proust admirably described blotchy, red, or sanguine faces, which always announced a difficult, violent temperament or were simply the mark of the laboring class. The color red remains ambivalent today, on the one hand denoting blood and life and on the other suffering, shame, and death. The history of dermatology shows that the semiology of rosacea was very well described in the earliest reports, notably those written in the Middle Ages. The term "acne rosacea" appeared in the writings of Bateman, who made it a clinical form of acne. This confusion lasted throughout the nineteenth century. It was not until Hebra in Austria and Darier in France that the differential diagnosis was clearly made between acne and rosacea. The term "couperosis" previously referred to the entire range of the disease, particularly the papules and pustules, and it was not until the twentieth century that the current meaning of rosacea progressively gained ground: this term today designates facial telangiectasia, whether or not it is associated with a characteristic redness. Rosacea is a conspicuous disease, since the lesions involve the central portion of the face. Among the many manifestations of rosacea, redness is the most characteristic [1].

  8. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    PubMed

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations.
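
    A minimal sketch of the reverse-correlation (classification-image) computation underlying this kind of approach, with synthetic data standing in for the noise stimuli and the BOLD or behavioral responses; the simple weighting scheme is a generic choice, not the authors' exact estimator.

    ```python
    import numpy as np

    def classification_image(noise_fields, responses):
        """Compute a classification image by reverse correlation.

        noise_fields : (n_trials, height, width) array of white-noise stimuli
        responses    : (n_trials,) behavioral or BOLD amplitudes elicited by each field

        Each noise field is weighted by its mean-centered response, so pixels that
        systematically covary with stronger "face" responses show up in the template.
        """
        w = responses - responses.mean()
        w = w / (np.abs(w).sum() + 1e-12)        # normalize the weights
        return np.tensordot(w, noise_fields, axes=1)

    # Toy example with synthetic data
    rng = np.random.default_rng(0)
    fields = rng.standard_normal((500, 32, 32))
    hidden_template = np.zeros((32, 32)); hidden_template[10:22, 12:20] = 1.0
    resp = (fields * hidden_template).sum(axis=(1, 2)) + rng.standard_normal(500)
    ci = classification_image(fields, resp)      # should resemble hidden_template
    ```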

  9. [The red face: art, history and medical representations].

    PubMed

    Cribier, B

    2011-09-01

    For millennia, a red face has been a handicap in social relations, mainly because of the associated bias against alcoholics. The color red is also the color of emotion, betraying the person who blushes. Since the color red is one of the main characteristics of rosacea, it contributes to the bad reputation this disorder has, which is therefore the subject of a pressing therapeutic demand, principally in women. Nineteenth-century French novelists such as Balzac and, later, Proust admirably described blotchy, red, or sanguine faces, which always announced a difficult, violent temperament or were simply the mark of the laboring class. The color red remains ambivalent today, on the one hand denoting blood and life and on the other suffering, shame, and death. The history of dermatology shows that the semiology of rosacea was very well described in the earliest reports, notably those written in the Middle Ages. The term "acne rosacea" appeared in the writings of Bateman, who made it a clinical form of acne. This confusion lasted throughout the nineteenth century. It was not until Hebra in Austria and Darier in France that the differential diagnosis was clearly made between acne and rosacea. The term "couperosis" previously referred to the entire range of the disease, particularly the papules and pustules, and it was not until the twentieth century that the current meaning of rosacea progressively gained ground: this term today designates facial telangiectasia, whether or not it is associated with a characteristic redness.

  10. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  11. Representation of protein 3D structures in spherical (ρ, ϕ, θ) coordinates and two of its potential applications.

    PubMed

    Reyes, Vicente M

    2011-09-01

    Three-dimensional objects can be represented using Cartesian, spherical or cylindrical coordinate systems, among many others. Currently, all protein 3D structures in the PDB are in Cartesian coordinates. We wanted to explore the possibility that protein 3D structures, especially the globular type (spheroproteins), when represented in spherical coordinates might find useful novel applications. A Fortran program was written to transform protein 3D structure files in Cartesian coordinates (x,y,z) to spherical coordinates (ρ, ϕ, θ), with the centroid of the protein molecule as origin. We present here two applications, namely, (1) separation of the protein outer layer (OL) from the inner core (IC); and (2) identifying protrusions and invaginations on the protein surface. In the first application, ϕ and θ were partitioned into suitable intervals and the point with maximum ρ in each such 'ϕ-θ bin' was determined. A suitable cutoff value for ρ is adopted, and for each ϕ-θ bin, all points with ρ values less than the cutoff are considered part of the IC, and those with ρ values equal to or greater than the cutoff are considered part of the OL. We show that this separation procedure is successful as it gives rise to an OL that is significantly more enriched in hydrophilic amino acid residues, and an IC that is significantly more enriched in hydrophobic amino acid residues, as expected. In the second application, the points with maximum ρ in each ϕ-θ bin are collected and their frequency distribution constructed (i.e., the maximum ρ's are sorted from lowest to highest, grouped into 1.50 Å intervals, and the frequency in each interval plotted). We show in such plots that invaginations on the protein surface give rise to subpeaks or shoulders on the lagging side of the main peak, while protrusions give rise to similar subpeaks or shoulders, but on the leading side of the main peak. We used the dataset of Laskowski et al. (1996) to demonstrate both applications.
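
    A short sketch, in Python rather than the Fortran used in the paper, of the coordinate transformation and the ϕ-θ binning used to separate the outer layer from the inner core; the bin counts and the ρ cutoff rule are illustrative assumptions, not the paper's exact settings.

    ```python
    import numpy as np

    def to_spherical(xyz):
        """Cartesian (x, y, z) -> spherical (rho, phi, theta), centroid as origin."""
        xyz = xyz - xyz.mean(axis=0)
        x, y, z = xyz.T
        rho = np.sqrt(x**2 + y**2 + z**2)
        theta = np.arccos(np.clip(z / np.maximum(rho, 1e-12), -1, 1))  # polar angle, 0..pi
        phi = np.mod(np.arctan2(y, x), 2 * np.pi)                      # azimuth, 0..2*pi
        return rho, phi, theta

    def split_outer_inner(xyz, n_phi=36, n_theta=18, cutoff_fraction=0.75):
        """Assign each atom to the outer layer (OL) or inner core (IC).

        The sphere of directions is partitioned into phi-theta bins; within each bin
        a rho cutoff (here a fixed fraction of the bin maximum, an illustrative choice)
        separates OL from IC atoms.
        """
        rho, phi, theta = to_spherical(np.asarray(xyz, dtype=float))
        p_bin = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
        t_bin = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
        outer = np.zeros(len(rho), dtype=bool)
        for p in range(n_phi):
            for t in range(n_theta):
                idx = np.where((p_bin == p) & (t_bin == t))[0]
                if idx.size:
                    outer[idx] = rho[idx] >= cutoff_fraction * rho[idx].max()
        return outer            # True = outer layer, False = inner core
    ```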

  12. The influence of social comparison on visual representation of one's face.

    PubMed

    Zell, Ethan; Balcetis, Emily

    2012-01-01

    Can the effects of social comparison extend beyond explicit evaluation to visual self-representation--a perceptual stimulus that is objectively verifiable, unambiguous, and frequently updated? We morphed images of participants' faces with attractive and unattractive references. With access to a mirror, participants selected the morphed image they perceived as depicting their face. Participants who engaged in upward comparison with relevant attractive targets selected a less attractive morph compared to participants exposed to control images (Study 1). After downward comparison with relevant unattractive targets compared to control images, participants selected a more attractive morph (Study 2). Biased representations were not the products of cognitive accessibility of beauty constructs; comparisons did not influence representations of strangers' faces (Study 3). We discuss implications for vision, social comparison, and body image.

  13. Representations of faces and body parts in macaque temporal cortex: a functional MRI study.

    PubMed

    Pinsk, Mark A; DeSimone, Kevin; Moore, Tirin; Gross, Charles G; Kastner, Sabine

    2005-05-10

    Human neuroimaging studies suggest that areas in temporal cortex respond preferentially to certain biologically relevant stimulus categories such as faces and bodies. Single-cell studies in monkeys have reported cells in inferior temporal cortex that respond selectively to faces, hands, and bodies but provide little evidence of large clusters of category-specific cells that would form "areas." We probed the category selectivity of macaque temporal cortex for representations of monkey faces and monkey body parts relative to man-made objects using functional MRI in animals trained to fixate. Two face-selective areas were activated bilaterally in the posterior and anterior superior temporal sulcus exhibiting different degrees of category selectivity. The posterior face area was more extensively activated in the right hemisphere than in the left hemisphere. Immediately adjacent to the face areas, regions were activated bilaterally responding preferentially to body parts. Our findings suggest a category-selective organization for faces and body parts in macaque temporal cortex.

  14. Identity-Specific Face Adaptation Effects: Evidence for Abstractive Face Representations

    ERIC Educational Resources Information Center

    Hole, Graham

    2011-01-01

    The effects of selective adaptation on familiar face perception were examined. After prolonged exposure to photographs of a celebrity, participants saw a series of ambiguous morphs that were varying mixtures between the face of that person and a different celebrity. Participants judged fewer of the morphs to resemble the celebrity to which they…

  15. Social categories shape the neural representation of emotion: evidence from a visual face adaptation task.

    PubMed

    Otten, Marte; Banaji, Mahzarin R

    2012-01-01

    A number of recent behavioral studies have shown that emotional expressions are differently perceived depending on the race of a face, and that perception of race cues is influenced by emotional expressions. However, neural processes related to the perception of invariant cues that indicate the identity of a face (such as race) are often described to proceed independently of processes related to the perception of cues that can vary over time (such as emotion). Using a visual face adaptation paradigm, we tested whether these behavioral interactions between emotion and race also reflect interdependent neural representation of emotion and race. We compared visual emotion aftereffects when the adapting face and ambiguous test face differed in race or not. Emotion aftereffects were much smaller in different race (DR) trials than same race (SR) trials, indicating that the neural representation of a facial expression is significantly different depending on whether the emotional face is black or white. It thus seems that invariable cues such as race interact with variable face cues such as emotion not just at a response level, but also at the level of perception and neural representation.

  16. Face recognition by applying wavelet subband representation and kernel associative memory.

    PubMed

    Zhang, Bai-Ling; Zhang, Haihong; Ge, Shuzhi Sam

    2004-01-01

    In this paper, we propose an efficient face recognition scheme which has two features: 1) representation of face images by two-dimensional (2-D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low-resolution "thumbnail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. As there are usually very limited samples, we constructed an associative memory (AM) model for each person and proposed to improve the performance of AM models by kernel methods. Specifically, we first applied kernel transforms to each possible pair of training face samples and then mapped the high-dimensional feature space back to the input space. Our scheme of using modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages have been proven. With associative memory, all the prototypical faces of one particular person are used to reconstruct themselves, and the reconstruction error for a probe face image is used to decide whether the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets, the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided, and our proposed scheme offers better recognition accuracy on all of the face datasets.
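
    A minimal sketch of the two ingredients named above: a 2-D wavelet subband representation (via PyWavelets) and a per-person reconstruction-error decision. A plain linear least-squares reconstruction stands in for the kernel associative memory, so this is only an outline of the idea under stated assumptions, not the published model.

    ```python
    import numpy as np
    import pywt

    def wavelet_subband_features(face, wavelet='db2', level=2):
        """Low-frequency wavelet subband as a compact face representation.

        The approximation coefficients at the chosen level play the role of the
        2-D wavelet subband features; wavelet and level are illustrative choices.
        """
        coeffs = pywt.wavedec2(face, wavelet, level=level)
        return coeffs[0].ravel()        # approximation subband, flattened

    def reconstruction_error(probe_feat, person_feats):
        """Reconstruct a probe from one person's gallery features (least squares).

        A small error suggests the probe belongs to that person; the kernel
        associative memory of the paper is replaced here by a linear stand-in.
        """
        F = np.asarray(person_feats).T                  # features as columns
        w, *_ = np.linalg.lstsq(F, probe_feat, rcond=None)
        return np.linalg.norm(probe_feat - F @ w)

    # Tiny usage example with random stand-in images
    face = np.random.rand(64, 64)
    feat = wavelet_subband_features(face)
    ```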

  17. A Computational Shape-based Model of Anger and Sadness Justifies a Configural Representation of Faces

    PubMed Central

    Neth, Donald; Martinez, Aleix M.

    2010-01-01

    Research suggests that configural cues (second-order relations) play a major role in the representation and classification of face images, making faces a “special” class of objects, since object recognition seems to use different encoding mechanisms. It is less clear, however, how this representation emerges and whether this representation is also used in the recognition of facial expressions of emotion. In this paper, we show how configural cues emerge naturally from a classical analysis of shape in the recognition of anger and sadness. In particular, our results suggest that at least two of the dimensions of the computational (cognitive) space of facial expressions of emotion correspond to pure configural changes. The first of these dimensions measures the distance between the eyebrows and the mouth, while the second is concerned with the height-width ratio of the face. Under this proposed model, becoming a face “expert” would mean moving from the generic shape representation to one based on configural cues. These results suggest that the recognition of facial expressions of emotion shares this expertise property with the other processes of face processing. PMID:20510267

  18. The Anterior Temporal Face Area Contains Invariant Representations of Face Identity That Can Persist Despite the Loss of Right FFA and OFA.

    PubMed

    Yang, Hua; Susilo, Tirta; Duchaine, Bradley

    2016-03-01

    Macaque neurophysiology found image-invariant representations of face identity in a face-selective patch in anterior temporal cortex. A face-selective area in human anterior temporal lobe (fATL) has been reported, but has not been reliably identified, and its function and relationship with posterior face areas is poorly understood. Here, we used fMRI adaptation and neuropsychology to ask whether fATL contains image-invariant representations of face identity, and if so, whether these representations require normal functioning of fusiform face area (FFA) and occipital face area (OFA). We first used a dynamic localizer to demonstrate that 14 of 16 normal subjects exhibit a highly selective right fATL. Next, we found evidence that this area subserves image-invariant representation of identity: Right fATL showed repetition suppression to the same identity across different images, while other areas did not. Finally, to examine fATL's relationship with posterior areas, we used the same procedures with Galen, an acquired prosopagnosic who lost right FFA and OFA. Despite the absence of posterior face areas, Galen's right fATL preserved its face selectivity and showed repetition suppression comparable to that in controls. Our findings suggest that right fATL contains image-invariant face representations that can persist despite the absence of right FFA and OFA, but these representations are not sufficient for normal face recognition.

  19. The appropriateness of the helical axis technique and six available cardan sequences for the representation of 3-d lead leg kinematics during the fencing lunge.

    PubMed

    Sinclair, Jonathan; Taylor, Paul J; Bottoms, Lindsay

    2013-01-01

    Cardan/Euler angles represent the most common technique for the quantification of segmental rotations. Cardan angles are influenced by their ordered sequence, and sensitive to planar cross-talk from the dominant rotation plane, which may affect the angular parameters. The International Society of Biomechanics (ISB) currently recommends a sagittal, coronal, and then transverse (XYZ) ordered sequence, although it has been proposed that when quantifying non-sagittal rotations this may not be the most appropriate technique. This study examined the influence of the helical and six available Cardan sequences on lower extremity three-dimensional (3-D) kinematics of the lead leg during the fencing lunge. Kinematic data were obtained using a 3-D motion capture system as participants completed simulated lunges. Repeated measures ANOVAs were used to compare discrete kinematic parameters, and intraclass correlations were also utilized to determine evidence of planar cross-talk. The results indicate that in all three planes of rotation, peak angle and range of motion angles using the YXZ and ZXY sequences were significantly greater than the other sequences. It was also noted that the utilization of the YXZ and ZXY sequences was associated with the strongest correlations from the sagittal plane, and the XYZ sequence was found habitually to be associated with the lowest correlations. It appears that for accurate representation of 3-D kinematics of the lead leg during the fencing lunge, the XYZ sequence is the most appropriate and as such its continued utilization is encouraged.
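
    The dependence on the ordered sequence can be illustrated directly: the sketch below decomposes one and the same (hypothetical) segment rotation with all six Cardan sequences using SciPy, showing how the extracted planar angles change with the ordering. The mapping of x, y, z to the flexion, ab/adduction and rotation axes is an assumed convention, not taken from the paper.

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    # A hypothetical knee rotation: large flexion with modest ab/adduction and internal rotation,
    # assuming x, y, z align with the flexion, ab/adduction and axial-rotation axes respectively.
    segment_rotation = R.from_euler('xyz', [60.0, 8.0, 12.0], degrees=True)

    # Decompose the same rotation with the six ordered Cardan sequences.
    # Alternative orderings redistribute the motion across planes (planar cross-talk).
    for seq in ['xyz', 'xzy', 'yxz', 'yzx', 'zxy', 'zyx']:
        angles = segment_rotation.as_euler(seq, degrees=True)
        print(f"{seq}: {np.round(angles, 2)}")
    ```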

  20. The Appropriateness of the Helical Axis Technique and Six Available Cardan Sequences for the Representation of 3-D Lead Leg Kinematics During the Fencing Lunge

    PubMed Central

    Sinclair, Jonathan; Taylor, Paul J; Bottoms, Lindsay

    Cardan/Euler angles represent the most common technique for the quantification of segmental rotations. Cardan angles are influenced by their ordered sequence, and sensitive to planar cross-talk from the dominant rotation plane, which may affect the angular parameters. The International Society of Biomechanics (ISB) currently recommends a sagittal, coronal, and then transverse (XYZ) ordered sequence, although it has been proposed that when quantifying non-sagittal rotations this may not be the most appropriate technique. This study examined the influence of the helical and six available Cardan sequences on lower extremity three-dimensional (3-D) kinematics of the lead leg during the fencing lunge. Kinematic data were obtained using a 3-D motion capture system as participants completed simulated lunges. Repeated measures ANOVAs were used to compare discrete kinematic parameters, and intraclass correlations were also utilized to determine evidence of planar cross-talk. The results indicate that in all three planes of rotation, peak angle and range of motion angles using the YXZ and ZXY sequences were significantly greater than the other sequences. It was also noted that the utilization of the YXZ and ZXY sequences was associated with the strongest correlations from the sagittal plane, and the XYZ sequence was found habitually to be associated with the lowest correlations. It appears that for accurate representation of 3-D kinematics of the lead leg during the fencing lunge, the XYZ sequence is the most appropriate and as such its continued utilization is encouraged. PMID:24146700

  1. A 3D map of the hindlimb motor representation in the lumbar spinal cord in Sprague Dawley rats

    NASA Astrophysics Data System (ADS)

    Borrell, Jordan A.; Frost, Shawn B.; Peterson, Jeremy; Nudo, Randolph J.

    2017-02-01

    Objective. Spinal cord injury (SCI) is a devastating neurological trauma; about 282 000 people were living with an SCI in the United States in 2016. Advances in neuromodulatory devices hold promise for restoring function by incorporating the delivery of electrical current directly into the spinal cord grey matter via intraspinal microstimulation (ISMS). In such designs, detailed topographic maps of spinal cord outputs are needed to determine ISMS locations for eliciting hindlimb movements. The primary goal of the present study was to derive a topographic map of functional motor outputs in the lumbar spinal cord to hindlimb skeletal muscles as defined by ISMS in a rat model. Approach. Experiments were carried out in nine healthy, adult, male, Sprague Dawley rats. After a laminectomy of the T13-L1 vertebrae and removal of the dura mater, a four-shank, 16-channel microelectrode array was inserted along a 3D stimulation grid (200 µm). Trains of three biphasic current pulses were used to determine evoked movements and electromyographic (EMG) activity. EMG activity was recorded via fine-wire electrodes, and stimulus-triggered averaging (StTA) of the rectified EMG data was used to determine response latency. Main results. Hindlimb movements were elicited at a median current intensity of 6 µA, and thresholds were significantly lower in ventrolateral sites. Movements typically consisted of whole leg, hip, knee, ankle, toe, and trunk movements. Hip movements dominated rostral to the T13 vertebral segment, knee movements were evoked at the T13-L1 vertebral junction, while ankle and digit movements were found near the rostral L1 vertebra. Whole leg movements spanned the entire rostrocaudal region explored, while trunk movements dominated medially. StTAs of EMG activity demonstrated a latency of ~4 ms. Significance. The derived motor map provides insight into the parameters needed for future neuromodulatory devices.
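
    A minimal sketch of the stimulus-triggered averaging (StTA) step on rectified EMG, with window lengths and the latency rule chosen purely for illustration; it is not the authors' analysis code.

    ```python
    import numpy as np

    def stimulus_triggered_average(emg, stim_samples, fs, pre_ms=10, post_ms=30):
        """Stimulus-triggered average of rectified EMG.

        emg          : 1-D EMG trace (single muscle)
        stim_samples : sample indices of stimulus onsets
        fs           : sampling rate in Hz
        Returns the time axis (ms relative to the stimulus) and the averaged
        rectified EMG, from which a response latency can be read off.
        """
        pre = int(pre_ms * fs / 1000)
        post = int(post_ms * fs / 1000)
        rect = np.abs(emg)
        sweeps = [rect[s - pre:s + post] for s in stim_samples
                  if s - pre >= 0 and s + post <= len(rect)]
        avg = np.mean(sweeps, axis=0)
        t_ms = (np.arange(-pre, post) / fs) * 1000.0
        return t_ms, avg

    # Latency could then be estimated, for example, as the first post-stimulus sample
    # where the average exceeds the pre-stimulus mean by a few standard deviations.
    ```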

  2. Application of a roughness-length representation to parameterize energy loss in 3-D numerical simulations of large rivers

    NASA Astrophysics Data System (ADS)

    Sandbach, S. D.; Lane, S. N.; Hardy, R. J.; Amsler, M. L.; Ashworth, P. J.; Best, J. L.; Nicholas, A. P.; Orfeo, O.; Parsons, D. R.; Reesink, A. J. H.; Szupiany, R. N.

    2012-12-01

    Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water surface slope for a variety of roughness lengths. This proved difficult as the metrics used to assess optimal model performance diverged due to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest that simpler two-dimensional models may have great utility in the investigation of flow within large rivers.
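
    For orientation, the standard way a roughness length enters a wall treatment is through the logarithmic law of the wall; the sketch below only illustrates how the roughness length z0 controls near-bed velocities under that standard relation and is not the paper's CFD implementation.

    ```python
    import numpy as np

    def log_law_velocity(z, u_star, z0, kappa=0.41):
        """Law-of-the-wall velocity profile used in standard roughness-length treatments.

        u(z) = (u*/kappa) * ln(z / z0), where z0 is the roughness length that lumps
        the drag of unresolved ("subgrid") bed topography.
        """
        z = np.asarray(z, dtype=float)
        return (u_star / kappa) * np.log(z / z0)

    # A larger roughness length implies lower near-bed velocities for the same shear velocity
    # (heights and shear velocity below are illustrative values):
    heights = np.array([0.5, 1.0, 2.0, 5.0])          # m above the bed
    print(log_law_velocity(heights, u_star=0.05, z0=0.01))
    print(log_law_velocity(heights, u_star=0.05, z0=0.10))
    ```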

  3. Stochastic Representation and Uncertainty Assessment of a Deep Geothermal Reservoir Using Cross-Borehole ERT: A 3D Synthetic Case

    NASA Astrophysics Data System (ADS)

    Brunet, P.; Gloaguen, E.

    2014-12-01

    Designing and monitoring of geothermal systems is a complex task which requires a multidisciplinary approach. Deep geothermal reservoir models are prone to greater uncertainty, with a lack of direct data and lower resolution of surface geophysical methods. However, recent technical advances have enabled the potential use of permanent downhole vertical resistivity arrays for monitoring fluid injection. As electrical resistivity is sensitive to temperature changes, such data could provide valuable information for deep geothermal reservoir characterization. The objective of this study is to assess the potential of time-lapse cross-borehole ERT to constrain 3D realizations of geothermal reservoir properties. The synthetic case of a permeable geothermal reservoir in a sedimentary basin was set up as a confined, deep and saline sandstone aquifer with intermediate reservoir temperature (150ºC), depth (1 km) and 30 m thickness. The reservoir permeability distribution is heterogeneous, as the result of a fluvial depositional environment. The ERT monitoring system design is a triangular arrangement of 3 wells at 150 m spacing, including 1 injection and 1 extraction well. The optimal number and spacing of electrodes of the ERT array design is site-specific and has been assessed through a sensitivity study. Dipole-dipole and pole-pole electrode configurations were used. The study workflow was the following: 1) Generation of a reference reservoir model and 100 stochastic realizations of permeability; 2) Simulation of saturated single-phase flow and heat transport of reinjection of cooled formation fluid (50ºC) with the TOUGH2 software; 3) Time-lapse forward ERT modeling on the reference model and all realizations (observed and simulated apparent resistivity change); 4) Heuristic optimization on the observed and calculated ERT data. Preliminary results show significant reduction of parameter uncertainty, hence realization space, with assimilation of cross-borehole ERT data. Loss in

  4. Exploring Children's Face-Space: A Multidimensional Scaling Analysis of the Mental Representation of Facial Identity

    ERIC Educational Resources Information Center

    Nishimura, Mayu; Maurer, Daphne; Gao, Xiaoqing

    2009-01-01

    We explored differences in the mental representation of facial identity between 8-year-olds and adults. The 8-year-olds and adults made similarity judgments of a homogeneous set of faces (individual hair cues removed) using an "odd-man-out" paradigm. Multidimensional scaling (MDS) analyses were performed to represent perceived similarity of faces…

  5. Forgetting the Once-Seen Face: Estimating the Strength of an Eyewitness's Memory Representation

    ERIC Educational Resources Information Center

    Deffenbacher, Kenneth A.; Bornstein, Brian H.; McGorty, E. Kiernan; Penrod, Steven D.

    2008-01-01

    The fidelity of an eyewitness's memory representation is an issue of paramount forensic concern. Psychological science has been unable to offer more than vague generalities concerning the relation of retention interval to memory trace strength for the once-seen face. A meta-analysis of 53 facial memory studies produced a highly reliable…

  6. Study on local Gabor binary patterns for face representation and recognition

    NASA Astrophysics Data System (ADS)

    Ge, Wei; Han, Chunling; Quan, Wei

    2015-12-01

    Recently, Local Binary Patterns (LBP) have received much attention in face representation and recognition. The original LBP operator describes spatial structure information, essentially the various edge and corner features of local facial regions, which are important factors for distinguishing different faces. However, the scale and orientation of these edge features carry additional detail that could be used to distinguish individuals more effectively, and the original LBP operator cannot extract this information. In this paper, building on original LBP-based facial representation and recognition, histogram sequences of local Gabor binary patterns are used to represent facial images. The Principal Component Analysis (PCA) method is used to classify the histogram sequences, which are first converted to vectors. Recognition experiments show that the method used in this paper improves classification performance by nearly 6% over the original LBP operator.
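
    A compact sketch of a local Gabor binary pattern (LGBP) feature extractor of the kind described above, built from scikit-image's Gabor and LBP routines; the filter-bank size, spatial grid and histogram settings are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np
    from skimage.filters import gabor
    from skimage.feature import local_binary_pattern

    def lgbp_histogram(image, frequencies=(0.1, 0.2),
                       thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4),
                       grid=(4, 4), P=8, R=1.0):
        """Local Gabor Binary Pattern histogram sequence for one face image.

        The image is filtered with a small Gabor bank (scales x orientations);
        each Gabor magnitude map is encoded with uniform LBP and summarized by
        regional histograms, which are concatenated into one feature vector.
        """
        feats = []
        gh, gw = image.shape[0] // grid[0], image.shape[1] // grid[1]
        for f in frequencies:
            for th in thetas:
                real, imag = gabor(image, frequency=f, theta=th)
                magnitude = np.hypot(real, imag)
                codes = local_binary_pattern(magnitude, P, R, method='uniform')
                for r in range(grid[0]):
                    for c in range(grid[1]):
                        block = codes[r*gh:(r+1)*gh, c*gw:(c+1)*gw]
                        hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2),
                                               density=True)
                        feats.append(hist)
        return np.concatenate(feats)

    # The resulting histogram sequences (one vector per face) can then be projected
    # with PCA, e.g. sklearn.decomposition.PCA, before matching.
    face = np.random.rand(64, 64)        # stand-in for a normalized face image
    vec = lgbp_histogram(face)
    ```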

  7. The representation of information about faces in the temporal and frontal lobes.

    PubMed

    Rolls, Edmund T

    2007-01-07

    Neurophysiological evidence is described showing that some neurons in the macaque inferior temporal visual cortex have responses that are invariant with respect to the position, size and view of faces and objects, and that these neurons show rapid processing and rapid learning. Which face or object is present is encoded using a distributed representation in which each neuron conveys independent information in its firing rate, with little information evident in the relative time of firing of different neurons. This ensemble encoding has the advantages of maximising the information in the representation useful for discrimination between stimuli using a simple weighted sum of the neuronal firing by the receiving neurons, generalisation and graceful degradation. These invariant representations are ideally suited to provide the inputs to brain regions such as the orbitofrontal cortex and amygdala that learn the reinforcement associations of an individual's face, for then the learning, and the appropriate social and emotional responses, generalise to other views of the same face. A theory is described of how such invariant representations may be produced in a hierarchically organised set of visual cortical areas with convergent connectivity. The theory proposes that neurons in these visual areas use a modified Hebb synaptic modification rule with a short-term memory trace to capture whatever can be captured at each stage that is invariant about objects as the objects change in retinal view, position, size and rotation. Another population of neurons in the cortex in the superior temporal sulcus encodes other aspects of faces such as face expression, eye gaze, face view and whether the head is moving. These neurons thus provide important additional inputs to parts of the brain such as the orbitofrontal cortex and amygdala that are involved in social communication and emotional behaviour. Outputs of these systems reach the amygdala, in which face-selective neurons are found

  8. Modelling the impact of the light regime on single tree transpiration based on 3D representations of plant architecture

    NASA Astrophysics Data System (ADS)

    Bittner, S.; Priesack, E.

    2012-04-01

    We apply a functional-structural model of tree water flow to single old-growth trees in a temperate broad-leaved forest stand. Roots, stems and branches are represented by connected porous cylinder elements, further divided into inner heartwood cylinders surrounded by xylem and phloem. Xylem water flow is simulated by applying a non-linear Darcy flow in porous media driven by the water potential gradient according to the cohesion-tension theory. The flow model is based on physiological input parameters such as the hydraulic conductivity, stomatal response to leaf water potential and root water uptake capability and, thus, can reflect the different properties of tree species. The actual root water uptake is also calculated using a non-linear Darcy law, based on the gradient between root xylem water potential and rhizosphere soil water potential, and by simulating soil water flow with the Richards equation. A leaf stomatal conductance model is combined with the hydrological tree and soil water flow model and a spatially explicit three-dimensional canopy light model. The structure of the canopy and the tree architectures are derived by applying an automatic tree skeleton extraction algorithm to point clouds obtained with a terrestrial laser scanner, allowing an explicit representation of the water flow path in the stem and branches. The high spatial resolution of the root and branch geometry and their connectivity makes detailed modelling of the water use of single trees possible and allows for the analysis of the interaction between single trees and the influence of the canopy light regime (including different fractions of direct sunlight and diffuse skylight) on the simulated sap flow and transpiration. The model can be applied at various sites and to different tree species, enabling the up-scaling of the water usage of single trees to the total transpiration of mixed stands. Examples are given to reveal differences between diffuse- and ring

  9. Image-based 3D modeling for the knowledge and the representation of archaeological dig and pottery: Sant'Omobono and Sarno project's strategies

    NASA Astrophysics Data System (ADS)

    Gianolio, S.; Mermati, F.; Genovese, G.

    2014-06-01

    This paper presents a "standard" method that is being developed by ARESlab of Rome's La Sapienza University for the documentation and the representation of the archaeological artifacts and structures through automatic photogrammetry software. The image-based 3D modeling technique was applied in two projects: in Sarno and in Rome. The first is a small city in Campania region along Via Popilia, known as the ancient way from Capua to Rhegion. The interest in this city is based on the recovery of over 2100 tombs from local necropolis that contained more than 100.000 artifacts collected in "Museo Nazionale Archeologico della Valle del Sarno". In Rome the project regards the archaeological area of Insula Volusiana placed in Forum Boarium close to Sant'Omobono sacred area. During the studies photographs were taken by Canon EOS 5D Mark II and Canon EOS 600D cameras. 3D model and meshes were created in Photoscan software. The TOF-CW Z+F IMAGER® 5006h laser scanner is used to dense data collection of archaeological area of Rome and to make a metric comparison between range-based and image-based techniques. In these projects the IBM as a low-cost technique proved to be a high accuracy improvement if planned correctly and it shown also how it helps to obtain a relief of complex strata and architectures compared to traditional manual documentation methods (e.g. two-dimensional drawings). The multidimensional recording can be used for future studies of the archaeological heritage, especially for the "destructive" character of an excavation. The presented methodology is suitable for the 3D registration and the accuracy of the methodology improved also the scientific value.

  10. Using pH variations to improve the discrimination of wines by 3D front face fluorescence spectroscopy associated to Independent Components Analysis.

    PubMed

    Saad, Rita; Bouveresse, Delphine Jouan-Rimbaud; Locquet, Nathalie; Rutledge, Douglas N

    2016-06-01

    The polyphenol composition of a wine is related to the grape variety from which it is made. These polyphenols play an essential role in its quality and may also have a protective effect on human health. Their conjugated aromatic structure renders them fluorescent, which means that 3D front-face fluorescence spectroscopy could be a useful tool to differentiate among the grape varieties that characterize each wine. However, fluorescence spectra acquired simply at the natural pH of wine are not always sufficient to discriminate the wines. The structural changes in the polyphenols resulting from modifications in the pH induce significant changes in their fluorescence spectra, making it possible to more clearly separate different wines. Nine wines belonging to three different grape varieties (Shiraz, Cabernet Sauvignon and Pinot Noir) and from nine different producers were analyzed over a range of pH values. Independent Components Analysis (ICA) was used to extract characteristic signals from the matrix of unfolded 3D front-face fluorescence spectra and showed that the introduction of pH as an additional parameter in the study of wine fluorescence improved the discrimination of wines.
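
    A minimal sketch of applying ICA to unfolded 3D front-face fluorescence spectra with scikit-learn's FastICA; the data shapes and the random stand-in data are assumptions for illustration only, not the study's measurements.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Hypothetical layout: one excitation-emission matrix (EEM) per wine/pH condition,
    # unfolded into a row vector, so X has shape (n_samples, n_excitation * n_emission).
    n_samples, n_ex, n_em = 45, 30, 100          # e.g. 9 wines x 5 pH levels (illustrative)
    rng = np.random.default_rng(1)
    X = rng.random((n_samples, n_ex * n_em))     # stand-in for unfolded 3D-FFF spectra

    ica = FastICA(n_components=3, random_state=0)      # one IC per expected fluorophore family
    scores = ica.fit_transform(X)                      # sample proportions of each signal
    signals = ica.components_.reshape(3, n_ex, n_em)   # refolded excitation-emission profiles

    # The refolded `signals` can be inspected as EEM landscapes and the `scores`
    # used to discriminate grape varieties (e.g. with a simple classifier).
    ```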

  11. Independent components analysis coupled with 3D-front-face fluorescence spectroscopy to study the interaction between plastic food packaging and olive oil.

    PubMed

    Kassouf, Amine; El Rakwe, Maria; Chebib, Hanna; Ducruet, Violette; Rutledge, Douglas N; Maalouly, Jacqueline

    2014-08-11

    Olive oil is one of the most valued sources of fats in the Mediterranean diet. Its storage has generally relied on glass or metal packaging materials. Nowadays, plastic packaging has become widespread worldwide for the storage of olive oil. However, plastics are not inert, and interaction phenomena may occur between packaging materials and olive oil. In this study, extra virgin olive oil samples were submitted to accelerated interaction conditions in contact with polypropylene (PP) and polylactide (PLA) plastic packaging materials. 3D-front-face fluorescence spectroscopy, being a simple, fast and non-destructive analytical technique, was used to study this interaction. Independent components analysis (ICA) was used to analyze raw 3D-front-face fluorescence spectra of olive oil. ICA was able to highlight the probable effect of a migration of substances with antioxidant activity. The signals extracted by ICA corresponded to natural olive oil fluorophores (tocopherols and polyphenols) as well as newly formed ones, which were tentatively identified as fluorescent oxidation products. Based on the extracted fluorescent signals, olive oil in contact with plastics had slower aging rates in comparison with reference oils. Peroxide and free acidity values validated the results obtained by ICA related to olive oil oxidation rates. Sorbed olive oil in plastic was also quantified, given that this sorption could induce swelling of the polymer, thus promoting migration.

  12. Improved face representation by nonuniform multilevel selection of Gabor convolution features.

    PubMed

    Du, Shan; Ward, Rabab Kreidieh

    2009-12-01

    Gabor wavelets are widely employed in face representation to decompose face images into their spatial-frequency domains. The Gabor wavelet transform, however, introduces very high dimensional data. To reduce this dimensionality, uniform sampling of Gabor features has traditionally been used. Since uniform sampling treats all the features equally, it can lead to a loss of important features while retaining trivial ones. In this paper, we propose a new face representation method that employs nonuniform multilevel selection of Gabor features. The proposed method is based on the local statistics of the Gabor features and is implemented using a coarse-to-fine hierarchical strategy. Gabor features that correspond to important face regions are automatically selected and sampled finer than other features. The nonuniformly extracted Gabor features are then classified using principal component analysis and/or linear discriminant analysis for the purpose of face recognition. To verify the effectiveness of the proposed method, experiments have been conducted on benchmark face image databases where the images vary in illumination, expression, pose, and scale. Compared with the methods that use the original gray-scale image with 4096-dimensional data and uniform sampling with 2560-dimensional data, the proposed method results in a significantly higher recognition rate, with a substantially lower dimension of around 700. The experimental results also show that the proposed method works well not only when multiple sample images are available for training but also when only one sample image is available for each person. The proposed face representation method has the advantages of low complexity, low dimensionality, and high discriminating power.

  13. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. The Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.

  14. Low-rank and eigenface based sparse representation for face recognition.

    PubMed

    Hou, Yi-Fu; Sun, Zhan-Li; Chong, Yan-Wen; Zheng, Chun-Hou

    2014-01-01

    In this paper, based on low-rank representation and eigenface extraction, we present an improvement to the well-known Sparse Representation-based Classification (SRC). Firstly, the low-rank images of the face images of each individual in the training subset are extracted by Robust Principal Component Analysis (Robust PCA) to alleviate the influence of noise (e.g., illumination differences and occlusions). Secondly, Singular Value Decomposition (SVD) is applied to extract the eigenfaces from these low-rank approximate images. Finally, we utilize these eigenfaces to construct a compact and discriminative dictionary for sparse representation. We evaluate our method on five popular databases. Experimental results demonstrate the effectiveness and robustness of our method.
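
    A brief sketch of the eigenface-dictionary and sparse-representation steps using SVD and an l1 solver from scikit-learn; the Robust PCA cleaning step is omitted for brevity, and the number of eigenfaces and the regularization strength are illustrative choices, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def class_eigenfaces(class_images, k=5):
        """Left singular vectors (eigenfaces) of one individual's images.

        `class_images` is (n_pixels, n_samples); in the paper the images would
        first be cleaned with Robust PCA, which is not reproduced here.
        """
        U, _, _ = np.linalg.svd(class_images, full_matrices=False)
        return U[:, :k]

    def src_classify(test_vec, dictionaries, alpha=0.01):
        """Sparse-representation classification over per-class eigenface dictionaries."""
        D = np.hstack(dictionaries)                       # concatenated dictionary
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(D, test_vec)
        coef = lasso.coef_
        residuals, start = [], 0
        for Dc in dictionaries:                           # class-wise reconstruction error
            k = Dc.shape[1]
            residuals.append(np.linalg.norm(test_vec - Dc @ coef[start:start + k]))
            start += k
        return int(np.argmin(residuals)), residuals
    ```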

  15. Low-Rank and Eigenface Based Sparse Representation for Face Recognition

    PubMed Central

    Hou, Yi-Fu; Sun, Zhan-Li; Chong, Yan-Wen; Zheng, Chun-Hou

    2014-01-01

    In this paper, based on low-rank representation and eigenface extraction, we present an improvement to the well-known Sparse Representation-based Classification (SRC). Firstly, the low-rank images of the face images of each individual in the training subset are extracted by Robust Principal Component Analysis (Robust PCA) to alleviate the influence of noise (e.g., illumination differences and occlusions). Secondly, Singular Value Decomposition (SVD) is applied to extract the eigenfaces from these low-rank approximate images. Finally, we utilize these eigenfaces to construct a compact and discriminative dictionary for sparse representation. We evaluate our method on five popular databases. Experimental results demonstrate the effectiveness and robustness of our method. PMID:25334027

  16. Virtual images inspired consolidate collaborative representation-based classification method for face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Shigang; Zhang, Xinxin; Peng, Yali; Cao, Han

    2016-07-01

    The collaborative representation-based classification method performs well in the field of classification of high-dimensional images such as face recognition. It utilizes training samples from all classes to represent a test sample and assigns a class label to the test sample using the representation residuals. However, this method still suffers from the problem that a limited number of training samples reduces the classification accuracy when applied to image classification. In this paper, we propose a modified collaborative representation-based classification method (MCRC), which exploits novel virtual images and can obtain high classification accuracy. The procedure to produce virtual images is very simple, but using them can bring a surprising performance improvement. The virtual images can sufficiently capture the features of the original face images in some cases. Extensive experimental results clearly demonstrate that the proposed method can effectively improve classification accuracy. This is mainly attributed to the integration of the collaborative representation and the proposed feature-information-dominated virtual images.
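
    The collaborative representation step has a simple closed form (a ridge-regression-style solve followed by class-wise residuals); the sketch below implements it in NumPy. The mirrored-image "virtual sample" shown at the end is a hypothetical stand-in, since the abstract does not specify how the paper's virtual images are constructed.

    ```python
    import numpy as np

    def crc_classify(X, labels, y, lam=0.01):
        """Collaborative representation-based classification (CRC) with l2 regularization.

        X      : (n_pixels, n_train) matrix of training samples (columns)
        labels : class label of each training column
        y      : test sample, shape (n_pixels,)
        The test sample is represented over all training samples jointly, then
        assigned to the class with the smallest norm-weighted reconstruction residual.
        """
        labels = np.asarray(labels)
        n = X.shape[1]
        alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
        best, best_r = None, np.inf
        for c in np.unique(labels):
            mask = labels == c
            r = np.linalg.norm(y - X[:, mask] @ alpha[mask]) / (np.linalg.norm(alpha[mask]) + 1e-12)
            if r < best_r:
                best, best_r = c, r
        return best

    # One simple (hypothetical) way to augment the gallery with virtual samples is
    # horizontal mirroring; the flattened result can be appended as extra columns of X.
    def mirror_virtual(image_2d):
        return np.fliplr(image_2d)
    ```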

  17. Representations of individuals in ventral temporal cortex defined by faces and biographies.

    PubMed

    Verosky, Sara C; Todorov, Alexander; Turk-Browne, Nicholas B

    2013-09-01

    The fusiform gyrus responds more strongly to faces than to other categories of objects. This response could reflect either categorical detection of faces or recognition of particular facial identities. Recent fMRI studies have attempted to address the question of what information is encoded in these regions, but have reported mixed results. We tested whether the creation of richer identity representations via training on visual and social information, and the use of an adaptation design, would reveal more robust representations of these identities in ventral temporal cortex. Examining the patterns of activation across voxels in bilateral fusiform gyri, we identified unique patterns for particular identities. Attaching distinctive biographical information to identities did not increase the strength of these representations, but did produce a grouping effect: faces associated with the same amount of biographical information were represented more similarly to each other. These results are consistent with the possibility that identity exemplars are represented in posterior visual areas best known for their role in representing categorical information, and suggest that these areas may be sensitive to some forms of non-visual information, including from the social domain.

  18. Conscious and Non-conscious Representations of Emotional Faces in Asperger's Syndrome.

    PubMed

    Chien, Vincent S C; Tsai, Arthur C; Yang, Han Hsuan; Tseng, Yi-Li; Savostyanov, Alexander N; Liou, Michelle

    2016-07-31

    Several neuroimaging studies have suggested that the low spatial frequency content in an emotional face mainly activates the amygdala, pulvinar, and superior colliculus especially with fearful faces(1-3). These regions constitute the limbic structure in non-conscious perception of emotions and modulate cortical activity either directly or indirectly(2). In contrast, the conscious representation of emotions is more pronounced in the anterior cingulate, prefrontal cortex, and somatosensory cortex for directing voluntary attention to details in faces(3,4). Asperger's syndrome (AS)(5,6) represents an atypical mental disturbance that affects sensory, affective and communicative abilities, without interfering with normal linguistic skills and intellectual ability. Several studies have found that functional deficits in the neural circuitry important for facial emotion recognition can partly explain social communication failure in patients with AS(7-9). In order to clarify the interplay between conscious and non-conscious representations of emotional faces in AS, an EEG experimental protocol is designed with two tasks involving emotionality evaluation of either photograph or line-drawing faces. A pilot study is introduced for selecting face stimuli that minimize the differences in reaction times and scores assigned to facial emotions between the pretested patients with AS and IQ/gender-matched healthy controls. Information from the pretested patients was used to develop the scoring system used for the emotionality evaluation. Research into facial emotions and visual stimuli with different spatial frequency contents has reached discrepant findings depending on the demographic characteristics of participants and task demands(2). The experimental protocol is intended to clarify deficits in patients with AS in processing emotional faces when compared with healthy controls by controlling for factors unrelated to recognition of facial emotions, such as task difficulty, IQ and

  19. Feature-based face representations and image reconstruction from behavioral and neural data

    PubMed Central

    Nestor, Adrian; Plaut, David C.; Behrmann, Marlene

    2016-01-01

    The reconstruction of images from neural data can provide a unique window into the content of human perceptual representations. Although recent efforts have established the viability of this enterprise using functional magnetic resonance imaging (MRI) patterns, these efforts have relied on a variety of prespecified image features. Here, we take on the twofold task of deriving features directly from empirical data and of using these features for facial image reconstruction. First, we use a method akin to reverse correlation to derive visual features from functional MRI patterns elicited by a large set of homogeneous face exemplars. Then, we combine these features to reconstruct novel face images from the corresponding neural patterns. This approach allows us to estimate collections of features associated with different cortical areas as well as to successfully match image reconstructions to corresponding face exemplars. Furthermore, we establish the robustness and the utility of this approach by reconstructing images from patterns of behavioral data. From a theoretical perspective, the current results provide key insights into the nature of high-level visual representations, and from a practical perspective, these findings make possible a broad range of image-reconstruction applications via a straightforward methodological approach. PMID:26711997

  20. Creating Virtual-hand and Virtual-face Illusions to Investigate Self-representation.

    PubMed

    Ma, Ke; Lippelt, Dominique P; Hommel, Bernhard

    2017-03-01

    Studies investigating how people represent themselves and their own body often use variants of "ownership illusions", such as the traditional rubber-hand illusion or the more recently discovered enfacement illusion. However, these examples require rather artificial experimental setups, in which the artificial effector needs to be stroked in synchrony with the participants' real hand or face, a situation in which participants have no control over the stroking or the movements of their real or artificial effector. Here, we describe a technique to establish ownership illusions in a setup that is more realistic, more intuitive, and of presumably higher ecological validity. It allows creating the virtual-hand illusion by having participants control the movements of a virtual hand presented on a screen or in virtual space in front of them. If the virtual hand moves in synchrony with the participants' own real hand, they tend to perceive the virtual hand as part of their own body. The technique also creates the virtual-face illusion by having participants control the movements of a virtual face in front of them, again with the effect that they tend to perceive the face as their own if it moves in synchrony with their real face. Studying the circumstances under which illusions of this sort can be created, increased, or reduced provides important information about how people create and maintain representations of themselves.

  1. Attachment representation modulates oxytocin effects on the processing of own-child faces in fathers.

    PubMed

    Waller, Christiane; Wittfoth, Matthias; Fritzsche, Konstantin; Timm, Lydia; Wittfoth-Schardt, Dina; Rottler, Edit; Heinrichs, Markus; Buchheim, Anna; Kiefer, Markus; Gündel, Harald

    2015-12-01

    Oxytocin (OT) plays a crucial role in parental-infant bonding and attachment. Recent functional imaging studies reveal specific attachment- and reward-related brain regions in individuals or within the parent-child dyad. However, the time course and functional stage of modulatory effects of OT on attachment-related processing, especially in fathers, are poorly understood. To elucidate the functional and neural mechanisms underlying the role of OT in paternal-child attachment, we performed an event-related potential study in 24 healthy fathers who received intranasal OT in a double-blind, placebo-controlled, within-subject experimental design. Participants passively viewed pictures of their own child (oC), a familiar child (fC) and an unfamiliar child (ufC) while event-related potentials were recorded. Familiarity of the child's face modulated a broad negativity at occipital and temporo-parietal electrodes within a time window of 300-400 ms, presumably reflecting a modulation of the N250 and N300 ERP components. The oC condition elicited a more negative potential compared to the other familiarity conditions, suggesting different activation of perceptual memory representations and assignment of emotional valence. Most importantly, this familiarity effect was only observed under placebo (PL) and was abolished under OT, in particular at left temporo-parietal electrodes. This OT-induced attenuation of ERP responses was related to habitual attachment representations in fathers. In summary, our results demonstrate an OT-specific effect at later stages of attachment-related face processing, presumably reflecting both activation of perceptual memory representations and assignment of emotional value.

  2. Exploitation of 3D face-centered cubic mesoporous silica as a carrier for a poorly water soluble drug: influence of pore size on release rate.

    PubMed

    Zhu, Wenquan; Wan, Long; Zhang, Chen; Gao, Yikun; Zheng, Xin; Jiang, Tongying; Wang, Siling

    2014-01-01

    The purposes of the present work were to explore the potential application of 3D face-centered cubic mesoporous silica (FMS) with a pore size of 16.0 nm as a delivery system for poorly soluble drugs and to investigate the effect of pore size on the dissolution rate. FMS with different pore sizes (16.0, 6.9 and 3.7 nm) was successfully synthesized by using Pluronic block co-polymer F127 as a template and adjusting the reaction temperatures. Celecoxib (CEL), which is a BCS class II drug, was used as a model drug and loaded into FMS with different pore sizes by the solvent deposition method at a drug-silica ratio of 1:4. Characterization using scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FT-IR), thermogravimetric analysis (TGA), nitrogen adsorption, X-ray diffraction (XRD), and differential scanning calorimetry (DSC) was used to systematically investigate the drug loading process. The results obtained showed that CEL was in a non-crystalline state after incorporation into the pores of FMS-15, which has a pore size of 16.0 nm. In vitro dissolution was carried out to demonstrate the effects of FMS with different pore sizes on the release of CEL. The results obtained indicated that the dissolution rate of CEL from FMS-15 was significantly enhanced compared with pure CEL. This could be explained by supposing that CEL encountered less diffusion resistance and its crystallinity decreased due to the large pore size of 16.0 nm and the nanopore channels of FMS-15. Moreover, drug loading and pore size both play an important role in enhancing the dissolution properties of poorly water-soluble drugs. As the pore size increased from 3.7 to 16.0 nm, the dissolution rate of CEL from FMS gradually increased.

  3. Judging Normality and Attractiveness in Faces: Direct Evidence of a More Refined Representation for Own-Race, Young Adult Faces.

    PubMed

    Zhou, Xiaomei; Short, Lindsey A; Chan, Harmonie S J; Mondloch, Catherine J

    2016-09-01

    Young and older adults are more sensitive to deviations from normality in young than older adult faces, suggesting that the dimensions of face space are optimized for young adult faces. Here, we extend these findings to own-race faces and provide converging evidence using an attractiveness rating task. In Experiment 1, Caucasian and Chinese adults were shown own- and other-race face pairs; one member was undistorted and the other had compressed or expanded features. Participants indicated which member of each pair was more normal (a task that requires referencing a norm) and which was more expanded (a task that simply requires discrimination). Participants showed an own-race advantage in the normality task but not the discrimination task. In Experiment 2, participants rated the facial attractiveness of own- and other-race faces (Experiment 2a) or young and older adult faces (Experiment 2b). Between-rater variability in ratings of individual faces was higher for other-race and older adult faces; reduced consensus in attractiveness judgments reflects a less refined face space. Collectively, these results provide direct evidence that the dimensions of face space are optimized for own-race and young adult faces, which may underlie face race- and age-based deficits in recognition.

  4. Band-Reweighed Gabor Kernel Embedding for Face Image Representation and Recognition.

    PubMed

    Ren, Chuan-Xian; Dai, Dao-Qing; Li, Xiao-Xin; Lai, Zhao-Rong

    2014-02-01

    Face recognition with illumination or pose variation is a challenging problem in image processing and pattern recognition. A novel algorithm using band-reweighed Gabor kernel embedding to deal with the problem is proposed in this paper. For a given image, it is first transformed by a group of Gabor filters, which output Gabor features using different orientation and scale parameters. A Fisher scoring function is used to measure the importance of features in each band, and the features with the largest scores are preserved to reduce memory requirements. The reduced bands are combined using a weight vector, which is determined by a weighted kernel discriminant criterion and solved by a constrained quadratic programming method; the weighted sum of these nonlinear bands is then defined as the similarity between two images. Compared with existing concatenation-based Gabor feature representations and uniformly weighted similarity calculation approaches, our method provides a new way to use Gabor features for face recognition and presents a reasonable interpretation for highlighting discriminant orientations and scales. The minimum Mahalanobis distance, which accounts for the spatial correlations within the data, is exploited for feature matching, and the graphical lasso is used to directly estimate the sparse inverse covariance matrix. Experiments using benchmark databases show that our new algorithm improves the recognition results and obtains competitive performance.
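
    As a rough illustration of the first two stages described above (Gabor filtering followed by Fisher-score screening of the band features), the NumPy sketch below builds one real Gabor kernel and ranks features by their Fisher score. The kernel parameters and the number of retained features are arbitrary illustrative choices; the band re-weighting via the weighted kernel discriminant criterion, the quadratic-programming solver, and the graphical-lasso Mahalanobis matching described in the record are not reproduced here.

    ```python
    import numpy as np

    def gabor_kernel(size=31, sigma=4.0, theta=0.0, lam=8.0, gamma=0.5):
        """Real part of a 2D Gabor filter with orientation theta and wavelength lam (pixels)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

    def fisher_scores(F, labels):
        """Per-feature Fisher score: between-class scatter over within-class scatter.

        F      : n_samples x n_features matrix of (Gabor) features
        labels : length-n_samples array of class labels
        """
        mu = F.mean(axis=0)
        num = np.zeros(F.shape[1])
        den = np.zeros(F.shape[1])
        for c in np.unique(labels):
            Fc = F[labels == c]
            num += Fc.shape[0] * (Fc.mean(axis=0) - mu) ** 2
            den += Fc.shape[0] * Fc.var(axis=0)
        return num / (den + 1e-12)

    # Keep only the highest-scoring features within a band (count chosen for illustration)
    # top = np.argsort(-fisher_scores(F_band, labels))[:500]
    ```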

  5. Unilateral nasal obstruction affects motor representation development within the face primary motor cortex in growing rats.

    PubMed

    Abe, Yasunori; Kato, Chiho; Uchima Koecklin, Karin Harumi; Okihara, Hidemasa; Ishida, Takayoshi; Fujita, Koichi; Yabushita, Tadachika; Kokai, Satoshi; Ono, Takashi

    2017-03-23

    Postnatal growth is influenced by genetic and environmental factors. Nasal obstruction during growth alters the electromyographic activity of orofacial muscles. The facial primary motor area represents muscles of the tongue and jaw, which are essential in regulating orofacial motor functions, including chewing and jaw opening. This study aimed to evaluate the effect of chronic unilateral nasal obstruction during growth on the motor representations within the face primary motor cortex (M1). Seventy-two 6-day-old male Wistar rats were randomly divided into control (n = 36) and experimental (n = 36) groups. Rats in the experimental group underwent unilateral nasal obstruction after cauterization of the external nostril at 8 days of age. Intracortical microstimulation (ICMS) mapping was performed when the rats were 5, 7, 9, and 11 weeks old in control and experimental groups (n = 9 per group per time point). Repeated-measures multivariate analysis of variance was used for intergroup and intragroup statistical comparisons. In the control and experimental groups, the total number of positive ICMS sites for the genioglossus and anterior digastric muscles was significantly higher at 5, 7, and 9 weeks, but there was no significant difference between 9 and 11 weeks of age. Moreover, the total number of positive ICMS sites was significantly smaller in the experimental group than in the control at each age. It is possible that nasal obstruction induced the initial changes in orofacial motor behavior in response to the altered respiratory pattern, which eventually contributed to face-M1 neuroplasticity.

  6. Individuation training with other-race faces reduces preschoolers' implicit racial bias: a link between perceptual and social representation of faces in children.

    PubMed

    Xiao, Wen S; Fu, Genyue; Quinn, Paul C; Qin, Jinliang; Tanaka, James W; Pascalis, Olivier; Lee, Kang

    2015-07-01

    The present study examined whether perceptual individuation training with other-race faces could reduce preschool children's implicit racial bias. We used an 'angry = outgroup' paradigm to measure Chinese children's implicit racial bias against African individuals before and after training. In Experiment 1, children between 4 and 6 years were presented with angry or happy racially ambiguous faces that were morphed between Chinese and African faces. Initially, Chinese children demonstrated implicit racial bias: they categorized happy racially ambiguous faces as own-race (Chinese) and angry racially ambiguous faces as other-race (African). Then, the children participated in a training session where they learned to individuate African faces. Children's implicit racial bias was significantly reduced after training relative to that before training. Experiment 2 used the same procedure as Experiment 1, except that Chinese children were trained with own-race Chinese faces. These children did not display a significant reduction in implicit racial bias. Our results demonstrate that early implicit racial bias can be reduced by presenting children with other-race face individuation training, and support a linkage between perceptual and social representations of face information in children.

  7. Identification of own-race and other-race faces: implications for the representation of race in face space.

    PubMed

    Byatt, Graham; Rhodes, Gillian

    2004-08-01

    Own-race faces are recognized more easily than faces of a different, unfamiliar race. According to the multidimensional space (MDS) framework, the poor discriminability of other-race faces is due to their being more densely clustered in face space than own-race faces. Multidimensional scaling analyses of similarity ratings (Caucasian participants, n = 22) showed that other-race (Chinese) faces are more densely clustered in face space. We applied a formal model to test whether the spatial location of face stimuli could account for identification accuracy of another group of Caucasian participants (n = 30). As expected, own-race (Caucasian) faces were identified more accurately (higher hit rate, lower false alarms, and higher A') than other-race faces, which were more densely clustered than own-race faces. A quantitative model successfully predicted identification performance from the spatial locations of the stimuli. The results are discussed in relation to the standard MDS account of race effects and also an alternative "race-feature" hypothesis.
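
    The density account above can be illustrated with a classical (Torgerson) MDS embedding of a dissimilarity matrix derived from the similarity ratings, followed by a simple density index. The sketch below is a generic NumPy implementation of those two steps, not the authors' analysis code; the choice of two embedding dimensions and of mean pairwise distance as the density index are illustrative assumptions.

    ```python
    import numpy as np

    def classical_mds(Delta, k=2):
        """Classical (Torgerson) MDS: embed items from an n x n dissimilarity matrix Delta."""
        n = Delta.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (Delta ** 2) @ J                 # double-centered Gram matrix
        w, V = np.linalg.eigh(B)
        order = np.argsort(w)[::-1][:k]                 # largest eigenvalues first
        return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

    def mean_pairwise_distance(X):
        """Smaller values indicate more densely clustered faces in the recovered space."""
        diffs = X[:, None, :] - X[None, :, :]
        d = np.sqrt((diffs ** 2).sum(axis=-1))
        n = X.shape[0]
        return d.sum() / (n * (n - 1))

    # Denser clustering of other-race faces would show up as, for example,
    # mean_pairwise_distance(classical_mds(D_other)) < mean_pairwise_distance(classical_mds(D_own))
    ```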

  8. Distinct representations of configural and part information across multiple face-selective regions of the human brain

    PubMed Central

    Golarai, Golijeh; Ghahremani, Dara G.; Eberhardt, Jennifer L.; Gabrieli, John D. E.

    2015-01-01

    Several regions of the human brain respond more strongly to faces than to other visual stimuli, such as regions in the amygdala (AMG), superior temporal sulcus (STS), and the fusiform face area (FFA). It is unclear if these brain regions are similar in representing the configuration or natural appearance of face parts. We used functional magnetic resonance imaging of healthy adults who viewed natural or schematic faces with internal parts that were either normally configured or randomly rearranged. Response amplitudes were reduced in the AMG and STS when subjects viewed stimuli whose configuration of parts was digitally rearranged, suggesting that these regions represent the 1st order configuration of face parts. In contrast, response amplitudes in the FFA showed little modulation whether face parts were rearranged or if the natural face parts were replaced with lines. Instead, FFA responses were reduced only when both configural and part information were reduced, revealing an interaction between these factors, suggesting distinct representation of 1st order face configuration and parts in the AMG and STS vs. the FFA. PMID:26594191

  9. Recognizing identity in the face of change: the development of an expression-independent representation of facial identity.

    PubMed

    Mian, Jasmine F; Mondloch, Catherine J

    2012-07-30

    Perceptual aftereffects have indicated that there is an asymmetry in the extent to which adults' representations of identity and expression are independent of one another. Their representation of expression is identity-dependent; the magnitude of expression aftereffects is reduced when the adaptation and test stimuli have different identities. In contrast, their representation of identity is expression-independent; the magnitude of identity aftereffects is independent of whether the adaptation and test stimuli pose the same expressions. Like adults, children's representation of expression is identity-dependent (Vida & Mondloch, 2009). Here we investigated whether they have an expression-dependent representation of facial identity. Adults and 8-year-olds (n = 20 per group) categorized faces in an identity continuum (Sue/Jen) after viewing an adapting stimulus that displayed the same or a different emotional expression. Both groups showed identity aftereffects that were not influenced by facial expression. We conclude that, like adults, 8-year-old children's representation of identity is expression-independent.

  10. Categorization, categorical perception, and asymmetry in infants' representation of face race

    PubMed Central

    Anzures, Gizelle; Quinn, Paul C.; Pascalis, Olivier; Slater, Alan M.; Lee, Kang

    2013-01-01

    The present study examined whether 6- and 9-month-old Caucasian infants could categorize faces according to race. In Experiment 1, infants were familiarized with different female faces from a common ethnic background (i.e. either Caucasian or Asian) and then tested with female faces from a novel race category. Nine-month-olds were able to form discrete categories of Caucasian and Asian faces. However, 6-month-olds did not form discrete categories of faces based on race. In Experiment 2, a second group of 6- and 9-month-olds was tested to determine whether they could discriminate between different faces from the same race category. Results showed that both age groups could only discriminate between different faces from the own-race category of Caucasian faces. The findings of the two experiments taken together suggest that 9-month-olds formed a category of Caucasian faces that are further differentiated at the individual level. In contrast, although they could form a category of Asian faces, they could not discriminate between such other-race faces. This asymmetry in category formation at 9 months (i.e. categorization of own-race faces vs. categorical perception of other-race faces) suggests that differential experience with own- and other-race faces plays an important role in infants' acquisition of face processing abilities. PMID:20590720

  11. Task-Specific Codes for Face Recognition: How they Shape the Neural Representation of Features for Detection and Individuation

    PubMed Central

    2008-01-01

    Background The variety of ways in which faces are categorized makes face recognition challenging for both synthetic and biological vision systems. Here we focus on two face processing tasks, detection and individuation, and explore whether differences in task demands lead to differences both in the features most effective for automatic recognition and in the featural codes recruited by neural processing. Methodology/Principal Findings Our study appeals to a computational framework characterizing the features representing object categories as sets of overlapping image fragments. Within this framework, we assess the extent to which task-relevant information differs across image fragments. Based on objective differences we find among task-specific representations, we test the sensitivity of the human visual system to these different face descriptions independently of one another. Both behavior and functional magnetic resonance imaging reveal effects elicited by objective task-specific levels of information. Behaviorally, recognition performance with image fragments improves with increasing task-specific information carried by different face fragments. Neurally, this sensitivity to the two tasks manifests as differential localization of neural responses across the ventral visual pathway. Fragments diagnostic for detection evoke larger neural responses than non-diagnostic ones in the right posterior fusiform gyrus and bilaterally in the inferior occipital gyrus. In contrast, fragments diagnostic for individuation evoke larger responses than non-diagnostic ones in the anterior inferior temporal gyrus. Finally, for individuation only, pattern analysis reveals sensitivity to task-specific information within the right “fusiform face area”. Conclusions/Significance Our results demonstrate: 1) information diagnostic for face detection and individuation is roughly separable; 2) the human visual system is independently sensitive to both types of information; 3) neural

  12. The Development of Sex Category Representation in Infancy: Matching of Faces and Bodies

    ERIC Educational Resources Information Center

    Hock, Alyson; Kangas, Ashley; Zieber, Nicole; Bhatt, Ramesh S.

    2015-01-01

    Sex is a significant social category, and adults derive information about it from both faces and bodies. Research indicates that young infants process sex category information in faces. However, no prior study has examined whether infants derive sex categories from bodies and match faces and bodies in terms of sex. In the current study,…

  13. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r,{theta},z) inside the magnet bore. The same conductor geometry that is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.
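
    The report's full 3D expansion (with z dependence) is not reproduced in this record. As a simplified two-dimensional analogue, the transverse field inside an accelerator magnet bore is conventionally written as a multipole series, B_y + iB_x = sum_n (B_n + iA_n)((x + iy)/r_ref)^(n-1). The short sketch below evaluates such a series; the coefficient values and reference radius are chosen purely for illustration and are not taken from the report.

    ```python
    import numpy as np

    def multipole_field(x, y, B_n, A_n, r_ref=0.01):
        """Evaluate B_y + i*B_x = sum_n (B_n + i*A_n) * ((x + i*y) / r_ref)**(n - 1).

        B_n, A_n : normal / skew harmonic coefficients (tesla), n = 1..N
        r_ref    : reference radius (m)
        Returns (B_x, B_y) at the point (x, y).
        """
        z = (x + 1j * y) / r_ref
        n = np.arange(1, len(B_n) + 1)
        field = np.sum((np.asarray(B_n) + 1j * np.asarray(A_n)) * z ** (n - 1))
        return field.imag, field.real

    # Dipole of 1 T with a small normal sextupole error, evaluated at (x, y) = (5 mm, 0)
    bx, by = multipole_field(0.005, 0.0, B_n=[1.0, 0.0, 0.02], A_n=[0.0, 0.0, 0.0])
    ```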

  14. Simulation of hip fracture in sideways fall using a 3D finite element model of pelvis-femur-soft tissue complex with simplified representation of whole body.

    PubMed

    Majumder, Santanu; Roychowdhury, Amit; Pal, Subrata

    2007-12-01

    Hip fractures due to sideways falls are a worldwide health problem, especially among the elderly population. The objective of this study was to simulate a real-life sideways fall leading to hip fracture. To achieve this, a computed tomography (CT) scan-based three-dimensional (3D) finite element (FE) model of the pelvis-femur complex was developed using a wide range of mechanical properties in the bone of the complex. For impact absorption through large deformation, the surrounding soft tissue was also included in the FE model from CT scan data. To incorporate the inertia effect, the whole body was represented by a spring-mass-dashpot system. For a trochanteric soft tissue thickness of 14 mm, a body weight of 77.47 kg and an average hip impact velocity of 3.17 m/s, this detailed FE model could approximately simulate a sideways fall configuration and examine the femoral fracture situation. At the contact surface, the peak impact load was 8331 N. In spite of the presence of 14 mm thick trochanteric soft tissue, within the trochanteric zone the most compressive peak principal strain was 3.5%, which exceeds the ultimate compressive strain. The modeled trochanteric fracture was consistent with clinical findings and with the findings of previous studies. Further, this detailed FE model may be used to find the effect of trochanteric soft tissue thickness variations on peak impact force and peak strain in a sideways fall, and to simulate automobile side impact and backward fall situations.
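
    The lumped "spring-mass-dashpot" representation of the rest of the body mentioned above can be illustrated with a single-degree-of-freedom impact model. The sketch below uses the body mass and hip impact velocity quoted in the abstract but assumed contact stiffness and damping values, so it is a conceptual illustration of such a lumped model rather than the study's validated finite element setup.

    ```python
    def peak_impact_force(m=77.47, v0=3.17, k=5.0e4, c=1.5e3, dt=1e-5):
        """Single-degree-of-freedom spring-dashpot impact model.

        m  : effective body mass (kg)      -- value quoted in the abstract
        v0 : hip impact velocity (m/s)     -- value quoted in the abstract
        k  : contact stiffness (N/m)       -- illustrative assumption, not from the study
        c  : damping coefficient (N*s/m)   -- illustrative assumption, not from the study
        """
        x, v, f_peak = 0.0, v0, 0.0     # compression, compression rate, running force maximum
        while True:
            f = k * x + c * v           # compressive contact force from spring and dashpot
            if x > 0.0 and f <= 0.0:    # contact force vanishes: the mass rebounds
                break
            f_peak = max(f_peak, f)
            a = -f / m                  # Newton's second law (gravity neglected during impact)
            v += a * dt                 # semi-implicit Euler integration step
            x += v * dt
        return f_peak

    print(f"Peak contact force ~ {peak_impact_force():.0f} N")
    ```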

  15. Angry expressions strengthen the encoding and maintenance of face identity representations in visual working memory.

    PubMed

    Jackson, Margaret C; Linden, David E J; Raymond, Jane E

    2014-01-01

    Visual working memory (WM) for face identities is enhanced when faces express negative versus positive emotion. To determine the stage at which emotion exerts its influence on memory for person information, we isolated expression (angry/happy) to the encoding phase (Experiment 1; neutral test faces) or retrieval phase (Experiment 2; neutral study faces). WM was only enhanced by anger when expression was present at encoding, suggesting that retrieval mechanisms are not influenced by emotional expression. To examine whether emotional information is discarded on completion of encoding or sustained in WM, in Experiment 3 an emotional word categorisation task was inserted into the maintenance interval. Emotional congruence between word and face supported memory for angry but not for happy faces, suggesting that negative emotional information is preferentially sustained during WM maintenance. Our findings demonstrate that negative expressions exert sustained and beneficial effects on WM for faces that extend beyond encoding.

  16. Neural representations of faces and body parts in macaque and human cortex: a comparative FMRI study.

    PubMed

    Pinsk, Mark A; Arcaro, Michael; Weiner, Kevin S; Kalkus, Jan F; Inati, Souheil J; Gross, Charles G; Kastner, Sabine

    2009-05-01

    Single-cell studies in the macaque have reported selective neural responses evoked by visual presentations of faces and bodies. Consistent with these findings, functional magnetic resonance imaging studies in humans and monkeys indicate that regions in temporal cortex respond preferentially to faces and bodies. However, it is not clear how these areas correspond across the two species. Here, we directly compared category-selective areas in macaques and humans using virtually identical techniques. In the macaque, several face- and body part-selective areas were found located along the superior temporal sulcus (STS) and middle temporal gyrus (MTG). In the human, similar to previous studies, face-selective areas were found in ventral occipital and temporal cortex and an additional face-selective area was found in the anterior temporal cortex. Face-selective areas were also found in lateral temporal cortex, including the previously reported posterior STS area. Body part-selective areas were identified in the human fusiform gyrus and lateral occipitotemporal cortex. In a first experiment, both monkey and human subjects were presented with pictures of faces, body parts, foods, scenes, and man-made objects, to examine the response profiles of each category-selective area to the five stimulus types. In a second experiment, face processing was examined by presenting upright and inverted faces. By comparing the responses and spatial relationships of the areas, we propose potential correspondences across species. Adjacent and overlapping areas in the macaque anterior STS/MTG responded strongly to both faces and body parts, similar to areas in the human fusiform gyrus and posterior STS. Furthermore, face-selective areas on the ventral bank of the STS/MTG discriminated both upright and inverted faces from objects, similar to areas in the human ventral temporal cortex. Overall, our findings demonstrate commonalities and differences in the wide-scale brain organization between

  17. Prism Trees: An Efficient Representation for Manipulating and Displaying Polyhedra with Many Faces.

    DTIC Science & Technology

    1985-04-01

    then process them to find the intersection polygons of the two polyhedra, and to determine whether or not all the other faces are part of the... designed to deal explicitly with such polyhedra. They use "face" hierarchies that enclose the faces themselves in boxes. We present a new method... transformations. A last, but important remark: the approximation algorithm has been designed for surfaces of genus 0, which is an unfortunate

  18. The Representation and Processing of Familiar Faces in Dyslexia: Differences in Age of Acquisition Effects

    ERIC Educational Resources Information Center

    Smith-Spark, James H.; Moore, Viv

    2009-01-01

    Two under-explored areas of developmental dyslexia research, face naming and age of acquisition (AoA), were investigated. Eighteen dyslexic and 18 non-dyslexic university students named the faces of 50 well-known celebrities, matched for facial distinctiveness and familiarity. Twenty-five of the famous people were learned early in life, while the…

  19. The Representation of Information about Faces in the Temporal and Frontal Lobes

    ERIC Educational Resources Information Center

    Rolls, Edmund T.

    2007-01-01

    Neurophysiological evidence is described showing that some neurons in the macaque inferior temporal visual cortex have responses that are invariant with respect to the position, size and view of faces and objects, and that these neurons show rapid processing and rapid learning. Which face or object is present is encoded using a distributed…

  20. The Development of Sex Category Representation in Infancy: Matching of Faces and Bodies

    PubMed Central

    Hock, Alyson; Kangas, Ashley; Zieber, Nicole; Bhatt, Ramesh S.

    2016-01-01

    Sex is a significant social category, and adults derive information about it from both faces and bodies. Research indicates that young infants process sex category information in faces. However, no prior study has examined whether infants derive sex categories from bodies and match faces and bodies in terms of sex. In the current study, 5-month-olds exhibited a preference between sex-congruent (face and body of the same sex) and sex-incongruent (face and body of different sexes) images. In contrast, 3.5-month-olds failed to exhibit a preference. Thus, 5-month-olds process sex information from bodies and match it to facial information. However, younger infants’ failure to match suggests that there is a developmental change between 3.5 and 5 months of age in the processing of sex categories. These results indicate that rapid developmental changes lead to fairly sophisticated social information processing quite early in life. PMID:25621754

  1. Electrophysiological Correlates of Refreshing: Event-related Potentials Associated with Directing Reflective Attention to Face, Scene, or Word Representations.

    PubMed

    Johnson, Matthew R; McCarthy, Gregory; Muller, Kathleen A; Brudner, Samuel N; Johnson, Marcia K

    2015-09-01

    Refreshing is the component cognitive process of directing reflective attention to one of several active mental representations. Previous studies using fMRI suggested that refresh tasks involve a component process of initiating refreshing as well as the top-down modulation of representational regions central to refreshing. However, those studies were limited by fMRI's low temporal resolution. In this study, we used EEG to examine the time course of refreshing on the scale of milliseconds rather than seconds. ERP analyses showed that a typical refresh task does have a distinct electrophysiological response as compared to a control condition and includes at least two main temporal components: an earlier (∼400 msec) positive peak reminiscent of a P3 response and a later (∼800-1400 msec) sustained positivity over several sites reminiscent of the late directing attention positivity. Overall, the evoked potentials for refreshing representations from three different visual categories (faces, scenes, words) were similar, but multivariate pattern analysis showed that some category information was nonetheless present in the EEG signal. When related to previous fMRI studies, these results are consistent with a two-phase model, with the first phase dominated by frontal control signals involved in initiating refreshing and the second by the top-down modulation of posterior perceptual cortical areas that constitutes refreshing a representation. This study also lays the foundation for future studies of the neural correlates of reflective attention at a finer temporal resolution than is possible using fMRI.

  2. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  3. Under-Representation of Males in the Early Years: The Challenges Leaders Face

    ERIC Educational Resources Information Center

    Mistry, Malini; Sood, Krishan

    2013-01-01

    This article investigates why there appears to be an under-representation of males in comparison to their female colleagues in the Early Years (EY) sector, and the perception of male teachers progressing more quickly to leadership positions when they do enter this context. Using case studies of final year male students on an Initial Teacher…

  4. An interactive multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Zhang, Mei; Dong, Hui

    2013-03-01

    Progress in 3D display systems and user interaction technologies enables more effective visualization of 3D information. These systems yield a realistic representation of 3D objects and simplify our understanding of the complexity of 3D objects and the spatial relationships among them. In this paper, we describe an autostereoscopic multiview 3D display system with the capability of real-time user interaction. The design principle of this autostereoscopic multiview 3D display system is presented, together with the details of its hardware/software architecture. A prototype is built and tested based upon multiple projectors and a horizontal optical anisotropic display structure. Experimental results illustrate the effectiveness of this novel 3D display and user interaction system.

  5. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of

  6. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of

  7. Histogram of Gabor phase patterns (HGPP): a novel object representation approach for face recognition.

    PubMed

    Zhang, Baochang; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2007-01-01

    A novel object descriptor, histogram of Gabor phase pattern (HGPP), is proposed for robust face recognition. In HGPP, the quadrant-bit codes are first extracted from faces based on the Gabor transformation. Global Gabor phase pattern (GGPP) and local Gabor phase pattern (LGPP) are then proposed to encode the phase variations. GGPP captures the variations derived from the orientation changing of Gabor wavelet at a given scale (frequency), while LGPP encodes the local neighborhood variations by using a novel local XOR pattern (LXP) operator. They are both divided into nonoverlapping rectangular regions, from which spatial histograms are extracted and concatenated into an extended histogram feature to represent the original image. Finally, the recognition is performed by using the nearest-neighbor classifier with histogram intersection as the similarity measurement. The features of HGPP lie in two aspects: 1) HGPP can describe the general face images robustly without the training procedure; 2) HGPP encodes the Gabor phase information, while most previous face recognition methods exploit the Gabor magnitude information. In addition, the Fisher separation criterion is further used to improve the performance of HGPP by weighting the subregions of the image according to their discriminative powers. The proposed methods are successfully applied to face recognition, and the experiment results on the large-scale FERET and CAS-PEAL databases show that the proposed algorithms significantly outperform other well-known systems in terms of recognition rate.
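
    A compact sketch of the local half of the HGPP pipeline (quadrant-bit codes, the local XOR pattern operator, regional histograms, and histogram-intersection matching) is given below in NumPy. The Gabor decomposition that produces the complex responses, the global GGPP encoding, and the Fisher-criterion weighting of sub-regions are omitted, and the 4 x 4 region grid and 8-neighbor LXP layout are illustrative choices rather than the paper's exact configuration.

    ```python
    import numpy as np

    def quadrant_bits(gabor_response):
        """Quadrant-bit codes: sign bits of the real and imaginary parts of a complex Gabor response."""
        return ((gabor_response.real > 0).astype(np.uint8),
                (gabor_response.imag > 0).astype(np.uint8))

    def local_xor_pattern(bits):
        """8-bit LXP code: XOR of each pixel's bit with its 8 neighbors, packed into one byte."""
        h, w = bits.shape
        center = bits[1:-1, 1:-1]
        code = np.zeros((h - 2, w - 2), dtype=np.uint8)
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
        for k, (dy, dx) in enumerate(offsets):
            neighbor = bits[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            code |= ((center ^ neighbor).astype(np.uint8) << k)
        return code

    def region_histograms(code, grid=(4, 4)):
        """Concatenated 256-bin histograms from non-overlapping rectangular regions."""
        gh, gw = grid
        hs, ws = code.shape[0] // gh, code.shape[1] // gw
        hists = [np.bincount(code[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws].ravel(), minlength=256)
                 for i in range(gh) for j in range(gw)]
        return np.concatenate(hists).astype(float)

    def histogram_intersection(h1, h2):
        """Similarity used for nearest-neighbor matching of two HGPP-style histograms."""
        return np.minimum(h1, h2).sum()
    ```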

  8. Common and unique representations in pFC for face and place attractiveness.

    PubMed

    Pegors, Teresa K; Kable, Joseph W; Chatterjee, Anjan; Epstein, Russell A

    2015-05-01

    Although previous neuroimaging research has identified overlapping correlates of subjective value across different reward types in the ventromedial pFC (vmPFC), it is not clear whether this "common currency" evaluative signal extends to the aesthetic domain. To examine this issue, we scanned human participants with fMRI while they made attractiveness judgments of faces and places-two stimulus categories that are associated with different underlying rewards, have very different visual properties, and are rarely compared with each other. We found overlapping signals for face and place attractiveness in the vmPFC, consistent with the idea that this region codes a signal for value that applies across disparate reward types and across both economic and aesthetic judgments. However, we also identified a subregion of vmPFC within which activity patterns for face and place attractiveness were distinguishable, suggesting that some category-specific attractiveness information is retained in this region. Finally, we observed two separate functional regions in lateral OFC: one region that exhibited a category-unique response to face attractiveness and another region that responded strongly to faces but was insensitive to their value. Our results suggest that vmPFC supports a common mechanism for reward evaluation while also retaining a degree of category-specific information, whereas lateral OFC may be involved in basic reward processing that is specific to only some stimulus categories.

  9. Political attitudes bias the mental representation of a presidential candidate's face.

    PubMed

    Young, Alison I; Ratner, Kyle G; Fazio, Russell H

    2014-02-01

    Using a technique known as reverse-correlation image classification, we demonstrated that the face of Mitt Romney as represented in people's minds varies as a function of their attitudes toward Mitt Romney. Our findings provide evidence that attitudes bias how people see something as concrete and well learned as the face of a political candidate during an election. Practically, our findings imply that citizens may not merely interpret political information about a candidate to fit their opinion, but also may construct a political world in which they literally see candidates differently.
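
    Reverse-correlation image classification, the technique named above, can be summarized in a few lines: on each trial the observer chooses between the base face with a random noise pattern added and the same face with that pattern subtracted, and the classification image is the average of chosen minus rejected noise. The sketch below implements that generic procedure with a flat placeholder base image and a simulated observer; it does not use the study's stimuli, participants, or analysis pipeline, and the noise level and trial count are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def classification_image(base, n_trials=500, respond=None):
        """Two-alternative reverse correlation: average selected noise minus rejected noise.

        base    : 2D base face image with values in [0, 1]
        respond : callable(img_a, img_b) -> 0 or 1 giving the observer's choice;
                  the default is a stand-in observer that prefers a brighter upper half
        """
        if respond is None:
            respond = lambda a, b: int(b[:b.shape[0] // 2].mean() > a[:a.shape[0] // 2].mean())
        ci = np.zeros_like(base)
        for _ in range(n_trials):
            noise = rng.normal(0.0, 0.15, size=base.shape)
            img_plus = np.clip(base + noise, 0.0, 1.0)
            img_minus = np.clip(base - noise, 0.0, 1.0)
            choice = respond(img_plus, img_minus)
            ci += noise if choice == 0 else -noise
        return ci / n_trials   # add this to the base face to visualize the biased representation

    # Flat gray placeholder standing in for a candidate's photograph
    ci = classification_image(np.full((64, 64), 0.5))
    ```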

  10. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  11. 3D Stereo Data Visualization and Representation

    DTIC Science & Technology

    1994-09-01

    addition, the state of our minds, our psychological make-up, and human factors play a very important role in this process. 2.2.1.1 Ambient Mode and...

  12. An evolution from 3D face-centered-cubic ZnSnO3 nanocubes to 2D orthorhombic ZnSnO3 nanosheets with excellent gas sensing performance.

    PubMed

    Chen, Yuejiao; Yu, Ling; Li, Qing; Wu, Yan; Li, Qiuhong; Wang, Taihong

    2012-10-19

    We have successfully observed the development of three-dimensional (3D) face-centered-cubic ZnSnO(3) into two-dimensional (2D) orthorhombic ZnSnO(3) nanosheets, which is the first observation of 2D ZnSnO(3) nanostructures to date. The synthesis from 3D to 2D nanostructures is realized by the dual-hydrolysis-assisted liquid precipitation reaction and subsequent hydrothermal treatment. The time-dependent morphology indicates the transformation via a 'dissolution-recrystallization' mechanism, accompanied by a 'further growth' process. Furthermore, the 2D ZnSnO(3) nanosheets consist of smaller-sized nanoflakes. This further increases the specific surface area and facilitates their application in gas sensing. The 2D ZnSnO(3) nanosheets exhibit excellent gas sensing properties, especially through their ultra-fast response and recovery. When exposed to ethanol and acetone, the response time is as fast as 0.26 s and 0.18 s, respectively, and the detection limit can reach as low as 50 ppb for ethanol. All these results are much better than those reported so far. Our experimental results indicate an efficient approach to realize high-performance gas sensors.

  13. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  14. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; ...

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  15. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.

  16. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to that of the 1990s, the holographic concept is spreading into all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? For what purpose? At what level? In which subject matter? For whom?

  17. Integration of real-time 3D image acquisition and multiview 3D display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in 3D visualization of real-world objects and scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen can bring a realistic viewing experience to viewers, as if they were viewing a real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort has been lacking in studying the seamless integration of these two different aspects of 3D technology. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  18. Self assembled structures for 3D integration

    NASA Astrophysics Data System (ADS)

    Rao, Madhav

    Three-dimensional (3D) micro-scale structures attached to a silicon substrate have various applications in microelectronics. However, the formation of 3D structures using conventional micro-fabrication techniques is not efficient and requires precise control of processing parameters. Self assembly is a method for creating 3D structures that takes advantage of surface-area-minimization phenomena. Solder based self assembly (SBSA), the subject of this dissertation, uses solder as a facilitator in the formation of 3D structures from 2D patterns. Etching a sacrificial layer underneath a portion of the 2D pattern allows the solder reflow step to pull those areas out of the substrate plane, resulting in a folded 3D structure. Initial studies using the SBSA method demonstrated low yields in the formation of five different polyhedra. The failures in folding were primarily attributed to nonuniform solder deposition on the underlying metal pads. The dip soldering method was analyzed and subsequently refined. A modified dip soldering process provided improved yield among the polyhedra. Solder bridging, the joining of solder deposited on different metal patterns into a single entity, influenced the folding mechanism. In general, design parameters such as small gap spacings and thick metal pads were found to favor solder bridging for all patterns studied. Two types of soldering were analyzed: face soldering and edge soldering. Face soldering refers to the application of solder on the entire metal face. Edge soldering indicates application of solder only on the edges of the metal face. Mechanical grinding showed that face-soldered SBSA structures were void-free and robust in nature. In addition, the face-soldered 3D structures provide a consistent heat-resistant solder standoff height and serve as attachments in the integration of dissimilar electronic technologies. Face-soldered 3D structures were developed on the underlying conducting channel to determine the thermo-electric reliability of

  19. [Real time 3D echocardiography

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64-degree x 64-degree volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to increase its user-friendliness. Real-time 3D echocardiography could then become the essential tool for the understanding, diagnosis and management of patients.

  20. Intracortical and Thalamocortical Connections of the Hand and Face Representations in Somatosensory Area 3b of Macaque Monkeys and Effects of Chronic Spinal Cord Injuries.

    PubMed

    Chand, Prem; Jain, Neeraj

    2015-09-30

    Brains of adult monkeys with chronic lesions of dorsal columns of spinal cord at cervical levels undergo large-scale reorganization. Reorganization results in expansion of intact chin inputs, which reactivate neurons in the deafferented hand representation in the primary somatosensory cortex (area 3b), ventroposterior nucleus of the thalamus and cuneate nucleus of the brainstem. A likely contributing mechanism for this large-scale plasticity is sprouting of axons across the hand-face border. Here we determined whether such sprouting takes place in area 3b. We first determined the extent of intrinsic corticocortical connectivity between the hand and the face representations in normal area 3b. Small amounts of neuroanatomical tracers were injected in these representations close to the electrophysiologically determined hand-face border. Locations of the labeled neurons were mapped with respect to the detailed electrophysiological somatotopic maps and histologically determined hand-face border revealed in sections of the flattened cortex stained for myelin. Results show that intracortical projections across the hand-face border are few. In monkeys with chronic unilateral lesions of the dorsal columns and expanded chin representation, connections across the hand-face border were not different compared with normal monkeys. Thalamocortical connections from the hand and face representations in the ventroposterior nucleus to area 3b also remained unaltered after injury. The results show that sprouting of intrinsic connections in area 3b or the thalamocortical inputs does not contribute to large-scale cortical plasticity. Significance statement: Long-term injuries to dorsal spinal cord in adult primates result in large-scale somatotopic reorganization due to which chin inputs expand into the deafferented hand region. Reorganization takes place in multiple cortical areas, and thalamic and medullary nuclei. To what extent this brain reorganization due to dorsal column injuries

  1. AE3D

    SciTech Connect

    Spong, Donald A

    2016-06-20

    AE3D solves for the shear Alfven eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g. tokamak, reversed field pinch) or 3D (e.g. stellarator, helical reversed field pinch, tokamak with ripple). The equations solved are based on a reduced MHD model and sound wave coupling effects are not currently included.

  2. Quon 3D language for quantum information

    PubMed Central

    Liu, Zhengwei; Wozniakowski, Alex; Jaffe, Arthur M.

    2017-01-01

    We present a 3D topological picture-language for quantum information. Our approach combines charged excitations carried by strings, with topological properties that arise from embedding the strings in the interior of a 3D manifold with boundary. A quon is a composite that acts as a particle. Specifically, a quon is a hemisphere containing a neutral pair of open strings with opposite charge. We interpret multiquons and their transformations in a natural way. We obtain a type of relation, a string–genus “joint relation,” involving both a string and the 3D manifold. We use the joint relation to obtain a topological interpretation of the C∗-Hopf algebra relations, which are widely used in tensor networks. We obtain a 3D representation of the controlled NOT (CNOT) gate that is considerably simpler than earlier work, and a 3D topological protocol for teleportation. PMID:28167790

  3. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have since been neglected and 2D feature-based models are the predominant paradigm in object detection today. While such models have achieved outstanding bounding-box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time being robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]).

  4. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  5. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  6. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; ...

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  7. Bootstrapping 3D fermions

    SciTech Connect

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  8. The Digital Space Shuttle, 3D Graphics, and Knowledge Management

    NASA Technical Reports Server (NTRS)

    Gomez, Julian E.; Keller, Paul J.

    2003-01-01

    The Digital Shuttle is a knowledge management project that seeks to define symbiotic relationships between 3D graphics and formal knowledge representations (ontologies). 3D graphics provides geometric and visual content, in 2D and 3D CAD forms, and the capability to display systems knowledge. Because the data is so heterogeneous, and the interrelated data structures are complex, 3D graphics combined with ontologies provides mechanisms for navigating the data and visualizing relationships.

  9. Integration of 3D structure from disparity into biological motion perception independent of depth awareness.

    PubMed

    Wang, Ying; Jiang, Yi

    2014-01-01

    Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers' depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception.

  10. Venus in 3D

    NASA Technical Reports Server (NTRS)

    Plaut, Jeffrey J.

    1993-01-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images makes it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  11. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
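    The two analyses described above lend themselves to a compact numerical illustration. The following Python sketch is not the authors' code and uses a random matrix as a stand-in for the true photoacoustic imaging operator; it inspects the singular values of the operator and compares a minimum-norm (algebraic-style) reconstruction with an l1-promoting reconstruction computed by plain iterative soft thresholding (ISTA) on a sparse test object.

      import numpy as np

      rng = np.random.default_rng(0)
      n_meas, n_vox = 60, 200          # few measurements, many voxels (underdetermined)
      A = rng.standard_normal((n_meas, n_vox)) / np.sqrt(n_meas)

      x_true = np.zeros(n_vox)         # sparse "object": a few point-like targets
      x_true[rng.choice(n_vox, 5, replace=False)] = 1.0
      y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

      # 1) Singular values indicate how many object components are measurable.
      s = np.linalg.svd(A, compute_uv=False)
      print("measurable singular values (> 1% of max):", np.sum(s > 0.01 * s[0]))

      # 2a) Minimum-norm least-squares reconstruction (stand-in for an algebraic method).
      x_ls = np.linalg.pinv(A) @ y

      # 2b) l1-regularized reconstruction via ISTA (iterative soft thresholding).
      def ista(A, y, lam=0.01, n_iter=500):
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = x - A.T @ (A @ x - y) / L                           # gradient step
              x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
          return x

      x_l1 = ista(A, y)
      for name, x in [("least-squares", x_ls), ("l1 (ISTA)", x_l1)]:
          print(f"{name:>13s}  error = {np.linalg.norm(x - x_true):.3f}")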

  12. To What Degree Does Handling Concrete Molecular Models Promote the Ability to Translate and Coordinate between 2D and 3D Molecular Structure Representations? A Case Study with Algerian Students

    ERIC Educational Resources Information Center

    Mohamed-Salah, Boukhechem; Alain, Dumon

    2016-01-01

    This study aims to assess whether the handling of concrete ball-and-stick molecular models promotes translation between diagrammatic representations and a concrete model (or vice versa) and the coordination of the different types of structural representations of a given molecular structure. Forty-one Algerian undergraduate students were requested…

  13. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under view. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
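    As a reminder of the geometry behind the stereo-vision option mentioned above, the snippet below evaluates the textbook rectified-stereo relation Z = f·B/d (depth from focal length, baseline and disparity). The numbers are made up for illustration and are not taken from the cited system.

      def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
          """Depth (metres) of a point from its disparity in a rectified stereo pair."""
          if disparity_px <= 0:
              raise ValueError("disparity must be positive for a point in front of the rig")
          return f_px * baseline_m / disparity_px

      # Example: 800-pixel focal length, 10 cm baseline, 16-pixel disparity -> 5 m.
      print(depth_from_disparity(f_px=800.0, baseline_m=0.10, disparity_px=16.0))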

  14. RAG-3D: A search tool for RNA 3D substructures

    SciTech Connect

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.
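    The graph-based search idea can be illustrated generically; this is not the RAG-3D implementation, and the tiny graphs below are invented. Each structure is reduced to the connectivity of its secondary-structure elements, and a query substructure is matched as a subgraph of a database entry.

      import networkx as nx
      from networkx.algorithms import isomorphism

      # Database entry: five secondary-structure elements (nodes) with their observed connectivity.
      database_entry = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 5), (3, 5)])
      # Query substructure: a three-element junction (a triangle of connected elements).
      query = nx.Graph([("a", "b"), ("b", "c"), ("c", "a")])

      matcher = isomorphism.GraphMatcher(database_entry, query)
      print("query found as substructure:", matcher.subgraph_is_isomorphic())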

  15. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGES

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; ...

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  16. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from traditional map-making to a modern technology where information can be created, edited, managed and analyzed. Like any other model, maps are simplified representations of the real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS software and its extensions for 3D modeling and visualization and use them to depict a real-world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses the available ArcGIS suite of products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online will be used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is demonstrated.

  17. Development of 3D video and 3D data services for T-DMB

    NASA Astrophysics Data System (ADS)

    Yun, Kugjin; Lee, Hyun; Hur, Namho; Kim, Jinwoong

    2008-02-01

    In this paper, we present the motivation, system concept, and implementation details of stereoscopic 3D visual services on T-DMB. We have developed two types of 3D visual service: one is the '3D video service', which provides 3D depth perception for a video program by sending left- and right-view video streams, and the other is the '3D data service', which provides presentation of 3D objects overlaid on top of a 2D video program. We have developed several highly efficient and sophisticated transmission schemes for the delivery of 3D visual data in order to meet the system requirements, namely (1) minimization of the bitrate overhead to comply with the strict constraint of T-DMB channel bandwidth; (2) backward and forward compatibility with existing T-DMB; (3) maximization of the eye-catching effect of the 3D visual representation while reducing eye fatigue. We found that, in contrast to the conventional way of providing a stereo version of a program as a whole, the proposed scheme can lead to a variety of efficient and effective 3D visual services which can be adapted to many business models.

  18. Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The two hills in the distance, approximately one to two kilometers away, have been dubbed the 'Twin Peaks' and are of great interest to Pathfinder scientists as objects of future study. 3D glasses are necessary to identify surface detail. The white areas on the left hill, called the 'Ski Run' by scientists, may have been formed by hydrologic processes.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye'.

  19. 3D and beyond

    NASA Astrophysics Data System (ADS)

    Fung, Y. C.

    1995-05-01

    This conference on physiology and function covers a wide range of subjects, including the vasculature and blood flow; the flow of gas, water, and blood in the lung; neurological structure and function; modeling; and the motion and mechanics of organs. Many technologies are discussed. I believe that the list would include a robotic photographer, to hold the optical equipment in a precisely controlled way to obtain the images for the user. Why are 3D images needed? They are needed to achieve certain objectives through measurements of objects. For example, in order to improve performance in sports or the beauty of a person, we measure form, dimensions, appearance, and movement.

  20. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  1. The neural representation of the gender of faces in the primate visual system: A computer modeling study.

    PubMed

    Minot, Thomas; Dury, Hannah L; Eguchi, Akihiro; Humphreys, Glyn W; Stringer, Simon M

    2017-03-01

    We use an established neural network model of the primate visual system to show how neurons might learn to encode the gender of faces. The model consists of a hierarchy of 4 competitive neuronal layers with associatively modifiable feedforward synaptic connections between successive layers. During training, the network was presented with many realistic images of male and female faces, during which the synaptic connections were modified using biologically plausible local associative learning rules. After training, we found that different subsets of output neurons have learned to respond exclusively to either male or female faces. With the inclusion of short-range excitation within each neuronal layer to implement a self-organizing map architecture, neurons representing either male or female faces were clustered together in the output layer. This learning process is entirely unsupervised, as the gender of the face images is not explicitly labeled and provided to the network as a supervisory training signal. These simulations are extended to training the network on rotating faces. It is found that by using a trace learning rule incorporating a temporal memory trace of recent neuronal activity, neurons responding selectively to either male or female faces were also able to learn to respond invariantly over different views of the faces. This kind of trace learning has previously been shown to operate within the primate visual system by neurophysiological and psychophysical studies. The computer simulations described here predict that similar neurons encoding the gender of faces will be present within the primate visual system.
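    To make the two learning ingredients in this abstract concrete, here is a deliberately small Python sketch, not the authors' model: a single winner-take-all competitive layer trained with a local Hebbian update, plus the trace variant in which the postsynaptic term is a running average of recent activity, so that successive views of the same face drive the same output neuron.

      import numpy as np

      rng = np.random.default_rng(1)
      n_in, n_out = 64, 8                      # input pixels, output neurons (toy sizes)
      W = rng.random((n_out, n_in))
      W /= np.linalg.norm(W, axis=1, keepdims=True)

      def competitive_step(x, W, trace, lr=0.05, eta=0.8, use_trace=True):
          """One learning step: winner-take-all competition + (trace-)Hebbian update."""
          y = np.zeros(len(W))
          y[np.argmax(W @ x)] = 1.0                      # competition: single winner fires
          trace = eta * trace + (1.0 - eta) * y          # temporal memory trace of activity
          post = trace if use_trace else y               # trace rule vs. plain Hebbian rule
          W += lr * post[:, None] * x[None, :]           # local associative update
          W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weight vectors bounded
          return trace

      # Train on short "rotation sequences": consecutive noisy views of the same face.
      trace = np.zeros(n_out)
      for _ in range(200):
          base = rng.random(n_in)                        # one face identity
          for _view in range(5):                         # successive transformed views
              x = base + 0.1 * rng.random(n_in)
              trace = competitive_step(x, W, trace, use_trace=True)
          trace[:] = 0.0                                 # reset the trace between sequences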

  2. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  3. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    An area of rocky terrain near the landing site of the Sagan Memorial Station can be seen in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  4. Individuation Training with Other-Race Faces Reduces Preschoolers' Implicit Racial Bias: A Link between Perceptual and Social Representation of Faces in Children

    ERIC Educational Resources Information Center

    Xiao, Wen S.; Fu, Genyue; Quinn, Paul C.; Qin, Jinliang; Tanaka, James W.; Pascalis, Olivier; Lee, Kang

    2015-01-01

    The present study examined whether perceptual individuation training with other-race faces could reduce preschool children's implicit racial bias. We used an "angry = outgroup" paradigm to measure Chinese children's implicit racial bias against African individuals before and after training. In Experiment 1, children between 4 and 6 years…

  5. Urbanisation and 3d Spatial - a Geometric Approach

    NASA Astrophysics Data System (ADS)

    Duncan, E. E.; Rahman, A. Abdul

    2013-09-01

    Urbanisation creates immense competition for space; this may be attributed to an increase in population owing to domestic and external tourism. Most cities are constantly exploring all avenues for maximising their limited space. Hence, urban or city authorities need to plan, expand and use the three-dimensional (3D) space above, on and below the city surface. Property ownership and the geometric representation of this 3D city space are therefore major challenges. This research investigates the concept of a geometric, topological 3D spatial model capable of representing 3D volume parcels for man-made constructions above and below the 3D surface volume parcel. A review of spatial data models suggests that the 3D TIN (TEN) model is significant and can be used as a unified model. The concepts and the logical and physical models of 3D TIN for 3D volumes, using tetrahedrons as the base geometry, are presented and implemented to show man-made constructions above and below the surface parcel within a user-friendly graphical interface. Concepts for 3D topology and 3D analysis are discussed. Simulations of this model for 3D cadastre are implemented. This model can be adopted by most countries to enhance and streamline geometric 3D property ownership for urban centres. The 3D TIN concept for spatial modelling can be adopted for the LA_Spatial part of the Land Administration Domain Model (LADM) (ISO/TC211, 2012); this satisfies the concept of 3D volumes.
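    As a minimal illustration of the tetrahedral base geometry mentioned above (an invented helper, not part of the cited model), the volume formula V = |(b-a)·((c-a)×(d-a))| / 6 is all that is needed to aggregate the volume of a 3D parcel decomposed into tetrahedrons.

      import numpy as np

      def tetrahedron_volume(a, b, c, d) -> float:
          """Unsigned volume of the tetrahedron with vertices a, b, c, d (each a 3-vector)."""
          a, b, c, d = map(np.asarray, (a, b, c, d))
          return abs(np.dot(b - a, np.cross(c - a, d - a))) / 6.0

      # Example: this unit corner tetrahedron has volume 1/6.
      print(tetrahedron_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))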

  6. What Are the Learning Affordances of 3-D Virtual Environments?

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Lee, Mark J. W.

    2010-01-01

    This article explores the potential learning benefits of three-dimensional (3-D) virtual learning environments (VLEs). Drawing on published research spanning two decades, it identifies a set of unique characteristics of 3-D VLEs, which includes aspects of their representational fidelity and aspects of the learner-computer interactivity they…

  7. Intraoral 3D scanner

    NASA Astrophysics Data System (ADS)

    Kühmstedt, Peter; Bräuer-Burchardt, Christian; Munkelt, Christoph; Heinze, Matthias; Palme, Martin; Schmidt, Ingo; Hintersehr, Josef; Notni, Gunther

    2007-09-01

    Here a new set-up of a 3D scanning system for CAD/CAM in the dental industry is proposed. The system is designed for direct scanning of dental preparations within the mouth. The measuring process is based on a phase correlation technique in combination with fast fringe projection in a stereo arrangement. The novelty of the approach is characterized by the following features: a phase correlation between the phase values of the images of two cameras is used for the coordinate calculation. This works contrary to the usage of only phase values (phasogrammetry) or classical triangulation (phase values and camera image coordinate values) for the determination of the coordinates. The main advantage of the method is that the absolute value of the phase at each point does not directly determine the coordinate. Thus errors in the determination of the coordinates are prevented. Furthermore, using the epipolar geometry of the stereo-like arrangement, the phase unwrapping problem of fringe analysis can be solved. The endoscope-like measurement system contains one projection and two camera channels for illumination and observation of the object, respectively. The new system has a measurement field of nearly 25 mm × 15 mm. The user can measure two or three teeth at one time, so the system can be used for scanning single teeth up to bridge preparations. In the paper the first realization of the intraoral scanner is described.
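    The correspondence principle described above, matching equal projected-fringe phase along corresponding epipolar lines rather than triangulating from absolute phase, can be sketched in a few lines. This is a simplified illustration with synthetic, monotonic phase profiles, not the scanner's algorithm.

      import numpy as np

      width = 400
      x = np.arange(width, dtype=float)
      phase_cam1 = 0.05 * x                     # unwrapped fringe phase along an epipolar line, camera 1
      phase_cam2 = 0.05 * (x - 12.0)            # camera 2 sees the same phase pattern shifted by 12 pixels

      def match_by_phase(phi1, phi2):
          """For each camera-1 pixel, find the sub-pixel camera-2 column carrying the same phase."""
          cols2 = np.arange(len(phi2), dtype=float)
          # phi2 is monotonic here, so inverse interpolation yields the matching column directly.
          matched = np.interp(phi1, phi2, cols2, left=np.nan, right=np.nan)
          return matched - np.arange(len(phi1))     # per-pixel disparity (NaN where no match exists)

      disparity = match_by_phase(phase_cam1, phase_cam2)
      print(round(float(np.nanmean(disparity)), 2))  # ~12 pixels; triangulation would convert this to depth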

  8. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  9. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  10. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are clearly insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful to check relationships among large numbers of processes or processors) and the time chart (which is useful to check precise timing for synchronization) into a single 3D space. The 3D representation gives us a capability for direct and intuitive planning or understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology to enable easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), our prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  11. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitate and improve this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a Computer aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  12. 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical procedures. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  13. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" in a 3D image (analogous to a pixel in a 2D image) is physically located at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of complex 3D objects and the spatial relationships among them.

  14. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris is moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer, modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass., was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.
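    The wavelength-shift-to-speed step mentioned above is the non-relativistic Doppler relation v = c·(λ_obs − λ_rest)/λ_rest; a worked one-liner with illustrative (not measured) numbers follows.

      C_KM_S = 299_792.458                      # speed of light in km/s

      def doppler_velocity(lambda_obs_um: float, lambda_rest_um: float) -> float:
          """Line-of-sight velocity in km/s; positive means motion away from the observer."""
          return C_KM_S * (lambda_obs_um - lambda_rest_um) / lambda_rest_um

      # e.g. an infrared line at rest wavelength 26.00 um observed at 26.26 um -> ~3000 km/s
      print(round(doppler_velocity(26.26, 26.00)))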

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  15. 3D Geo: An Alternative Approach

    NASA Astrophysics Data System (ADS)

    Georgopoulos, A.

    2016-10-01

    The expression GEO is mostly used to denote a relation to the Earth. However, it should not be confined to what is related to the Earth's surface, as other objects, such as cultural heritage objects, also need three-dimensional representation and documentation. These include both tangible and intangible objects. In this paper the 3D data acquisition and 3D modelling of cultural heritage assets are briefly described and their significance is highlighted. Moreover, the organization of such information, related to monuments and artefacts, into relational databases and its use for purposes other than just geometric documentation are also described and presented. In order to help the reader understand the above, several characteristic examples are presented, their methodology is explained, and their results are evaluated.

  16. 3D Visualization of Cooperative Trajectories

    NASA Technical Reports Server (NTRS)

    Schaefer, John A.

    2014-01-01

    Aerodynamicists and biologists have long recognized the benefits of formation flight. When birds or aircraft fly in the upwash region of the vortex generated by leaders in a formation, induced drag is reduced for the trailing bird or aircraft, and efficiency improves. The major consequence of this is that fuel consumption can be greatly reduced. When two aircraft are separated by a large enough longitudinal distance, the aircraft are said to be flying in a cooperative trajectory. A simulation has been developed to model autonomous cooperative trajectories of aircraft; however, it does not provide any 3D representation of the multi-body system dynamics. The topic of this research is the development of an accurate visualization of the multi-body system observable in a 3D environment. This visualization includes two aircraft (lead and trail), a landscape for a static reference, and simplified models of the vortex dynamics and trajectories at several locations between the aircraft.

  17. 3D Elevation Program: summary for Vermont

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  18. 3D Elevation Program: summary for Nebraska

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  19. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.
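    One of the cues listed above, edge enhancement, is commonly derived from discontinuities in a rendered depth buffer. The toy sketch below illustrates that general idea only, not the paper's multi-pass renderer: it marks pixels where the depth gradient jumps, which an illustrative renderer would composite as dark strokes over the colour image.

      import numpy as np

      depth = np.ones((64, 64))
      depth[20:44, 20:44] = 0.5                           # a "building" nearer to the camera
      gy, gx = np.gradient(depth)                         # depth-buffer gradients
      edges = (np.hypot(gx, gy) > 0.1).astype(float)      # 1 where depth jumps -> draw a stroke there
      print(int(edges.sum()), "edge pixels to overlay")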

  20. Sensing and compressing 3-D models

    SciTech Connect

    Krumm, J.

    1998-02-01

    The goal of this research project was to create a passive and robust computer vision system for producing 3-D computer models of arbitrary scenes. Although the authors were unsuccessful in achieving the overall goal, several components of this research have shown significant potential. Of particular interest is the application of parametric eigenspace methods for planar pose measurement of partially occluded objects in gray-level images. The techniques presented provide a simple, accurate, and robust solution to the planar pose measurement problem. In addition, the representational efficiency of eigenspace methods used with gray-level features were successfully extended to binary features, which are less sensitive to illumination changes. The results of this research are presented in two papers that were written during the course of this project. The papers are included in sections 2 and 3. The first section of this report summarizes the 3-D modeling efforts.

  1. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  2. Spherical 3D isotropic wavelets

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2012-04-01

    Context. Future cosmological surveys will provide 3D large-scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the SFB decomposition of a 3D field, and to accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and, as a toy application, apply a denoising algorithm in wavelet space to the Virgo large-box cosmological simulations and find that we can successfully remove noise without much loss of the large-scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html
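    The wavelet-space denoising step applied in the paper can be illustrated with an ordinary Cartesian 3D wavelet transform from PyWavelets; note that this is a stand-in for, not an implementation of, the spherical Fourier-Bessel wavelet used by MRS3D. A noisy 3D field is transformed, its detail coefficients are hard-thresholded at roughly 3 sigma, and the transform is inverted.

      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      field = np.zeros((32, 32, 32))
      field[8:24, 8:24, 8:24] = 1.0                        # toy "large-scale structure"
      noisy = field + 0.3 * rng.standard_normal(field.shape)

      coeffs = pywt.wavedecn(noisy, wavelet="db2", level=3)
      threshold = 3 * 0.3                                  # ~3 sigma hard threshold on detail coefficients
      denoised_coeffs = [coeffs[0]] + [
          {k: pywt.threshold(v, threshold, mode="hard") for k, v in level.items()}
          for level in coeffs[1:]
      ]
      denoised = pywt.waverecn(denoised_coeffs, wavelet="db2")[:32, :32, :32]
      print("rms error, noisy   :", np.sqrt(np.mean((noisy - field) ** 2)))
      print("rms error, denoised:", np.sqrt(np.mean((denoised - field) ** 2)))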

  3. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  4. Quasi-Facial Communication for Online Learning Using 3D Modeling Techniques

    ERIC Educational Resources Information Center

    Wang, Yushun; Zhuang, Yueting

    2008-01-01

    Online interaction with 3D facial animation is an alternative way of face-to-face communication for distance education. 3D facial modeling is essential for virtual educational environments establishment. This article presents a novel 3D facial modeling solution that facilitates quasi-facial communication for online learning. Our algorithm builds…

  5. Effects of Presence, Copresence, and Flow on Learning Outcomes in 3D Learning Spaces

    ERIC Educational Resources Information Center

    Hassell, Martin D.; Goyal, Sandeep; Limayem, Moez; Boughzala, Imed

    2012-01-01

    The level of satisfaction and effectiveness of 3D virtual learning environments were examined. Additionally, 3D virtual learning environments were compared with face-to-face learning environments. Students that experienced higher levels of flow and presence also experienced more satisfaction but not necessarily more effectiveness with 3D virtual…

  6. INCORPORATING DYNAMIC 3D SIMULATION INTO PRA

    SciTech Connect

    Steven R Prescott; Curtis Smith

    2011-07-01

    Through continued advances in computational resources, development that was previously done by trial-and-error production is now performed through computer simulation. These virtual physical representations have the potential to provide accurate and valid modeling results and are being used in many different technical fields. Risk assessment now has the opportunity to use 3D simulation to improve analysis results and insights, especially for external event analysis. By using simulations, the modeler only has to determine the likelihood of an event without having to also predict the results of that event. The 3D simulation automatically determines not only the outcome of the event, but when those failures occur. How can we effectively incorporate 3D simulation into traditional PRA? Most PRA plant modeling is made up of components with different failure modes, probabilities, and rates. Typically, these components are grouped into various systems and then are modeled together (in different combinations) as a "system" with logic structures to form fault trees. Applicable fault trees are combined through scenarios, typically represented by event tree models. Though this method gives us failure results for a given model, it has limitations when it comes to time-based dependencies or dependencies that are coupled to physical processes which may themselves be space- or time-dependent. Since failures from a 3D simulation are naturally time-related, they should be used in that manner. In our simulation approach, traditional static models are converted into an equivalent state diagram representation with start states, probabilistically driven movements between states, and terminal states. As the state model is run repeatedly, it converges to the same results as the PRA model in cases where time-related factors are not important. In cases where timing considerations are important (e.g., when events are dependent upon each other), then the simulation approach will typically
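    The convergence claim above can be illustrated with a toy example (invented numbers, not the paper's tool): a two-component parallel system whose static fault-tree result is p_A·p_B is re-expressed as a start-to-terminal state walk and sampled repeatedly, and the Monte Carlo estimate approaches the analytic value when no timing effects are present.

      import random

      p_fail_A, p_fail_B = 0.1, 0.2          # per-demand failure probabilities (made up)
      analytic = p_fail_A * p_fail_B         # static fault-tree result: both components must fail

      def run_state_machine() -> bool:
          """Walk start -> component states -> terminal state; True means system failure."""
          a_failed = random.random() < p_fail_A
          b_failed = random.random() < p_fail_B
          return a_failed and b_failed       # terminal 'system failed' state reached

      random.seed(0)
      n = 200_000
      estimate = sum(run_state_machine() for _ in range(n)) / n
      print(f"simulation {estimate:.4f}  vs  analytic {analytic:.4f}")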

  7. Contacts de langues et representations (Language Contacts and Representations).

    ERIC Educational Resources Information Center

    Matthey, Marinette, Ed.

    1997-01-01

    Essays on language contact and the image of language, entirely in French, include: "Representations 'du' contexte et representations 'en' contexte? Eleves et enseignants face a l'apprentissage de la langue" ("Representations 'of' Context or Representations 'in' Context? Students and Teachers Facing Language Learning" (Laurent…

  8. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories-developed 3-D World Model Building capability, which provides users with an immersive, texture-rich 3-D model of their environment in minutes using a laptop and a color and depth camera.

  9. 3D Buckligami: Digital Matter

    NASA Astrophysics Data System (ADS)

    van Hecke, Martin; de Reus, Koen; Florijn, Bastiaan; Coulais, Corentin

    2014-03-01

    We present a class of elastic structures which exhibit collective buckling in 3D, and create these by a 3D printing/moulding technique. Our structures consist of a cubic lattice of anisotropic unit cells, and we show that their mechanical properties are programmable via the orientation of these unit cells.

  10. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories-developed 3-D World Model Building capability, which provides users with an immersive, texture-rich 3-D model of their environment in minutes using a laptop and a color and depth camera.

  11. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this code for earthquake location and global tomography efforts, and such codes are of great interest to the Earth Science community.

  12. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. A primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers, has provided an effective means of analyzing the prospects for commercialization.

  13. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread throughout the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to present the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants, and 26 talks covered the whole range of applications of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  14. Bringing 3D Printing to Geophysical Science Education

    NASA Astrophysics Data System (ADS)

    Boghosian, A.; Turrin, M.; Porter, D. F.

    2014-12-01

    3D printing technology has been embraced by many technical fields, and is rapidly making its way into people's homes and schools. While there is a growing educational and hobbyist community engaged in the STEM-focused technical and intellectual challenges associated with 3D printing, there is unrealized potential for the earth science community to use 3D printing to communicate scientific research to the public. Moreover, 3D printing offers scientists the opportunity to connect students and the public with novel visualizations of real data. As opposed to introducing terrestrial measurements through the use of colormaps and gradients, scientists can represent 3D concepts with 3D models, offering a more intuitive education tool. Furthermore, the tactile aspect of models makes geophysical concepts accessible to a wide range of learning styles like kinesthetic or tactile, and learners including both visually impaired and color-blind students. We present a workflow whereby scientists, students, and the general public will be able to 3D print their own versions of geophysical datasets, even adding time through layering to include a 4th dimension, for a "4D" print. This will enable scientists with unique and expert insights into the data to easily create the tools they need to communicate their research. It will allow educators to quickly produce teaching aids for their students. Most importantly, it will enable the students themselves to translate the 2D representation of geophysical data into a 3D representation of that same data, reinforcing spatial reasoning.
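
    A minimal sketch of the kind of workflow described above: converting a small gridded dataset (here a synthetic height field standing in for real geophysical data) into an ASCII STL surface that slicing software can ingest. The grid, scaling, and file name are illustrative; a watertight, printable solid would additionally need side walls and a base.

```python
import numpy as np

def grid_to_ascii_stl(z, dx=1.0, dy=1.0, z_scale=1.0, path="surface.stl"):
    """Triangulate a 2D height grid and write it as an ASCII STL surface.

    z: 2D array of heights (e.g. elevation or ice thickness), illustrative data only.
    """
    ny, nx = z.shape
    with open(path, "w") as f:
        f.write("solid surface\n")
        for j in range(ny - 1):
            for i in range(nx - 1):
                # Corners of one grid cell, split into two triangles.
                p00 = (i * dx, j * dy, z[j, i] * z_scale)
                p10 = ((i + 1) * dx, j * dy, z[j, i + 1] * z_scale)
                p01 = (i * dx, (j + 1) * dy, z[j + 1, i] * z_scale)
                p11 = ((i + 1) * dx, (j + 1) * dy, z[j + 1, i + 1] * z_scale)
                for tri in ((p00, p10, p11), (p00, p11, p01)):
                    a, b, c = (np.array(v) for v in tri)
                    n = np.cross(b - a, c - a)
                    norm = np.linalg.norm(n)
                    if norm > 0:
                        n = n / norm
                    f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n")
                    f.write("    outer loop\n")
                    for v in tri:
                        f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
                    f.write("    endloop\n  endfacet\n")
        f.write("endsolid surface\n")

if __name__ == "__main__":
    # Synthetic "topography" for illustration only.
    y, x = np.mgrid[0:20, 0:20]
    grid_to_ascii_stl(np.sin(x / 3.0) * np.cos(y / 3.0) * 3.0)
```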

  15. Towards a Normalised 3D Geovisualisation: The Viewpoint Management

    NASA Astrophysics Data System (ADS)

    Neuville, R.; Poux, F.; Hallot, P.; Billen, R.

    2016-10-01

    This paper deals with viewpoint management in 3D environments, considering an allocentric environment. The recent advances in computer science and the growing number of affordable remote sensors have led to impressive improvements in 3D visualisation. Despite some research on the analysis of visual variables used in 3D environments, 3D representation rules still lack real standardisation. In this paper we study the viewpoint as the first parameter to consider for a normalised visualisation of 3D data. Unlike in a 2D environment, the viewing direction in 3D is not simply fixed in a top-down direction. A non-optimal camera location means a poor 3D representation in terms of relayed information. Based on this statement, we propose a model, based on the analysis of the display pixels, that determines a viewpoint maximising the relayed information for a given kind of query. We developed an OpenGL prototype working on screen pixels that determines the optimal camera location using a screen-pixel colour algorithm. The viewpoint management constitutes a first step towards a normalised 3D geovisualisation.
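
    The screen-pixel scoring idea can be sketched without OpenGL. The toy below projects a labelled 3D point cloud onto a small virtual screen for several camera azimuths and scores each viewpoint by how many distinct object labels survive a crude depth test; the scene, pinhole camera, and scoring rule are illustrative stand-ins for the authors' screen-pixel colour algorithm.

```python
import numpy as np

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Build a simple camera rotation matrix (rows = right, up, forward)."""
    f = target - eye
    f = f / np.linalg.norm(f)
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)
    u = np.cross(r, f)
    return np.vstack([r, u, f])

def score_viewpoint(points, labels, eye, res=64):
    """Count distinct labels visible after a crude depth test on a res x res screen."""
    R = look_at(eye)
    cam = (points - eye) @ R.T                 # camera coordinates
    in_front = cam[:, 2] > 1e-6
    cam, labels = cam[in_front], labels[in_front]
    proj = cam[:, :2] / cam[:, 2:3]            # pinhole projection
    px = np.clip(((proj + 1.5) / 3.0 * res).astype(int), 0, res - 1)
    depth = np.full((res, res), np.inf)
    owner = np.full((res, res), -1)
    for (x, y), z, lab in zip(px, cam[:, 2], labels):
        if z < depth[y, x]:                    # nearest point wins the pixel
            depth[y, x], owner[y, x] = z, lab
    return len(np.unique(owner[owner >= 0]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical scene: three small clusters standing in for three objects.
    centers = np.array([[0, 0, 0], [2, 0, 0], [2, 0.5, 1]], dtype=float)
    points = np.vstack([c + 0.2 * rng.standard_normal((200, 3)) for c in centers])
    labels = np.repeat(np.arange(3), 200)
    for az in range(0, 360, 45):
        eye = 6.0 * np.array([np.cos(np.radians(az)), np.sin(np.radians(az)), 0.3])
        print(az, score_viewpoint(points, labels, eye))
```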

  16. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  17. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD/CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  18. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  19. 3D visualization of the human cerebral vasculature

    NASA Astrophysics Data System (ADS)

    Zrimec, Tatjana; Mander, Tom; Lambert, Timothy; Parker, Geoffrey

    1995-04-01

    Computer assisted 3D visualization of the human cerebro-vascular system can help to locate blood vessels during diagnosis and to approach them during treatment. Our aim is to reconstruct the human cerebro-vascular system from the partial information collected from a variety of medical imaging instruments and to generate a 3D graphical representation. This paper describes a tool developed for 3D visualization of cerebro-vascular structures. It also describes a symbolic approach to modeling vascular anatomy. The tool, called Ispline, is used to display the graphical information stored in a symbolic model of the vasculature. The vascular model was developed to assist image processing and image fusion. The model consists of a structural symbolic representation using frames and a geometrical representation of vessel shapes and vessel topology. Ispline has proved to be useful for visualizing both the synthetically constructed vessels of the symbolic model and the vessels extracted from a patient's MR angiograms.

  20. Motion estimation in the 3-D Gabor domain.

    PubMed

    Feng, Mu; Reed, Todd R

    2007-08-01

    Motion estimation methods can be broadly classified as being spatiotemporal or frequency domain in nature. The Gabor representation is an analysis framework providing localized frequency information. When applied to image sequences, the 3-D Gabor representation displays spatiotemporal/spatiotemporal-frequency (st/stf) information, enabling the application of robust frequency domain methods with adjustable spatiotemporal resolution. In this work, the 3-D Gabor representation is applied to motion analysis. We demonstrate that piecewise uniform translational motion can be estimated by using a uniform translation motion model in the st/stf domain. The resulting motion estimation method exhibits both good spatiotemporal resolution and substantial noise resistance compared to existing spatiotemporal methods. To form the basis of this model, we derive the signature of the translational motion in the 3-D Gabor domain. Finally, to obtain higher spatiotemporal resolution for more complex motions, a dense motion field estimation method is developed to find a motion estimate for every pixel in the sequence.
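
    The 'signature of the translational motion' referred to above is, in its standard textbook form, a plane constraint in the 3-D frequency domain; the statement below is that generic result (up to a constant factor), not the paper's full derivation in the Gabor domain.

```latex
% For a sequence undergoing uniform translation with velocity (v_x, v_y):
f(x, y, t) = f_0(x - v_x t,\, y - v_y t)
\;\;\Longrightarrow\;\;
F(\omega_x, \omega_y, \omega_t)
  = F_0(\omega_x, \omega_y)\,\delta\!\left(\omega_t + v_x \omega_x + v_y \omega_y\right),
% i.e. all spectral energy lies on the plane
\omega_t + v_x \omega_x + v_y \omega_y = 0,
% whose orientation determines the velocity (v_x, v_y).
```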

  1. Parallel CARLOS-3D code development

    SciTech Connect

    Putnam, J.M.; Kotulski, J.D.

    1996-02-01

    CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method of moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body of revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines, and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance, while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.

  2. 3D Human Motion Editing and Synthesis: A Survey

    PubMed Central

    Wang, Xin; Chen, Qiudi; Wang, Wanliang

    2014-01-01

    The ways to compute the kinematics and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395

  3. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
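
    A minimal sketch of the keypoint front end described above, using OpenCV's ORB detector, brute-force matching, and a RANSAC similarity fit as stand-ins for the paper's detector and pose-estimation stages. The recovered rotation angle, scale, and residual vertical disparity are crude proxies for the roll, scale, and vertical-misalignment quantities discussed in the abstract; thresholds and parameter values are arbitrary.

```python
import cv2
import numpy as np

def estimate_misalignment(left_path, right_path, min_matches=30):
    """Estimate rotation, scale, and residual vertical disparity between a stereo pair."""
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(left, None)
    kp_r, des_r = orb.detectAndCompute(right, None)
    if des_l is None or des_r is None:
        return None  # no keypoints found in one of the frames

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)
    if len(matches) < min_matches:
        return None  # frame rejected: keypoint constellation too sparse

    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])

    # Robust similarity fit (rotation + scale + translation); RANSAC drops bad matches.
    M, inliers = cv2.estimateAffinePartial2D(pts_l, pts_r, method=cv2.RANSAC)
    if M is None:
        return None
    scale = float(np.hypot(M[0, 0], M[1, 0]))
    roll_deg = float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))
    good = inliers.ravel().astype(bool)
    vertical_disparity = float(np.mean(pts_r[good, 1] - pts_l[good, 1]))
    return {"scale": scale, "roll_deg": roll_deg,
            "vertical_disparity_px": vertical_disparity}
```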

  4. CASTOR3D: linear stability studies for 2D and 3D tokamak equilibria

    NASA Astrophysics Data System (ADS)

    Strumberger, E.; Günter, S.

    2017-01-01

    The CASTOR3D code, which is currently under development, is able to perform linear stability studies for 2D and 3D, ideal and resistive tokamak equilibria in the presence of ideal and resistive wall structures and coils. For these computations ideal equilibria represented by concentric nested flux surfaces serve as input (e.g. computed with the NEMEC code). Solving an extended eigenvalue problem, the CASTOR3D code takes plasma inertia and wall resistivity into account simultaneously. The code is a hybrid of the CASTOR_3DW stability code and the STARWALL code. The former is an extended version of the CASTOR and CASTOR_FLOW codes. The latter is a linear 3D code computing the growth rates of resistive wall modes in the presence of multiply-connected wall structures. The CASTOR_3DW code and some parts of the STARWALL code have been reformulated in a general 3D flux coordinate representation that allows a choice between various types of flux coordinates. Furthermore, the implemented many-valued current potentials in the STARWALL part allow a correct treatment of the m = 0, n = 0 perturbation. In this paper, we outline the theoretical concept, and present some numerical results which illustrate the present status of the code and demonstrate its numerous application possibilities.
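
    At its core, the extended eigenvalue problem mentioned above has the structure of a generalized eigenvalue problem A x = λ B x, where eigenvalues with positive real part correspond to growing modes. The sketch below only illustrates that generic structure with small random matrices; it implements none of CASTOR3D's physics, and the matrices are hypothetical placeholders.

```python
import numpy as np
from scipy.linalg import eig

# Illustrative stand-ins for the operators entering the stability problem.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))                      # "force" operator (hypothetical)
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))    # "inertia + wall" operator (hypothetical)

# Solve A x = lambda B x; eigenvalues with positive real part indicate growing modes.
eigvals, eigvecs = eig(A, B)
growth_rates = np.sort(eigvals.real)[::-1]
print("largest growth rate:", growth_rates[0])
```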

  5. 3D Scan Systems Integration

    DTIC Science & Technology

    2007-11-02

    Final report (dated 5 Feb 98) for the US Defense Logistics Agency on DDFG-T2/P3: 3D Scan Systems Integration. Contract number: SPO100-95-D-1014; contractor: Ohio University; Delivery Order #0001; Delivery Order title: 3D Scan Systems Integration.

  6. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as

  7. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as

  8. Two Eyes, 3D: Stereoscopic Design Principles

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Subbarao, M.; Wyatt, R.

    2013-01-01

    Two Eyes, 3D is an NSF-funded research project about how people perceive highly spatial objects when shown with 2D or stereoscopic ("3D") representations. As part of the project, we produced a short film about SN 2011fe. The high-definition film has been rendered in both 2D and stereoscopic formats. It was developed according to a set of stereoscopic design principles we derived from the literature and past experience producing and studying stereoscopic films. Study participants take a pre- and post-test that involves a spatial cognition assessment and scientific knowledge questions about Type Ia supernovae. For the evaluation, participants use iPads in order to record spatial manipulation of the device and look for elements of embodied cognition. We will present early results and also describe the stereoscopic design principles and the rationale behind them. All of our content and software is available under open source licenses. More information is at www.twoeyes3d.org.

  9. Emerging Applications of Bedside 3D Printing in Plastic Surgery

    PubMed Central

    Chae, Michael P.; Rozen, Warren M.; McMenamin, Paul G.; Findlay, Michael W.; Spychal, Robert T.; Hunter-Smith, David J.

    2015-01-01

    Modern imaging techniques are an essential component of preoperative planning in plastic and reconstructive surgery. However, conventional modalities, including three-dimensional (3D) reconstructions, are limited by their representation on 2D workstations. 3D printing, also known as rapid prototyping or additive manufacturing, was once the province of industry to fabricate models from a computer-aided design (CAD) in a layer-by-layer manner. The early adopters in clinical practice have embraced the medical imaging-guided 3D-printed biomodels for their ability to provide tactile feedback and a superior appreciation of visuospatial relationship between anatomical structures. With increasing accessibility, investigators are able to convert standard imaging data into a CAD file using various 3D reconstruction software packages and ultimately fabricate 3D models using 3D printing techniques, such as stereolithography, multijet modeling, selective laser sintering, binder jet technique, and fused deposition modeling. However, many clinicians have questioned whether the cost-to-benefit ratio justifies its ongoing use. The cost and size of 3D printers have rapidly decreased over the past decade in parallel with the expiration of key 3D printing patents. Significant improvements in clinical imaging and user-friendly 3D software have permitted computer-aided 3D modeling of anatomical structures and implants without outsourcing in many cases. These developments offer immense potential for the application of 3D printing at the bedside for a variety of clinical applications. In this review, existing uses of 3D printing in plastic surgery practice spanning the spectrum from templates for facial transplantation surgery through to the formation of bespoke craniofacial implants to optimize post-operative esthetics are described. Furthermore, we discuss the potential of 3D printing to become an essential office-based tool in plastic surgery to assist in preoperative planning, developing

  10. Emerging Applications of Bedside 3D Printing in Plastic Surgery.

    PubMed

    Chae, Michael P; Rozen, Warren M; McMenamin, Paul G; Findlay, Michael W; Spychal, Robert T; Hunter-Smith, David J

    2015-01-01

    Modern imaging techniques are an essential component of preoperative planning in plastic and reconstructive surgery. However, conventional modalities, including three-dimensional (3D) reconstructions, are limited by their representation on 2D workstations. 3D printing, also known as rapid prototyping or additive manufacturing, was once the province of industry to fabricate models from a computer-aided design (CAD) in a layer-by-layer manner. The early adopters in clinical practice have embraced the medical imaging-guided 3D-printed biomodels for their ability to provide tactile feedback and a superior appreciation of visuospatial relationship between anatomical structures. With increasing accessibility, investigators are able to convert standard imaging data into a CAD file using various 3D reconstruction software packages and ultimately fabricate 3D models using 3D printing techniques, such as stereolithography, multijet modeling, selective laser sintering, binder jet technique, and fused deposition modeling. However, many clinicians have questioned whether the cost-to-benefit ratio justifies its ongoing use. The cost and size of 3D printers have rapidly decreased over the past decade in parallel with the expiration of key 3D printing patents. Significant improvements in clinical imaging and user-friendly 3D software have permitted computer-aided 3D modeling of anatomical structures and implants without outsourcing in many cases. These developments offer immense potential for the application of 3D printing at the bedside for a variety of clinical applications. In this review, existing uses of 3D printing in plastic surgery practice spanning the spectrum from templates for facial transplantation surgery through to the formation of bespoke craniofacial implants to optimize post-operative esthetics are described. Furthermore, we discuss the potential of 3D printing to become an essential office-based tool in plastic surgery to assist in preoperative planning, developing

  11. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as

  12. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as

  13. Coupled Dictionary Learning for the Detail-Enhanced Synthesis of 3-D Facial Expressions.

    PubMed

    Liang, Haoran; Liang, Ronghua; Song, Mingli; He, Xiaofei

    2016-04-01

    The desire to reconstruct 3-D face models with expressions from 2-D face images fosters increasing interest in addressing the problem of face modeling. This task is important and challenging in the field of computer animation. Facial contours and wrinkles are essential to generate a face with a certain expression; however, these details are generally ignored or are not seriously considered in previous studies on face model reconstruction. Thus, we employ coupled radial basis function networks to derive an intermediate 3-D face model from a single 2-D face image. To optimize the 3-D face model further through landmarks, a coupled dictionary that is related to 3-D face models and their corresponding 3-D landmarks is learned from the given training set through local coordinate coding. Another coupled dictionary is then constructed to bridge the 2-D and 3-D landmarks for the transfer of vertices on the face model. As a result, the final 3-D face can be generated with the appropriate expression. In the testing phase, the 2-D input faces are converted into 3-D models that display different expressions. Experimental results indicate that the proposed approach to facial expression synthesis can obtain model details more effectively than previous methods can.
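
    A heavily simplified sketch of the coupled-dictionary idea used to bridge 2-D and 3-D landmarks: paired landmark vectors are concatenated, one dictionary is learned over the joint space, and at test time a sparse code computed from the 2-D half alone reconstructs the 3-D half. The data, dimensions, and the scikit-learn routines are stand-ins for the paper's local coordinate coding formulation.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
n_pairs, d2, d3, n_atoms = 200, 2 * 68, 3 * 68, 50   # hypothetical landmark sizes

# Synthetic paired training data standing in for aligned 2-D and 3-D landmarks.
latent = rng.standard_normal((n_pairs, 10))
X2 = latent @ rng.standard_normal((10, d2))
X3 = latent @ rng.standard_normal((10, d3))

# Learn one dictionary over the concatenated (2-D | 3-D) landmark vectors.
joint = DictionaryLearning(n_components=n_atoms, transform_algorithm="lasso_lars",
                           random_state=0, max_iter=20).fit(np.hstack([X2, X3]))
D2, D3 = joint.components_[:, :d2], joint.components_[:, d2:]

def landmarks_2d_to_3d(x2):
    """Sparse-code a 2-D landmark vector against D2, then decode 3-D with D3."""
    coder = SparseCoder(dictionary=D2, transform_algorithm="lasso_lars")
    code = coder.transform(x2.reshape(1, -1))
    return (code @ D3).ravel()

print(landmarks_2d_to_3d(X2[0]).shape)   # -> (204,), i.e. 68 recovered 3-D points
```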

  14. 3D polymer scaffold arrays.

    PubMed

    Simon, Carl G; Yang, Yanyin; Dorsey, Shauna M; Ramalingam, Murugan; Chatterjee, Kaushik

    2011-01-01

    We have developed a combinatorial platform for fabricating tissue scaffold arrays that can be used for screening cell-material interactions. Traditional research involves preparing samples one at a time for characterization and testing. Combinatorial and high-throughput (CHT) methods lower the cost of research by reducing the amount of time and material required for experiments by combining many samples into miniaturized specimens. In order to help accelerate biomaterials research, many new CHT methods have been developed for screening cell-material interactions where materials are presented to cells as a 2D film or surface. However, biomaterials are frequently used to fabricate 3D scaffolds, cells exist in vivo in a 3D environment and cells cultured in a 3D environment in vitro typically behave more physiologically than those cultured on a 2D surface. Thus, we have developed a platform for fabricating tissue scaffold libraries where biomaterials can be presented to cells in a 3D format.

  15. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three-dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data-adaptive solutions for 3D autofocus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation, which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes, allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  16. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  17. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age and facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition.
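
    The triangulation imaging principle the system relies on reduces, in the standard rectified two-view case, to the usual depth-from-disparity relation; the expression below is that generic textbook form, not the paper's specific calibration model.

```latex
% Rectified two-view triangulation:
%   f : focal length,  B : baseline between the two optical centres,
%   d : disparity (or projected-pattern offset) measured on the sensor,
%   (x, y) : image coordinates of the measured point.
Z = \frac{f\,B}{d}, \qquad X = \frac{x\,Z}{f}, \qquad Y = \frac{y\,Z}{f}
```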

  18. An Effective 3D Ear Acquisition System

    PubMed Central

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age and facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition. PMID:26061553

  19. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  20. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  1. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing that allows a solid object to be obtained from a 3D model created with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes that would be quite difficult to produce with dedicated conventional facilities. Because a 3D print is built up by superposing one layer on another, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, differing in the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope that is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterise exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it has been necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed afterwards. A further issue is the resolution of the printed material, which is expressed in terms of layers
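
    Given the finite printable volume quoted above (10×10×12 inches), the number of segments a large component must be split into can be estimated with a ceiling division per axis; the part dimensions and margin in the sketch are made up for illustration.

```python
import math

BUILD_VOLUME_IN = (10.0, 10.0, 12.0)          # x, y, z printable volume from the text

def segments_needed(part_dims_in, margin_in=0.2):
    """Minimum number of pieces per axis, leaving a small margin for post-processing."""
    return tuple(math.ceil(d / (v - margin_in))
                 for d, v in zip(part_dims_in, BUILD_VOLUME_IN))

# Hypothetical component roughly 12 x 12 x 30 inches.
print(segments_needed((12.0, 12.0, 30.0)))    # -> (2, 2, 3), i.e. up to 12 pieces
```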

  2. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  3. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  4. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies, such as interactive 3D games, are becoming attractive for movie theater operators. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig, rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  5. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  6. 3-D world modeling for an autonomous robot

    SciTech Connect

    Goldstein, M.; Pin, F.G.; Weisbin, C.R.

    1987-08-01

    This paper presents a methodology for a concise representation of the 3-D world model for a mobile robot, using range data. The process starts with the segmentation of the scene into "objects" that are given a unique label, based on principles of range continuity. Then the external surface of each object is partitioned into homogeneous surface patches. Contours of surface patches in 3-D space are identified by estimating the normal and curvature associated with each pixel. The resulting surface patches are then classified as planar, convex or concave. Since the world model uses a volumetric representation for the 3-D environment, planar surfaces are represented by thin volumetric polyhedra. Spherical and cylindrical surfaces are extracted and represented by appropriate volumetric primitives. All other surfaces are represented using the boolean union of spherical volumes (as described in a separate paper by the same authors). The result is a general, concise representation of the external 3-D world, which allows for efficient and robust 3-D object recognition. 20 refs., 14 figs.
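
    The planar/convex/concave labelling of surface patches can be illustrated with a small least-squares quadric fit over a range-image window: the sign of the bending term is a proxy for the mean curvature at the patch centre. The synthetic patches, the flatness threshold, and the sign convention (range increasing away from the sensor) are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def classify_patch(z, flat_tol=1e-3):
    """Fit z ~ a*x^2 + b*xy + c*y^2 + d*x + e*y + f over a range-image window
    and classify the patch from the sign of the bending term (a + c)."""
    h, w = z.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    x, y = x - x.mean(), y - y.mean()
    A = np.column_stack([x.ravel()**2, (x * y).ravel(), y.ravel()**2,
                         x.ravel(), y.ravel(), np.ones(z.size)])
    coeff, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    bend = coeff[0] + coeff[2]                     # proportional to mean curvature at centre
    if abs(bend) < flat_tol:
        return "planar"
    # Assumed convention: range grows away from the sensor, so a bulge toward
    # the sensor puts the closest (smallest-range) point at the patch centre.
    return "convex" if bend > 0 else "concave"

if __name__ == "__main__":
    yy, xx = np.mgrid[-5:6, -5:6].astype(float)
    print(classify_patch(0.1 * xx + 0.05 * yy + 3.0))        # tilted plane -> planar
    print(classify_patch(0.02 * (xx**2 + yy**2) + 3.0))      # centre nearest -> convex
    print(classify_patch(-0.02 * (xx**2 + yy**2) + 3.0))     # centre farthest -> concave
```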

  7. Communicating Experience of 3D Space: Mathematical and Everyday Discourse

    ERIC Educational Resources Information Center

    Morgan, Candia; Alshwaikh, Jehad

    2012-01-01

    In this article we consider data arising from student-teacher-researcher interactions taking place in the context of an experimental teaching program making use of multiple modes of communication and representation to explore three-dimensional (3D) shape. As teachers/researchers attempted to support student use of a Logo-like formal language for…

  8. Demonstration of a 3D vision algorithm for space applications

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P. (Editor)

    1987-01-01

    This paper reports an extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using movement invariants as features of object representation. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.

  9. Teaching 3-D Geometry--The Multi Representational Way

    ERIC Educational Resources Information Center

    Kalbitzer, Sonja; Loong, Esther

    2013-01-01

    Many students have difficulties in geometric and spatial thinking (see Pittalis & Christou, 2010). Students who are asked to construct models of geometric thought not previously learnt may be forced into rote learning and only gain temporary or superficial success (Van de Walle & Folk, 2008, p. 431). Therefore it is imperative for…

  10. Macrophage podosomes go 3D.

    PubMed

    Van Goethem, Emeline; Guiet, Romain; Balor, Stéphanie; Charrière, Guillaume M; Poincloux, Renaud; Labrousse, Arnaud; Maridonneau-Parini, Isabelle; Le Cabec, Véronique

    2011-01-01

    Macrophage tissue infiltration is a critical step in the immune response against microorganisms and is also associated with disease progression in chronic inflammation and cancer. Macrophages are constitutively equipped with specialized structures called podosomes dedicated to extracellular matrix (ECM) degradation. We recently reported that these structures play a critical role in trans-matrix mesenchymal migration mode, a protease-dependent mechanism. Podosome molecular components and their ECM-degrading activity have been extensively studied in two dimensions (2D), yet very little is known about their fate in three-dimensional (3D) environments. Therefore, localization of podosome markers and proteolytic activity were carefully examined in human macrophages performing mesenchymal migration. Using our gelled collagen I 3D matrix model to obligate human macrophages to perform mesenchymal migration, classical podosome markers including talin, paxillin, vinculin, gelsolin, cortactin were found to accumulate at the tip of F-actin-rich cell protrusions together with β1 integrin and CD44 but not β2 integrin. Macrophage proteolytic activity was observed at podosome-like protrusion sites using confocal fluorescence microscopy and electron microscopy. The formation of migration tunnels by macrophages inside the matrix was accomplished by degradation, engulfment and mechanical compaction of the matrix. In addition, videomicroscopy revealed that 3D F-actin-rich protrusions of migrating macrophages were as dynamic as their 2D counterparts. Overall, the specifications of 3D podosomes resembled those of 2D podosome rosettes rather than those of individual podosomes. This observation was further supported by the aspect of 3D podosomes in fibroblasts expressing Hck, a master regulator of podosome rosettes in macrophages. In conclusion, human macrophage podosomes go 3D and take the shape of spherical podosome rosettes when the cells perform mesenchymal migration. This work

  11. 3D Printed Bionic Nanodevices.

    PubMed

    Kong, Yong Lin; Gupta, Maneesh K; Johnson, Blake N; McAlpine, Michael C

    2016-06-01

    The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and 'living' platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with the

  12. Petal, terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at the lower area of this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. The metallic object at lower right is part of the lander's low-gain antenna. This image is part of a 3D 'monster


  13. 3D Model Generation From the Engineering Drawing

    NASA Astrophysics Data System (ADS)

    Vaský, Jozef; Eliáš, Michal; Bezák, Pavol; Červeňanská, Zuzana; Izakovič, Ladislav

    2010-01-01

    The contribution deals with the transformation of engineering drawings in paper form into a 3D computer representation. A 3D computer model can be further processed in a CAD/CAM system, it can be modified and archived, and a technical drawing can then be generated from it as well. The transformation process from paper form to digital form is complex and difficult, particularly owing to the different types of drawings, the forms of displayed objects, and the errors and deviations from technical standards encountered. This contribution describes an algorithm for generating a 3D model from an orthogonal vector input representing a simplified technical drawing of a rotational part. The algorithm was experimentally implemented as an ObjectARX application in the AutoCAD system, and a test sample representing the rotational part was used for verification.
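    As a rough illustration of the kind of geometry such a system must produce (not the ObjectARX/AutoCAD implementation described above), the Python sketch below revolves a hypothetical 2D profile of a rotational part around the z-axis to obtain a simple triangle mesh; the profile coordinates and segment count are made-up values.

```python
# Minimal sketch: revolve a 2D profile of a rotational part around the z-axis
# to obtain a 3D triangle mesh. Not the ObjectARX implementation described
# above; profile points and segment count are hypothetical.
import math

def revolve_profile(profile, segments=36):
    """profile: list of (radius, z) pairs tracing the part outline."""
    # Generate a ring of vertices for every profile point.
    vertices = []
    for r, z in profile:
        for k in range(segments):
            theta = 2.0 * math.pi * k / segments
            vertices.append((r * math.cos(theta), r * math.sin(theta), z))
    # Connect consecutive rings with two triangles per quad.
    triangles = []
    for i in range(len(profile) - 1):
        for k in range(segments):
            a = i * segments + k
            b = i * segments + (k + 1) % segments
            c = (i + 1) * segments + k
            d = (i + 1) * segments + (k + 1) % segments
            triangles.append((a, b, d))
            triangles.append((a, d, c))
    return vertices, triangles

# Example: a simple stepped shaft (radii and heights are made up).
verts, tris = revolve_profile([(10, 0), (10, 20), (6, 20), (6, 50)])
print(len(verts), "vertices,", len(tris), "triangles")
```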

  14. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain-modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  15. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on NoSQL database is proposed in this paper. The framework supports import and export of 3D city model according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operation. For visualization, a multiple 3D city representation structure CityTree is implemented within the framework to support dynamic LODs based on user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up the querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
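    To make the separation between semantic queries and Map-Reduce style geometry processing concrete, the following Python sketch runs a toy map/reduce over a handful of hypothetical building records; the record schema, footprint-area computation and district grouping are illustrative assumptions, not the CityGML/NoSQL implementation of the paper.

```python
# Toy illustration of splitting "semantic" queries from "geometric"
# map-reduce processing of a 3D city model. Schema and values are assumptions.
from functools import reduce

buildings = [  # hypothetical city model records
    {"name": "A", "district": "north", "height": 25.0,
     "footprint": [(0, 0), (10, 0), (10, 8), (0, 8)]},
    {"name": "B", "district": "north", "height": 12.0,
     "footprint": [(0, 0), (6, 0), (6, 5), (0, 5)]},
]

def polygon_area(pts):
    """Shoelace formula for a simple 2D footprint polygon."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# "Semantic" analysis: a plain attribute query.
tall = [b["name"] for b in buildings if b["height"] > 20.0]

# "Geometric" analysis: map each building to (district, area), then reduce.
mapped = [(b["district"], polygon_area(b["footprint"])) for b in buildings]

def reducer(acc, kv):
    key, value = kv
    acc[key] = acc.get(key, 0.0) + value
    return acc

area_by_district = reduce(reducer, mapped, {})

print(tall)                # ['A']
print(area_by_district)    # {'north': 110.0}
```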

  16. Optimization Techniques for 3D Graphics Deployment on Mobile Devices

    NASA Astrophysics Data System (ADS)

    Koskela, Timo; Vatjus-Anttila, Jarkko

    2015-03-01

    3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and a richer use context, but also raises performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.

  17. Methodology of the determination of the uncertainties by using the biometric device the broadway 3D

    NASA Astrophysics Data System (ADS)

    Jasek, Roman; Talandova, Hana; Adamek, Milan

    2016-06-01

    Biometric identification by face is among the most widely used methods of biometric identification. Because it provides faster and more accurate identification, it has been implemented in the security area. A 3D face reader from the manufacturer Broadway was used for the measurements. It is equipped with a 3D camera system that uses structured-light scanning and saves the template as a 3D model of the face. The obtained data were evaluated with the Turnstile Enrolment Application (TEA) software. The measurements used the Broadway 3D face reader: first, the person was scanned and stored in the database; the person was then compared with the stored template in the database for each method. Finally, a measure of reliability was evaluated for the Broadway 3D face reader.

  18. A Prototype Digital Library for 3D Collections: Tools To Capture, Model, Analyze, and Query Complex 3D Data.

    ERIC Educational Resources Information Center

    Rowe, Jeremy; Razdan, Anshuman

    The Partnership for Research in Spatial Modeling (PRISM) project at Arizona State University (ASU) developed modeling and analytic tools to respond to the limitations of two-dimensional (2D) data representations perceived by affiliated discipline scientists, and to take advantage of the enhanced capabilities of three-dimensional (3D) data that…

  19. From Surface Data to 3D Geologic Maps

    NASA Astrophysics Data System (ADS)

    Dhont, D.; Luxey, P.; Longuesserre, V.; Monod, B.; Guillaume, B.

    2008-12-01

    New trends in earth sciences are mostly related to technologies allowing graphical representations of the geology in 3D. However, the concept of a 3D geologic map is commonly misused. For instance, displays of geologic maps draped onto a DEM in rotating perspective views have been misleadingly called 3D geologic maps, but this still cannot provide any volumetric underground information as a true 3D geologic map should. Here, we present a way to produce mathematically and geometrically correct 3D geologic maps constituted by the volume and shape of all geologic features of a given area. The originality of the method is that it is based on the integration of surface data only, consisting of (1) geologic maps, (2) satellite images, (3) DEM and (4) bedding dips and strikes. To generate 3D geologic maps, we used a 3D geologic modeler that combines and extrapolates the surface information into a coherent 3D data set. The significance of geometrically correct 3D geologic maps is demonstrated for various geologic settings and applications. 3D models are of primary importance for educational purposes because they reveal features that standard 2D geologic maps by themselves could not show. The 3D visualization helps in the understanding of the geometrical relationship between the different geologic features and, in turn, in the quantification of the geology at the regional scale. Furthermore, given the logistical challenges associated with modern oil and mineral exploration in remote and rugged terrain, these volume-based models can provide geological and commercial insight prior to seismic evaluation.

  20. The World of 3-D.

    ERIC Educational Resources Information Center

    Mayshark, Robin K.

    1991-01-01

    Students explore three-dimensional properties by creating red and green wall decorations related to Christmas. Students examine why images seem to vibrate when red and green pieces are small and close together. Instructions to conduct the activity and construct 3-D glasses are given. (MDH)

  1. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  2. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  3. Object-oriented urban 3D spatial data model organization method

    NASA Astrophysics Data System (ADS)

    Li, Jing-wen; Li, Wen-qing; Lv, Nan; Su, Tao

    2015-12-01

    This paper combines the 3D data model with an object-oriented organization method and puts forward an object-oriented model of 3D data. The approach allows a city 3D model to quickly build logical semantic expressions and models, solves the city 3D spatial-information representation problem of one location carrying multiple properties and one property belonging to multiple locations, designs the spatial object structures of point, line, polygon and body for a city 3D spatial database, and provides a new approach to city 3D GIS modeling and organization management.
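    A minimal sketch of what such an object-oriented point/line/polygon/body organization could look like is given below; the class layout, the shared geometry identifiers and the property dictionaries are illustrative assumptions rather than the authors' actual schema.

```python
# Illustrative object-oriented point/line/polygon/body layout for a 3D city
# database; not the authors' schema, all names and fields are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Point3D:
    x: float
    y: float
    z: float

@dataclass
class Line3D:
    points: List[Point3D]

@dataclass
class Polygon3D:
    ring: List[Point3D]        # closed boundary of a planar face

@dataclass
class Body3D:
    faces: List[Polygon3D]     # a solid bounded by polygonal faces

@dataclass
class CityObject:
    geometry_id: str                                            # one location ...
    properties: Dict[str, str] = field(default_factory=dict)    # ... many properties

# One geometry carrying several property sets, and one property value
# ("residential") attached to several geometries.
geometries = {"blk-1": Body3D(faces=[]), "blk-2": Body3D(faces=[])}
objects = [
    CityObject("blk-1", {"land_use": "residential", "owner": "city"}),
    CityObject("blk-1", {"cadastre": "parcel 42"}),
    CityObject("blk-2", {"land_use": "residential"}),
]
print([o.geometry_id for o in objects
       if o.properties.get("land_use") == "residential"])   # ['blk-1', 'blk-2']
```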

  4. Grid cells in 3-D: Reconciling data and models.

    PubMed

    Horiuchi, Timothy K; Moss, Cynthia F

    2015-12-01

    It is well documented that place cells and grid cells in echolocating bats show properties similar to those described in rodents, and yet continuous theta-frequency oscillations, proposed to play a central role in grid/place cell formation, are not present in bat recordings. These comparative neurophysiological data have raised many questions about the role of theta-frequency oscillations in spatial memory and navigation. Additionally, spatial navigation in three dimensions poses new challenges for the representation of space in neural models. Inspired by the literature on space representation in the echolocating bat, we have developed a nonoscillatory model of 3-D grid cell creation that shares many of the features of existing oscillatory-interference models. We discuss the model in the context of current knowledge of 3-D space representation and highlight directions for future research.
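    Purely as an illustration of what a 3D grid-like firing map can look like (this is not the authors' nonoscillatory model), the sketch below places Gaussian firing bumps on a face-centred-cubic lattice, a packing sometimes discussed for 3D grid fields; the lattice spacing, bump width and peak rate are assumptions.

```python
# Illustrative only: a 3D grid-cell-like firing map built as Gaussian bumps
# centred on a face-centred-cubic (FCC) lattice. Not the model described
# above; the lattice choice and all scales are assumptions.
import itertools
import math

def fcc_centres(spacing, extent):
    """FCC lattice points (4 per cubic cell) within a cube of side `extent`."""
    basis = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
    n = int(extent / spacing) + 1
    pts = []
    for i, j, k in itertools.product(range(n), repeat=3):
        for bx, by, bz in basis:
            pts.append(((i + bx) * spacing, (j + by) * spacing, (k + bz) * spacing))
    return pts

def firing_rate(pos, centres, sigma=0.15, peak=1.0):
    """Sum of Gaussian bumps evaluated at a 3D position."""
    return sum(peak * math.exp(-sum((p - c) ** 2 for p, c in zip(pos, centre))
                               / (2 * sigma ** 2))
               for centre in centres)

centres = fcc_centres(spacing=1.0, extent=2.0)
print(round(firing_rate((0.5, 0.5, 0.0), centres), 3))  # near a lattice point -> high rate
```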

  5. Backhoe 3D "gold standard" image

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable high confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges[1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future more relevant military targets and their image development. The seedling team produced a public release data which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process and the resulting image.

  6. Slope instability in complex 3D topography promoted by convergent 3D groundwater flow

    NASA Astrophysics Data System (ADS)

    Reid, M. E.; Brien, D. L.

    2012-12-01

    Slope instability in complex topography is generally controlled by the interaction between gravitationally induced stresses, 3D strengths, and 3D pore-fluid pressure fields produced by flowing groundwater. As an example of this complexity, coastal bluffs sculpted by landsliding commonly exhibit a progression of undulating headlands and re-entrants. In this landscape, stresses differ between headlands and re-entrants and 3D groundwater flow varies from vertical rainfall infiltration to lateral groundwater flow on lower permeability layers with subsequent discharge at the curved bluff faces. In plan view, groundwater flow converges in the re-entrant regions. To investigate relative slope instability induced by undulating topography, we couple the USGS 3D limit-equilibrium slope-stability model, SCOOPS, with the USGS 3D groundwater flow model, MODFLOW. By rapidly analyzing the stability of millions of potential failures, the SCOOPS model can determine relative slope stability throughout the 3D domain underlying a digital elevation model (DEM), and it can utilize both fully 3D distributions of pore-water pressure and material strength. The two models are linked by first computing a groundwater-flow field in MODFLOW, and then computing stability in SCOOPS using the pore-pressure field derived from groundwater flow. Using these two models, our analyses of 60m high coastal bluffs in Seattle, Washington showed augmented instability in topographic re-entrants given recharge from a rainy season. Here, increased recharge led to elevated perched water tables with enhanced effects in the re-entrants owing to convergence of groundwater flow. Stability in these areas was reduced about 80% compared to equivalent dry conditions. To further isolate these effects, we examined groundwater flow and stability in hypothetical landscapes composed of uniform and equally spaced, oscillating headlands and re-entrants with differing amplitudes. The landscapes had a constant slope for both
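    As a toy illustration of how pore pressure enters a limit-equilibrium stability calculation (a one-dimensional infinite-slope formula, not the 3D SCOOPS/MODFLOW coupling described above), the sketch below shows the factor of safety dropping when the pore-water pressure on the slip surface rises; all parameter values are assumptions.

```python
# Illustrative infinite-slope factor of safety with pore pressure u on the
# slip plane. Not the SCOOPS/MODFLOW coupling; parameter values are assumptions.
import math

def factor_of_safety(cohesion, phi_deg, unit_weight, depth, slope_deg, pore_pressure):
    """Limit equilibrium for an infinite slope: resisting / driving shear stress."""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    normal_stress = unit_weight * depth * math.cos(beta) ** 2
    shear_stress = unit_weight * depth * math.sin(beta) * math.cos(beta)
    resisting = cohesion + (normal_stress - pore_pressure) * math.tan(phi)
    return resisting / shear_stress

dry = factor_of_safety(5e3, 32, 19e3, 4.0, 30, pore_pressure=0.0)
wet = factor_of_safety(5e3, 32, 19e3, 4.0, 30, pore_pressure=25e3)
print(round(dry, 2), round(wet, 2))  # elevated pore pressure lowers the factor of safety
```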

  7. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  8. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than that of other robotic hands (excluding actuators), since those require more complex assembly processes.

  9. Comparing swimsuits in 3D.

    PubMed

    van Geer, Erik; Molenbroek, Johan; Schreven, Sander; deVoogd-Claessen, Lenneke; Toussaint, Huib

    2012-01-01

    In competitive swimming, suits have become more important. These suits influence friction, pressure and wave drag. Friction drag is related to the surface properties, whereas both pressure and wave drag are greatly influenced by body shape. To find a relationship between body shape and drag, the anthropometry of several world-class female swimmers wearing different suits was accurately defined using a 3D scanner and traditional measuring methods. The 3D scans delivered more detailed information about the body shape. On the same day the swimmers did performance tests in the water with the tested suits. Afterwards, the results of the performance tests and the differences found in body shape were analyzed to determine the deformation caused by a swimsuit and its effect on swimming performance. Although the amount of data is limited because of the few test subjects, there is an indication that the deformation of the body influences the swimming performance.

  10. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  11. 3D-graphite structure

    SciTech Connect

    Belenkov, E. A.; Ali-Pasha, V. A.

    2011-01-15

    The structure of clusters of some new carbon 3D-graphite phases has been calculated using molecular-mechanics methods. It is established that the 3D-graphite polytypes α1,1, α1,3, α1,5, α2,1, α2,3, α3,1, β1,2, β1,4, β1,6, β2,1, and β3,2 consist of sp²-hybridized atoms, have hexagonal unit cells, and differ in the structure of the layers and the order of their alternation. A possible way to experimentally synthesize the new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  12. Recent Advances in Visualizing 3D Flow with LIC

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1998-01-01

    Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is an elegant and versatile technique for representing directional information via patterns of correlation in a texture. Although most commonly used to depict 2D flow, or flow over a surface in 3D, LIC methods can equivalently be used to portray 3D flow through a volume. However, the popularity of LIC as a device for illustrating 3D flow has historically been limited both by the computational expense of generating and rendering such a 3D texture and by the difficulties inherent in clearly and effectively conveying the directional information embodied in the volumetric output textures that are produced. In an earlier paper, we briefly discussed some of the factors that may underlie the perceptual difficulties that we can encounter with dense 3D displays and outlined several strategies for more effectively visualizing 3D flow with volume LIC. In this article, we review in more detail techniques for selectively emphasizing critical regions of interest in a flow and for facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines, and we demonstrate new methods for efficiently incorporating an indication of orientation into a flow representation and for conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations.
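    For readers unfamiliar with LIC, the short NumPy sketch below shows the basic 2D form of the technique: each output pixel averages a noise texture along the local streamline traced forward and backward through the vector field. The toy circular field, streamline length and nearest-pixel sampling are simplifications chosen for brevity, not the volume-LIC machinery discussed in the article.

```python
# Minimal 2D line integral convolution (LIC) sketch; illustrative only.
import numpy as np

def lic(vx, vy, noise, length=15):
    """Average a noise texture along local streamlines of the field (vx, vy)."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for direction in (1.0, -1.0):          # integrate forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(round(py)), int(round(px))
                    if not (0 <= i < h and 0 <= j < w):
                        break
                    total += noise[i, j]
                    count += 1
                    u, v = vx[i, j], vy[i, j]
                    norm = np.hypot(u, v) or 1.0   # avoid division by zero
                    px += direction * u / norm     # unit step along the flow
                    py += direction * v / norm
            out[y, x] = total / max(count, 1)
    return out

# Circular flow around the image centre as a toy vector field.
h = w = 64
ys, xs = np.mgrid[0:h, 0:w]
vx, vy = -(ys - h / 2.0), (xs - w / 2.0)
texture = lic(vx, vy, np.random.rand(h, w))
print(texture.shape, texture.min(), texture.max())
```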

  13. GPU-Accelerated Denoising in 3D (GD3D)

    SciTech Connect

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
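    A minimal sketch of the parameter-sweep idea is shown below: several smoothing strengths are tried and the one with the lowest mean squared error against a noiseless reference is kept. A SciPy Gaussian filter stands in for the GPU bilateral/anisotropic-diffusion/non-local-means kernels, and the synthetic volume and noise level are assumptions.

```python
# Parameter sweep sketch: pick the smoothing strength that minimizes MSE
# against a noiseless reference volume. A Gaussian filter stands in for the
# GPU denoising kernels; data and noise level are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
reference = gaussian_filter(rng.random((32, 32, 32)), 2.0)   # synthetic "clean" 3D volume
noisy = reference + rng.normal(0.0, 0.05, reference.shape)   # add Gaussian noise

def mse(a, b):
    return float(np.mean((a - b) ** 2))

best = None
for sigma in (0.5, 1.0, 1.5, 2.0, 3.0):                      # the swept parameter
    score = mse(gaussian_filter(noisy, sigma), reference)
    if best is None or score < best[1]:
        best = (sigma, score)

print("best sigma:", best[0], "MSE:", round(best[1], 6))
```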

  14. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated

  15. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.

  16. Evaluating scatterometry 3D capabilities for EUV

    NASA Astrophysics Data System (ADS)

    Li, Jie; Kritsun, Oleg; Dasari, Prasad; Volkman, Catherine; Wallow, Tom; Hu, Jiangtao

    2013-04-01

    Optical critical dimension (OCD) metrology using scatterometry has been demonstrated to be a viable solution for fast and non-destructive in-line process control and monitoring. As extreme ultraviolet lithography (EUVL) is more widely adopted to fabricate smaller and smaller patterns for electronic devices, scatterometry faces new challenges for several reasons. For the 14nm node and beyond, the feature size is nearly an order of magnitude smaller than the shortest wavelength used in scatterometry. In addition, a thinner resist layer is used in EUVL compared with conventional lithography, which leads to reduced measurement sensitivity. On top of these difficulties, tolerances have tightened as feature sizes shrink. In this work we evaluate the 3D capability of scatterometry for the EUV process using spectroscopic ellipsometry (SE). Three types of structures, contact holes, tip-to-tip, and tip-to-edge, are studied to test CD and end-gap metrology capabilities. The wafer is processed with a focus and exposure matrix. Good correlations to CD-SEM results are achieved and good dynamic precision is obtained for all the key parameters. In addition, the fit to process provides an independent method to evaluate data quality from different metrology tools such as OCD and CD-SEM. We demonstrate the 3D capabilities of scatterometry OCD metrology for EUVL using spectroscopic ellipsometry, which provides valuable in-line metrology for CD and end-gap control in electronic circuit fabrication.

  17. 3D printed rapid disaster response

    NASA Astrophysics Data System (ADS)

    Lacaze, Alberto; Murphy, Karl; Mottern, Edward; Corley, Katrina; Chu, Kai-Dee

    2014-05-01

    Under the Department of Homeland Security-sponsored Sensor-smart Affordable Autonomous Robotic Platforms (SAARP) project, Robotic Research, LLC is developing an affordable and adaptable method to provide disaster response robots developed with 3D printer technology. The SAARP Store contains a library of robots, a developer storefront, and a user storefront. The SAARP Store allows the user to select, print, assemble, and operate the robot. In addition to the SAARP Store, two platforms are currently being developed. They use a set of common non-printed components that will allow the later design of other platforms that share non-printed components. During disasters, new challenges are faced that require customized tools or platforms. Instead of prebuilt and prepositioned supplies, a library of validated robots will be catalogued to satisfy various challenges at the scene. 3D printing components will allow these customized tools to be deployed in a fraction of the time that would normally be required. While the current system is focused on supporting disaster response personnel, this system will be expandable to a range of customers, including domestic law enforcement, the armed services, universities, and research facilities.

  18. Effect of photographic negation on face expression aftereffects.

    PubMed

    Benton, Christopher P

    2009-01-01

    Our visual representation of facial expression is examined in this study: is this representation built from edge information, or does it incorporate surface-based information? To answer this question, photographic negation of grey-scale images is used. Negation preserves edge information whilst disrupting the surface-based information. In two experiments visual aftereffects produced by prolonged viewing of images of facial expressions were measured. This adaptation-based technique allows a behavioural assessment of the characteristics encoded by the neural systems underlying our representation of facial expression. The experiments show that photographic negation of the adapting images results in a profound decrease of expression aftereffect. Our visual representation of facial expression therefore appears to not just be built from edge information, but to also incorporate surface information. The latter allows an appreciation of the 3-D structure of the expressing face that, it is argued, may underpin the subtlety and range of our non-verbal facial communication.

  19. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. This set currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  20. 3D Nanostructuring of Semiconductors

    NASA Astrophysics Data System (ADS)

    Blick, Robert

    2000-03-01

    Modern semiconductor technology makes it possible to machine devices on the nanometer scale. I will discuss the current limits of the fabrication processes, which enable the definition of single electron transistors with dimensions down to 8 nm. In addition to the conventional 2D patterning and structuring of semiconductors, I will demonstrate how to apply 3D nanostructuring techniques to build freely suspended single-crystal beams with lateral dimensions down to 20 nm. In transport measurements in the temperature range from 30 mK up to 100 K these nano-crystals are characterized regarding their electronic as well as their mechanical properties. Moreover, I will present possible applications of these devices.

  1. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  2. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  3. A Clean Adirondack (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.

  4. 3D Printed Shelby Cobra

    SciTech Connect

    Love, Lonnie

    2015-01-09

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  5. A novel window based method for approximating the Hausdorff in 3D range imagery.

    SciTech Connect

    Koch, Mark William

    2004-10-01

    Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
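    The sketch below computes a plain (directed) Hausdorff fraction between two 3D point sets with a k-d tree, i.e. the fraction of model points whose nearest scene point lies within a tolerance; it does not reproduce the window-based approximation of the paper, and the tolerance and toy data are assumptions.

```python
# Directed Hausdorff fraction between two 3D point sets via a k-d tree.
# Not the window-based approximation of the paper; tolerance is an assumption.
import numpy as np
from scipy.spatial import cKDTree

def hausdorff_fraction(model, scene, tol=0.05):
    """Fraction of model points whose nearest scene point lies within `tol`."""
    tree = cKDTree(scene)
    dists, _ = tree.query(model, k=1)
    return float(np.mean(dists <= tol))

rng = np.random.default_rng(1)
scene = rng.random((2000, 3))                        # toy 3D range data
model = scene[:500] + rng.normal(0, 0.01, (500, 3))  # partially matching model
print(round(hausdorff_fraction(model, scene), 3))
```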

  6. Towards Contactless, Low-Cost and Accurate 3D Fingerprint Identification.

    PubMed

    Kumar, Ajay; Kwong, Cyril

    2015-03-01

    Human identification using fingerprint impressions has been widely studied and employed for more than 2000 years. Despite new advancements in 3D imaging technologies, a widely accepted representation of 3D fingerprint features and a matching methodology are yet to emerge. This paper investigates a 3D representation of widely employed 2D minutiae features by recovering and incorporating (i) the minutiae height z and (ii) its 3D orientation φ, and illustrates an effective strategy for matching popular minutiae features extended into 3D space. One of the obstacles preventing the emerging 3D fingerprint identification systems from replacing conventional 2D fingerprint systems lies in their bulk and high cost, which mainly comes from the use of structured lighting systems or multiple cameras. This paper attempts to address such key limitations of current 3D fingerprint technologies by developing a single camera-based 3D fingerprint identification system. We develop a generalized 3D minutiae matching model and recover extended 3D fingerprint features from the reconstructed 3D fingerprints. The 2D fingerprint images acquired for the 3D fingerprint reconstruction can themselves be employed for performance improvement, as illustrated in the work detailed in this paper. This paper also attempts to answer one of the most fundamental questions on the availability of inherent discriminable information from 3D fingerprints. The experimental results are presented on a database of 240 clients' 3D fingerprints, which is made publicly available to further research efforts in this area, and illustrate the discriminant power of 3D minutiae representation and matching to achieve performance improvement.
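    To make the extended feature tuple concrete, the sketch below represents each 3D minutia as (x, y, z, θ, φ) and scores two sets with a naive greedy one-to-one matching; the distance and angle thresholds and the greedy strategy are illustrative assumptions, not the paper's generalized matching model.

```python
# Illustrative extended 3D minutiae and a naive matching score.
# Thresholds and greedy matching are assumptions, not the paper's model.
import math

def minutia(x, y, z, theta, phi):
    return (x, y, z, theta, phi)   # position plus ridge angle theta and elevation phi

def compatible(a, b, d_tol=8.0, ang_tol=math.radians(15)):
    dx, dy, dz = a[0] - b[0], a[1] - b[1], a[2] - b[2]
    close = math.sqrt(dx * dx + dy * dy + dz * dz) <= d_tol
    dtheta = abs((a[3] - b[3] + math.pi) % (2 * math.pi) - math.pi)  # wrapped angle diff
    dphi = abs(a[4] - b[4])
    return close and dtheta <= ang_tol and dphi <= ang_tol

def match_score(probe, gallery):
    """Greedy one-to-one matching; score = matched pairs / probe minutiae."""
    used, matched = set(), 0
    for p in probe:
        for i, g in enumerate(gallery):
            if i not in used and compatible(p, g):
                used.add(i)
                matched += 1
                break
    return matched / max(len(probe), 1)

probe = [minutia(10, 12, 1.2, 0.3, 0.1), minutia(40, 35, 2.0, 1.1, 0.2)]
gallery = [minutia(11, 13, 1.0, 0.32, 0.12), minutia(80, 70, 0.5, 2.0, 0.4)]
print(match_score(probe, gallery))   # 0.5 in this toy example
```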

  7. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  8. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing.

  9. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material, one that could potentially initiate another new material age. However, even when graphene is exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. A new technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate an additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial for building up only minute thermal stress during the printing process. PMID:26153673

  10. 3D Printed Bionic Ears

    PubMed Central

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  11. Martian terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at lower left in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  12. Martian terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at the lower area of this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  13. 3D structured illumination microscopy

    NASA Astrophysics Data System (ADS)

    Dougherty, William M.; Goodwin, Paul C.

    2011-03-01

    Three-dimensional structured illumination microscopy achieves double the lateral and axial resolution of wide-field microscopy, using conventional fluorescent dyes, proteins and sample preparation techniques. A three-dimensional interference-fringe pattern excites the fluorescence, filling in the "missing cone" of the wide field optical transfer function, thereby enabling axial (z) discrimination. The pattern acts as a spatial carrier frequency that mixes with the higher spatial frequency components of the image, which usually succumb to the diffraction limit. The fluorescence image encodes the high frequency content as a down-mixed, moiré-like pattern. A series of images is required, wherein the 3D pattern is shifted and rotated, providing down-mixed data for a system of linear equations. Super-resolution is obtained by solving these equations. The speed with which the image series can be obtained can be a problem for the microscopy of living cells. Challenges include pattern-switching speeds, optical efficiency, wavefront quality and fringe contrast, fringe pitch optimization, and polarization issues. We will review some recent developments in 3D-SIM hardware with the goal of super-resolved z-stacks of motile cells.

  14. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation and some man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation. The first approach is sketch-based modeling, the second is procedural grammar-based modeling, the third is close-range photogrammetry-based modeling, and the fourth is mainly based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches respectively. These packages have different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no complete comparative study of this type is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches. The comparative study is mainly based on data acquisition methods, data processing techniques and the output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences. It also gives a brief introduction to, and the strengths and weaknesses of, these four image-based techniques. Some personal comments are also given on what can and cannot be done with each software package. Finally, the study concludes that each and every software package has some advantages and limitations. The choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For Large city

  15. A probabilistic approach to realistic face synthesis with a single uncalibrated image.

    PubMed

    Shim, Hyunjung

    2012-08-01

    This paper presents a novel approach to automatic face modeling for realistic synthesis from an unknown face image, using a probabilistic face diffuse model and a generic face specular map. We construct a probabilistic face diffuse model for estimating the albedo and normals of the input face. Then, we develop a generic face specular map for estimating the specularity of the face. Using the estimated albedo, normal and specular information, we can realistically synthesize the face under arbitrary lighting and viewing directions. Unlike many existing techniques, our approach can extract both the diffuse and specular information of a face without involving an intensive 3D matching procedure. We conduct three different experiments to show our improvement over the prior art. First, we compare the proposed algorithm with previous techniques, including the state of the art, to demonstrate our achievement in realistic face synthesis. Moreover, we evaluate the proposed algorithm against non-automatic face modeling techniques through a subjective study. This evaluation is meaningful in that it tells us how far the proposed algorithm, as well as others, is from the real photograph in terms of perceptual quality. Finally, we apply our face model to improving face recognition performance under varying illumination conditions and show that the proposed algorithm is effective in enhancing the face recognition rate. Thanks to the compact representation and the effective inference scheme, our technique is applicable to many practical applications, such as avatar creation, digital face cloning, face normalization, de-identification and many others.
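    The final synthesis step can be pictured with a per-pixel shading sketch like the one below: given an estimated albedo map, unit normals and a specular weight, a Lambertian term plus a Phong-style specular lobe produces the relit intensity. Only this last step is illustrated; the probabilistic diffuse model and generic specular map are not reproduced, and all values and the Phong lobe itself are assumptions.

```python
# Per-pixel relighting sketch: Lambertian diffuse plus a Phong-style specular
# lobe from estimated albedo, normals and a specular weight. Illustrative
# only; all values and the shading model are assumptions.
import numpy as np

def shade(albedo, normals, spec_map, light, view, shininess=20.0):
    """albedo, spec_map: HxW; normals: HxWx3 (unit); light, view: unit 3-vectors."""
    n_dot_l = np.clip(np.einsum("ijk,k->ij", normals, light), 0.0, None)
    diffuse = albedo * n_dot_l
    # Mirror the light direction about the normal for the specular term.
    reflect = 2.0 * n_dot_l[..., None] * normals - light
    r_dot_v = np.clip(np.einsum("ijk,k->ij", reflect, view), 0.0, None)
    specular = spec_map * r_dot_v ** shininess
    return np.clip(diffuse + specular, 0.0, 1.0)

h = w = 4
normals = np.zeros((h, w, 3))
normals[..., 2] = 1.0                    # flat patch facing +z
albedo = np.full((h, w), 0.6)
spec_map = np.full((h, w), 0.3)
light = np.array([0.0, 0.0, 1.0])
view = np.array([0.0, 0.0, 1.0])
print(shade(albedo, normals, spec_map, light, view))
```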

  16. Emergence of 3D Printed Dosage Forms: Opportunities and Challenges.

    PubMed

    Alhnan, Mohamed A; Okwuosa, Tochukwu C; Sadia, Muzna; Wan, Ka-Wai; Ahmed, Waqar; Arafat, Basel

    2016-08-01

    The recent introduction of the first FDA-approved 3D-printed drug has fuelled interest in 3D printing technology, which is set to revolutionize healthcare. Since its initial use, this rapid prototyping (RP) technology has evolved to such an extent that it is currently being used in a wide range of applications including tissue engineering, dentistry, construction, automotive and aerospace. However, in the pharmaceutical industry this technology is still in its infancy and its potential is yet to be fully explored. This paper presents various 3D printing technologies such as stereolithographic, powder-based, selective laser sintering, fused deposition modelling and semi-solid extrusion 3D printing. It also provides a comprehensive review of previous attempts at using 3D printing technologies in the manufacture of dosage forms, with a particular focus on oral tablets. Their advantages, particularly their adaptability to the pharmaceutical field, are highlighted; this adaptability enables the preparation of dosage forms with complex designs and geometries, multiple actives and tailored release profiles. An insight into the technical challenges facing the different 3D printing technologies, such as the formulation and processing parameters, is provided. Light is also shed on the different regulatory challenges that need to be overcome for 3D printing to fulfil its real potential in the pharmaceutical industry.

  17. Improving Semantic Updating Method on 3d City Models Using Hybrid Semantic-Geometric 3d Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

    Cities and urban-area entities such as building structures are becoming more complex as modern human civilization continues to evolve. The ability to plan and manage every territory, especially urban areas, is very important to every government in the world. Planning and managing cities and urban areas based on printed maps and 2D data is becoming insufficient and inefficient to cope with the complexity of new developments in big cities. The emergence of 3D city models has boosted efficiency in analysing and managing urban areas, as 3D data have been proven to represent real-world objects more accurately. They have since been adopted as the new trend in building and urban management and planning applications. Nowadays, many countries around the world are generating virtual 3D representations of their major cities. The growing interest in improving the usability of 3D city models has resulted in the development of various analysis tools based on them. Today, 3D city models are generated for various purposes such as tourism, location-based services, disaster management and urban planning. Meanwhile, modelling 3D objects is getting easier with the emergence of user-friendly 3D modelling tools available in the market. Generating 3D buildings with high accuracy has also become easier with the availability of airborne Lidar and terrestrial laser scanning equipment. The availability of and accessibility to this technology make it more sensible to analyse buildings in urban areas using 3D data, as they accurately represent real-world objects. The Open Geospatial Consortium (OGC) has accepted the CityGML specification as one of the international standards for representing and exchanging spatial data, making it easier to visualize, store and manage 3D city model data efficiently. CityGML is able to represent the semantics, geometry, topology and appearance of 3D city models at five well-defined Levels of Detail (LoD), namely LoD0

  18. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  19. Planning 3-D collision-free paths using spheres

    NASA Technical Reports Server (NTRS)

    Bonner, Susan; Kelley, Robert B.

    1989-01-01

    A scheme for the representation of objects, the Successive Spherical Approximation (SSA), facilitates the rapid planning of collision-free paths in a 3-D, dynamic environment. The hierarchical nature of the SSA allows collision-free paths to be determined efficiently while still providing for the exact representation of dynamic objects. The concept of a freespace cell is introduced to allow human 3-D conceptual knowledge to be used in selecting satisfactory paths. Collisions can be detected at a rate better than 1 second per environment object per path. This speed enables the path planning process to apply a hierarchy of rules to create a heuristically satisfying collision-free path.
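
    The abstract does not specify the SSA data layout; the following is a minimal Python sketch, under the assumption that each object is approximated by a coarse bounding sphere with optional child spheres, illustrating how a hierarchy of spheres lets collision checks terminate early. All names and the two-level example objects are illustrative.

        import numpy as np

        def spheres_overlap(c1, r1, c2, r2):
            """Two spheres overlap if the distance between centres is below the sum of radii."""
            return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) <= r1 + r2

        def collide(node_a, node_b):
            """Hierarchical test: each node is (centre, radius, children).
            Recurse into children only when the coarse spheres overlap."""
            (ca, ra, kids_a), (cb, rb, kids_b) = node_a, node_b
            if not spheres_overlap(ca, ra, cb, rb):
                return False                 # coarse spheres disjoint: no collision possible
            if not kids_a and not kids_b:
                return True                  # both are leaves: report a collision
            if kids_a:                       # refine whichever node still has children
                return any(collide(child, node_b) for child in kids_a)
            return any(collide(node_a, child) for child in kids_b)

        # usage: two objects, each approximated at two levels of detail
        obj1 = ((0.0, 0.0, 0.0), 2.0, [((0.5, 0.0, 0.0), 0.6, []), ((-0.5, 0.0, 0.0), 0.6, [])])
        obj2 = ((1.5, 0.0, 0.0), 1.0, [((1.2, 0.0, 0.0), 0.4, [])])
        print(collide(obj1, obj2))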

  20. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-06

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based graphene oxide (GO) ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm⁻³) 3D printed graphene aerogel presents superelasticity and high electrical conduction.

  1. 3D holographic portraits: presence and absence

    NASA Astrophysics Data System (ADS)

    Oliveria, Rosa M.; Bernardo, Luís Miguel

    2011-02-01

    Authors writing about the portrait insist on its capacity to extend the image of the portrayed model beyond absence and even death. The portrait has this ability and suggests immortality. The picture suspends time, making the absent present. The portrait has been, over time, one of the themes most used in art. No wonder that in holography it is an important subject as well. The face is a body area of privileged communication and expression. It expresses emotions through looks, smiles, movements and expressions. Since holography is, so far, the recording technology that represents an object most faithfully to the original, with the same parallax, we may fall into a mimetic representation of reality. In art holography, even when following paths already traversed, the resulting holograms are always different because of the unique concept that each artist-holographer puts into his work. As with any other artistic technology, each artist uses the medium differently and with different results.

  2. Quasi 3D dispersion experiment

    NASA Astrophysics Data System (ADS)

    Bakucz, P.

    2003-04-01

    This paper studies the problem of tracer dispersion in a coloured fluid flowing through a two-phase 3D rough channel-system in a 40 cm × 40 cm plexiglass container filled with homogeneous glass fractions and a colourless fluid. The unstable interface between the driving coloured fluid and the colourless fluid develops viscous fingers with a fractal structure at high capillary number. Five two-dimensional fractal fronts have been observed at the same time using four cameras along the vertical side-walls and one camera located above the container. From these five fronts the spatial concentration contours are determined using statistical models. The concentration contours are self-affine fractal curves with a fractal dimension D = 2.19. This result is valid for dispersion at high Péclet numbers.

  3. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three-dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to three spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
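
    ShowMe3D itself is distributed software and its internals are not shown here; the sketch below only illustrates, with NumPy, the kind of operations the abstract describes: collapsing a hyperspectral stack through a user-defined spectral filter and computing intensity statistics for a region. The cube size, wavelength range, and filter shape are assumptions.

        import numpy as np

        # hypothetical hyperspectral stack: (rows, cols, wavelengths)
        rng = np.random.default_rng(0)
        cube = rng.random((128, 128, 512))
        wavelengths = np.linspace(480.0, 800.0, cube.shape[2])   # nm, illustrative

        def gaussian_filter_weights(center_nm, fwhm_nm):
            """Spectral filter modelled as a normalised Gaussian transmission curve."""
            sigma = fwhm_nm / 2.355
            w = np.exp(-0.5 * ((wavelengths - center_nm) / sigma) ** 2)
            return w / w.sum()

        def apply_filter(cube, weights):
            """Collapse the spectral axis with the filter, as a filter-based confocal image would."""
            return cube @ weights          # result shape: (rows, cols)

        def region_stats(image, row_slice, col_slice):
            region = image[row_slice, col_slice]
            return region.mean(), region.var()

        filtered = apply_filter(cube, gaussian_filter_weights(550.0, 30.0))
        print(region_stats(filtered, slice(10, 20), slice(40, 60)))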

  4. 3D Printed Shelby Cobra

    ScienceCinema

    Love, Lonnie

    2016-11-02

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  5. The 3D Elevation Program: summary for Alaska

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    Coordination by SDMI and AMEC avoids duplication of effort and ensures a unified approach to consistent, statewide data acquisition; the enhancement of existing data; and support for emerging applications. The 3D Elevation Program (3DEP) initiative, managed by the U.S. Geological Survey (USGS), responds to the growing need for high-quality topographic data and a wide range of other three-dimensional representations of the Nation’s natural and constructed features.

  6. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high-precision and well-structured measurements in (industrial) photogrammetry to fully automated, non-structured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These include, among others: physical representation of the object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), and representation of object or workpiece coordinate systems and object scale. The paper discusses the above-mentioned parameters and offers strategies for obtaining highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verification are presented and demonstrated by practical examples.

  7. Postprocessing of compressed 3D graphic data by using subdivision

    NASA Astrophysics Data System (ADS)

    Cheang, Ka Man; Li, Jiankun; Kuo, C.-C. Jay

    1998-10-01

    In this work, we present a postprocessing technique applied to a low-resolution 3D graphic model to obtain a visually more pleasing representation. Our method is an improved version of the Butterfly subdivision scheme developed by Zorin et al. Our main contribution is to exploit the flatness information of local areas of a 3D graphic model for adaptive refinement. Consequently, we can avoid unnecessary subdivision in regions which are relatively flat. The proposed algorithm not only reduces the computational complexity but also saves storage space. With the hierarchical mesh compression method developed by Li and Kuo as the baseline coding method, we show that the postprocessing technique can greatly improve the visual quality of the decoded 3D graphic model.
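
    The exact flatness criterion used by the authors is not given in the abstract; the following Python sketch assumes a simple criterion based on the angle between adjacent face normals, which is one common way to decide where adaptive subdivision can be skipped. The threshold and example triangles are illustrative.

        import numpy as np

        def face_normal(v0, v1, v2):
            n = np.cross(v1 - v0, v2 - v0)
            return n / np.linalg.norm(n)

        def needs_subdivision(tri, neighbours, angle_threshold_deg=5.0):
            """A triangle is 'flat enough' to skip refinement when its normal deviates
            from every neighbouring face normal by less than the threshold."""
            n0 = face_normal(*tri)
            cos_thresh = np.cos(np.radians(angle_threshold_deg))
            for nb in neighbours:
                if np.dot(n0, face_normal(*nb)) < cos_thresh:
                    return True            # locally curved: refine here
            return False                   # locally flat: keep the coarse triangle

        # usage with illustrative coordinates
        tri = tuple(np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0)])
        bent = tuple(np.array(p, float) for p in [(1, 0, 0), (1, 1, 0.3), (0, 1, 0)])
        print(needs_subdivision(tri, [bent]))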

  8. 3D Printing and Digital Rock Physics for Geomaterials

    NASA Astrophysics Data System (ADS)

    Martinez, M. J.; Yoon, H.; Dewers, T. A.

    2015-12-01

    Imaging techniques for the analysis of porous structures have revolutionized our ability to quantitatively characterize geomaterials. Digital representations of rock from CT images, and physics modeling based on these pore structures, provide the opportunity to further advance our quantitative understanding of fluid flow, geomechanics, geochemistry, and the emergence of coupled behaviors. Additive manufacturing, commonly known as 3D printing, has revolutionized the production of custom parts with complex internal geometries. For the geosciences, recent advances in 3D printing technology may be co-opted to print reproducible porous structures derived from CT imaging of actual rocks for experimental testing. The use of 3D printed microstructures allows us to surmount the typical problems associated with sample-to-sample heterogeneity that plague rock physics testing, and to test material response independently from pore-structure variability. Together, imaging, digital rocks and 3D printing potentially enable a new workflow for understanding coupled geophysical processes in a real but well-defined setting, circumventing typical issues associated with reproducibility and enabling full characterization and thus connection of physical phenomena to structure. In this talk we will discuss the possibilities that these technologies can bring to the geosciences and present early experiences with coupled multiscale experimental and numerical analysis using 3D printed fractured rock specimens. In particular, we discuss the processes of selecting and printing transparent fractured specimens based on 3D reconstruction of micro-fractured rock to study fluid flow characterization and manipulation. Micro-particle image velocimetry is used to directly visualize 3D single- and multiphase flow velocity in 3D fracture networks. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U

  9. Novel 3D Compression Methods for Geometry, Connectivity and Texture

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2016-06-01

    A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded into a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with the texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87% and 99% without any reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided against a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
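
    The GM-Algorithm itself (packing each vertex's coordinates into a single value) and the paper's arithmetic coder are not reproduced here; the Python sketch below only illustrates the general principle of differencing adjacent values before entropy coding, with zlib standing in for the arithmetic coder. The data and resulting sizes are illustrative.

        import numpy as np
        import zlib

        def delta_encode(values):
            """Store differences between consecutive values; small deltas
            compress far better than raw coordinates or indices."""
            values = np.asarray(values, dtype=np.int64)
            return np.diff(values, prepend=0)

        def delta_decode(deltas):
            return np.cumsum(deltas)

        # illustrative smoothly increasing quantised values (e.g. scan-ordered coordinates)
        rng = np.random.default_rng(1)
        coords = np.cumsum(rng.integers(0, 8, size=5000))

        deltas = delta_encode(coords)
        raw = zlib.compress(coords.astype(np.int64).tobytes())
        packed = zlib.compress(deltas.astype(np.int64).tobytes())
        print(len(raw), len(packed))       # the delta stream compresses much better here
        assert np.array_equal(delta_decode(deltas), coords)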

  10. A 3D Contact Smoothing Method

    SciTech Connect

    Puso, M A; Laursen, T A

    2002-05-02

    Smoothing of contact surfaces can be used to eliminate the chatter typically seen with node-on-facet contact and to give a better representation of the actual contact surface. The latter effect is well demonstrated for problems with interference fits. In this work we present two methods for the smoothing of contact surfaces for 3D finite element contact. In the first method, we employ Gregory patches to smooth the faceted surface in a node-on-facet implementation. In the second method, we employ a Bezier interpolation of the faceted surface in a mortar method implementation of contact. As is well known, node-on-facet approaches can exhibit locking due to the failure of the Babuska-Brezzi condition and in some instances fail the patch test. The mortar method implementation is stable and provides optimal convergence in the energy norm of the error. In this work we demonstrate the superiority of the smoothed versus the non-smoothed node-on-facet implementations. We also show where the node-on-facet method fails and present some results from the smoothed mortar method implementation.

  11. 3D View of Mars Particle

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is a 3D representation of the pits seen in the first Atomic Force Microscope, or AFM, images sent back from NASA's Phoenix Mars Lander. Red represents the highest point and purple represents the lowest point.

    The particle in the upper left corner shown at the highest magnification ever seen from another world is a rounded particle about one micrometer, or one millionth of a meter, across. It is a particle of the dust that cloaks Mars. Such dust particles color the Martian sky pink, feed storms that regularly envelop the planet and produce Mars' distinctive red soil.

    The particle was part of a sample informally called 'Sorceress' delivered to the AFM on the 38th Martian day, or sol, of the mission (July 2, 2008). The AFM is part of Phoenix's microscopic station called MECA, or the Microscopy, Electrochemistry, and Conductivity Analyzer.

    The AFM was developed by a Swiss-led consortium, with Imperial College London producing the silicon substrate that holds sampled particles.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  12. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  13. 3-D negotiation. Playing the whole game.

    PubMed

    Lax, David A; Sebenius, James K

    2003-11-01

    What stands between you and the yes you want? According to negotiation experts David Lax and James Sebenius, executives face obstacles in three common and complementary dimensions. The first dimension is tactics, or interactions at the bargaining table. The second is deal design, or the ability to draw up a deal at the table that creates lasting value. And the third is setup, which includes the structure of the negotiation itself. Each dimension is crucial in the bargaining process, but most executives fixate on only the first two: 1-D negotiators focus on improving their interpersonal skills at the negotiating table--courting their clients, using culturally sensitive language, and so on. 2-D negotiators focus on diagnosing underlying sources of value in a deal and then recrafting the terms to satisfy all parties. In this article, the authors explore the often-neglected third dimension. Instead of just playing the game at the bargaining table, 3-D negotiators reshape the scope and sequence of the game itself to achieve the desired outcome. They scan widely to identify elements outside of the deal on the table that might create a more favorable structure for it. They map backward from their ideal resolution to the current setup of the deal and carefully choose which players to approach and when. And they manage and frame the flow of information among the parties involved to improve their odds of getting to yes. Lax and Sebenius describe the tactics 3-D negotiators use--such as bringing new, previously unconsidered players into a negotiation--and cite examples from business and foreign affairs. Negotiators need to act in all three dimensions, the authors argue, to create and claim value for the long term.

  14. 3D Kitaev spin liquids

    NASA Astrophysics Data System (ADS)

    Hermanns, Maria

    The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.

  15. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  16. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable their use, e.g. in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces a 3D model of the captured object. For this particular investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used laser scanning and DSLR as well as smartphone photography to derive reference values, enabling verification of the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived by applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  17. Hough transform-based 3D mesh retrieval

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Preteux, Francoise J.

    2001-11-01

    This paper addresses the issue of 3D mesh indexation by using shape descriptors (SDs) under constraints of geometric and topological invariance. A new shape descriptor, the Optimized 3D Hough Transform Descriptor (O3DHTD), is proposed here. Intrinsically topologically stable, the O3DHTD is not invariant to geometric transformations. Nevertheless, we show mathematically how the O3DHTD can be optimally associated (in terms of compactness of representation and computational complexity) with a spatial alignment procedure which leads to a geometrically invariant behavior. Experimental results have been carried out on the MPEG-7 3D model database consisting of about 1300 meshes in VRML 2.0 format. Objective retrieval results, based upon the definition of a categorized ground truth subset, are reported in terms of the Bull's Eye Percentage (BEP) score and compared to those obtained by applying the MPEG-7 3D SD. It is shown that the O3DHTD outperforms the MPEG-7 3D SD by up to 28%.

  18. 3D Printing of Biomolecular Models for Research and Pedagogy.

    PubMed

    Da Veiga Beltrame, Eduardo; Tyrwhitt-Drake, James; Roy, Ian; Shalaby, Raed; Suckale, Jakob; Pomeranz Krummel, Daniel

    2017-03-13

    The construction of physical three-dimensional (3D) models of biomolecules can uniquely contribute to the study of the structure-function relationship. 3D structures are most often perceived using the two-dimensional and exclusively visual medium of the computer screen. Converting digital 3D molecular data into real objects enables information to be perceived through an expanded range of human senses, including direct stereoscopic vision, touch, and interaction. Such tangible models facilitate new insights, enable hypothesis testing, and serve as psychological or sensory anchors for conceptual information about the functions of biomolecules. Recent advances in consumer 3D printing technology enable, for the first time, the cost-effective fabrication of high-quality and scientifically accurate models of biomolecules in a variety of molecular representations. However, the optimization of the virtual model and its printing parameters is difficult and time consuming without detailed guidance. Here, we provide a guide on the digital design and physical fabrication of biomolecule models for research and pedagogy using open source or low-cost software and low-cost 3D printers that use fused filament fabrication technology.

  19. Building 3D scenes from 2D image sequences

    NASA Astrophysics Data System (ADS)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.

  20. [3D emulation of epicardium dynamic mapping].

    PubMed

    Lu, Jun; Yang, Cui-Wei; Fang, Zu-Xiang

    2005-03-01

    In order to realize epicardium dynamic mapping of the whole atria, 3-D graphics are drawn with OpenGL. Some source code is introduced in the paper to explain how to produce, read, and manipulate 3-D model data.

  1. Laser Based 3D Volumetric Display System

    DTIC Science & Technology

    1993-03-01

    [Report form excerpt] Laser-based 3D volumetric display system by P. Soltan, J. Trias, W. Robinson, and W. Dahlke: computer-controlled, laser-generated 3D volumetric images are displayed on a rotating double helix for group viewing with the naked eye. The record also cites "A Real Time Autostereoscopic Multiplanar 3D Display System" by Rodney Don Williams and Felix Garcia, Jr.

  2. True 3d Images and Their Applications

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2012-07-01

    A true 3D image is a geo-referenced image. Besides its radiometric information, it also has true 3D ground coordinates (XYZ) for every pixel. A true 3D image, especially a true 3D oblique image, has true 3D coordinates not only for building roofs and/or open grounds, but also for all other visible objects on the ground, such as visible building walls/windows and even trees. The true 3D image breaks the 2D barrier of traditional orthophotos by introducing the third dimension (elevation) into the image. From a true 3D image, for example, people will be able to read not only a building's location (XY), but also its height (Z). True 3D images will fundamentally change, if not revolutionize, the way people display, view, extract, use, and represent geospatial information from imagery. In many areas, true 3D images can make profound impacts on the way geospatial information is represented, the way true 3D ground modeling is performed, and the way real-world scenes are presented. This paper first gives a definition and description of a true 3D image, followed by a brief review of the key advancements in geospatial technologies that have made the creation of true 3D images possible. Next, the paper introduces what a true 3D image is made of. Then, the paper discusses some possible contributions and impacts that true 3D images can make to geospatial information fields. At the end, the paper presents a list of the benefits of having and using true 3D images and the applications of true 3D images in a couple of 3D city modeling projects.

  3. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC to the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  4. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  5. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  6. Expanding Geometry Understanding with 3D Printing

    ERIC Educational Resources Information Center

    Cochran, Jill A.; Cochran, Zane; Laney, Kendra; Dean, Mandi

    2016-01-01

    With the rise of personal desktop 3D printing, a wide spectrum of educational opportunities has become available for educators to leverage this technology in their classrooms. Until recently, the ability to create physical 3D models was well beyond the scope, skill, and budget of many schools. However, since desktop 3D printers have become readily…

  7. Beowulf 3D: a case study

    NASA Astrophysics Data System (ADS)

    Engle, Rob

    2008-02-01

    This paper discusses the creative and technical challenges encountered during the production of "Beowulf 3D," director Robert Zemeckis' adaptation of the Old English epic poem and the first film to be simultaneously released in IMAX 3D and digital 3D formats.

  8. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.
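
    As a minimal illustration of the 2D case that the paper extends to 3D, the sketch below advects a noise texture backward along a velocity field (a semi-Lagrangian step) using SciPy's map_coordinates for bilinear sampling. The velocity field, step size, and noise are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def advect_texture(tex, vx, vy, dt=1.0):
            """One backward-advection step: each output pixel samples the texture at the
            location a particle would have come from (semi-Lagrangian scheme)."""
            h, w = tex.shape
            ys, xs = np.mgrid[0:h, 0:w].astype(float)
            src_y = ys - dt * vy          # trace backwards along the flow
            src_x = xs - dt * vx
            return map_coordinates(tex, [src_y, src_x], order=1, mode="wrap")

        # toy usage: white noise advected by a uniform diagonal flow
        rng = np.random.default_rng(0)
        noise = rng.random((128, 128))
        vx = np.full((128, 128), 1.5)
        vy = np.full((128, 128), -0.5)
        frame = noise
        for _ in range(10):               # animate by repeating the step
            frame = advect_texture(frame, vx, vy)
        print(frame.shape)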

  9. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Mostafavi, Mir Abolfazl; Wang, Chen

    2016-06-01

    Topological relations are fundamental for the qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging, and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by Boundary Representation models in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element of the 3×3 matrix records the details of connection through the common parts of two regions and the intersecting line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point clouds automatically.
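
    The paper's dimension-extended 9-Intersection model for B-Rep components is not reproduced here; as a minimal illustration of the plain 9-Intersection idea it builds on, the sketch below uses the shapely library (an assumption, not part of the paper) to obtain the DE-9IM matrix for two planar regions and coarsely map it onto disjoint / meet / intersect.

        from shapely.geometry import Polygon

        # two illustrative planar regions (e.g. planar segments extracted from a point cloud)
        a = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
        b = Polygon([(2, 1), (6, 1), (6, 4), (2, 4)])

        # DE-9IM: a 3x3 matrix of intersection dimensions between the
        # interior/boundary/exterior of A and the interior/boundary/exterior of B
        matrix = a.relate(b)
        print(matrix)                       # e.g. '212101212' for two overlapping rectangles

        def classify(m):
            """Very coarse mapping of the DE-9IM string onto disjoint / meet / intersect."""
            if m[0] != 'F':
                return 'intersect'          # interiors share points
            if m[1] != 'F' or m[3] != 'F' or m[4] != 'F':
                return 'meet'               # only boundaries touch
            return 'disjoint'

        print(classify(matrix))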

  10. Output-sensitive 3D line integral convolution.

    PubMed

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance

  11. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Cañada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as those shown in this image can be used to predict both how wildfires will spread over the terrain and how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  12. 3D toroidal physics: testing the boundaries of symmetry breaking

    NASA Astrophysics Data System (ADS)

    Spong, Don

    2014-10-01

    Toroidal symmetry is an important concept for plasma confinement; it allows the existence of nested flux surface MHD equilibria and conserved invariants for particle motion. However, perfect symmetry is unachievable in realistic toroidal plasma devices. For example, tokamaks have toroidal ripple due to discrete field coils, optimized stellarators do not achieve exact quasi-symmetry, the plasma itself continually seeks lower energy states through helical 3D deformations, and reactors will likely have non-uniform distributions of ferritic steel near the plasma. Also, some level of designed-in 3D magnetic field structure is now anticipated for most concepts in order to lead to a stable, steady-state fusion reactor. Such planned 3D field structures can take many forms, ranging from tokamaks with weak 3D ELM-suppression fields to stellarators with more dominant 3D field structures. There is considerable interest in the development of unified physics models for the full range of 3D effects. Ultimately, the questions of how much symmetry breaking can be tolerated and how to optimize its design must be addressed for all fusion concepts. Fortunately, significant progress is underway in theory, computation and plasma diagnostics on many issues such as magnetic surface quality, plasma screening vs. amplification of 3D perturbations, 3D transport, influence on edge pedestal structures, MHD stability effects, modification of fast ion-driven instabilities, prediction of energetic particle heat loads on plasma-facing materials, effects of 3D fields on turbulence, and magnetic coil design. A closely coupled program of simulation, experimental validation, and design optimization is required to determine what forms and amplitudes of 3D shaping and symmetry breaking will be compatible with future fusion reactors. The development of models to address 3D physics and progress in these areas will be described. This work is supported both by the US Department of Energy under Contract DE

  13. Developing 3D SEM in a broad biological context

    PubMed Central

    Kremer, A; Lippens, S; Bartunkova, S; Asselbergh, B; Blanpain, C; Fendrych, M; Goossens, A; Holt, M; Janssens, S; Krols, M; Larsimont, J-C; Mc Guire, C; Nowack, MK; Saelens, X; Schertel, A; Schepens, B; Slezak, M; Timmerman, V; Theunis, C; Van Brempt, R; Visser, Y; GuÉRin, CJ

    2015-01-01

    When electron microscopy (EM) was introduced in the 1930s it gave scientists their first look into the nanoworld of cells. Over the last 80 years EM has vastly increased our understanding of the complex cellular structures that underlie the diverse functions that cells need to maintain life. One drawback that has been difficult to overcome was the inherent lack of volume information, mainly due to the limit on the thickness of sections that could be viewed in a transmission electron microscope (TEM). For many years scientists struggled to achieve three-dimensional (3D) EM using serial section reconstructions, TEM tomography, and scanning EM (SEM) techniques such as freeze-fracture. Although each technique yielded some special information, they required a significant amount of time and specialist expertise to obtain even a very small 3D EM dataset. Almost 20 years ago scientists began to exploit SEMs to image blocks of embedded tissues and perform serial sectioning of these tissues inside the SEM chamber. Using first focused ion beams (FIB) and subsequently robotic ultramicrotomes (serial block-face, SBF-SEM) microscopists were able to collect large volumes of 3D EM information at resolutions that could address many important biological questions, and do so in an efficient manner. We present here some examples of 3D EM taken from the many diverse specimens that have been imaged in our core facility. We propose that the next major step forward will be to efficiently correlate functional information obtained using light microscopy (LM) with 3D EM datasets to more completely investigate the important links between cell structures and their functions. Lay Description Life happens in three dimensions. For many years, first light, and then EM struggled to image the smallest parts of cells in 3D. With recent advances in technology and corresponding improvements in computing, scientists can now see the 3D world of the cell at the nanoscale. In this paper we present the

  14. Understanding Human Perception of Building Categories in Virtual 3d Cities - a User Study

    NASA Astrophysics Data System (ADS)

    Tutzauer, P.; Becker, S.; Niese, T.; Deussen, O.; Fritsch, D.

    2016-06-01

    Virtual 3D cities are becoming increasingly important as a means of visually communicating diverse urban-related information. To get a deeper understanding of a human's cognitive experience of virtual 3D cities, this paper presents a user study on the human ability to perceive building categories (e.g. residential home, office building, building with shops etc.) from geometric 3D building representations. The study reveals various dependencies between geometric properties of the 3D representations and the perceptibility of the building categories. Knowledge about which geometries are relevant, helpful or obstructive for perceiving a specific building category is derived. The importance and usability of such knowledge is demonstrated based on a perception-guided 3D building abstraction process.

  15. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  16. Mini 3D for shallow gas reconnaissance

    SciTech Connect

    Vallieres, T. des; Enns, D.; Kuehn, H.; Parron, D.; Lafet, Y.; Van Hulle, D.

    1996-12-31

    The Mini 3D project was undertaken by TOTAL and ELF with the support of CEPM (Comité d'Études Pétrolières et Marines) to define an economical method of obtaining 3D seismic HR data for shallow gas assessment. An experimental 3D survey was carried out with classical site survey techniques in the North Sea. From these data, 19 simulations were produced to compare different acquisition geometries ranging from dual, 600 m long cables to a single receiver. Results show that short offsets, low fold and very simple streamer positioning are sufficient to give a reliable 3D image of gas-charged bodies. The 3D data allow a much more accurate risk delineation than 2D HR data. Moreover, on financial grounds Mini 3D is comparable in cost to a classical HR 2D survey. In view of these results, such HR 3D should now be the standard for shallow gas surveying.

  17. FlexyDos3D: a deformable anthropomorphic 3D radiation dosimeter: radiation properties

    NASA Astrophysics Data System (ADS)

    De Deene, Y.; Skyt, P. S.; Hil, R.; Booth, J. T.

    2015-02-01

    Three dimensional radiation dosimetry has received growing interest with the implementation of highly conformal radiotherapy treatments. The radiotherapy community faces new challenges with the commissioning of image guided and image gated radiotherapy treatments (IGRT) and deformable image registration software. A new three dimensional anthropomorphically shaped flexible dosimeter, further called ‘FlexyDos3D’, has been constructed and a new fast optical scanning method has been implemented that enables scanning of irregular shaped dosimeters. The FlexyDos3D phantom can be actuated and deformed during the actual treatment. FlexyDos3D offers the additional advantage that it is easy to fabricate, is non-toxic and can be molded in an arbitrary shape with high geometrical precision. The dosimeter formulation has been optimized in terms of dose sensitivity. The influence of the casting material and oxygen concentration has also been investigated. The radiophysical properties of this new dosimeter are discussed including stability, spatial integrity, temperature dependence of the dosimeter during radiation, readout and storage, dose rate dependence and tissue equivalence. The first authors Y De Deene and P S Skyt made an equivalent contribution to the experimental work presented in this paper.

  18. Customised 3D Printing: An Innovative Training Tool for the Next Generation of Orbital Surgeons.

    PubMed

    Scawn, Richard L; Foster, Alex; Lee, Bradford W; Kikkawa, Don O; Korn, Bobby S

    2015-01-01

    Additive manufacturing or 3D printing is the process by which three dimensional data fields are translated into real-life physical representations. 3D printers create physical printouts using heated plastics in a layered fashion resulting in a three-dimensional object. We present a technique for creating customised, inexpensive 3D orbit models for use in orbital surgical training using 3D printing technology. These models allow trainee surgeons to perform 'wet-lab' orbital decompressions and simulate upcoming surgeries on orbital models that replicate a patient's bony anatomy. We believe this represents an innovative training tool for the next generation of orbital surgeons.

  19. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework can track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714
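
    The paper's state vector, process model, and measurement Jacobians are not given in the abstract; the Python sketch below shows only the generic Extended Kalman Filter predict/update cycle that such a 2D-landmark-to-3D-model fusion would be built on, with a toy constant-velocity example. All models and parameters are illustrative assumptions.

        import numpy as np

        def ekf_step(x, P, z, f, F, h, H, Q, R):
            """One Extended Kalman Filter cycle.
            x, P : state estimate and covariance (e.g. 3D head pose + animation params)
            z    : measurement (e.g. stacked 2D facial feature coordinates)
            f, F : process model and its Jacobian;  h, H : measurement model and its Jacobian
            Q, R : process and measurement noise covariances"""
            x_pred = f(x)                                       # predict
            F_k = F(x)
            P_pred = F_k @ P @ F_k.T + Q
            H_k = H(x_pred)                                     # update
            y = z - h(x_pred)                                   # innovation
            S = H_k @ P_pred @ H_k.T + R
            K = P_pred @ H_k.T @ np.linalg.inv(S)               # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
            return x_new, P_new

        # toy usage: constant-velocity state [u, v, du, dv] observed through its position
        dt = 1.0 / 30.0
        A = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
        f = lambda x: A @ x
        F = lambda x: A
        h = lambda x: x[:2]
        H = lambda x: np.hstack([np.eye(2), np.zeros((2, 2))])
        x, P = np.zeros(4), np.eye(4)
        x, P = ekf_step(x, P, np.array([1.0, 2.0]), f, F, h, H, 0.01 * np.eye(4), 0.5 * np.eye(2))
        print(x)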

  20. Graphical representations of adolescents' psychophysiological reactivity to social stressor tasks: Reliability and validity of the Chernoff Face approach and person-centered profiles for clinical use.

    PubMed

    De Los Reyes, Andres; Aldao, Amelia; Qasmieh, Noor; Dunn, Emily J; Lipton, Melanie F; Hartman, Catharina; Youngstrom, Eric A; Dougherty, Lea R; Lerner, Matthew D

    2017-04-01

    Low-cost methods exist for measuring physiology when clinically assessing adolescent social anxiety. Two barriers to widespread use involve lack of (a) physiological expertise among mental health professionals, and (b) techniques for modeling individual-level physiological profiles. We require a "bridge approach" for interpreting physiology that does not require users to have a physiological background to make judgments, and that is amenable to developing individual-level physiological profiles. One method, Chernoff Faces, involves graphically representing data using human facial features (eyes, nose, mouth, face shape), thus capitalizing on humans' ability to detect even subtle variations among facial features. We examined 327 adolescents from the Tracking Adolescents' Individual Lives Survey (TRAILS) study who completed baseline social anxiety self-reports and physiological assessments within the social scenarios of the Groningen Social Stressor Task (GSST). Using heart rate (HR) norms and Chernoff Faces, two naïve coders made judgments about graphically represented HR data and HR norms. For each adolescent, coders made four judgments about the features of two Chernoff Faces: (a) HR within the GSST and (b) age-matched HR norms. Coders' judgments reliably and accurately identified elevated HR relative to norms. Using latent class analyses, we identified three profiles of Chernoff Face judgments: (a) consistently below HR norms across scenarios (n = 193); (b) above HR norms mainly during speech making (n = 35); or (c) consistently above HR norms across scenarios (n = 99). Chernoff Face judgments displayed validity evidence in relation to self-reported social anxiety and resting HR variability. This study has important implications for implementing physiology within adolescent social anxiety assessments.
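
    The study's actual coding scheme and face construction are not reproduced here; the following matplotlib sketch only illustrates the Chernoff-style idea of mapping already-normalised physiological values onto facial features (face width and mouth curvature). The feature mapping, value ranges, and output file name are assumptions.

        import numpy as np
        import matplotlib.pyplot as plt
        from matplotlib.patches import Ellipse

        def chernoff_face(ax, hr_level, hr_norm_gap):
            """Map two normalised values in [0, 1] onto facial features:
            hr_level    -> face width (wider = higher heart rate)
            hr_norm_gap -> mouth curvature (frown = above age-matched norm)."""
            ax.add_patch(Ellipse((0, 0), width=1.0 + hr_level, height=2.0, fill=False))    # head
            for x in (-0.35, 0.35):
                ax.add_patch(Ellipse((x, 0.4), width=0.2, height=0.12, fill=False))        # eyes
            mouth_x = np.linspace(-0.4, 0.4, 50)
            curve = 0.5 - hr_norm_gap                           # positive = smile, negative = frown
            ax.plot(mouth_x, -0.6 + curve * (mouth_x ** 2 - 0.16), color="black")
            ax.set_xlim(-1.5, 1.5); ax.set_ylim(-1.5, 1.5); ax.set_aspect("equal"); ax.axis("off")

        fig, axes = plt.subplots(1, 2)
        chernoff_face(axes[0], hr_level=0.2, hr_norm_gap=0.1)   # calm, below norm
        chernoff_face(axes[1], hr_level=0.9, hr_norm_gap=0.9)   # elevated, above norm
        plt.savefig("chernoff_demo.png")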

  1. Improving Nearest Neighbour Search in 3d Spatial Access Method

    NASA Astrophysics Data System (ADS)

    Suhaibaha, A.; Rahman, A. A.; Uznir, U.; Anton, F.; Mioc, D.

    2016-10-01

    Nearest Neighbour (NN) search is one of the important queries and analyses for spatial applications. In normal practice, a spatial access method structure is used during Nearest Neighbour query execution to retrieve information from the database. However, most spatial access method structures still face unresolved issues such as overlapping nodes and repetitive data entries. This leads to excessive Input/Output (I/O) operations, which is inefficient for data retrieval. The situation becomes more critical when dealing with 3D data. The size of 3D data is usually large due to its detailed geometry and other attached information. In this research, a clustered 3D hierarchical structure is introduced as a 3D spatial access method structure. The structure is expected to improve the retrieval of Nearest Neighbour information for 3D objects. Several tests are performed for single Nearest Neighbour search and k Nearest Neighbour (kNN) search. The tests indicate that the clustered hierarchical structure is efficient in handling Nearest Neighbour queries compared to its competitor. From the results, the clustered hierarchical structure reduces repetitive data entries and the number of accessed pages. The proposed structure also produces minimal Input/Output operations. The query response time also outperforms that of the competitor. As a future outlook of this research, several possible applications are discussed and summarized.
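
    The paper's clustered 3D hierarchical structure is not detailed in the abstract; for comparison, the snippet below shows the query pattern against a standard hierarchical spatial index, SciPy's cKDTree, used here purely to illustrate single-NN and kNN queries over 3D points. The point set and query location are illustrative.

        import numpy as np
        from scipy.spatial import cKDTree

        # illustrative 3D object centroids (e.g. buildings in a city model)
        rng = np.random.default_rng(42)
        points = rng.uniform(0, 1000, size=(100_000, 3))

        tree = cKDTree(points)                    # hierarchical spatial index

        query = np.array([500.0, 500.0, 50.0])
        dist, idx = tree.query(query)             # single nearest neighbour
        dists, idxs = tree.query(query, k=5)      # k nearest neighbours (kNN)

        print("NN:", idx, "at distance", round(float(dist), 2))
        print("5-NN indices:", idxs)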

  2. Optical fabrication of lightweighted 3D printed mirrors

    NASA Astrophysics Data System (ADS)

    Herzog, Harrison; Segal, Jacob; Smith, Jeremy; Bates, Richard; Calis, Jacob; De La Torre, Alyssa; Kim, Dae Wook; Mici, Joni; Mireles, Jorge; Stubbs, David M.; Wicker, Ryan

    2015-09-01

    Direct Metal Laser Sintering (DMLS) and Electron Beam Melting (EBM) 3D printing technologies were utilized to create lightweight, optical grade mirrors out of AlSi10Mg aluminum and Ti6Al4V titanium alloys at the University of Arizona in Tucson. The mirror prototypes were polished to meet the λ/20 RMS and λ/4 P-V surface figure requirements. The intent of this project was to design topologically optimized mirrors that had a high specific stiffness and low surface displacement. Two models were designed using Altair Inspire software, and the mirrors had to endure the polishing process with the necessary stiffness to eliminate print-through. Mitigating porosity of the 3D printed mirror blanks was a challenge in the face of reconciling new printing technologies with traditional optical polishing methods. The prototypes underwent Hot Isostatic Press (HIP) and heat treatment to improve density, eliminate porosity, and relieve internal stresses. Metal 3D printing allows for nearly unlimited topological constraints on design and virtually eliminates the need for a machine shop when creating an optical quality mirror. This research can lead to an increase in mirror mounting support complexity in the manufacturing of lightweight mirrors and improve overall process efficiency. The project aspired to have many future applications of light weighted 3D printed mirrors, such as spaceflight. This paper covers the design/fab/polish/test of 3D printed mirrors, thermal/structural finite element analysis, and results.

  3. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference is that for game drivers this mapping cannot be choreographed by hand; it must be calculated automatically in real time without a significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
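
    The paper's driver performs DIBR inside the graphics pipeline; the following is only a minimal, CPU-side sketch of the core idea: one rendered view plus its depth buffer is warped into a left/right pair by shifting pixels horizontally in proportion to disparity. Hole filling, convergence control, and the actual z-buffer interpretation are omitted, and the frame data are hypothetical.

```python
# A minimal Depth-Image-Based Rendering (DIBR) sketch: produce a stereo pair from
# one image plus a depth map by horizontal pixel shifting.  Real drivers also handle
# occlusion holes and convergence; this only illustrates the basic warp.
import numpy as np

def dibr_pair(image, depth, max_disparity=12):
    """image: (H, W, 3) array; depth: (H, W) in [0, 1], 1 = nearest to the camera."""
    h, w = depth.shape
    disparity = (max_disparity * depth).astype(int)       # nearer pixels shift more
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(w)
    for row in range(h):
        lcol = np.clip(cols + disparity[row], 0, w - 1)   # shift right for the left eye
        rcol = np.clip(cols - disparity[row], 0, w - 1)   # shift left for the right eye
        left[row, lcol] = image[row, cols]
        right[row, rcol] = image[row, cols]
    return left, right

# Hypothetical frame: a gradient image with a "near" block in the centre.
img = np.tile(np.linspace(0, 255, 160, dtype=np.uint8)[None, :, None], (120, 1, 3))
z = np.zeros((120, 160)); z[40:80, 60:100] = 1.0
left_eye, right_eye = dibr_pair(img, z)
```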

  4. 3-D Visualizations At (Almost) No Expense

    NASA Astrophysics Data System (ADS)

    Sedlock, R. L.

    2003-12-01

    Like most teaching-oriented public universities, San José State University (part of the California State University system) currently faces severe budgetary constraints. These circumstances prohibit the construction of one or more Geo-Walls on-campus. Nevertheless, the Department of Geology has pursued alternatives that enable our students to benefit from 3-D visualizations such as those used with the Geo-Wall. This experience - a sort of virtual virtuality - depends only on the availability of a computer lab and an optional plotter. Starting in June 2003, we have used the methods described here with two diverse groups of participants: middle- and high-school teachers taking professional development workshops through grants funded by NSF and NASA, and regular university students enrolled in introductory earth science and geology laboratory courses. We use two types of three-dimensional images with our students: visualizations from the on-line Gallery of Virtual Topography (Steve Reynolds), and USGS digital topographic quadrangles that have been transformed into anaglyph files for viewing with 3-D glasses. The procedure for transforming DEMs into these anaglyph files, developed by Paul Morin, is available at http://geosun.sjsu.edu/~sedlock/anaglyph.html. The resulting images can be used with students in one of two ways. First, maps can be printed on a suitable plotter, laminated (optional but preferable), and used repeatedly with different classes. Second, the images can be viewed in school computer labs or by students on their own computers. Chief advantages of the plotter option are (1) full-size maps (single or tiled) viewable in their entirety, and (2) dependability (independent of Internet connections and electrical power). Chief advantages of the computer option are (1) minimal preparation time and no other needed resources, assuming a computer lab with Internet access, and (2) students can work with the images outside of regularly scheduled courses. Both
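
    Paul Morin's exact DEM-to-anaglyph procedure is hosted at the URL above and is not reproduced here; the sketch below only illustrates the underlying idea under simplifying assumptions: elevation drives a small horizontal parallax, and the two shifted grayscale relief views are packed into the red and cyan channels for viewing with 3-D glasses. The DEM is a synthetic hill, and the grayscale here is simply normalized elevation rather than a true shaded relief.

```python
# A simplified DEM-to-anaglyph sketch (not the referenced procedure): elevation drives
# horizontal parallax, and the two shifted views go into red and cyan channels.
import numpy as np

def dem_to_anaglyph(dem, max_shift_px=8):
    """dem: (H, W) elevations; returns an (H, W, 3) uint8 red/cyan anaglyph."""
    relief = (dem - dem.min()) / (np.ptp(dem) + 1e-9)          # normalized grayscale "relief"
    shift = (max_shift_px * relief).astype(int)                # higher ground shifts more
    h, w = dem.shape
    cols = np.arange(w)
    left = np.zeros_like(relief)
    right = np.zeros_like(relief)
    for row in range(h):
        left[row, np.clip(cols + shift[row], 0, w - 1)] = relief[row, cols]
        right[row, np.clip(cols - shift[row], 0, w - 1)] = relief[row, cols]
    rgb = np.stack([left, right, right], axis=-1)              # red = left eye, cyan = right eye
    return (255 * rgb).astype(np.uint8)

# Hypothetical DEM: a single Gaussian hill.
yy, xx = np.mgrid[0:200, 0:300]
hill = np.exp(-(((xx - 150) / 40.0) ** 2 + ((yy - 100) / 40.0) ** 2))
anaglyph = dem_to_anaglyph(hill)
```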

  5. 3-D Technology Approaches for Biological Ecologies

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Austin, Robert; U. S-China Physical-Oncology Sciences Alliance (PS-OA) Team

    Constructing three-dimensional (3-D) landscapes is an unavoidable issue in the deep study of biological ecologies, because at every scale in nature ecosystems are composed of complex 3-D environments and biological behaviors. A 3-D technology that could build complex ecosystems easily and mimic the in vivo microenvironment realistically, with flexible environmental controls, would be a powerful tool for exploration. For years, we have been using and developing different technologies for constructing 3-D micro landscapes for in vitro biophysics studies. Here, I review our past efforts, including probing cancer cell invasiveness with 3-D silicon-based Tepuis, constructing 3-D microenvironments for cell invasion and metastasis through polydimethylsiloxane (PDMS) soft lithography, and exploring optimized stenting positions for coronary bifurcation disease with 3-D wax printing and our latest home-designed 3-D bio-printer. Although 3-D technologies are not yet considered mature enough for arbitrary, easily designed and fabricated 3-D micro-ecological models, I hope the audience will sense their significance and the breakthroughs to be expected in the near future. This work was supported by the State Key Development Program for Basic Research of China (Grant No. 2013CB837200), the National Natural Science Foundation of China (Grant No. 11474345) and the Beijing Natural Science Foundation (Grant No. 7154221).

  6. 3D change detection - Approaches and applications

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Tian, Jiaojiao; Reinartz, Peter

    2016-12-01

    Due to the unprecedented development of sensors, platforms and algorithms for 3D data acquisition and generation, 3D spaceborne, airborne and close-range data, in the form of image-based and Light Detection and Ranging (LiDAR) based point clouds, Digital Elevation Models (DEM) and 3D city models, have become more accessible than ever before. Change detection (CD) or time-series data analysis in 3D has gained great attention due to its capability of providing volumetric dynamics to facilitate more applications and provide more accurate results. The state-of-the-art CD reviews aim to provide a comprehensive synthesis and to simplify the taxonomy of the traditional remote sensing CD techniques, which mainly sit within the boundary of 2D image/spectrum analysis, largely ignoring the particularities of the 3D aspects of the data. The inclusion of 3D data for change detection (termed 3D CD) not only provides a source with a different modality for analysis, but also transcends the border of traditional top-view 2D pixel/object-based analysis to highly detailed, oblique-view or voxel-based geometric analysis. This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academia and industry researchers who seek solutions for detecting and analyzing 3D dynamics of various objects of interest. We first describe the general considerations of 3D CD problems in different processing stages and identify CD types based on the information used, namely geometric comparison and geometric-spectral analysis. We then summarize relevant works and practices in urban, environmental, ecological and civil applications, among others. Given the broad spectrum of applications and different types of 3D data, we discuss important issues in 3D CD methods. Finally, we present concluding remarks on algorithmic aspects of 3D CD.
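
    The simplest geometric-comparison case the review mentions is differencing two co-registered elevation surfaces from different epochs. The sketch below shows that case under stated assumptions: perfect co-registration, synthetic data, and an arbitrary 1 m vertical threshold to absorb noise; real 3D CD workflows handle registration error, outliers, and object-level reasoning on top of this.

```python
# A minimal geometric-comparison sketch for 3D change detection: difference two
# co-registered DEM epochs and flag cells whose vertical change exceeds a noise
# threshold.  Threshold and data are illustrative only.
import numpy as np

def dem_change(dem_t0, dem_t1, threshold_m=1.0):
    diff = dem_t1 - dem_t0
    changed = np.abs(diff) > threshold_m
    return diff, changed

rng = np.random.default_rng(1)
dem0 = rng.normal(100.0, 0.2, size=(50, 50))      # epoch 1 (noise only)
dem1 = dem0 + rng.normal(0.0, 0.2, size=(50, 50))
dem1[20:30, 20:30] += 5.0                         # a new 5 m structure between epochs

diff, mask = dem_change(dem0, dem1)
print("changed cells:", int(mask.sum()),
      "mean change in changed area:", round(float(diff[mask].mean()), 2), "m")
```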

  7. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves the coupled partial differential equations describing reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated porous media. RT3D was developed from the single-species transport code MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in the groundwater head distribution. This report presents a set of tutorial problems designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by the RT3D code, and the code can then be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported to GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials, described below, provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.
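
    For readers unfamiliar with the governing equations RT3D solves, the sketch below reduces them to their simplest possible form: one dimension, one species, first-order decay, solved with explicit finite differences. This is only an illustration of the equation class; RT3D itself works on 3D grids, multiple species, its reaction packages, and MODFLOW flow fields, and the parameter values here are invented.

```python
# Illustrative 1D analogue of the equations RT3D solves:
#   dC/dt = D * d2C/dx2 - v * dC/dx - k * C
# Explicit finite differences with an upwind advection term; parameters are invented.
import numpy as np

nx, dx, dt, nsteps = 200, 1.0, 0.1, 2000
v, D, k = 0.5, 0.8, 0.01          # velocity [m/d], dispersion [m2/d], decay rate [1/d]
C = np.zeros(nx)
C[0] = 1.0                        # constant-concentration inlet boundary

for _ in range(nsteps):
    adv = -v * (C[1:-1] - C[:-2]) / dx                     # upwind advection
    disp = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2      # dispersion
    C[1:-1] += dt * (adv + disp - k * C[1:-1])
    C[0], C[-1] = 1.0, C[-2]                               # boundary conditions

print("C first drops below 0.5 at x =", float(np.argmax(C < 0.5) * dx), "m")
```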

  8. Potential of 3D City Models to assess flood vulnerability

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Bochow, Mathias; Schüttig, Martin; Nagel, Claus; Ross, Lutz; Kreibich, Heidi

    2016-04-01

    Vulnerability, as the product of exposure and susceptibility, is a key factor of the flood risk equation. Furthermore, the estimation of flood loss is very sensitive to the choice of the vulnerability model. Still, in contrast to elaborate hazard simulations, vulnerability is often considered in a simplified manner concerning the spatial resolution and geo-location of exposed objects as well as the susceptibility of these objects at risk. Usually, area-specific potential flood loss is quantified on the level of aggregated land-use classes, and both hazard intensity and resistance characteristics of affected objects are represented in highly simplified terms. We investigate the potential of 3D City Models and spatial features derived from remote sensing data to improve the differentiation of vulnerability in flood risk assessment. 3D City Models are based on CityGML, an application schema of the Geography Markup Language (GML), which represents the 3D geometry, 3D topology, semantics and appearance of objects on different levels of detail. As such, 3D City Models offer detailed spatial information which is useful to describe the exposure and to characterize the susceptibility of residential buildings at risk. This information is further consolidated with spatial features of the building stock derived from remote sensing data. Using this database, a spatially detailed flood vulnerability model is developed by means of data mining. Empirical flood damage data are used to derive and to validate flood susceptibility models for individual objects. We present first results from a prototype application in the city of Dresden, Germany. The vulnerability modeling based on 3D City Models and remote sensing data is compared i) to the generally accepted good engineering practice based on area specific loss potential and ii) to a highly detailed representation of flood vulnerability based on a building typology using urban structure types. Comparisons are drawn in terms of

  9. Creating Realistic 3D Graphics with Excel at High School--Vector Algebra in Practice

    ERIC Educational Resources Information Center

    Benacka, Jan

    2015-01-01

    The article presents the results of an experiment in which Excel applications that depict rotatable and sizable orthographic projection of simple 3D figures with face overlapping were developed with thirty gymnasium (high school) students of age 17-19 as an introduction to 3D computer graphics. A questionnaire survey was conducted to find out…
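
    The students' Excel formulas are not reproduced in the abstract; the sketch below carries out the same vector algebra in NumPy: rotate the vertices of a simple figure about two axes, then project orthographically by dropping the depth coordinate. The cube and angles are arbitrary examples.

```python
# Rotatable orthographic projection of a simple 3D figure (the vector algebra the
# Excel applications implement with worksheet formulas).  Figure and angles are
# arbitrary examples.
import numpy as np

def rotate_and_project(points, yaw_deg, pitch_deg):
    """points: (N, 3) vertices; returns (N, 2) screen coordinates."""
    a, b = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[ np.cos(a), 0, np.sin(a)],
                   [ 0,         1, 0        ],
                   [-np.sin(a), 0, np.cos(a)]])
    Rx = np.array([[1, 0,          0         ],
                   [0, np.cos(b), -np.sin(b)],
                   [0, np.sin(b),  np.cos(b)]])
    rotated = points @ Ry.T @ Rx.T
    return rotated[:, :2]            # orthographic projection: keep x, y and drop z

cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
screen = rotate_and_project(cube - 0.5, yaw_deg=30, pitch_deg=20)
print(np.round(screen, 2))
```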

  10. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to unphysiological kinematics of the knee implant. To assess the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. First, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Second, the user performs a rough initial preconfiguration of both prosthesis models so that the subsequent fine matching process has a reasonable starting point. An automated gradient-based fine matching process then determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum of the matching function is reached. To examine the spread of the final registration solutions, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine matching process).
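
    A hedged sketch of the 6-parameter fine matching loop described above: each rotation and translation parameter is nudged by a small step and the change is kept whenever the matching score improves. The paper's matching function operates on MR voxel data; here a simple negative mean point-to-point distance stands in, and the step sizes and point sets are invented for illustration.

```python
# Iterative 6-DOF fine matching (3 rotations + 3 translations) by coordinate ascent
# on a matching score.  The score here is a stand-in (negative mean nearest-point
# distance), not the paper's voxel-based matching function.
import numpy as np
from scipy.spatial import cKDTree

def transform(pts, p):
    ax, ay, az, tx, ty, tz = p
    cx, sx, cy, sy, cz, sz = np.cos(ax), np.sin(ax), np.cos(ay), np.sin(ay), np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return pts @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])

def fine_match(model, target, steps=(0.01, 0.5), iters=200):
    tree = cKDTree(target)
    score = lambda p: -tree.query(transform(model, p))[0].mean()
    p = np.zeros(6)                       # rough manual preconfiguration assumed done
    best = score(p)
    for _ in range(iters):
        improved = False
        for i in range(6):
            step = steps[0] if i < 3 else steps[1]   # radians for rotations, mm for translations
            for s in (+step, -step):
                trial = p.copy(); trial[i] += s
                val = score(trial)
                if val > best:
                    p, best, improved = trial, val, True
        if not improved:
            break
    return p, best

# Hypothetical data: the "target" is the model shifted and slightly rotated.
rng = np.random.default_rng(2)
model = rng.uniform(-20, 20, size=(300, 3))
target = transform(model, [0.05, -0.03, 0.02, 3.0, -2.0, 1.0])
params, final_score = fine_match(model, target)
print(np.round(params, 3))
```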

  11. Comprehending 3D Diagrams: Sketching to Support Spatial Reasoning.

    PubMed

    Gagnier, Kristin M; Atit, Kinnari; Ormand, Carol J; Shipley, Thomas F

    2016-11-25

    Science, technology, engineering, and mathematics (STEM) disciplines commonly illustrate 3D relationships in diagrams, yet these are often challenging for students. Failing to understand diagrams can hinder success in STEM because scientific practice requires understanding and creating diagrammatic representations. We explore a new approach to improving student understanding of diagrams that convey 3D relations that is based on students generating their own predictive diagrams. Participants' comprehension of 3D spatial diagrams was measured in a pre- and post-test design where students selected the correct 2D slice through 3D geologic block diagrams. Generating sketches that predicted the internal structure of a model led to greater improvement in diagram understanding than visualizing the interior of the model without sketching, or sketching the model without attempting to predict unseen spatial relations. In addition, we found a positive correlation between sketched diagram accuracy and improvement on the diagram comprehension measure. Results suggest that generating a predictive diagram facilitates students' abilities to make inferences about spatial relationships in diagrams. Implications for use of sketching in supporting STEM learning are discussed.

  12. Extra dimensions: 3d and time in pdf documentation

    NASA Astrophysics Data System (ADS)

    Graf, N. A.

    2008-07-01

    High energy physics is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide audience. In this talk, we present examples of HEP applications which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input. Using this technique, higher dimensional data, such as LEGO plots or time-dependent information can be included in PDF files. In principle, a complete event display, with full interactivity, can be incorporated into a PDF file. This would allow the end user not only to customize the view and representation of the data, but to access the underlying data itself.

  13. A 3D digital map of rat brain.

    PubMed

    Toga, A W; Santori, E M; Hazani, R; Ambach, K

    1995-01-01

    A three-dimensional (3D) computerized map of rat brain anatomy created with digital imaging techniques is described. Six male Sprague-Dawley rats, weighing 270-320 g, were used in the generation of this atlas. Their heads were frozen, and closely spaced cryosectional images were digitally captured. Each serial data set was organized into a digital volume and reoriented into a flat-skull position, and the volumes were brought into register with each other. A volume representative of the group following registration was chosen based on its anatomic correspondence with the other specimens as measured by image correlation coefficients and landmark matching. Mean positions of lambda, bregma, and the interaural plane of the group within the common coordinate system were used to transform the representative volume into a 3D map of rat neuroanatomy. Images reconstructed from this 3D map are available to the public via the Internet through anonymous file transfer protocol (FTP) and the World Wide Web. A complete description of the digital map is provided in a comprehensive set of sagittal planes (up to 0.031 mm spacing) containing stereotaxic reference grids. Sets of coronal and horizontal planes, resampled at the same increment, are also included. Specific anatomic features are identified in a second collection of images. Stylized anatomic boundaries and structural labels were incorporated into selected orthogonal planes. Electronic sharing and interactive use are benefits afforded by a digital format, but the foremost advantage of this 3D map is its whole-brain integrated representation of rat in situ neuroanatomy.

  14. 3D imaging of the mesospheric emissive layer

    NASA Astrophysics Data System (ADS)

    Nadjib Kouahla, Mohamed; Faivre, Michael; Moreels, Guy; Clairemidi, Jacques; Mougin-Sisini, Davy; Meriwether, John W.; Lehmacher, Gerald A.; Vidal, Erick; Veliz, Oskar

    A new and original stereo-imaging method is introduced to measure the altitude of the OH airglow layer and provide a 3D map of the altitude of the layer centroid. Near-IR photographs of the layer are taken at two sites 645 km apart. Each photograph is processed in order to invert the perspective effect and provide a satellite-type view of the layer. When superposed, the two views present a common diamond-shaped area. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a normalized cross-correlation coefficient. This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in July 2006 in Peru. The images were taken simultaneously at Cerro Cosmos (12° 09' 08.2" S, 75° 33' 49.3" W, altitude 4630 m) close to Huancayo and Cerro Verde Tellolo (16° 33' 17.6" S, 71° 39' 59.4" W, altitude 2330 m) close to Arequipa. 3D maps of the layer surface are retrieved. They are compared with pseudo-relief intensity maps of the same region. The mean altitude of the emission barycenter is located at 87.1 km on July 26 and 89.5 km on July 28. Comparable wavy relief features appear in the 3D and intensity maps.
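
    The matching step relies on a normalized cross-correlation coefficient between small windows of the two perspective-corrected views. The sketch below computes that coefficient for a single pair of windows; window size and data are illustrative, and the full method additionally searches over candidate windows in the common area.

```python
# Normalized cross-correlation between two image windows, the similarity measure used
# to pair points between the two airglow views.  Window size and data are illustrative.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized windows (returns -1..1)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(3)
window1 = rng.normal(size=(15, 15))                    # window around a candidate point in view 1
window2 = window1 + 0.2 * rng.normal(size=(15, 15))    # same structure seen from site 2, plus noise
window3 = rng.normal(size=(15, 15))                    # unrelated window

print("matched pair  :", round(ncc(window1, window2), 3))   # close to 1
print("unmatched pair:", round(ncc(window1, window3), 3))   # close to 0
```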

  15. Robust 3D reconstruction system for human jaw modeling

    NASA Astrophysics Data System (ADS)

    Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.

    1999-03-01

    This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. In order to evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time consuming. In this paper an integrated system has been developed to record the patient's occlusion using computer vision. Data is acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototype machine.

  16. Extra Dimensions: 3D and Time in PDF Documentation

    SciTech Connect

    Graf, Norman A.; /SLAC

    2011-11-10

    High energy physics is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide audience. In this talk, we present examples of HEP applications which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input. Using this technique, higher dimensional data, such as LEGO plots or time-dependent information can be included in PDF files. In principle, a complete event display, with full interactivity, can be incorporated into a PDF file. This would allow the end user not only to customize the view and representation of the data, but to access the underlying data itself.

  17. 3D measurement for rapid prototyping

    NASA Astrophysics Data System (ADS)

    Albrecht, Peter; Lilienblum, Tilo; Sommerkorn, Gerd; Michaelis, Bernd

    1996-08-01

    Optical 3-D measurement is an interesting approach for rapid prototyping: it is necessary both to obtain the 3-D data of an object and to check the manufactured object (quality control), and optical 3-D measurement can serve both purposes. Classical 3-D measurement procedures based on photogrammetry cause systematic errors at strongly curved surfaces or at steps in surfaces. One way to reduce these errors is to calculate the 3-D coordinates from several successively taken images, which yields higher spatial resolution and reduces the systematic errors at 'problem surfaces.' Another possibility is to process the measurement values with neural networks: a modified associative memory smooths and corrects the calculated 3-D coordinates using a priori knowledge about the measurement object.

  18. Getting in touch--3D printing in forensic imaging.

    PubMed

    Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

    2011-09-10

    With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes.
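
    The segmentation-to-mesh step described above is commonly done with a marching-cubes style surface extraction. The sketch below shows that step on a synthetic binary volume using scikit-image; the CT segmentation itself, mesh colouring, and export to a printer are outside the sketch, and the voxel spacing is an assumed value.

```python
# Segmented volume -> triangle mesh via marching cubes (the step between segmentation
# and 3D printing).  The volume is synthetic and the 0.5 mm voxel spacing is assumed.
import numpy as np
from skimage import measure

# Hypothetical segmented volume: a sphere of "bone" voxels inside a 64^3 grid.
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
volume = ((xx - 32)**2 + (yy - 32)**2 + (zz - 32)**2 < 20**2).astype(np.float32)

# Extract the 0.5 iso-surface; spacing carries the voxel size in millimetres.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5,
                                                       spacing=(0.5, 0.5, 0.5))
print(f"mesh with {len(verts)} vertices and {len(faces)} triangles")
```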

  19. The 3D Elevation Program: summary for Oregon

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  20. The 3D Elevation Program: summary for Missouri

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  1. Nonstationary 3D motion of an elastic spherical shell

    NASA Astrophysics Data System (ADS)

    Tarlakovskii, D. V.; Fedotenkov, G. V.

    2015-03-01

    A 3D model of motion of a thin elastic spherical Timoshenko shell under the action of arbitrarily distributed nonstationary pressure is considered. An approach for splitting the system of equations of 3D motion of the shell is proposed. The integral representations of the solution with kernels in the form of influence functions, which can be determined analytically by using series expansions in the eigenfunctions and the Laplace transform, are constructed. An algorithm for solving the problem on the action of nonstationary normal pressure on the shell is constructed and implemented. The obtained results find practical use in aircraft and rocket construction and in many other industrial fields where thin-walled shell structural members under nonstationary working conditions are widely used.

  2. The 3D Elevation Program: summary for Arizona

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  3. The 3D Elevation Program: summary for Maryland

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  4. The 3D Elevation Program: summary for Alabama

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The new 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A-16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  5. The 3D Elevation Program: Summary for New Jersey

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  6. The 3D Elevation Program: summary for Colorado

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  7. The 3D Elevation Program: summary for New Hampshire

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  8. The 3D Elevation Program: summary for North Carolina

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment (NEEA; Dewberry, 2011) evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  9. The 3D Elevation Program: summary for Georgia

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  10. The 3D Elevation Program: summary for South Dakota

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment (NEEA; Dewberry, 2011) evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The new 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  11. The 3D Elevation Program: summary for Kentucky

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  12. The 3D Elevation Program: summary for Arkansas

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  13. The 3D Elevation Program: summary for Nevada

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  14. The 3D Elevation Program: summary for Kansas

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  15. The 3D Elevation Program: summary for New Mexico

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 (table 1) for the conterminous United States and quality level 5 ifsar data (table 1) for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  16. The 3D Elevation Program: summary for Delaware

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  17. The 3D Elevation Program: summary for Montana

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The new 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  18. The 3D Elevation Program: summary for North Dakota

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  19. The 3D Elevation Program: summary for Connecticut

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  20. The 3D Elevation Program: summary for Washington

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  1. The 3D Elevation Program: summary for New York

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  2. The 3D Elevation Program: summary for Oklahoma

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment (NEEA; Dewberry, 2011) evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  3. The 3D Elevation Program: summary for Louisiana

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  4. The 3D Elevation Program: summary for Illinois

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  5. The 3D Elevation Program: summary for West Virginia

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  6. The 3D Elevation Program: summary for Florida

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The new 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the OMB Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  7. The 3D Elevation Program: summary for Maine

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  8. The 3D Elevation Program: summary for Utah

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  9. The 3D Elevation Program: summary for South Carolina

    USGS Publications Warehouse

    Carswell, William

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  10. The 3D Elevation Program: summary for Ohio

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation's natural and constructed features.

  11. The 3D Elevation Program: summary for Massachusetts

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  12. The 3D Elevation Program: summary for Mississippi

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  13. The 3D Elevation Program: summary for Tennessee

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  14. The 3D Elevation Program: summary for Indiana

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation's natural and constructed features.

  15. A Nonlinear Modal Aeroelastic Solver for FUN3D

    NASA Technical Reports Server (NTRS)

    Goldman, Benjamin D.; Bartels, Robert E.; Biedron, Robert T.; Scott, Robert C.

    2016-01-01

    A nonlinear structural solver has been implemented internally within the NASA FUN3D computational fluid dynamics code, allowing for some new aeroelastic capabilities. Using a modal representation of the structure, a set of differential or differential-algebraic equations is derived for general thin structures with geometric nonlinearities. ODEPACK and LAPACK routines are linked with FUN3D, and the nonlinear equations are solved at each CFD time step. The existing predictor-corrector method is retained, whereby the structural solution is updated after mesh deformation. The nonlinear solver is validated using a test case for a flexible aeroshell at transonic, supersonic, and hypersonic flow conditions. Agreement with linear theory is seen for the static aeroelastic solutions at relatively low dynamic pressures, but structural nonlinearities limit deformation amplitudes at high dynamic pressures. No flutter was found at any of the tested trajectory points, though limit-cycle oscillations (LCO) may be possible in the transonic regime.
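
    The abstract above outlines the coupling pattern: modal structural equations with geometric nonlinearities are integrated over each CFD time step, with the generalized aerodynamic forces supplied by the flow solver. The sketch below only illustrates that pattern under assumed names and an assumed cubic-stiffness nonlinearity; the actual FUN3D implementation links ODEPACK and LAPACK in Fortran and is not reproduced here.

    ```python
    # Hypothetical sketch of a modal structural update performed once per CFD
    # time step; names, the cubic stiffness term, and the use of SciPy's LSODA
    # integrator are illustrative assumptions, not the FUN3D implementation.
    import numpy as np
    from scipy.integrate import solve_ivp

    def modal_rhs(t, y, omega, zeta, k3, gen_forces):
        """Right-hand side for n modal oscillators with a cubic stiffness term."""
        n = len(omega)
        q, qdot = y[:n], y[n:]
        qddot = (gen_forces                   # generalized aero forces (from CFD)
                 - 2.0 * zeta * omega * qdot  # modal damping
                 - omega**2 * q               # linear stiffness
                 - k3 * q**3)                 # geometric (cubic) nonlinearity
        return np.concatenate([qdot, qddot])

    def advance_structure(q, qdot, gen_forces, dt, omega, zeta, k3):
        """Advance the modal coordinates over one CFD time step of length dt."""
        y0 = np.concatenate([q, qdot])
        sol = solve_ivp(modal_rhs, (0.0, dt), y0, method="LSODA",
                        args=(omega, zeta, k3, gen_forces), rtol=1e-8, atol=1e-10)
        n = len(omega)
        return sol.y[:n, -1], sol.y[n:, -1]

    # Example: two modes, aero forces held frozen over the step (predictor stage).
    omega = np.array([10.0, 25.0])        # modal frequencies [rad/s]
    zeta = np.array([0.01, 0.01])         # modal damping ratios
    k3 = np.array([50.0, 80.0])           # illustrative cubic stiffness terms
    q, qdot = np.zeros(2), np.zeros(2)
    q, qdot = advance_structure(q, qdot, np.array([1.0, 0.2]), 1e-3, omega, zeta, k3)
    ```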

  16. Enhanced LOD Concepts for Virtual 3d City Models

    NASA Astrophysics Data System (ADS)

    Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.

    2013-09-01

    Virtual 3D city models contain digital three-dimensional representations of city objects like buildings, streets, or technical infrastructure. Because the size and complexity of these models continuously grow, a Level of Detail (LoD) concept is indispensable: it must effectively support the partitioning of a complete model into alternative models of different complexity and provide metadata that address the informational content, complexity, and quality of each alternative model. After a short overview of various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates between, first, a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second, between the interior of a building and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of a UML model.
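
    As a reading aid for the GLoD/SLoD distinction described above, the following minimal sketch models a city object that carries alternative representations indexed by a geometric and a semantic level of detail, with exterior shell and interior kept separate. All class and field names are hypothetical and are not taken from CityGML or from the paper's UML model.

    ```python
    # Minimal, hypothetical sketch of the GLoD/SLoD idea; levels and fields
    # are illustrative assumptions, not part of the CityGML schema.
    from dataclasses import dataclass, field
    from enum import IntEnum
    from typing import List

    class GLoD(IntEnum):       # geometric refinement of a representation
        BLOCK = 0
        ROOF_SHAPE = 1
        DETAILED = 2

    class SLoD(IntEnum):       # semantic decomposition of a representation
        WHOLE_OBJECT = 0
        THEMATIC_SURFACES = 1
        COMPONENTS = 2

    @dataclass
    class Representation:
        glod: GLoD
        slod: SLoD
        geometry: object       # e.g. a mesh or solid, left abstract here

    @dataclass
    class Building:
        """A city object keeps alternative exterior and interior representations."""
        exterior_shell: List[Representation] = field(default_factory=list)
        interior: List[Representation] = field(default_factory=list)

        def best_exterior(self, max_glod: GLoD) -> Representation:
            """Pick the most detailed exterior model not exceeding a GLoD budget."""
            candidates = [r for r in self.exterior_shell if r.glod <= max_glod]
            return max(candidates, key=lambda r: (r.glod, r.slod))
    ```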

  17. The 3D Elevation Program: summary for Iowa

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  18. The 3D Elevation Program: summary for Wyoming

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  19. The 3D Elevation Program: summary for Pennsylvania

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  20. Photorefractive Polymers for Updateable 3D Displays

    DTIC Science & Technology

    2010-02-24

    Final Performance Report, dates covered 01-01-2007 to 11-30-2009, title "Photorefractive Polymers for Updateable 3D ...". During the tenure of this project a large area updateable 3D color display has been developed for the first time using a new co-polymer ... photorefractive polymers have been demonstrated. Moreover, a 6 inch × 6 inch sample was fabricated demonstrating the feasibility of making large area 3D ...

  1. 3D Microperfusion Model of ADPKD

    DTIC Science & Technology

    2015-10-01

    Annual report for award W81XWH-14-1-0304, "3D Microperfusion Model of ADPKD" (Principal Investigator: David L. Kaplan), dates covered 15 Sep 2014 - 14 Sep 2015. Excerpt: ... Stratasys 3D printer. PDMS was cast in the negative molds in order to create permanent biocompatible plastic masters (SmoothCast 310). All goals of task ...

  2. 3D carotid plaque MR Imaging

    PubMed Central

    Parker, Dennis L.

    2015-01-01

    SYNOPSIS There has been significant progress made in 3D carotid plaque magnetic resonance imaging techniques in recent years. 3D plaque imaging clearly represents the future in clinical use. With effective flow suppression techniques, choices of different contrast weighting acquisitions, and time-efficient imaging approaches, 3D plaque imaging offers flexible imaging plane and view angle analysis, large coverage, multi-vascular beds capability, and even can be used in fast screening. PMID:26610656

  3. 3-D Extensions for Trustworthy Systems

    DTIC Science & Technology

    2011-01-01

    3-D Extensions for Trustworthy Systems (Invited Paper), Ted Huffmire, Timothy Levin, Cynthia Irvine, Ryan Kastner and Timothy Sherwood ... address these problems, we propose an approach to trustworthy system development based on 3-D integration, an emerging chip fabrication technique in ... which two or more integrated circuit dies are fabricated individually and then combined into a single stack using vertical conductive posts. With 3-D ...

  4. Hardware Trust Implications of 3-D Integration

    DTIC Science & Technology

    2010-12-01

    enhancing a commodity processor with a variety of security functions. This paper examines the 3-D design approach and provides an analysis concluding ... of key components. The question addressed by this paper is, "Can a 3-D control plane provide useful secure services when it is conjoined with an ... untrustworthy computation plane?" Design-level investigation of this question yields a definite yes. This paper explores 3-D applications and their ...

  5. Digital holography and 3-D imaging.

    PubMed

    Banerjee, Partha; Barbastathis, George; Kim, Myung; Kukhtarev, Nickolai

    2011-03-01

    This feature issue on Digital Holography and 3-D Imaging comprises 15 papers on digital holographic techniques and applications, computer-generated holography and encryption techniques, and 3-D display. It is hoped that future work in the area leads to innovative applications of digital holography and 3-D imaging to biology and sensing, and to the development of novel nonlinear dynamic digital holographic techniques.

  6. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer using an additive print process with PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  7. Large-Scale Expansion of the Face Representation in Somatosensory Areas of the Lateral Sulcus Following Spinal Cord Injuries in Monkeys

    PubMed Central

    Tandon, Shashank; Kambi, Niranjan; Lazar, Leslee; Mohammed, Hisham; Jain, Neeraj

    2009-01-01

    Transection of dorsal columns of the spinal cord in adult monkeys results in large-scale expansion of the face inputs into the deafferented hand region in the primary somatosensory cortex (area 3b) and the ventroposterior nucleus of thalamus. Here we determined if the upstream cortical areas, secondary somatosensory (S2) and parietal ventral (PV) areas, also undergo reorganization following lesions of the dorsal columns. Areas S2, PV and 3b were mapped after long-term unilateral lesions of the dorsal columns at cervical levels in adult macaque monkeys. In areas S2 and PV, we found neurons responding to touch on the face in regions where normally responses to touch on the hand and other body parts are seen. In the reorganized parts of S2 and PV inputs from the chin as well as other parts of the face were observed, whereas, in area 3b only the chin inputs expand into the deafferented regions. The results show that deafferentations lead to a more widespread brain reorganization than previously known. The data also show that reorganization in areas S2 and PV shares a common substrate with area 3b, but there are specific features that emerge in S2 and PV. PMID:19776287

  8. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-07

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.
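
    The core reconstruction step described above, beamforming a whole volume from a single diverging-wave transmission emitted by a virtual source behind a matrix array, can be illustrated with a basic 3D delay-and-sum sketch. The array geometry, sampling parameters, and function names below are assumptions for illustration and do not reproduce the authors' customized hardware or processing chain.

    ```python
    # Hedged sketch of 3D delay-and-sum beamforming for one diverging-wave
    # transmission; sound speed, sampling rate, and names are assumptions.
    import numpy as np

    def beamform_volume(rf, elem_xyz, virt_src, voxels, c=1540.0, fs=10e6):
        """rf: (n_elements, n_samples) received data for one transmission.
        elem_xyz: (n_elements, 3) element positions; virt_src: (3,) virtual
        source behind the probe; voxels: (n_voxels, 3) points to reconstruct."""
        n_elem, n_samp = rf.shape
        # Transmit path: from the virtual source to each voxel (diverging wave).
        t_tx = np.linalg.norm(voxels - virt_src, axis=1) / c        # (n_voxels,)
        volume = np.zeros(len(voxels))
        for e in range(n_elem):
            # Receive path: from each voxel back to element e.
            t_rx = np.linalg.norm(voxels - elem_xyz[e], axis=1) / c  # (n_voxels,)
            idx = np.clip(((t_tx + t_rx) * fs).astype(int), 0, n_samp - 1)
            volume += rf[e, idx]                                     # coherent sum
        return volume
    ```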

  9. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  10. An aerial 3D printing test mission

    NASA Astrophysics Data System (ADS)

    Hirsch, Michael; McGuire, Thomas; Parsons, Michael; Leake, Skye; Straub, Jeremy

    2016-05-01

    This paper provides an overview of an aerial 3D printing technology, its development and its testing. This technology is potentially useful in its own right. In addition, this work advances the development of a related in-space 3D printing technology. A series of aerial 3D printing test missions, used to test the aerial printing technology, are discussed. Through completing these test missions, the design for an in-space 3D printer may be advanced. The current design for the in-space 3D printer involves focusing thermal energy to heat an extrusion head and allow for the extrusion of molten print material. Plastics can be used as well as composites including metal, allowing for the extrusion of conductive material. A variety of experiments will be used to test this initial 3D printer design. High altitude balloons will be used to test the effects of microgravity on 3D printing, as well as parabolic flight tests. Zero pressure balloons can be used to test the effect of long 3D printing missions subjected to low temperatures. Vacuum chambers will be used to test 3D printing in a vacuum environment. The results will be used to adapt a current prototype of an in-space 3D printer. Then, a small scale prototype can be sent into low-Earth orbit as a 3-U cube satellite. With the ability to 3D print in space demonstrated, future missions can launch production hardware through which the sustainability and durability of structures in space will be greatly improved.

  11. Immersive 3D Geovisualization in Higher Education

    ERIC Educational Resources Information Center

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  12. A 3D Geostatistical Mapping Tool

    SciTech Connect

    Weiss, W. W.; Stevenson, Graig; Patel, Ketan; Wang, Jun

    1999-02-09

    This software provides accurate 3D reservoir modeling tools and high quality 3D graphics for PC platforms enabling engineers and geologists to better comprehend reservoirs and consequently improve their decisions. The mapping algorithms are fractals, kriging, sequential Gaussian simulation, and three nearest neighbor methods.
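
    Of the mapping algorithms listed above, the nearest neighbor family is the simplest to illustrate. The sketch below grids scattered reservoir samples by assigning each grid node the value of its closest sample; it is a generic illustration, not code from the described tool, and the variable names and example data are assumptions.

    ```python
    # Hedged sketch of nearest-neighbor gridding of scattered well data onto a
    # 3D grid; all names and example values are illustrative assumptions.
    import numpy as np
    from scipy.spatial import cKDTree

    def nearest_neighbor_grid(sample_xyz, sample_vals, grid_xyz):
        """Assign each grid node the value of its closest scattered sample."""
        tree = cKDTree(sample_xyz)
        _, nearest = tree.query(grid_xyz, k=1)
        return sample_vals[nearest]

    # Example: porosity samples mapped onto a small 10 x 10 x 5 reservoir grid.
    rng = np.random.default_rng(0)
    samples = rng.uniform(0, 100, size=(50, 3))
    porosity = rng.uniform(0.05, 0.30, size=50)
    xs, ys, zs = np.meshgrid(np.linspace(0, 100, 10),
                             np.linspace(0, 100, 10),
                             np.linspace(0, 100, 5), indexing="ij")
    grid = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
    grid_porosity = nearest_neighbor_grid(samples, porosity, grid)
    ```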

  13. 3D Printing. What's the Harm?

    ERIC Educational Resources Information Center

    Love, Tyler S.; Roy, Ken

    2016-01-01

    Health concerns from 3D printing were first documented by Stephens, Azimi, Orch, and Ramos (2013), who found that commercially available 3D printers were producing hazardous levels of ultrafine particles (UFPs) and volatile organic compounds (VOCs) when plastic materials were melted through the extruder. UFPs are particles less than 100 nanometers…

  14. Topology dictionary for 3D video understanding.

    PubMed

    Tung, Tony; Matsuyama, Takashi

    2012-08-01

    This paper presents a novel approach that achieves 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min). Moreover, it is tedious to browse data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted patterns or training sequences. The model relies on 1) topology description and classification using Reeb graphs, and 2) a Markov motion graph to represent topology change states. We show that the use of Reeb graphs as the high-level topology descriptor is relevant. It allows the dictionary to automatically model complex sequences, whereas other strategies would require prior knowledge on the shape and topology of the captured subjects. Our approach serves to encode 3D video sequences, and can be applied for content-based description and summarization of 3D video sequences. Furthermore, topology class labeling during a learning process enables the system to perform content-based event recognition. Experiments were carried out on various 3D videos. We showcase an application for 3D video progressive summarization using the topology dictionary.
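
    One ingredient described above, the Markov motion graph over topology change states, can be illustrated independently of the Reeb-graph classification step. The sketch below assumes per-frame topology class labels are already available and simply estimates a transition matrix between them; the smoothing constant and all names are illustrative assumptions rather than the paper's formulation.

    ```python
    # Hedged sketch of a "Markov motion graph" built from per-frame topology
    # class labels; the labels would come from Reeb-graph classification,
    # which is not implemented here.
    import numpy as np

    def markov_motion_graph(frame_labels, n_states, smoothing=1e-3):
        """Count state-to-state transitions and normalize rows to probabilities."""
        counts = np.full((n_states, n_states), smoothing)
        for a, b in zip(frame_labels[:-1], frame_labels[1:]):
            counts[a, b] += 1.0
        return counts / counts.sum(axis=1, keepdims=True)

    # Example: a 3D video whose frames alternate between two topology classes
    # (say, "arms at sides" = 0 and "arms raised" = 1) with an occasional third.
    labels = [0, 0, 0, 1, 1, 0, 0, 2, 2, 1, 0, 0]
    P = markov_motion_graph(labels, n_states=3)
    print(P.round(2))   # row i gives the probability of moving from state i to j
    ```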

  15. 3D elastic control for mobile devices.

    PubMed

    Hachet, Martin; Pouderoux, Joachim; Guitton, Pascal

    2008-01-01

    To increase the input space of mobile devices, the authors developed a proof-of-concept 3D elastic controller that easily adapts to mobile devices. This embedded device improves the completion of high-level interaction tasks such as visualization of large documents and navigation in 3D environments. It also opens new directions for tomorrow's mobile applications.

  16. 3D Printing of Molecular Models

    ERIC Educational Resources Information Center

    Gardner, Adam; Olson, Arthur

    2016-01-01

    Physical molecular models have played a valuable role in our understanding of the invisible nano-scale world. We discuss 3D printing and its use in producing models of the molecules of life. Complex biomolecular models, produced from 3D printed parts, can demonstrate characteristics of molecular structure and function, such as viral self-assembly,…

  17. 3D Printed Block Copolymer Nanostructures

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Turner, C. Heath; Rupar, Paul A.; Jenkins, Alexander H.; Bara, Jason E.

    2015-01-01

    The emergence of 3D printing has dramatically advanced the availability of tangible molecular and extended solid models. Interestingly, there are few nanostructure models available both commercially and through other do-it-yourself approaches such as 3D printing. This is unfortunate given the importance of nanotechnology in science today. In this…

  18. Infrastructure for 3D Imaging Test Bed

    DTIC Science & Technology

    2007-05-11

    analysis. (c.) Real time detection & analysis of human gait: using a video camera we capture walking human silhouette for pattern modeling and gait ... analysis. Fig. 5 shows the scanning result that is fed into a Geomagic software tool for 3D meshing. Fig. 5: 3D scanning result ...

  19. Wow! 3D Content Awakens the Classroom

    ERIC Educational Resources Information Center

    Gordon, Dan

    2010-01-01

    From her first encounter with stereoscopic 3D technology designed for classroom instruction, Megan Timme, principal at Hamilton Park Pacesetter Magnet School in Dallas, sensed it could be transformative. Last spring, when she began pilot-testing 3D content in her third-, fourth- and fifth-grade classrooms, Timme wasn't disappointed. Students…

  20. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  1. Pathways for Learning from 3D Technology

    ERIC Educational Resources Information Center

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2012-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion"…

  2. 3D, or Not to Be?

    ERIC Educational Resources Information Center

    Norbury, Keith

    2012-01-01

    It may be too soon for students to be showing up for class with popcorn and gummy bears, but technology similar to that behind the 3D blockbuster movie "Avatar" is slowly finding its way into college classrooms. 3D classroom projectors are taking students on fantastic voyages inside the human body, to the ruins of ancient Greece--even to faraway…

  3. Static & Dynamic Response of 3D Solids

    SciTech Connect

    Lin, Jerry

    1996-07-15

    NIKE3D is a large-deformation, 3D finite element code used to obtain the resulting displacements and stresses from multi-body static and dynamic structural thermo-mechanics problems with sliding interfaces. Many nonlinear and temperature-dependent constitutive models are available.

  4. Parameterization of 3D brain structures for statistical shape analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Litao; Jiang, Tianzi

    2004-05-01

    Statistical Shape Analysis (SSA) is a powerful tool for noninvasive studies of pathophysiology and diagnosis of brain diseases. It also provides a shape constraint for the segmentation of brain structures. There are two key problems in SSA: the representation of shapes and their alignments. The widely used parameterized representations are obtained by preserving angles or areas, and the alignments of shapes are achieved by rotating the parameter net. However, representations preserving angles or areas do not really guarantee the anatomical correspondence of brain structures. In this paper, we incorporate shape-based landmarks into parameterization of banana-like 3D brain structures to address this problem. Firstly, we get the triangulated surface of the object and extract two landmarks from the mesh, i.e. the ends of the banana-like object. Then the surface is parameterized by creating a continuous and bijective mapping from the surface to a spherical surface based on a heat conduction model. The correspondence of shapes is achieved by mapping the two landmarks to the north and south poles of the sphere and using an extracted origin orientation to select the dateline during parameterization. We apply our approach to the parameterization of the lateral ventricle, and a multi-resolution shape representation is obtained by using the Discrete Fourier Transform.
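
    A simplified way to see the heat-conduction idea above is to fix the two landmark vertices to values 0 and 1 and solve a discrete steady-state heat (Laplace) problem on the mesh graph; the solution then behaves like a latitude coordinate with the landmarks at the poles. The sketch below shows only this step on a toy graph, with all inputs assumed; the longitude assignment and the full spherical mapping of the paper are omitted.

    ```python
    # Hedged, simplified sketch of a landmark-anchored "heat" coordinate on a
    # mesh graph; the graph, landmark indices, and names are assumptions.
    import numpy as np

    def landmark_latitude(n_vertices, edges, landmark_a, landmark_b):
        """Solve L * u = 0 with u[landmark_a] = 0 and u[landmark_b] = 1."""
        # Build the combinatorial graph Laplacian.
        L = np.zeros((n_vertices, n_vertices))
        for i, j in edges:
            L[i, i] += 1.0; L[j, j] += 1.0
            L[i, j] -= 1.0; L[j, i] -= 1.0
        fixed = {landmark_a: 0.0, landmark_b: 1.0}
        free = [v for v in range(n_vertices) if v not in fixed]
        # Move the fixed-vertex contributions to the right-hand side.
        rhs = -sum(L[np.ix_(free, [v])] * val for v, val in fixed.items())
        u = np.zeros(n_vertices)
        u[landmark_b] = 1.0
        u[free] = np.linalg.solve(L[np.ix_(free, free)], rhs.ravel())
        return u   # values in [0, 1]; can be rescaled to a spherical latitude

    # Tiny example: a 5-vertex chain with landmarks at the two ends.
    u = landmark_latitude(5, [(0, 1), (1, 2), (2, 3), (3, 4)], 0, 4)
    print(u.round(2))   # increases monotonically from 0 to 1 along the chain
    ```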

  5. BEAMS3D Neutral Beam Injection Model

    SciTech Connect

    Lazerson, Samuel

    2014-04-14

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous velocity reduction, and pitch angle scattering are modeled with the ADAS atomic physics database [1]. Benchmark calculations are presented to validate the collisionless particle orbits, neutral beam injection model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields.

  6. Fabrication of 3D Silicon Sensors

    SciTech Connect

    Kok, A.; Hansen, T.E.; Hansen, T.A.; Lietaer, N.; Summanwar, A.; Kenney, C.; Hasi, J.; Da Via, C.; Parker, S.I.; /Hawaii U.

    2012-06-06

    Silicon sensors with a three-dimensional (3-D) architecture, in which the n and p electrodes penetrate through the entire substrate, have many advantages over planar silicon sensors including radiation hardness, fast time response, active edge and dual readout capabilities. The fabrication of 3D sensors is, however, rather complex. In recent years, there have been worldwide activities on 3D fabrication. SINTEF, in collaboration with the Stanford Nanofabrication Facility, has successfully fabricated the original (single-sided, double-column type) 3D detectors in two prototype runs, and the third run is now ongoing. This paper reports the status of this fabrication work and the resulting yield. The work of other groups, such as the development of double-sided 3D detectors, is also briefly reported.

  7. 2D/3D switchable displays

    NASA Astrophysics Data System (ADS)

    Dekker, T.; de Zwart, S. T.; Willemsen, O. H.; Hiddink, M. G. H.; IJzerman, W. L.

    2006-02-01

    A prerequisite for a wide market acceptance of 3D displays is the ability to switch between 3D and full resolution 2D. In this paper we present a robust and cost effective concept for an auto-stereoscopic switchable 2D/3D display. The display is based on an LCD panel, equipped with switchable LC-filled lenticular lenses. We will discuss 3D image quality, with the focus on display uniformity. We show that slanting the lenticulars in combination with a good lens design can minimize non-uniformities in our 20" 2D/3D monitors. Furthermore, we introduce fractional viewing systems as a very robust concept to further improve uniformity in the case slanting the lenticulars and optimizing the lens design are not sufficient. We will discuss measurements and numerical simulations of the key optical characteristics of this display. Finally, we discuss 2D image quality, the switching characteristics and the residual lens effect.

  8. 6D Interpretation of 3D Gravity

    NASA Astrophysics Data System (ADS)

    Herfray, Yannick; Krasnov, Kirill; Scarinci, Carlos

    2017-02-01

    We show that 3D gravity, in its pure connection formulation, admits a natural 6D interpretation. The 3D field equations for the connection are equivalent to 6D Hitchin equations for the Chern–Simons 3-form in the total space of the principal bundle over the 3-dimensional base. Turning this construction around one gets an explanation of why the pure connection formulation of 3D gravity exists. More generally, we interpret 3D gravity as the dimensional reduction of the 6D Hitchin theory. To this end, we show that any SU(2) invariant closed 3-form in the total space of the principal SU(2) bundle can be parametrised by a connection together with a 2-form field on the base. The dimensional reduction of the 6D Hitchin theory then gives rise to 3D gravity coupled to a topological 2-form field.

  9. Biocompatible 3D Matrix with Antimicrobial Properties.

    PubMed

    Ion, Alberto; Andronescu, Ecaterina; Rădulescu, Dragoș; Rădulescu, Marius; Iordache, Florin; Vasile, Bogdan Ștefan; Surdu, Adrian Vasile; Albu, Madalina Georgiana; Maniu, Horia; Chifiriuc, Mariana Carmen; Grumezescu, Alexandru Mihai; Holban, Alina Maria

    2016-01-20

    The aim of this study was to develop, characterize and assess the biological activity of a new regenerative 3D matrix with antimicrobial properties, based on collagen (COLL), hydroxyapatite (HAp), β-cyclodextrin (β-CD) and usnic acid (UA). The prepared 3D matrix was characterized by Scanning Electron Microscopy (SEM), Fourier Transform Infrared Microscopy (FT-IRM), Transmission Electron Microscopy (TEM), and X-ray Diffraction (XRD). In vitro qualitative and quantitative analyses performed on cultured diploid cells demonstrated that the 3D matrix is biocompatible, allowing the normal development and growth of MG-63 osteoblast-like cells and exhibited an antimicrobial effect, especially on the Staphylococcus aureus strain, explained by the particular higher inhibitory activity of usnic acid (UA) against Gram positive bacterial strains. Our data strongly recommend the obtained 3D matrix to be used as a successful alternative for the fabrication of three dimensional (3D) anti-infective regeneration matrix for bone tissue engineering.

  10. Interdisciplinary Data Fusion for Diachronic 3d Reconstruction of Historic Sites

    NASA Astrophysics Data System (ADS)

    Micoli, L. L.; Gonizzi Barsanti, S.; Guidi, G.

    2017-02-01

    In recent decades, 3D reconstruction has progressively become a tool to show archaeological and architectural monuments in their current state and presumed past aspect, and to predict their future evolution. 3D representations through time can be useful for studying and preserving the memory of Cultural Heritage and for planning the maintenance and promotion of historical sites. This paper presents a case study, at the architectural and urban scale, based on the methodological approach for CH time-varying representations proposed by the JPI-CH European project Cultural Heritage Through Time (CHT2). The work focuses on the area of the Roman circus of Milan, for which both thorough philological research based on several sources and a 3D survey campaign of the still accessible remains were conducted, aiming at obtaining a representation of the monumental area in three different ages.

  11. Dual-Color 3D Superresolution Microscopy by Combined Spectral-Demixing and Biplane Imaging

    PubMed Central

    Winterflood, Christian M.; Platonova, Evgenia; Albrecht, David; Ewers, Helge

    2015-01-01

    Multicolor three-dimensional (3D) superresolution techniques allow important insight into the relative organization of cellular structures. While a number of innovative solutions have emerged, multicolor 3D techniques still face significant technical challenges. In this Letter we provide a straightforward approach to single-molecule localization microscopy imaging in three dimensions and two colors. We combine biplane imaging and spectral-demixing, which eliminates a number of problems, including color cross-talk, chromatic aberration effects, and problems with color registration. We present 3D dual-color images of nanoscopic structures in hippocampal neurons with a 3D compound resolution routinely achieved only in a single color. PMID:26153696

  12. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when the two local regularities L-MSDA and L-MSDSM are combined, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.
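
    The weighted combination reported above (90% L-MSDA, 10% L-MSDSM) can be sketched as a scoring step over candidate reconstructions, assuming the local angles and segment magnitudes of each candidate have already been extracted. The normalization and example data below are illustrative assumptions, not the paper's exact formulation.

    ```python
    # Hedged sketch of combining two regularity measures with a 90%/10%
    # weighting; extraction of angles/segments per candidate is assumed.
    import numpy as np

    def combined_regularity_scores(angle_sets, segment_sets, w_msda=0.9, w_msdsm=0.1):
        """angle_sets[i], segment_sets[i]: local angles / segment lengths of
        candidate i. Lower combined score = more regular reconstruction."""
        msda = np.array([np.std(a) for a in angle_sets])      # L-MSDA measure
        msdsm = np.array([np.std(s) for s in segment_sets])   # L-MSDSM measure
        # Normalize each measure to [0, 1] so the weights are comparable.
        norm = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-12)
        return w_msda * norm(msda) + w_msdsm * norm(msdsm)

    # Example: three candidate reconstructions; candidate 1 is the most regular.
    angles = [np.deg2rad([88, 92, 95, 85]), np.deg2rad([90, 90, 91, 89]),
              np.deg2rad([60, 110, 95, 75])]
    segments = [np.array([1.0, 1.2, 0.9]), np.array([1.0, 1.0, 1.05]),
                np.array([0.5, 1.8, 1.1])]
    best = int(np.argmin(combined_regularity_scores(angles, segments)))
    ```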

  13. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  14. Pathways for Learning from 3D Technology

    PubMed Central

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2016-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D presentations could provide additional sensorial cues (e.g., depth cues) that lead to a higher sense of being surrounded by the stimulus; a connection through general interest such that 3D presentation increases a viewer’s interest that leads to greater attention paid to the stimulus (e.g., "involvement"); and a connection through discomfort, with the 3D goggles causing discomfort that interferes with involvement and thus with memory. The memories of 396 participants who viewed two-dimensional (2D) or 3D movies at movie theaters in Southern California were tested. Within three days of viewing a movie, participants filled out an online anonymous questionnaire that queried them about their movie content memories, subjective movie-going experiences (including emotional reactions and "presence") and demographic backgrounds. The responses to the questionnaire were subjected to path analyses in which several different links between 3D presentation to memory (and other variables) were explored. The results showed there were no effects of 3D presentation, either directly or indirectly, upon memory. However, the largest effects of 3D presentation were on emotions and immersion, with 3D presentation leading to reduced positive emotions, increased negative emotions and lowered immersion, compared to 2D presentations. PMID:28078331

  15. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  16. Augmented Reality vs Virtual Reality for 3D Object Manipulation.

    PubMed

    Krichenbauer, Max; Yamamoto, Goshiro; Taketomi, Takafumi; Sandor, Christian; Kato, Hirokazu

    2017-01-25

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance measured in task completion time on a 9 degrees of freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion time in AR over VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5% on average compared to AR (p < 0.024). Surprisingly, a similar effect occurred when using a mouse: users were about 17.3% slower in VR than in AR (p < 0.04). Mouse and 3D input device produced similar task completion times in each condition (AR or VR) respectively. We further found no differences in reported comfort.

  17. Investigation of out of plane compressive strength of 3D printed sandwich composites

    NASA Astrophysics Data System (ADS)

    Dikshit, V.; Yap, Y. L.; Goh, G. D.; Yang, H.; Lim, J. C.; Qi, X.; Yeong, W. Y.; Wei, J.

    2016-07-01

    In this study, the 3D printing technique was utilized to manufacture the sandwich composites. A composite filament fabrication based 3D printer was used to print the face-sheet, and an inkjet 3D printer was used to print the sandwich core structure. This work aims to study the compressive failure of the sandwich structure manufactured by using these two techniques. Two different types of core structures were investigated with the same type of face-sheet configuration. The core structures were printed using photopolymer, while the face-sheet was made using nylon/glass. The out-of-plane compressive strength of the 3D printed sandwich composite structure has been examined in accordance with ASTM standards C365/C365-M and presented in this paper.

  18. A 2D range Hausdorff approach to 3D facial recognition.

    SciTech Connect

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
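
    A Hausdorff-style comparison restricted to a 2D range image grid, in the spirit of the reduction described above, can be sketched as follows: each valid probe pixel looks up the closest template range value in a small local window, and a rank (partial Hausdorff) statistic of these per-pixel minima gives a robust directed distance. The window size, quantile, and names are assumptions for illustration and do not reproduce the published algorithm.

    ```python
    # Hedged sketch of a Hausdorff-style comparison on aligned 2D range images;
    # window, quantile, and names are illustrative assumptions.
    import numpy as np

    def directed_range_hausdorff(probe, template, window=2, quantile=0.9):
        """probe, template: aligned 2D range images (np.nan marks missing data).
        Returns a robust directed distance from probe to template."""
        h, w = probe.shape
        per_pixel = []
        for y in range(h):
            for x in range(w):
                if np.isnan(probe[y, x]):
                    continue                      # skip holes in the probe scan
                y0, y1 = max(0, y - window), min(h, y + window + 1)
                x0, x1 = max(0, x - window), min(w, x + window + 1)
                diffs = np.abs(template[y0:y1, x0:x1] - probe[y, x])
                if np.all(np.isnan(diffs)):
                    continue                      # no template data nearby
                per_pixel.append(np.nanmin(diffs))
        # A partial (quantile) Hausdorff statistic is robust to outliers/occlusion.
        return np.quantile(per_pixel, quantile)

    def symmetric_range_hausdorff(a, b, **kw):
        return max(directed_range_hausdorff(a, b, **kw),
                   directed_range_hausdorff(b, a, **kw))
    ```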

  19. Development and application of a 3D Cartesian grid Euler method

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Aftosmis, Michael J.; Berger, Marsha J.; Wong, Michael D.

    1995-01-01

    This report describes recent progress in the development and application of 3D Cartesian grid generation and Euler flow solution techniques. Improvements to flow field grid generation algorithms, geometry representations, and geometry refinement criteria are presented, including details of a procedure for correctly identifying and resolving extremely thin surface features. An initial implementation of automatic flow field refinement is also presented. Results for several 3D multi-component configurations are provided and discussed.

  20. The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

    NASA Astrophysics Data System (ADS)

    Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard

    2010-04-01

    The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point cloud) in GPS-denied regions from a sequence of co-boresighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for the spatial and geometric alignment of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range, and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range, and intensity estimates of imagery collected during urban terrain mapping using a co-boresighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point clouds for the drive-through data will also be presented.
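
    The registration step described above relies on ICP producing a rotation and translation between successive 3D images. The sketch below is a basic point-to-point ICP (nearest-neighbor correspondences plus an SVD rigid fit), included only to illustrate how such transforms are estimated; the paper uses its own ICP variation, and all names here are assumptions.

    ```python
    # Hedged sketch of basic point-to-point ICP; not the authors' exact variant.
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst."""
        src_c, dst_c = src.mean(0), dst.mean(0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, dst_c - R @ src_c

    def icp(source, target, iters=30, tol=1e-6):
        """Align source points (N, 3) to target points (M, 3)."""
        tree = cKDTree(target)
        src = source.copy()
        R_total, t_total = np.eye(3), np.zeros(3)
        prev_err = np.inf
        for _ in range(iters):
            dists, idx = tree.query(src)         # closest-point correspondences
            R, t = best_rigid_transform(src, target[idx])
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
            err = dists.mean()
            if abs(prev_err - err) < tol:
                break
            prev_err = err
        return R_total, t_total, src
    ```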

  1. 3D bioprinting of tissues and organs.

    PubMed

    Murphy, Sean V; Atala, Anthony

    2014-08-01

    Additive manufacturing, otherwise known as three-dimensional (3D) printing, is driving major innovations in many areas, such as engineering, manufacturing, art, education and medicine. Recent advances have enabled 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. 3D bioprinting is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with non-biological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. 3D bioprinting has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. Other applications include developing high-throughput 3D-bioprinted tissue models for research, drug discovery and toxicology.

  2. Medical 3D Printing for the Radiologist.

    PubMed

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article.

  3. Medical 3D Printing for the Radiologist

    PubMed Central

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A.; Cai, Tianrun; Kumamaru, Kanako K.; George, Elizabeth; Wake, Nicole; Caterson, Edward J.; Pomahac, Bohdan; Ho, Vincent B.; Grant, Gerald T.

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. ©RSNA, 2015 PMID:26562233

  4. 3D imaging in forensic odontology.

    PubMed

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

    This paper describes the investigation of a new 3D capture method for acquiring bite mark injuries on human skin and for their subsequent forensic analysis. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed, and subsequent forensic analysis of the mark is problematic when a 3D structure is recorded into a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested the precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimises the amount of angular distortion; such a system therefore has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis produces the least intra-operator error. A second set tested and demonstrated which method of image capture produces the least inter-operator error and visual distortion. In a third set, the effects of angular distortion on 2D and 3D methods of image capture were evaluated.
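
    The following back-of-the-envelope sketch (not taken from the paper) illustrates the angular-distortion effect that the third set of experiments evaluates: a flat feature photographed off-perpendicular is foreshortened by the cosine of the camera tilt, whereas a 3D surface capture records the coordinates themselves. The 40 mm feature size and the tilt angles are assumed values.

      # Illustrative foreshortening of a planar feature in a tilted 2D photograph.
      import math

      def foreshortened_length(true_length_mm, camera_tilt_deg):
          """Apparent length of a flat feature when the camera is tilted off-perpendicular."""
          return true_length_mm * math.cos(math.radians(camera_tilt_deg))

      true_len = 40.0  # assumed bite mark dimension, in mm
      for tilt in (0, 10, 20, 30):
          apparent = foreshortened_length(true_len, tilt)
          error_pct = 100.0 * (true_len - apparent) / true_len
          print(f"tilt {tilt:2d} deg: apparent {apparent:5.1f} mm (error {error_pct:4.1f}%)")
      # A 3D capture stores surface coordinates directly, so measured distances
      # do not depend on the viewing angle of the capture device.

    At a 20-degree tilt the apparent length is already about 6% short, which is the kind of systematic bias a 3D capture avoids.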

  5. NUBEAM developments and 3d halo modeling

    NASA Astrophysics Data System (ADS)

    Gorelenkova, M. V.; Medley, S. S.; Kaye, S. M.

    2012-10-01

    Recent developments related to the 3D halo model in the NUBEAM code are described. To provide a reliable halo-neutral source for diagnostic simulation, the TRANSP/NUBEAM code has been enhanced with a full implementation of ADAS atomic-physics ground-state and excited-state data for hydrogenic beams and mixed-species plasma targets. The ADAS codes and database provide the density and temperature dependence of the atomic data, as well as the collective nature of the state excitation process. To populate the 3D halo output with sufficient statistical resolution, the capability to control the statistics of fast-ion charge-exchange (CX) modeling and of thermal halo launch has been added to NUBEAM. The 3D halo neutral model is based on modifying and extending the beam-aligned "beam in box" 3D Cartesian grid, which includes the neutral beam itself, 3D fast-neutral densities due to CX of partially slowed-down fast ions in the beam halo region, 3D thermal-neutral densities due to CX deposition, and a fast-neutral recapture source. More details on the design of the 3D halo simulation will be presented.
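
    As a toy illustration (not the NUBEAM implementation) of the "beam in box" idea, the following sketch bins weighted Monte Carlo neutral birth points onto a beam-aligned 3D Cartesian grid to form a density output. The grid extents, cell counts, marker weights and random sample are all assumptions made only for the example.

      # Toy deposition of charge-exchange neutral birth points onto a 3D Cartesian grid.
      import numpy as np

      rng = np.random.default_rng(0)
      n_markers = 100_000
      # Assumed box aligned with the beam: x along the beam axis, y/z transverse (metres).
      edges = (np.linspace(0.0, 2.0, 41),    # 40 cells along the beam
               np.linspace(-0.3, 0.3, 25),   # 24 cells in y
               np.linspace(-0.3, 0.3, 25))   # 24 cells in z

      # Monte Carlo marker birth positions and statistical weights (particles/s each).
      positions = np.column_stack([rng.uniform(e[0], e[-1], n_markers) for e in edges])
      weights = rng.exponential(1.0e12, n_markers)

      counts, _ = np.histogramdd(positions, bins=edges, weights=weights)
      cell_volume = np.prod([np.diff(e)[0] for e in edges])  # uniform cells here
      halo_density = counts / cell_volume                    # particles / (s * m^3)
      print(halo_density.shape, halo_density.max())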

  6. Optically rewritable 3D liquid crystal displays.

    PubMed

    Sun, J; Srivastava, A K; Zhang, W; Wang, L; Chigrinov, V G; Kwok, H S

    2014-11-01

    The optically rewritable liquid crystal display (ORWLCD) is a concept based on an optically addressed bi-stable display that needs no power to hold an image once it has been uploaded. Recently, demand for 3D image displays has increased enormously. Several attempts have been made to achieve 3D images on the ORWLCD, but all of them involve highly complex image processing at both the hardware and software levels. In this Letter, we disclose a concept for the 3D-ORWLCD in which the given image is divided into three parts with different optic axes. A quarter-wave plate is placed on top of the ORWLCD to modify the light emerging from the different domains of the image in different ways. Thereafter, Polaroid glasses can be used to visualize the 3D image. The 3D image can be refreshed on the 3D-ORWLCD in one step with a suitable ORWLCD printer and image processing; with easy image refreshing and good image quality, such displays can therefore be applied to many applications, e.g., 3D bi-stable displays, security elements, etc.
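
    The following Jones-calculus sketch (illustrative, not reproduced from the Letter) shows how a single quarter-wave plate maps linearly polarised light from domains with different optic-axis orientations onto distinct output polarisation states that passive glasses can then separate. The 45-degree fast axis and the 0/45/90-degree domain angles are assumptions for the example.

      # Quarter-wave plate acting on linearly polarised light from three image domains.
      import numpy as np

      def rot(theta):
          c, s = np.cos(theta), np.sin(theta)
          return np.array([[c, -s], [s, c]])

      def quarter_wave_plate(fast_axis_angle):
          """Jones matrix of a QWP with its fast axis at the given angle (radians)."""
          retarder = np.diag([1.0, 1.0j])          # quarter-wave retardance
          R = rot(fast_axis_angle)
          return R @ retarder @ R.T

      def linear_polarisation(angle):
          return np.array([np.cos(angle), np.sin(angle)], dtype=complex)

      qwp = quarter_wave_plate(np.pi / 4)          # fast axis at 45 degrees (assumed)
      for deg in (0, 45, 90):                      # assumed domain optic-axis angles
          e_out = qwp @ linear_polarisation(np.radians(deg))
          print(f"domain at {deg:2d} deg -> output Jones vector {np.round(e_out, 3)}")
      # The 0- and 90-degree domains emerge with opposite circular polarisations,
      # which the two lenses of passive 3D glasses can separate; the 45-degree
      # domain lies along the fast axis and passes through unchanged.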

  7. The medial scaffold of 3D unorganized point clouds.

    PubMed

    Leymarie, Frederic F; Kimia, Benjamin B

    2007-02-01

    We introduce the notion of the medial scaffold, a hierarchical organization of the medial axis of a 3D shape in the form of a graph constructed from special medial curves connecting special medial points. A key advantage of the scaffold is that it captures the qualitative aspects of shape in a hierarchical and tightly co