Science.gov

Sample records for 3d facial animation

  1. 3D animation of facial plastic surgery based on computer graphics

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Zhao, Yan

    2013-12-01

    More and more people, especially women, desire to be more beautiful than ever. To some extent this has become possible, because facial plastic surgery has been practiced since the early 20th century and even earlier, when doctors mainly treated facial war injuries. However, the post-operative result is not always satisfying, since patients cannot preview an animation of the outcome beforehand. In this paper, by combining facial plastic surgery and computer graphics, a novel method is presented for simulating the post-operative appearance and demonstrating the modified face from different viewpoints. The 3D human face data are obtained using 3D fringe pattern imaging systems and CT imaging systems and then converted into the STL (STereo Lithography) file format, which is made up of small 3D triangular primitives. The triangular mesh is reconstructed using a hash function, and the front-most triangles in depth are selected with a ray-casting technique. During simulation, mesh deformation is based on this front triangular mesh, and it deforms the region of interest rather than individual control points. Experiments on a face model show that the proposed 3D animation of facial plastic surgery can effectively demonstrate the simulated post-operative appearance.

  2. 3D facial expression modeling for recognition

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Jain, Anil K.; Dass, Sarat C.

    2005-03-01

    Current two-dimensional image based face recognition systems encounter difficulties with large variations in facial appearance due to the pose, illumination and expression changes. Utilizing 3D information of human faces is promising for handling the pose and lighting variations. While the 3D shape of a face does not change due to head pose (rigid) and lighting changes, it is not invariant to the non-rigid facial movement and evolution, such as expressions and aging effect. We propose a facial surface matching framework to match multiview facial scans to a 3D face model, where the (non-rigid) expression deformation is explicitly modeled for each subject, resulting in a person-specific deformation model. The thin plate spline (TPS) is applied to model the deformation based on the facial landmarks. The deformation is applied to the 3D neutral expression face model to synthesize the corresponding expression. Both the neutral and the synthesized 3D surface models are used to match a test scan. The surface registration and matching between a test scan and a 3D model are achieved by a modified Iterative Closest Point (ICP) algorithm. Preliminary experimental results demonstrate that the proposed expression modeling and recognition-by-synthesis schemes improve the 3D matching accuracy.
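
    A minimal Python/NumPy sketch of the thin plate spline (TPS) deformation step described above: landmark displacements on a neutral face drive a smooth warp of all mesh vertices. Function names, array shapes and the use of SciPy's RBFInterpolator are illustrative assumptions, not the authors' implementation.

      # Hedged sketch (not the authors' code): a TPS-style warp of a neutral 3D
      # face, driven by landmark displacements, using SciPy's RBF interpolator
      # with the thin-plate-spline kernel. Shapes: vertices (V, 3), landmarks (L, 3).
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def tps_warp(neutral_vertices, neutral_landmarks, expression_landmarks):
          """Move neutral landmarks to their expression positions and
          interpolate the displacement smoothly over all mesh vertices."""
          displacements = expression_landmarks - neutral_landmarks          # (L, 3)
          tps = RBFInterpolator(neutral_landmarks, displacements,
                                kernel="thin_plate_spline", smoothing=0.0)
          return neutral_vertices + tps(neutral_vertices)                   # (V, 3)

      # toy usage with random data
      rng = np.random.default_rng(0)
      verts = rng.normal(size=(500, 3))
      lm_neutral = rng.normal(size=(20, 3))
      lm_smile = lm_neutral + 0.05 * rng.normal(size=(20, 3))
      warped = tps_warp(verts, lm_neutral, lm_smile)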

  3. Anatomy of emotion: a 3D study of facial mimicry.

    PubMed

    Ferrario, V F; Sforza, C

    2007-01-01

    Alterations in facial motion severely impair the quality of life and social interaction of patients, and an objective grading of facial function is necessary. A method for the non-invasive detection of 3D facial movements was developed. Sequences of six standardized facial movements (maximum smile; free smile; surprise with closed mouth; surprise with open mouth; right side eye closure; left side eye closure) were recorded in 20 healthy young adults (10 men, 10 women) using an optoelectronic motion analyzer. For each subject, 21 cutaneous landmarks were identified by 2-mm reflective markers, and their 3D movements during each facial animation were computed. Three repetitions of each expression were recorded (within-session error), and four separate sessions were used (between-session error). To assess the within-session error, the technical error of the measurement (random error, TEM) was computed separately for each sex, movement and landmark. To assess the between-session repeatability, the standard deviation among the mean displacements of each landmark (four independent sessions) was computed for each movement. TEM for the single landmarks ranged between 0.3 and 9.42 mm (intrasession error). The sex- and movement-related differences were statistically significant (two-way analysis of variance, p=0.003 for sex comparison, p=0.009 for the six movements, p<0.001 for the sex x movement interaction). Among four different (independent) sessions, the left eye closure had the worst repeatability, the right eye closure had the best one; the differences among various movements were statistically significant (one-way analysis of variance, p=0.041). In conclusion, the current protocol demonstrated a sufficient repeatability for a future clinical application. Great care should be taken to assure a consistent marker positioning in all the subjects.
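
    A small illustration of the technical error of measurement (TEM) statistic mentioned above, in its common Dahlberg form for duplicate measurements; the study's exact computation over three repetitions is not given here, so this is an assumed simplification.

      # Hedged illustration: Dahlberg's form of the technical error of measurement
      # for duplicate measurements; not the study's exact multi-repetition formula.
      import numpy as np

      def tem_duplicates(x1, x2):
          """TEM for n paired repeated measurements: sqrt(sum(d^2) / (2n))."""
          d = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
          return np.sqrt(np.sum(d ** 2) / (2 * d.size))

      # e.g. the same landmark displacement (mm) measured in two sessions
      print(tem_duplicates([10.2, 8.9, 15.1], [10.6, 9.3, 14.5]))  # ~0.34 mm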

  4. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714
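
    A hedged sketch of the fusion step: a plain (linear) Kalman predict/update cycle of the kind an Extended Kalman Filter runs when combining per-frame 2D landmark measurements with a 3D face model. The state layout (one facial parameter plus its velocity), matrices and noise levels are toy assumptions, not the paper's actual filter.

      # Hedged sketch: a linear Kalman predict/update cycle standing in for the
      # EKF fusion of 2D landmark measurements with a 3D face model. The toy state
      # is one facial parameter (e.g. eyelid opening) and its velocity.
      import numpy as np

      def kf_predict(x, P, F, Q):
          return F @ x, F @ P @ F.T + Q

      def kf_update(x, P, z, H, R):
          y = z - H @ x                          # innovation
          S = H @ P @ H.T + R                    # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
          return x + K @ y, (np.eye(len(x)) - K @ H) @ P

      dt = 1.0 / 30.0                            # assumed frame interval
      F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity motion model
      H = np.array([[1.0, 0.0]])                 # we observe the parameter directly
      Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
      x, P = np.zeros(2), np.eye(2)
      for z in [0.10, 0.15, 0.22]:               # noisy per-frame measurements
          x, P = kf_predict(x, P, F, Q)
          x, P = kf_update(x, P, np.array([z]), H, R)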

  5. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  6. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  7. Modeling 3D facial shape from DNA.

    PubMed

    Claes, Peter; Liberton, Denise K; Daniels, Katleen; Rosana, Kerri Matthes; Quillen, Ellen E; Pearson, Laurel N; McEvoy, Brian; Bauchet, Marc; Zaidi, Arslan A; Yao, Wei; Tang, Hua; Barsh, Gregory S; Absher, Devin M; Puts, David A; Rocha, Jorge; Beleza, Sandra; Pereira, Rinaldo W; Baynam, Gareth; Suetens, Paul; Vandermeulen, Dirk; Wagner, Jennifer K; Boster, James S; Shriver, Mark D

    2014-03-01

    Human facial diversity is substantial, complex, and largely scientifically unexplained. We used spatially dense quasi-landmarks to measure face shape in population samples with mixed West African and European ancestry from three locations (United States, Brazil, and Cape Verde). Using bootstrapped response-based imputation modeling (BRIM), we uncover the relationships between facial variation and the effects of sex, genomic ancestry, and a subset of craniofacial candidate genes. The facial effects of these variables are summarized as response-based imputed predictor (RIP) variables, which are validated using self-reported sex, genomic ancestry, and observer-based facial ratings (femininity and proportional ancestry) and judgments (sex and population group). By jointly modeling sex, genomic ancestry, and genotype, the independent effects of particular alleles on facial features can be uncovered. Results on a set of 20 genes showing significant effects on facial features provide support for this approach as a novel means to identify genes affecting normal-range facial features and for approximating the appearance of a face from genetic markers. PMID:24651127

  8. Modeling 3D Facial Shape from DNA

    PubMed Central

    Claes, Peter; Liberton, Denise K.; Daniels, Katleen; Rosana, Kerri Matthes; Quillen, Ellen E.; Pearson, Laurel N.; McEvoy, Brian; Bauchet, Marc; Zaidi, Arslan A.; Yao, Wei; Tang, Hua; Barsh, Gregory S.; Absher, Devin M.; Puts, David A.; Rocha, Jorge; Beleza, Sandra; Pereira, Rinaldo W.; Baynam, Gareth; Suetens, Paul; Vandermeulen, Dirk; Wagner, Jennifer K.; Boster, James S.; Shriver, Mark D.

    2014-01-01

    Human facial diversity is substantial, complex, and largely scientifically unexplained. We used spatially dense quasi-landmarks to measure face shape in population samples with mixed West African and European ancestry from three locations (United States, Brazil, and Cape Verde). Using bootstrapped response-based imputation modeling (BRIM), we uncover the relationships between facial variation and the effects of sex, genomic ancestry, and a subset of craniofacial candidate genes. The facial effects of these variables are summarized as response-based imputed predictor (RIP) variables, which are validated using self-reported sex, genomic ancestry, and observer-based facial ratings (femininity and proportional ancestry) and judgments (sex and population group). By jointly modeling sex, genomic ancestry, and genotype, the independent effects of particular alleles on facial features can be uncovered. Results on a set of 20 genes showing significant effects on facial features provide support for this approach as a novel means to identify genes affecting normal-range facial features and for approximating the appearance of a face from genetic markers. PMID:24651127

  9. Facial-paralysis diagnostic system based on 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee

    2015-05-01

    The diagnostic process for facial paralysis relies on qualitative assessment for classification and treatment planning. This results in inconsistent assessments that can potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera - the Kinect 360 - and an implementation of Active Appearance Models (AAM). We also propose a quantitative assessment for facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of the quantitative assessment system for diagnosing facial paralysis.

  10. 3D face recognition based on matching of facial surfaces

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, Beatriz A.; Kober, Vitaly

    2015-09-01

    Face recognition is an important task in pattern recognition and computer vision. In this work a method for 3D face recognition in the presence of facial expression and pose variations is proposed. The method uses 3D shape data without color or texture information. A new matching algorithm is suggested, based on conformal mapping of the original facial surfaces onto a Riemannian manifold, followed by comparison of conformal and isometric invariants computed in the manifold. Experimental results are presented using common 3D face databases that contain a significant amount of expression and pose variation.

  11. 3-D Animation of Typhoon Bopha

    NASA Video Gallery

    This 3-D animation of NASA's TRMM satellite data showed Typhoon Bopha tracking over the Philippines on Dec. 3 and moving into the Sulu Sea on Dec. 4, 2012. TRMM saw heavy rain (red) was falling at ...

  12. 2D/3D image (facial) comparison using camera matching.

    PubMed

    Goos, Mirelle I M; Alberink, Ivo B; Ruifrok, Arnout C C

    2006-11-10

    A problem in forensic facial comparison of images of perpetrators and suspects is that distances between fixed anatomical points in the face, which form a good starting point for objective, anthropometric comparison, vary strongly according to the position and orientation of the camera. In the case of a cooperating suspect, a 3D image may be taken using, e.g., a laser scanning device. By projecting the 3D image onto a 2D image with the suspect's head in the same pose as that of the perpetrator, using the same focal length and pixel aspect ratio, numerical comparison of (ratios of) distances between fixed points becomes feasible. An experiment was performed in which, starting from two 3D scans and one 2D image of two colleagues, male and female, and using seven fixed anatomical locations in the face, comparisons were made for the matching and non-matching cases. Using this method, the non-matching pair could not be distinguished from the matching pair of faces. Facial expression and image resolution were all more or less optimal, and the results of the study are not encouraging for the use of anthropometric arguments in the identification process. More research needs to be done, though, on larger sets of facial comparisons. PMID:16337353
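
    A hedged Python/NumPy sketch of the idea above: project 3D facial landmarks through a simple pinhole camera and compare scale-free ratios of inter-landmark distances against those measured in a 2D image. The camera model, landmark count and noise are invented for illustration.

      # Hedged sketch: pinhole projection of 3D facial landmarks and comparison of
      # scale-free inter-landmark distance ratios with a 2D image. All data and
      # camera parameters are invented for illustration.
      import numpy as np
      from itertools import combinations

      def project(points_3d, focal, aspect=1.0):
          """Project (N, 3) camera-frame points (z > 0) to (N, 2) image coordinates."""
          x, y, z = points_3d.T
          return np.stack([focal * x / z, focal * aspect * y / z], axis=1)

      def distance_ratios(points_2d):
          d = np.array([np.linalg.norm(points_2d[i] - points_2d[j])
                        for i, j in combinations(range(len(points_2d)), 2)])
          return d / d.max()                      # invariant to overall image scale

      rng = np.random.default_rng(1)
      scan_landmarks = rng.normal([0, 0, 600], [40, 50, 15], size=(7, 3))  # mm
      projected = project(scan_landmarks, focal=1000.0)
      image_landmarks = projected + rng.normal(0, 1.0, projected.shape)    # "perpetrator"
      print(np.abs(distance_ratios(projected) - distance_ratios(image_landmarks)).max())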

  13. Coupled Dictionary Learning for the Detail-Enhanced Synthesis of 3-D Facial Expressions.

    PubMed

    Liang, Haoran; Liang, Ronghua; Song, Mingli; He, Xiaofei

    2016-04-01

    The desire to reconstruct 3-D face models with expressions from 2-D face images fosters increasing interest in addressing the problem of face modeling. This task is important and challenging in the field of computer animation. Facial contours and wrinkles are essential to generate a face with a certain expression; however, these details are generally ignored or are not seriously considered in previous studies on face model reconstruction. Thus, we employ coupled radial basis function networks to derive an intermediate 3-D face model from a single 2-D face image. To optimize the 3-D face model further through landmarks, a coupled dictionary that is related to 3-D face models and their corresponding 3-D landmarks is learned from the given training set through local coordinate coding. Another coupled dictionary is then constructed to bridge the 2-D and 3-D landmarks for the transfer of vertices on the face model. As a result, the final 3-D face can be generated with the appropriate expression. In the testing phase, the 2-D input faces are converted into 3-D models that display different expressions. Experimental results indicate that the proposed approach to facial expression synthesis can obtain model details more effectively than previous methods can.

  14. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in the x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which would facilitate the analysis of dynamic motion during facial animations. PMID:23218511
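
    A minimal sketch of the kind of comparison reported above: per-axis discrepancies and the mean Euclidean distance between manually digitised and automatically tracked 3D landmarks. Arrays and noise levels are illustrative.

      # Hedged sketch: per-axis discrepancy and mean Euclidean distance between
      # manually digitised and automatically tracked 3D landmarks (toy data).
      import numpy as np

      def landmark_discrepancy(manual, tracked):
          diff = manual - tracked                            # (N, 3)
          per_axis = np.abs(diff).mean(axis=0)               # mean |dx|, |dy|, |dz|
          mean_distance = np.linalg.norm(diff, axis=1).mean()
          return per_axis, mean_distance

      rng = np.random.default_rng(2)
      manual = rng.normal(size=(23, 3))                      # 23 landmarks
      tracked = manual + rng.normal(0.0, 0.1, size=(23, 3))  # simulated tracking noise
      print(landmark_discrepancy(manual, tracked))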

  15. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in the x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which would facilitate the analysis of dynamic motion during facial animations.

  16. Languages and interfaces for facial animation

    SciTech Connect

    Magnenat-Thalmann, N.

    1995-05-01

    This paper describes high-level tools for specifying, controlling, and synchronizing temporal and spatial characteristics for 3D animation of facial expressions. The proposed approach consists of hierarchical levels of control. Specification of expressions, phonemes, emotions, sentences, and head movements by means of a high-level language is shown. The various aspects of synchronization are also emphasized. Then, the association of these controls with different interactive devices and media, which allows the animator greater flexibility and freedom, is discussed. Experiments with input accessories such as the keyboard of a music synthesizer and gestures from the DataGlove are illustrated.

  17. MOM3D/EM-ANIMATE - MOM3D WITH ANIMATION CODE

    NASA Technical Reports Server (NTRS)

    Shaeffer, J. F.

    1994-01-01

    MOM3D (LAR-15074) is a FORTRAN method-of-moments electromagnetic analysis algorithm for open or closed 3-D perfectly conducting or resistive surfaces. Radar cross section with plane wave illumination is the prime analysis emphasis; however, provision is also included for local port excitation for computing antenna gain patterns and input impedances. The Electric Field Integral Equation form of Maxwell's equations is solved using local triangle couple basis and testing functions with a resultant system impedance matrix. The analysis emphasis is not only for routine RCS pattern predictions, but also for phenomenological diagnostics: bistatic imaging, currents, and near scattered/total electric fields. The images, currents, and near fields are output in form suitable for animation. MOM3D computes the full backscatter and bistatic radar cross section polarization scattering matrix (amplitude and phase), body currents and near scattered and total fields for plane wave illumination. MOM3D also incorporates a new bistatic k space imaging algorithm for computing down range and down/cross range diagnostic images using only one matrix inversion. MOM3D has been made memory and cpu time efficient by using symmetric matrices, symmetric geometry, and partitioned fixed and variable geometries suitable for design iteration studies. MOM3D may be run interactively or in batch mode on 486 IBM PCs and compatibles, UNIX workstations or larger computers. A 486 PC with 16 megabytes of memory has the potential to solve a 30 square wavelength (containing 3000 unknowns) symmetric configuration. Geometries are described using a triangular mesh input in the form of a list of spatial vertex points and a triangle join connection list. The EM-ANIMATE (LAR-15075) program is a specialized visualization program that displays and animates the near-field and surface-current solutions obtained from an electromagnetics program, in particular, that from MOM3D. The EM-ANIMATE program is windows based and

  18. Facial animation on an anatomy-based hierarchical face model

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Prakash, Edmond C.; Sung, Eric

    2003-04-01

    In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like a real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators and an underlying skull structure. The deformable skin model has a multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of the muscle force on the skin due to muscle contraction. Owing to the presence of the skull model, our facial model both achieves more accurate facial deformation and takes facial anatomy into account during the interactive definition of facial muscles. Under the muscular force, the deformation of the facial skin is evaluated using numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at an interactive rate, generating flexible and realistic facial expressions.

  19. An optical real-time 3D measurement for analysis of facial shape and movement

    NASA Astrophysics Data System (ADS)

    Zhang, Qican; Su, Xianyu; Chen, Wenjing; Cao, Yiping; Xiang, Liqun

    2003-12-01

    Optical non-contact 3-D shape measurement provides a novel and useful tool for the analysis of facial shape and movement in regular presurgical and postsurgical checks. In this article we present a system which allows precise 3-D visualization of the patient's face before and after craniofacial surgery. We discuss the real-time 3-D image capture and processing, and the 3-D phase unwrapping method used to recover complex shape deformation during movement of the mouth. The results of real-time measurement of facial shape and movement should help achieve better outcomes in plastic surgery.
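
    A small, hedged illustration of phase unwrapping, the step named above, in one dimension with NumPy; real fringe-projection systems unwrap 2D phase maps and calibrate the phase-to-height conversion, both of which are simplified away here.

      # Hedged one-dimensional illustration of phase unwrapping; real systems
      # unwrap 2D phase maps and calibrate phase-to-height, both omitted here.
      import numpy as np

      x = np.linspace(0.0, 1.0, 400)
      true_phase = 40.0 * np.sin(2 * np.pi * x) * x        # smooth "profile" phase
      wrapped = np.angle(np.exp(1j * true_phase))          # wrapped into (-pi, pi]
      unwrapped = np.unwrap(wrapped)                       # remove the 2*pi jumps
      height = 0.05 * unwrapped                            # made-up phase-to-height factor
      print(np.allclose(unwrapped, true_phase))            # True for this smooth profile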

  20. NASA TRMM Satellite 3-D Animation of Cyclone Mahasen Rainfall

    NASA Video Gallery

    This animation shows a simulated 3-D analysis of NASA's Tropical Rainfall Measuring Mission (TRMM) satellite's multisatellite Precipitation Analysis (TMPA). It shows rainfall that occurred with tro...

  1. Bringing macromolecular machinery to life using 3D animation.

    PubMed

    Iwasa, Janet H

    2015-04-01

    Over the past decade, there has been a rapid rise in the use of three-dimensional (3D) animation to depict molecular and cellular processes. Much of the growth in molecular animation has been in the educational arena, but increasingly, 3D animation software is finding its way into research laboratories. In this review, I will discuss a number of ways in which 3D animation software can play a valuable role in visualizing and communicating macromolecular structures and dynamics. I will also consider the challenges of using animation tools within the research sphere.

  2. Anthropological facial approximation in three dimensions (AFA3D): computer-assisted estimation of the facial morphology using geometric morphometrics.

    PubMed

    Guyomarc'h, Pierre; Dutailly, Bruno; Charton, Jérôme; Santos, Frédéric; Desbarats, Pascal; Coqueugniot, Hélène

    2014-11-01

    This study presents Anthropological Facial Approximation in Three Dimensions (AFA3D), a new computerized method for estimating face shape based on computed tomography (CT) scans of 500 French individuals. Facial soft tissue depths are estimated based on age, sex, corpulence, and craniometrics, and projected using reference planes to obtain the global facial appearance. Position and shape of the eyes, nose, mouth, and ears are inferred from cranial landmarks through geometric morphometrics. The 100 estimated cutaneous landmarks are then used to warp a generic face to the target facial approximation. A validation by re-sampling on a subsample demonstrated an average accuracy of c. 4 mm for the overall face. The resulting approximation is an objective probable facial shape, but is also synthetic (i.e., without texture), and therefore needs to be enhanced artistically prior to its use in forensic cases. AFA3D, integrated in the TIVMI software, is available freely for further testing.

  3. NASA's 3-D Animation of Tropical Storm Ulika from Space

    NASA Video Gallery

    An animated 3-D flyby of Tropical Storm Ulika using GPM's Radar data showed some strong convective storms inside the tropical storm were dropping precipitation at a rate of over 187 mm (7.4 inches)...

  4. Generating animated sequences from 3D whole-body scans

    NASA Astrophysics Data System (ADS)

    Pargas, Roy P.; Chhatriwala, Murtuza; Mulfinger, Daniel; Deshmukh, Pushkar; Vadhiyar, Sathish

    1999-03-01

    3D images of human subjects are, today, easily obtained using 3D whole-body scanners. 3D human images can provide static information about the physical characteristics of a person, information valuable to professionals such as clothing designers, anthropometrists, medical doctors, physical therapists, athletic trainers, and sculptors. Can 3D human images be used to provide more than static physical information? The research described in this paper attempts to answer this question by explaining how animated sequences may be generated from a single 3D scan. The process starts by subdividing the human image into segments and mapping the segments to those of a human model defined in a human-motion simulation package. The simulation software provides information used to display movement of the human image. Snapshots of the movement are captured and assembled to create an animated sequence. All of the postures and motion of the human images come from a single 3D scan. This paper describes the process involved in animating human figures from static 3D whole-body scans, presents an example of a generated animated sequence, and discusses possible applications of this approach.

  5. Facial Animations: Future Research Directions & Challenges

    NASA Astrophysics Data System (ADS)

    Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Nowadays, computer facial animation is used in a multitude of fields, bringing human and social studies together with the growth of computer games, films and interactive multimedia. Authoring computer facial animation with complex and subtle expressions is challenging and fraught with problems. As a result, most facial animation currently authored with general-purpose computer animation techniques is limited in production quality and quantity. Even with increasing computer power, facial understanding, software sophistication and newly emerging face-centric methods remain immature. Therefore, this paper defines and categorizes the current and emerging techniques reported by surveyed facial animation experts in order to characterize the recent state of the field, observed bottlenecks and developing techniques. This paper further presents a real-time simulation model of human worry and howling, with a detailed discussion of the perception of astonishment, sorrow, annoyance and panic.

  6. A Multiscale Constraints Method Localization of 3D Facial Feature Points

    PubMed Central

    Li, Hong-an; Zhang, Yongxin; Li, Zhanli; Li, Huilin

    2015-01-01

    Locating facial feature points is an important task owing to the widespread application of 3D human face models in medical fields. In this paper, we propose a 3D facial feature point localization method that combines relative angle histograms with multiscale constraints. Firstly, the relative angle histogram of each vertex in a 3D point distribution model is calculated; then the cluster set of the facial feature points is determined using a clustering algorithm. Finally, the feature points are located precisely according to multiscale integral features. The experimental results show that the feature point localization accuracy of this algorithm is better than that of the localization method using relative angle histograms alone. PMID:26539244

  7. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and the level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation.

  8. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and the level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. PMID:20533989

  9. Education System Using Interactive 3D Computer Graphics (3D-CG) Animation and Scenario Language for Teaching Materials

    ERIC Educational Resources Information Center

    Matsuda, Hiroshi; Shindo, Yoshiaki

    2006-01-01

    The 3D computer graphics (3D-CG) animation using a virtual actor's speaking is very effective as an educational medium. But it takes a long time to produce a 3D-CG animation. To reduce the cost of producing 3D-CG educational contents and improve the capability of the education system, we have developed a new education system using Virtual Actor.…

  10. Genetic and Environmental Contributions to Facial Morphological Variation: A 3D Population-Based Twin Study

    PubMed Central

    Djordjevic, Jelena; Zhurov, Alexei I.; Richmond, Stephen

    2016-01-01

    Introduction Facial phenotype is influenced by genes and environment; however, little is known about their relative contributions to normal facial morphology. The aim of this study was to assess the relative genetic and environmental contributions to facial morphological variation using a three-dimensional (3D) population-based approach and the classical twin study design. Materials and Methods 3D facial images of 1380 female twins from the TwinsUK Registry database were used. All faces were landmarked, by manually placing 37 landmark points, and Procrustes registered. Three groups of traits were extracted and analysed: 19 principal components (uPC) and 23 principal components (sPC), derived from the unscaled and scaled landmark configurations respectively, and 1275 linear distances measured between 51 landmarks (37 manually identified and 14 automatically calculated). The intraclass correlation coefficients, rMZ and rDZ, broad-sense heritability (h2), common (c2) and unique (e2) environment contributions were calculated for all traits for the monozygotic (MZ) and dizygotic (DZ) twins. Results Heritability of 13 uPC and 17 sPC reached statistical significance, with h2 ranging from 38.8% to 78.5% in the former and 30.5% to 84.8% in the latter group. Also, 1222 distances showed evidence of genetic control. Common environment contributed to one PC in both groups and 53 linear distances (4.3%). Unique environment contributed to 17 uPC and 20 sPC and 1245 distances. Conclusions Genetic factors can explain more than 70% of the phenotypic facial variation in facial size, nose (width, prominence and height), lip prominence and inter-ocular distance. A few traits have shown potential dominant genetic influence: the prominence and height of the nose, the lower lip prominence in relation to the chin and upper lip philtrum length. Environmental contribution to facial variation seems to be the greatest for the mandibular ramus height and horizontal facial asymmetry.
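
    A worked illustration of how h2, c2 and e2 relate to the twin intraclass correlations rMZ and rDZ under the classical Falconer approximation; the study itself may use a full variance-components model, and the numbers below are invented, not its data.

      # Worked illustration (invented numbers): the Falconer approximation relating
      # twin intraclass correlations to heritability and environmental components.
      def falconer(r_mz, r_dz):
          h2 = 2.0 * (r_mz - r_dz)   # heritability (additive approximation)
          c2 = 2.0 * r_dz - r_mz     # common (shared) environment
          e2 = 1.0 - r_mz            # unique environment + measurement error
          return h2, c2, e2

      print(falconer(r_mz=0.80, r_dz=0.45))   # approx. (0.70, 0.10, 0.20)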

  11. Comparing Facial 3D Analysis With DNA Testing to Determine Zygosities of Twins.

    PubMed

    Vuollo, Ville; Sidlauskas, Mantas; Sidlauskas, Antanas; Harila, Virpi; Salomskiene, Loreta; Zhurov, Alexei; Holmström, Lasse; Pirttiniemi, Pertti; Heikkinen, Tuomo

    2015-06-01

    The aim of this study was to compare facial 3D analysis to DNA testing in twin zygosity determinations. Facial 3D images of 106 pairs of young adult Lithuanian twins were taken with a stereophotogrammetric device (3dMD, Atlanta, Georgia) and zygosity was determined according to similarity of facial form. Statistical pattern recognition methodology was used for classification. The results showed that in 75% to 90% of the cases, zygosity determinations were similar to DNA-based results. There were 81 different classification scenarios, including 3 groups, 3 features, 3 different scaling methods, and 3 threshold levels. It appeared that coincidence with 0.5 mm tolerance is the most suitable feature for classification. Also, leaving out scaling improves results in most cases. Scaling was expected to equalize the magnitude of differences and therefore lead to better recognition performance. Still, better classification features and a more effective scaling method or classification in different facial areas could further improve the results. In most of the cases, male pair zygosity recognition was at a higher level compared with females. Erroneously classified twin pairs appear to be obvious outliers in the sample. In particular, faces of young dizygotic (DZ) twins may be so similar that it is very hard to define a feature that would help classify the pair as DZ. Correspondingly, monozygotic (MZ) twins may have faces with quite different shapes. Such anomalous twin pairs are interesting exceptions, but they form a considerable portion in both zygosity groups.

  12. Automated diagnosis of fetal alcohol syndrome using 3D facial image analysis

    PubMed Central

    Fang, Shiaofen; McLaughlin, Jason; Fang, Jiandong; Huang, Jeffrey; Autti-Rämö, Ilona; Fagerlund, Åse; Jacobson, Sandra W.; Robinson, Luther K.; Hoyme, H. Eugene; Mattson, Sarah N.; Riley, Edward; Zhou, Feng; Ward, Richard; Moore, Elizabeth S.; Foroud, Tatiana

    2012-01-01

    Objectives Use three-dimensional (3D) facial laser scanned images from children with fetal alcohol syndrome (FAS) and controls to develop an automated diagnosis technique that can reliably and accurately identify individuals prenatally exposed to alcohol. Methods A detailed dysmorphology evaluation, history of prenatal alcohol exposure, and 3D facial laser scans were obtained from 149 individuals (86 FAS; 63 Control) recruited from two study sites (Cape Town, South Africa and Helsinki, Finland). Computer graphics, machine learning, and pattern recognition techniques were used to automatically identify a set of facial features that best discriminated individuals with FAS from controls in each sample. Results An automated feature detection and analysis technique was developed and applied to the two study populations. A unique set of facial regions and features were identified for each population that accurately discriminated FAS and control faces without any human intervention. Conclusion Our results demonstrate that computer algorithms can be used to automatically detect facial features that can discriminate FAS and control faces. PMID:18713153

  13. Spatially-dense 3D facial asymmetry assessment in both typical and disordered growth

    PubMed Central

    Claes, Peter; Walters, Mark; Vandermeulen, Dirk; Clement, John Gerald

    2011-01-01

    Mild facial asymmetries are common in typical growth patterns. Therefore, detection of disordered facial growth patterns in individuals characterized by asymmetries is preferably accomplished by reference to the typical variation found in the general population rather than to some ideal of perfect symmetry, which rarely exists. This presents a challenge in developing an asymmetry assessment tool that is applicable, without modification, to detect both mild and severe facial asymmetries. In this paper we use concepts from geometric morphometrics to obtain robust and spatially-dense asymmetry assessments using a superimposition protocol for comparison of a face with its mirror image. Spatially-dense localization of asymmetries was achieved using an anthropometric mask consisting of uniformly sampled quasi-landmarks that were automatically indicated on 3D facial images. Robustness, in the sense of an unbiased analysis under increasing asymmetry, was ensured by an adaptive, robust, least-squares superimposition. The degree of overall asymmetry in an individual was scored using a root-mean-squared-error, and the proportion was scored using a novel relative significant asymmetry percentage. This protocol was applied to a database of 3D facial images from 359 young healthy individuals and three individuals with disordered facial growth. Typical asymmetry statistics were derived and were mainly located on, but not limited to, the lower two-thirds of the face in males and females. The asymmetry in males was more extensive and of a greater magnitude than in females. This protocol and proposed scoring of asymmetry with accompanying reference statistics will be useful for the detection and quantification of facial asymmetry in future studies. PMID:21740426
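
    A hedged sketch of the mirror-and-superimpose idea described above: reflect a 3D landmark configuration, rigidly re-align it to the original (a plain Kabsch/least-squares fit rather than the paper's robust variant), and score asymmetry as the root-mean-square distance between corresponding points. The left/right pairing of quasi-landmarks after reflection is assumed to be supplied as pair_index.

      # Hedged sketch: mirror a 3D landmark configuration, rigidly re-align it to
      # the original with a plain Kabsch fit (the paper uses a robust variant), and
      # score asymmetry as an RMS distance. pair_index gives the left/right
      # correspondence of quasi-landmarks after reflection (assumed known).
      import numpy as np

      def kabsch_align(P, Q):
          """Rotate and translate P (N, 3) onto Q (N, 3) in the least-squares sense."""
          Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
          U, _, Vt = np.linalg.svd(Pc.T @ Qc)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T
          return (R @ Pc.T).T + Q.mean(axis=0)

      def asymmetry_rmse(landmarks, pair_index):
          mirrored = landmarks * np.array([-1.0, 1.0, 1.0])   # reflect across x = 0
          mirrored = mirrored[pair_index]                     # swap left/right labels
          aligned = kabsch_align(mirrored, landmarks)
          return np.sqrt(np.mean(np.sum((aligned - landmarks) ** 2, axis=1)))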

  14. Markerless 3D motion capture for animal locomotion studies

    PubMed Central

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869

  15. A 2D range Hausdorff approach to 3D facial recognition.

    SciTech Connect

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
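
    A plain-Python illustration of the Hausdorff comparison of two facial range images, using SciPy's directed Hausdorff on the corresponding 3D point sets. It shows the metric only; the paper's O(N) 2D range-image formulation and its handling of outliers are not reproduced here.

      # Hedged sketch: symmetric Hausdorff distance between two facial range images
      # treated as 3D point sets. The paper's O(N) range-image formulation and its
      # outlier handling are not reproduced; this only illustrates the metric.
      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def range_image_to_points(depth, pixel_size=1.0):
          """Convert an (H, W) range image into an (H*W, 3) point set, dropping NaNs."""
          h, w = depth.shape
          ys, xs = np.mgrid[0:h, 0:w]
          pts = np.stack([xs * pixel_size, ys * pixel_size, depth], axis=-1).reshape(-1, 3)
          return pts[~np.isnan(pts[:, 2])]

      def hausdorff(depth_a, depth_b):
          a, b = range_image_to_points(depth_a), range_image_to_points(depth_b)
          return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

      rng = np.random.default_rng(3)
      probe = rng.normal(size=(32, 32))
      template = probe + rng.normal(0.0, 0.01, size=(32, 32))
      print(hausdorff(probe, template))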

  16. A 3D character animation engine for multimodal interaction on mobile devices

    NASA Astrophysics Data System (ADS)

    Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo

    2005-03-01

    Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user, by means of voice, eye gaze, facial expression and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based SmartPhones. Real-time animation of virtual characters on mobile devices represents a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space and execution memory size. The proposed optimization techniques allow to overcome these issues, guaranteeing a smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony-Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted to the development of new "Over The Air" services, based on embodied conversational agents, with applications in entertainment (interactive story tellers), navigation aid (virtual guides to web sites and mobile services), news casting (virtual newscasters) and education (interactive virtual teachers).

  17. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smile, sad, etc. and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending by neutral emotion represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied the kNN classifier that exploits a feature component based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small scale database of different facial expressions show promises of the newly developed features and the usefulness of the Kinect device in facial expression identification.
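
    A hedged sketch of the two ingredients described above: a per-frame feature built from distances between selected face-mesh points and a reference point, and a classic dynamic time warping cost that a kNN classifier could use to compare expression sequences. Indices, shapes and the exact similarity measure are assumptions.

      # Hedged sketch: a per-frame distance feature (selected mesh points vs. a
      # reference point such as the nose tip) and a classic dynamic time warping
      # cost for comparing expression sequences in a kNN classifier. Indices and
      # the similarity measure are assumptions, not the paper's exact features.
      import numpy as np

      def frame_feature(mesh_points, ref_index, selected):
          """Distances from selected mesh points to a single reference point."""
          return np.linalg.norm(mesh_points[selected] - mesh_points[ref_index], axis=1)

      def dtw_cost(seq_a, seq_b):
          """Dynamic time warping cost between two (T, D) feature sequences."""
          na, nb = len(seq_a), len(seq_b)
          D = np.full((na + 1, nb + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, na + 1):
              for j in range(1, nb + 1):
                  cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[na, nb]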

  18. Analyzing the relevance of shape descriptors in automated recognition of facial gestures in 3D images

    NASA Astrophysics Data System (ADS)

    Rodriguez A., Julian S.; Prieto, Flavio

    2013-03-01

    This paper presents and explains the results of analyzing two shape descriptors (DESIRE and the Spherical Spin Image) for facial recognition in 3D images. DESIRE is a descriptor made of depth images, silhouettes and rays extended from a polygonal mesh, whereas the Spherical Spin Image (SSI) associated with a polygonal mesh point is a 2D histogram built from neighboring points using the position information that captures features of the local shape. The database used contains images of facial expressions, which on average were recognized with 88.16% accuracy using a neural network and 91.11% with a Bayesian classifier in the case of the first descriptor; in contrast, the second descriptor achieves on average only 32% and 23.6% with the same classifiers, respectively.

  19. Assessment of some problematic factors in facial image identification using a 2D/3D superimposition technique.

    PubMed

    Atsuchi, Masaru; Tsuji, Akiko; Usumoto, Yosuke; Yoshino, Mineo; Ikeda, Noriaki

    2013-09-01

    The number of criminal cases requiring facial image identification of a suspect has been increasing because surveillance cameras are installed everywhere in cities and, furthermore, intercoms with recording functions are installed in homes. In this study, we aimed to analyze the usefulness of a 2D/3D facial image superimposition system for image identification when facial aging, facial expression, and twins are under consideration. As a result, the mean values of the average distances calculated from the 16 anatomical landmarks between the 3D facial images of the 50s group and the 2D facial images of the 20s, 30s, and 40s groups were 2.6, 2.3, and 2.2 mm, respectively (facial aging). The mean values of the average distances calculated from 12 anatomical landmarks between the 3D normal facial images and four emotional expressions were 4.9 (laughter), 2.9 (anger), 2.9 (sadness), and 3.6 mm (surprise), respectively (facial expressions). The average distance obtained from 11 anatomical landmarks between the same person in a twin pair was 1.1 mm, while the average distance between the different persons in a twin pair was 2.0 mm (twins). Facial image identification using the 2D/3D facial image superimposition system demonstrated adequate statistical power and identified an individual with high accuracy, suggesting its usefulness. However, as computer technology for video image processing and superimposition progresses, it remains necessary to stay familiar with the underlying facial morphology and anatomy. PMID:23886899

  20. A coordinate-free method for the analysis of 3D facial change

    NASA Astrophysics Data System (ADS)

    Mao, Zhili; Siebert, Jan Paul; Cockshott, W. Paul; Ayoub, Ashraf Farouk

    2004-05-01

    Euclidean Distance Matrix Analysis (EDMA) is widely held as the most important coordinate-free method by which to analyze landmarks. It has been used extensively in the field of medical anthropometry and has already produced many useful results. Unfortunately this method renders little information regarding the surface on which these points are located and accordingly is inadequate for the 3D analysis of surface anatomy. Here we shall present a new inverse surface flatness metric, the ratio between the Geodesic and the Euclidean inter-landmark distances. Because this metric also only reflects one aspect of three-dimensional shape, i.e. surface flatness, we have combined it with the Euclidean distance to investigate 3D facial change. The goal of this investigation is to be able to analyze three-dimensional facial change in terms of bilateral symmetry as encoded both by surface flatness and by geometric configuration. Our initial study, based on 25 models of surgically managed children (unilateral cleft lip repair) and 40 models of control children at the age of 2 years, indicates that the faces of the surgically managed group were found to be significantly less symmetric than those of the control group in terms of surface flatness, geometric configuration and overall symmetry.
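
    A minimal sketch of the inverse surface flatness metric introduced above: the geodesic inter-landmark distance (approximated here by Dijkstra shortest paths along mesh edges rather than a true surface geodesic) divided by the Euclidean distance. Mesh arrays and indices are assumptions.

      # Hedged sketch: the geodesic/Euclidean inter-landmark ratio, with the
      # geodesic distance approximated by Dijkstra shortest paths along mesh edges
      # (a coarser surrogate for a true surface geodesic). Inputs are assumptions:
      # vertices (V, 3), edges (E, 2) vertex-index pairs, landmark indices i and j.
      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import dijkstra

      def surface_flatness_ratio(vertices, edges, i, j):
          lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
          n = len(vertices)
          graph = csr_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))
          geodesic = dijkstra(graph, directed=False, indices=[i])[0][j]
          euclidean = np.linalg.norm(vertices[i] - vertices[j])
          return geodesic / euclidean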

  1. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    NASA Astrophysics Data System (ADS)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy (mRMR) geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
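
    A hedged sketch of a greedy maximum relevance minimum redundancy (mRMR) selection loop over geometrical features, using scikit-learn mutual information estimators as stand-ins for the paper's estimator; the SVM training stage is omitted.

      # Hedged sketch: greedy maximum relevance minimum redundancy (mRMR) feature
      # selection using scikit-learn mutual information estimators as stand-ins for
      # the paper's estimator. X is (samples, features), y the expression labels.
      import numpy as np
      from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

      def mrmr(X, y, k):
          relevance = mutual_info_classif(X, y)          # MI(feature; expression label)
          selected, remaining = [], list(range(X.shape[1]))
          for _ in range(k):
              best, best_score = None, -np.inf
              for j in remaining:
                  redundancy = (np.mean([mutual_info_regression(X[:, [j]], X[:, s])[0]
                                         for s in selected]) if selected else 0.0)
                  score = relevance[j] - redundancy       # relevance minus mean redundancy
                  if score > best_score:
                      best, best_score = j, score
              selected.append(best)
              remaining.remove(best)
          return selected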

  2. Measured symmetry of facial 3D shape and perceived facial symmetry and attractiveness before and after orthognathic surgery.

    PubMed

    Ostwald, Julia; Berssenbrügge, Philipp; Dirksen, Dieter; Runte, Christoph; Wermker, Kai; Kleinheinz, Johannes; Jung, Susanne

    2015-05-01

    One aim of cranio-maxillo-facial surgery is to strive for an esthetic appearance. Do facial symmetry and attractiveness correlate? How are they affected by surgery? Within this study, the faces of patients undergoing orthognathic surgery were captured and analyzed regarding their symmetry. A total of 25 patients' faces were measured three-dimensionally by an optical sensor using the fringe projection technique before and after orthognathic surgery. Based upon these data, an asymmetry index was calculated for each case. In order to gather subjective ratings, each face was presented to 100 independent test subjects in a 3D rotation sequence, who were asked to rate the symmetry and the attractiveness of the faces. It was analyzed to what extent the ratings correlate with the measured asymmetry indices and whether pre- and post-surgical data differ. The measured asymmetry indices correlate significantly with the subjective ratings of both items. The measured symmetry as well as the rated symmetry and attractiveness increased on average after surgery; the increase in the ratings was even statistically significant. A larger enhancement of symmetry is achieved in faces that are strongly asymmetric before surgery than in relatively symmetric faces.

  3. NASA's 3-D TRMM Satellite Animation of Tropical Storm Andrea

    NASA Video Gallery

    This 3-D view from the west was derived from TRMM Precipitation Radar (PR) data captured when Andrea was examined by the TRMM satellite with the June 5, 2234 UTC (6:34 p.m. EDT) orbit. It clearly s...

  4. Second Life, a 3-D Animated Virtual World: An Alternative Platform for (Art) Education

    ERIC Educational Resources Information Center

    Han, Hsiao-Cheng

    2011-01-01

    3-D animated virtual worlds are no longer only for gaming. With the advance of technology, animated virtual worlds not only are found on every computer, but also connect users with the internet. Today, virtual worlds are created not only by companies, but also through the collaboration of users. Online 3-D animated virtual worlds provide a new…

  5. A new physical model with multilayer architecture for facial expression animation using dynamic adaptive mesh.

    PubMed

    Zhang, Yu; Prakash, Edmond C; Sung, Eric

    2004-01-01

    This paper presents a new physically-based 3D facial model, based on anatomical knowledge, which provides high fidelity for facial expression animation while optimizing the computation. Our facial model has a multilayer biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators, and an underlying skull structure. In contrast to existing mass-spring-damper (MSD) facial models, our dynamic skin model uses nonlinear springs to directly simulate the nonlinear visco-elastic behavior of soft tissue, and a new kind of edge repulsion spring is developed to prevent collapse of the skin model. Different types of muscle models have been developed to simulate the distribution of the muscle force applied on the skin due to muscle contraction. The presence of the skull advantageously constrains the skin movements, resulting in more accurate facial deformation, and also guides the interactive placement of facial muscles. The governing dynamics are computed using a local semi-implicit ODE solver. In the dynamic simulation, an adaptive refinement scheme automatically adapts the local resolution depending on local deformation wherever potential inaccuracies are detected. The method, in effect, ensures the required speedup by concentrating computational time only where needed while ensuring realistic behavior within a predefined error threshold. This mechanism allows more pleasing animation results to be produced at a reduced computational cost.
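
    A toy Python/NumPy sketch of a semi-implicit (symplectic) Euler step for a mass-spring-damper skin layer, the kind of per-frame update such a physically based facial model integrates. The linear spring, uniform constants and global damping are simplifications; the paper uses nonlinear and edge-repulsion springs with a local semi-implicit solver and adaptive refinement.

      # Toy sketch: one semi-implicit (symplectic) Euler step for a mass-spring-damper
      # skin layer. Linear springs, uniform constants and global damping are
      # simplifications of the paper's nonlinear, adaptively refined model.
      import numpy as np

      def step(pos, vel, springs, rest_len, k, damping, mass, f_ext, dt):
          force = f_ext.copy()
          for (a, b), L0 in zip(springs, rest_len):
              d = pos[b] - pos[a]
              L = np.linalg.norm(d)
              f = k * (L - L0) * d / max(L, 1e-9)   # linear spring force on vertex a
              force[a] += f
              force[b] -= f
          force -= damping * vel                    # simple velocity damping
          vel = vel + dt * force / mass             # update velocities first...
          pos = pos + dt * vel                      # ...then positions (semi-implicit)
          return pos, vel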

  6. Expressive facial animation synthesis by learning speech coarticulation and expression spaces.

    PubMed

    Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth

    2006-01-01

    Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
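
    The full PIEES pipeline (phoneme-based time-warping, subtraction, texture synthesis) is specific to the paper; the snippet below sketches only the PCA-reduction step on expression-only motion signals, with array shapes and thresholds chosen arbitrarily.

    ```python
    import numpy as np

    def build_expression_eigenspace(expression_frames, var_to_keep=0.95):
        """PCA reduction of expression-only motion signals (a sketch, not the
        paper's exact pipeline).

        expression_frames : (F, 3*M) array; each row is one frame of M facial
            markers with the neutral/speech component already subtracted.
        Returns (mean, basis) where the basis rows are the retained eigen-expressions.
        """
        mean = expression_frames.mean(axis=0)
        centered = expression_frames - mean
        _, s, Vt = np.linalg.svd(centered, full_matrices=False)   # principal components
        explained = np.cumsum(s**2) / np.sum(s**2)
        k = int(np.searchsorted(explained, var_to_keep)) + 1
        return mean, Vt[:k]

    def project(frame, mean, basis):
        """Low-dimensional expression coefficients for one frame."""
        return basis @ (frame - mean)

    def reconstruct(coeffs, mean, basis):
        """Back to marker space; blending with neutral visual speech would then be
        an element-wise addition of the two signals."""
        return mean + coeffs @ basis

    # toy usage with random data standing in for captured expression signals
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(200, 3 * 30))
    mean, basis = build_expression_eigenspace(frames)
    coeffs = project(frames[0], mean, basis)
    approx = reconstruct(coeffs, mean, basis)
    ```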

  7. Research on animation design of growing plant based on 3D MAX technology

    NASA Astrophysics Data System (ADS)

    Chen, Yineng; Fang, Kui; Bu, Weiqiong; Zhang, Xiaoling; Lei, Menglong

    Because virtual plants place practical demands on the quality, imagery and realism of plant-growth animation, this paper designs such animation based on the mechanism and regularity of plant growth and proposes a design method based on 3D MAX technology. Repeated analysis and testing show that modeling, rendering and animation fabrication are among the key technologies in the animation design process. On this basis, designers can subdivide the animation into five stages: seed germination, early plant growth, catagen, later growth and blossom. The animations of these five stages are composited in the VP window to produce the complete 3D animation. Experimental results show that the animation can rapidly, visually and realistically simulate the plant growth process.

  8. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    PubMed

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscle, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. PMID:25872024
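
    As a rough illustration of the kind of measurement reported here, the snippet below computes per-marker maximum displacements from a marker trajectory array; the data layout and the use of the first frame as the resting face are assumptions, not details taken from the study.

    ```python
    import numpy as np

    def max_displacement_per_marker(trajectory):
        """Maximum displacement of each reflective marker from its resting
        (first-frame) position over one expression.

        trajectory : (F, M, 3) array of F frames, M markers, 3D coordinates (mm).
        Returns an (M,) array of per-marker maximum displacements.
        """
        rest = trajectory[0]                               # neutral face at frame 0
        disp = np.linalg.norm(trajectory - rest, axis=2)   # (F, M) distances from rest
        return disp.max(axis=0)

    # toy example: 100 frames, 44 markers
    rng = np.random.default_rng(1)
    traj = np.cumsum(rng.normal(scale=0.2, size=(100, 44, 3)), axis=0)
    per_marker = max_displacement_per_marker(traj)
    print(per_marker.mean())   # analogue of the mean displacement reported per expression
    ```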

  9. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    PubMed

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscle, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots.

  10. Text2Video: text-driven facial animation using MPEG-4

    NASA Astrophysics Data System (ADS)

    Rurainsky, J.; Eisert, P.

    2005-07-01

    We present a complete system for the automatic creation of talking head video sequences from text messages. Our system converts the text into MPEG-4 Facial Animation Parameters and synthetic voice. A user selected 3D character will perform lip movements synchronized to the speech data. The 3D models created from a single image vary from realistic people to cartoon characters. A voice selection for different languages and gender as well as a pitch shift component enables a personalization of the animation. The animation can be shown on different displays and devices ranging from 3GPP players on mobile phones to real-time 3D render engines. Therefore, our system can be used in mobile communication for the conversion of regular SMS messages to MMS animations.
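
    The paper's actual phoneme-to-FAP tables are not reproduced in the abstract; the sketch below only illustrates the general idea of turning a timed phoneme track into viseme keyframes for an MPEG-4 style animation stream, and every mapping value in it is invented.

    ```python
    # Minimal, illustrative phoneme-to-viseme lookup and keyframe generator.
    # The mapping below is hypothetical; MPEG-4 Facial Animation defines its own
    # standard viseme set and FAP semantics, which are not reproduced here.

    PHONEME_TO_VISEME = {          # hypothetical subset
        "p": 1, "b": 1, "m": 1,    # bilabial closure
        "f": 2, "v": 2,            # labiodental
        "a": 10, "e": 11, "o": 12, # open vowels
        "sil": 0,                  # neutral / silence
    }

    def visemes_to_keyframes(phoneme_track, fps=25):
        """phoneme_track: list of (phoneme, duration_in_seconds) pairs.
        Returns a list of (frame_index, viseme_id) keyframes for a FAP stream;
        unknown phonemes fall back to the neutral viseme."""
        keyframes, t = [], 0.0
        for phoneme, dur in phoneme_track:
            viseme = PHONEME_TO_VISEME.get(phoneme, 0)
            keyframes.append((int(round(t * fps)), viseme))
            t += dur
        return keyframes

    print(visemes_to_keyframes([("sil", 0.2), ("h", 0.1), ("e", 0.15), ("l", 0.1), ("o", 0.25)]))
    ```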

  11. 3D Face Model Dataset: Automatic Detection of Facial Expressions and Emotions for Educational Environments

    ERIC Educational Resources Information Center

    Chickerur, Satyadhyan; Joshi, Kartik

    2015-01-01

    Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…

  12. Observer success rates for identification of 3D surface reconstructed facial images and implications for patient privacy and security

    NASA Astrophysics Data System (ADS)

    Chen, Joseph J.; Siddiqui, Khan M.; Fort, Leslie; Moffitt, Ryan; Juluru, Krishna; Kim, Woojin; Safdar, Nabile; Siegel, Eliot L.

    2007-03-01

    3D and multi-planar reconstruction of CT images have become indispensable in the routine practice of diagnostic imaging. These tools not only enhance our ability to diagnose diseases, but also assist in therapeutic planning. The technology used to create these images can also render surface reconstructions, which may have the undesired potential of providing sufficient detail to allow recognition of facial features and consequently patient identity, leading to violation of patient privacy rights as described in the HIPAA (Health Insurance Portability and Accountability Act) legislation. The purpose of this study is to evaluate whether 3D reconstructed images of a patient's facial features can indeed be used to reliably or confidently identify that specific patient. Surface reconstructed images of the study participants were created and used as candidates for matching with digital photographs of the participants. Data analysis was performed to determine the ability of observers to successfully match 3D surface reconstructed images of the face with facial photographs; the amount of time required to perform the match was recorded as well. We also plan to investigate the ability of digital masks or physical drapes to conceal patient identity. The recently expressed concerns over the inability to truly "anonymize" CT (and MRI) studies of the head/face/brain have yet to be tested in a prospective study. We believe that it is important to establish whether these reconstructed images are a "threat" to patient privacy and security and, if so, whether minimal interventions from a clinical perspective can substantially reduce this possibility.

  13. Use of Colour and Interactive Animation in Learning 3D Vectors

    ERIC Educational Resources Information Center

    Iskander, Wejdan; Curtis, Sharon

    2005-01-01

    This study investigated the effects of two computer-implemented techniques (colour and interactive animation) on learning 3D vectors. The participants were 43 female Saudi Arabian high school students. They were pre-tested on 3D vectors using a paper questionnaire that consisted of calculation and visualization types of questions. The students…

  14. Advances in animal ecology from 3D-LiDAR ecosystem mapping.

    PubMed

    Davies, Andrew B; Asner, Gregory P

    2014-12-01

    The advent and recent advances of Light Detection and Ranging (LiDAR) have enabled accurate measurement of 3D ecosystem structure. Here, we review insights gained through the application of LiDAR to animal ecology studies, revealing the fundamental importance of structure for animals. Structural heterogeneity is most conducive to increased animal richness and abundance, and increased complexity of vertical vegetation structure is more positively influential compared with traditionally measured canopy cover, which produces mixed results. However, different taxonomic groups interact with a variety of 3D canopy traits and some groups with 3D topography. To develop a better understanding of animal dynamics, future studies will benefit from considering 3D habitat effects in a wider variety of ecosystems and with more taxa.

  15. Advances in animal ecology from 3D-LiDAR ecosystem mapping.

    PubMed

    Davies, Andrew B; Asner, Gregory P

    2014-12-01

    The advent and recent advances of Light Detection and Ranging (LiDAR) have enabled accurate measurement of 3D ecosystem structure. Here, we review insights gained through the application of LiDAR to animal ecology studies, revealing the fundamental importance of structure for animals. Structural heterogeneity is most conducive to increased animal richness and abundance, and increased complexity of vertical vegetation structure is more positively influential compared with traditionally measured canopy cover, which produces mixed results. However, different taxonomic groups interact with a variety of 3D canopy traits and some groups with 3D topography. To develop a better understanding of animal dynamics, future studies will benefit from considering 3D habitat effects in a wider variety of ecosystems and with more taxa. PMID:25457158

  16. Characteristics of visual fatigue under the effect of 3D animation.

    PubMed

    Chang, Yu-Shuo; Hsueh, Ya-Hsin; Tung, Kwong-Chung; Jhou, Fong-Yi; Lin, David Pei-Cheng

    2015-01-01

    Visual fatigue is commonly encountered in modern life. The clinical characteristics of visual fatigue caused by 2-D and 3-D animations may differ, but have not been characterized in detail. This study tried to distinguish the differential effects on visual fatigue caused by 2-D and 3-D animations. A total of 23 volunteers underwent accommodation and vergence assessments, followed by a 40-min video game program designed to aggravate their asthenopic symptoms. The volunteers were then assessed again for accommodation and vergence parameters, directed to watch a 5-min 3-D video program, and assessed once more. The results support that 3-D animations produce vision fatigue characteristics similar in some specific aspects to those caused by 2-D animations. Furthermore, 3-D animations may lead to more exhaustion in both the ciliary and extra-ocular muscles, and such differential effects were more evident under high demands of near-vision work. The current results indicate that a set of indices could be promoted for the design of 3-D displays and equipment.

  17. Advances in animal ecology from 3D ecosystem mapping with LiDAR

    NASA Astrophysics Data System (ADS)

    Davies, A.; Asner, G. P.

    2015-12-01

    The advent and recent advances of Light Detection and Ranging (LiDAR) have enabled accurate measurement of 3D ecosystem structure. Although the use of LiDAR data is widespread in vegetation science, it has only recently (< 14 years) been applied to animal ecology. Despite such recent application, LiDAR has enabled new insights in the field and revealed the fundamental importance of 3D ecosystem structure for animals. We reviewed the studies to date that have used LiDAR in animal ecology, synthesising the insights gained. Structural heterogeneity is most conducive to increased animal richness and abundance, and increased complexity of vertical vegetation structure is more positively influential than traditionally measured canopy cover, which produces mixed results. However, different taxonomic groups interact with a variety of 3D canopy traits and some groups with 3D topography. LiDAR technology can be applied to animal ecology studies in a wide variety of environments to answer an impressive array of questions. Drawing on case studies from vastly different groups, termites and lions, we further demonstrate the applicability of LiDAR and highlight new understanding, ranging from habitat preference to predator-prey interactions, that would not have been possible from studies restricted to field based methods. We conclude with discussion of how future studies will benefit by using LiDAR to consider 3D habitat effects in a wider variety of ecosystems and with more taxa to develop a better understanding of animal dynamics.

  18. Qualitative Assessment of a 3D Simulation Program: Faculty, Students, and Bio-Organic Reaction Animations

    ERIC Educational Resources Information Center

    Günersel, Adalet B.; Fleming, Steven A.

    2013-01-01

    Research shows that computer-based simulations and animations are especially helpful in fields such as chemistry where concepts are abstract and cannot be directly observed. Bio-Organic Reaction Animations (BioORA) is a freely available 3D visualization software program developed to help students understand the chemistry of biomolecular events.…

  19. Face recognition using 3D facial shape and color map information: comparison and combination

    NASA Astrophysics Data System (ADS)

    Godil, Afzal; Ressler, Sandy; Grother, Patrick

    2004-08-01

    In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.
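
    The abstract does not state which fusion rule performed best; the snippet below shows one generic score-level possibility (min-max normalization followed by a weighted sum of the shape and color-map matcher scores), purely as an illustration of the kind of combination being compared.

    ```python
    import numpy as np

    def min_max_normalize(scores):
        """Map a vector of matcher scores to [0, 1]."""
        lo, hi = scores.min(), scores.max()
        return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

    def fuse_scores(shape_scores, color_scores, w_shape=0.5):
        """Weighted-sum fusion of two modalities after min-max normalization.
        Scores are assumed to be similarities (higher = better match)."""
        s = min_max_normalize(np.asarray(shape_scores, dtype=float))
        c = min_max_normalize(np.asarray(color_scores, dtype=float))
        return w_shape * s + (1.0 - w_shape) * c

    # toy gallery of 5 candidates: the fused ranking can differ from either modality alone
    shape = [0.61, 0.55, 0.72, 0.40, 0.58]
    color = [0.30, 0.66, 0.35, 0.31, 0.64]
    fused = fuse_scores(shape, color)
    print(int(np.argmax(fused)))   # index of the best fused match
    ```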

  20. Re-thinking 3D printing: A novel approach to guided facial contouring.

    PubMed

    Darwood, Alastair; Collier, Jonathan; Joshi, Naresh; Grant, William E; Sauret-Jackson, Veronique; Richards, Robin; Dawood, Andrew; Kirkpatrick, Niall

    2015-09-01

    Rapid-prototyped or three-dimensionally printed (3D-printed) patient-specific guides are of great use in many craniofacial and maxillofacial procedures and are extensively described in the literature. These guides are relatively easy to produce and cost effective. However, existing designs are limited in that they cannot be used in procedures requiring the 3D contouring of patient tissues. This paper presents a novel design and approach for the use of three-dimensional printing in the production of a patient-specific guide capable of fully guiding intraoperative 3D tissue contouring based on a pre-operative plan. We present a case, with encouraging results, in which the technique was used on a patient suffering from an extensive osseous tumour caused by fibrous dysplasia. PMID:26165757

  1. Evaluation of 3D reconstruction algorithms for a small animal PET camera

    SciTech Connect

    Johnson, C.A.; Gandler, W.R.; Seidel, J.

    1996-12-31

    The use of paired, opposing position-sensitive phototube scintillation cameras (SCs) operating in coincidence for small animal imaging with positron emitters is currently under study. Because of the low sensitivity of the system even in 3D mode and the need to produce images with high resolution, it was postulated that a 3D expectation maximization (EM) reconstruction algorithm might be well suited for this application. We investigated four reconstruction algorithms for the 3D SC PET camera: 2D filtered back-projection (FBP), 2D ordered subset EM (OSEM), 3D reprojection (3DRP), and 3D OSEM. Noise was assessed for all slices by the coefficient of variation in a simulated uniform cylinder. Resolution was assessed from a simulation of 15 point sources in the warm background of the uniform cylinder. At comparable noise levels, the resolution achieved with OSEM (0.9 mm to 1.2 mm) is significantly better than that obtained with FBP or 3DRP (1.5 mm to 2.0 mm). Images of a rat skull labeled with ¹⁸F-fluoride suggest that 3D OSEM can improve image quality of a small animal PET camera.
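
    As a reminder of what the EM-type reconstructions compared here involve, the snippet below sketches a plain MLEM update on a toy system matrix; OSEM applies the same multiplicative update cyclically over subsets of the projection data. None of the modelling details of the actual camera are represented.

    ```python
    import numpy as np

    def mlem(system_matrix, measured, n_iter=20):
        """Plain MLEM reconstruction: x_{k+1} = x_k / s * A^T (y / (A x_k)),
        with s = A^T 1 the sensitivity image.

        system_matrix : (n_bins, n_voxels) forward-projection matrix A
        measured      : (n_bins,) measured counts y
        """
        A = np.asarray(system_matrix, dtype=float)
        y = np.asarray(measured, dtype=float)
        x = np.ones(A.shape[1])                      # uniform initial image
        sensitivity = A.sum(axis=0) + 1e-12
        for _ in range(n_iter):
            forward = A @ x + 1e-12                  # expected counts under current image
            x *= (A.T @ (y / forward)) / sensitivity
        return x

    # toy 2-voxel, 3-bin example with a known image [4, 1]
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])
    y = A @ np.array([4.0, 1.0])
    print(mlem(A, y, n_iter=100))                    # converges toward [4, 1]
    ```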

  2. The effectiveness of 3D animations to enhance understanding of cranial cruciate ligament rupture.

    PubMed

    Clements, Dylan N; Broadhurst, Henry; Clarke, Stephen P; Farrell, Michael; Bennett, David; Mosley, John R; Mellanby, Richard J

    2013-01-01

    Cranial cruciate ligament (CCL) rupture is one of the most important orthopedic diseases taught to veterinary undergraduates. The complexity of the anatomy of the canine stifle joint combined with the plethora of different surgical interventions available for the treatment of the disease means that undergraduate veterinary students often have a poor understanding of the pathophysiology and treatment of CCL rupture. We designed, developed, and tested a three dimensional (3D) animation to illustrate the pertinent clinical anatomy of the stifle joint, the effects of CCL rupture, and the mechanisms by which different surgical techniques can stabilize the joint with CCL rupture. When compared with a non-animated 3D presentation, students' short-term retention of functional anatomy improved although they could not impart a better explanation of how different surgical techniques worked. More students found the animation useful than those who viewed a comparable non-animated 3D presentation. Multiple peer-review testing is required to maximize the usefulness of 3D animations during development. Free and open access to such tools should improve student learning and client understanding through wide-spread uptake and use. PMID:23475409

  3. Some Methods of Applied Numerical Analysis to 3d Facial Reconstruction Software

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Ianeş, Emilia; Roşu, Doina

    2010-09-01

    This paper deals with the collective work performed by medical doctors from the University of Medicine and Pharmacy Timisoara and engineers from the Politechnical Institute Timisoara in the effort to create the first Romanian 3D reconstruction software based on CT or MRI scans and to test the created software in clinical practice.

  4. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  5. Performance-driven facial animation: basic research on human judgments of emotional state in facial avatars.

    PubMed

    Rizzo, A A; Neumann, U; Enciso, R; Fidaleo, D; Noh, J Y

    2001-08-01

    three-dimensional avatar using a performance-driven facial animation (PDFA) system developed at the University of Southern California Integrated Media Systems Center. PDFA offers a means for creating high-fidelity visual representations of human faces and bodies. This effort explores the feasibility of sensing and reproducing a range of facial expressions with a PDFA system. In order to test the concordance of human ratings of emotional expression between video and avatar facial delivery, we first had facial model subjects observe stimuli that were designed to elicit naturalistic facial expressions. The emotional stimulus induction involved presenting text-based, still-image, and video clips to subjects that had previously been rated to induce facial expressions for the six universals of facial expression (happy, sad, fear, anger, disgust, and surprise), in addition to attentiveness, puzzlement and frustration. Videotapes of the induced facial expressions that best represented prototypic examples of the above emotional states, together with three-dimensional avatar animations of the same facial expressions, were randomly presented to 38 human raters. The raters used open-ended, forced-choice and seven-point Likert-type scales to rate the expressions in terms of identification. The forced-choice and seven-point ratings provided the most usable data to determine video/animation concordance, and these data are presented. To support a clear understanding of these data, a website has been set up that allows readers to view the video and facial animation clips and illustrates the assets and limitations of these types of facial expression-rendering methods (www.USCAvatars.com/MMVR). This methodological first step in our research program has served to provide valuable human user-centered feedback to support the iterative design and development of facial avatar characteristics for the expression of emotional communication.

  6. Error control in the set-up of stereo camera systems for 3d animal tracking

    NASA Astrophysics Data System (ADS)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.
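
    The error analysis itself is the subject of the paper; the snippet below only shows the basic linear (DLT) triangulation step whose output those calibration and set-up errors ultimately perturb, using an idealized two-camera rig.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one 3D point from two views.

        P1, P2 : (3, 4) camera projection matrices
        x1, x2 : (2,) image coordinates of the same point in each view
        Returns the (3,) estimated world point.
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                     # homogeneous solution = smallest singular vector
        return X[:3] / X[3]

    # toy stereo rig: two cameras with identity intrinsics, 1 unit apart along x
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.3, -0.2, 5.0])
    x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
    x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
    print(triangulate(P1, P2, x1, x2))   # recovers X_true up to numerical precision
    ```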

  7. 3D imaging acquisition, modeling, and prototyping for facial defects reconstruction

    NASA Astrophysics Data System (ADS)

    Sansoni, Giovanna; Trebeschi, Marco; Cavagnini, Gianluca; Gastaldi, Giorgio

    2009-01-01

    A novel approach that combines optical three-dimensional imaging, reverse engineering (RE) and rapid prototyping (RP) for mold production in the prosthetic reconstruction of facial prostheses is presented. A commercial laser-stripe digitizer is used to perform the multiview acquisition of the patient's face; the point clouds are aligned and merged in order to obtain a polygonal model, which is then edited to sculpture the virtual prosthesis. Two physical models of both the deformed face and the 'repaired' face are obtained: they differ only in the defect zone. Depending on the material used for the actual prosthesis, the two prototypes can be used either to directly cast the final prosthesis or to fabricate the positive wax pattern. Two case studies are presented, referring to prosthetic reconstructions of an eye and of a nose. The results demonstrate the advantages over conventional techniques as well as the improvements with respect to known automated manufacturing techniques in the mold construction. The proposed method results in decreased patient discomfort, reduced dependence on the anaplastologist's skill, and increased repeatability and efficiency of the whole process.

  8. A new 3D method for measuring cranio-facial relationships with cone beam computed tomography (CBCT)

    PubMed Central

    Cibrián, Rosa; Gandia, Jose L.; Paredes, Vanessa

    2013-01-01

    Objectives: CBCT systems, with their high-precision 3D reconstructions, 1:1 images and accuracy in locating cephalometric landmarks, allow us to evaluate measurements from craniofacial structures, thus enabling us to replace the anthropometric or two-dimensional methods used until now. The aims are to analyse cranio-facial relationships in a sample of patients who had previously undergone a CBCT and to create a new 3D cephalometric method for assessing and measuring patients. Study Design: 90 patients who had a CBCT (i-Cat®) as a diagnostic record were selected. 12 cephalometric landmarks on the three spatial planes (X, Y, Z) were defined and 21 linear measurements were established. Using these measurements, 7 triangles were described and analysed. With the sides of the triangles (CdR-Me-CdL), (FzR-Me-FzL), (GoR-N-GoL) and the Gl-Me distance, the ratios between them were analysed. In addition, 4 triangles in the mandible were measured (body: GoR-DB-Me and GoL-DB-Me; ramus: KrR-CdR-GoR and KrL-CdL-GoL). Results: When analyzing the sides of the CdR-Me-CdL triangle, it was found that 69.33% of the patients could be considered symmetric. Regarding the ratios between the sides of the triangles CdR-Me-CdL, FzR-Me-FzL, GoR-N-GoL and the Gl-Me distance, it was found that almost all ratios were close to 1:1, except those between the CdR-CdL side and the rest of the sides. With regard to the ratios of the 4 triangles of the mandible, it was found that the most symmetrical relationships were those corresponding to the sides of the body of the mandible, and the most asymmetrical ones were those corresponding to the bases of such triangles. Conclusions: A new method for assessing cranio-facial relationships using CBCT has been established. It could be used for diverse purposes including diagnosis and treatment planning. Key words: Craniofacial relationship, CBCT, 3D cephalometry. PMID:23524427
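
    As an illustration of the kind of triangle-based measurement described, the snippet below computes the side lengths and a left/right ratio for one of the named triangles (CdR-Me-CdL); the landmark coordinates are invented and the 3D cephalometric definitions of the landmarks are not reproduced here.

    ```python
    import numpy as np

    def side_lengths(a, b, c):
        """Lengths of the three sides of a triangle defined by 3D landmarks a, b, c."""
        a, b, c = map(np.asarray, (a, b, c))
        return (np.linalg.norm(b - c),   # side opposite a
                np.linalg.norm(a - c),   # side opposite b
                np.linalg.norm(a - b))   # side opposite c

    # hypothetical coordinates (mm) for right condylion, menton, left condylion
    CdR, Me, CdL = [60.2, 10.5, -70.0], [0.9, -98.3, 4.1], [-59.8, 11.0, -69.5]
    left_side, intercondylar, right_side = side_lengths(CdR, Me, CdL)
    print(right_side, left_side)       # CdR-Me versus CdL-Me, compared for symmetry
    print(right_side / left_side)      # a ratio close to 1:1 suggests a symmetric mandible
    ```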

  9. V-Man Generation for 3-D Real Time Animation. Chapter 5

    NASA Technical Reports Server (NTRS)

    Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang

    2007-01-01

    The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real-time with a new generation of 3D virtual characters: the V-Men. It combines several innovative algorithms coming from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon their unique set of skills manufactured during character creation. The key to the system is the automated creation of realistic V-Men, not requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate firstly animatable body meshes, secondly 3D garments and finally skinned body meshes.

  10. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    PubMed

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  11. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    PubMed

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  12. 3D reconstruction of internal structure of animal body using near-infrared light

    NASA Astrophysics Data System (ADS)

    Tran, Trung Nghia; Yamamoto, Kohei; Namita, Takeshi; Kato, Yuji; Shimizu, Koichi

    2014-03-01

    To realize three-dimensional (3D) optical imaging of the internal structure of an animal body, we have developed a new technique to reconstruct CT images from two-dimensional (2D) transillumination images. In transillumination imaging, the image is blurred due to the strong scattering in the tissue. We had previously developed a scattering suppression technique using the point spread function (PSF) for a fluorescent light source in the body. In this study, we newly propose a technique to apply this light-source PSF to the image of an unknown light-absorbing structure. The effectiveness of the proposed technique was examined in experiments with a model phantom and a mouse. In the phantom experiment, absorbers were placed in a tissue-equivalent medium to simulate the light-absorbing organs in a mouse body. Near-infrared light illuminated one side of the phantom and the image was recorded with a CMOS camera from the other side. Using the proposed technique, the scattering effect was efficiently suppressed and the absorbing structure could be visualized in the 2D transillumination image. Using 2D images obtained in many different orientations, we could reconstruct the 3D image. In the mouse experiment, an anesthetized mouse was held in an acrylic cylindrical holder. We could visualize internal organs such as the kidneys through the mouse's abdomen using the proposed technique. The 3D image of the kidneys and part of the liver was reconstructed. Through these experimental studies, the feasibility of practical 3D imaging of the internal light-absorbing structure of a small animal was verified.

  13. A 3D HIDAC-PET camera with sub-millimeter resolution for imaging small animals

    SciTech Connect

    Jeavons, A.P.; Chandler, R.A.; Dettmar, C.A.R.

    1999-06-01

    A HIDAC-PET camera consisting essentially of 5 million 0.5 mm gas avalanching detectors has been constructed for small-animal imaging. The particular HIDAC advantage--a high 3D spatial resolution--has been improved to 0.95 mm fwhm and to 0.7 mm fwhm when reconstructing with 3D-OSEM methods incorporating resolution recovery. A depth-of-interaction resolution of 2.5 mm is implicit, due to the laminar construction. Scatter-corrected sensitivity, at 8.9 cps/kBq (i.e. 0.9%) from a central point source, or 7.2 cps/kBq (543 cps/kBq/cm{sup 3}) from a distributed (40 mm diameter, 60 mm long) source is now much higher than previous, and other, work. A field-of-view of 100 mm (adjustable to 200 mm) diameter by 210 mm axially permits whole-body imaging of small animals, containing typically 4MBqs of activity, at 40 kcps of which 16% are random coincidences, with a typical scatter fraction of 44%. Throughout the field-of-view there are no positional distortions and relative quantitation is uniform to {+-} 3.5%, but some variation of spatial resolution is found. The performance demonstrates that HIDAC technology is quite appropriate for small-animal PET cameras.

  14. A 3D- and 4D-ESR imaging system for small animals.

    PubMed

    Oikawa, K; Ogata, T; Togashi, H; Yokoyama, H; Ohya-Nishiguchi, H; Kamada, H

    1996-01-01

    A new version of an in vivo ESR-CT system, composed of a custom-made 0.7 GHz ESR spectrometer, an air-core magnet with a field-scanning coil, three field-gradient coils, and two computers, enables up- and down-field, rapid magnetic-field scanning linearly controlled by computer. 3D pictures of the distribution of nitroxide radicals injected into the brains and livers of rats and mice were obtained in 1.5 min with a resolution of 1 mm. We have also succeeded in obtaining spatio-temporal images of the animals.

  15. Special effects used in creating 3D animated scenes-part 1

    NASA Astrophysics Data System (ADS)

    Avramescu, A. M.

    2015-11-01

    At present, with the help of computers, we can create special effects that look so real that we almost do not perceive them as being different; such effects are hard to distinguish from the real elements on the screen. With the increasingly accessible 3D field, which has more and more areas of application, 3D technology moves easily from architecture to product design. Realistic 3D animations are used as a means of learning, for multimedia presentations of large global corporations, for special effects and even for virtual actors in movies. Technology, as part of the art of film, is considered a prerequisite, but cinematography is the first art that had to wait for the correct intersection of technological development, innovation and human vision in order to attain full achievement. Increasingly often, the majority of industries use 3D (three-dimensional) sequences; graphics, commercials and special effects in movies are all designed in 3D. The key to attaining realistic visual effects is to successfully combine various distinct elements: characters, objects, images and video scenes, so that all these elements form a whole that works in perfect harmony. This article aims to exhibit a game design of the present day. Considering the advanced technology and futuristic vision of designers, nowadays we have different and multifarious game models. Special effects contribute decisively to the creation of a realistic three-dimensional scene; these effects are essential for transmitting the emotional state of the scene. Creating special effects is a work of finesse aimed at achieving high-quality scenes, and they can be used to draw the attention of the onlooker to an object in a scene. According to the conducted study, the best-selling game of the year 2010 was Call of Duty: Modern Warfare 2. This way, the article aims for the presented scene to be similar to many locations from this type of games, more

  16. 3D modeling method for computer animate based on modified weak structured light method

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2010-11-01

    A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such optical measurement devices are too expensive to be widely adopted, and on the other hand, precision is not as critical a factor in that situation. In this paper, a new, inexpensive 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source and a straight stick rotating on a fixed axis. An ordinary weak structured light configuration requires one or two reference planes, and the shadows on these planes must be tracked during scanning, which destroys the convenience of the method. In the modified system, reference planes are unnecessary, and the size range of the scanned objects is widely expanded. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints to get a full description of the object, and after a series of operations, a NURBS surface model is generated in the end. A complex toy bear is used to verify the efficiency of the method, with errors ranging from 0.7783 mm to 1.4326 mm compared with the ground truth measurement.
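
    The paper's two-stage ICP variant is not detailed in the abstract; the sketch below shows a single basic ICP iteration (nearest-neighbour matching followed by an SVD-based rigid alignment), which is the building block such a point-cloud merging pipeline repeats.

    ```python
    import numpy as np

    def icp_step(source, target):
        """One basic ICP iteration: match each source point to its nearest target
        point, then compute the rigid transform (R, t) that best aligns the pairs.
        Returns the transformed source cloud together with (R, t)."""
        # brute-force nearest neighbours (fine for small clouds)
        d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]

        # Kabsch / Procrustes rigid alignment of the source onto its matches
        mu_s, mu_m = source.mean(axis=0), matched.mean(axis=0)
        H = (source - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_s
        return source @ R.T + t, R, t

    # toy usage: a rotated and translated copy of a point cloud drifts back onto it
    rng = np.random.default_rng(2)
    target = rng.uniform(size=(50, 3))
    angle = np.deg2rad(10)
    Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
    source = target @ Rz.T + np.array([0.05, -0.02, 0.01])
    for _ in range(20):
        source, R, t = icp_step(source, target)
    ```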

  17. Modelling of facial growth in Czech children based on longitudinal data: Age progression from 12 to 15 years using 3D surface models.

    PubMed

    Koudelová, Jana; Dupej, Ján; Brůžek, Jaroslav; Sedlak, Petr; Velemínská, Jana

    2015-03-01

    Dealing with the increasing number of long-term missing children and juveniles requires more precise and objective age progression techniques for the prediction of their current appearance. Our contribution includes detailed and real facial growth information used for modelling age progression during adolescence. This study was based on an evaluation of a total of 180 three-dimensional (3D) facial scans of Czech children (23 boys, 22 girls), who were longitudinally studied from 12 to 15 years of age and thus revealed the real growth-related changes. The boys underwent more marked changes compared with the girls, especially in the regions of the eyebrow ridges, nose and chin. Using modern geometric morphometric methods, together with their applications, we modelled the ageing and allometric trajectories for both sexes and simulated the age-progressed effects on facial scans. The facial parts that are important for facial recognition (eyes, nose, mouth and chin) all deviated less than 0.75 mm, whereas the areas with the largest deviations were situated on the marginal parts of the face. The mean error between the predicted and real facial morphology obtained by modelling the children from 12 to 15 years of age was 1.92 mm in girls and 1.86 mm in boys. This study is beneficial for forensic artists as it reduces the subjectivity of age progression methods.

  18. 3D animations explaining the rotation, libration, and tides of planets

    NASA Astrophysics Data System (ADS)

    van Marcke de Lummen, J.; de Viron, O.; Dehant, V.; Defraigne, P.; Rosenblatt, P.; Karatekin, O.; van Hoolst, T.

    Space missions have always been appealing to a broad audience. In addition, the recent success of several experiments, revealing fascinating new information about the planets of the Solar System, has had wide media coverage. For that reason, our team, involved in the preparation and analysis of missions such as Mars Express and BepiColombo, is in contact with the press and with the public on a regular basis. It is not always an easy task to explain to a broad audience what we are doing on planets and why, because our science involves rather complicated computations of the deformations of a planet in response to gravitational forces and of the variations of its orientation in space. Consequently, we decided to create computer animations to explain our work. These animations show how and why the rotations of the planets are not uniform and how and why the planets change their orientation in space. The films describe the precession, nutations, and polar motion characteristic of terrestrial planets such as the Earth and Mars, and the librations characteristic of terrestrial bodies such as Mercury and the icy satellites. The animations also describe the gravitational effects on a spacecraft orbiting a planet, as well as the tides induced by the Sun or the planets on the planets themselves. These films have the advantage of giving the press and the public a correct and appealing representation of phenomena as complex as planetary tides, gravity perturbations on an orbit, precession, etc. The concept of the animations is the result of many discussions between scientists, and the movies have been generated in Blender, an open source software package for 3D modeling, animation, rendering, post-production, interactive creation and playback (available on www.blender3D.org). These movies have proven very helpful when meeting the press (some of them aired on television) or science decision makers, which fully justifies the work.

  19. A small animal image guided irradiation system study using 3D dosimeters

    NASA Astrophysics Data System (ADS)

    Qian, Xin; Admovics, John; Wuu, Cheng-Shie

    2015-01-01

    In a high-resolution image-guided small animal irradiation platform, a cone beam computed tomography (CBCT) system is integrated with an irradiation unit for precise targeting. Precise quality assurance is essential for both the imaging and irradiation components. Conventional commissioning techniques with films face major challenges due to alignment uncertainty and labour-intensive film preparation and scanning. In addition, due to the novel design of this platform, the mouse stage rotation for CBCT imaging is perpendicular to the gantry rotation for irradiation. Because these two rotations are associated with different mechanical systems, a discrepancy between rotation isocenters exists. In order to deliver x-rays precisely, it is essential to verify the coincidence of the imaging and irradiation isocenters. A 3D PRESAGE dosimeter can provide an excellent tool for checking dosimetry and verifying the coincidence of the irradiation and imaging coordinates in one system. Dosimetric measurements were performed to obtain beam profiles and percent depth dose (PDD). The isocentricity and coincidence of the mouse stage and gantry rotations were evaluated with starshots acquired using PRESAGE dosimeters. A single PRESAGE dosimeter can provide 3-D information on both geometric and dosimetric uncertainty, which is crucial for translational studies.

  20. Software Development: 3D Animations and Creating User Interfaces for Realistic Simulations

    NASA Technical Reports Server (NTRS)

    Gordillo, Orlando Enrique

    2015-01-01

    My fall 2015 semester was spent at the Lyndon B. Johnson Space Center working in the Integrated Graphics, Operations, and Analysis Laboratory (IGOAL). My first project was to create a video animation that could tell the story of OMICS. OMICS is a term being used in the field of biomedical science to describe the collective technologies that study biological systems, such as what makes up a cell and how it functions with other systems. In the IGOAL I used a large 23 inch Wacom monitor to draw storyboards, graphics, and line art animations. I used Blender as the 3D environment to sculpt, shape, cut or modify the several scenes and models for the video. A challenge creating this video was to take a term used in biomedical science and describe it in such a way that an 8th grade student can understand. I used a line art style because it would visually set the tone for what we thought was an educational style. In order to get a handle on the perspective and overall feel for the animation without overloading my workspace, I split up the 2 minute animation into several scenes. I used Blender's python scripting capabilities which allowed for the addition of plugins to add or modify tools. The scripts can also directly interact with the objects to create naturalistic patterns or movements. After collecting the rendered scenes, I used Blender's built-in video editing workspace to output the animation. My second project was to write software that emulates a physical system's interface. The interface was to simulate a boat, ROV, and winch system. Simulations are a time and cost effective way to test complicated data and provide training for operators without having to use expensive hardware. We created the virtual controls with 3-D Blender models and 2-D graphics, and then add functionality in C# using the Unity game engine. The Unity engine provides several essential behaviors of a simulator, such as the start and update functions. A framework for Unity, which was developed in

  1. Three-Dimensional Reconstructions Come to Life – Interactive 3D PDF Animations in Functional Morphology

    PubMed Central

    van de Kamp, Thomas; dos Santos Rolo, Tomy; Vagovič, Patrik; Baumbach, Tilo; Riedel, Alexander

    2014-01-01

    Digital surface mesh models based on segmented datasets have become an integral part of studies on animal anatomy and functional morphology; usually, they are published as static images, movies or as interactive PDF files. We demonstrate the use of animated 3D models embedded in PDF documents, which combine the advantages of both movie and interactivity, based on the example of preserved Trigonopterus weevils. The method is particularly suitable to simulate joints with largely deterministic movements due to precise form closure. We illustrate the function of an individual screw-and-nut type hip joint and proceed to the complex movements of the entire insect attaining a defence position. This posture is achieved by a specific cascade of movements: Head and legs interlock mutually and with specific features of thorax and the first abdominal ventrite, presumably to increase the mechanical stability of the beetle and to maintain the defence position with minimal muscle activity. The deterministic interaction of accurately fitting body parts follows a defined sequence, which resembles a piece of engineering. PMID:25029366

  2. Three-dimensional reconstructions come to life--interactive 3D PDF animations in functional morphology.

    PubMed

    van de Kamp, Thomas; dos Santos Rolo, Tomy; Vagovič, Patrik; Baumbach, Tilo; Riedel, Alexander

    2014-01-01

    Digital surface mesh models based on segmented datasets have become an integral part of studies on animal anatomy and functional morphology; usually, they are published as static images, movies or as interactive PDF files. We demonstrate the use of animated 3D models embedded in PDF documents, which combine the advantages of both movie and interactivity, based on the example of preserved Trigonopterus weevils. The method is particularly suitable to simulate joints with largely deterministic movements due to precise form closure. We illustrate the function of an individual screw-and-nut type hip joint and proceed to the complex movements of the entire insect attaining a defence position. This posture is achieved by a specific cascade of movements: Head and legs interlock mutually and with specific features of thorax and the first abdominal ventrite, presumably to increase the mechanical stability of the beetle and to maintain the defence position with minimal muscle activity. The deterministic interaction of accurately fitting body parts follows a defined sequence, which resembles a piece of engineering. PMID:25029366

  3. Modeling short-term dynamics and variability for realistic interactive facial animation.

    PubMed

    Stoiber, Nicolas; Breton, Gaspard; Seguier, Renaud

    2010-01-01

    Modern modeling and rendering techniques have produced nearly photorealistic face models, but truly expressive digital faces also require natural-looking movements. Virtual characters in today's applications often display unrealistic facial expressions. Indeed, facial animation with traditional schemes such as keyframing and motion capture demands expertise. Moreover, the traditional schemes aren't adapted to interactive applications that require the real-time generation of context-dependent movements. A new animation system produces realistic expressive facial motion at interactive speed. The system relies on a set of motion models controlling facial-expression dynamics. The models are fitted on captured motion data and therefore retain the dynamic signature of human facial expressions. They also contain a nondeterministic component that ensures the variety of the long-term visual behavior. This system can efficiently animate any synthetic face. The video illustrates interactive use of a system that generates facial-animation sequences.
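
    The fitted motion models themselves are the paper's contribution and are not described in enough detail to reproduce; the snippet below only illustrates the general idea of combining deterministic expression dynamics with a small stochastic component so that repeated animations are never exactly identical. All dynamics constants are invented.

    ```python
    import numpy as np

    def simulate_expression_track(target, n_frames, dt=1/30,
                                  stiffness=20.0, damping=6.0, noise_std=0.02, seed=None):
        """Toy facial-expression dynamics: each expression parameter is driven
        toward a target value by damped second-order dynamics, plus a small
        random component so that repeated runs differ slightly.
        This is a generic illustration, not the fitted models from the paper."""
        rng = np.random.default_rng(seed)
        target = np.asarray(target, dtype=float)
        x = np.zeros_like(target)            # current expression parameters (e.g. blendshape weights)
        v = np.zeros_like(target)
        track = np.empty((n_frames, target.size))
        for f in range(n_frames):
            a = stiffness * (target - x) - damping * v + rng.normal(0.0, noise_std, target.shape)
            v += dt * a
            x += dt * v
            track[f] = x
        return track

    # two runs toward the same "smile" target differ slightly frame by frame
    smile_target = [0.8, 0.2, 0.0, 0.5]
    run_a = simulate_expression_track(smile_target, n_frames=60, seed=1)
    run_b = simulate_expression_track(smile_target, n_frames=60, seed=2)
    print(np.abs(run_a[-1] - run_b[-1]))
    ```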

  4. Modeling short-term dynamics and variability for realistic interactive facial animation.

    PubMed

    Stoiber, Nicolas; Breton, Gaspard; Seguier, Renaud

    2010-01-01

    Modern modeling and rendering techniques have produced nearly photorealistic face models, but truly expressive digital faces also require natural-looking movements. Virtual characters in today's applications often display unrealistic facial expressions. Indeed, facial animation with traditional schemes such as keyframing and motion capture demands expertise. Moreover, the traditional schemes aren't adapted to interactive applications that require the real-time generation of context-dependent movements. A new animation system produces realistic expressive facial motion at interactive speed. The system relies on a set of motion models controlling facial-expression dynamics. The models are fitted on captured motion data and therefore retain the dynamic signature of human facial expressions. They also contain a nondeterministic component that ensures the variety of the long-term visual behavior. This system can efficiently animate any synthetic face. The video illustrates interactive use of a system that generates facial-animation sequences. PMID:20650728

  5. Weapon identification using antemortem CT with 3D reconstruction, is it always possible?--A report in a case of facial blunt and sharp injuries using an ashtray.

    PubMed

    Aromatario, Mariarosaria; Cappelletti, Simone; Bottoni, Edoardo; Fiore, Paola Antonella; Ciallella, Costantino

    2016-01-01

    An interesting case of homicide involving the use of a heavy glass ashtray is described. The victim, an 81-year-old woman, survived for a few days and died in hospital. The external examination of the victim showed extensive blunt and sharp facial injuries and defense injuries on both hands. The autopsy examination showed numerous tears on the face, as well as multiple fractures of the facial bones. A computed tomography scan with 3D reconstruction, performed in hospital before death, was used to identify the weapon used in the crime. In recent years, new diagnostic tools such as computed tomography have been widely used, especially in cases involving sharp and blunt force. Computed tomography has proven very valuable in analyzing fractures of the cranial theca for forensic purposes; in particular, antemortem computed tomography with 3D reconstruction is becoming an important tool in the weapon identification process, thanks to the possibility of identifying and comparing the shape of the object used to commit the crime, the injury, and the objects found during the investigation. No previous reports on the use of this technique for weapon identification in cases of isolated facial fractures have been described. We report a case in which, despite the correct use of this technique, it was not possible for the forensic pathologist to identify the weapon used to commit the crime. The authors want to highlight the limits encountered in the use of computed tomography with 3D reconstruction as a tool for weapon identification when facial fractures have occurred.

  6. Dense mesh sampling for video-based facial animation

    NASA Astrophysics Data System (ADS)

    Peszor, Damian; Wojciechowska, Marzena

    2016-06-01

    The paper describes an approach for selecting feature points on a three-dimensional triangle mesh obtained with various techniques from video footage. The approach has a dual purpose. First, it minimizes the data stored for facial animation: instead of storing the position of every vertex in every frame, one can store only a small subset of vertices per frame and calculate the positions of the others from that subset. Second, it selects feature points that can be used for anthropometry-based retargeting of recorded facial motion to another model, with a sampling density beyond what marker-based performance capture techniques can achieve. The developed approach was successfully tested on artificial models, models constructed with a structured-light scanner, and models constructed from video footage using stereophotogrammetry.
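
    A minimal sketch of the storage-reduction idea described above: keep only the feature-point positions per frame and rebuild the remaining vertices from them. The inverse-distance weighting over the k nearest feature points is an assumption made for illustration; the paper does not specify its interpolation scheme.

        import numpy as np

        def precompute_weights(rest_vertices, feature_idx, k=4):
            # Inverse-distance weights of every vertex to its k nearest feature
            # points, computed once on the rest-pose mesh (assumed scheme).
            feats = rest_vertices[feature_idx]                                    # (F, 3)
            d = np.linalg.norm(rest_vertices[:, None, :] - feats[None, :, :], axis=2)
            nearest = np.argsort(d, axis=1)[:, :k]                                # (V, k)
            w = 1.0 / (np.take_along_axis(d, nearest, axis=1) + 1e-8)
            return nearest, w / w.sum(axis=1, keepdims=True)

        def reconstruct_frame(rest_vertices, feature_idx, feature_pos, nearest, w):
            # Rebuild all vertex positions of one frame from the stored subset.
            disp = feature_pos - rest_vertices[feature_idx]                       # (F, 3)
            return rest_vertices + np.einsum('vk,vkc->vc', w, disp[nearest])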

  7. Function of pretribosphenic and tribosphenic mammalian molars inferred from 3D animation.

    PubMed

    Schultz, Julia A; Martin, Thomas

    2014-10-01

    Appearance of the tribosphenic molar in the Late Jurassic (160 Ma) is a crucial innovation for food processing in mammalian evolution. This molar type is characterized by a protocone, a talonid basin and a two-phased chewing cycle, all of which are apomorphic. In this functional study on the teeth of Late Jurassic Dryolestes leiriensis and the living marsupial Monodelphis domestica, we demonstrate that pretribosphenic and tribosphenic molars show fundamental differences of food reduction strategies, representing a shift in dental function during the transition of tribosphenic mammals. By using the Occlusal Fingerprint Analyser (OFA), we simulated the chewing motions of the pretribosphenic Dryolestes that represents an evolutionary precursor condition to such tribosphenic mammals as Monodelphis. Animation of chewing path and detection of collisional contacts between virtual models of teeth suggests that Dryolestes differs from the classical two-phased chewing movement of tribosphenidans, due to the narrowing of the interdental space in cervical (crown-root transition) direction, the inclination angle of the hypoflexid groove, and the unicuspid talonid. The pretribosphenic chewing cycle is equivalent to phase I of the tribosphenic chewing cycle, but the former lacks phase II of the tribosphenic chewing. The new approach can analyze the chewing cycle of the jaw by using polygonal 3D models of tooth surfaces, in a way that is complementary to the electromyography and strain gauge studies of muscle function of living animals. The technique allows alignment and scaling of isolated fossil teeth and utilizes the wear facet orientation and striation of the teeth to reconstruct the chewing path of extinct mammals.

  8. Function of pretribosphenic and tribosphenic mammalian molars inferred from 3D animation

    NASA Astrophysics Data System (ADS)

    Schultz, Julia A.; Martin, Thomas

    2014-10-01

    Appearance of the tribosphenic molar in the Late Jurassic (160 Ma) is a crucial innovation for food processing in mammalian evolution. This molar type is characterized by a protocone, a talonid basin and a two-phased chewing cycle, all of which are apomorphic. In this functional study on the teeth of Late Jurassic Dryolestes leiriensis and the living marsupial Monodelphis domestica, we demonstrate that pretribosphenic and tribosphenic molars show fundamental differences of food reduction strategies, representing a shift in dental function during the transition of tribosphenic mammals. By using the Occlusal Fingerprint Analyser (OFA), we simulated the chewing motions of the pretribosphenic Dryolestes that represents an evolutionary precursor condition to such tribosphenic mammals as Monodelphis. Animation of chewing path and detection of collisional contacts between virtual models of teeth suggests that Dryolestes differs from the classical two-phased chewing movement of tribosphenidans, due to the narrowing of the interdental space in cervical (crown-root transition) direction, the inclination angle of the hypoflexid groove, and the unicuspid talonid. The pretribosphenic chewing cycle is equivalent to phase I of the tribosphenic chewing cycle, but the former lacks phase II of the tribosphenic chewing. The new approach can analyze the chewing cycle of the jaw by using polygonal 3D models of tooth surfaces, in a way that is complementary to the electromyography and strain gauge studies of muscle function of living animals. The technique allows alignment and scaling of isolated fossil teeth and utilizes the wear facet orientation and striation of the teeth to reconstruct the chewing path of extinct mammals.

  9. NON-INVASIVE 3D FACIAL ANALYSIS AND SURFACE ELECTROMYOGRAPHY DURING FUNCTIONAL PRE-ORTHODONTIC THERAPY: A PRELIMINARY REPORT

    PubMed Central

    Tartaglia, Gianluca M.; Grandi, Gaia; Mian, Fabrizio; Sforza, Chiarella; Ferrario, Virgilio F.

    2009-01-01

    Objectives: Functional orthodontic devices can modify oral function, thus permitting more adequate growth processes. The assessment of their effects should include both facial morphology and muscle function. This preliminary study investigated whether a preformed functional orthodontic device could induce variations in facial morphology and function along with correction of oral dysfunction in a group of orthodontic patients in the mixed and early permanent dentitions. Material and Methods: The three-dimensional coordinates of 50 facial landmarks (forehead, eyes, nose, cheeks, mouth, jaw and ears) were collected in 10 orthodontic male patients aged 8-13 years, and in 89 healthy reference boys of the same age. Soft tissue facial angles, distances, and ratios were computed. Surface electromyography of the masseter and temporalis muscles was performed, and standardized symmetry, muscular torque and activity were calculated. Soft-tissue facial modifications were analyzed non-invasively before and after a 6-month treatment with a functional device. Comparisons were made with z-scores and paired Student's t-tests. Results: The 6-month treatment stimulated mandibular growth in the anterior and inferior directions, with significant variations in three-dimensional facial divergence and facial convexity. The modifications were larger in the patients than in reference children. On several occasions, the discrepancies relative to the norm were no longer significant after treatment. No significant variations in standardized muscular activity were found. Conclusions: Preliminary results showed that the continuous and correct use of the functional device induced measurable intraoral (dental arches) and extraoral (face) morphological modifications. The device did not modify the functional equilibrium of the masticatory muscles. PMID:19936531
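
    A minimal sketch of the statistics named in the abstract (z-scores against the reference group and paired Student's t-tests); the numeric values below are placeholders, not data from the study.

        import numpy as np
        from scipy import stats

        def zscore_vs_reference(patient_values, reference_values):
            # z-score of each patient's measurement against the reference boys.
            mu = np.mean(reference_values)
            sd = np.std(reference_values, ddof=1)
            return (np.asarray(patient_values) - mu) / sd

        # Paired comparison of one facial measurement before vs. after treatment.
        before = np.array([118.2, 121.0, 119.5, 122.3, 120.1])   # placeholder values
        after = np.array([121.5, 123.2, 121.0, 124.8, 122.6])    # placeholder values
        t_stat, p_value = stats.ttest_rel(before, after)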

  10. Analysis of Age-Related Changes in Asian Facial Skeletons Using 3D Vector Mathematics on Picture Archiving and Communication System Computed Tomography

    PubMed Central

    Kim, Soo Jin; Kim, So Jung; Park, Jee Soo; Byun, Sung Wan

    2015-01-01

    Purpose There are marked differences in facial skeletal characteristics between Asians and Caucasians. However, ethnic differences in age-related facial skeletal changes have not yet been fully established. The aims of this study were to evaluate age-related changes in Asian midfacial skeletons and to explore ethnic differences in facial skeletal structures with aging between Caucasians and Asians. Materials and Methods The study included 108 men (aged 20-79 years) and 115 women (aged 20-81 years). Axial CT images with a gantry tilt angle of 0° were analyzed. We measured three-dimensional (3D) coordinates at each point with a pixel lens cursor in a picture archiving and communication system (PACS), and angles and widths between the points were calculated using 3D vector mathematics. We analyzed angular changes in 4 bony regions, including the glabellar, orbital, maxillary, and pyriform aperture regions, and changes in the orbital aperture width (distance from the posterior lacrimal crest to the frontozygomatic suture) and the pyriform width (between both upper margins of the pyriform aperture). Results All 4 midfacial angles in females and glabellar and maxillary angles in males showed statistically significant decreases with aging. On the other hand, the orbital and pyriform widths did not show statistically significant changes with aging. Conclusion The results of this study suggest that Asian midfacial skeletons may change continuously throughout life, and that there may be significant differences in the midfacial skeleton between the sexes and between ethnic groups. PMID:26256986
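
    The angle and width calculations from 3D point coordinates can be expressed with elementary vector math, roughly as sketched below (point names and units are illustrative only).

        import numpy as np

        def angle_deg(vertex, point_a, point_b):
            # Angle (degrees) at `vertex` formed by the directions toward point_a and point_b.
            v1 = np.asarray(point_a, float) - np.asarray(vertex, float)
            v2 = np.asarray(point_b, float) - np.asarray(vertex, float)
            c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

        def width_mm(point_a, point_b):
            # Euclidean distance between two landmarks, e.g. the pyriform width.
            return float(np.linalg.norm(np.asarray(point_a, float) - np.asarray(point_b, float)))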

  11. Morphologic Analysis of the Temporomandibular Joint Between Patients With Facial Asymmetry and Asymptomatic Subjects by 2D and 3D Evaluation

    PubMed Central

    Zhang, Yuan-Li; Song, Jin-Lin; Xu, Xian-Chao; Zheng, Lei-Lei; Wang, Qing-Yuan; Fan, Yu-Bo; Liu, Zhan

    2016-01-01

    Abstract Signs and symptoms of temporomandibular joint (TMJ) dysfunction are commonly found in patients with facial asymmetry. Previous studies on the TMJ position have been limited to 2-dimensional (2D) radiographs, computed tomography (CT), or cone-beam computed tomography (CBCT). The purpose of this study was to compare the differences of TMJ position by using 2D CBCT and 3D model measurement methods. In addition, the differences of TMJ positions between patients with facial asymmetry and asymptomatic subjects were investigated. We prospectively recruited 5 patients (cases, mean age, 24.8 ± 2.9 years) diagnosed with facial asymmetry and 5 asymptomatic subjects (controls, mean age, 26 ± 1.2 years). The TMJ spaces, condylar and ramus angles were assessed by using 2D and 3D methods. The 3D models of mandible, maxilla, and teeth were reconstructed with the 3D image software. The variables in each group were assessed by t-test and the level of significance was 0.05. There was a significant difference in the horizontal condylar angle (HCA), coronal condylar angle (CCA), sagittal ramus angle (SRA), medial joint space (MJS), lateral joint space (LJS), superior joint space (SJS), and anterior joint space (AJS) measured in the 2D CBCT and in the 3D models (P < 0.05). The case group had significantly smaller SJS compared to the controls on both nondeviation side (P = 0.009) and deviation side (P = 0.004). In the case group, the nondeviation SRA was significantly larger than the deviation side (P = 0.009). There was no significant difference in the coronal condylar width (CCW) in either group. In addition, the anterior disc displacement (ADD) was more likely to occur on the deviated side in the case group. In conclusion, the 3D measurement method is more accurate and effective for clinicians to investigate the morphology of TMJ than the 2D method. PMID:27043669

  12. Morphologic Analysis of the Temporomandibular Joint Between Patients With Facial Asymmetry and Asymptomatic Subjects by 2D and 3D Evaluation: A Preliminary Study.

    PubMed

    Zhang, Yuan-Li; Song, Jin-Lin; Xu, Xian-Chao; Zheng, Lei-Lei; Wang, Qing-Yuan; Fan, Yu-Bo; Liu, Zhan

    2016-03-01

    Signs and symptoms of temporomandibular joint (TMJ) dysfunction are commonly found in patients with facial asymmetry. Previous studies on the TMJ position have been limited to 2-dimensional (2D) radiographs, computed tomography (CT), or cone-beam computed tomography (CBCT). The purpose of this study was to compare the differences of TMJ position by using 2D CBCT and 3D model measurement methods. In addition, the differences of TMJ positions between patients with facial asymmetry and asymptomatic subjects were investigated. We prospectively recruited 5 patients (cases, mean age, 24.8 ± 2.9 years) diagnosed with facial asymmetry and 5 asymptomatic subjects (controls, mean age, 26 ± 1.2 years). The TMJ spaces, condylar and ramus angles were assessed by using 2D and 3D methods. The 3D models of mandible, maxilla, and teeth were reconstructed with the 3D image software. The variables in each group were assessed by t-test and the level of significance was 0.05. There was a significant difference in the horizontal condylar angle (HCA), coronal condylar angle (CCA), sagittal ramus angle (SRA), medial joint space (MJS), lateral joint space (LJS), superior joint space (SJS), and anterior joint space (AJS) measured in the 2D CBCT and in the 3D models (P < 0.05). The case group had significantly smaller SJS compared to the controls on both nondeviation side (P = 0.009) and deviation side (P = 0.004). In the case group, the nondeviation SRA was significantly larger than the deviation side (P = 0.009). There was no significant difference in the coronal condylar width (CCW) in either group. In addition, the anterior disc displacement (ADD) was more likely to occur on the deviated side in the case group. In conclusion, the 3D measurement method is more accurate and effective for clinicians to investigate the morphology of TMJ than the 2D method. PMID:27043669

  13. Animal testing using 3D microwave tomography system for breast cancer detection.

    PubMed

    Lee, Jong Moon; Son, Sung Ho; Kim, Hyuk Je; Kim, Bo Ra; Choi, Heyng Do; Jeon, Soon Ik

    2014-01-01

    The three-dimensional microwave tomography (3D MT) system of the Electronics and Telecommunications Research Institute (ETRI) comprises an antenna array, a transmitting/receiving module, a switch matrix module, and a signal processing component. The system also includes a patient interface bed as well as a 3D reconstruction algorithm. Here, we compare image reconstruction results from the assembled system, which was used to image the breasts of dogs, with MRI results. Microwave imaging reconstruction results (at 1,500 MHz) obtained using the ETRI 3D MT system are presented. The system provides computationally reliable diagnosis results from the reconstructed MT image. PMID:25160233

  14. Effectiveness of Applying 2D Static Depictions and 3D Animations to Orthographic Views Learning in Graphical Course

    ERIC Educational Resources Information Center

    Wu, Chih-Fu; Chiang, Ming-Chin

    2013-01-01

    This study provides experimental results as an educational reference for instructors to help students learn orthographic views more effectively in graphics courses. A visual experiment was conducted to explore the differences in comprehension between 2D static and 3D animated object features; the goal was to reduce the possible misunderstanding…

  15. Comparison of the Reliability of Anatomic Landmarks based on PA Cephalometric Radiographs and 3D CT Scans in Patients with Facial Asymmetry

    PubMed Central

    Rathee, Pooja; Jain, Pradeep; Panwar, Vasim Raja

    2011-01-01

    Introduction Conventional cephalometry is an inexpensive and well-established method for evaluating patients with dentofacial deformities. However, patients with major deformities, and in particular asymmetric cases, are difficult to evaluate by conventional cephalometry. Reliable and accurate evaluation of the orbital and midfacial region in craniofacial syndrome patients is difficult due to inherent geometric magnification, distortion and the superpositioning of the craniofacial structures on cephalograms. Both two- and three-dimensional computed tomography (CT) have been proposed to alleviate some of these difficulties. Aims and objectives The aim of our study was to compare the reliability of anatomic cephalometric points obtained from the two modalities (conventional posteroanterior cephalograms and 3D CT) in patients with facial asymmetry, by comparing the intra- and interobserver variation of points recorded from frontal radiographs with those recorded from 3D CT. Materials and methods The sample included nine patients (5 males and 4 females) with an age range of 14 to 21 years and a mean age of 17.11 years, whose treatment plan called for correction of facial asymmetry. All CT scans were measured twice by two investigators with 2 weeks' separation for determination of intraobserver and interobserver variability. Similarly, all measurement points on the frontal cephalograms were traced twice with 2 weeks' separation. The tracings were superimposed and the average distance between replicate point readings was used as a measure of intra- and interobserver reliability. Intra- and interobserver variations were calculated for each method, and the data were imported directly into the statistical program SPSS 10.0.1 for Windows. Results Intraobserver variations of points defined on 3D CT were small compared with frontal cephalograms. The intraobserver variations ranged from 0 (A1, B1) to 0.6 mm, with variations of less than 0.5 mm for most of the points. Interobserver variations

  16. Improving Social Understanding of Individuals of Intellectual and Developmental disabilities through a 3D-Facial Expression Intervention Program

    ERIC Educational Resources Information Center

    Cheng, Yufang; Chen, Shuhui

    2010-01-01

    Individuals with intellectual and developmental disabilities (IDD) have specific difficulties in cognitive social-emotional capability, which affect numerous aspects of social competence. This study evaluated the learning effects of using 3D-emotion system intervention program for individuals with IDD in learning socially based-emotions capability…

  17. Effect of 3D animation videos over 2D video projections in periodontal health education among dental students

    PubMed Central

    Dhulipalla, Ravindranath; Marella, Yamuna; Katuri, Kishore Kumar; Nagamani, Penupothu; Talada, Kishore; Kakarlapudi, Anusha

    2015-01-01

    Background: There is limited evidence on the advantage of 3D oral health education videos over conventional 2D projections in improving oral health knowledge. This randomized controlled trial was done to test the effect of 3D oral health educational videos among first-year dental students. Materials and Methods: 80 first-year dental students were enrolled and divided into two groups (test and control). The test group was shown 3D animations and the control group regular 2D video projections covering periodontal anatomy, etiology, presenting conditions, preventive measures and treatment of periodontal problems. The effect of 3D animation was evaluated using a questionnaire consisting of 10 multiple-choice questions given to all participants at baseline, immediately after, and 1 month after the intervention. Clinical parameters such as the Plaque Index (PI), Gingival Bleeding Index (GBI), and Oral Hygiene Index Simplified (OHI-S) were measured at baseline and at the 1-month follow-up. Results: A significant difference in the post-intervention knowledge scores was found between the groups, as assessed by unpaired t-test (p<0.001), at baseline, immediately after, and 1 month after the intervention. At baseline, all the clinical parameters in both groups were similar and showed a significant reduction (p<0.001) after 1 month, whereas no significant difference was noticed between the groups post intervention. Conclusion: 3D animation videos are more effective than 2D videos in periodontal disease education and knowledge recall. The results also demonstrate that 3D animation provides better visual comprehension for students and greater health care outcomes. PMID:26759805

  18. Denoising of high resolution small animal 3D PET data using the non-subsampled Haar wavelet transform

    NASA Astrophysics Data System (ADS)

    Ochoa Domínguez, Humberto de Jesús; Máynez, Leticia O.; Vergara Villegas, Osslan O.; Mederos, Boris; Mejía, José M.; Cruz Sánchez, Vianey G.

    2015-06-01

    PET allows functional imaging of living tissue. However, one of the most serious technical problems affecting the reconstructed data is noise, particularly in images of small animals. In this paper, a denoising method for high-resolution small-animal 3D PET data is proposed with the aim of reducing noise while preserving detail. The method is based on estimating the non-subsampled Haar wavelet coefficients with a linear estimator. The procedure is applied to the volumetric images, reconstructed without correction factors (plane reconstruction). Results show that the method preserves structures and drastically reduces the noise that contaminates the image.
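
    A rough sketch of the kind of shift-invariant Haar processing the abstract describes, using a simple Wiener-like linear shrinkage of the detail band; the paper's actual linear estimator and number of decomposition levels are not given, so this is an assumed stand-in.

        import numpy as np

        def haar_denoise_3d(volume, noise_sigma):
            # One-level, non-subsampled (shift-invariant) Haar filtering applied
            # separably along each axis of the PET volume.
            vol = np.asarray(volume, dtype=np.float64)
            for axis in range(vol.ndim):
                shifted = np.roll(vol, 1, axis=axis)
                approx = 0.5 * (vol + shifted)        # undecimated approximation band
                detail = 0.5 * (vol - shifted)        # undecimated detail band
                # Linear (Wiener-like) shrinkage of the detail coefficients.
                gain = max(0.0, 1.0 - noise_sigma ** 2 / (detail.var() + 1e-12))
                vol = approx + gain * detail          # exact inverse when gain == 1
            return vol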

  19. Animation Strategies for Smooth Transformations Between Discrete LODs of 3D Building Models

    NASA Astrophysics Data System (ADS)

    Kada, Martin; Wichmann, Andreas; Filippovska, Yevgeniya; Hermes, Tobias

    2016-06-01

    The cartographic 3D visualization of urban areas has experienced tremendous progress over the last years. An increasing number of applications operate interactively in real-time and thus require advanced techniques to improve the quality and time response of dynamic scenes. The main focus of this article concentrates on the discussion of strategies for smooth transformation between two discrete levels of detail (LOD) of 3D building models that are represented as restricted triangle meshes. Because the operation order determines the geometrical and topological properties of the transformation process as well as its visual perception by a human viewer, three different strategies are proposed and subsequently analyzed. The simplest one orders transformation operations by the length of the edges to be collapsed, while the other two strategies introduce a general transformation direction in the form of a moving plane. This plane either pushes the nodes that need to be removed, e.g. during the transformation of a detailed LOD model to a coarser one, towards the main building body, or triggers the edge collapse operations used as transformation paths for the cartographic generalization.
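
    A minimal sketch of the simplest strategy mentioned above, which orders the collapse operations by the length of the edges to be collapsed (the two moving-plane strategies are not shown).

        import numpy as np

        def collapse_order_by_edge_length(vertices, edges):
            # Return edge indices from shortest to longest; collapsing edges in
            # this order yields the length-based transformation sequence.
            v = np.asarray(vertices, dtype=float)      # (n_vertices, 3)
            e = np.asarray(edges, dtype=int)           # (n_edges, 2) vertex index pairs
            lengths = np.linalg.norm(v[e[:, 0]] - v[e[:, 1]], axis=1)
            return np.argsort(lengths)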

  20. Teaching 3D computer animation to illustrators: the instructor as translator and technical director.

    PubMed

    Koning, Wobbe F

    2012-01-01

    An art instructor discusses the difficulties he's encountered teaching computer graphics skills to undergraduate art students. To help the students, he introduced an automated-rigging script for character animation. PMID:24806989

  1. Creating photo-realistic works in a 3D scene using layers styles to create an animation

    NASA Astrophysics Data System (ADS)

    Avramescu, A. M.

    2015-11-01

    Creating realistic objects in a 3D scene is not easy work; we have to be very careful to make the creation highly detailed. Even without prior experience in photo-realistic work, the right techniques and a good reference photo allow an amazing amount of detail and realism to be created. This article presents some of these detailed methods, from which the techniques necessary to make beautiful and realistic objects in a scene can be learned. More precisely, this paper presents how to create a 3D animated scene, mainly using the Pen Tool and Blending Options. The work is based on teaching some simple ways of using Layer Styles to create convincing shadows, lights, textures and a realistic sense of three dimensions. It also shows how some interesting uses of the illumination and rendering options can create a realistic effect in a scene. Moreover, the article shows how to create photo-realistic 3D models from a digital image. The present work proposes to show how to use Illustrator paths, texturing, basic lighting and rendering, how to apply textures, and how to parent the building and object components. We also propose to use this approach to recreate smaller details or 3D objects from a 2D image. After a critical art stage, we are now able to present in this paper the architecture of a design method for creating an animation. The aim is to create a conceptual and methodological tutorial that addresses this issue both scientifically and in practice. This objective also includes proposing, on a strong scientific basis, a model that gives a better understanding of the techniques necessary to create a realistic animation.

  2. Sign Language for K-8 Mathematics by 3D Interactive Animation

    ERIC Educational Resources Information Center

    Adamo-Villani, Nicoletta; Doublestein, John; Martin, Zachary

    2005-01-01

    We present a new highly interactive computer animation tool to increase the mathematical skills of deaf children. We aim at increasing the effectiveness of (hearing) parents in teaching arithmetic to their deaf children, and the opportunity of deaf children to learn arithmetic via interactive media. Using state-of-the-art computer animation…

  3. 3-D Computer Animation vs. Live-Action Video: Differences in Viewers' Response to Instructional Vignettes

    ERIC Educational Resources Information Center

    Smith, Dennie; McLaughlin, Tim; Brown, Irving

    2012-01-01

    This study explored computer animation vignettes as a replacement for live-action video scenarios of classroom behavior situations previously used as an instructional resource in teacher education courses in classroom management strategies. The focus of the research was to determine if the embedded behavioral information perceived in a live-action…

  4. SU-E-T-376: 3-D Commissioning for An Image-Guided Small Animal Micro- Irradiation Platform

    SciTech Connect

    Qian, X; Wuu, C; Admovics, J

    2014-06-01

    Purpose: A 3-D radiochromic plastic dosimeter has been used to cross-test the isocentricity of a high resolution image-guided small animal microirradiation platform. In this platform, the mouse stage rotating for cone beam CT imaging is perpendicular to the gantry rotation for sub-millimeter radiation delivery. A 3-D dosimeter can be used to verify both imaging and irradiation coordinates. Methods: A 3-D dosimeter and optical CT scanner were used in this study. In the platform, both mouse stage and gantry can rotate 360° with rotation axis perpendicular to each other. Isocentricity and coincidence of mouse stage and gantry rotations were evaluated using star patterns. A 3-D dosimeter was placed on mouse stage with center at platform isocenter approximately. For CBCT isocentricity, with gantry moved to 90°, the mouse stage rotated horizontally while the x-ray was delivered to the dosimeter at certain angles. For irradiation isocentricity, the gantry rotated 360° to deliver beams to the dosimeter at certain angles for star patterns. The uncertainties and agreement of both CBCT and irradiation isocenters can be determined from the star patterns. Both procedures were repeated 3 times using 3 dosimeters to determine short-term reproducibility. Finally, dosimeters were scanned using optical CT scanner to obtain the results. Results: The gantry isocentricity is 0.9 ± 0.1 mm and mouse stage rotation isocentricity is about 0.91 ± 0.11 mm. Agreement between the measured isocenters of irradiation and imaging coordinates was determined. The short-term reproducibility test yielded 0.5 ± 0.1 mm between the imaging isocenter and the irradiation isocenter, with a maximum displacement of 0.7 ± 0.1 mm. Conclusion: The 3-D dosimeter can be very useful in precise verification of targeting for a small animal irradiation research. In addition, a single 3-D dosimeter can provide information in both geometric and dosimetric uncertainty, which is crucial for translational studies.

  5. Precise Animated 3-D Displays Of The Heart Constructed From X-Ray Scatter Fields

    NASA Astrophysics Data System (ADS)

    McInerney, J. J.; Herr, M. D.; Copenhaver, G. L.

    1986-01-01

    A technique, based upon the interrogation of x-ray scatter, has been used to construct precise animated displays of the three-dimensional surface of the heart throughout the cardiac cycle. With the selection of motion amplification, viewing orientation, beat rate, and repetitive playbacks of isolated segments of the cardiac cycle, these displays are used to directly visualize epicardial surface velocity and displacement patterns, to construct regional maps of old or new myocardial infarction, and to visualize diastolic stiffening of the ventricle associated with acute ischemia. The procedure is non-invasive. Cut-downs or injections are not required.

  6. Development of 3D multimedia with advanced computer animation tools for outreach activities related to Meteor Science and Meteoritics

    NASA Astrophysics Data System (ADS)

    Madiedo, J. M.

    2012-09-01

    Documentaries related to Astronomy and Planetary Sciences are a common and very attractive way to promote the interest of the public in these areas. These educational tools can benefit from new advanced computer animation software and 3D technologies, as these make the documentaries even more attractive. However, special care must be taken to guarantee that the information contained in them is serious and objective. In this sense, additional value is given when the footage is produced by the researchers themselves. With this aim, a new documentary produced and directed by Prof. Madiedo has been developed. The documentary, which has been developed entirely by means of advanced computer animation tools, is dedicated to several aspects of Meteor Science and Meteoritics. The main features of this outreach and education initiative are presented here.

  7. Images of Soft-bodied Animals with External Hard Shell: 3D Visualization of the Embedded Soft Tissue

    SciTech Connect

    Rao, Donepudi V.; Akatsuka, Takao; Tromba, Giuliana

    2004-05-12

    Images of soft-bodied animals (snails) of various types with an external hard shell are obtained for 25, 27 and 29 keV synchrotron X-rays. The SYRMEP facility at Elettra, Trieste, Italy, and the associated detection system have been used for image acquisition. The interior properties of the embedded soft tissue are analysed using the software. From the reconstructed images, the soft-tissue distribution, the void spaces associated with the soft tissue, and the external hard shell are identified. 3D images are reconstructed at these energies and the optimum energy is chosen, based on image quality, for further analysis. The optimum energy allowed us to visualize low-absorbing details and the interior microstructure of the embedded soft tissue.

  8. Facial Image Classification of Mouse Embryos for the Animal Model Study of Fetal Alcohol Syndrome

    PubMed Central

    Fang, Shiaofen; Liu, Ying; Huang, Jeffrey; Vinci-Booher, Sophia; Anthony, Bruce; Zhou, Feng

    2010-01-01

    Fetal Alcohol Syndrome (FAS) is a developmental disorder caused by maternal drinking during pregnancy. Computerized imaging techniques have been applied to study human facial dysmorphology associated with FAS. This paper describes a new facial image analysis method based on a multi-angle image classification technique using micro-video images of mouse embryos. Images taken from several different angles are analyzed separately, and the results are combined for classifications that separate embryos with and without alcohol exposure. Analysis results from animal models provide critical references for the understanding of FAS and potential therapy solutions for human patients. PMID:20502627
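
    A minimal sketch of the multi-angle idea: classify each viewing angle separately and combine the results. Averaging the per-angle class probabilities is an assumed fusion rule; the paper's exact combination scheme is not detailed in the abstract.

        import numpy as np

        def combine_angle_classifications(per_angle_probabilities):
            # per_angle_probabilities: (n_angles, n_classes) outputs of per-angle
            # classifiers for one embryo; classes assumed to be
            # [unexposed, alcohol-exposed].
            fused = np.mean(np.asarray(per_angle_probabilities, float), axis=0)
            return int(np.argmax(fused)), fused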

  9. Comparison of two computer animated imaging programs for quantifying facial profile preference.

    PubMed

    Giddon, D B; Bernier, D L; Evans, C A; Kinchen, J A

    1996-06-01

    To establish the physical basis of subjective judgements of facial appearance, two novel computer-imaging programs differing in method of preparation and presentation of 5 features of the facial soft-tissue profile of 4 faces representing 4 different classifications of dental occlusion were compared. Images of facial soft tissue of 5 features were digitized and "animated" from 16 discrete distortions or morphed from the two extremes of each feature. 12 volunteer judges responded to both the "animated" and morphed presentations by pressing the computer mouse button when the image became acceptable and releasing the button when the image was no longer acceptable. They also pressed the mouse button when the most pleasing distortion appeared from either direction. Aggregating responses to counterbalanced trials and features across judges yielded high correlations between the programs for midpoint of acceptability. Although both programs provide reliable and valid measures of subjective acceptability of present and proposed changes in facial morphology, the new morphing program is more user-friendly than the "animated" method. PMID:8823891

  10. 3D reconstructions with pixel-based images are made possible by digitally clearing plant and animal tissue

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...

  11. Reliability of a three-dimensional method for measuring facial animation: a case report.

    PubMed

    Trotman, C A; Gross, M M; Moffatt, K

    1996-01-01

    Reliable methods of quantifying functional impairment of the craniofacial region are sorely lacking. The purpose of this study was to test the reliability of a three-dimensional method for assessing the functional repertoire of the face. Subjects were instructed to perform repeated sequences of five maximal facial animations. Facial motions were captured by three 60-Hz video cameras, and three-dimensional maximum motion amplitudes were calculated. Student's t-test and Pearson product-moment correlation coefficients were used to test for significant differences between repetitions. The results show moderate to excellent reliability of the amplitude of motion for the landmarks over all animations. For each specific animation, certain landmarks demonstrated excellent reliability of motion.
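
    A minimal sketch of the amplitude and reliability measures described above; the first frame is assumed to be the rest position, and the repetition data are placeholders rather than values from the study.

        import numpy as np
        from scipy.stats import pearsonr

        def max_motion_amplitude(trajectory):
            # trajectory: (n_frames, 3) positions of one landmark reconstructed
            # from the three cameras; amplitude is measured from the first frame.
            traj = np.asarray(trajectory, dtype=float)
            return float(np.linalg.norm(traj - traj[0], axis=1).max())

        # Reliability between two repetitions of the same animation
        # (per-landmark amplitudes; placeholder values).
        rep1 = np.array([12.1, 8.4, 5.2, 9.9, 3.3])
        rep2 = np.array([11.8, 8.9, 5.0, 10.4, 3.1])
        r, p = pearsonr(rep1, rep2)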

  12. The masseteric nerve: a versatile power source in facial animation techniques.

    PubMed

    Bianchi, B; Ferri, A; Ferrari, S; Copelli, C; Salvagni, L; Sesenna, E

    2014-03-01

    The masseteric nerve has many advantages including low morbidity, its proximity to the facial nerve, the strong motor impulse, its reliability, and the fast reinnervation that is achievable in most patients. Reinnervation of a neuromuscular transplant is the main indication for its use, but it has been used for the treatment of recent facial palsies with satisfactory results. We have retrospectively evaluated 60 patients who had facial animation procedures using the masseteric nerve during the last 10 years. The patients included those with recent, and established or congenital, unilateral and bilateral palsies. The masseteric nerve was used for coaptation of the facial nerve either alone or in association with crossfacial nerve grafting, or for the reinnervation of gracilis neuromuscular transplants. Reinnervation was successful in all cases, the mean (range) time being 4 (2-5) months for facial nerve coaptation and 4 (3-7) months for neuromuscular transplants. Cosmesis was evaluated (moderate, n=10, good, n=30, and excellent, n=20) as was functional outcome (no case of impairment of masticatory function, all patients able to smile, and achievement of a smile independent of biting). The masseteric nerve has many uses, including in both recent, and established or congenital, cases. In some conditions it is the first line of treatment. The combination of techniques gives excellent results in unilateral palsies and should therefore be considered a valid option.

  13. Estimation of bisphenol A-Human toxicity by 3D cell culture arrays, high throughput alternatives to animal tests.

    PubMed

    Lee, Dong Woo; Oh, Woo-Yeon; Yi, Sang Hyun; Ku, Bosung; Lee, Moo-Yeal; Cho, Yoon Hee; Yang, Mihi

    2016-09-30

    Bisphenol A (BPA) has been widely used for manufacturing polycarbonate plastics and epoxy resins and has been extensively tested in animals to predict human toxicity. In order to reduce the use of animals for toxicity assessment and provide more accurate information on BPA toxicity in humans, we encapsulated Hep3B human hepatoma cells in alginate and cultured them in three dimensions (3D) on a micropillar chip coupled to a panel of metabolic enzymes on a microwell chip. As a result, we were able to assess the toxicity of BPA under various metabolic enzyme conditions using a high-throughput microassay; sample volumes were nearly 2,000 times less than those required for a 96-well plate. We applied a total of 28 different enzymes to each chip, including 10 cytochrome P450s (CYP450s), 10 UDP-glycosyltransferases (UGTs), 3 sulfotransferases (SULTs), alcohol dehydrogenase (ADH), and aldehyde dehydrogenase 2 (ALDH2). Phase I enzyme mixtures, phase II enzyme mixtures, and a combination of phase I and phase II enzymes were also applied to the chip. BPA toxicity was higher in samples containing CYP2E1 than in controls, which contained no enzymes (IC50, 184±16μM and 270±25.8μM, respectively, p<0.01). However, BPA-induced toxicity was alleviated in the presence of ADH (IC50, 337±17.9μM), ALDH2 (335±13.9μM), and SULT1E1 (318±17.7μM) (p<0.05). CYP2E1-mediated cytotoxicity was confirmed by quantifying unmetabolized BPA using HPLC/FD. Therefore, we suggest the present micropillar/microwell chip platform as an effective alternative to animal testing for estimating BPA toxicity via human metabolic systems. PMID:27491884
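
    A minimal sketch of how an IC50 such as those quoted above could be estimated from chip-level dose-response readings; the Hill model and the viability values are assumptions for illustration, not the study's fitting procedure or data.

        import numpy as np
        from scipy.optimize import curve_fit

        def hill(dose, ic50, slope):
            # Two-parameter Hill model for fractional cell viability.
            return 1.0 / (1.0 + (dose / ic50) ** slope)

        doses = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])    # microM BPA (placeholders)
        viability = np.array([0.95, 0.85, 0.60, 0.35, 0.10])    # placeholder readings
        (ic50, slope), _ = curve_fit(hill, doses, viability, p0=[200.0, 1.0])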

  14. Facial animation in patients with Moebius and Moebius-like syndromes.

    PubMed

    Bianchi, B; Copelli, C; Ferrari, S; Ferri, A; Sesenna, E

    2010-11-01

    Moebius syndrome, a rare congenital disorder of varying severity, involves multiple cranial nerves and is characterised predominantly by bilateral or unilateral paralysis of the facial and abducens nerves. Facial paralysis causes inability to smile and bilabial incompetence with speech difficulties, oral incompetence, problems with eating and drinking, including pocketing of food in the cheek and dribbling, as well as severe drooling. Other relevant clinical findings are incomplete eye closure and convergent strabismus. The authors report on 48 patients with Moebius and Moebius-like syndromes seen from 2003 to September 2007 (23 males and 25 females, mean age 13.9 years). In 20 cases a reinnervated gracilis transplant was performed to re-animate the impaired sides of the face. In this series, all free-muscle transplantations survived the transfer, and no flap was lost. In 19 patients complete reinnervation of the muscle was observed with an excellent or good facial symmetry at rest in all patients and whilst smiling in 87% of cases. In conclusion, according to the literature, the gracilis muscle free transfer can be considered a safe and reliable technique for facial reanimation with good aesthetic and functional results.

  15. Lighted display devices for producing static or animated visual displays, including animated facial features

    DOEpatents

    Heilbron, Valerie J; Clem, Paul G; Cook, Adam Wade

    2014-02-11

    An illuminated display device with a base member with a plurality of cavities therein. Illumination devices illuminate the cavities and emit light through an opening of the cavities in a pattern, and a speaker can emit sounds in synchronization with the pattern. A panel with translucent portions can overly the base member and the cavities. An animated talking character can have an animated mouth cavity complex with multiple predetermined mouth lighting configurations simulative of human utterances. The cavities can be open, or optical waveguide material or positive members can be disposed therein. Reflective material can enhance internal reflectance and light emission.

  16. Personal identification by the comparison of facial profiles: testing the reliability of a high-resolution 3D-2D comparison model.

    PubMed

    Cattaneo, Cristina; Cantatore, Angela; Ciaffi, Romina; Gibelli, Daniele; Cigada, Alfredo; De Angelis, Danilo; Sala, Remo

    2012-01-01

    Identification from video surveillance systems is frequently requested in forensic practice. The "3D-2D" comparison has proven to be reliable in assessing identification but still requires standardization; this study concerns the validation of the 3D-2D profile comparison. The 3D models of the faces of five individuals were compared with photographs of the same subjects as well as of another 45 individuals. The differences in area and in distance between the maxima (glabella, tip of nose, fore point of upper and lower lips, pogonion) and minima points (selion, subnasale, stomion, suprapogonion) were measured. The highest difference in area between the 3D model and the 2D image was between 43 and 133 mm(2) in the five matches, and always greater than 157 mm(2) in mismatches; the mean distance between the points was greater than 1.96 mm in mismatches and <1.9 mm in the five matches (p < 0.05). These results indicate that this difference in areas may provide a means of distinguishing "correct" from "incorrect" matches.
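
    A minimal sketch of the distance criterion reported above, assuming the profile landmarks of the 3D model have already been registered to the photograph's image plane.

        import numpy as np

        def profile_match(model_landmarks, photo_landmarks, threshold_mm=1.9):
            # Mean distance between corresponding maxima/minima profile points;
            # below the study's ~1.9 mm threshold the pair scores as a possible match.
            d = np.linalg.norm(np.asarray(model_landmarks, float)
                               - np.asarray(photo_landmarks, float), axis=1)
            mean_d = float(d.mean())
            return mean_d, mean_d < threshold_mm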

  17. Simultaneous real-time 3D photoacoustic tomography and EEG for neurovascular coupling study in an animal model of epilepsy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Xiao, Jiaying; Jiang, Huabei

    2014-08-01

    Objective. Neurovascular coupling in epilepsy is poorly understood; its study requires simultaneous monitoring of hemodynamic changes and neural activity in the brain. Approach. Here for the first time we present a combined real-time 3D photoacoustic tomography (PAT) and electrophysiology/electroencephalography (EEG) system for the study of neurovascular coupling in epilepsy, whose ability was demonstrated with a pentylenetetrazol (PTZ) induced generalized seizure model in rats. Two groups of experiments were carried out with different wavelengths to detect the changes of oxy-hemoglobin (HbO2) and deoxy-hemoglobin (HbR) signals in the rat brain. We extracted the average PAT signals of the superior sagittal sinus (SSS), and compared them with the EEG signal. Main results. Results showed that the seizure process can be divided into three stages. A ‘dip’ lasting for 1-2 min in the first stage and the following hyperfusion in the second stage were observed. The HbO2 signal and the HbR signal were generally negatively correlated. The change of blood flow was also estimated. All the acquired results here were in accordance with other published results. Significance. Compared to other existing functional neuroimaging tools, the method proposed here enables reliable tracking of hemodynamic signal with both high spatial and high temporal resolution in 3D, so it is more suitable for neurovascular coupling study of epilepsy.

  18. Three-Dimensional Facial Adaptation for MPEG-4 Talking Heads

    NASA Astrophysics Data System (ADS)

    Grammalidis, Nikos; Sarris, Nikos; Deligianni, Fani; Strintzis, Michael G.

    2002-12-01

    This paper studies a new method for three-dimensional (3D) facial model adaptation and its integration into a text-to-speech (TTS) system. The 3D facial adaptation requires a set of two orthogonal views of the user's face with a number of feature points located on both views. Based on the correspondences of the feature points' positions, a generic face model is deformed nonrigidly treating every facial part as a separate entity. A cylindrical texture map is then built from the two image views. The generated head models are compared to corresponding models obtained by the commonly used adaptation method that utilizes 3D radial bases functions. The generated 3D models are integrated into a talking head system, which consists of two distinct parts: a multilingual text to speech sub-system and an MPEG-4 compliant facial animation sub-system. Support for the Greek language has been added, while preserving lip and speech synchronization.

  19. A full-field and real-time 3D surface imaging augmented DOT system for in-vivo small animal studies

    NASA Astrophysics Data System (ADS)

    Yi, Steven X.; Yang, Bingcheng; Yin, Gongjie

    2010-02-01

    A crucial parameter in Diffuse Optical Tomography (DOT) is the construction of an accurate forward model, which greatly depends on the tissue boundary. Since photon propagation is a three-dimensional volumetric problem, extraction and subsequent modeling of three-dimensional boundaries is essential. The original experimental demonstrations of the feasibility of DOT for reconstructing absorbers, scatterers and fluorochromes used phantoms or tissues confined appropriately to conform to easily modeled geometries such as a slab or a cylinder. In later years several methods have been developed to model photon propagation through diffuse media with complex boundaries using numerical solutions of the diffusion or transport equation (finite elements or differences) or, more recently, analytical methods based on the tangent-plane method. While optical examinations performed simultaneously with anatomical imaging modalities such as MRI provide well-defined boundaries, very limited progress has been made so far in extracting full-field (360 degree) boundaries for in-vivo three-dimensional DOT stand-alone imaging. In this paper, we present a desktop multi-spectrum in-vivo 3D DOT system for small animal imaging. This system is augmented with Technest's full-field 3D cameras. The system can acquire 3D object surface profiles in real time and register the 3D boundary with diffuse tomography. Extensive experiments are performed on phantoms and small animals by our collaborators at the Center for Molecular Imaging Research (CMIR) at Massachusetts General Hospital (MGH) and Harvard Medical School. The data show successfully reconstructed DOT results with improved accuracy.

  20. In vivo 3D analysis of systemic effects after local heavy-ion beam irradiation in an animal model

    PubMed Central

    Nagata, Kento; Hashimoto, Chika; Watanabe-Asaka, Tomomi; Itoh, Kazusa; Yasuda, Takako; Ohta, Kousaku; Oonishi, Hisako; Igarashi, Kento; Suzuki, Michiyo; Funayama, Tomoo; Kobayashi, Yasuhiko; Nishimaki, Toshiyuki; Katsumura, Takafumi; Oota, Hiroki; Ogawa, Motoyuki; Oga, Atsunori; Ikemoto, Kenzo; Itoh, Hiroshi; Kutsuna, Natsumaro; Oda, Shoji; Mitani, Hiroshi

    2016-01-01

    Radiotherapy is widely used in cancer treatment. In addition to inducing effects in the irradiated area, irradiation may induce effects on tissues close to and distant from the irradiated area. Japanese medaka, Oryzias latipes, is a small teleost fish and a model organism for evaluating the environmental effects of radiation. In this study, we applied low-energy carbon-ion (26.7 MeV/u) irradiation to adult medaka to a depth of approximately 2.2 mm from the body surface using an irradiation system at the National Institutes for Quantum and Radiological Science and Technology. We histologically evaluated the systemic alterations induced by irradiation using serial sections of the whole body, and conducted a heart rate analysis. Tissues from the irradiated side showed signs of serious injury that corresponded with the radiation dose. A 3D reconstruction analysis of the kidney sections showed reductions in the kidney volume and blood cell mass along the irradiated area, reflecting the precise localization of the injuries caused by carbon-beam irradiation. Capillary aneurysms were observed in the gill in both ventrally and dorsally irradiated fish, suggesting systemic irradiation effects. The present study provides an in vivo model for further investigation of the effects of irradiation beyond the locally irradiated area. PMID:27345436
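
    The volume comparison from serial sections can be sketched as a basic Cavalieri-style estimate (segmented slice areas times slice thickness); the actual 3D reconstruction pipeline in the study is more involved.

        def volume_from_sections(section_areas_mm2, section_thickness_mm):
            # Organ volume estimated as the sum of segmented slice areas
            # multiplied by the slice thickness.
            return sum(section_areas_mm2) * section_thickness_mm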

  1. Enhanced simultaneous detection of ractopamine and salbutamol--Via electrochemical-facial deposition of MnO2 nanoflowers onto 3D RGO/Ni foam templates.

    PubMed

    Wang, Ming Yan; Zhu, Wei; Ma, Lin; Ma, Juan Juan; Zhang, Dong En; Tong, Zhi Wei; Chen, Jun

    2016-04-15

    In this paper, we report a facile method to successfully fabricate MnO2 nanoflowers loaded onto 3D RGO@nickel foam, showing enhanced biosensing activity due to the improved structural integration of the different electrode material components. When the as-prepared 3D hybrid electrodes were investigated as a binder-free biosensor, two well-defined and separate differential pulse voltammetric peaks for ractopamine (RAC) and salbutamol (SAL) were observed, indicating that simultaneous selective detection of both β-agonists is possible. The MnO2/RGO@NF sensor also demonstrated a linear relationship over a wide concentration range of 17 nM to 962 nM (R=0.9997) for RAC and 42 nM to 1463 nM (R=0.9996) for SAL, with detection limits of 11.6 nM for RAC and 23.0 nM for SAL. In addition, the developed MnO2/RGO@NF sensor was further used to detect RAC and SAL in pork samples, showing satisfactory results comparable with the analytical results from HPLC.

  2. Comparison of 3D Maximum A Posteriori and Filtered Backprojection algorithms for high resolution animal imaging in microPET

    SciTech Connect

    Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.

    2000-01-01

    We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high-resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data, and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.
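
    A minimal sketch of the ensemble noise measure described above: the voxel-wise standard deviation over the independently reconstructed realizations.

        import numpy as np

        def ensemble_noise(reconstructions):
            # reconstructions: (n_realizations, nx, ny, nz) array of independently
            # reconstructed images of the same phantom.
            return np.std(np.asarray(reconstructions, dtype=float), axis=0, ddof=1)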

  3. 3D reservoir visualization

    SciTech Connect

    Van, B.T.; Pajon, J.L.; Joseph, P. )

    1991-11-01

    This paper shows how some simple 3D computer graphics tools can be combined to provide efficient software for visualizing and analyzing data obtained from reservoir simulators and geological simulations. The animation and interactive capabilities of the software quickly provide a deep understanding of the fluid-flow behavior and an accurate idea of the internal architecture of a reservoir.

  4. SU-C-BRE-01: 3D Conformal Micro Irradiation Results of Four Treatment Sites for Preclinical Small Animal and Clinical Treatment Plans

    SciTech Connect

    Price, S; Yaddanapudi, S; Rangaraj, D; Izaguirre, E

    2014-06-15

    Purpose: Small animal irradiation can provide preclinical insights necessary for clinical advancement. In order to provide clinically relevant data, these small animal irradiations must be designed such that the treatment methods and results are comparable to clinical protocols, regardless of variations in treatment size and modality. Methods: Small animal treatments for four treatment sites (brain, liver, lung and spine) were investigated, accounting for change in treatment energy and target size. Up to five orthovoltage (300kVp) beams were used in the preclinical treatments, using circular, square, and conformal tungsten apertures, based on the treatment site. Treatments were delivered using the image guided micro irradiator (microIGRT). The plans were delivered to a mouse sized phantom and dose measurements in axial and coronal planes were performed using radiochromic film. The results of the clinical and preclinical protocols were characterized in terms of conformality number, CTV coverage, dose nonuniformity ratio, and organ at risk sparing. Results: Preclinical small animal treatment conformality was within 1–16% of clinical results for all treatment sites. The volume of the CTV receiving 100% of the prescription dose was typically within 10% of clinical values. The dose non-uniformity was consistently higher for preclinical treatments compared to clinical treatments, indicating hot spots in the target. The ratios of the mean dose in the target to the mean dose in an organ at risk were comparable if not better for preclinical versus clinical treatments. Finally, QUANTEC dose constraints were applied and the recommended morbidity limits were satisfied in each small animal treatment site. Conclusion: We have shown that for four treatment sites, preclinical 3D conformal small animal treatments can be clinically comparable if clinical protocols are followed. Using clinical protocols as the standard, preclinical irradiation methods can be altered and iteratively

  5. A Smart Cage With Uniform Wireless Power Distribution in 3D for Enabling Long-Term Experiments With Freely Moving Animals.

    PubMed

    Mirbozorgi, S Abdollah; Bahrami, Hadi; Sawan, Mohamad; Gosselin, Benoit

    2016-04-01

    This paper presents a novel experimental chamber with uniform wireless power distribution in 3D for enabling long-term biomedical experiments with small freely moving animal subjects. The implemented power transmission chamber prototype is based on arrays of parallel resonators and multicoil inductive links, to form a novel and highly efficient wireless power transmission system. The power transmitter unit includes several identical resonators enclosed in a scalable array of overlapping square coils which are connected in parallel to provide uniform power distribution along x and y. Moreover, the proposed chamber uses two arrays of primary resonators, facing each other, and connected in parallel to achieve uniform power distribution along the z axis. Each surface includes 9 overlapped coils connected in parallel and implemented into two layers of FR4 printed circuit board. The chamber features a natural power localization mechanism, which simplifies its implementation and ease its operation by avoiding the need for active detection and control mechanisms. A single power surface based on the proposed approach can provide a power transfer efficiency (PTE) of 69% and a power delivered to the load (PDL) of 120 mW, for a separation distance of 4 cm, whereas the complete chamber prototype provides a uniform PTE of 59% and a PDL of 100 mW in 3D, everywhere inside the chamber with a size of 27×27×16 cm(3). PMID:26011866

  6. A Smart Cage With Uniform Wireless Power Distribution in 3D for Enabling Long-Term Experiments With Freely Moving Animals.

    PubMed

    Mirbozorgi, S Abdollah; Bahrami, Hadi; Sawan, Mohamad; Gosselin, Benoit

    2016-04-01

    This paper presents a novel experimental chamber with uniform wireless power distribution in 3D for enabling long-term biomedical experiments with small freely moving animal subjects. The implemented power transmission chamber prototype is based on arrays of parallel resonators and multicoil inductive links, to form a novel and highly efficient wireless power transmission system. The power transmitter unit includes several identical resonators enclosed in a scalable array of overlapping square coils which are connected in parallel to provide uniform power distribution along x and y. Moreover, the proposed chamber uses two arrays of primary resonators, facing each other, and connected in parallel to achieve uniform power distribution along the z axis. Each surface includes 9 overlapped coils connected in parallel and implemented into two layers of FR4 printed circuit board. The chamber features a natural power localization mechanism, which simplifies its implementation and ease its operation by avoiding the need for active detection and control mechanisms. A single power surface based on the proposed approach can provide a power transfer efficiency (PTE) of 69% and a power delivered to the load (PDL) of 120 mW, for a separation distance of 4 cm, whereas the complete chamber prototype provides a uniform PTE of 59% and a PDL of 100 mW in 3D, everywhere inside the chamber with a size of 27×27×16 cm(3).

  7. The New 3D Printed Left Atrial Appendage Closure with a Novel Holdfast Device: A Pre-Clinical Feasibility Animal Study

    PubMed Central

    Brzeziński, M.; Bury, K.; Dąbrowski, L.; Holak, P.; Sejda, A.; Pawlak, M.; Jagielak, D.; Adamiak, Z.; Rogowski, J.

    2016-01-01

    surrounding tissues. No pericarditis or macroscopic signs of inflammation at the site of the device were found. All pigs were in good condition with normal weight gain and no other clinical symptoms. Conclusion This novel 3D printed left atrial appendage closure technique with a novel holdfast device was proven to be safe and feasible in all pigs. A benign healing process without inflammation and damage to the surrounding structures or evidence of new thrombi formation was observed. Moreover, the uncomplicated survival and full LAA exclusion in all animals demonstrate the efficacy of this novel and relatively cheap device. Further clinical evaluation and implementation studies should be performed to introduce this new technology into clinical practice. PMID:27219618

  8. Reconstructing 3D Face Model with Associated Expression Deformation from a Single Face Image via Constructing a Low-Dimensional Expression Deformation Manifold.

    PubMed

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-10-01

    Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm. PMID:21576739
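
    To give a concrete feel for the "Gaussian mixture model in a low-dimensional expression manifold" idea described above, a minimal sketch follows: expression deformation vectors (neutral-to-expression vertex offsets) are embedded with PCA as a stand-in for the paper's nonlinear manifold construction, and a GMM is fitted to the embedded coefficients. The data shapes, dimensionality, and component count are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)

        # Hypothetical training data: 200 faces x (1000 vertices * 3) deformation offsets
        deformations = rng.normal(size=(200, 3000))

        # Low-dimensional embedding (PCA here; the paper builds a nonlinear manifold)
        embed = PCA(n_components=10).fit(deformations)
        coeffs = embed.transform(deformations)

        # Gaussian mixture model over the embedded expression deformations
        gmm = GaussianMixture(n_components=4, covariance_type='full', random_state=0).fit(coeffs)

        # Score how plausible a new deformation is under the learned expression prior
        new_face = rng.normal(size=(1, 3000))
        print("log-likelihood of new deformation:", gmm.score(embed.transform(new_face)))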

  9. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  10. Realistic facial expression of virtual human based on color, sweat, and tears effects.

    PubMed

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances such as sweating when scared, tears when crying, and blushing when angry or happy is the key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions of a 3D avatar. The physical properties of muscles, emotions, and fluids, together with sweating and tear initiators, are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with the facial animation technique to produce complex facial expressions. The effects of oxygenation on facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that virtual human facial expression is enhanced by mimicking actual sweating and tear simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries as well as computer graphics. PMID:25136663

  11. Realistic facial expression of virtual human based on color, sweat, and tears effects.

    PubMed

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances such as sweating when scared, tears when crying, and blushing when angry or happy is the key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions of a 3D avatar. The physical properties of muscles, emotions, and fluids, together with sweating and tear initiators, are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with the facial animation technique to produce complex facial expressions. The effects of oxygenation on facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that virtual human facial expression is enhanced by mimicking actual sweating and tear simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries as well as computer graphics.

  12. Realistic Facial Expression of Virtual Human Based on Color, Sweat, and Tears Effects

    PubMed Central

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances such as sweating when scared, tears when crying, and blushing when angry or happy is the key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions of a 3D avatar. The physical properties of muscles, emotions, and fluids, together with sweating and tear initiators, are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with the facial animation technique to produce complex facial expressions. The effects of oxygenation on facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that virtual human facial expression is enhanced by mimicking actual sweating and tear simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries as well as computer graphics. PMID:25136663
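
    A greatly simplified particle-system step in the spirit of the sweat/tear simulation might look like the following (the paper couples its particles with SPH forces, which are omitted here; the emitter position, time step, and gravity value are illustrative assumptions).

        import numpy as np

        def step_droplets(pos, vel, dt=0.016, gravity=(0.0, -9.8, 0.0)):
            """Advance sweat/tear droplet particles one frame under gravity (no SPH forces)."""
            vel = vel + np.asarray(gravity) * dt
            pos = pos + vel * dt
            return pos, vel

        # Emit a few droplets near a hypothetical tear-duct position on the face mesh
        rng = np.random.default_rng(1)
        positions = np.array([[0.03, 1.62, 0.09]]) + rng.normal(scale=1e-3, size=(20, 3))
        velocities = np.zeros((20, 3))

        for _ in range(60):                       # simulate one second at ~60 fps
            positions, velocities = step_droplets(positions, velocities)
        print(positions.mean(axis=0))             # droplets have drifted downward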

  13. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  14. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
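
    A compact sketch of the PCA feature projection and similarity-matrix step described above is given below; the FLDA stage and the ICP-based normalization are omitted, and the cosine similarity measure and data shapes are assumptions rather than details from the distribution itself.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical gallery: 50 normalized faces, each flattened to XYZ coordinates of 500 points
        gallery = rng.normal(size=(50, 1500))
        probes = rng.normal(size=(10, 1500))

        # PCA: project onto the leading eigenvectors of the gallery covariance
        mean = gallery.mean(axis=0)
        centered = gallery - mean
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:20]                                   # 20 principal components

        g_feat = centered @ basis.T
        p_feat = (probes - mean) @ basis.T

        # Cosine-similarity matrix (probes x gallery) for verification-rate analysis
        g_norm = g_feat / np.linalg.norm(g_feat, axis=1, keepdims=True)
        p_norm = p_feat / np.linalg.norm(p_feat, axis=1, keepdims=True)
        similarity = p_norm @ g_norm.T
        print(similarity.shape)                           # (10, 50)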

  15. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.
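
    To make the texture-advection idea concrete, here is a minimal semi-Lagrangian sketch for a 2D slice; the same backward-tracing step generalizes to 3D and 4D textures. The velocity field, step size, and texture size are invented for illustration and are not taken from the paper.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def advect_texture(texture, vx, vy, dt=1.0):
            """One semi-Lagrangian advection step: sample the texture at back-traced positions."""
            h, w = texture.shape
            ys, xs = np.mgrid[0:h, 0:w].astype(float)
            # Trace each pixel backwards along the flow and resample
            src_y = ys - dt * vy
            src_x = xs - dt * vx
            return map_coordinates(texture, [src_y, src_x], order=1, mode='wrap')

        # Noise texture advected by a simple rotational flow field
        rng = np.random.default_rng(0)
        tex = rng.random((128, 128))
        ys, xs = np.mgrid[0:128, 0:128] - 64.0
        vx, vy = -ys * 0.02, xs * 0.02
        for _ in range(10):
            tex = advect_texture(tex, vx, vy)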

  16. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  17. Developing and Evaluating of Non-Realistic Three-Dimensional (3d-Nr) and Two-Dimensional (2d) Talking-Head Animation Courseware

    ERIC Educational Resources Information Center

    Hamdan, Mohd Najib; Ali, Ahmad Zamzuri Mohamad

    2015-01-01

    The talking-head animation is an instructional animation capable of improving communication skills by enhancing pronunciation skills, whereby a word is pronounced correctly and accurately. This has been shown by several studies, which indicate that learning with interactive animation is much more advantageous than conventional…

  18. Orthogonal-blendshape-based editing system for facial motion capture data.

    PubMed

    Li, Qing; Deng, Zhigang

    2008-01-01

    The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed to a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible controls. PMID:19004687
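
    The core equivalence the authors exploit (editing PCA coefficients is equivalent to editing the motion-capture frames) can be sketched for a single facial region as below; the number of retained eigenvectors and the particular edit are arbitrary examples, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Motion capture for one facial region: 300 frames x (60 markers * 3)
        frames = rng.normal(size=(300, 180))
        mean = frames.mean(axis=0)
        _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
        basis = vt[:8]                                  # truncated PCA space = blendshape basis

        weights = (frames - mean) @ basis.T             # per-frame blendshape weights (PCA coeffs)

        # "Editing" the animation: scale the first blendshape weight on frames 100-150
        edited = weights.copy()
        edited[100:150, 0] *= 1.5

        recon = mean + weights @ basis                  # reconstruction without edits (truncation only)
        edited_frames = mean + edited @ basis           # reconstruction with the edit applied
        print(np.abs(edited_frames - recon)[:100].max())   # ~0: frames before the edit are untouched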

  19. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  20. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  1. Stolen twin: fascination and curiosity/twin research reports: evolution of sleep length; dental treatment of craniopagus twins; cryopreserved double embryo transfer; gender options in multiple pregnancy/current events: appendectomy in one twin; autistic twin marathon runners; 3D facial recognition; twin biathletes.

    PubMed

    Segal, Nancy L

    2014-02-01

    The story of her allegedly stolen twin brother in Armenia is recounted by a 'singleton twin' living in the United States. The behavioral consequences and societal implications of this loss are considered. This case is followed by twin research reports on the evolution of sleep length, dental treatment of craniopagus conjoined twins, cryopreserved double embryo transfer (DET), and gender options in multiple pregnancy. Current events include an appendectomy in one identical twin, the accomplishments of autistic twin marathon runners, the power of three-dimensional (3D) facial recognition, and the goals of twin biathletes heading to the 2014 Sochi Olympics in Russia. PMID:24418634

  2. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an issue of education of primary importance. Drawing on 25 years of professional experience in France, the United States, and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading across the scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualization, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject? For whom?

  3. Facial Ringworm (Tinea Faciale)

    MedlinePlus

    Ringworm, Facial (Tinea Faciei): information for adults. A rash with scaling along the edge is typical of tinea faciale. Tinea infections are commonly called ringworm ...

  4. XML3D and Xflow: combining declarative 3D for the Web with generic data flows.

    PubMed

    Klein, Felix; Sons, Kristian; Rubinstein, Dmitri; Slusallek, Philipp

    2013-01-01

    Researchers have combined XML3D, which provides declarative, interactive 3D scene descriptions based on HTML5, with Xflow, a language for declarative, high-performance data processing. The result lets Web developers combine a 3D scene graph with data flows for dynamic meshes, animations, image processing, and postprocessing. PMID:24808080

  5. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  6. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
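
    The "one-line text-string command" idea is easy to picture with a tiny interpreter like the one below; the command names and argument layout are hypothetical illustrations and are not taken from the actual FastScript3D language.

        # Minimal sketch of a FastScript3D-style one-line command interpreter.
        # Command names and argument conventions here are invented for illustration.
        def parse_command(line):
            name, *args = line.split()
            return name, args

        def run(line, scene):
            name, args = parse_command(line)
            if name == "sphere":                         # e.g. "sphere ball 0 1 0 0.5"
                ident, x, y, z, r = args
                scene[ident] = {"type": "sphere", "pos": (float(x), float(y), float(z)),
                                "radius": float(r)}
            elif name == "move":                         # e.g. "move ball 1 0 0"
                dx, dy, dz = map(float, args[1:])
                px, py, pz = scene[args[0]]["pos"]
                scene[args[0]]["pos"] = (px + dx, py + dy, pz + dz)
            else:
                raise ValueError("unknown command: " + name)

        scene = {}
        run("sphere ball 0 1 0 0.5", scene)
        run("move ball 1 0 0", scene)
        print(scene["ball"]["pos"])                      # (1.0, 1.0, 0.0)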

  7. Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders.

    PubMed

    Chen, Chien-Hsu; Lee, I-Jui; Lin, Ling-Yi

    2014-11-01

    Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotions of other people; this ability involves recognizing facial expressions. This study assessed the possibility of enabling three adolescents with ASD to become aware of facial expressions observed in situations in a school setting simulated using augmented reality (AR) technology. The AR system provided three-dimensional (3-D) animations of six basic facial expressions overlaid on participant faces to facilitate practicing emotional judgments and social skills. Based on the multiple baseline design across subjects, the data indicated that AR intervention can improve the appropriate recognition and response to facial emotional expressions seen in the situational task.

  8. A 3D interactive method for estimating body segmental parameters in animals: application to the turning and running performance of Tyrannosaurus rex.

    PubMed

    Hutchinson, John R; Ng-Thow-Hing, Victor; Anderson, Frank C

    2007-06-21

    We developed a method based on interactive B-spline solids for estimating and visualizing biomechanically important parameters for animal body segments. Although the method is most useful for assessing the importance of unknowns in extinct animals, such as body contours, muscle bulk, or inertial parameters, it is also useful for non-invasive measurement of segmental dimensions in extant animals. Points measured directly from bodies or skeletons are digitized and visualized on a computer, and then a B-spline solid is fitted to enclose these points, allowing quantification of segment dimensions. The method is computationally fast enough so that software implementations can interactively deform the shape of body segments (by warping the solid) or adjust the shape quantitatively (e.g., expanding the solid boundary by some percentage or a specific distance beyond measured skeletal coordinates). As the shape changes, the resulting changes in segment mass, center of mass (CM), and moments of inertia can be recomputed immediately. Volumes of reduced or increased density can be embedded to represent lungs, bones, or other structures within the body. The method was validated by reconstructing an ostrich body from a fleshed and defleshed carcass and comparing the estimated dimensions to empirically measured values from the original carcass. We then used the method to calculate the segmental masses, centers of mass, and moments of inertia for an adult Tyrannosaurus rex, with measurements taken directly from a complete skeleton. We compare these results to other estimates, using the model to compute the sensitivities of unknown parameter values based upon 30 different combinations of trunk, lung and air sac, and hindlimb dimensions. The conclusion that T. rex was not an exceptionally fast runner remains strongly supported by our models-the main area of ambiguity for estimating running ability seems to be estimating fascicle lengths, not body dimensions. Additionally, the
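
    The segmental-parameter computation (mass, centre of mass, moments of inertia) can be illustrated with a voxel approximation of a body segment standing in for the paper's B-spline solids; the density, segment shape, and voxel size below are placeholders.

        import numpy as np

        def segment_parameters(occupancy, density, voxel_size):
            """Mass, centre of mass, and inertia tensor of a voxelized body segment.

            occupancy  - boolean 3D array marking voxels inside the segment
            density    - kg/m^3 (uniform here; lungs or air sacs could use lower values)
            voxel_size - edge length of a voxel in metres
            """
            dv = voxel_size ** 3
            coords = np.argwhere(occupancy) * voxel_size        # voxel centre positions (N, 3)
            masses = np.full(len(coords), density * dv)
            mass = masses.sum()
            com = (coords * masses[:, None]).sum(axis=0) / mass
            r = coords - com
            # Inertia tensor about the centre of mass
            inertia = np.eye(3) * (masses * (r ** 2).sum(axis=1)).sum()
            inertia -= np.einsum('i,ij,ik->jk', masses, r, r)
            return mass, com, inertia

        # Placeholder segment: a 0.6 m x 0.3 m x 0.3 m box at 1000 kg/m^3
        occ = np.ones((60, 30, 30), dtype=bool)
        print(segment_parameters(occ, density=1000.0, voxel_size=0.01)[0])   # ~54 kg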

  9. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple inputs-driven realistic facial animation system based on 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, thus can interact with humans through diverse interfaces. The combination of parameterized model and muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color value of input image and Gabor wavelet coefficient of illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech synchronized viseme synthesis without sacrificing any performance. The objective and subjective experiments show that the system is suitable for human-machine interaction. PMID:25122851

  10. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple inputs-driven realistic facial animation system based on 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, thus can interact with humans through diverse interfaces. The combination of parameterized model and muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color value of input image and Gabor wavelet coefficient of illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech synchronized viseme synthesis without sacrificing any performance. The objective and subjective experiments show that the system is suitable for human-machine interaction.

  11. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  12. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, which has been widely used in computer vision, pattern recognition and computer assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for 3D maxillofacial models registration including facial surface model and skull model. Our proposed registration algorithm can achieve a good alignment result between partial and whole maxillofacial models despite ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT features extraction and FPFH descriptors construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
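
    Step (3) of the pipeline, the ICP refinement after coarse alignment, can be sketched with a plain NumPy implementation (nearest-neighbour correspondences via a KD-tree and an SVD/Kabsch rigid update); the feature-matching stages (3D-SIFT, FPFH, SAC-IA) are not reproduced here, and the toy data are invented.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_refine(source, target, iterations=20):
            """Rigid ICP: iteratively align `source` points (N,3) to `target` points (M,3)."""
            src = source.copy()
            tree = cKDTree(target)
            R_total, t_total = np.eye(3), np.zeros(3)
            for _ in range(iterations):
                _, idx = tree.query(src)                       # nearest-neighbour correspondences
                matched = target[idx]
                # Best rigid transform (Kabsch/SVD) between current source and matches
                mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
                H = (src - mu_s).T @ (matched - mu_t)
                U, _, Vt = np.linalg.svd(H)
                D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
                R = Vt.T @ D @ U.T
                t = mu_t - R @ mu_s
                src = src @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total

        # Toy check: recover a small known rotation/translation of a random point cloud
        rng = np.random.default_rng(0)
        pts = rng.normal(size=(500, 3))
        angle = 0.1
        R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                           [np.sin(angle),  np.cos(angle), 0.0],
                           [0.0, 0.0, 1.0]])
        moved = pts @ R_true.T + np.array([0.05, -0.02, 0.03])
        R_est, t_est = icp_refine(pts, moved)
        err = np.degrees(np.arccos(np.clip((np.trace(R_est @ R_true.T) - 1) / 2, -1, 1)))
        print(err)                                             # residual rotation error in degrees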

  13. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry include greater tissue equivalence (radiologically) and the lack of requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  14. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics are presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  15. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  16. Facial Expression Biometrics Using Statistical Shape Models

    NASA Astrophysics Data System (ADS)

    Quan, Wei; Matuszewski, Bogdan J.; Shark, Lik-Kwan; Ait-Boudaoud, Djamel

    2009-12-01

    This paper describes a novel method for representing different facial expressions based on the shape space vector (SSV) of the statistical shape model (SSM) built from 3D facial data. The method relies only on the 3D shape, with texture information not being used in any part of the algorithm, that makes it inherently invariant to changes in the background, illumination, and to some extent viewing angle variations. To evaluate the proposed method, two comprehensive 3D facial data sets have been used for the testing. The experimental results show that the SSV not only controls the shape variations but also captures the expressive characteristic of the faces and can be used as a significant feature for facial expression recognition. Finally the paper suggests improvements of the SSV discriminatory characteristics by using 3D facial sequences rather than 3D stills.

  17. 3D Face modeling using the multi-deformable method.

    PubMed

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning that is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  18. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning that is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  19. Venus in 3D

    NASA Astrophysics Data System (ADS)

    Plaut, J. J.

    1993-08-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images make it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  20. SU-C-213-07: Fabrication and Testing of a 3D-Printed Small Animal Rectal Cooling Device to Evaluate Local Hypothermia as a Radioprotector During Prostate SBRT

    SciTech Connect

    Hrycushko, B; Chopra, R; Futch, C; Bing, C; Wodzak, M; Stojadinovic, S; Jiang, S; Medin, P

    2015-06-15

    Purpose: The protective effects of induced or even accidental hypothermia on the human body are widespread with several medical uses currently under active research. In vitro experiments using human cell lines have shown hypothermia provides a radioprotective effect that becomes more pronounced at large, single-fraction doses common to SBRT treatments. Relevant to prostate SBRT, this work details the fabrication and testing of a 3D-printed cooling device to facilitate the investigation of the radioprotective effect of local hypothermia on the rat rectum. Methods: A 3cm long, two-channel rectal cooling device was designed in SOLIDWORKS CAD for 3D printing. The water intake nozzle is connected to a 1mm diameter brass pipe from which water flows and circulates back around to the exit nozzle. Both nozzles are connected by plastic tubing to a water chiller pump. Following leak-proof testing, fiber optic temperature probes were used to evaluate the temperature over time when placed adjacent to the cooling device within a rat rectum. MRI thermometry characterized the relative temperature distribution in concentric ROIs surrounding the probe. CBCT images from a small-animal irradiator were evaluated for imaging artifacts which could affect Monte Carlo dose calculations during treatment planning. Results: The rectal temperature adjacent to the cooling device decreased from body temperature (37°C) to 15°C in 10–20 minutes from device insertion. Rectal temperature was maintained at 15±3°C during active cooling. MRI thermometry tests revealed a steep temperature gradient with increasing distance from the cooling device, with the desired temperature range maintained within the surrounding few millimeters. Conclusion: A 3D printed rectal cooling device was fabricated for the purpose of inducing local hypothermia in rat rectums. Rectal cooling capabilities were characterized in-vivo to facilitate an investigation of the radioprotective effect of hypothermia for late rectal

  1. Effects on animal wellbeing and sample quality of 2 techniques for collecting blood from the facial vein of mice.

    PubMed

    Francisco, Cassie C; Howarth, Gordon S; Whittaker, Alexandra L

    2015-01-01

    When sampling blood from mice, several different techniques can be used, with retroorbital sinus sampling traditionally being the most common. Given the severe tissue trauma caused by retroorbital sampling, alternative methods such as the facial vein route have been developed. The aim of this study was to evaluate 2 techniques for facial vein bleeding in conscious mice to ascertain whether differences in clinical outcomes, practicability of sample collection, and hematologic parameters were apparent. Blood samples were obtained from the facial vein of 40 BALB/c mice by using either a 21-gauge needle or a lancet. Subsequently, the protocol was repeated with isoflurane-anesthetized mice sampled by using the lancet method (n = 20). Behavior immediately after sampling was observed, and sample quantity, sampling time, and time until bleeding ceased were measured. Clinical pathology data and hematoma diameter at necropsy were analyzed also. The mean sample quantity collected (approximately 0.2 mL) was comparable among methods, but sampling was much more rapid when mice were anesthetized by using isoflurane. The only other noteworthy finding was a significantly reduced number of platelets in samples from anesthetized mice. Adverse, ongoing clinical signs were rare regardless of the method used. The results revealed no significant differences in welfare implications or blood sample quality among the methods or between conscious and anesthetized mice. Therefore, any of the methods we evaluated for obtaining blood samples from the facial vein are appropriate for use in research studies.

  2. Effects on Animal Wellbeing and Sample Quality of 2 Techniques for Collecting Blood from the Facial Vein of Mice

    PubMed Central

    Francisco, Cassie C; Howarth, Gordon S; Whittaker, Alexandra L

    2015-01-01

    When sampling blood from mice, several different techniques can be used, with retroorbital sinus sampling traditionally being the most common. Given the severe tissue trauma caused by retroorbital sampling, alternative methods such as the facial vein route have been developed. The aim of this study was to evaluate 2 techniques for facial vein bleeding in conscious mice to ascertain whether differences in clinical outcomes, practicability of sample collection, and hematologic parameters were apparent. Blood samples were obtained from the facial vein of 40 BALB/c mice by using either a 21-gauge needle or a lancet. Subsequently, the protocol was repeated with isoflurane-anesthetized mice sampled by using the lancet method (n = 20). Behavior immediately after sampling was observed, and sample quantity, sampling time, and time until bleeding ceased were measured. Clinical pathology data and hematoma diameter at necropsy were analyzed also. The mean sample quantity collected (approximately 0.2 mL) was comparable among methods, but sampling was much more rapid when mice were anesthetized by using isoflurane. The only other noteworthy finding was a significantly reduced number of platelets in samples from anesthetized mice. Adverse, ongoing clinical signs were rare regardless of the method used. The results revealed no significant differences in welfare implications or blood sample quality among the methods or between conscious and anesthetized mice. Therefore, any of the methods we evaluated for obtaining blood samples from the facial vein are appropriate for use in research studies. PMID:25651095

  3. 3D rapid mapping

    NASA Astrophysics Data System (ADS)

    Isaksson, Folke; Borg, Johan; Haglund, Leif

    2008-04-01

    In this paper the performance of passive range measurement imaging using the stereo technique in real-time applications is described. Stereo vision uses multiple images to get depth resolution in a similar way as Synthetic Aperture Radar (SAR) uses multiple measurements to obtain better spatial resolution. This technique has been used in photogrammetry for a long time, but it will be shown that it is now possible to do the calculations, with carefully designed image processing algorithms, on e.g. a PC in real time. In order to get high-resolution, quantitative data from the stereo estimation, a mathematical camera model is used. The parameters of the camera model are determined in a calibration rig, or, in the case of a moving camera, the scene itself can be used to calibrate most of the parameters. After calibration an ordinary TV camera has an angular resolution like a theodolite, but at a much lower price. The paper presents results from high-resolution 3D imagery from air to ground. The 3D results from stereo calculation of image pairs are stitched together into a large database to form a 3D model of the area covered.
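
    For the stereo-ranging principle underlying the paper, the depth of a scene point in a rectified stereo pair reduces to the familiar relation Z = f·B/d; a minimal worked example follows, where the focal length, baseline, and disparity values are invented rather than taken from the paper.

        def depth_from_disparity(focal_px, baseline_m, disparity_px):
            """Depth of a point for a rectified stereo pair: Z = f * B / d."""
            return focal_px * baseline_m / disparity_px

        def depth_resolution(focal_px, baseline_m, depth_m, disparity_error_px=1.0):
            """Approximate depth uncertainty for a given disparity error: dZ ~ Z^2 / (f * B) * dd."""
            return depth_m ** 2 / (focal_px * baseline_m) * disparity_error_px

        # Invented air-to-ground example: 4000 px focal length, 50 m baseline, 100 px disparity
        z = depth_from_disparity(4000.0, 50.0, 100.0)      # 2000 m to the ground point
        print(z, depth_resolution(4000.0, 50.0, z))        # ~20 m depth change per pixel of disparity error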

  4. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

    In recent years, the emergence of 3D shape in face recognition is due to its robustness to pose and illumination changes. These attractive benefits alone, however, do not meet all the challenges of achieving a satisfactory recognition rate. Other challenges, such as facial expressions and the computing time of matching algorithms, remain to be explored. In this context, we propose a 3D face recognition approach using 3D wavelet networks. Our approach contains two stages: a learning stage and a recognition stage. For the training we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x,y,z), we proceed to voxelization to get a 3D volume, which is decomposed by the 3D fast wavelet transform and then modeled with a wavelet network; the associated weights are taken as the feature vector representing each training face. For the recognition stage, an unknown face is projected onto all the training wavelet networks to obtain a new feature vector after each projection. A similarity score is computed between the stored and the obtained feature vectors. To show the efficiency of our approach, experiments were performed on the full FRGC v.2 benchmark.
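
    The training-stage decomposition (voxelize the face, apply a 3D fast wavelet transform, keep coefficients as features) can be sketched with PyWavelets; the grid size, wavelet choice, and the simple coefficient thresholding used here as a stand-in for the paper's wavelet-network fitting are all assumptions.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)

        # Hypothetical voxelized face: a 64^3 occupancy grid built from (x, y, z) points
        volume = np.zeros((64, 64, 64))
        points = rng.integers(0, 64, size=(5000, 3))
        volume[points[:, 0], points[:, 1], points[:, 2]] = 1.0

        # Single-level 3D fast wavelet transform (returns a dict of 8 coefficient sub-bands)
        coeffs = pywt.dwtn(volume, 'haar')

        # Crude feature vector: the largest-magnitude approximation coefficients
        # (the paper fits a wavelet network instead; this is only a stand-in)
        approx = coeffs['aaa'].ravel()
        features = np.sort(np.abs(approx))[-128:]
        print(features.shape)                              # (128,)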

  5. An Effective 3D Ear Acquisition System

    PubMed Central

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with the age and facial expressions. In this paper, we present a novel method of 3D ear acquisition system by using triangulation imaging principle, and the experiment results show that this design is efficient and can be used for ear recognition. PMID:26061553

  6. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with the age and facial expressions. In this paper, we present a novel method of 3D ear acquisition system by using triangulation imaging principle, and the experiment results show that this design is efficient and can be used for ear recognition.

  7. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with the age and facial expressions. In this paper, we present a novel method of 3D ear acquisition system by using triangulation imaging principle, and the experiment results show that this design is efficient and can be used for ear recognition. PMID:26061553

  8. Taming supersymmetric defects in 3d-3d correspondence

    NASA Astrophysics Data System (ADS)

    Gang, Dongmin; Kim, Nakwoo; Romo, Mauricio; Yamazaki, Masahito

    2016-07-01

    We study knots in 3d Chern-Simons theory with complex gauge group SL(N,ℂ), in the context of its relation with 3d N=2 theory (the so-called 3d-3d correspondence). The defect has either co-dimension 2 or co-dimension 4 inside the 6d (2,0) theory, which is compactified on a 3-manifold M̂. We identify such defects in various corners of the 3d-3d correspondence, namely in 3d SL(N,ℂ) CS theory, in 3d N=2 theory, in 5d N=2 super Yang-Mills theory, and in the M-theory holographic dual. We can make quantitative checks of the 3d-3d correspondence by computing partition functions at each of these theories. This Letter is a companion to a longer paper [1], which contains more details and more results.

  9. The axillary approach to raising the latissimus dorsi free flap for facial re-animation: a descriptive surgical technique.

    PubMed

    Leckenby, Jonathan; Butler, Daniel; Grobbelaar, Adriaan

    2015-01-01

    The latissimus dorsi flap is popular due to the versatile nature of its applications. When used as a pedicled flap it provides a robust solution when soft tissue coverage is required following breast, thoracic and head and neck surgery. Its utilization as a free flap is extensive due to the muscle's size, constant anatomy, large caliber of the pedicle and the fact it can be used for functional muscle transfers. In facial palsy it provides the surgeon with a long neurovascular pedicle that is invaluable in situations where commonly used facial vessels are not available, in congenital cases or where previous free functional muscle transfers have been attempted, or patients where a one-stage procedure is indicated and a long nerve is required to reach the contra-lateral side. Although some facial palsy surgeons use the trans-axillary approach, an operative guide of raising the flap by this method has not been provided. A clear guide of raising the flap with the patient in the supine position is described in detail and offers the benefits of reducing the risk of potential brachial plexus injury and allows two surgical teams to work synchronously to reduce operative time.

  10. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
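
    The Convolvotron's basic operation (filtering each source with head-related impulse responses per ear) can be approximated offline in a few lines; the impulse responses below are synthetic placeholders rather than measured HRIRs, and the hardware's time-varying, motion-compensated filtering is not modeled.

        import numpy as np
        from scipy.signal import fftconvolve

        fs = 44100
        t = np.arange(fs) / fs
        source = np.sin(2 * np.pi * 440 * t)               # 1 s, 440 Hz test tone

        # Synthetic stand-ins for measured head-related impulse responses (HRIRs):
        # the right ear gets a slight delay and attenuation relative to the left.
        hrir_left = np.zeros(256);  hrir_left[0] = 1.0
        hrir_right = np.zeros(256); hrir_right[30] = 0.6    # ~0.7 ms interaural delay

        left = fftconvolve(source, hrir_left)[:len(source)]
        right = fftconvolve(source, hrir_right)[:len(source)]
        binaural = np.stack([left, right], axis=1)          # (samples, 2) for headphone playback
        print(binaural.shape)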

  11. Fabricating 3D figurines with personalized faces.

    PubMed

    Tena, J Rafael; Mahler, Moshe; Beeler, Thabo; Grosse, Max; Hengchin Yeh; Matthews, Iain

    2013-01-01

    We present a semi-automated system for fabricating figurines with faces that are personalised to the individual likeness of the customer. The efficacy of the system has been demonstrated by commercial deployments at Walt Disney World Resort and Star Wars Celebration VI in Orlando Florida. Although the system is semi automated, human intervention is limited to a few simple tasks to maintain the high throughput and consistent quality required for commercial application. In contrast to existing systems that fabricate custom heads that are assembled to pre-fabricated plastic bodies, our system seamlessly integrates 3D facial data with a predefined figurine body into a unique and continuous object that is fabricated as a single piece. The combination of state-of-the-art 3D capture, modelling, and printing that are the core of our system provide the flexibility to fabricate figurines whose complexity is only limited by the creativity of the designer.

  12. Fabricating 3D figurines with personalized faces.

    PubMed

    Tena, J Rafael; Mahler, Moshe; Beeler, Thabo; Grosse, Max; Hengchin Yeh; Matthews, Iain

    2013-01-01

    We present a semi-automated system for fabricating figurines with faces that are personalised to the individual likeness of the customer. The efficacy of the system has been demonstrated by commercial deployments at Walt Disney World Resort and Star Wars Celebration VI in Orlando Florida. Although the system is semi automated, human intervention is limited to a few simple tasks to maintain the high throughput and consistent quality required for commercial application. In contrast to existing systems that fabricate custom heads that are assembled to pre-fabricated plastic bodies, our system seamlessly integrates 3D facial data with a predefined figurine body into a unique and continuous object that is fabricated as a single piece. The combination of state-of-the-art 3D capture, modelling, and printing that are the core of our system provide the flexibility to fabricate figurines whose complexity is only limited by the creativity of the designer. PMID:24808129

  13. Enhanced External Counterpulsation Treatment May Intervene The Advanced Atherosclerotic Plaque Progression by Inducing The Variations of Mechanical Factors: A 3D FSI Study Based on in vivo Animal Experiment.

    PubMed

    Du, Jianhang; Wang, Liang

    2015-12-01

    Growing evidence suggests that long-term enhanced external counterpulsation (EECP) treatment can inhibit the initiation of atherosclerotic lesions by improving the hemodynamic environment in the aorta. However, whether this kind of procedure will affect the progression of advanced atherosclerotic plaque remains elusive and currently causes great concern in its clinical application. In the current paper, a pilot study combining an animal experiment and numerical simulation was conducted to investigate the acute mechanical stress variations during EECP intervention, and then to assess the possible chronic effects. An experimentally induced hypercholesterolemic porcine model was developed and basic hemodynamic measurements were performed in vivo before and during EECP treatment. Meanwhile, a 3D fluid-structure interaction (FSI) model of a blood vessel with a symmetric local stenosis was developed for the numerical calculation of some important mechanical factors. The results show that EECP increased the plaque wall stress (PWS) by 12.21%, the time-averaged wall shear stress (AWSS) by 57.72%, and the non-dimensional wall shear stress gradient (WSSGnd) by 43.67% at the throat of the stenosis. We suggest that long-term EECP treatment may affect advanced plaque progression by inducing significant variations in some important mechanical factors, but establishing its precise effects will require further research combined with clinical follow-up observation. PMID:27263260

  14. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  15. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  16. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This area of terrain near the Sagan Memorial Station was taken on Sol 3 by the Imager for Mars Pathfinder (IMP). 3D glasses are necessary to identify surface detail.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  17. Robust 3D face landmark localization based on local coordinate coding.

    PubMed

    Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Maybank, Stephen J

    2014-12-01

    In the 3D facial animation and synthesis community, input faces are usually required to be labeled by a set of landmarks for parameterization. Because of variations in pose, expression, and resolution, automatic 3D face landmark localization remains a challenge. In this paper, a novel landmark localization approach is presented. The approach is based on local coordinate coding (LCC) and consists of two stages. In the first stage, we perform nose detection, relying on the fact that the nose shape is usually invariant under variations in pose, expression, and resolution. Then, we use the iterative closest point (ICP) algorithm to find a 3D affine transformation that aligns the input face to a reference face. In the second stage, we perform resampling to build correspondences between the input 3D face and the training faces. Then, an LCC-based localization algorithm is proposed to obtain the positions of the landmarks in the input face. Experimental results show that the proposed method is comparable to state-of-the-art methods in terms of robustness, flexibility, and accuracy. PMID:25296404
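
    The rigid alignment step described above can be illustrated with a short sketch. This is not the authors' implementation, only a generic point-to-point ICP with an SVD (Kabsch) update; the point arrays and iteration count are assumptions.

    # Minimal rigid ICP sketch: align an input face point cloud to a reference face.
    import numpy as np
    from scipy.spatial import cKDTree

    def icp_rigid(source, reference, n_iters=30):
        """Align `source` (N,3) to `reference` (M,3); returns the transformed source."""
        tree = cKDTree(reference)
        src = source.copy()
        for _ in range(n_iters):
            # 1. Find the closest reference point for every source point.
            _, idx = tree.query(src)
            matched = reference[idx]
            # 2. Solve for the best rigid transform (Kabsch algorithm).
            src_mean, ref_mean = src.mean(axis=0), matched.mean(axis=0)
            H = (src - src_mean).T @ (matched - ref_mean)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = ref_mean - R @ src_mean
            # 3. Apply the update and iterate.
            src = src @ R.T + t
        return src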

  18. Ostrich eggshell as a bone substitute: a preliminary report of its biological behaviour in animals--a possibility in facial reconstructive surgery.

    PubMed

    Dupoirieux, L

    1999-12-01

    The aim of this study was to assess the biological behaviour of an implant of ostrich eggshell in various animal models of facial bone reconstruction. The implant was first bioassayed in a rat muscle pouch (n=10), and then tested as an interpositional graft in rat (n=10) and rabbit (n=5) cranial defects. It was finally used as an onlay graft on rabbit mandibles (n=5). Animals were killed after two months in the bioassay, three months in the interpositional model, and six months in the onlay model. The specimens were studied by contact radiography and standard histological techniques. All animals showed normal wound-healing. In the bioassay, the implants produced only a minimal inflammatory reaction. In the interpositional model, the implants maintained a good contour, but there was no sign of graft-remodelling. In the onlay model, the grafts were stable and partly osteointegrated. The onlay graft model gave the most promising results. Because ostrich eggshell is inexpensive and has good mechanical properties, it deserves further study. Long-term studies will clarify its possible role in maxillofacial surgery. PMID:10687909

  19. Emerging Applications of Bedside 3D Printing in Plastic Surgery.

    PubMed

    Chae, Michael P; Rozen, Warren M; McMenamin, Paul G; Findlay, Michael W; Spychal, Robert T; Hunter-Smith, David J

    2015-01-01

    Modern imaging techniques are an essential component of preoperative planning in plastic and reconstructive surgery. However, conventional modalities, including three-dimensional (3D) reconstructions, are limited by their representation on 2D workstations. 3D printing, also known as rapid prototyping or additive manufacturing, was once the province of industry to fabricate models from a computer-aided design (CAD) in a layer-by-layer manner. The early adopters in clinical practice have embraced the medical imaging-guided 3D-printed biomodels for their ability to provide tactile feedback and a superior appreciation of the visuospatial relationship between anatomical structures. With increasing accessibility, investigators are able to convert standard imaging data into a CAD file using various 3D reconstruction software packages and ultimately fabricate 3D models using 3D printing techniques, such as stereolithography, multijet modeling, selective laser sintering, binder jet technique, and fused deposition modeling. However, many clinicians have questioned whether the cost-to-benefit ratio justifies its ongoing use. The cost and size of 3D printers have rapidly decreased over the past decade in parallel with the expiration of key 3D printing patents. Significant improvements in clinical imaging and user-friendly 3D software have permitted computer-aided 3D modeling of anatomical structures and implants without outsourcing in many cases. These developments offer immense potential for the application of 3D printing at the bedside for a variety of clinical applications. In this review, existing uses of 3D printing in plastic surgery practice spanning the spectrum from templates for facial transplantation surgery through to the formation of bespoke craniofacial implants to optimize post-operative esthetics are described. Furthermore, we discuss the potential of 3D printing to become an essential office-based tool in plastic surgery to assist in preoperative planning, developing
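
    The imaging-data-to-CAD-file step mentioned above can be sketched in a few lines. This is a hedged illustration, not a clinical workflow: the CT volume file, the Hounsfield-unit threshold, and the output file name are placeholders, and real cases rely on dedicated segmentation software and DICOM input.

    # Hedged sketch: threshold a CT volume with marching cubes and write an STL surface.
    import numpy as np
    from skimage import measure           # marching cubes
    from stl import mesh                  # numpy-stl, writes binary STL

    volume = np.load("ct_volume.npy")     # hypothetical 3D array of CT intensities
    verts, faces, _, _ = measure.marching_cubes(volume, level=300)  # ~bone threshold (assumed)

    surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, f in enumerate(faces):
        surface.vectors[i] = verts[f]     # copy the three triangle vertices
    surface.save("implant_model.stl")     # file ready for slicing and 3D printing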

  20. Emerging Applications of Bedside 3D Printing in Plastic Surgery

    PubMed Central

    Chae, Michael P.; Rozen, Warren M.; McMenamin, Paul G.; Findlay, Michael W.; Spychal, Robert T.; Hunter-Smith, David J.

    2015-01-01

    Modern imaging techniques are an essential component of preoperative planning in plastic and reconstructive surgery. However, conventional modalities, including three-dimensional (3D) reconstructions, are limited by their representation on 2D workstations. 3D printing, also known as rapid prototyping or additive manufacturing, was once the province of industry to fabricate models from a computer-aided design (CAD) in a layer-by-layer manner. The early adopters in clinical practice have embraced the medical imaging-guided 3D-printed biomodels for their ability to provide tactile feedback and a superior appreciation of the visuospatial relationship between anatomical structures. With increasing accessibility, investigators are able to convert standard imaging data into a CAD file using various 3D reconstruction software packages and ultimately fabricate 3D models using 3D printing techniques, such as stereolithography, multijet modeling, selective laser sintering, binder jet technique, and fused deposition modeling. However, many clinicians have questioned whether the cost-to-benefit ratio justifies its ongoing use. The cost and size of 3D printers have rapidly decreased over the past decade in parallel with the expiration of key 3D printing patents. Significant improvements in clinical imaging and user-friendly 3D software have permitted computer-aided 3D modeling of anatomical structures and implants without outsourcing in many cases. These developments offer immense potential for the application of 3D printing at the bedside for a variety of clinical applications. In this review, existing uses of 3D printing in plastic surgery practice spanning the spectrum from templates for facial transplantation surgery through to the formation of bespoke craniofacial implants to optimize post-operative esthetics are described. Furthermore, we discuss the potential of 3D printing to become an essential office-based tool in plastic surgery to assist in preoperative planning, developing

  1. IR Fringe Projection for 3D Face Recognition

    NASA Astrophysics Data System (ADS)

    Spagnolo, Giuseppe Schirripa; Cozzella, Lorenzo; Simonetti, Carla

    2010-04-01

    Facial recognition can be used to identify individuals or to verify identity, e.g., for access control. The process requires that facial data be captured and then compared with stored reference data. In contrast to traditional methods that recognize human faces from 2D images, this article applies a known shape-extraction methodology to the capture of 3D human faces, combined with a non-conventional optical system able to work in an ``invisible'' (infrared) way. The proposed method is experimentally simple and has a low-cost set-up.
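
    A fringe-projection system of this kind typically recovers a wrapped phase map, which encodes surface height, from several phase-shifted patterns. The sketch below shows only the generic N-step phase-shifting calculation, not the authors' pipeline; the captured image stack is an assumed input, and unwrapping and phase-to-height calibration are still required afterwards.

    # Generic N-step phase-shifting sketch: frames is a hypothetical (N, H, W) stack
    # of images captured under fringe patterns shifted by 2*pi/N.
    import numpy as np

    def wrapped_phase(frames):
        n = frames.shape[0]
        deltas = 2 * np.pi * np.arange(n) / n            # phase shifts of the projected patterns
        num = np.tensordot(np.sin(deltas), frames, axes=1)
        den = np.tensordot(np.cos(deltas), frames, axes=1)
        return np.arctan2(-num, den)                     # wrapped phase in (-pi, pi]

    # The wrapped phase still needs unwrapping (e.g. skimage.restoration.unwrap_phase)
    # and a calibration step before it becomes a 3D face surface.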

  2. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  3. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  4. Extra dimensions: 3D in PDF documentation

    SciTech Connect

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  5. Extra dimensions: 3D in PDF documentation

    DOE PAGES

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  6. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  7. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  8. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  9. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  10. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging by real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated from 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria developed by us for the 3D imaging of processed data. Large differences were found among the liver volumes estimated by the three different techniques. 3D ultrasound represents a valuable method to judge the morphological appearance of abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to find out whether definite differentiation between benign and malignant findings is possible.

  11. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky (``spaxels'') onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  12. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  13. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  14. Dissection of C. elegans behavioral genetics in 3-D environments.

    PubMed

    Kwon, Namseop; Hwang, Ara B; You, Young-Jai; V Lee, Seung-Jae; Je, Jung Ho

    2015-01-01

    The nematode Caenorhabditis elegans is a widely used model for genetic dissection of animal behaviors. Despite extensive technical advances in imaging methods, it remains challenging to visualize and quantify C. elegans behaviors in three-dimensional (3-D) natural environments. Here we developed an innovative 3-D imaging method that enables quantification of C. elegans behavior in 3-D environments. Furthermore, for the first time, we characterized 3-D-specific behavioral phenotypes of mutant worms that have defects in head movement or mechanosensation. This approach allowed us to reveal previously unknown functions of genes in behavioral regulation. We expect that our 3-D imaging method will facilitate new investigations into the genetic basis of animal behaviors in natural 3-D environments.

  15. Dissection of C. elegans behavioral genetics in 3-D environments

    PubMed Central

    Kwon, Namseop; Hwang, Ara B.; You, Young-Jai; V. Lee, Seung-Jae; Ho Je, Jung

    2015-01-01

    The nematode Caenorhabditis elegans is a widely used model for genetic dissection of animal behaviors. Despite extensive technical advances in imaging methods, it remains challenging to visualize and quantify C. elegans behaviors in three-dimensional (3-D) natural environments. Here we developed an innovative 3-D imaging method that enables quantification of C. elegans behavior in 3-D environments. Furthermore, for the first time, we characterized 3-D-specific behavioral phenotypes of mutant worms that have defects in head movement or mechanosensation. This approach allowed us to reveal previously unknown functions of genes in behavioral regulation. We expect that our 3-D imaging method will facilitate new investigations into the genetic basis of animal behaviors in natural 3-D environments. PMID:25955271

  16. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  17. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  18. 3D Modeling Techniques for Print and Digital Media

    NASA Astrophysics Data System (ADS)

    Stephens, Megan Ashley

    In developing my thesis, I set out to gain skills in creating 3D models with ZBrush, in 3D scanning, and in 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  19. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D, view manipulation, and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates

  20. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D, view manipulation, and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates

  1. Facial Transplantation.

    PubMed

    Russo, Jack E; Genden, Eric M

    2016-08-01

    Reconstruction of severe facial deformities poses a unique surgical challenge: restoring the aesthetic form and function of the face. Facial transplantation has emerged over the last decade as an option for reconstruction of these defects in carefully selected patients. As the world experience with facial transplantation grows, debate remains regarding whether such a highly technical, resource-intensive procedure is warranted, all to improve quality of life but not necessarily prolong it. This article reviews the current state of facial transplantation with focus on the current controversies and challenges, with particular attention to issues of technique, immunology, and ethics. PMID:27400850

  2. 3D Imaging by Mass Spectrometry: A New Frontier

    PubMed Central

    Seeley, Erin H.; Caprioli, Richard M.

    2012-01-01

    Imaging mass spectrometry can generate three-dimensional volumes showing molecular distributions in an entire organ or animal through registration and stacking of serial tissue sections. Here we review the current state of 3D imaging mass spectrometry, provide insights and perspectives on generating 3D mass spectral data, and discuss the process necessary to build a 3D image volume. PMID:22276611

  3. Modeling 3D faces from samplings via compressive sensing

    NASA Astrophysics Data System (ADS)

    Sun, Qi; Tang, Yanlong; Hu, Ping

    2013-07-01

    3D data is now easy to acquire for family entertainment purposes because of the mass production, low cost, and portability of domestic RGBD sensors, e.g., the Microsoft Kinect. However, the accuracy of facial modeling is affected by the roughness and instability of the raw input data from such sensors. To overcome this problem, we introduce a compressive sensing (CS) method to build a novel 3D super-resolution scheme that reconstructs high-resolution facial models from rough samples captured by the Kinect. Unlike simple frame-fusion super-resolution methods, this approach acquires compressed samples for storage before a high-resolution image is produced. In this scheme, depth frames are first captured, and each of them is measured into compressed samples using sparse coding. Next, the samples are fused to produce an optimal one, and finally a high-resolution image is recovered from the fused sample. This framework can recover the 3D facial model of a given user from compressed samples, which can reduce storage space as well as measurement cost in future devices, e.g., single-pixel depth cameras. Hence, this work can potentially be applied in future applications, such as access control systems using face recognition and smartphones with depth cameras, which need high resolution and short measurement time.
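
    The sparse-recovery step at the heart of such a CS scheme can be sketched with a basic iterative soft-thresholding (ISTA) solver. The measurement matrix, signal sizes, and regularization weight below are assumptions for illustration, not the paper's actual dictionary or sensing setup.

    # Minimal ISTA sketch: recover a sparse code x from compressed measurements y = A x.
    import numpy as np

    def ista(y, A, lam=0.1, n_iters=200):
        """Solve min_x 0.5*||y - A @ x||^2 + lam*||x||_1 by iterative soft-thresholding."""
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
        for _ in range(n_iters):
            grad = A.T @ (A @ x - y)
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 256))                # 64 compressed samples of a 256-dim code (assumed sizes)
    x_true = np.zeros(256)
    x_true[rng.choice(256, 8, replace=False)] = 1.0   # synthetic sparse signal
    y = A @ x_true
    x_hat = ista(y, A)                                # sparse code recovered from the samples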

  4. Facial Scar Revision: Understanding Facial Scar Treatment

    MedlinePlus

    MedlinePlus patient information on facial scar revision: understanding facial scar treatment and the options a facial plastic surgeon has for treating and improving scars, including those near prominent features of the face such as the eyes or lips.

  5. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
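
    The keypoint detection and matching step can be illustrated with OpenCV. The sketch below only estimates the residual vertical disparity from matched features; it is not the proposed algorithm, which additionally rejects bad frames and solves for roll, pitch, yaw, and scale. The stereo image files are hypothetical inputs.

    # Sketch: ORB keypoint matching between a stereo pair and the median vertical disparity.
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical stereo pair
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)
    kp_l, des_l = orb.detectAndCompute(left, None)
    kp_r, des_r = orb.detectAndCompute(right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)[:200]

    # Vertical component of each retained match; a well-calibrated rig keeps this
    # near zero, so a large median suggests the cameras have drifted out of alignment.
    dy = np.array([kp_l[m.queryIdx].pt[1] - kp_r[m.trainIdx].pt[1] for m in matches])
    print("median vertical disparity (px):", np.median(dy))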

  6. A 3D Geometry Model Search Engine to Support Learning

    ERIC Educational Resources Information Center

    Tam, Gary K. L.; Lau, Rynson W. H.; Zhao, Jianmin

    2009-01-01

    Due to the popularity of 3D graphics in animation and games, usage of 3D geometry deformable models increases dramatically. Despite their growing importance, these models are difficult and time consuming to build. A distance learning system for the construction of these models could greatly facilitate students to learn and practice at different…

  7. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    well as 2-D and 3-D lines, but does not support graphics features requiring 3-D polygons (shading and hidden line removal, for example). Views can be manipulated using keyboard commands. This version of PLOT3D is potentially able to produce files for a variety of output devices; however, site-specific capabilities will vary depending on the device drivers supplied with the user's DISSPLA library. If ARCGRAPH (ARC-12350) is installed on the user's VAX, the VMS/DISSPLA version of PLOT3D can also be used to create files for use in GAS (Graphics Animation System, ARC-12379), an IRIS program capable of animating and recording images on film. The version 3.6b+ VMS/DISSPLA implementations of PLOT3D (ARC-12777) and PLOT3D/TURB3D (ARC-12781) were developed for use on VAX computers running VMS Version 5.0 and DISSPLA Version 11.0. The standard distribution media for each of these programs is a 9-track, 6250 bpi magnetic tape in DEC VAX BACKUP format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D (ARC-12783, ARC12782); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a

  8. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    well as 2-D and 3-D lines, but does not support graphics features requiring 3-D polygons (shading and hidden line removal, for example). Views can be manipulated using keyboard commands. This version of PLOT3D is potentially able to produce files for a variety of output devices; however, site-specific capabilities will vary depending on the device drivers supplied with the user's DISSPLA library. If ARCGRAPH (ARC-12350) is installed on the user's VAX, the VMS/DISSPLA version of PLOT3D can also be used to create files for use in GAS (Graphics Animation System, ARC-12379), an IRIS program capable of animating and recording images on film. The version 3.6b+ VMS/DISSPLA implementations of PLOT3D (ARC-12777) and PLOT3D/TURB3D (ARC-12781) were developed for use on VAX computers running VMS Version 5.0 and DISSPLA Version 11.0. The standard distribution media for each of these programs is a 9-track, 6250 bpi magnetic tape in DEC VAX BACKUP format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D (ARC-12783, ARC12782); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a

  9. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    advanced features which aid visualization efforts. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are even offered: creation of simple animation sequences without the need for other software; and, creation of files for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and can record images to digital disk, video tape, or 16-mm film. The version 3.6b+ SGI implementations of PLOT3D (ARC-12783) and PLOT3D/TURB3D (ARC-12782) were developed for use on Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations. These programs are each distributed on one .25 inch magnetic tape cartridge in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777,ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are

  10. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    advanced features which aid visualization efforts. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are even offered: creation of simple animation sequences without the need for other software; and, creation of files for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and can record images to digital disk, video tape, or 16-mm film. The version 3.6b+ SGI implementations of PLOT3D (ARC-12783) and PLOT3D/TURB3D (ARC-12782) were developed for use on Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations. These programs are each distributed on one .25 inch magnetic tape cartridge in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777,ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are

  11. Spatially resolved 3D noise

    NASA Astrophysics Data System (ADS)

    Haefner, David P.; Preece, Bradley L.; Doe, Joshua M.; Burks, Stephen D.

    2016-05-01

    When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density (PSD) for noise in imaging systems known as 3D noise. In this correspondence, we describe how the confidence intervals for the 3D noise measurement allows for determination of the sampling necessary to reach a desired precision. We then apply that knowledge to create a smaller cube that can be evaluated spatially across the 2D image giving the noise as a function of position. The method presented here allows for both defective pixel identification and implements the finite sampling correction matrix. In support of the reproducible research effort, the Matlab functions associated with this work can be found on the Mathworks file exchange [1].
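
    A much-simplified version of the underlying averaging idea is sketched below: it splits the variance of a uniform-scene image cube into a temporal part and a fixed-pattern (spatial) part, whereas the full NVESD 3D noise model separates seven directional components. The cube dimensions and noise levels are assumed values, not measured data.

    # Simplified temporal/spatial noise split for a (frames, rows, cols) image cube.
    import numpy as np

    def basic_3d_noise(cube):
        mean_frame = cube.mean(axis=0)                       # average over time
        sigma_tvh = cube.std()                               # total 3D noise
        sigma_t = cube.std(axis=0).mean()                    # mean per-pixel temporal noise
        sigma_vh = mean_frame.std()                          # fixed-pattern (spatial) noise
        return sigma_tvh, sigma_t, sigma_vh

    cube = np.random.default_rng(1).normal(100.0, 2.0, size=(64, 128, 160))  # synthetic uniform scene
    print(basic_3d_noise(cube))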

  12. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  13. Accepting the T3D

    SciTech Connect

    Rich, D.O.; Pope, S.C.; DeLapp, J.G.

    1994-10-01

    In April, a 128 PE Cray T3D was installed at Los Alamos National Laboratory's Advanced Computing Laboratory as part of the DOE's High-Performance Parallel Processor Program (H4P). In conjunction with CRI, the authors implemented a 30 day acceptance test. The test was constructed in part to help them understand the strengths and weaknesses of the T3D. In this paper, they briefly describe the H4P and its goals. They discuss the design and implementation of the T3D acceptance test and detail issues that arose during the test. They conclude with a set of system requirements that must be addressed as the T3D system evolves.

  14. 3D face recognition under expressions, occlusions, and pose variations.

    PubMed

    Drira, Hassen; Ben Amor, Boulbaba; Srivastava, Anuj; Daoudi, Mohamed; Slama, Rim

    2013-09-01

    We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.
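
    The radial-curve representation can be approximated on a raw point cloud by angular binning around the nose tip, as in the rough sketch below. Real implementations work on meshes, typically by intersecting the surface with planes through the nose tip, so this is only an approximation; the nose-tip location, roughly frontal orientation, and bin tolerance are assumptions.

    # Rough sketch: group face points into radial curves around a known nose tip.
    import numpy as np

    def radial_curves(points, nose_tip, n_curves=40, tol_deg=2.0):
        rel = points - nose_tip
        angles = np.arctan2(rel[:, 1], rel[:, 0])            # polar angle in the x-y plane
        radii = np.hypot(rel[:, 0], rel[:, 1])
        curves = []
        for k in range(n_curves):
            target = -np.pi + 2 * np.pi * k / n_curves
            diff = np.angle(np.exp(1j * (angles - target)))  # wrapped angular distance to this bin
            mask = np.abs(diff) < np.deg2rad(tol_deg)
            curve = points[mask][np.argsort(radii[mask])]    # order points outward from the nose
            curves.append(curve)
        return curves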

  15. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  16. Facial fractures.

    PubMed Central

    Carr, M. M.; Freiberg, A.; Martin, R. D.

    1994-01-01

    Emergency room physicians frequently see facial fractures that can have serious consequences for patients if mismanaged. This article reviews the signs, symptoms, imaging techniques, and general modes of treatment of common facial fractures. It focuses on fractures of the mandible, zygomaticomaxillary region, orbital floor, and nose. PMID:8199509

  17. Spacecraft 3D Augmented Reality Mobile App

    NASA Technical Reports Server (NTRS)

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.
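
    The marker-based tracking idea can be illustrated with OpenCV's ArUco module: detect a printed target in the camera frame and solve for the camera pose relative to it, which is what lets a 3D model be anchored to the marker. This is only a sketch, not JPL's implementation; the camera matrix, marker size, and input image are placeholder values, and the aruco API names differ slightly between OpenCV versions.

    # Sketch: detect an ArUco marker and estimate its pose for anchoring a 3D model.
    import cv2
    import numpy as np

    frame = cv2.imread("camera_frame.png")                   # hypothetical camera image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(gray)

    if ids is not None:
        s = 0.05                                             # marker side length in metres (assumed)
        obj_pts = np.array([[-s/2, s/2, 0], [s/2, s/2, 0],
                            [s/2, -s/2, 0], [-s/2, -s/2, 0]], dtype=np.float32)
        K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)  # placeholder intrinsics
        ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2), K, None)
        # rvec/tvec place the marker in camera coordinates; a renderer would draw
        # the 3D spacecraft model at that pose so it appears fixed to the target.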

  18. A 3-D Look at Post-Tropical Cyclone Hermine

    NASA Video Gallery

    This 3-D flyby animation of GPM imagery shows Post-Tropical Storm Hermine on Sept. 6. Rain was falling at a rate of over 1.1 inches (27 mm) per hour between the Atlantic coast and Hermine's center ...

  19. NASA Provides a 3-D Look at Typhoon Chaba

    NASA Video Gallery

    This 3-D flyby animation of Typhoon Chaba was created using data from NASA/JAXA's Global Precipitation Measurement mission or GPM core satellite. On Oct. 2, the GPM satellite saw some precipitation...

  20. A NASA 3-D Flyby of Hurricane Seymour

    NASA Video Gallery

    This 3-D Flyby animation from data gathered by the GPM core observatory satellite is from its view of Hurricane Seymour on Oct. 25 at 7:46 am PDT (1646 UTC). GPM showed rain falling at the extreme ...

  1. NASA Sees Tropical Storm Malakas in 3-D

    NASA Video Gallery

    This animated 3-D flyby of Tropical Storm Malakas was created by radar data from the GPM core satellite. On Sept. 13 at 0111 UTC GPM's instruments showed that Malakas contained exceptionally heavy ...

  2. Immersive 3D geovisualisation in higher education

    NASA Astrophysics Data System (ADS)

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2014-05-01

    Through geovisualisation we explore spatial data, analyse it to address specific questions, synthesise results, and present and communicate them to a specific audience (MacEachren & Kraak 1997). After centuries of paper maps, the means to represent and visualise our physical environment and its abstract qualities have changed dramatically since the 1990s - and so have the methods for using geovisualisation in teaching. Whereas some people might still consider the traditional classroom the ideal setting for teaching and learning geographic relationships and their mapping, we used a 3D CAVE (computer-animated virtual environment) as the environment for a problem-oriented learning project called "GEOSimulator". Focussing on this project, we empirically investigated whether a technological advance like the CAVE makes 3D visualisation, including 3D geovisualisation, an important tool not only for businesses (Abulrub et al. 2012) and for the public (Wissen et al. 2008), but also for educational purposes, for which it had hardly been used yet. The 3D CAVE is a three-sided visualisation platform that allows for immersive and stereoscopic visualisation of observed and simulated spatial data. We examined the benefits of immersive 3D visualisation for geographic research and education and synthesized three fundamental technology-based visual aspects: First, the conception and comprehension of space and location does not need to be generated, but is instantaneously and intuitively present through stereoscopy. Second, optical immersion into virtual reality strengthens this spatial perception, which is particularly important for complex 3D geometries. And third, a significant benefit is interactivity, which is enhanced through immersion and allows for multi-discursive and dynamic data exploration and knowledge transfer. Based on our problem-oriented learning project, which concentrates on a case study on flood risk management at the Wilde Weisseritz in Germany, a river

  3. LASTRAC.3d: Transition Prediction in 3D Boundary Layers

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2004-01-01

    Langley Stability and Transition Analysis Code (LASTRAC) is a general-purpose, physics-based transition prediction code released by NASA for laminar flow control studies and transition research. This paper describes the LASTRAC extension to general three-dimensional (3D) boundary layers such as finite swept wings, cones, or bodies at an angle of attack. The stability problem is formulated by using a body-fitted nonorthogonal curvilinear coordinate system constructed on the body surface. The nonorthogonal coordinate system offers a variety of marching paths and spanwise waveforms. In the extreme case of an infinite swept wing boundary layer, marching with a nonorthogonal coordinate produces identical solutions to those obtained with an orthogonal coordinate system using the earlier release of LASTRAC. Several methods to formulate the 3D parabolized stability equations (PSE) are discussed. A surface-marching procedure akin to that for 3D boundary layer equations may be used to solve the 3D parabolized disturbance equations. On the other hand, the local line-marching PSE method, formulated as an easy extension from its 2D counterpart and capable of handling the spanwise mean flow and disturbance variation, offers an alternative. A linear stability theory or parabolized stability equations based N-factor analysis carried out along the streamline direction with a fixed wavelength and downstream-varying spanwise direction constitutes an efficient engineering approach to study instability wave evolution in a 3D boundary layer. The surface-marching PSE method enables a consistent treatment of the disturbance evolution along both streamwise and spanwise directions but requires more stringent initial conditions. Both PSE methods and the traditional LST approach are implemented in the LASTRAC.3d code. Several test cases for tapered or finite swept wings and cones at an angle of attack are discussed.

  4. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing and industrial design to architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing that makes it possible to obtain a solid object from a 3D model created with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer allows very complex shapes to be realized in a simple way, shapes that would be quite difficult to produce with dedicated conventional facilities. Because the object is built up layer by layer, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed afterwards. A further issue is the resolution of the printed material, which is expressed in terms of layers
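
    As a minimal illustration of the build-volume constraint described above, the following Python sketch checks a part's bounding box against the quoted 10×10×12 inch printable volume and counts how many segments it would have to be split into; the part dimensions are hypothetical and the splitting logic is a simplification of what a real slicing workflow would do.

        import math

        # Printable volume quoted in the text: 10 x 10 x 12 inches (x, y, z).
        printable_in = (10.0, 10.0, 12.0)

        # Hypothetical bounding box of one oversized telescope part, in inches.
        part_extent_in = (13.5, 8.0, 30.0)

        # Minimum number of segments needed along each axis so every piece fits.
        segments = [math.ceil(e / p) for e, p in zip(part_extent_in, printable_in)]
        print("segments per axis (x, y, z):", segments)
        print("total pieces to print and reassemble:", math.prod(segments))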

  5. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered in a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  6. Microfluidic 3D models of cancer

    PubMed Central

    Sung, Kyung Eun; Beebe, David J.

    2014-01-01

    Despite advances in medicine and biomedical sciences, cancer still remains a major health issue. Complex interactions between tumors and their microenvironment contribute to tumor initiation and progression and also contribute to the development of drug resistant tumor cell populations. The complexity and heterogeneity of tumors and their microenvironment make it challenging to both study and treat cancer. Traditional animal cancer models and in vitro cancer models are limited in their ability to recapitulate human structures and functions, thus hindering the identification of appropriate drug targets and therapeutic strategies. The development and application of microfluidic 3D cancer models has the potential to overcome some of the limitations inherent to traditional models. This review summarizes the progress in microfluidic 3D cancer models, their benefits, and their broad application to basic cancer biology, drug screening, and drug discovery. PMID:25017040

  7. Microfluidic 3D models of cancer.

    PubMed

    Sung, Kyung Eun; Beebe, David J

    2014-12-15

    Despite advances in medicine and biomedical sciences, cancer still remains a major health issue. Complex interactions between tumors and their microenvironment contribute to tumor initiation and progression and also contribute to the development of drug resistant tumor cell populations. The complexity and heterogeneity of tumors and their microenvironment make it challenging to both study and treat cancer. Traditional animal cancer models and in vitro cancer models are limited in their ability to recapitulate human structures and functions, thus hindering the identification of appropriate drug targets and therapeutic strategies. The development and application of microfluidic 3D cancer models have the potential to overcome some of the limitations inherent to traditional models. This review summarizes the progress in microfluidic 3D cancer models, their benefits, and their broad application to basic cancer biology, drug screening, and drug discovery.

  8. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  9. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  10. Real time 3D scanner: investigations and results

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik; Pflug, Leopold

    1993-12-01

    This article presents a concept for the reconstruction of 3-D objects using non-invasive, touchless techniques. The principle of the method is to project parallel optical interference fringes onto an object and then to record the object from two angles of view. With appropriate processing, the 3-D object is reconstructed even when it has no plane of symmetry. The 3-D surface data are available immediately in digital form for computer visualization and for analysis software tools. The optical set-up for recording the 3-D object, the 3-D data extraction and treatment, as well as the reconstruction of the 3-D object are reported and commented on. This application is dedicated to reconstructive/cosmetic surgery, CAD, animation and research purposes.
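
    The abstract does not spell out how the fringe images are converted into surface data, so the following Python sketch only illustrates a generic four-step phase-shifting calculation commonly used in fringe-projection profilometry; it is not the authors' algorithm, and the intensities are synthetic.

        import numpy as np

        def wrapped_phase(i0, i1, i2, i3):
            """Four-step phase shifting: fringe images shifted by 0, 90, 180 and 270 degrees."""
            return np.arctan2(i3 - i1, i0 - i2)

        # Synthetic 2 x 2 pixel example: generate four phase-shifted fringe images.
        rng = np.random.default_rng(0)
        phase_true = rng.uniform(-np.pi, np.pi, size=(2, 2))
        shots = [1.0 + 0.5 * np.cos(phase_true + k * np.pi / 2) for k in range(4)]

        phi = wrapped_phase(*shots)
        print(np.allclose(phi, phase_true))   # True: the wrapped phase is recovered
        # A calibrated phase-to-height mapping (from the fringe geometry and the two
        # viewing angles) would then convert phi into 3-D surface coordinates.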

  11. Identifying chromosomal selection-sweep regions in facial eczema selection-line animals using an ovine 50K-SNP array.

    PubMed

    Phua, S H; Brauning, R; Baird, H J; Dodds, K G

    2014-04-01

    Facial eczema (FE) is a hepato-mycotoxicosis found mainly in New Zealand sheep and cattle. When genetics was found to be a factor in FE susceptibility, resistant and susceptible selection lines of Romney sheep were established to enable further investigations of this disease trait. Using the Illumina OvineSNP50 BeadChip, we conducted a selection-sweep experiment on these FE genetic lines. Two analytical methods were used to detect selection signals, namely the Peddrift test (Dodds & McEwan, 1997) and fixation index FST (Weir & Hill, 2002). Of 50 975 single nucleotide polymorphism (SNP) markers tested, there were three that showed highly significant allele frequency differences between the resistant and susceptible animals (Peddrift nominal P < 0.000001). These SNP loci are located on chromosomes OAR1, OAR11 and OAR12 that coincide precisely with the three highest genomic FST peaks. In addition, there are nine less significant Peddrift SNPs (nominal P ≤ 0.000009) on OAR6 (n = 2), OAR9 (n = 2), OAR12, OAR19 (n = 2), OAR24 and OAR26. In smoothed FST (five-SNP moving average) plots, the five most prominent peaks are on OAR1, OAR6, OAR7, OAR13 and OAR19. Although these smoothed FST peaks do not coincide with the three most significant Peddrift SNP loci, two (on OAR6 and OAR19) overlap with the set of less significant Peddrift SNPs above. Of these 12 Peddrift SNPs and five smoothed FST regions, none is close to the FE candidate genes catalase and ABCG2; however, two on OAR1 and one on OAR13 fall within suggestive quantitative trait locus regions identified in a previous genome screen experiment. The present studies indicated that there are at least eight genomic regions that underwent a selection sweep in the FE lines. PMID:24521158
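
    As an illustration of the per-SNP differentiation statistic and the five-SNP moving average used for the smoothed plots, the following Python sketch computes a simple Wright/Nei-style FST from allele frequencies in two lines; it is not the exact Weir & Hill (2002) estimator or the Peddrift test, and the frequencies are simulated.

        import numpy as np

        def fst_per_snp(p_res, p_sus):
            """Simple two-population FST per SNP from allele frequencies (Wright/Nei form).
            Illustrative estimator only, not the exact Weir & Hill (2002) statistic."""
            p_bar = (p_res + p_sus) / 2.0
            h_t = 2.0 * p_bar * (1.0 - p_bar)                              # total heterozygosity
            h_s = (2.0 * p_res * (1 - p_res) + 2.0 * p_sus * (1 - p_sus)) / 2.0
            with np.errstate(divide="ignore", invalid="ignore"):
                return np.where(h_t > 0, (h_t - h_s) / h_t, 0.0)

        def smoothed(values, window=5):
            """Five-SNP moving average, as used for the smoothed FST plots."""
            kernel = np.ones(window) / window
            return np.convolve(values, kernel, mode="valid")

        # Hypothetical allele frequencies for 12 SNPs in the resistant / susceptible lines.
        rng = np.random.default_rng(1)
        p_res = rng.uniform(0.05, 0.95, 12)
        p_sus = np.clip(p_res + rng.normal(0, 0.15, 12), 0.0, 1.0)

        fst = fst_per_snp(p_res, p_sus)
        print("raw FST:", np.round(fst, 3))
        print("5-SNP smoothed FST:", np.round(smoothed(fst), 3))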

  12. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia.

  13. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  14. Three-dimensional assessment of facial asymmetry: A systematic review

    PubMed Central

    Akhil, Gopi; Senthil Kumar, Kullampalayam Palanisamy; Raja, Subramani; Janardhanan, Kumaresan

    2015-01-01

    For patients with facial asymmetry, a complete and precise diagnosis and surgical treatment to correct the underlying cause of the asymmetry are essential. Conventional diagnostic radiographs (submento-vertex projections, posteroanterior radiography) have limitations in asymmetry diagnosis due to two-dimensional assessment of three-dimensional (3D) structures. The advent of 3D images has greatly reduced the magnification and projection errors that are common in conventional radiographs, making them a precise diagnostic aid for the assessment of facial asymmetry. Thus, this article attempts to review the newly introduced 3D tools in the diagnosis of more complex facial asymmetries. PMID:26538893

  15. Facial trauma

    MedlinePlus

    Kellman RM. Maxillofacial trauma. In: Flint PW, Haughey BH, Lund LJ, et al, eds. Cummings Otolaryngology: Head & Neck Surgery . ... Facial trauma. In: Marx JA, Hockberger RS, Walls RM, et al, eds. Rosen's Emergency Medicine: Concepts and ...

  16. Facial anatomy.

    PubMed

    Marur, Tania; Tuna, Yakup; Demirci, Selman

    2014-01-01

    Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. To perform invasive procedures on the face successfully, it is essential to understand its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way, with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and the facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery.

  17. Facial paralysis

    MedlinePlus

    ... headaches, seizures, or hearing loss. In newborns, facial paralysis may be caused by trauma during birth. Other causes include: Infection of the brain or surrounding tissues Lyme disease Sarcoidosis Tumor that ...

  18. Facial tics

    MedlinePlus

    ... 2010;33:641-655. Jankovic J, Lang AE. Movement disorders. In: Daroff RB, Fenichel GM, Jankovic J, Mazziotta ... Malhotra R. Review and update of involuntary facial movement disorders presenting in the ophthalmological setting. Surv Ophthalmol. Ryan ...

  19. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  20. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  1. Robust facial landmark detection for three-dimensional face segmentation and alignment

    NASA Astrophysics Data System (ADS)

    Wu, Hai Shan; Chen, Yan Qiu

    2010-07-01

    Three-dimensional human faces have been applied in many fields, such as face animation, identity recognition, and facial plastic surgery. Segmenting and aligning 3-D faces from raw scanned data is the first vital step toward making these applications successful. However, the existence of artifacts, facial expressions, and noises poses many challenges to this problem. We propose an automatic and robust method to segment and align 3-D face surfaces by locating the nose tip and nose ridge. Taking a raw scanned surface as input, a novel feature-based moment analysis on scale spaces is presented to locate the nose tip accurately and robustly, which is then used to crop the face region. A technique called the geodesic Euclidean ratio is then developed to find the nose ridge. Each face is aligned based on the locations of nose tip and nose ridge. The proposed method is not only invariant to translations and rotations, but also robust in the presence of facial expressions and artifacts such as hair, clothing, other body parts, etc. Experimental results on two large 3-D face databases demonstrate the accuracy and robustness of the proposed method.
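
    The paper aligns each face from the detected nose tip and nose ridge; the details of that alignment are not given here, so the following Python sketch only shows a generic least-squares rigid alignment (Kabsch/SVD) between two sets of corresponding 3-D landmarks, with made-up point data.

        import numpy as np

        def rigid_align(src, dst):
            """Least-squares rotation R and translation t mapping src landmarks onto dst
            (Kabsch algorithm); src and dst are (N, 3) arrays of corresponding points."""
            src_c, dst_c = src.mean(0), dst.mean(0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = dst_c - R @ src_c
            return R, t

        # Hypothetical landmarks (e.g. nose tip plus points sampled along the nose ridge).
        rng = np.random.default_rng(2)
        face_a = rng.normal(size=(6, 3))
        R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)   # 90 deg about z
        face_b = face_a @ R_true.T + np.array([5.0, -2.0, 1.0])

        R, t = rigid_align(face_a, face_b)
        print(np.allclose(face_a @ R.T + t, face_b))      # True: faces are aligned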

  2. Animator

    ERIC Educational Resources Information Center

    Tech Directions, 2008

    2008-01-01

    Art and animation work is the most significant part of electronic game development, but is also found in television commercials, computer programs, the Internet, comic books, and in just about every visual media imaginable. It is the part of the project that makes an abstract design idea concrete and visible. Animators create the motion of life in…

  3. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.
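
    TACO3D itself is a full 3-D finite-element code; as a much smaller illustration of the implicit time integration it uses for transient problems, the following Python sketch solves 1-D transient heat conduction with prescribed end temperatures via backward Euler. Material properties, mesh and time step are hypothetical.

        import numpy as np

        # 1-D transient conduction, rho*c*dT/dt = k*d2T/dx2, backward-Euler in time.
        nx, L = 21, 0.1                          # nodes, bar length [m]
        dx = L / (nx - 1)
        k, rho, c = 50.0, 7800.0, 500.0          # hypothetical steel-like properties
        alpha = k / (rho * c)
        dt, nsteps = 5.0, 200                    # time step [s], number of steps

        T = np.full(nx, 20.0)                    # initial temperature field [C]
        T[0], T[-1] = 100.0, 20.0                # prescribed boundary temperatures

        # Constant implicit system (I + r*K) T^{n+1} = T^n, with r = alpha*dt/dx^2.
        r = alpha * dt / dx**2
        A = (1 + 2 * r) * np.eye(nx) - r * np.eye(nx, k=1) - r * np.eye(nx, k=-1)
        A[0, :] = 0.0
        A[-1, :] = 0.0
        A[0, 0] = A[-1, -1] = 1.0                # Dirichlet rows keep end values fixed

        for _ in range(nsteps):
            T = np.linalg.solve(A, T)

        print(f"mid-point temperature after {nsteps * dt:.0f} s: {T[nx // 2]:.1f} C")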

  4. 3-D video techniques in endoscopic surgery.

    PubMed

    Becker, H; Melzer, A; Schurr, M O; Buess, G

    1993-02-01

    Three-dimensional visualisation of the operative field is an important requisite for precise and fast handling of open surgical operations. Up to now it has only been possible to display a two-dimensional image on the monitor during endoscopic procedures. The increasing complexity of minimally invasive interventions requires endoscopic suturing and ligature of larger vessels, which are difficult to perform without an impression of spatial depth. Three-dimensional vision therefore may decrease the operative risk, accelerate interventions and widen the operative spectrum. In April 1992 a 3-D video system developed at the Nuclear Research Center Karlsruhe, Germany (IAI Institute) was applied in various animal experimental procedures and clinically in laparoscopic cholecystectomy. The system works with a single monitor and active high-speed shutter glasses. Our first trials with this new 3-D imaging system clearly showed a facilitation of complex surgical manoeuvres such as mobilisation of organs, preparation in deep spaces and suture techniques. The 3-D system introduced in this article will enter the market in 1993 (Opticon Co., Karlsruhe, Germany). PMID:8050009

  5. MO-A-9A-01: Innovation in Medical Physics Practice: 3D Printing Applications

    SciTech Connect

    Ehler, E; Perks, J; Rasmussen, K; Bakic, P

    2014-06-15

    3D printing, also called additive manufacturing, has great potential to advance the field of medicine. Many medical uses have been exhibited, from facial reconstruction to the repair of pulmonary obstructions. The strength of 3D printing is to quickly convert a 3D computer model into a physical object. Medical use of 3D models is already ubiquitous with technologies such as computed tomography and magnetic resonance imaging. Thus tailoring 3D printing technology to medical functions has the potential to impact patient care. This session will discuss applications to the field of Medical Physics. Topics discussed will include an introduction to 3D printing methods as well as examples of real-world uses of 3D printing spanning clinical and research practice in diagnostic imaging and radiation therapy. The session will also compare 3D printing to other manufacturing processes and discuss a variety of uses of 3D printing technology outside the field of Medical Physics. Learning Objectives: (1) understand the technologies available for 3D printing; (2) understand methods to generate 3D models; (3) identify the benefits and drawbacks of rapid prototyping / 3D printing; (4) understand the potential issues related to clinical use of 3D printing.

  6. 3D reconstruction on CBCT in the cystic pathology of the jaws

    NASA Astrophysics Data System (ADS)

    Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia

    2013-10-01

    The paper presents the image acquisition of Cone Beam Computed Tomography scans of human facial bones and their processing in order to obtain a 3D reconstruction model of the skull. The reconstructed model provides useful data to the physician in cases of maxillary cystic pathology; more important still are the data on the relationship of the maxillary cyst to the surrounding anatomical elements. Using B-splines, a 3D volume model of the human facial bones can be obtained. This model can be exported to any CAD system, resulting in a virtual model which can be used in FEM analysis.
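
    The abstract mentions that B-splines are used to build the 3D model; as a small, generic illustration of B-spline surface fitting (not the authors' CBCT pipeline), the following Python sketch fits a cubic B-spline surface to a hypothetical grid of bone-surface heights with scipy.interpolate.RectBivariateSpline and evaluates it on a finer grid, e.g. for rendering or CAD export.

        import numpy as np
        from scipy.interpolate import RectBivariateSpline

        # Hypothetical grid of bone-surface heights z(x, y) sampled from segmented CBCT slices.
        x = np.linspace(0, 40, 21)          # mm
        y = np.linspace(0, 30, 16)          # mm
        xx, yy = np.meshgrid(x, y, indexing="ij")
        z = 5.0 * np.exp(-((xx - 20) ** 2 + (yy - 15) ** 2) / 150.0)   # made-up surface

        # Cubic B-spline surface; s > 0 adds smoothing, s = 0 interpolates exactly.
        surf = RectBivariateSpline(x, y, z, kx=3, ky=3, s=0)

        # Evaluate the smooth model on a finer grid.
        xf = np.linspace(0, 40, 161)
        yf = np.linspace(0, 30, 121)
        zf = surf(xf, yf)
        print(zf.shape)                      # (161, 121)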

  7. Facial attractiveness.

    PubMed

    Little, Anthony C

    2014-11-01

    Facial attractiveness has important social consequences. Despite a widespread belief that beauty cannot be defined, in fact, there is considerable agreement across individuals and cultures on what is found attractive. By considering that attraction and mate choice are critical components of evolutionary selection, we can better understand the importance of beauty. There are many traits that are linked to facial attractiveness in humans and each may in some way impart benefits to individuals who act on their preferences. If a trait is reliably associated with some benefit to the perceiver, then we would expect individuals in a population to find that trait attractive. Such an approach has highlighted face traits such as age, health, symmetry, and averageness, which are proposed to be associated with benefits and so associated with facial attractiveness. This view may postulate that some traits will be universally attractive; however, this does not preclude variation. Indeed, it would be surprising if there existed a template of a perfect face that was not affected by experience, environment, context, or the specific needs of an individual. Research on facial attractiveness has documented how various face traits are associated with attractiveness and various factors that impact on an individual's judgments of facial attractiveness. Overall, facial attractiveness is complex, both in the number of traits that determine attraction and in the large number of factors that can alter attraction to particular faces. A fuller understanding of facial beauty will come with an understanding of how these various factors interact with each other. WIREs Cogn Sci 2014, 5:621-634. doi: 10.1002/wcs.1316. PMID:26308869

  8. Recognizing Facial Expressions Automatically from Video

    NASA Astrophysics Data System (ADS)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are the face changes in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22], who argued that all mammals show emotions reliably in their faces, was the first to describe in detail the specific facial expressions associated with emotions in animals and humans. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode for nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  9. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  10. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-08

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material which could potentially initiate another new material age. However, conventional processing methods fail to link it to today's personalization tide, even when graphene is exploited to its full extent. New technology should be ushered in. Three-dimensional (3D) printing fills the missing link between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate, for the first time, that a graphene composite with a graphene loading of up to 5.6 wt% can be 3D printed into computer-designed models. The composite's linear thermal expansion coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress minimal during the printing process.

  11. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  12. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer so that it would resemble the appearance and mobility of a real human hand as closely as possible while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials alone, the manufacturing cost of the hand was $167 (excluding the actuators), significantly lower than that of other robotic hands, which require more complex assembly processes.

  13. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  14. [Real time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is displayed in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to improve its user-friendliness. Real time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  15. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is displayed in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to improve its user-friendliness. Real time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  16. DYNA3D. Explicit 3-d Hydrodynamic FEM Program

    SciTech Connect

    Whirley, R.G.; Englemann, B.E. )

    1993-11-30

    DYNA3D is an explicit, three-dimensional, finite element program for analyzing the large deformation dynamic response of inelastic solids and structures. DYNA3D contains 30 material models and 10 equations of state (EOS) to cover a wide range of material behavior. The material models implemented are: elastic, orthotropic elastic, kinematic/isotropic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, Blatz-Ko rubber, high explosive burn, hydrodynamic without deviatoric stresses, elastoplastic hydrodynamic, temperature-dependent elastoplastic, isotropic elastoplastic, isotropic elastoplastic with failure, soil and crushable foam with failure, Johnson/Cook plasticity model, pseudo TENSOR geological model, elastoplastic with fracture, power law isotropic plasticity, strain rate dependent plasticity, rigid, thermal orthotropic, composite damage model, thermal orthotropic with 12 curves, piecewise linear isotropic plasticity, inviscid two invariant geologic cap, orthotropic crushable model, Mooney-Rivlin rubber, resultant plasticity, closed form update shell plasticity, and Frazer-Nash rubber model. The hydrodynamic material models determine only the deviatoric stresses. Pressure is determined by one of 10 equations of state including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, tabulated, and TENSOR pore collapse. DYNA3D generates three binary output databases. One contains information for complete states at infrequent intervals; 50 to 100 states is typical. The second contains information for a subset of nodes and elements at frequent intervals; 1,000 to 10,000 states is typical. The last contains interface data for contact surfaces.

  17. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
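
    The GPU kernels and autotuner are not reproduced here; the following CPU-side Python sketch only illustrates the second step described above - sweeping a denoising parameter and keeping the setting with the lowest mean squared error against a noiseless reference - using a Gaussian filter as a stand-in denoiser and synthetic volume data.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Noiseless reference volume and a noisy observation (synthetic 3D MR-like data).
        rng = np.random.default_rng(3)
        reference = gaussian_filter(rng.random((32, 32, 32)), sigma=2.0)
        noisy = reference + rng.normal(0.0, 0.05, reference.shape)

        # Sweep the denoiser parameter and keep the setting with the lowest MSE
        # relative to the reference, mirroring the parameter-sweep step described above.
        best = None
        for sigma in (0.5, 1.0, 1.5, 2.0, 2.5):
            mse = np.mean((gaussian_filter(noisy, sigma) - reference) ** 2)
            if best is None or mse < best[1]:
                best = (sigma, mse)

        print(f"best sigma = {best[0]}, MSE = {best[1]:.5f}")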

  18. Objective 3D face recognition: Evolution, approaches and challenges.

    PubMed

    Smeets, Dirk; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    Face recognition is a natural human ability and a widely accepted identification and authentication method. In modern legal settings, a lot of credence is placed on identifications made by eyewitnesses. Consequently these are based on human perception which is often flawed and can lead to situations where identity is disputed. Therefore, there is a clear need to secure identifications in an objective way based on anthropometric measures. Anthropometry has existed for many years and has evolved with each advent of new technology and computing power. As a result of this, face recognition methodology has shifted from a purely 2D image-based approach to the use of 3D facial shape. However, one of the main challenges still remaining is the non-rigid structure of the face, which can change permanently over varying time-scales and briefly with facial expressions. The majority of face recognition methods have been developed by scientists with a very technical background such as biometry, pattern recognition and computer vision. This article strives to bridge the gap between these communities and the forensic science end-users. A concise review of face recognition using 3D shape is given. Methods using 3D shape applied to data embodying facial expressions are tabulated for reference. From this list a categorization of different strategies to deal with expressions is presented. The underlying concepts and practical issues relating to the application of each strategy are given, without going into technical details. The discussion clearly articulates the justification to establish archival, reference databases to compare and evaluate different strategies. PMID:20395086

  19. Objective 3D face recognition: Evolution, approaches and challenges.

    PubMed

    Smeets, Dirk; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    Face recognition is a natural human ability and a widely accepted identification and authentication method. In modern legal settings, a lot of credence is placed on identifications made by eyewitnesses. Consequently these are based on human perception which is often flawed and can lead to situations where identity is disputed. Therefore, there is a clear need to secure identifications in an objective way based on anthropometric measures. Anthropometry has existed for many years and has evolved with each advent of new technology and computing power. As a result of this, face recognition methodology has shifted from a purely 2D image-based approach to the use of 3D facial shape. However, one of the main challenges still remaining is the non-rigid structure of the face, which can change permanently over varying time-scales and briefly with facial expressions. The majority of face recognition methods have been developed by scientists with a very technical background such as biometry, pattern recognition and computer vision. This article strives to bridge the gap between these communities and the forensic science end-users. A concise review of face recognition using 3D shape is given. Methods using 3D shape applied to data embodying facial expressions are tabulated for reference. From this list a categorization of different strategies to deal with expressions is presented. The underlying concepts and practical issues relating to the application of each strategy are given, without going into technical details. The discussion clearly articulates the justification to establish archival, reference databases to compare and evaluate different strategies.

  20. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated

  1. Facial blindsight

    PubMed Central

    Solcà, Marco; Guggisberg, Adrian G.; Schnider, Armin; Leemann, Béatrice

    2015-01-01

    Blindsight denotes unconscious residual visual capacities in the context of an inability to consciously recollect or identify visual information. It has been described for color and shape discrimination, movement or facial emotion recognition. The present study investigates a patient suffering from cortical blindness whilst maintaining select residual abilities in face detection. Our patient presented the capacity to distinguish between jumbled/normal faces, known/unknown faces or famous people’s categories although he failed to explicitly recognize or describe them. Conversely, performance was at chance level when asked to categorize non-facial stimuli. Our results provide clinical evidence for the notion that some aspects of facial processing can occur without perceptual awareness, possibly using direct tracts from the thalamus to associative visual cortex, bypassing the primary visual cortex. PMID:26483655

  2. Facial blindsight.

    PubMed

    Solcà, Marco; Guggisberg, Adrian G; Schnider, Armin; Leemann, Béatrice

    2015-01-01

    Blindsight denotes unconscious residual visual capacities in the context of an inability to consciously recollect or identify visual information. It has been described for color and shape discrimination, movement or facial emotion recognition. The present study investigates a patient suffering from cortical blindness whilst maintaining select residual abilities in face detection. Our patient presented the capacity to distinguish between jumbled/normal faces, known/unknown faces or famous people's categories although he failed to explicitly recognize or describe them. Conversely, performance was at chance level when asked to categorize non-facial stimuli. Our results provide clinical evidence for the notion that some aspects of facial processing can occur without perceptual awareness, possibly using direct tracts from the thalamus to associative visual cortex, bypassing the primary visual cortex. PMID:26483655

  3. Soft tissue facial angles in Down's syndrome subjects: a three-dimensional non-invasive study.

    PubMed

    Ferrario, Virgilio F; Dellavia, Claudia; Serrao, Graziano; Sforza, Chiarella

    2005-08-01

    The aim of the present study was to obtain quantitative information concerning the three-dimensional (3D) arrangement of the facial soft tissues of subjects with Down's syndrome. The 3D co-ordinates of 50 soft tissue facial landmarks were recorded by an electromechanical digitizer in 17 male and 11 female subjects with Down's syndrome aged 12-45 years, and in 429 healthy individuals of the same age, ethnicity and gender. From the landmark co-ordinates, several 3D facial angles were computed geometrically: facial convexity in the horizontal plane (upper facial convexity, mid facial convexity including the nose, and lower facial convexity), mandibular corpus convexity in the horizontal plane, facial convexity including the nose, facial convexity excluding the nose, interlabial angle, nasolabial angle, angle of nasal convexity, and left and right soft tissue gonial angles. Data were compared with those collected for the normal subjects by computing z-scores. Facial convexity in the horizontal plane (both in the upper and mid facial thirds), facial convexity in the sagittal plane and the angle of nasal convexity were significantly (P < 0.05) increased (flatter) in subjects with Down's syndrome than in the normal controls. Both left and right soft tissue gonial angles were significantly reduced (more acute) in the Down's syndrome subjects. Subjects with Down's syndrome had a more hypoplastic facial middle third with reduced nasal protrusion, and a reduced lower facial third (mandible), than the reference normal subjects.

  4. The Use of Genetic Programming for Learning 3D Craniofacial Shape Quantifications.

    PubMed

    Atmosukarto, Indriyati; Shapiro, Linda G; Heike, Carrie

    2010-01-01

    Craniofacial disorders commonly result in various head shape dysmorphologies. The goal of this work is to quantify the various 3D shape variations that manifest in the different facial abnormalities in individuals with a craniofacial disorder called 22q11.2 Deletion Syndrome. Genetic programming (GP) is used to learn the different 3D shape quantifications. Experimental results show that the GP method achieves a higher classification rate than human experts and existing computer algorithms [1], [2].

  5. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. This set currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  6. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  7. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  8. 3D Printed Shelby Cobra

    SciTech Connect

    Love, Lonnie

    2015-01-09

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  9. Advanced Data Visualization in Astrophysics: The X3D Pathway

    NASA Astrophysics Data System (ADS)

    Vogt, Frédéric P. A.; Owen, Chris I.; Verdes-Montenegro, Lourdes; Borthakur, Sanchayeeta

    2016-02-01

    Most modern astrophysical data sets are multi-dimensional, a characteristic that can nowadays generally be conserved and exploited scientifically during the data reduction/simulation and analysis cascades. However, the same multi-dimensional data sets are systematically cropped, sliced, and/or projected to printable two-dimensional diagrams at the publication stage. In this article, we introduce the concept of the “X3D pathway” as a means of simplifying and easing the access to data visualization and publication via three-dimensional (3D) diagrams. The X3D pathway exploits the facts that (1) the X3D 3D file format lies at the center of a product tree that includes interactive HTML documents, 3D printing, and high-end animations, and (2) all high-impact-factor and peer-reviewed journals in astrophysics are now published (some exclusively) online. We argue that the X3D standard is an ideal vector for sharing multi-dimensional data sets because it provides direct access to a range of different data visualization techniques, is fully open source, and is a well-defined standard from the International Organization for Standardization. Unlike other earlier propositions to publish multi-dimensional data sets via 3D diagrams, the X3D pathway is not tied to specific software (prone to rapid and unexpected evolution), but instead is compatible with a range of open-source software already in use by our community. The interactive HTML branch of the X3D pathway is also actively supported by leading peer-reviewed journals in the field of astrophysics. Finally, this article provides interested readers with a detailed set of practical astrophysical examples designed to act as a stepping stone toward the implementation of the X3D pathway for any other data set.

  10. Creating 3D visualizations of MRI data: A brief guide

    PubMed Central

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D ‘glass brain’ rendering can sometimes be difficult to interpret, it is useful in showing a more overall representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study’s findings. PMID:26594340
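
    A common starting point for this kind of figure is the glass-brain projection available in the nilearn Python package; the brief sketch below assumes nilearn is installed and that 'stat_map.nii.gz' is a hypothetical statistical map in MNI space, and it is not necessarily the exact workflow described in the guide.

        # A minimal sketch, assuming nilearn is installed and that 'stat_map.nii.gz'
        # (a hypothetical thresholded activation map in MNI space) exists on disk.
        from nilearn import plotting

        display = plotting.plot_glass_brain(
            "stat_map.nii.gz",        # hypothetical statistical map (NIfTI file)
            threshold=3.0,            # show only voxels above this statistic value
            display_mode="lyrz",      # left and right lateral plus axial projections
            colorbar=True,
        )
        display.savefig("glass_brain.png", dpi=300)
        display.close()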

  11. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  12. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. Measurements of the acoustic signature recorded onboard a small Unmanned Aerial Vehicle (UAV) are compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delays between the UAV and the ground microphones, which are affected by the atmospheric temperature and wind speed vectors along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allow local meteorological measurements made at the UAV and ground receivers to supplement any acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments, otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.
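
    The full travel-time tomography is beyond a short example, but the field representation itself - a weighted sum of radial basis functions - can be sketched with scipy's RBFInterpolator. In the sketch below the scattered temperature samples are synthetic stand-ins for values recovered from the acoustic data, and the kernel and smoothing choices are illustrative assumptions.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Hypothetical scattered temperature samples (x, y, z in metres; T in Celsius),
        # standing in for values recovered from UAV/ground acoustic travel-time data.
        rng = np.random.default_rng(4)
        points = rng.uniform([0, 0, 0], [500, 500, 200], size=(40, 3))
        temps = 20.0 - 0.0065 * points[:, 2] + rng.normal(0, 0.2, 40)   # lapse-rate-like field

        # Weighted sum of radial basis functions representing the continuous 3D field.
        field = RBFInterpolator(points, temps, kernel="thin_plate_spline", smoothing=1.0)

        # Evaluate the reconstructed field on a coarse grid for visualisation.
        gx, gy, gz = np.meshgrid(np.linspace(0, 500, 6),
                                 np.linspace(0, 500, 6),
                                 np.linspace(0, 200, 5), indexing="ij")
        grid = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
        print(field(grid).reshape(gx.shape).shape)     # (6, 6, 5)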

  13. Gravitation in 3D Spacetime

    NASA Astrophysics Data System (ADS)

    Laubenstein, John; Cockream, Kandi

    2009-05-01

    3D spacetime was developed by the IWPD Scale Metrics (SM) team using a coordinate system that translates n dimensions to n-1. 4-vectors are expressed in 3D along with a scaling factor representing time. Time is not orthogonal to the three spatial dimensions, but rather in alignment with an object's axis-of-motion. We have defined this effect as the object's ``orientation'' (X). The SM orientation (X) is equivalent to the orientation of the 4-velocity vector positioned tangent to its worldline, where X-1=θ+1 and θ is the angle of the 4-vector relative to the axis-of-motion. Both 4-vectors and SM appear to represent valid conceptualizations of the relationship between space and time. Why entertain SM? Scale Metrics gravity is quantized and may suggest a path for the full unification of gravitation with quantum theory. SM has been tested against current observation and is in agreement with the age of the universe, suggests a physical relationship between dark energy and dark matter, is in agreement with the accelerating expansion rate of the universe, contributes to the understanding of the fine-structure constant and provides a physical explanation of relativistic effects.

  14. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing.

  15. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real-time. The data is acquired in motion, thus providing multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to a variety of factors such as angle of incidence, distance between the device and the subject, and environmental sensor data or other factors influencing the confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.

  16. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  17. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate, for the first time, that a graphene composite with a graphene loading up to 5.6 wt% can be 3D printed into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial to building up only minute thermal stress during the printing process. PMID:26153673

  18. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate, for the first time, that a graphene composite with a graphene loading up to 5.6 wt% can be 3D printed into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial to building up only minute thermal stress during the printing process.

  19. LOTT RANCH 3D PROJECT

    SciTech Connect

    Larry Lawrence; Bruce Miller

    2004-09-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126 square mile project does not seem monumental; however, at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber, and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed bit format. It would not have been practical, if not impossible, to have processed the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp. located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved upon by reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution being a priority. Much of the potential resolution was lost through the initial summing of the field data. Modern computers that are now being utilized have tremendous speed and storage capacities that were cost prohibitive when this data was initially processed. Software updates and capabilities offer a variety of quality control and statics resolution, which are pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed data-set was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  20. Allometry of facial mobility in anthropoid primates: implications for the evolution of facial expression.

    PubMed

    Dobson, Seth D

    2009-01-01

    Body size may be an important factor influencing the evolution of facial expression in anthropoid primates due to allometric constraints on the perception of facial movements. Given this hypothesis, I tested the prediction that observed facial mobility is positively correlated with body size in a comparative sample of nonhuman anthropoids. Facial mobility, or the variety of facial movements a species can produce, was estimated using a novel application of the Facial Action Coding System (FACS). I used FACS to estimate facial mobility in 12 nonhuman anthropoid species, based on video recordings of facial activity in zoo animals. Body mass data were taken from the literature. I used phylogenetic generalized least squares (PGLS) to perform a multiple regression analysis with facial mobility as the dependent variable and two independent variables: log body mass and dummy-coded infraorder. Together, body mass and infraorder explain 92% of the variance in facial mobility. However, the partial effect of body mass is much stronger than for infraorder. The results of my study suggest that allometry is an important constraint on the evolution of facial mobility, which may limit the complexity of facial expression in smaller species. More work is needed to clarify the perceptual bases of this allometric pattern.

  1. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm(-3)) 3D printed graphene aerogel exhibits superelasticity and high electrical conduction.

  2. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm(-3)) 3D printed graphene aerogel exhibits superelasticity and high electrical conduction. PMID:26861680

  3. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
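
    The operations listed above map onto a few array manipulations; the following sketch (NumPy only; the cube shape, region, and band windows are assumptions, and this is not the ShowMe3D source) shows a minimal version of selecting a band image, extracting a region spectrum, computing region statistics, and applying simple spectral filters.

    ```python
    # Illustrative sketch of the operations described above on a hyperspectral cube.
    # The cube shape (rows, cols, bands) and the example region are assumptions.
    import numpy as np

    cube = np.random.rand(256, 256, 512)     # placeholder stack: 512 spectral bands

    # 1) Display-ready single image from the stack (band selected by a "slider").
    band = 100
    image = cube[:, :, band]

    # 2) Spectrum of a selected pixel or rectangular region (mean over pixels).
    region = cube[120:140, 60:80, :]         # rows 120-139, cols 60-79
    region_spectrum = region.reshape(-1, cube.shape[2]).mean(axis=0)

    # 3) Region statistics such as intensity average and variance.
    stats = {"mean": float(region.mean()), "var": float(region.var())}

    # 4) Apply up to three spectral filters (here: simple boxcar band windows) and
    #    view the cube as a filter-based confocal microscope would.
    filters = [(90, 110), (200, 230), (400, 450)]          # assumed band windows
    filtered = [cube[:, :, lo:hi].sum(axis=2) for lo, hi in filters]
    print(image.shape, region_spectrum.shape, stats, len(filtered))
    ```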

  4. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching trace by trace the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 80's, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems has been unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm that generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the direction update of the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step-length. Our approach uses an explicit time-stepping finite-difference (FD) scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency domain analogue, although the discussion of which domain is more efficient still remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time-step. We tackled this problem using two different approaches. The first one makes better use of resources for small models of dimension equal
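
    As a hedged, much-reduced illustration of the "4th order in space, 2nd order in time" explicit time stepping mentioned above, the sketch below advances a 1D scalar wave equation; the grid size, velocity, and source are placeholder values, and this is not the 3D elastic staggered-grid scheme of Virieux used in the actual code.

    ```python
    # Illustrative 1D scalar-wave analogue of an explicit time-stepping scheme that
    # is 4th-order in space and 2nd-order in time. Parameters are placeholders and
    # this is not the 3D elastic staggered-grid scheme used in the paper.
    import numpy as np

    nx, nt = 401, 800
    dx, dt = 5.0, 0.0005                 # grid spacing [m], time step [s]
    c = np.full(nx, 2000.0)              # wave speed model [m/s]

    u_prev = np.zeros(nx)
    u_curr = np.zeros(nx)
    src_ix = nx // 2

    def laplacian4(u, dx):
        """4th-order accurate second spatial derivative (interior points only)."""
        d2 = np.zeros_like(u)
        d2[2:-2] = (-u[4:] + 16*u[3:-1] - 30*u[2:-2] + 16*u[1:-3] - u[:-4]) / (12*dx**2)
        return d2

    for it in range(nt):
        # 2nd-order-in-time leapfrog update.
        u_next = 2*u_curr - u_prev + (c*dt)**2 * laplacian4(u_curr, dx)
        # Ricker-like source injection (illustrative 25 Hz wavelet).
        t = it * dt
        arg = (np.pi * 25 * (t - 0.04)) ** 2
        u_next[src_ix] += (1 - 2*arg) * np.exp(-arg)
        u_prev, u_curr = u_curr, u_next

    print(float(np.abs(u_curr).max()))
    ```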

  5. Conducting Polymer 3D Microelectrodes

    PubMed Central

    Sasso, Luigi; Vazquez, Patricia; Vedarethinam, Indumathi; Castillo-León, Jaime; Emnéus, Jenny; Svendsen, Winnie E.

    2010-01-01

    Conducting polymer 3D microelectrodes have been fabricated for possible future neurological applications. A combination of micro-fabrication techniques and chemical polymerization methods has been used to create pillar electrodes in polyaniline and polypyrrole. The thin polymer films obtained showed uniformity and good adhesion to both horizontal and vertical surfaces. Electrodes in combination with metal/conducting polymer materials have been characterized by cyclic voltammetry, and the presence of the conducting polymer film has been shown to increase the electrochemical activity when compared with electrodes coated with only metal. An electrochemical characterization of gold/polypyrrole electrodes showed exceptional electrochemical behavior and activity. PC12 cells were finally cultured on the investigated materials as a preliminary biocompatibility assessment. These results show that the described electrodes are possibly suitable for future in-vitro neurological measurements. PMID:22163508

  6. ShowMe3D

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  7. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  8. Learning deformation model for expression-robust 3D face recognition

    NASA Astrophysics Data System (ADS)

    Guo, Zhe; Liu, Shu; Wang, Yi; Lei, Tao

    2015-12-01

    Expression change is the major cause of local plastic deformation of the facial surface. The intra-class differences under large expression changes can be larger than the inter-class differences, making it difficult to recognize the same individual across facial expression changes. In this paper, an expression-robust 3D face recognition method is proposed by learning an expression deformation model. The expressions of the individuals in the training set are modeled by principal component analysis, and the main components are retained to construct the facial deformation model. For a test 3D face, the shape difference between the test face and the neutral face in the training set is used to reconstruct the expression change with the constructed deformation model. The reconstruction residual error is used for face recognition. The average recognition rate on GavabDB and a self-built database reaches 85.1% and 83%, respectively, which shows strong robustness to expression changes.
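
    A hedged sketch of the recognition-by-residual idea described above: fit a PCA model to training deformations (expressive scan minus the subject's neutral scan), project a probe's deformation onto the retained components, and score each gallery identity by the reconstruction residual. All dimensions and data below are synthetic placeholders, not the paper's landmarks or databases.

    ```python
    # Illustrative sketch of an expression deformation model learned with PCA.
    # Face shapes are flattened (n_points*3) vectors; all data here is synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    n_train, dim, n_comp = 200, 3 * 500, 20          # assumed sizes

    # Training deformations: expressive scan minus that subject's neutral scan.
    deform_train = rng.normal(size=(n_train, dim))

    # PCA via SVD of mean-centred deformations; keep the main components.
    mean_def = deform_train.mean(axis=0)
    U, S, Vt = np.linalg.svd(deform_train - mean_def, full_matrices=False)
    basis = Vt[:n_comp]                              # (n_comp, dim)

    def residual(probe, neutral):
        """Reconstruction residual of the probe's deformation w.r.t. the model."""
        d = probe - neutral - mean_def
        d_hat = basis.T @ (basis @ d)                # project onto expression subspace
        return np.linalg.norm(d - d_hat)

    # Recognition-by-residual: pick the gallery neutral face with the smallest error.
    gallery = {f"subject_{i}": rng.normal(size=dim) for i in range(5)}
    probe = gallery["subject_3"] + 0.3 * (basis.T @ rng.normal(size=n_comp))
    best = min(gallery, key=lambda k: residual(probe, gallery[k]))
    print(best)
    ```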

  9. Automatic Generation of 3D Caricatures Based on Artistic Deformation Styles.

    PubMed

    Clarke, Lyndsey; Chen, Min; Mora, Benjamin

    2011-06-01

    Caricatures are a form of humorous visual art, usually created by skilled artists with the intention of amusement and entertainment. In this paper, we present a novel approach for the automatic generation of digital caricatures from facial photographs, which captures artistic deformation styles from hand-drawn caricatures. We introduce a pseudo stress-strain model to encode the parameters of an artistic deformation style using "virtual" physical and material properties. We have also developed a software system for performing the caricaturistic deformation in 3D, which eliminates the undesirable artifacts of 2D caricaturization. We employ a Multilevel Free-Form Deformation (MFFD) technique to optimize a 3D head model reconstructed from an input facial photograph and to control the caricaturistic deformation. Our results demonstrate the effectiveness and usability of the proposed approach, which allows ordinary users to apply the captured and stored deformation styles to a variety of facial photographs.
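
    As a simplified stand-in for the Multilevel Free-Form Deformation mentioned above, the sketch below implements a single-level trivariate Bernstein FFD: moving lattice control points smoothly bends every embedded mesh vertex, which is the basic mechanism a caricaturistic exaggeration would drive. The lattice size, the displaced control point, and the point set are illustrative assumptions.

    ```python
    # Minimal single-level free-form deformation (FFD) sketch with a Bernstein
    # lattice; a simplified illustration, not the paper's multilevel FFD.
    import numpy as np
    from math import comb

    def bernstein(n, i, t):
        """Bernstein basis polynomial B_{i,n}(t)."""
        return comb(n, i) * t**i * (1 - t)**(n - i)

    def ffd(points, ctrl, box_min, box_max):
        """Deform points embedded in an axis-aligned lattice of control points.

        ctrl has shape (l+1, m+1, n+1, 3) and starts as the undeformed lattice;
        moving its entries bends every embedded point smoothly."""
        l, m, n = np.array(ctrl.shape[:3]) - 1
        stu = (points - box_min) / (box_max - box_min)     # local coords in [0, 1]
        out = np.zeros_like(points)
        for i in range(l + 1):
            for j in range(m + 1):
                for k in range(n + 1):
                    w = (bernstein(l, i, stu[:, 0]) *
                         bernstein(m, j, stu[:, 1]) *
                         bernstein(n, k, stu[:, 2]))
                    out += w[:, None] * ctrl[i, j, k]
        return out

    # Undeformed 3x3x3 lattice spanning the unit cube.
    grid = np.linspace(0, 1, 3)
    ctrl = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)

    # Exaggerate a "feature": push the central control point outward in z.
    ctrl[1, 1, 1, 2] += 0.4

    pts = np.random.rand(100, 3)                  # stand-in for face-mesh vertices
    deformed = ffd(pts, ctrl, np.zeros(3), np.ones(3))
    print(deformed.shape)
    ```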

  10. An overview of 3D software visualization.

    PubMed

    Teyseyre, Alfredo R; Campo, Marcelo R

    2009-01-01

    Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. For many years, visualization in 2D space has been actively studied, but in the last decade, researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects like: visual representations, interaction issues, evaluation methods and development tools. We also perform a survey of some representative tools to support different tasks, i.e., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude by identifying future research directions. PMID:19008558

  11. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  12. Facial attractiveness.

    PubMed

    Thornhill; Gangestad

    1999-12-01

    Humans in societies around the world discriminate between potential mates on the basis of attractiveness in ways that can dramatically affect their lives. From an evolutionary perspective, a reasonable working hypothesis is that the psychological mechanisms underlying attractiveness judgments are adaptations that have evolved in the service of choosing a mate so as to increase gene propagation throughout evolutionary history. The main hypothesis that has directed evolutionary psychology research into facial attractiveness is that these judgments reflect information about what can be broadly defined as an individual's health. This has been investigated by examining whether attractiveness judgments show special design for detecting cues that allow us to make assessments of overall phenotypic condition. This review examines the three major lines of research that have been pursued in order to answer the question of whether attractiveness reflects non-obvious indicators of phenotypic condition. These are studies that have examined facial symmetry, averageness, and secondary sex characteristics as hormone markers. PMID:10562724

  13. Towards Single Cell Traction Microscopy within 3D Collagen Matrices

    PubMed Central

    Hall, Matthew S.; Long, Rong; Feng, Xinzeng; Huang, YuLing; Hui, Chung-Yuen; Wu, Mingming

    2013-01-01

    Mechanical interaction between the cell and its extracellular matrix (ECM) regulates cellular behaviors, including proliferation, differentiation, adhesion, and migration. Cells require the three dimensional (3D) architectural support of the ECM to perform physiologically realistic functions. However, current understanding of cell-ECM and cell-cell mechanical interactions is largely derived from 2D cell traction force microscopy, in which cells are cultured on a flat substrate. 3D cell traction microscopy is emerging for mapping traction fields of single animal cells embedded in either synthetic or natively derived fibrous gels. We discuss here the development of 3D cell traction microscopy, its current limitations, and perspectives on the future of this technology. Emphasis is placed on strategies for applying 3D cell traction microscopy to individual tumor cells migration within collagen gels. PMID:23806281

  14. [Facial erythrosis].

    PubMed

    Coget, J M; Merlen, J F

    1979-01-01

    At the borders of two sister disciplines, facial erythrosis is a fairly disabling microvascular phenomenon, since it appears most frequently in women. From several rather special cases, the authors review the aetiology and differential diagnosis of these manifestations. An attempt is made to explain the pathogenesis of these phenomena. The authors stress the absence of treatment, but base their hopes on certain dimers or tetramers of hyaluronic acid, provided for their use by Dr CURRI. PMID:545362

  15. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  16. Forensic Facial Reconstruction: The Final Frontier

    PubMed Central

    Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan

    2015-01-01

    Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. Through this article, we attempt to review the different methods of facial reconstruction reported in the literature. There are several techniques of doing facial reconstruction, which vary from two dimensional drawings to three dimensional clay models. With the advancement in 3D technology, a rapid, efficient and cost effective computerized 3D forensic facial reconstruction method has been developed which has brought down the degree of error previously encountered. There are several methods of manual facial reconstruction but the combination Manchester method has been reported to be the best and most accurate method for the positive recognition of an individual. Recognition allows the involved government agencies to make a list of suspected victims. This list can then be narrowed down and a positive identification may be given by the more conventional method of forensic medicine. Facial reconstruction allows visual identification by the individual’s family and associates to become easy and more definite. PMID:26501035

  17. Forensic Facial Reconstruction: The Final Frontier.

    PubMed

    Gupta, Sonia; Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan

    2015-09-01

    Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. Through this article, we attempt to review the different methods of facial reconstruction reported in the literature. There are several techniques of doing facial reconstruction, which vary from two dimensional drawings to three dimensional clay models. With the advancement in 3D technology, a rapid, efficient and cost effective computerized 3D forensic facial reconstruction method has been developed which has brought down the degree of error previously encountered. There are several methods of manual facial reconstruction but the combination Manchester method has been reported to be the best and most accurate method for the positive recognition of an individual. Recognition allows the involved government agencies to make a list of suspected victims. This list can then be narrowed down and a positive identification may be given by the more conventional method of forensic medicine. Facial reconstruction allows visual identification by the individual's family and associates to become easy and more definite. PMID:26501035

  18. 3D multiplexed immunoplasmonics microscopy.

    PubMed

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-21

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K(+) channel subunit KV1.1) on human cancer CD44(+) EGFR(+) KV1.1(+) MDA-MB-231 cells and reference CD44(-) EGFR(-) KV1.1(+) 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third

  19. NIF Ignition Target 3D Point Design

    SciTech Connect

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) Configuration controlled input files; (2) Common file for 2D and 3D, different types of capsules (symcap, etc.); and (3) Can obtain target dimensions, laser pulse, and diagnostics settings automatically from NIF Campaign Management Tool. Using 3D Hydra calculations to investigate different problems: (1) Intrinsic 3D asymmetry; (2) Tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) Synthetic diagnostics.

  20. 3D Kitaev spin liquids

    NASA Astrophysics Data System (ADS)

    Hermanns, Maria

    The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.

  1. Locomotive wheel 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Luo, Zhisheng; Gao, Xiaorong; Wu, Jianle

    2010-08-01

    This article describes a system used to reconstruct locomotive wheels, helping workers assess the condition of a wheel through a direct view. The system consists of a line laser, a 2D camera, and a computer. We use the 2D camera to capture the line-laser light reflected by the object, a wheel, and then compute the final coordinates of the structured light. Finally, using the Matlab programming language, we transform the point coordinates into a smooth surface and illustrate the 3D view of the wheel. The article also presents the system structure, processing steps and methods, and sets up an experimental platform to verify the design proposal. We verify the feasibility of the whole process and analyze the results against standard data. The test results show that the system works well and reconstructs with high accuracy. Because no such application is yet used in the railway industry, the system has practical value for railway inspection.
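
    The reconstruction pipeline above was implemented in Matlab; as a hedged Python illustration of the per-frame step, the sketch below extracts the laser line by an intensity-weighted centroid in each image column and converts the image offset to height with a simplified, linearized triangulation relation. The camera geometry values are assumptions, not the article's calibration, and stacking one such profile per wheel rotation step would give the surface used for the 3D view.

    ```python
    # Illustrative laser-line extraction and simplified triangulation for a
    # line-laser + camera profiler. Geometry values are placeholder assumptions.
    import numpy as np

    # Synthetic 8-bit camera frame containing a bright laser line (one per column).
    frame = np.random.randint(0, 30, size=(480, 640), dtype=np.uint8)
    true_rows = (240 + 40 * np.sin(np.linspace(0, np.pi, 640))).astype(int)
    frame[true_rows, np.arange(640)] = 255

    # 1) Sub-pixel laser-line extraction: intensity-weighted centroid per column.
    rows = np.arange(frame.shape[0])[:, None].astype(float)
    weights = frame.astype(float)
    centroid = (rows * weights).sum(axis=0) / weights.sum(axis=0)

    # 2) Simplified, linearized triangulation: image displacement d relates to
    #    surface height h roughly as d = M * h * sin(theta), where M is the lens
    #    magnification and theta the laser/camera angle (both assumed here).
    pixel_size = 0.006          # mm per pixel on the sensor (assumed)
    magnification = 0.1         # lens magnification (assumed)
    theta = np.deg2rad(30.0)    # triangulation angle (assumed)
    offset_mm = (centroid - frame.shape[0] / 2) * pixel_size
    height = offset_mm / (magnification * np.sin(theta))

    # Each rotation step yields one such profile; a full revolution gives the surface.
    print(height.shape)
    ```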

  2. 3D ultrafast laser scanner

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, A.; Goda, K.; Wang, C.; Fard, A.; Adam, J.; Gossett, D. R.; Ayazi, A.; Sollier, E.; Malik, O.; Chen, E.; Liu, Y.; Brown, R.; Sarkhosh, N.; Di Carlo, D.; Jalali, B.

    2013-03-01

    Laser scanners are essential for scientific research, manufacturing, defense, and medical practice. Unfortunately, the speed of conventional laser scanners (e.g., galvanometric mirrors and acousto-optic deflectors) often falls short for many applications, resulting in motion blur and failure to capture fast transient information. Here, we present a novel type of laser scanner that offers roughly three orders of magnitude higher scan rates than conventional methods. Our laser scanner, which we refer to as the hybrid dispersion laser scanner, performs inertia-free laser scanning by dispersing a train of broadband pulses both temporally and spatially. More specifically, each broadband pulse is temporally processed by time stretch dispersive Fourier transform and further dispersed into space by one or more diffractive elements such as prisms and gratings. As a proof-of-principle demonstration, we perform 1D line scans at a record high scan rate of 91 MHz and 2D raster scans and 3D volumetric scans at an unprecedented scan rate of 105 kHz. The method holds promise for a broad range of scientific, industrial, and biomedical applications. To show the utility of our method, we demonstrate imaging, nanometer-resolved surface vibrometry, and high-precision flow cytometry with real-time throughput that conventional laser scanners cannot offer due to their low scan rates.

  3. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  4. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  5. An orthognathic simulation system integrating teeth, jaw and face data using 3D cephalometry.

    PubMed

    Noguchi, N; Tsuji, M; Shigematsu, M; Goto, M

    2007-07-01

    A method for simulating the movement of teeth, jaw and face caused by orthognathic surgery is proposed, characterized by the use of 3D cephalometric data for 3D simulation. Computed tomography data are not required. The teeth and facial data are obtained by a laser scanner and the data for the patient's mandible are reconstructed and integrated according to 3D cephalometry using a projection-matching technique. The mandibular form is simulated by transforming a generic model to match the patient's cephalometric data. This system permits analysis of bone movement at each individual part, while also helping in the choice of optimal osteotomy design considering the influences on facial soft-tissue form.

  6. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, user and state recognition of the reconstructed 3D human representation compared to animated computer avatars.

  7. Pediatric facial nerve rehabilitation.

    PubMed

    Banks, Caroline A; Hadlock, Tessa A

    2014-11-01

    Facial paralysis is a rare but severe condition in the pediatric population. Impaired facial movement has multiple causes and varied presentations, therefore individualized treatment plans are essential for optimal results. Advances in facial reanimation over the past 4 decades have given rise to new treatments designed to restore balance and function in pediatric patients with facial paralysis. This article provides a comprehensive review of pediatric facial rehabilitation and describes a zone-based approach to assessment and treatment of impaired facial movement.

  8. Forward ramp in 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Mars Pathfinder's forward rover ramp can be seen successfully unfurled in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This ramp was not used for the deployment of the microrover Sojourner, which occurred at the end of Sol 2. When this image was taken, Sojourner was still latched to one of the lander's petals, waiting for the command sequence that would execute its descent off of the lander's petal.

    The image helped Pathfinder scientists determine whether to deploy the rover using the forward or backward ramps and the nature of the first rover traverse. The metallic object at the lower left of the image is the lander's low-gain antenna. The square at the end of the ramp is one of the spacecraft's magnetic targets. Dust that accumulates on the magnetic targets will later be examined by Sojourner's Alpha Proton X-Ray Spectrometer instrument for chemical analysis. At right, a lander petal is visible.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    [Left and right stereo views removed for brevity, see original site.]

  9. 3D grain boundary migration

    NASA Astrophysics Data System (ADS)

    Becker, J. K.; Bons, P. D.

    2009-04-01

    Microstructures of rocks play an important role in determining rheological properties and help to reveal the processes that lead to their formation. Some of these processes change the microstructure significantly and may thus have the opposite effect in obliterating any fabrics indicative of the previous history of the rocks. One of these processes is grain boundary migration (GBM). During static recrystallisation, GBM may produce a foam texture that completely overprints a pre-existing grain boundary network, and GBM actively influences the rheology of a rock via its influence on grain size and lattice defect concentration. We present new numerical simulation software that is capable of simulating a whole range of processes on the grain scale (it is not limited to grain boundary migration). The software is polyhedron-based, meaning that each grain (or phase) is represented by a polyhedron that has discrete boundaries. The boundary (the shell) of the polyhedron is defined by a set of facets which in turn is defined by a set of vertices. Each structural entity (polyhedron, facets and vertices) can have an unlimited number of parameters (depending on the process to be modeled) such as surface energy, concentration, etc., which can be used to calculate changes of the microstructure. We use the processes of grain boundary migration of a "regular" and a partially molten rock to demonstrate the software. Since this software is 3D, the formation of melt networks in a partially molten rock can also be studied. The interconnected melt network is of fundamental importance for melt segregation and migration in the crust and mantle and can help to understand the core-mantle differentiation of large terrestrial planets.
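
    A minimal sketch of the polyhedron-based hierarchy described above: each grain is a polyhedron whose shell is a set of facets defined by vertices, and every entity carries an open-ended parameter set. Class names, the energy attributes, and the toy tetrahedral grain are illustrative assumptions, not the simulator's actual API.

    ```python
    # Illustrative sketch of a polyhedron-based grain data structure in which each
    # structural entity (vertex, facet, polyhedron) carries arbitrary parameters.
    # Class and attribute names are assumptions, not the simulator's actual API.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class Vertex:
        position: Tuple[float, float, float]
        params: Dict[str, float] = field(default_factory=dict)    # e.g. concentration

    @dataclass
    class Facet:
        vertex_ids: List[int]                                      # indices into Grain.vertices
        params: Dict[str, float] = field(default_factory=dict)     # e.g. surface energy

    @dataclass
    class Grain:                    # one polyhedron = one grain (or melt pocket)
        vertices: List[Vertex]
        facets: List[Facet]
        params: Dict[str, float] = field(default_factory=dict)     # e.g. stored energy

    def boundary_energy(grain: Grain) -> float:
        """Total boundary energy: per-facet energy summed over the grain's shell."""
        return sum(f.params.get("surface_energy", 0.0) for f in grain.facets)

    # A single tetrahedral grain as a toy example.
    verts = [Vertex((0, 0, 0)), Vertex((1, 0, 0)), Vertex((0, 1, 0)), Vertex((0, 0, 1))]
    faces = [Facet([0, 1, 2], {"surface_energy": 1.0}),
             Facet([0, 1, 3], {"surface_energy": 1.0}),
             Facet([0, 2, 3], {"surface_energy": 1.0}),
             Facet([1, 2, 3], {"surface_energy": 1.0})]
    grain = Grain(verts, faces, {"stored_energy": 0.2})
    print(boundary_energy(grain))
    ```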

  10. Fusion of cone-beam CT and 3D photographic images for soft tissue simulation in maxillofacial surgery

    NASA Astrophysics Data System (ADS)

    Chung, Soyoung; Kim, Joojin; Hong, Helen

    2016-03-01

    During maxillofacial surgery, prediction of the facial outcome after surgery is a main concern for both surgeons and patients. However, registration of facial CBCT images and 3D photographic images has some difficulties: regions around the eyes and mouth are affected by facial expressions, and the registration speed is low due to the dense point clouds on the surfaces. Therefore, we propose a framework for the fusion of facial CBCT images and 3D photos with skin segmentation and two-stage surface registration. Our method is composed of three major steps. First, to obtain a CBCT skin surface for the registration with the 3D photographic surface, skin is automatically segmented from CBCT images and the skin surface is generated by surface modeling. Second, to roughly align the scale and the orientation of the CBCT skin surface and 3D photographic surface, point-based registration with four corresponding landmarks located around the mouth is performed. Finally, to merge the CBCT skin surface and 3D photographic surface, Gaussian-weight-based surface registration is performed within a narrow band of the 3D photographic surface.
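
    As a hedged illustration of the second step above (rough alignment of scale and orientation from four corresponding mouth-region landmarks), the sketch below uses the standard Umeyama/Kabsch closed-form similarity fit; the landmark coordinates are placeholders, and the skin segmentation and Gaussian-weighted surface merge are not shown.

    ```python
    # Illustrative landmark-based similarity registration (rotation, uniform scale,
    # translation) from four corresponding points, in the spirit of the rough
    # alignment step described above. Landmark values are placeholders.
    import numpy as np

    def similarity_from_landmarks(src, dst):
        """Umeyama-style closed-form fit mapping src landmarks onto dst."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        S, D = src - mu_s, dst - mu_d
        U, sig, Vt = np.linalg.svd(D.T @ S)
        sign = np.sign(np.linalg.det(U @ Vt))
        R = U @ np.diag([1.0, 1.0, sign]) @ Vt
        scale = (sig * [1, 1, sign]).sum() / (S ** 2).sum()
        t = mu_d - scale * R @ mu_s
        return scale, R, t

    # Four assumed mouth-region landmarks on the CBCT skin surface (placeholders).
    cbct_lm = np.array([[10.0, 2.0, 5.0], [14.0, 2.5, 5.2],
                        [12.0, 0.0, 6.0], [12.0, 4.5, 6.1]])

    # Synthetic 3D-photo landmarks: a known similarity transform of the CBCT ones.
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    true_R = Q * np.sign(np.linalg.det(Q))          # ensure a proper rotation
    photo_lm = 1.3 * cbct_lm @ true_R.T + np.array([5.0, -2.0, 1.0])

    s, R, t = similarity_from_landmarks(cbct_lm, photo_lm)
    aligned = s * cbct_lm @ R.T + t
    print(np.abs(aligned - photo_lm).max())         # residual should be near zero
    ```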

  11. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  12. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography as well as promoting 3D photography not only for scientists, but also for amateurs. Due to the presentation of this article by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, as well as by means of a ground based high resolution XLITE staff camera, and also 3D photographs taken from a captive balloon and with civil drone platforms, are dealt with. To advise on optimally suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of the 3D visualization technology, even claiming completeness, has been carried out as a result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D, which, due to a lack of resolution, contrast and color, recall the stage of the invention of photography.

  13. 3D Printing and Its Urologic Applications.

    PubMed

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology.

  14. Beowulf 3D: a case study

    NASA Astrophysics Data System (ADS)

    Engle, Rob

    2008-02-01

    This paper discusses the creative and technical challenges encountered during the production of "Beowulf 3D," director Robert Zemeckis' adaptation of the Old English epic poem and the first film to be simultaneously released in IMAX 3D and digital 3D formats.

  15. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  16. Expanding Geometry Understanding with 3D Printing

    ERIC Educational Resources Information Center

    Cochran, Jill A.; Cochran, Zane; Laney, Kendra; Dean, Mandi

    2016-01-01

    With the rise of personal desktop 3D printing, a wide spectrum of educational opportunities has become available for educators to leverage this technology in their classrooms. Until recently, the ability to create physical 3D models was well beyond the scope, skill, and budget of many schools. However, since desktop 3D printers have become readily…

  17. 3D Elastic Seismic Wave Propagation Code

    1998-09-23

    E3D is capable of simulating seismic wave propagation in a 3D heterogeneous earth. Seismic waves are initiated by earthquake, explosive, and/or other sources. These waves propagate through a 3D geologic model, and are simulated as synthetic seismograms or other graphical output.

  18. 3D Visualization Types in Multimedia Applications for Science Learning: A Case Study for 8th Grade Students in Greece

    ERIC Educational Resources Information Center

    Korakakis, G.; Pavlatou, E. A.; Palyvos, J. A.; Spyrellis, N.

    2009-01-01

    This research aims to determine whether the use of specific types of visualization (3D illustration, 3D animation, and interactive 3D animation) combined with narration and text contributes to the learning process of 13- and 14-year-old students in science courses. The study was carried out with 212 8th grade students in Greece. This…

  19. Facial restoration.

    PubMed

    Diner, J

    1975-07-01

    Medical science has demonstrated that fiction can be turned into fact. It is prophesied that man will be able to live longer due to the development of synthetic organs. Sophisticated facial prostheses will be included in this progressive field. Perhaps the next century will make synthetic substitutes past history with the transplantation of organs as established practice. Or, perhaps some of the latest developments of growing skin or the use of carbonated teflon inserts will replace currently used plastics. In the meantime, we must continue to work within the limitations of our present technology. PMID:1228185

  20. Facial restoration.

    PubMed

    Diner, J

    1975-07-01

    Medical science has demonstrated that fiction can be turned into fact. It is prophesied that man will be able to live longer due to the development of synthetic organs. Sophisticated facial prostheses will be included in this progressive field. Perhaps the next century will make synthetic substitutes past history with the transplantation of organs as established practice. Or, perhaps some of the latest developments of growing skin or the use of carbonated teflon inserts will replace currently used plastics. In the meantime, we must continue to work within the limitations of our present technology.

  1. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Cañada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as those shown in this image can be used to predict both how wildfires will spread over the terrain and how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  2. [Surgical facial reanimation after persisting facial paralysis].

    PubMed

    Pasche, Philippe

    2011-10-01

    Facial reanimation following persistent facial paralysis can be managed with surgical procedures of varying complexity. The choice of technique is mainly determined by the cause of the facial paralysis and by the age and desires of the patient. The techniques most commonly used are nerve grafts (VII-VII, XII-VII, cross-facial graft), dynamic muscle transfers (temporal myoplasty, free muscle transfer) and static suspensions. Intensive rehabilitation through specific exercises after all procedures is essential to achieve good results.

  3. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  4. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  5. Toward single cell traction microscopy within 3D collagen matrices

    SciTech Connect

    Hall, Matthew S.; Long, Rong; Feng, Xinzeng; Huang, YuLing; Hui, Chung-Yuen; Wu, Mingming

    2013-10-01

    Mechanical interaction between the cell and its extracellular matrix (ECM) regulates cellular behaviors, including proliferation, differentiation, adhesion, and migration. Cells require the three-dimensional (3D) architectural support of the ECM to perform physiologically realistic functions. However, current understanding of cell–ECM and cell–cell mechanical interactions is largely derived from 2D cell traction force microscopy, in which cells are cultured on a flat substrate. 3D cell traction microscopy is emerging for mapping traction fields of single animal cells embedded in either synthetic or natively derived fibrous gels. We discuss here the development of 3D cell traction microscopy, its current limitations, and perspectives on the future of this technology. Emphasis is placed on strategies for applying 3D cell traction microscopy to individual tumor cell migration within collagen gels. - Highlights: • Review of the current state of the art in 3D cell traction force microscopy. • Bulk and micro-characterization of remodelable fibrous collagen gels. • Strategies for performing 3D cell traction microscopy within collagen gels.

  8. Facial biometrics of peri-oral changes in Crohn's disease.

    PubMed

    Zou, L; Adegun, O K; Willis, A; Fortune, Farida

    2014-05-01

    Crohn's disease is a chronic relapsing and remitting inflammatory condition which affects any part of the gastrointestinal tract. In the oro-facial region, patients can present with peri-oral swellings, which result in severe facial disfigurement. To date, assessing the degree of facial change and evaluating treatment outcomes have relied on clinical observation and semi-quantitative methods. In this paper, we describe the development of a robust and reproducible measurement strategy using 3-D facial biometrics to objectively quantify the extent and progression of oro-facial Crohn's disease. Using facial laser scanning, 32 serial images from 13 Crohn's patients attending the Oral Medicine clinic were acquired during relapse, remission, and post-treatment phases. Utilising theories of coordinate metrology, the facial images were subjected to registration, identification of regions of interest, and reproducible repositioning prior to obtaining volume measurements. To quantify the changes in tissue volume, scan images from consecutive appointments were compared to the baseline (first scan image). A reproducibility test was performed to ascertain the degree of uncertainty in the volume measurements. 3-D facial biometric imaging is a reliable method to identify and quantify peri-oral swelling in Crohn's patients. Comparison of facial scan images at different phases of the disease precisely revealed profile and volume changes. The volume measurements were highly reproducible, as judged from the 1% standard deviation. 3-D facial biometric measurements in Crohn's patients with oro-facial involvement offer a quick, robust, economical and objective approach for guided therapeutic intervention and routine assessment of treatment efficacy in the clinic.
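
    The volume quantification described above reduces, once the scans are registered and resampled onto a common grid, to integrating the per-pixel surface displacement over a region of interest. The sketch below illustrates only that idea under those assumptions; it is not the coordinate-metrology pipeline used in the study, and the function name and arguments are hypothetical:

      import numpy as np

      def roi_volume_change(baseline_depth, followup_depth, roi_mask, pixel_area_mm2):
          """Approximate soft-tissue volume change (mm^3) between two registered
          facial surface scans resampled onto a common depth grid: integrate the
          per-pixel surface displacement over the peri-oral region of interest."""
          displacement = (followup_depth - baseline_depth)[roi_mask]
          return float(displacement.sum() * pixel_area_mm2)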

  9. RELAP5-3D User Problems

    SciTech Connect

    Riemke, Richard Allan

    2002-09-01

    The Reactor Excursion and Leak Analysis Program with 3D capability (RELAP5-3D) [1] is a reactor system analysis code that has been developed at the Idaho National Engineering and Environmental Laboratory (INEEL) for the U. S. Department of Energy (DOE). The 3D capability in RELAP5-3D includes 3D hydrodynamics [2] and 3D neutron kinetics [3,4]. Assessment, verification, and validation of the 3D capability in RELAP5-3D is discussed in the literature [5-10]. Additional assessment, verification, and validation of the 3D capability of RELAP5-3D will be presented in other papers in this users seminar. As with any software, user problems occur. User problems usually fall into the categories of input processing failure, code execution failure, restart/renodalization failure, unphysical result, and installation. This presentation will discuss some of the more generic user problems that have been reported on RELAP5-3D as well as their resolution.

  10. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  11. Identity information content depends on the type of facial movement

    PubMed Central

    Dobs, Katharina; Bülthoff, Isabelle; Schultz, Johannes

    2016-01-01

    Facial movements convey information about many social cues, including identity. However, how much information about a person’s identity is conveyed by different kinds of facial movements is unknown. We addressed this question using a recent motion capture and animation system, with which we animated one avatar head with facial movements of three types: (1) emotional, (2) emotional in social interaction and (3) conversational, all recorded from several actors. In a delayed match-to-sample task, observers were best at matching actor identity across conversational movements, worse with emotional movements in social interactions, and at chance level with emotional facial expressions. Model observers performing this task showed similar performance profiles, indicating that performance variation was due to differences in information content, rather than processing. Our results suggest that conversational facial movements transmit more dynamic identity information than emotional facial expressions, thus suggesting different functional roles and processing mechanisms for different types of facial motion. PMID:27683087

  12. 3-D Technology Approaches for Biological Ecologies

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Austin, Robert; U. S-China Physical-Oncology Sciences Alliance (PS-OA) Team

    Constructing three-dimensional (3-D) landscapes is unavoidable in the deep study of biological ecologies, because at every scale in nature ecosystems are composed of complex 3-D environments and biological behaviors. If a 3-D technology could allow complex ecosystems to be built easily and mimic the in vivo microenvironment realistically, with flexible environmental controls, it would be a powerful tool to assist researchers in their explorations. For years, we have been utilizing and developing different technologies for constructing 3-D micro-landscapes for in vitro biophysics studies. Here, I will review our past efforts, including probing cancer cell invasiveness with 3-D silicon-based Tepuis, constructing 3-D microenvironments for cell invasion and metastasis through polydimethylsiloxane (PDMS) soft lithography, and exploring optimized stenting positions for coronary bifurcation disease with 3-D wax printing and the latest home-designed 3-D bio-printer. Although 3-D technologies are not yet considered mature enough for arbitrary 3-D micro-ecological models with easy design and fabrication, I hope this talk will convey their significance and the breakthroughs that can be expected in the near future. This work was supported by the State Key Development Program for Basic Research of China (Grant No. 2013CB837200), the National Natural Science Foundation of China (Grant No. 11474345) and the Beijing Natural Science Foundation (Grant No. 7154221).

  13. Automatic 3D video format detection

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Wang, Zhe; Zhai, Jiefu; Doyen, Didier

    2011-03-01

    Many 3D formats exist and will probably co-exist for a long time, even though 3D standards are still being defined. Support for multiple 3D formats will be important for bringing 3D into the home. In this paper, we propose a novel and effective method to detect whether a video is a 3D video or not, and to further identify the exact 3D format. First, we present how to detect those 3D formats that encode a pair of stereo images into a single image. The proposed method detects features and establishes correspondences between features in the left and right view images, and applies statistics from the distribution of the positional differences between corresponding features to detect the existence of a 3D format and to identify the format. Second, we present how to detect the frame sequential 3D format. In the frame sequential 3D format, the feature points oscillate from frame to frame. Similarly, the proposed method tracks feature points over consecutive frames, computes the positional differences between features, and makes a detection decision based on whether the features are oscillating. Experiments show the effectiveness of our method.
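
    The half-image comparison for formats that pack a stereo pair into one frame can be prototyped in a few lines. The sketch below is only a hedged illustration of that idea, not the authors' detector; it assumes OpenCV and NumPy, and the max_dy and min_matches thresholds are arbitrary placeholders. A frame-sequential check would instead track the same features over consecutive frames and test whether their positions oscillate.

      import cv2
      import numpy as np

      def looks_like_side_by_side(frame_bgr, max_dy=2.0, min_matches=30):
          """Heuristic check: do the two halves of a frame behave like a stereo pair?"""
          w = frame_bgr.shape[1]
          gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
          left, right = gray[:, : w // 2], gray[:, w // 2 :]

          orb = cv2.ORB_create(nfeatures=1000)
          kp_l, des_l = orb.detectAndCompute(left, None)
          kp_r, des_r = orb.detectAndCompute(right, None)
          if des_l is None or des_r is None:
              return False

          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_l, des_r)
          if len(matches) < min_matches:
              return False

          # Positional differences between corresponding features in the two halves.
          dy = np.array([kp_r[m.trainIdx].pt[1] - kp_l[m.queryIdx].pt[1] for m in matches])
          dx = np.array([kp_r[m.trainIdx].pt[0] - kp_l[m.queryIdx].pt[0] for m in matches])

          # A stereo pair shows near-zero vertical offsets with a spread of
          # horizontal disparities; unrelated halves do not.
          return bool(np.median(np.abs(dy)) < max_dy and np.std(dx) > 0.1)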

  14. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations that describe reactive-flow and transport of multiple mobile and/or immobile species in a three dimensional saturated porous media. RT3D was developed from the single-species transport code, MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D also uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in groundwater head distribution. This report presents a set of tutorial problems that are designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by RT3D code, and later the code can be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported to GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials, described below, provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.

  15. Measuring Facial Movement

    ERIC Educational Resources Information Center

    Ekman, Paul; Friesen, Wallace V.

    1976-01-01

    The Facial Action Code (FAC) was derived from an analysis of the anatomical basis of facial movement. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed. (Author)

  16. 3D Camouflage in an Ornithischian Dinosaur.

    PubMed

    Vinther, Jakob; Nicholls, Robert; Lautenschlager, Stephan; Pittman, Michael; Kaye, Thomas G; Rayfield, Emily; Mayr, Gerald; Cuthill, Innes C

    2016-09-26

    Countershading was one of the first proposed mechanisms of camouflage [1, 2]. A dark dorsum and light ventrum counteract the gradient created by illumination from above, obliterating cues to 3D shape [3-6]. Because the optimal countershading varies strongly with light environment [7-9], pigmentation patterns give clues to an animal's habitat. Indeed, comparative evidence from ungulates [9] shows that interspecific variation in countershading matches predictions: in open habitats, where direct overhead sunshine dominates, a sharp dark-light color transition high up the body is evident; in closed habitats (e.g., under forest canopy), diffuse illumination dominates and a smoother dorsoventral gradation is found. We can apply this approach to extinct animals in which the preservation of fossil melanin allows reconstruction of coloration [10-15]. Here we present a study of an exceptionally well-preserved specimen of Psittacosaurus sp. from the Chinese Jehol biota [16, 17]. This Psittacosaurus was countershaded [16] with a light underbelly and tail, whereas the chest was more pigmented. Other patterns resemble disruptive camouflage, whereas the chin and jugal bosses on the face appear dark. We projected the color patterns onto an anatomically accurate life-size model in order to assess their function experimentally. The patterns are compared to the predicted optimal countershading from the measured radiance patterns generated on an identical uniform gray model in direct versus diffuse illumination. These studies suggest that Psittacosaurus sp. inhabited a closed habitat such as a forest with a relatively dense canopy. VIDEO ABSTRACT. PMID:27641767

  17. Combining 3D technologies for cultural heritage interpretation and entertainment

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Picard, Michel; El-Hakim, Sabry F.; Godin, Guy; Valzano, Virginia; Bandiera, Adriana

    2004-12-01

    This paper presents a summary of the 3D modeling work that was accomplished in preparing multimedia products for cultural heritage interpretation and entertainment. The three cases presented are the Byzantine Crypt of Santa Cristina, Apulia, temple C of Selinunte, Sicily, and a bronze sculpture from the 6th century BC found in Ugento, Apulia. The core of the approach is based upon high-resolution photo-realistic texture mapping onto 3D models generated from range images. It is shown that three-dimensional modeling from range imaging is an effective way to present the spatial information for environments and artifacts. Spatial sampling and range measurement uncertainty considerations are addressed by giving the results of a number of tests on different range cameras. The integration of both photogrammetric and CAD modeling complements this approach. Results on a CDROM, a DVD, virtual 3D theatre, holograms, video animations and web pages have been prepared for these projects.

  19. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).

  20. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular, the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer via an additive process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  1. Stereo 3-D Vision in Teaching Physics

    NASA Astrophysics Data System (ADS)

    Zabunov, Svetoslav

    2012-03-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The current paper describes the modern stereo 3-D technologies that are applicable to various tasks in teaching physics in schools, colleges, and universities. Examples of stereo 3-D simulations developed by the author can be observed online.

  2. Software for 3D radiotherapy dosimetry. Validation

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr; Karwowski, Andrzej C.

    2014-08-01

    The subject of this work is polyGeVero® software (GeVero Co., Poland), which has been developed to fill the requirements of fast calculations of 3D dosimetry data with the emphasis on polymer gel dosimetry for radiotherapy. This software comprises four workspaces that have been prepared for: (i) calculating calibration curves and calibration equations, (ii) storing the calibration characteristics of the 3D dosimeters, (iii) calculating 3D dose distributions in irradiated 3D dosimeters, and (iv) comparing 3D dose distributions obtained from measurements with the aid of 3D dosimeters and calculated with the aid of treatment planning systems (TPSs). The main features and functions of the software are described in this work. Moreover, the core algorithms were validated and the results are presented. The validation was performed using the data of the new PABIGnx polymer gel dosimeter. The polyGeVero® software simplifies and greatly accelerates the calculations of raw 3D dosimetry data. It is an effective tool for fast verification of TPS-generated plans for tumor irradiation when combined with a 3D dosimeter. Consequently, the software may facilitate calculations by the 3D dosimetry community. In this work, the calibration characteristics of the PABIGnx obtained through four calibration methods: multi vial, cross beam, depth dose, and brachytherapy, are discussed as well.
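
    A calibration workspace of this kind ultimately maps a measured readout signal (for example, the R2 relaxation rate of an irradiated polymer gel) to dose and then applies that mapping voxel by voxel to a scanned volume. The sketch below is a generic, hedged illustration of that step only, not polyGeVero® itself; the linear fit and the variable names are assumptions:

      import numpy as np

      def fit_calibration(signal, known_dose, degree=1):
          """Fit a calibration curve (dose as a function of the measured signal)
          from calibration vials irradiated to known doses."""
          return np.polyfit(signal, known_dose, degree)

      def apply_calibration(coeffs, signal_volume):
          """Convert a measured 3D signal volume into a 3D dose distribution,
          voxel by voxel, using the fitted calibration coefficients."""
          return np.polyval(coeffs, signal_volume)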

  3. [3D reconstructions in radiotherapy planning].

    PubMed

    Schlegel, W

    1991-10-01

    3D reconstructions from tomographic images are used in the planning of radiation therapy to study important anatomical structures such as the body surface, target volumes, and organs at risk. The reconstructed anatomical models are used to define the geometry of the radiation beams. In addition, 3D voxel models are used for the calculation of 3D dose distributions with an accuracy previously impossible to achieve. Further uses of 3D reconstructions are in the display and evaluation of 3D therapy plans, and in the transfer of treatment planning parameters to the irradiation situation with the help of digitally reconstructed radiographs. 3D tomographic imaging with subsequent 3D reconstruction must be regarded as a completely new basis for the planning of radiation therapy, enabling tumor-tailored radiation therapy of localized target volumes with increased radiation doses and improved sparing of organs at risk. 3D treatment planning is currently being evaluated in clinical trials in connection with the new treatment techniques of conformation radiotherapy. Early experience with 3D treatment planning shows that its clinical importance in radiotherapy is growing, but that it will only become a standard radiotherapy tool when volumetric CT scanning, reliable and user-friendly treatment planning software, and faster and cheaper PACS-integrated medical workstations are accessible to radiotherapists.

  4. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    PubMed

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently, modern imaging techniques such as 3D surface scanning and radiological methods (computed tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expert reports are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya'), and present methods of clearly distinguishing between factual documentation and examiners' interpretation, based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image. PMID:21979427

  5. Synthesis of image sequences for Korean sign language using 3D shape model

    NASA Astrophysics Data System (ADS)

    Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon

    1995-05-01

    This paper proposes a method for conveying information to, and communicating with, deaf people by means of synthesized sign language. The deaf communicate with other people by means of sign language, but most people are unfamiliar with it. The method converts text data into the corresponding image sequences for Korean sign language (KSL). A general 3D shape model of the upper body is used to generate the 3D motions of KSL. This general 3D shape model must be constructed with consideration of the anatomical structure of the human body. To obtain a personal 3D shape model, the general model is adjusted to personal base images. Image synthesis for KSL consists of deforming the personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise facial expressions and 3D movements of the head, trunk, arms and hands, and are parameterized so that the model can be deformed easily. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and stored in a database. Editing the parameters according to the input text data yields the image sequences of the 3D motions.

  6. A 2D range Hausdorff approach for 3D face recognition.

    SciTech Connect

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2005-04-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
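
    As a rough illustration of the matching step, a directed Hausdorff-style distance between a probe and a template range image can be computed with a k-d tree over the template points; taking a high quantile of the nearest-neighbour distances instead of the strict maximum gives some of the outlier robustness mentioned above. This is only a sketch under those assumptions, not the paper's 2D-range formulation, and it ignores the pixel-to-metric scaling of the range data:

      import numpy as np
      from scipy.spatial import cKDTree

      def range_image_to_points(range_img, valid_min=0.0):
          """Convert a 2D range image (one depth value per pixel) into an N x 3 point set."""
          ys, xs = np.nonzero(range_img > valid_min)
          return np.column_stack([xs, ys, range_img[ys, xs]]).astype(float)

      def directed_hausdorff(probe_points, template_points, quantile=0.9):
          """Partial directed Hausdorff distance: a high quantile of the
          nearest-neighbour distances from probe points to the template,
          more robust to outliers and self-occlusion than the maximum."""
          distances, _ = cKDTree(template_points).query(probe_points, k=1)
          return float(np.quantile(distances, quantile))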

  7. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
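
    For a single orthogonal view, the rigid search described above amounts to minimizing a sum-of-squared-differences cost over one rotation and two shifts with Powell's conjugate-direction method; the full pseudo-3D scheme then repeats this over the transaxial, sagittal and coronal views and iterates. The sketch below illustrates only that single-view step, assuming SciPy and NumPy; the interpolation order and the parameterization are arbitrary choices, not the authors' implementation:

      import numpy as np
      from scipy import ndimage, optimize

      def ssd_cost(params, fixed, moving):
          """Sum of squared differences after a 2D rigid transform
          (rotation in degrees, then x/y shifts) of the moving image."""
          angle, tx, ty = params
          warped = ndimage.rotate(moving, angle, reshape=False, order=1)
          warped = ndimage.shift(warped, (ty, tx), order=1)
          return float(np.sum((fixed - warped) ** 2))

      def register_view_rigid(fixed, moving):
          """Search the three rigid parameters of one orthogonal view with
          Powell's conjugate-direction method."""
          result = optimize.minimize(ssd_cost, x0=np.zeros(3),
                                     args=(fixed, moving), method="Powell")
          return result.x  # (angle, tx, ty) for this view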

  8. 3D PDF - a means of public access to geological 3D objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them, or through freely available viewers. The platform-independent PDF (Portable Document Format), introduced by Adobe, has become widely distributed and has constantly evolved over time. It is now possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (since version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create moveable cross-sections (profiles) and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow image files (maps) to be used as textures and that represent colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales facilitate its use.
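
    The conversion step mentioned above comes down to emitting the model geometry in VRML syntax. The sketch below writes a bare triangle mesh as a VRML 2.0 IndexedFaceSet; it is only a minimal illustration of the format, not the project's converter (which also handles textures, map overlays and object grouping), and the function name is hypothetical:

      def write_vrml_indexed_face_set(path, vertices, faces):
          """Write a triangle mesh (lists of (x, y, z) vertices and (a, b, c)
          vertex-index triples) as a minimal VRML 2.0 IndexedFaceSet."""
          with open(path, "w") as f:
              f.write("#VRML V2.0 utf8\n")
              f.write("Shape {\n  geometry IndexedFaceSet {\n")
              f.write("    coord Coordinate {\n      point [\n")
              for x, y, z in vertices:
                  f.write(f"        {x} {y} {z},\n")
              f.write("      ]\n    }\n    coordIndex [\n")
              for a, b, c in faces:
                  f.write(f"      {a}, {b}, {c}, -1,\n")
              f.write("    ]\n  }\n}\n")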

  9. Extra dimensions: 3D and time in PDF documentation

    NASA Astrophysics Data System (ADS)

    Graf, N. A.

    2011-01-01

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  11. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  14. An aerial 3D printing test mission

    NASA Astrophysics Data System (ADS)

    Hirsch, Michael; McGuire, Thomas; Parsons, Michael; Leake, Skye; Straub, Jeremy

    2016-05-01

    This paper provides an overview of an aerial 3D printing technology, its development and its testing. This technology is potentially useful in its own right. In addition, this work advances the development of a related in-space 3D printing technology. A series of aerial 3D printing test missions, used to test the aerial printing technology, are discussed. Through completing these test missions, the design for an in-space 3D printer may be advanced. The current design for the in-space 3D printer involves focusing thermal energy to heat an extrusion head and allow for the extrusion of molten print material. Plastics can be used as well as composites including metal, allowing for the extrusion of conductive material. A variety of experiments will be used to test this initial 3D printer design. High altitude balloons will be used to test the effects of microgravity on 3D printing, as well as parabolic flight tests. Zero pressure balloons can be used to test the effect of long 3D printing missions subjected to low temperatures. Vacuum chambers will be used to test 3D printing in a vacuum environment. The results will be used to adapt a current prototype of an in-space 3D printer. Then, a small scale prototype can be sent into low-Earth orbit as a 3-U cube satellite. With the ability to 3D print in space demonstrated, future missions can launch production hardware through which the sustainability and durability of structures in space will be greatly improved.

  15. 3D computer data capture and imaging applied to the face and jaws.

    PubMed

    Spencer, R; Hathaway, R; Speculand, B

    1996-02-01

    There have been few attempts in the past at 3D computer modelling of facial deformity because of the difficulties with generating accurate three-dimensional data and subsequent image regeneration and manipulation. We report the application of computer aided engineering techniques to the study of jaw deformity. The construction of a 3D image of the mandible using a Ferranti co-ordinate measuring machine for data capture and the 'DUCT5' surface modelling programme for image regeneration is described. The potential application of this work will be discussed. PMID:8645664

  16. Interactive Cosmetic Makeup of a 3D Point-Based Face Model

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Sik; Choi, Soo-Mi

    We present an interactive system for cosmetic makeup of a point-based face model acquired by 3D scanners. We first enhance the texture of a face model in 3D space using low-pass Gaussian filtering, median filtering, and histogram equalization. The user is provided with a stereoscopic display and haptic feedback, and can perform simulated makeup tasks including the application of foundation, color makeup, and lip gloss. Fast rendering is achieved by processing surfels using the GPU, and we use a BSP tree data structure and a dynamic local refinement of the facial surface to provide interactive haptics. We have implemented a prototype system and evaluated its performance.
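
    The texture-enhancement pass listed above (low-pass Gaussian smoothing, median filtering and histogram equalization) can be illustrated on a 2D texture image with standard OpenCV calls. The sketch below is a hedged 2D analogue only, not the paper's surfel-based processing in 3D space, and the filter parameters are arbitrary:

      import cv2

      def enhance_face_texture(texture_bgr, sigma=1.5, median_ksize=3):
          """Low-pass Gaussian smoothing, median filtering and histogram
          equalization of the luminance channel of an 8-bit face texture."""
          smoothed = cv2.GaussianBlur(texture_bgr, (0, 0), sigmaX=sigma)
          denoised = cv2.medianBlur(smoothed, median_ksize)
          ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
          ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
          return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)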

  17. Wow! 3D Content Awakens the Classroom

    ERIC Educational Resources Information Center

    Gordon, Dan

    2010-01-01

    From her first encounter with stereoscopic 3D technology designed for classroom instruction, Megan Timme, principal at Hamilton Park Pacesetter Magnet School in Dallas, sensed it could be transformative. Last spring, when she began pilot-testing 3D content in her third-, fourth- and fifth-grade classrooms, Timme wasn't disappointed. Students…

  18. 3D, or Not to Be?

    ERIC Educational Resources Information Center

    Norbury, Keith

    2012-01-01

    It may be too soon for students to be showing up for class with popcorn and gummy bears, but technology similar to that behind the 3D blockbuster movie "Avatar" is slowly finding its way into college classrooms. 3D classroom projectors are taking students on fantastic voyages inside the human body, to the ruins of ancient Greece--even to faraway…

  19. 3D Printed Block Copolymer Nanostructures

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Turner, C. Heath; Rupar, Paul A.; Jenkins, Alexander H.; Bara, Jason E.

    2015-01-01

    The emergence of 3D printing has dramatically advanced the availability of tangible molecular and extended solid models. Interestingly, there are few nanostructure models available both commercially and through other do-it-yourself approaches such as 3D printing. This is unfortunate given the importance of nanotechnology in science today. In this…

  20. Immersive 3D Geovisualization in Higher Education

    ERIC Educational Resources Information Center

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  1. 3D elastic control for mobile devices.

    PubMed

    Hachet, Martin; Pouderoux, Joachim; Guitton, Pascal

    2008-01-01

    To increase the input space of mobile devices, the authors developed a proof-of-concept 3D elastic controller that easily adapts to mobile devices. This embedded device improves the completion of high-level interaction tasks such as visualization of large documents and navigation in 3D environments. It also opens new directions for tomorrow's mobile applications.

  2. Static & Dynamic Response of 3D Solids

    1996-07-15

    NIKE3D is a large deformations 3D finite element code used to obtain the resulting displacements and stresses from multi-body static and dynamic structural thermo-mechanics problems with sliding interfaces. Many nonlinear and temperature dependent constitutive models are available.

  3. 3D Printing. What's the Harm?

    ERIC Educational Resources Information Center

    Love, Tyler S.; Roy, Ken

    2016-01-01

    Health concerns from 3D printing were first documented by Stephens, Azimi, Orch, and Ramos (2013), who found that commercially available 3D printers were producing hazardous levels of ultrafine particles (UFPs) and volatile organic compounds (VOCs) when plastic materials were melted through the extruder. UFPs are particles less than 100 nanometers…

  4. 3D Printing of Molecular Models

    ERIC Educational Resources Information Center

    Gardner, Adam; Olson, Arthur

    2016-01-01

    Physical molecular models have played a valuable role in our understanding of the invisible nano-scale world. We discuss 3D printing and its use in producing models of the molecules of life. Complex biomolecular models, produced from 3D printed parts, can demonstrate characteristics of molecular structure and function, such as viral self-assembly,…

  5. A 3D Geostatistical Mapping Tool

    SciTech Connect

    Weiss, W. W.; Stevenson, Graig; Patel, Ketan; Wang, Jun

    1999-02-09

    This software provides accurate 3D reservoir modeling tools and high-quality 3D graphics for PC platforms, enabling engineers and geologists to better comprehend reservoirs and consequently improve their decisions. The mapping algorithms are fractals, kriging, sequential Gaussian simulation, and three nearest neighbor methods.
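
    Of the mapping algorithms listed, the nearest-neighbor family is the simplest to sketch: estimate a property at each grid node from the k closest samples, weighted by inverse distance. The code below is a generic illustration of that idea only, not this tool's implementation (kriging and sequential Gaussian simulation additionally require a variogram model and are not shown); the names and parameters are assumptions:

      import numpy as np
      from scipy.spatial import cKDTree

      def idw_nearest(sample_xyz, sample_values, query_xyz, k=3, power=2.0):
          """Inverse-distance-weighted estimate over the k nearest samples
          for each query point (coordinates given as N x 3 arrays)."""
          distances, indices = cKDTree(sample_xyz).query(query_xyz, k=k)
          distances = np.maximum(distances, 1e-12)   # avoid division by zero
          weights = 1.0 / distances ** power
          weights /= weights.sum(axis=1, keepdims=True)
          return (weights * sample_values[indices]).sum(axis=1)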

  6. Pathways for Learning from 3D Technology

    ERIC Educational Resources Information Center

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2012-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D…

  7. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  8. Clinical applications of 3-D dosimeters

    NASA Astrophysics Data System (ADS)

    Wuu, Cheng-Shie

    2015-01-01

    Both 3-D gels and radiochromic plastic dosimeters, in conjunction with dose image readout systems (MRI or optical-CT), have been employed to measure 3-D dose distributions in many clinical applications. The 3-D dose maps obtained from these systems can provide a useful tool for clinical dose verification for complex treatment techniques such as IMRT, SRS/SBRT, brachytherapy, and proton beam therapy. These complex treatments present high dose gradient regions in the boundaries between the target and surrounding critical organs. Dose accuracy in these areas can be critical, and may affect treatment outcome. In this review, applications of 3-D gels and PRESAGE dosimeter are reviewed and evaluated in terms of their performance in providing information on clinical dose verification as well as commissioning of various treatment modalities. Future interests and clinical needs on studies of 3-D dosimetry are also discussed.

  9. Fabrication of 3D Silicon Sensors

    SciTech Connect

    Kok, A.; Hansen, T.E.; Hansen, T.A.; Lietaer, N.; Summanwar, A.; Kenney, C.; Hasi, J.; Da Via, C.; Parker, S.I.; /Hawaii U.

    2012-06-06

    Silicon sensors with a three-dimensional (3-D) architecture, in which the n and p electrodes penetrate through the entire substrate, have many advantages over planar silicon sensors, including radiation hardness, fast time response, active edges, and dual readout capabilities. The fabrication of 3D sensors is, however, rather complex. In recent years, there has been worldwide activity on 3D fabrication. SINTEF, in collaboration with the Stanford Nanofabrication Facility, has successfully fabricated the original (single-sided, double-column type) 3D detectors in two prototype runs, and the third run is now ongoing. This paper reports the status of this fabrication work and the resulting yield. The work of other groups, such as the development of double-sided 3D detectors, is also briefly reported.

  10. BEAMS3D Neutral Beam Injection Model

    SciTech Connect

    Lazerson, Samuel

    2014-04-14

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous velocity reduction, and pitch angle scattering are modeled with the ADAS atomic physics database [1]. Benchmark calculations are presented to validate the collisionless particle orbits, neutral beam injection model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields.

  11. Robust 3D face recognition by local shape difference boosting.

    PubMed

    Wang, Yueming; Liu, Jianzhuang; Tang, Xiaoou

    2010-10-01

    This paper proposes a new 3D face recognition approach, Collective Shape Difference Classifier (CSDC), to meet practical application requirements, i.e., high recognition performance, high computational efficiency, and easy implementation. We first present a fast posture alignment method which is self-dependent and avoids registering an input face against every face in the gallery. Then, a Signed Shape Difference Map (SSDM) is computed between two aligned 3D faces as an intermediate representation for the shape comparison. Based on the SSDMs, three kinds of features are used to encode both the local similarity and the change characteristics between facial shapes. The most discriminative local features are selected optimally by boosting and trained as weak classifiers for assembling three collective strong classifiers, namely, CSDCs with respect to the three kinds of features. Different schemes are designed for verification and identification to pursue high performance in both recognition and computation. The experiments, carried out on FRGC v2 with the standard protocol, yield three verification rates all better than 97.9 percent with the FAR of 0.1 percent and rank-1 recognition rates above 98 percent. Each recognition against a gallery with 1,000 faces only takes about 3.6 seconds. These experimental results demonstrate that our algorithm is not only effective but also time efficient. PMID:20724762
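
    A minimal sketch of the signed shape difference idea, assuming the probe and gallery faces have already been aligned and resampled as depth maps on a common grid; the function names, block statistics and NumPy implementation are illustrative simplifications, not the CSDC code.

        import numpy as np

        def signed_shape_difference_map(depth_probe, depth_gallery):
            """Per-pixel signed depth difference between two pre-aligned range images.

            Positive values mean the probe surface lies in front of the gallery
            surface, negative values mean it lies behind.  Both inputs are assumed
            to be HxW depth maps on the same grid (a simplification of the
            mesh-based SSDM described in the abstract)."""
            return depth_probe.astype(np.float64) - depth_gallery.astype(np.float64)

        def block_features(ssdm, block=8):
            """Crude local features: mean and standard deviation of the SSDM in
            non-overlapping blocks, as candidate weak-classifier inputs."""
            h, w = ssdm.shape
            feats = []
            for i in range(0, h - block + 1, block):
                for j in range(0, w - block + 1, block):
                    patch = ssdm[i:i + block, j:j + block]
                    feats.extend([patch.mean(), patch.std()])
            return np.array(feats)

        # Illustrative usage with synthetic depth maps.
        probe = np.random.rand(64, 64)
        gallery = np.random.rand(64, 64)
        ssdm = signed_shape_difference_map(probe, gallery)
        print(block_features(ssdm, block=16).shape)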

  12. Inverse rendering of faces with a 3D morphable model.

    PubMed

    Aldrian, Oswald; Smith, William A P

    2013-05-01

    In this paper, we present a complete framework to inverse render faces with a 3D Morphable Model (3DMM). By decomposing the image formation process into geometric and photometric parts, we are able to state the problem as a multilinear system which can be solved accurately and efficiently. As we treat each contribution as independent, the objective function is convex in the parameters and a global solution is guaranteed. We start by recovering 3D shape using a novel algorithm which incorporates generalization error of the model obtained from empirical measurements. We then describe two methods to recover facial texture, diffuse lighting, specular reflectance, and camera properties from a single image. The methods make increasingly weak assumptions and can be solved in a linear fashion. We evaluate our findings on a publicly available database, where we are able to outperform an existing state-of-the-art algorithm. We demonstrate the usability of the recovered parameters in a recognition experiment conducted on the CMU-PIE database. PMID:23520253
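
    Once the morphable model is linear and correspondences are known, the shape-recovery step reduces to a regularized linear least-squares problem. The sketch below shows only that core computation, with an assumed mean shape, shape basis and ridge weight; the camera model, texture, lighting and specular terms of the full framework are omitted.

        import numpy as np

        def fit_shape_coefficients(x_obs, mu, B, lam=1e-3):
            """Recover morphable-model shape coefficients by regularized linear
            least squares:  minimize ||B a - (x_obs - mu)||^2 + lam * ||a||^2.

            x_obs : (3N,) observed, already-aligned vertex/landmark coordinates
            mu    : (3N,) model mean shape
            B     : (3N, K) shape basis"""
            K = B.shape[1]
            A = B.T @ B + lam * np.eye(K)
            b = B.T @ (x_obs - mu)
            return np.linalg.solve(A, b)

        # Synthetic usage: 50 landmarks (150 coordinates), 10 shape components.
        rng = np.random.default_rng(0)
        mu = rng.normal(size=150)
        B = rng.normal(size=(150, 10))
        alpha_true = rng.normal(size=10)
        x_obs = mu + B @ alpha_true + 0.01 * rng.normal(size=150)
        alpha_hat = fit_shape_coefficients(x_obs, mu, B)
        print(np.round(alpha_hat - alpha_true, 2))   # residuals should be small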

  13. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  14. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  15. Low-cost 3D rangefinder system

    NASA Astrophysics Data System (ADS)

    Chen, Bor-Tow; Lou, Wen-Shiou; Chen, Chia-Chen; Lin, Hsien-Chang

    1998-06-01

    3D data are now commonly handled on computers, and 3D browsers manipulate 3D models in virtual worlds, yet 3D digitizers remain expensive and unfamiliar equipment. To meet this demand, this paper proposes the concept of a low-cost 3D digitizer system that captures 3D range data from objects. A dedicated optical design for the 3D extraction keeps the device compact, and the processing software runs on a PC, which makes the system portable. Together, these features yield a low-cost, PC-based system, in contrast to large systems bundled with expensive workstation platforms. In the 3D extraction stage, a laser beam and a CCD camera form the 3D sensor. Instead of using two CCD cameras to capture the laser lines, as done previously, a 2-in-1 arrangement merges the two views onto a single CCD while retaining the information of both fields of view, which mitigates occlusion problems. In addition, the optical paths of the two camera views are folded by mirrors so that the system volume is reduced and only one rotary axis is needed, making a portable system practical. Combined with processing software that runs under PC Windows, the proposed system saves both hardware cost and processing time. The system achieves 0.05 mm accuracy, showing that a low-cost system can still deliver high performance.
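
    The depth computation behind this kind of laser/CCD rangefinder is plain triangulation. The sketch below assumes one common single-camera stripe geometry with illustrative parameter names and values; it is not the authors' calibration or their 2-in-1 optical layout.

        import numpy as np

        def depth_from_stripe(u_pix, focal_px, baseline_mm, laser_angle_deg):
            # Assumed geometry (one common configuration, not the authors'):
            # the camera looks along +Z, the laser sits at baseline b along +X
            # and is aimed back toward the optical axis at angle theta from Z.
            # A point on the laser plane satisfies
            #   X = b - Z * tan(theta),   u = f * X / Z
            # which gives  Z = b / (u / f + tan(theta)).
            tan_theta = np.tan(np.deg2rad(laser_angle_deg))
            return baseline_mm / (u_pix / focal_px + tan_theta)

        # Example: laser stripe detected 120 px right of the principal point.
        print(round(depth_from_stripe(u_pix=120.0, focal_px=800.0,
                                      baseline_mm=60.0, laser_angle_deg=30.0), 1))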

  16. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from traditional map-making to a modern technology where information can be created, edited, managed and analyzed. Like any other model, maps are simplified representations of the real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS and its extensions for 3D modeling and visualization and use them to depict a real world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online will be used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is demonstrated.

  17. 3D visualization and quantification of bone and teeth mineralization for the study of osteo/dentinogenesis in mice models

    NASA Astrophysics Data System (ADS)

    Marchadier, A.; Vidal, C.; Ordureau, S.; Lédée, R.; Léger, C.; Young, M.; Goldberg, M.

    2011-03-01

    Research on bone and teeth mineralization in animal models is critical for understanding human pathologies. Genetically modified mice represent highly valuable models for the study of osteo/dentinogenesis defects and osteoporosis. Current investigations of mouse dental and skeletal phenotypes use destructive and time-consuming methods such as histology and scanning microscopy. Micro-CT imaging is quicker and provides high-resolution qualitative phenotypic description. However, reliable quantification of mineralization processes in mouse bone and teeth is still lacking. We have established novel CT imaging-based software for accurate qualitative and quantitative analysis of mouse mandibular bone and molars. Data were obtained from mandibles of mice lacking the Fibromodulin gene, which is involved in mineralization processes. Mandibles were imaged with a micro-CT originally devoted to industrial applications (Viscom, X8060 NDT). 3D advanced visualization was performed using the VoxBox software (UsefulProgress) with ray-casting algorithms. Comparison between control and defective mice mandibles was made by applying the same transfer function to each 3D data set, thus allowing the detection of shape, colour and density discrepancies. The 2D images of transverse slices of mandible and teeth were similar to, and even more accurate than, those obtained with scanning electron microscopy. Image processing of the molars allowed the 3D reconstruction of the pulp chamber, providing a unique tool for the quantitative evaluation of dentinogenesis. This new method is highly powerful for the study of oro-facial mineralization defects in mouse models, complementary and even competitive to current histological and scanning microscopy approaches.

  18. 3D laser optoacoustic ultrasonic imaging system for preclinical research

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

    2013-03-01

    In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical or other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic tomography and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

  19. Medical 3D Printing for the Radiologist.

    PubMed

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article.

  20. Digital relief generation from 3D models

    NASA Astrophysics Data System (ADS)

    Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian

    2016-09-01

    It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
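
    As a rough illustration of the two core operations named above (unsharp masking to enhance features, followed by nonlinear compression of the depth range), the sketch below applies them to a 2D height field rather than a full 3D mesh; the smoothing kernel, boost factor and depth budget are assumed values.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def relief_from_heightfield(height, sigma=3.0, boost=1.5, max_depth=1.0):
            """Simplified relief generation on a 2D height field.

            1. Unsharp masking: amplify fine detail by adding back the difference
               between the height field and a smoothed copy.
            2. Nonlinear (logarithmic) compression of the dynamic range so the
               result fits within a bas-relief style depth budget `max_depth`."""
            smooth = gaussian_filter(height, sigma=sigma)
            enhanced = smooth + boost * (height - smooth)      # unsharp masking
            enhanced -= enhanced.min()
            compressed = np.log1p(enhanced) / np.log1p(enhanced.max() + 1e-12)
            return max_depth * compressed

        # Illustrative usage with a synthetic bumpy surface.
        y, x = np.mgrid[0:128, 0:128]
        surface = np.sin(x / 8.0) * np.cos(y / 11.0) + 0.05 * np.random.rand(128, 128)
        relief = relief_from_heightfield(surface)
        print(relief.min(), relief.max())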

  1. NUBEAM developments and 3d halo modeling

    NASA Astrophysics Data System (ADS)

    Gorelenkova, M. V.; Medley, S. S.; Kaye, S. M.

    2012-10-01

    Recent developments related to the 3D halo model in the NUBEAM code are described. To provide a reliable halo neutral source for diagnostic simulation, the TRANSP/NUBEAM code has been enhanced with a full implementation of ADAS atomic physics ground-state and excited-state data for hydrogenic beams and mixed-species plasma targets. The ADAS codes and database provide the density and temperature dependence of the atomic data, and the collective nature of the state excitation process. To populate the 3D halo output with sufficient statistical resolution, the capability to control the statistics of fast-ion charge-exchange (CX) modeling and of the thermal halo launch has been added to NUBEAM. The 3D halo neutral model is based on modification and extension of the "beam in box" aligned 3D Cartesian grid that includes the neutral beam itself, 3D fast neutral densities due to CX of partially slowed-down fast ions in the beam halo region, 3D thermal neutral densities due to CX deposition, and a fast neutral recapture source. More details on the 3D halo simulation design will be presented.

  2. Medical 3D Printing for the Radiologist.

    PubMed

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. PMID:26562233

  3. Perception of detail in 3D images

    NASA Astrophysics Data System (ADS)

    Heynderickx, Ingrid; Kaptein, Ronald

    2009-01-01

    Many current 3D displays have lower spatial resolution than their 2D counterparts. One reason is that the multiple views needed to generate 3D are often spatially multiplexed. In addition, imperfect separation of the left- and right-eye views leads to blurring or ghosting, and therefore to a decrease in perceived sharpness. However, people watching stereoscopic videos have reported that the 3D scene contains more detail than a 2D scene with identical spatial resolution. This interesting notion had never been tested in a systematic and quantitative way. To investigate the effect, we had people compare the amount of detail ("detailedness") in pairs of 2D and 3D images. A blur filter was applied to one of the two images, and the blur level was varied using an adaptive staircase procedure. In this way, we found the blur threshold at which the 2D and 3D images contained perceptually the same amount of detail. Our results show that the 3D image needed to be blurred more than the 2D image, confirming the earlier qualitative reports that 3D images are perceived as containing more detail than 2D images with the same spatial resolution.
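
    The adaptive staircase procedure mentioned above can be sketched in a few lines. The example below is a simple 1-up/1-down staircase in which a noisy simulated observer stands in for a human participant; the step size, trial count and threshold estimate are illustrative choices, not the study's exact protocol.

        import random

        def staircase(simulated_threshold, start=5.0, step=0.5, trials=40):
            """Simple 1-up/1-down adaptive staircase.

            The blur level applied to the 3D image is raised after every trial in
            which it still looks more detailed than the 2D reference, and lowered
            otherwise; the procedure converges near the point of subjective
            equality.  `simulated_threshold` stands in for the human observer."""
            level = start
            history = []
            for _ in range(trials):
                # Simulated observer: "3D looks more detailed" while the blur
                # level is below a noisy internal threshold.
                looks_more_detailed = level < simulated_threshold + random.gauss(0, 0.3)
                level += step if looks_more_detailed else -step
                level = max(level, 0.0)
                history.append(level)
            # Crude threshold estimate from the last few levels.
            return sum(history[-10:]) / 10.0

        random.seed(1)
        print(round(staircase(simulated_threshold=3.0), 2))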

  4. 3D bioprinting of tissues and organs.

    PubMed

    Murphy, Sean V; Atala, Anthony

    2014-08-01

    Additive manufacturing, otherwise known as three-dimensional (3D) printing, is driving major innovations in many areas, such as engineering, manufacturing, art, education and medicine. Recent advances have enabled 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. 3D bioprinting is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with non-biological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. 3D bioprinting has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. Other applications include developing high-throughput 3D-bioprinted tissue models for research, drug discovery and toxicology. PMID:25093879

  5. Medical 3D Printing for the Radiologist

    PubMed Central

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A.; Cai, Tianrun; Kumamaru, Kanako K.; George, Elizabeth; Wake, Nicole; Caterson, Edward J.; Pomahac, Bohdan; Ho, Vincent B.; Grant, Gerald T.

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. ©RSNA, 2015 PMID:26562233

  6. Extra Dimensions: 3D in PDF Documentation

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2012-12-01

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) and the ISO PRC file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. Until recently, Adobe's Acrobat software was also capable of incorporating 3D content into PDF files from a variety of 3D file formats, including proprietary CAD formats. However, this functionality is no longer available in Acrobat X, having been spun off to a separate company. Incorporating 3D content now requires the additional purchase of a separate plug-in. In this talk we present alternatives based on open source libraries which allow the programmatic creation of 3D content in PDF format. While these libraries do not provide the same level of access to CAD files as the commercial software, they give physicists an alternative path to incorporate 3D content into PDF files from such disparate applications as detector geometries from Geant4, 3D data sets, mathematical surfaces or tessellated volumes.

  7. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
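
    The classification stage (SRC) can be illustrated with a much smaller stand-in: express the probe descriptor as a sparse combination of gallery descriptors and return the identity whose atoms reconstruct it with the lowest residual. The sketch below uses scikit-learn's Lasso as the sparse solver on synthetic data; it is not the paper's meshSIFT or multitask pipeline.

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_classify(probe, dictionary, labels, alpha=0.01):
            """Sparse-representation classification (SRC), simplified.

            probe      : (d,) descriptor vector of the probe scan
            dictionary : (d, n) matrix whose columns are gallery descriptor vectors
            labels     : (n,) identity label of each dictionary column
            Returns the label whose atoms reconstruct the probe with lowest residual."""
            lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
            lasso.fit(dictionary, probe)
            coef = lasso.coef_
            best_label, best_res = None, np.inf
            for lab in np.unique(labels):
                mask = labels == lab
                recon = dictionary[:, mask] @ coef[mask]
                res = np.linalg.norm(probe - recon)
                if res < best_res:
                    best_label, best_res = lab, res
            return best_label

        # Tiny synthetic gallery: 3 identities, 5 descriptors each, 64-D.
        rng = np.random.default_rng(0)
        labels = np.repeat([0, 1, 2], 5)
        dictionary = rng.normal(size=(64, 15))
        probe = dictionary[:, 7] + 0.05 * rng.normal(size=64)   # close to identity 1
        print(src_classify(probe, dictionary, labels))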

  8. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876

  9. FUN3D Manual: 12.7

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  10. FUN3D Manual: 13.0

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  11. FUN3D Manual: 12.6

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  12. FUN3D Manual: 12.5

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  13. FUN3D Manual: 12.9

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  14. FUN3D Manual: 12.8

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  15. FUN3D Manual: 12.4

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  16. VALIDATION OF IMPROVED 3D ATR MODEL

    SciTech Connect

    Soon Sam Kim; Bruce G. Schnitzler

    2005-11-01

    A full-core Monte Carlo based 3D model of the Advanced Test Reactor (ATR) was previously developed. [1] An improved 3D model has been developed by the International Criticality Safety Benchmark Evaluation Project (ICSBEP) to eliminate homogeneity of fuel plates of the old model, incorporate core changes into the new model, and to validate against a newer, more complicated core configuration. This new 3D model adds capability for fuel loading design and azimuthal power peaking studies of the ATR fuel elements.

  17. Explicit 3-D Hydrodynamic FEM Program

    2000-11-07

    DYNA3D is a nonlinear explicit finite element code for analyzing 3-D structures and solid continua. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single-surface contact and automatic contact generation.

  18. A high capacity 3D steganography algorithm.

    PubMed

    Chao, Min-Wen; Lin, Chao-hung; Yu, Cheng-Wei; Lee, Tong-Yee

    2009-01-01

    In this paper, we present a very high-capacity and low-distortion 3D steganography scheme. Our steganography approach is based on a novel multilayered embedding scheme to hide secret messages in the vertices of 3D polygon models. Experimental results show that the cover model distortion is very small as the number of hiding layers ranges from 7 to 13 layers. To the best of our knowledge, this novel approach can provide much higher hiding capacity than other state-of-the-art approaches, while obeying the low distortion and security basic requirements for steganography on 3D models.
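
    The paper's multilayered scheme is considerably more elaborate, but the basic idea of hiding message bits in small vertex perturbations can be sketched with a single-layer, parity-based embedding; the quantization step and the choice of the x-coordinate below are illustrative assumptions, not the published algorithm.

        import numpy as np

        def embed_bits(vertices, bits, step=1e-4):
            """Hide one bit per vertex x-coordinate by quantizing to a grid of size
            `step` and choosing the even/odd cell according to the bit (a crude,
            single-layer stand-in for the multilayered scheme above)."""
            v = vertices.copy()
            q = np.round(v[:len(bits), 0] / step).astype(np.int64)
            q += (q % 2) ^ np.asarray(bits)          # force parity to match each bit
            v[:len(bits), 0] = q * step
            return v

        def extract_bits(vertices, n_bits, step=1e-4):
            q = np.round(vertices[:n_bits, 0] / step).astype(np.int64)
            return (q % 2).tolist()

        # Illustrative usage on a random cover model.
        verts = np.random.rand(100, 3)
        message = [1, 0, 1, 1, 0, 0, 1, 0]
        stego = embed_bits(verts, message)
        print(extract_bits(stego, len(message)) == message)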

  19. How We 3D-Print Aerogel

    SciTech Connect

    2015-04-23

    A new type of graphene aerogel will make for better energy storage, sensors, nanoelectronics, catalysis and separations. Lawrence Livermore National Laboratory researchers have made graphene aerogel microlattices with an engineered architecture via a 3D printing technique known as direct ink writing. The research appears in the April 22 edition of the journal, Nature Communications. The 3D printed graphene aerogels have high surface area, excellent electrical conductivity, are lightweight, have mechanical stiffness and exhibit supercompressibility (up to 90 percent compressive strain). In addition, the 3D printed graphene aerogel microlattices show an order of magnitude improvement over bulk graphene materials and much better mass transport.

  20. FIT3D: Fitting optical spectra

    NASA Astrophysics Data System (ADS)

    Sánchez, S. F.; Pérez, E.; Sánchez-Blázquez, P.; González, J. J.; Rosales-Ortega, F. F.; Cano-Díaz, M.; López-Cobá, C.; Marino, R. A.; Gil de Paz, A.; Mollá, M.; López-Sánchez, A. R.; Ascasibar, Y.; Barrera-Ballesteros, J.

    2016-09-01

    FIT3D fits optical spectra to deblend the underlying stellar population and the ionized gas, and extract physical information from each component. FIT3D is focused on the analysis of Integral Field Spectroscopy data, but is not restricted to it, and is the basis of Pipe3D, a pipeline used in the analysis of datasets like CALIFA, MaNGA, and SAMI. It can run iteratively or in an automatic way to derive the parameters of a large set of spectra.
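
    At its core, fitting the stellar continuum amounts to expressing the observed spectrum as a non-negative combination of template spectra. The sketch below shows that single step with SciPy's NNLS solver on synthetic templates; FIT3D's actual pipeline also handles kinematics, dust attenuation and emission lines, none of which are modeled here.

        import numpy as np
        from scipy.optimize import nnls

        def fit_stellar_population(flux, templates):
            """Fit an observed spectrum as a non-negative combination of single
            stellar population templates (columns of `templates`), returning the
            weights and the best-fit model spectrum."""
            weights, _ = nnls(templates, flux)
            return weights, templates @ weights

        # Synthetic example: 3 templates over 500 wavelength pixels.
        rng = np.random.default_rng(1)
        templates = np.abs(rng.normal(size=(500, 3)))
        true_w = np.array([0.7, 0.0, 0.3])
        flux = templates @ true_w + 0.01 * rng.normal(size=500)
        w, model = fit_stellar_population(flux, templates)
        print(np.round(w, 2))   # should recover roughly [0.7, 0.0, 0.3]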

  1. 3D packaging for integrated circuit systems

    SciTech Connect

    Chu, D.; Palmer, D.W.

    1996-11-01

    A goal was set for high-density, high-performance microelectronics pursued through dense 3D packing of integrated circuits. A "tool set" of assembly processes has been developed that enables 3D system designs: 3D thermal analysis, silicon electrical through-vias, IC thinning, mounting wells in silicon, adhesives for silicon stacking, pretesting of IC chips before commitment to stacks, and bond pad bumping. Validation of these process developments occurred through both Sandia prototypes and subsequent commercial examples.

  2. Investigations in massive 3D gravity

    SciTech Connect

    Accioly, Antonio; Helayeel-Neto, Jose; Morais, Jefferson; Turcati, Rodrigo; Scatena, Eslley

    2011-05-15

    Some interesting gravitational properties of the Bergshoeff-Hohm-Townsend model (massive 3D gravity), such as the presence of a short-range gravitational force in the nonrelativistic limit and the existence of an impact-parameter-dependent gravitational deflection angle, are studied. Interestingly enough, these phenomena have no counterpart in the usual Einstein 3D gravity. In order to better understand the two aforementioned gravitational properties, they are also analyzed in the framework of 3D higher-derivative gravity with the Einstein-Hilbert term with the 'wrong sign'.

  3. An Improved Version of TOPAZ 3D

    SciTech Connect

    Krasnykh, Anatoly

    2003-07-29

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  4. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here concerns multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit (VTK), a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen is connected directly to an SGI workstation, where the 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who get a real 3D perception of the visualized scene without extra media such as dedicated glasses or head-mounted displays. The developed applications allow real-time interaction with the visualized 3D models, and didactic animations and movies have been produced as well.
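
    A minimal, single-view version of the VTK rendering pipeline mentioned above can be written in a few lines of Python. The STL reader and file path are illustrative assumptions (the original work reconstructed models from Visible Human slices), and the multiview 3D-LCD output requires additional display-specific code.

        # Requires the `vtk` Python package.
        import vtk

        def show_model(stl_path):
            """Load a surface model and display it in an interactive window,
            using a minimal single-view VTK pipeline."""
            reader = vtk.vtkSTLReader()
            reader.SetFileName(stl_path)

            mapper = vtk.vtkPolyDataMapper()
            mapper.SetInputConnection(reader.GetOutputPort())

            actor = vtk.vtkActor()
            actor.SetMapper(mapper)

            renderer = vtk.vtkRenderer()
            renderer.AddActor(actor)
            renderer.SetBackground(0.1, 0.1, 0.2)

            window = vtk.vtkRenderWindow()
            window.AddRenderer(renderer)
            interactor = vtk.vtkRenderWindowInteractor()
            interactor.SetRenderWindow(window)

            window.Render()
            interactor.Start()

        # show_model("anatomy.stl")   # path is illustrative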

  5. JAR3D Webserver: Scoring and aligning RNA loop sequences to known 3D motifs

    PubMed Central

    Roll, James; Zirbel, Craig L.; Sweeney, Blake; Petrov, Anton I.; Leontis, Neocles

    2016-01-01

    Many non-coding RNAs have been identified and may function by forming 2D and 3D structures. RNA hairpin and internal loops are often represented as unstructured on secondary structure diagrams, but RNA 3D structures show that most such loops are structured by non-Watson–Crick basepairs and base stacking. Moreover, different RNA sequences can form the same RNA 3D motif. JAR3D finds possible 3D geometries for hairpin and internal loops by matching loop sequences to motif groups from the RNA 3D Motif Atlas, by exact sequence match when possible, and by probabilistic scoring and edit distance for novel sequences. The scoring gauges the ability of the sequences to form the same pattern of interactions observed in 3D structures of the motif. The JAR3D webserver at http://rna.bgsu.edu/jar3d/ takes one or many sequences of a single loop as input, or else one or many sequences of longer RNAs with multiple loops. Each sequence is scored against all current motif groups. The output shows the ten best-matching motif groups. Users can align input sequences to each of the motif groups found by JAR3D. JAR3D will be updated with every release of the RNA 3D Motif Atlas, and so its performance is expected to improve over time. PMID:27235417
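
    For novel sequences, one ingredient mentioned above is the edit distance between loop sequences. The sketch below is a plain Levenshtein distance, included only to make that ingredient concrete; it is not JAR3D's probabilistic scoring.

        def edit_distance(a, b):
            """Levenshtein edit distance between two RNA loop sequences."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, start=1):
                cur = [i]
                for j, cb in enumerate(b, start=1):
                    cur.append(min(prev[j] + 1,                 # deletion
                                   cur[j - 1] + 1,              # insertion
                                   prev[j - 1] + (ca != cb)))   # substitution
                prev = cur
            return prev[-1]

        print(edit_distance("CAUUGA", "CAUGA"))   # 1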

  6. Do-It-Yourself: 3D Models of Hydrogenic Orbitals through 3D Printing

    ERIC Educational Resources Information Center

    Griffith, Kaitlyn M.; de Cataldo, Riccardo; Fogarty, Keir H.

    2016-01-01

    Introductory chemistry students often have difficulty visualizing the 3-dimensional shapes of the hydrogenic electron orbitals without the aid of physical 3D models. Unfortunately, commercially available models can be quite expensive. 3D printing offers a solution for producing models of hydrogenic orbitals. 3D printing technology is widely…

  7. Facial Injuries and Disorders

    MedlinePlus

    Face injuries and disorders can cause pain and affect how you look. In severe cases, they can affect sight, ... your nose, cheekbone and jaw, are common facial injuries. Certain diseases also lead to facial disorders. For ...

  8. [Pre-surgical simulation of microvascular decompression for hemifacial spasm using 3D-models].

    PubMed

    Mashiko, Toshihiro; Yang, Qiang; Kaneko, Naoki; Konno, Takehiko; Yamaguchi, Takashi; Watanabe, Eiju

    2015-01-01

    We have been performing pre-surgical simulations using custom-built patient-specific 3D-models. Here we report the advantageous use of 3D-models for simulating microvascular decompression (MVD) for hemifacial spasm. Seven cases of MVD surgery were performed. Two types of 3D printers were used to fabricate the 3D-models: one using plaster as the modeling material (Z Printer®450, 3D Systems, Rock Hill, SC, USA) and the other using acrylonitrile butadiene styrene (ABS) (UP! Plus 3D printer®, Beijing Tiertime Technology, Beijing). We tested three types of models. Type 1 was a plaster model of the brainstem, cerebellum, facial nerve, and the artery compressing the root exit zone of the facial nerve. Part of the cerebellum was digitally trimmed off to observe "the compressing point" from the same angle as that used during actual surgery. Type 2 was a modified Type 1 in which part of the skull was opened digitally to mimic a craniectomy. Type 3 was a combined model in which the cerebellum and the artery of the Type 2 model were replaced by a soft retractable cerebellum and an elastic artery. The cerebellum was made from polyurethane and cast from a plaster prototype. To fabricate elastic arteries, liquid silicone was painted onto the surface of an ABS artery and the inner ABS model was dissolved away using solvent. In all cases, the 3D-models were very useful. Although each type has advantages, the Type-3 model was judged extremely useful for training junior surgeons in microsurgical approaches.

  9. The three-dimensional Event-Driven Graphics Environment (3D-EDGE)

    NASA Technical Reports Server (NTRS)

    Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.

    1993-01-01

    Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for NASA Goddard Space Flight Center's (GSFC) Communications Link Analysis and Simulation System (CLASS). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.

  10. How Accurate Are the Fusion of Cone-Beam CT and 3-D Stereophotographic Images?

    PubMed Central

    Jayaratne, Yasas S. N.; McGrath, Colman P. J.; Zwahlen, Roger A.

    2012-01-01

    Background Cone-beam Computed Tomography (CBCT) and stereophotography are two of the latest imaging modalities available for three-dimensional (3-D) visualization of craniofacial structures. However, CBCT provides only limited information on surface texture. This can be overcome by combining the bone images derived from CBCT with 3-D photographs. The objectives of this study were 1) to evaluate the feasibility of integrating 3-D photos and CBCT images, 2) to assess the degree of error that may occur during the above processes, and 3) to identify facial regions that would be most appropriate for 3-D image registration. Methodology CBCT scans and stereophotographic images from 29 patients were used for this study. Two 3-D images corresponding to the skin and bone were extracted from the CBCT data. The 3-D photo was superimposed on the CBCT skin image using relatively immobile areas of the face as a reference. 3-D colour maps were used to assess the accuracy of superimposition, with distance differences between the CBCT and 3-D photo recorded as the signed average and the Root Mean Square (RMS) error. Principal Findings The signed average and RMS of the distance differences between the registered surfaces were −0.018 (±0.129) mm and 0.739 (±0.239) mm respectively. Most errors were found in areas surrounding the lips and the eyes, while minimal errors were noted in the forehead, root of the nose and zygoma. Conclusions CBCT and 3-D photographic data can be successfully fused with minimal errors. When compared to RMS, the signed average was found to under-represent the registration error. The virtual 3-D composite craniofacial models permit concurrent assessment of bone and soft tissues during diagnosis and treatment planning. PMID:23185372
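
    The signed average and RMS error reported above can be illustrated with a nearest-neighbour surface comparison. The sketch below assumes both surfaces are point clouds already registered in the same coordinate frame and that outward normals are available for the CBCT skin surface; it is a simplification of the colour-map analysis, with illustrative names and data.

        import numpy as np
        from scipy.spatial import cKDTree

        def surface_distance_stats(points_photo, points_cbct, normals_cbct):
            """Signed average and RMS distance from a 3-D photo point cloud to a
            CBCT skin surface, using a nearest-neighbour approximation.

            The sign comes from the CBCT surface normal at the closest point:
            positive if the photo point lies outside the skin surface, negative
            if it lies inside."""
            tree = cKDTree(points_cbct)
            dist, idx = tree.query(points_photo)
            signed = np.einsum('ij,ij->i',
                               points_photo - points_cbct[idx], normals_cbct[idx])
            return signed.mean(), np.sqrt(np.mean(dist ** 2))

        # Synthetic example: a unit sphere versus a slightly inflated copy.
        rng = np.random.default_rng(0)
        p = rng.normal(size=(2000, 3))
        p /= np.linalg.norm(p, axis=1, keepdims=True)
        cbct_pts, cbct_normals = p, p            # points on unit sphere, outward normals
        photo_pts = 1.001 * p + 0.0005 * rng.normal(size=p.shape)
        print(surface_distance_stats(photo_pts, cbct_pts, cbct_normals))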

  11. TRMM 3-D Flyby of Ingrid

    NASA Video Gallery

    This 3-D flyby of Tropical Storm Ingrid's rainfall was created from TRMM satellite data for Sept. 16. Heaviest rainfall appears in red towers over the Gulf of Mexico, while moderate rainfall stretc...

  12. 3DSEM: A 3D microscopy dataset.

    PubMed

    Tafti, Ahmad P; Kirkpatrick, Andrew B; Holz, Jessica D; Owen, Heather A; Yu, Zeyun

    2016-03-01

    The Scanning Electron Microscope (SEM) as a 2D imaging instrument has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide anatomic shape of micro-samples which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. PMID:26779561

  13. 3DSEM: A 3D microscopy dataset

    PubMed Central

    Tafti, Ahmad P.; Kirkpatrick, Andrew B.; Holz, Jessica D.; Owen, Heather A.; Yu, Zeyun

    2015-01-01

    The Scanning Electron Microscope (SEM) as a 2D imaging instrument has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide anatomic shape of micro-samples which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. PMID:26779561

  14. Tropical Cyclone Jack in Satellite 3-D

    NASA Video Gallery

    This 3-D flyby from NASA's TRMM satellite of Tropical Cyclone Jack on April 21 shows that some of the thunderstorms observed by TRMM's Precipitation Radar (PR) were still reaching heights of at least 17 km (10.5 miles). ...

  15. An Augmented Reality based 3D Catalog

    NASA Astrophysics Data System (ADS)

    Yamada, Ryo; Kishimoto, Katsumi

    This paper presents a 3D catalog system that uses Augmented Reality technology. The use of Web-based catalog systems that present products in 3D form is increasing in various fields, along with the rapid and widespread adoption of Electronic Commerce. However, 3D shapes could previously only be seen in a virtual space, and it was difficult to understand how the products would actually look in the real world. To solve this, we propose a method that combines the virtual and real worlds simply and intuitively. The method applies Augmented Reality technology, and the system developed based on the method enables users to evaluate 3D virtual products in a real environment.

  16. 3D-printed bioanalytical devices.

    PubMed

    Bishop, Gregory W; Satterwhite-Warden, Jennifer E; Kadimisetty, Karteek; Rusling, James F

    2016-07-15

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices.

  17. Cyclone Rusty's Landfall in 3-D

    NASA Video Gallery

    This 3-D image derived from NASA's TRMM satellite Precipitation Radar data on February 26, 2013 at 0654 UTC showed that the tops of some towering thunderstorms in Rusty's eye wall were reaching hei...

  18. Palacios field: A 3-D case history

    SciTech Connect

    McWhorter, R.; Torguson, B.

    1994-12-31

    In late 1992, Mitchell Energy Corporation acquired a 7.75 sq mi (20.0 km²) 3-D seismic survey over Palacios field, Matagorda County, Texas. The company shot the survey to help evaluate the field for further development by delineating the fault pattern of the producing Middle Oligocene Frio interval. They compare the mapping of the field before and after the 3-D survey. This comparison shows that the 3-D volume yields superior fault imaging and interpretability compared to the dense 2-D data set. The problems with the 2-D data set are improper imaging of small and oblique faults and insufficient coverage over a complex fault pattern. Whereas the 2-D data set validated a simple fault model, the 3-D volume revealed a more complex history of faulting that includes three different fault systems. This discovery enabled them to reconstruct the depositional and structural history of Palacios field.

  19. 3D-printed bioanalytical devices

    NASA Astrophysics Data System (ADS)

    Bishop, Gregory W.; Satterwhite-Warden, Jennifer E.; Kadimisetty, Karteek; Rusling, James F.

    2016-07-01

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices.

  20. 3-D TRMM Flyby of Hurricane Amanda

    NASA Video Gallery

    The TRMM satellite flew over Hurricane Amanda on Tuesday, May 27 at 1049 UTC (6:49 a.m. EDT) and captured rainfall rates and cloud height data that was used to create this 3-D simulated flyby. Cred...

  1. Eyes on the Earth 3D

    NASA Technical Reports Server (NTRS)

    Kulikov, Anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists, and the general public, a realtime, 3D interactive means of accurately viewing the real-time locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where the NASA fleet of these satellites is now, or where they will be up to a year in the future. The software also displays several Earth Science Data sets that have been collected on a daily basis. This application uses a third-party, 3D, realtime, interactive game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  2. 3D Printing for Tissue Engineering

    PubMed Central

    Jia, Jia; Yao, Hai; Mei, Ying

    2016-01-01

    Tissue engineering aims to fabricate functional tissue for applications in regenerative medicine and drug testing. More recently, 3D printing has shown great promise in tissue fabrication with a structural control from micro- to macro-scale by using a layer-by-layer approach. Whether through scaffold-based or scaffold-free approaches, the standard for 3D printed tissue engineering constructs is to provide a biomimetic structural environment that facilitates tissue formation and promotes host tissue integration (e.g., cellular infiltration, vascularization, and active remodeling). This review will cover several approaches that have advanced the field of 3D printing through novel fabrication methods of tissue engineering constructs. It will also discuss the applications of synthetic and natural materials for 3D printing facilitated tissue fabrication. PMID:26869728

  3. 3DSEM: A 3D microscopy dataset.

    PubMed

    Tafti, Ahmad P; Kirkpatrick, Andrew B; Holz, Jessica D; Owen, Heather A; Yu, Zeyun

    2016-03-01

    The Scanning Electron Microscope (SEM), as a 2D imaging instrument, has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However, the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide the anatomic shape of micro-samples, allowing for quantitative measurements and informative visualization of the specimens being investigated. 3DSEM is a dataset for 3D microscopy vision that is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples.

  4. 3D-printed bioanalytical devices.

    PubMed

    Bishop, Gregory W; Satterwhite-Warden, Jennifer E; Kadimisetty, Karteek; Rusling, James F

    2016-07-15

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices. PMID:27250897

  5. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
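
    The record names stereo vision as one non-laser route to a 3D surface. As a minimal illustration of that idea (a hedged sketch, not the presented system), a rectified two-camera pair gives depth from disparity via Z = f·B/d; the focal length, baseline, and disparity values below are invented for the example.

        import numpy as np

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Convert a disparity map (pixels) from a rectified stereo pair into
            a depth map (metres) using Z = f * B / d. Zero disparity means no match."""
            d = np.asarray(disparity_px, dtype=float)
            depth = np.full(d.shape, np.inf)
            valid = d > 0
            depth[valid] = focal_px * baseline_m / d[valid]
            return depth

        # Toy 2x3 disparity map from a hypothetical CCD camera pair
        disp = np.array([[8.0, 10.0, 0.0],
                         [16.0, 20.0, 25.0]])
        print(depth_from_disparity(disp, focal_px=800.0, baseline_m=0.1))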

  6. 3-D Flyover Visualization of Veil Nebula

    NASA Video Gallery

    This 3-D visualization flies across a small portion of the Veil Nebula as photographed by the Hubble Space Telescope. This region is a small part of a huge expanding remnant from a star that explod...

  7. Future Engineers 3-D Print Timelapse

    NASA Video Gallery

    NASA Challenges K-12 students to create a model of a container for space using 3-D modeling software. Astronauts need containers of all kinds - from advanced containers that can study fruit flies t...

  8. Modeling Cellular Processes in 3-D

    PubMed Central

    Mogilner, Alex; Odde, David

    2011-01-01

    Recent advances in photonic imaging and fluorescent protein technology offer unprecedented views of molecular space-time dynamics in living cells. At the same time, advances in computing hardware and software enable modeling of ever more complex systems, from global climate to cell division. As modeling and experiment become more closely integrated, we must address the issue of modeling cellular processes in 3-D. Here, we highlight recent advances related to 3-D modeling in cell biology. While some processes require full 3-D analysis, we suggest that others are more naturally described in 2-D or 1-D. Keeping the dimensionality as low as possible reduces computational time and makes models more intuitively comprehensible; however, the ability to test full 3-D models will build greater confidence in models generally and remains an important emerging area of cell biological modeling. PMID:22036197

  9. Event-related alpha suppression in response to facial motion.

    PubMed

    Girges, Christine; Wright, Michael J; Spencer, Janine V; O'Brien, Justin M D

    2014-01-01

    While biological motion refers to both face and body movements, little is known about the visual perception of facial motion. We therefore examined alpha wave suppression, as a reduction in power is thought to reflect visual activity in addition to attentional reorienting and memory processes. Nineteen neurologically healthy adults were tested on their ability to discriminate between successive facial motion captures. These animations exhibited both rigid and non-rigid facial motion, as well as speech expressions. The structural and surface appearance of these facial animations did not differ, so participants' decisions were based solely on differences in facial movements. Upright, orientation-inverted and luminance-inverted facial stimuli were compared. At occipital and parieto-occipital regions, upright facial motion evoked a transient increase in alpha, which was then followed by a significant reduction. This finding is discussed in terms of neural efficiency, gating mechanisms and neural synchronization. Moreover, there was no difference in the amount of alpha suppression evoked by each facial stimulus at occipital regions, suggesting early visual processing remains unaffected by manipulation paradigms. However, upright facial motion evoked greater suppression at parieto-occipital sites, and did so in the shortest latency. Increased activity within this region may reflect higher attentional reorienting to natural facial motion but also involvement of areas associated with the visual control of body effectors.

  10. Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.

    PubMed

    Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming

    2016-09-01

    People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expression of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance was constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, the high expense of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution on 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting it to a 2.5-D face to localize facial landmarks automatically. For FER, a novel action unit (AU) space-based method is proposed. Facial features are extracted using landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods have achieved satisfactory results. Possible real-world applications using our algorithms have also been discussed. PMID:26316289
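
    The abstract describes representing landmark-derived facial features as coordinates in an action unit (AU) space and then classifying those coordinates into expressions. The sketch below only illustrates that general idea; the AU basis, expression prototypes, and feature dimensions are invented for the example and are not the authors' model.

        import numpy as np

        rng = np.random.default_rng(0)
        n_features, n_aus = 12, 4
        au_basis = rng.normal(size=(n_features, n_aus))   # hypothetical AU directions

        def to_au_space(feature_vec):
            """Least-squares coordinates of a landmark feature vector in the AU basis."""
            coords, *_ = np.linalg.lstsq(au_basis, feature_vec, rcond=None)
            return coords

        # Hypothetical expression prototypes given as AU-space centroids
        prototypes = {"neutral":  np.zeros(n_aus),
                      "smile":    np.array([1.0, 0.2, 0.0, 0.0]),
                      "surprise": np.array([0.0, 0.0, 1.2, 0.8])}

        def classify(feature_vec):
            c = to_au_space(feature_vec)
            return min(prototypes, key=lambda k: np.linalg.norm(c - prototypes[k]))

        sample = au_basis @ np.array([0.9, 0.1, 0.0, 0.1])   # synthetic smile-like face
        print(classify(sample))                              # -> "smile"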

  11. Facial paralysis in children.

    PubMed

    Reddy, Sashank; Redett, Richard

    2015-04-01

    Facial paralysis can have devastating physical and psychosocial consequences. These are particularly severe in children in whom loss of emotional expressiveness can impair social development and integration. The etiologies of facial paralysis, prospects for spontaneous recovery, and functions requiring restoration differ in children as compared with adults. Here we review contemporary management of facial paralysis with a focus on special considerations for pediatric patients.

  12. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  13. Motif3D: Relating protein sequence motifs to 3D structure.

    PubMed

    Gaulton, Anna; Attwood, Teresa K

    2003-07-01

    Motif3D is a web-based protein structure viewer designed to allow sequence motifs, and in particular those contained in the fingerprints of the PRINTS database, to be visualised on three-dimensional (3D) structures. Additional functionality is provided for the rhodopsin-like G protein-coupled receptors, enabling fingerprint motifs of any of the receptors in this family to be mapped onto the single structure available, that of bovine rhodopsin. Motif3D can be used via the web interface available at: http://www.bioinf.man.ac.uk/dbbrowser/motif3d/motif3d.html.

  14. Assessing 3d Photogrammetry Techniques in Craniometrics

    NASA Astrophysics Data System (ADS)

    Moshobane, M. C.; de Bruyn, P. J. N.; Bester, M. N.

    2016-06-01

    Morphometrics (the measurement of morphological features) has been revolutionized by the creation of new techniques to study how organismal shape co-varies with several factors such as ecophenotypy. Ecophenotypy refers to the divergence of phenotypes due to developmental changes induced by local environmental conditions, producing distinct ecophenotypes. None of the techniques hitherto utilized could explicitly address organismal shape in a complete biological form, i.e. three-dimensionally. This study investigates the use of the commercial Photomodeler Scanner® (PMSc®) three-dimensional (3D) modelling software to produce accurate, high-resolution 3D models, in this case of Subantarctic fur seal (Arctocephalus tropicalis) and Antarctic fur seal (Arctocephalus gazella) skulls, which allow for 3D measurements. Using this method, sixteen accurate 3D skull models were produced and five metrics were determined. The 3D linear measurements were compared to measurements taken manually with a digital caliper. In addition, repetitive measurements were recorded by varying researchers to determine repeatability. To allow for comparison, straight-line measurements were taken with the software, assuming that close accord with all manually measured features would illustrate the model's accurate replication of reality. Measurements were not significantly different, demonstrating that realistic 3D skull models can be successfully produced to provide a consistent basis for craniometrics, with the additional benefit of allowing non-linear measurements if required.
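
    As a hedged illustration of the straight-line measurements mentioned above, a craniometric distance on a 3D model reduces to the Euclidean norm between two landmark coordinates; the landmark names and coordinates here are made up, not taken from the study.

        import numpy as np

        # Hypothetical landmark coordinates (mm) digitised on a 3D skull model
        landmarks = {"prosthion":      np.array([10.2, 3.1, 55.0]),
                     "opisthocranion": np.array([12.8, 5.0, -180.4])}

        def distance_mm(a, b):
            """Straight-line distance between two named landmarks."""
            return float(np.linalg.norm(landmarks[a] - landmarks[b]))

        print(round(distance_mm("prosthion", "opisthocranion"), 2))  # compare with calipers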

  15. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  16. Extra Dimensions: 3D and Time in PDF Documentation

    SciTech Connect

    Graf, Norman A.; /SLAC

    2011-11-10

    High energy physics is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide audience. In this talk, we present examples of HEP applications which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input. Using this technique, higher dimensional data, such as LEGO plots or time-dependent information can be included in PDF files. In principle, a complete event display, with full interactivity, can be incorporated into a PDF file. This would allow the end user not only to customize the view and representation of the data, but to access the underlying data itself.

  17. Applied 3D printing for microscopy in health science research

    NASA Astrophysics Data System (ADS)

    Brideau, Craig; Zareinia, Kourosh; Stys, Peter

    2015-03-01

    The rapid prototyping capability offered by 3D printing is considered advantageous for commercial applications. However, the ability to quickly produce precision custom devices is highly beneficial in the research laboratory setting as well. Biological laboratories require the manipulation and analysis of delicate living samples, so the ability to create custom holders, support equipment, and adapters allows the extension of existing laboratory machines. Applications include camera adapters and stage sample holders for microscopes, surgical guides for tissue preparation, and small precision tools customized to unique specifications. Where high precision is needed, especially the reproduction of fine features, a printer with a high resolution is needed. However, cheaper, lower-resolution commercial printers have been shown to be more than adequate for less demanding projects. For direct manipulation of delicate samples, biocompatible raw materials are often required, complicating the printing process. This paper will examine some examples of 3D-printed objects for laboratory use, and provide an overview of the requirements for 3D printing for this application. Materials, printing resolution, production, and ease of use will all be reviewed with an eye to producing better printers and techniques for laboratory applications. Specific case studies will highlight applications for 3D-printed devices in live animal imaging for both microscopy and Magnetic Resonance Imaging.

  18. 3D design tools improve efficiency and accuracy of a Hanford site nuclear waste storage project

    SciTech Connect

    NIELSEN, B.L.

    2003-03-23

    The complex effort of cleaning up the Hanford K Basins is separated into several individual projects. Fluor Hanford and Fluor Federal Services modeled key elements using a 3D parametric modeling program for mechanical design with training animations.

  19. A 3-D Rainfall Flyby of Tropical Storm Danielle Over Mexico

    NASA Video Gallery

    This flyby animation of 3-D data from NASA/JAXA's Global Precipitation Measurement mission or GPM satellite looks at the rainfall occurring in fading Tropical Storm Danielle over Mexico. On June 21...

  20. Comparative analysis of the course of the facial and transverse facial arteries in selected ruminant species.

    PubMed

    Zdun, Maciej; Frąckowiak, Hieronim; Kowalczyk, Karolina; Maryniak, Hieronim; Kiełtyka-Kurc, Agata

    2014-05-01

    The paper describes variations in the patterns of origin from the main arteries, as well as in branching and course, demonstrated on the basis of selected facial arteries in several species of ruminants. The studies included 35 species from 27 genera and 9 subfamilies belonging to the families Bovidae, Cervidae, Giraffidae and Moschidae of the suborder Ruminantia, including species maintained by humans. Altogether, 435 preparations of head arteries were studied. Arteries of the examined animals were filled with acetone-dissolved stained vinyl superchloride or stained latex LBS3060. The facial artery was found to originate from the main arteries of the head in three different manners. In species devoid of facial arteries, the presence of a pronounced transverse facial artery could be demonstrated. The division of the animals into large and small ruminants, generally accepted by authors of animal anatomy textbooks, was found to be oversimplified and not universally applicable with respect to the patterns of origin and course of the facial artery and the transverse facial artery.

  1. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for educational purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  2. CFL3D, FUN3d, and NSU3D Contributions to the Fifth Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Laflin, Kelly R.; Chaffin, Mark S.; Powell, Nicholas; Levy, David W.

    2013-01-01

    Results presented at the Fifth Drag Prediction Workshop using CFL3D, FUN3D, and NSU3D are described. These are calculations on the workshop-provided grids and drag-adapted grids. The NSU3D results have been updated to reflect an improvement to the skin friction calculation on skewed grids. FUN3D results generated after the workshop are included for custom participant-generated grids and a grid from a previous workshop. Uniform grid refinement at the design condition shows a tight grouping in calculated drag, where the variation in the pressure component of drag is larger than the skin friction component. At this design condition, a fine-grid drag value was predicted with a smaller drag-adjoint-adapted grid via tetrahedral adaptation to a metric and mixed-element subdivision. The buffet study produced larger variation than the design case, which is attributed to large differences in the predicted side-of-body separation extent. Various modeling and discretization approaches had a strong impact on predicted side-of-body separation. This large wing root separation bubble was not observed in wind tunnel tests, indicating that more work is necessary in modeling wing root juncture flows to predict experiments.

  3. Real-time 3D-surface-guided head refixation useful for fractionated stereotactic radiotherapy

    SciTech Connect

    Li Shidong; Liu Dezhi; Yin Gongjie; Zhuang Ping; Geng, Jason

    2006-02-15

    Accurate and precise head refixation in fractionated stereotactic radiotherapy has been achieved through alignment of real-time 3D-surface images with a reference surface image. The reference surface image is either a 3D optical surface image taken at simulation with the desired treatment position, or a CT/MRI-surface rendering in the treatment plan with corrections for patient motion during CT/MRI scans and partial volume effects. The real-time 3D surface images are rapidly captured by using a 3D video camera mounted on the ceiling of the treatment vault. Any facial expression such as mouth opening that affects surface shape and location can be avoided using a new facial monitoring technique. The image artifacts on the real-time surface can generally be removed by setting a threshold of jumps at the neighboring points while preserving detailed features of the surface of interest. Such a real-time surface image, registered in the treatment machine coordinate system, provides a reliable representation of the patient head position during the treatment. A fast automatic alignment between the real-time surface and the reference surface using a modified iterative-closest-point method leads to an efficient and robust surface-guided target refixation. Experimental and clinical results demonstrate the excellent efficacy of <2 min set-up time, the desired accuracy and precision of <1 mm in isocenter shifts, and <1 deg. in rotation.
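
    The record reports aligning each real-time surface to the reference surface with a modified iterative-closest-point method. Below is a minimal, generic point-to-point ICP sketch (brute-force correspondences and a Kabsch rigid fit), not the modified clinical algorithm; the toy surfaces are synthetic.

        import numpy as np

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(source, reference, n_iter=30):
            """Align `source` (Nx3) to `reference` (Mx3) by repeated nearest-neighbour matching."""
            src = source.copy()
            for _ in range(n_iter):
                d2 = ((src[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
                matched = reference[d2.argmin(axis=1)]    # closest reference point per source point
                R, t = best_rigid_transform(src, matched)
                src = src @ R.T + t
            return src

        # Toy check: recover a small known rotation and shift of a random surface patch
        rng = np.random.default_rng(1)
        ref = rng.normal(size=(200, 3))
        a = np.deg2rad(5)
        Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
        moved = ref @ Rz.T + np.array([0.2, -0.1, 0.05])
        print(np.abs(icp(moved, ref) - ref).max())   # residual misalignment, expected small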

  4. Facial image identification using Photomodeler.

    PubMed

    Lynnerup, Niels; Andersen, Marie; Lauritsen, Helle Petri

    2003-09-01

    We present the results of a preliminary study on the use of 3-D software (Photomodeler) for identification purposes. Perpetrators may be photographed or filmed by surveillance systems. The police may wish to have these images compared to photographs of suspects. The surveillance imagery will often consist of many images of the same person taken from different angles. We wanted to see if it was possible to combine such a suite of images into useful 3-D renderings of facial proportions. Fifteen male adults were photographed from four different angles. Based on these photographs, a 3-D wireframe model was produced by Photomodeler. The wireframe models were then rotated to full lateral and frontal views, and compared to like sets of photographs of the subjects. In blind trials, 9/15 of the wireframe models were assigned to the correct sets of photographs. In 5/15 cases, the wireframe models were assigned to several sets, including the correct set. Only in one case was a wireframe model not assigned to a correct set of photographs at all.

  5. Self assembled structures for 3D integration

    NASA Astrophysics Data System (ADS)

    Rao, Madhav

    Three-dimensional (3D) micro-scale structures attached to a silicon substrate have various applications in microelectronics. However, formation of 3D structures using conventional micro-fabrication techniques is not efficient and requires precise control of processing parameters. Self assembly is a method for creating 3D structures that takes advantage of surface area minimization phenomena. Solder based self assembly (SBSA), the subject of this dissertation, uses solder as a facilitator in the formation of 3D structures from 2D patterns. Etching a sacrificial layer underneath a portion of the 2D pattern allows the solder reflow step to pull those areas out of the substrate plane, resulting in a folded 3D structure. Initial studies using the SBSA method demonstrated low yields in the formation of five different polyhedra. The failures in folding were primarily attributed to nonuniform solder deposition on the underlying metal pads. The dip soldering method was analyzed and subsequently refined. A modified dip soldering process provided improved yield among the polyhedra. Solder bridging, referring to the joining of solder deposited on different metal patterns into a single entity, influenced the folding mechanism. In general, design parameters such as small gap-spacings and thick metal pads were found to favor solder bridging for all patterns studied. Two types of soldering, face and edge soldering, were analyzed. Face soldering refers to the application of solder on the entire metal face. Edge soldering indicates application of solder only on the edges of the metal face. Mechanical grinding showed that face-soldered SBSA structures were void free and robust in nature. In addition, the face-soldered 3D structures provide a consistent heat-resistant solder standoff height that serves as an attachment in the integration of dissimilar electronic technologies. Face-soldered 3D structures were developed on the underlying conducting channel to determine the thermo-electric reliability of

  6. PLOT3D Export Tool for Tecplot

    NASA Technical Reports Server (NTRS)

    Alter, Stephen

    2010-01-01

    The PLOT3D export tool for Tecplot solves the problem of modified data being impossible to output for use by another computational science solver. The PLOT3D Exporter add-on enables the use of the most commonly available visualization tools to engineers for output of a standard format. The exportation of PLOT3D data from Tecplot has far reaching effects because it allows for grid and solution manipulation within a graphical user interface (GUI) that is easily customized with macro language-based and user-developed GUIs. The add-on also enables the use of Tecplot as an interpolation tool for solution conversion between different grids of different types. This one add-on enhances the functionality of Tecplot so significantly, it offers the ability to incorporate Tecplot into a general suite of tools for computational science applications as a 3D graphics engine for visualization of all data. Within the PLOT3D Export Add-on are several functions that enhance the operations and effectiveness of the add-on. Unlike Tecplot output functions, the PLOT3D Export Add-on enables the use of the zone selection dialog in Tecplot to choose which zones are to be written by offering three distinct options - output of active, inactive, or all zones (grid blocks). As the user modifies the zones to output with the zone selection dialog, the zones to be written are similarly updated. This enables the use of Tecplot to create multiple configurations of a geometry being analyzed. For example, if an aircraft is loaded with multiple deflections of flaps, by activating and deactivating different zones for a specific flap setting, new specific configurations of that aircraft can be easily generated by only writing out specific zones. Thus, if ten flap settings are loaded into Tecplot, the PLOT3D Export software can output ten different configurations, one for each flap setting.

  7. A microfluidic device for 2D to 3D and 3D to 3D cell navigation

    NASA Astrophysics Data System (ADS)

    Shamloo, Amir; Amirifar, Leyla

    2016-01-01

    Microfluidic devices have received wide attention and shown great potential in the field of tissue engineering and regenerative medicine. Investigating cell response to various stimulations is much more accurate and comprehensive with the aid of microfluidic devices. In this study, we introduced a microfluidic device by which the matrix density as a mechanical property and the concentration profile of a biochemical factor as a chemical property could be altered. Our microfluidic device has a cell tank and a cell culture chamber to mimic both 2D to 3D and 3D to 3D migration of three types of cells. Fluid shear stress is negligible on the cells and a stable concentration gradient can be obtained by diffusion. The device was designed by a numerical simulation so that the uniformity of the concentration gradients throughout the cell culture chamber was obtained. Adult neural cells were cultured within this device and they showed different branching and axonal navigation phenotypes within varying nerve growth factor (NGF) concentration profiles. Neural stem cells were also cultured within varying collagen matrix densities while exposed to NGF concentrations and they experienced 3D to 3D collective migration. By generating vascular endothelial growth factor concentration gradients, adult human dermal microvascular endothelial cells also migrated in a 2D to 3D manner and formed a stable lumen within a specific collagen matrix density. It was observed that a minimum absolute concentration and concentration gradient were required to stimulate migration of all types of the cells. This device has the advantage of changing multiple parameters simultaneously and is expected to have wide applicability in cell studies.

  8. Modified method of analysis for surgical correction of facial asymmetry

    PubMed Central

    Christou, Terpsithea; Kau, Chung How; Waite, Peter D.; Kheir, Nadia Abou; Mouritsen, David

    2013-01-01

    Introduction: The aim of this article was to present a new method of analysis using a three-dimensional (3D) model of an actual patient with facial asymmetry, for the assessment of her facial changes and the quantification of the deformity. This patient underwent orthodontic and surgical treatment to correct a severe facial asymmetry. Materials and Methods: The surgical procedure was complex and the case was challenging. The treatment procedure required an orthodontic approach followed by Le Fort I osteotomy, bilateral sagittal split osteotomy, septorhinoplasty and chin advancement. The imaging devices used in this paper are the 3dMDface system (Atlanta, GA) and the Kodak 9500 Cone Beam 3D system (Atlanta, GA). 3D digital stereophotogrammetric cameras were used for image acquisition, and a reverse modeling software package, the Rapidform 2006 Software (INUS Technology, Seoul, Korea), was applied for surface registration. The images were also combined and analyzed using the 3dMD vultus (Atlanta, GA) software and InVivoDental 5.2.3 (San Jose, CA). All data gathered from the previously mentioned sources were adjusted to the patient's natural head position. Results: The 3D images of the patient were taken and analyzed in three time frames: before orthodontic and surgical treatment (T1), at the end of orthodontic therapy and before surgery (T2), and about 2 months after surgery (T3). The patient showed significant improvement of her skeletal discrepancy between T1 and T3. In addition, there were some dentoalveolar changes between T1 and T2, as expected. The 3D analysis of surgical changes on the 3D models correlated very well to the actual surgical movements. Conclusions: The use of these 3D imaging tools offers reliable accuracy for assessing and quantifying changes that occur after surgery. This study shows supportive evidence for the use of 3D imaging techniques. PMID:24205481

  9. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGES

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  10. RAG-3D: A search tool for RNA 3D substructures

    SciTech Connect

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  11. RAG-3D: a search tool for RNA 3D substructures

    PubMed Central

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-01-01

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547
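
    The three records above describe the same tool: RNA 3D structures are reduced to graphs whose vertices are secondary-structure elements, so substructure search becomes subgraph matching. The sketch below only illustrates that reduction using the networkx library; the element names and toy graphs are invented and unrelated to the actual RAG-3D catalog.

        import networkx as nx
        from networkx.algorithms import isomorphism

        # Query motif: a small chain of secondary-structure elements
        query = nx.Graph([("hairpin", "internal_loop"),
                          ("internal_loop", "junction")])

        # A larger, hypothetical library structure represented the same way
        library_entry = nx.Graph([("hairpin", "internal_loop"),
                                  ("internal_loop", "junction"),
                                  ("junction", "hairpin2"),
                                  ("junction", "bulge")])

        # Subgraph isomorphism: does the query motif occur inside the library graph?
        gm = isomorphism.GraphMatcher(library_entry, query)
        print(gm.subgraph_is_isomorphic())   # True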

  12. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
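
    ICER-3D's core step is a three-dimensional wavelet decomposition that exploits correlation along both spatial axes and the spectral axis. As a hedged, much-simplified stand-in (a single-level separable Haar transform, not the ICER-3D filter bank), the sketch below splits a toy hyperspectral cube into low- and high-pass halves along each axis.

        import numpy as np

        def haar_3d_level(cube):
            """One level of a separable 3D Haar wavelet transform; all dimensions must be even."""
            def haar_axis(a, axis):
                a = np.moveaxis(a, axis, 0)
                lo = (a[0::2] + a[1::2]) / np.sqrt(2)   # averages    -> low-pass half
                hi = (a[0::2] - a[1::2]) / np.sqrt(2)   # differences -> high-pass half
                return np.moveaxis(np.concatenate([lo, hi]), 0, axis)
            out = cube.astype(float)
            for ax in range(3):                          # band, row, column axes
                out = haar_axis(out, ax)
            return out

        # Toy hyperspectral cube: 4 bands of 8x8 pixels
        cube = np.arange(4 * 8 * 8, dtype=float).reshape(4, 8, 8)
        print(haar_3d_level(cube).shape)   # (4, 8, 8): eight subbands packed into one array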

  13. Robust bioengineered 3D functional human intestinal epithelium

    PubMed Central

    Chen, Ying; Lin, Yinan; Davis, Kimberly M.; Wang, Qianrui; Rnjak-Kovacina, Jelena; Li, Chunmei; Isberg, Ralph R.; Kumamoto, Carol A.; Mecsas, Joan; Kaplan, David L.

    2015-01-01

    Intestinal functions are central to human physiology, health and disease. Options to study these functions with direct relevance to the human condition remain severely limited when using conventional cell cultures, microfluidic systems, organoids, animal surrogates or human studies. To replicate in vitro the tissue architecture and microenvironments of native intestine, we developed a 3D porous protein scaffolding system, containing a geometrically-engineered hollow lumen, with adaptability to both large and small intestines. These intestinal tissues demonstrated representative human responses by permitting continuous accumulation of mucous secretions on the epithelial surface, establishing low oxygen tension in the lumen, and interacting with gut-colonizing bacteria. The newly developed 3D intestine model enabled months-long sustained access to these intestinal functions in vitro, readily integrable with a multitude of different organ mimics and will therefore ensure a reliable ex vivo tissue system for studies in a broad context of human intestinal diseases and treatments. PMID:26374193

  14. 3D Game Content Distributed Adaptation in Heterogeneous Environments

    NASA Astrophysics Data System (ADS)

    Morán, Francisco; Preda, Marius; Lafruit, Gauthier; Villegas, Paulo; Berretty, Robert-Paul

    2007-12-01

    Most current multiplayer 3D games can only be played on a single dedicated platform (a particular computer, console, or cell phone), requiring specifically designed content and communication over a predefined network. Below we show how, by using signal processing techniques such as multiresolution representation and scalable coding for all the components of a 3D graphics object (geometry, texture, and animation), we enable online dynamic content adaptation, and thus delivery of the same content over heterogeneous networks to terminals with very different profiles, and its rendering on them. We present quantitative results demonstrating how the best displayed quality versus computational complexity versus bandwidth tradeoffs have been achieved, given the distributed resources available over the end-to-end content delivery chain. Additionally, we use state-of-the-art, standardised content representation and compression formats (MPEG-4 AFX, JPEG 2000, XML), enabling deployment over existing infrastructure, while keeping hooks to well-established practices in the game industry.

  15. 3D Gel Map of Arabidopsis Complex I

    PubMed Central

    Peters, Katrin; Belt, Katharina; Braun, Hans-Peter

    2013-01-01

    Complex I has a unique structure in plants and includes extra subunits. Here, we present a novel study to define its protein constituents. Mitochondria were isolated from Arabidopsis thaliana cell cultures, leaves, and roots. Subunits of complex I were resolved by 3D blue-native (BN)/SDS/SDS-PAGE and identified by mass spectrometry. Overall, 55 distinct proteins were found, seven of which occur in pairs of isoforms. We present evidence that Arabidopsis complex I consists of 49 distinct types of subunits, 40 of which represent homologs of bovine complex I. The nine other subunits represent special proteins absent in the animal lineage of eukaryotes, most prominently a group of subunits related to bacterial gamma-type carbonic anhydrases. A GelMap (http://www.gelmap.de/arabidopsis-3d-complex-i/) is presented to promote future complex I research in Arabidopsis thaliana. PMID:23761796

  16. Visualization and Analysis of 3D Gene Expression Data

    SciTech Connect

    Bethel, E. Wes; Rubel, Oliver; Weber, Gunther H.; Hamann, Bernd; Hagen, Hans

    2007-10-25

    Recent methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data open the way for new analysis of the complex gene regulatory networks controlling animal development. To support analysis of this novel and highly complex data we developed PointCloudXplore (PCX), an integrated visualization framework that supports dedicated multi-modal, physical and information visualization views along with algorithms to aid in analyzing the relationships between gene expression levels. Using PCX, we helped our science stakeholders to address many questions in 3D gene expression research, e.g., to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.

  17. T-HEMP3D user manual

    SciTech Connect

    Turner, D.

    1983-08-01

    The T-HEMP3D (Transportable HEMP3D) computer program is a derivative of the STEALTH three-dimensional thermodynamics code developed by Science Applications, Inc., under the direction of Ron Hofmann. STEALTH, in turn, is based entirely on the original HEMP3D code written at Lawrence Livermore National Laboratory. The primary advantage STEALTH has over its predecessors is that it was designed using modern structured design techniques, with rigorous programming standards enforced. This yields two benefits. First, the code is easily changeable; this is a necessity for a physics code used for research. The second benefit is that the code is easily transportable between different types of computers. The STEALTH program was transferred to LLNL under a cooperative development agreement. Changes were made primarily in three areas: material specification, coordinate generation, and the addition of sliding surface boundary conditions. The code was renamed T-HEMP3D to avoid confusion with other versions of STEALTH. This document summarizes the input to T-HEMP3D, as used at LLNL. It does not describe the physics simulated by the program, nor the numerical techniques employed. Furthermore, it does not describe the separate job steps of coordinate generation and post-processing, including graphical display of results. (WHK)

  18. The importance of 3D dosimetry

    NASA Astrophysics Data System (ADS)

    Low, Daniel

    2015-01-01

    Radiation therapy has been getting progressively more complex for the past 20 years. Early radiation therapy techniques needed only basic dosimetry equipment: motorized water phantoms, ionization chambers, and basic radiographic film techniques. As intensity modulated radiation therapy and image guided therapy came into widespread practice, medical physicists were challenged with developing effective and efficient dose measurement techniques. The complex 3-dimensional (3D) nature of the dose distributions that were being delivered demanded the development of more quantitative and more thorough methods for dose measurement. The quality assurance vendors developed a wide array of multidetector arrays that have been enormously useful for measuring and characterizing dose distributions, and these have been made especially useful with the advent of 3D dose calculation systems based on the array measurements, as well as measurements made using film and portal imagers. Other vendors have been providing 3D calculations based on data from the linear accelerator or the record and verify system, providing thorough evaluation of the dose but lacking quality assurance (QA) of the dose delivery process, including machine calibration. The current state of 3D dosimetry is one of flux. The vendors and professional associations are trying to determine the optimal balance between thorough QA, labor efficiency, and quantitation. This balance will take some time to reach, but a necessary component will be the 3D measurement and independent calculation of delivered radiation therapy dose distributions.

  19. 3D Spray Droplet Distributions in Sneezes

    NASA Astrophysics Data System (ADS)

    Techet, Alexandra; Scharfman, Barry; Bourouiba, Lydia

    2015-11-01

    3D spray droplet clouds generated during human sneezing are investigated using the Synthetic Aperture Feature Extraction (SAFE) method, which relies on light field imaging (LFI) and synthetic aperture (SA) refocusing computational photographic techniques. An array of nine high-speed cameras is used to image sneeze droplets and track them in 3D space and time (3D + T). An additional high-speed camera is utilized to track the motion of the head during sneezing. In the SAFE method, the raw images recorded by each camera in the array are preprocessed and binarized, simplifying post-processing after image refocusing and enabling the extraction of feature sizes and positions in 3D + T. These binary images are refocused using either additive or multiplicative methods, combined with thresholding. Sneeze droplet centroids, radii, distributions and trajectories are determined and compared with existing data. The reconstructed 3D droplet centroids and radii enable a more complete understanding of the physical extent and fluid dynamics of sneeze ejecta. These measurements are important for understanding the infectious disease transmission potential of sneezes in various indoor environments.
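
    In the SAFE method the binarized camera-array images are refocused by shifting and combining them (additively or multiplicatively) and thresholding the result. The sketch below is a toy additive version with made-up per-camera shifts for a single focal plane; it is not the calibrated refocusing used in the study.

        import numpy as np

        def additive_refocus(binary_images, shifts, threshold=0.6):
            """Shift each camera's binary image for the chosen focal plane, average, threshold."""
            acc = np.zeros(binary_images[0].shape)
            for img, (dy, dx) in zip(binary_images, shifts):
                acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            acc /= len(binary_images)
            return acc > threshold        # in-focus features reinforce; out-of-focus ones blur away

        # Toy example: three cameras see the same droplet at slightly shifted pixel positions
        imgs = [np.zeros((16, 16)) for _ in range(3)]
        imgs[0][8, 8] = imgs[1][8, 9] = imgs[2][8, 7] = 1.0
        shifts = [(0, 0), (0, -1), (0, 1)]               # per-camera shifts for this focal plane
        print(np.argwhere(additive_refocus(imgs, shifts)))   # droplet refocused at (8, 8)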

  20. 3D dynamic roadmapping for abdominal catheterizations.

    PubMed

    Bender, Frederik; Groher, Martin; Khamene, Ali; Wein, Wolfgang; Heibel, Tim Hauke; Navab, Nassir

    2008-01-01

    Despite rapid advances in interventional imaging, the navigation of a guide wire through abdominal vasculature remains, not only for novice radiologists, a difficult task. Since this navigation is mostly based on 2D fluoroscopic image sequences from one view, the process is slowed down significantly due to missing depth information and patient motion. We propose a novel approach for 3D dynamic roadmapping in deformable regions by predicting the location of the guide wire tip in a 3D vessel model from the tip's 2D location, respiratory motion analysis, and view geometry. In a first step, the method compensates for the apparent respiratory motion in 2D space before backprojecting the 2D guide wire tip into three-dimensional space, using a given projection matrix. To counteract the error associated with the projection parameters and the motion compensation, as well as the ambiguity caused by vessel deformation, we establish a statistical framework, which computes a reliable estimate of the guide wire tip location within the 3D vessel model. With this 2D-to-3D transfer, the navigation can be performed from arbitrary viewing angles, disconnected from the static perspective view of the fluoroscopic sequence. Tests on a realistic breathing phantom and on synthetic data with a known ground truth clearly reveal the superiority of our approach compared to naive methods for 3D roadmapping. The concepts and information presented in this paper are based on research and are not commercially available. PMID:18982662
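
    A central step in the described roadmapping is transferring the 2D guide-wire tip into the 3D vessel model using the view's projection matrix. The sketch below is a simplified, hedged illustration of that 2D-to-3D transfer only (it ignores the respiratory-motion compensation and the statistical estimation in the paper): it backprojects a pixel to a viewing ray and snaps it to the nearest centreline point. The camera matrix and vessel geometry are invented.

        import numpy as np

        def backproject_ray(P, uv):
            """Turn a 2D image point into a 3D viewing ray given a 3x4 projection matrix P."""
            M, p4 = P[:, :3], P[:, 3]
            centre = -np.linalg.solve(M, p4)                       # camera centre
            direction = np.linalg.solve(M, np.array([uv[0], uv[1], 1.0]))
            return centre, direction / np.linalg.norm(direction)

        def snap_to_vessel(centre, direction, centreline_pts):
            """Pick the vessel-centreline point closest to the viewing ray."""
            rel = centreline_pts - centre
            along = rel @ direction
            perp = rel - np.outer(along, direction)
            return centreline_pts[np.linalg.norm(perp, axis=1).argmin()]

        # Toy geometry: a simple perspective camera and a straight vessel segment
        P = np.hstack([np.diag([1000.0, 1000.0, 1.0]), np.zeros((3, 1))])
        vessel = np.stack([np.linspace(-20, 20, 81),
                           np.full(81, 5.0),
                           np.full(81, 100.0)], axis=1)
        c, d = backproject_ray(P, uv=(100.0, 50.0))
        print(snap_to_vessel(c, d, vessel))                        # -> [ 10.   5. 100.]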

  1. 3D bioprinting for engineering complex tissues.

    PubMed

    Mandrycky, Christian; Wang, Zongjie; Kim, Keekyoung; Kim, Deok-Ho

    2016-01-01

    Bioprinting is a 3D fabrication technology used to precisely dispense cell-laden biomaterials for the construction of complex 3D functional living tissues or artificial organs. While still in its early stages, bioprinting strategies have demonstrated their potential use in regenerative medicine to generate a variety of transplantable tissues, including skin, cartilage, and bone. However, current bioprinting approaches still have technical challenges in terms of high-resolution cell deposition, controlled cell distributions, vascularization, and innervation within complex 3D tissues. While no one-size-fits-all approach to bioprinting has emerged, it remains an on-demand, versatile fabrication technique that may address the growing organ shortage as well as provide a high-throughput method for cell patterning at the micrometer scale for broad biomedical engineering applications. In this review, we introduce the basic principles, materials, integration strategies and applications of bioprinting. We also discuss the recent developments, current challenges and future prospects of 3D bioprinting for engineering complex tissues. Combined with recent advances in human pluripotent stem cell technologies, 3D-bioprinted tissue models could serve as an enabling platform for high-throughput predictive drug screening and more effective regenerative therapies.

  2. Shim3d Helmholtz Solution Package

    2009-01-29

    This suite of codes solves the Helmholtz Equation for the steady-state propagation of single-frequency electromagnetic radiation in an arbitrary 2D or 3D dielectric medium. Materials can be either transparent or absorptive (including metals) and are described entirely by their shape and complex dielectric constant. Dielectric boundaries are assumed to always fall on grid boundaries and the material within a single grid cell is considered to be uniform. Input to the problem is in the form of a Dirichlet boundary condition on a single boundary, and may be either analytic (Gaussian) in shape, or a mode shape computed using a separate code (such as the included eigenmode solver vwave20), and written to a file. Solution is via the finite difference method using Jacobi iteration for 3D problems or direct matrix inversion for 2D problems. Note that 3D problems that include metals will require different iteration parameters than described in the above reference. For structures with curved boundaries not easily modeled on a rectangular grid, the auxiliary codes helmholtz11 (2D), helm3d (semivectoral), and helmv3d (full vectoral) are provided. For these codes the finite difference equations are specified on a topologically regular triangular grid and solved using Jacobi iteration or direct matrix inversion as before. An automatic grid generator is supplied.
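
    The record states that the solver discretizes the Helmholtz equation on a grid and, for 3D problems, relaxes it with Jacobi iteration from a Dirichlet (e.g., Gaussian) boundary condition. The toy 2D sketch below shows the shape of such a sweep only; the grid, wavelength, and dielectric layout are made up, and realistic problems need the package's own iteration controls (plain Jacobi can converge very slowly, or not at all, for strongly indefinite cases).

        import numpy as np

        # Discretized (d2/dx2 + d2/dy2)E + k^2*eps*E = 0 with a Gaussian input on the left edge
        nx, ny, h = 40, 30, 0.05
        k0 = 2 * np.pi / 3.0                        # free-space wavenumber, arbitrary units
        eps = np.ones((ny, nx), dtype=complex)      # uniform transparent medium
        E = np.zeros((ny, nx), dtype=complex)
        y = np.arange(ny) * h
        E[:, 0] = np.exp(-((y - y.mean()) / (6 * h)) ** 2)   # Gaussian Dirichlet boundary

        for _ in range(1500):                       # plain Jacobi sweeps
            nbr = (np.roll(E, 1, 0) + np.roll(E, -1, 0) +
                   np.roll(E, 1, 1) + np.roll(E, -1, 1))
            E_new = nbr / (4.0 - (h * k0) ** 2 * eps)
            E_new[:, 0] = E[:, 0]                   # re-impose the input boundary
            E_new[0, :] = E_new[-1, :] = E_new[:, -1] = 0.0
            E = E_new

        print(np.abs(E).max())                      # peak field amplitude after the sweeps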

  3. Full-color holographic 3D printer

    NASA Astrophysics Data System (ADS)

    Takano, Masami; Shigeta, Hiroaki; Nishihara, Takashi; Yamaguchi, Masahiro; Takahashi, Susumu; Ohyama, Nagaaki; Kobayashi, Akihiko; Iwata, Fujio

    2003-05-01

    A holographic 3D printer is a system that produces a direct hologram with full-parallax information using the 3-dimensional data of a subject from a computer. In this paper, we present a proposal for the reproduction of full-color images with the holographic 3D printer. In order to realize the 3-dimensional color image, we selected the 3 laser wavelengths of red (λ=633nm), green (λ=533nm), and blue (λ=442nm), and we built a one-step optical system using a projection system and a liquid crystal display. The 3-dimensional color image is obtained by synthesizing, in a 2D array, the multiple exposures made with these 3 wavelengths on each 250mm elementary hologram, and by moving the recording medium on an x-y stage. For natural color reproduction in the holographic 3D printer, we take the approach of a digital processing technique based on color management technology. The matching between the input and output colors is performed by investigating, first, the relation between the gray level transmittance of the LCD and the diffraction efficiency of the hologram and, second, by measuring the color displayed by the hologram to establish a correlation. In our first experimental results, a non-linear functional relation for single and multiple exposures of the three components was found. These results are the first step in the realization of a natural color 3D image produced by the holographic color 3D printer.

  4. DYNA3D Code Practices and Developments

    SciTech Connect

    Lin, L.; Zywicz, E.; Raboin, P.

    2000-04-21

    DYNA3D is an explicit, finite element code developed to solve high rate dynamic simulations for problems of interest to the engineering mechanics community. The DYNA3D code has been under continuous development since 1976[1] by the Methods Development Group in the Mechanical Engineering Department of Lawrence Livermore National Laboratory. The pace of code development activities has substantially increased in the past five years, growing from one to between four and six code developers. This has necessitated the use of software tools such as CVS (Concurrent Versions System) to help manage multiple version updates. While on-line documentation with an Adobe PDF manual helps to communicate software developments, periodically a summary document describing recent changes and improvements in DYNA3D software is needed. The first part of this report describes issues surrounding software versions and source control. The remainder of this report details the major capability improvements since the last publicly released version of DYNA3D in 1996. Not included here are the many hundreds of bug corrections and minor enhancements, nor the development in DYNA3D between the manual release in 1993[2] and the public code release in 1996.

  5. BEAMS3D Neutral Beam Injection Model

    NASA Astrophysics Data System (ADS)

    McMillan, Matthew; Lazerson, Samuel A.

    2014-09-01

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous slowing down, and pitch angle scattering are modeled with the ADAS atomic physics database. Elementary benchmark calculations are presented to verify the collisionless particle orbits, NBI model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields. Notice: this manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the US Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
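
    As a loose illustration of following particle trajectories through a given 3D field (and only that; BEAMS3D's guiding-center model also includes drift, slowing-down and scattering terms), the sketch below streams a guiding center along magnetic field lines with a standard RK4 integrator. The field callback b_field and all parameter names are assumptions for illustration.

      import numpy as np

      def trace_field_line(x0, v_par, b_field, dt, steps):
          """RK4 integration of the parallel guiding-center motion dx/dt = v_par * b_hat(x).

          Curvature, grad-B and ExB drifts are deliberately neglected in this sketch.
          b_field(x) must return the magnetic field vector at position x (shape (3,)).
          """
          def rhs(x):
              B = b_field(x)
              return v_par * B / np.linalg.norm(B)            # unit vector along the field

          x = np.asarray(x0, dtype=float)
          path = [x.copy()]
          for _ in range(steps):
              k1 = rhs(x)
              k2 = rhs(x + 0.5 * dt * k1)
              k3 = rhs(x + 0.5 * dt * k2)
              k4 = rhs(x + dt * k3)
              x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
              path.append(x.copy())
          return np.array(path)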

  6. Lifting Object Detection Datasets into 3D.

    PubMed

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken the center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions such as 3D scanning or manual design, which scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  7. Magnetic Properties of 3D Printed Toroids

    NASA Astrophysics Data System (ADS)

    Bollig, Lindsey; Otto, Austin; Hilpisch, Peter; Mowry, Greg; Nelson-Cheeseman, Brittany; Renewable Energy; Alternatives Lab (REAL) Team

    Transformers are ubiquitous in electronics today. Although toroidal geometries perform most efficiently, transformers are traditionally made with rectangular cross-sections due to the lower manufacturing costs. Additive manufacturing techniques (3D printing) can easily achieve toroidal geometries by building up a part through a series of 2D layers. To get strong magnetic properties in a 3D printed transformer, a composite filament is used containing Fe dispersed in a polymer matrix. How the resulting 3D printed toroid responds to a magnetic field depends on two structural factors of the printed 2D layers: fill factor (planar density) and fill pattern. In this work, we investigate how the fill factor and fill pattern affect the magnetic properties of 3D printed toroids. The magnetic properties of the printed toroids are measured by a custom circuit that produces a hysteresis loop for each toroid. Toroids with various fill factors and fill patterns are compared to determine how these two factors can affect the magnetic field the toroid can produce. These 3D printed toroids can be used for numerous applications in order to increase the efficiency of transformers by making it possible for manufacturers to make a toroidal geometry.
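
    A hysteresis measurement of this kind typically drives a primary winding and integrates the voltage induced in a secondary winding. As a hedged sketch (the custom circuit in the abstract is not described, so the winding counts, core area and all names below are illustrative assumptions), a B-H loop can be recovered from sampled waveforms as follows.

      import numpy as np

      def bh_loop(i_primary, v_secondary, dt, n_p, n_s, area, mean_radius):
          """Recover H(t) and B(t) for a toroidal core from a two-winding measurement.

          H follows from Ampere's law around the mean magnetic path; B from integrating
          the secondary voltage, v = N_s * A * dB/dt.
          """
          path_length = 2.0 * np.pi * mean_radius              # mean magnetic path length [m]
          H = n_p * i_primary / path_length                    # magnetic field strength [A/m]
          flux_linkage = np.cumsum(v_secondary) * dt           # integral of v dt (rectangle rule)
          B = flux_linkage / (n_s * area)                      # flux density [T]
          B -= B.mean()                                        # remove the arbitrary integration offset
          return H, B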

  8. 3D bioprinting for engineering complex tissues.

    PubMed

    Mandrycky, Christian; Wang, Zongjie; Kim, Keekyoung; Kim, Deok-Ho

    2016-01-01

    Bioprinting is a 3D fabrication technology used to precisely dispense cell-laden biomaterials for the construction of complex 3D functional living tissues or artificial organs. While still in its early stages, bioprinting strategies have demonstrated their potential use in regenerative medicine to generate a variety of transplantable tissues, including skin, cartilage, and bone. However, current bioprinting approaches still have technical challenges in terms of high-resolution cell deposition, controlled cell distributions, vascularization, and innervation within complex 3D tissues. While no one-size-fits-all approach to bioprinting has emerged, it remains an on-demand, versatile fabrication technique that may address the growing organ shortage as well as provide a high-throughput method for cell patterning at the micrometer scale for broad biomedical engineering applications. In this review, we introduce the basic principles, materials, integration strategies and applications of bioprinting. We also discuss the recent developments, current challenges and future prospects of 3D bioprinting for engineering complex tissues. Combined with recent advances in human pluripotent stem cell technologies, 3D-bioprinted tissue models could serve as an enabling platform for high-throughput predictive drug screening and more effective regenerative therapies. PMID:26724184

  9. 3D culture for cardiac cells.

    PubMed

    Zuppinger, Christian

    2016-07-01

    This review discusses historical milestones, recent developments and challenges in the area of 3D culture models with cardiovascular cell types. Expectations in this area have been raised in recent years, but more relevant in vitro research, more accurate drug testing results, reliable disease models and insights leading to bioartificial organs are expected from the transition to 3D cell culture. However, the construction of organ-like cardiac 3D models currently remains a difficult challenge. The heart consists of highly differentiated cells in an intricate arrangement. Furthermore, electrical “wiring”, a vascular system and multiple cell types act in concert to respond to the rapidly changing demands of the body. Although cardiovascular 3D culture models have been predominantly developed for regenerative medicine in the past, their use in drug screening and for disease models has become more popular recently. Many sophisticated 3D culture models are currently being developed in this dynamic area of life science. This article is part of a Special Issue entitled: Cardiomyocyte Biology: Integration of Developmental and Environmental Cues in the Heart edited by Marcus Schaub and Hughes Abriel.

  10. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic image system with a size of 35x35x105 mm3. By integrating a microlens array (MLA) into the optical train of a handheld microscope, the biological specimen's image can be captured in a single shot for ease of use. With the light field raw data and processing software, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm that precisely determines depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light field microscope algorithm to these focal stacks, a set of cross sections is produced, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel utilization efficiency and reduce the crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) distinguishing two fluorescent particles of different colors separated by a cover glass within a 600um range, and show its focal stacks and 3-D positions.
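
    A common way to build such focal stacks from light field data is shift-and-add refocusing: each sub-aperture view is shifted in proportion to its offset from the central view and the shifted views are averaged. The sketch below assumes the raw data has already been rearranged into a 4D array of views; the array layout and the slope parameter are illustrative assumptions, not the authors' algorithm.

      import numpy as np

      def refocus(light_field, slope):
          """Shift-and-add refocusing of a 4D light field L[u, v, y, x].

          slope controls the synthetic focal plane; slope = 0 reproduces the
          simple average of all views.
          """
          U, V, H, W = light_field.shape
          u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
          out = np.zeros((H, W))
          for u in range(U):
              for v in range(V):
                  dy = int(round(slope * (u - u0)))            # integer-pixel shift per view
                  dx = int(round(slope * (v - v0)))
                  out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
          return out / (U * V)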

  11. 3D optical measuring technologies and systems

    NASA Astrophysics Data System (ADS)

    Chugui, Yuri V.

    2005-02-01

    The results of the R & D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to atomic and railway industry safety problems, are presented. This activity includes investigations of diffraction phenomena on some 3D objects, using an original constructive calculation method. Efficient algorithms for precisely determining the transverse and longitudinal sizes of 3D objects of constant thickness by the diffraction method were suggested, together with peculiarities of the formation of shadows and images of typical elements of extended objects. Ensuring the safety of nuclear reactors and running trains as well as their high exploitation reliability requires 100% noncontact precise inspection of the geometrical parameters of their components. To solve this problem we have developed methods and produced the technical vision measuring systems LMM, CONTROL, PROFIL, and technologies for noncontact 3D dimensional inspection of grid spacers and fuel elements for the nuclear reactors VVER-1000 and VVER-440, as well as the automatic laser diagnostic COMPLEX for noncontact inspection of geometric parameters of running freight car wheel pairs. The performances of these systems and the results of industrial testing are presented and discussed. The created devices are in pilot operation at atomic and railway companies.

  12. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate teat position estimation. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and affect animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat position, which is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results, with both TOF and RGBD cameras, show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  13. 3D in vitro technology for drug discovery.

    PubMed

    Hosseinkhani, Hossein

    2012-02-01

    Three-dimensional (3D) in vitro systems that can mimic organ and tissue structure and function in vivo will be of great benefit for a variety of biological applications, from basic biology to toxicity testing and drug discovery. There have been several attempts to generate 3D tissue models, but most of these models require costly equipment, and their most serious disadvantage is that they are too far from mature human organs in vivo. Because of these problems, research and development in drug discovery, toxicity testing and the biotech industry is highly expensive and involves the sacrifice of countless animals, and it takes several years to bring a single drug/product to the market or to establish the toxicity or otherwise of chemical entities. Our group has been actively working on several alternative models by merging biomaterials science, nanotechnology and biological principles to generate 3D in vitro living organs, to be called "Human Organs-on-Chip", that mimic natural organs/tissues, in order to reduce animal testing and clinical trials. We have fabricated a novel type of mechanically and biologically bio-mimicking collagen-based hydrogel that provides interconnected mini-wells in which 3D cell/organ culture of human samples, in a manner similar to human organs with extracellular matrix (ECM) molecules, is possible. These products mimic the physical, chemical, and biological properties of natural organs and tissues at different scales. This paper reviews the outcome of our several experiments so far in this direction and the future perspectives.

  14. Molecular control of facial morphology

    PubMed Central

    Liu, B.; Rooker, S.M.; Helms, J.A.

    2010-01-01

    We present a developmental perspective on the concept of phylotypic and phenotypic stages of craniofacial development. Within Orders of avians and mammals, a phylotypic period exists when the morphology of the facial prominences is minimally divergent. We postulate that species-specific facial variations arise as a result of subtle shifts in the timing and the duration of molecular pathway activity (e.g., heterochrony), and present evidence demonstrating a critical role for Wnt and FGF signaling in this process. The same molecular pathways that shape the vertebrate face are also implicated in craniofacial deformities, indicating that comparisons between and among animal species may represent a novel method for the identification of human craniofacial disease genes. PMID:19747977

  15. 3D Simulation: Microgravity Environments and Applications

    NASA Technical Reports Server (NTRS)

    Hunter, Steve L.; Dischinger, Charles; Estes, Samantha; Parker, Nelson C. (Technical Monitor)

    2001-01-01

    Most, if not all, 3-D and Virtual Reality (VR) software programs are designed for one-G gravity applications. Space environment simulations require gravity effects of one one-thousandth to one one-millionth of that at the Earth's surface (10(exp -3) - 10(exp -6) G), thus one must be able to generate simulations that replicate those microgravity effects upon simulated astronauts. Unfortunately, the software programs utilized by the National Aeronautics and Space Administration do not have the ability to readily neutralize the one-G gravity effect. This pre-programmed situation causes the engineer or analyst difficulty during microgravity simulations. Therefore, microgravity simulations require special techniques or additional code in order to apply the power of 3D graphic simulation to space-related applications. This paper discusses the problem and possible solutions to allow microgravity 3-D/VR simulations to be completed successfully without program code modifications.

  16. 3D differential phase contrast microscopy

    NASA Astrophysics Data System (ADS)

    Chen, Michael; Tian, Lei; Waller, Laura

    2016-03-01

    We demonstrate three-dimensional (3D) optical phase and amplitude reconstruction based on coded source illumination using a programmable LED array. Stacks of images along the optical axis are computed from intensities recorded under multiple off-axis illumination patterns. Based on the first Born approximation, a linear differential phase contrast (DPC) model is built between the 3D complex index of refraction and the intensity stacks. Therefore, 3D volume reconstruction can be achieved via a fast inversion method, without the intermediate 2D phase retrieval step. Our system employs spatially partially coherent illumination, so the transverse resolution achieves twice the NA of coherent systems, while axial resolution is also improved 2× as compared to holographic imaging.
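
    The fast inversion of a linear DPC model is often posed as a regularized least-squares deconvolution in Fourier space. The sketch below shows that generic Tikhonov form for a single 2D slice, assuming the phase transfer function of each illumination pattern is already known; it illustrates the idea rather than the authors' 3D implementation.

      import numpy as np

      def dpc_tikhonov(intensities, transfer_fns, reg=1e-3):
          """Least-squares phase recovery from DPC measurements in Fourier space.

          intensities  : list of background-subtracted intensity images I_j
          transfer_fns : list of matching phase transfer functions H_j (Fourier domain)
          Computes phi = F^-1[ sum_j conj(H_j) F(I_j) / (sum_j |H_j|^2 + reg) ].
          """
          num = np.zeros_like(transfer_fns[0], dtype=complex)
          den = np.zeros(transfer_fns[0].shape)
          for I, H in zip(intensities, transfer_fns):
              num += np.conj(H) * np.fft.fft2(I)               # accumulate the normal equations
              den += np.abs(H) ** 2
          return np.real(np.fft.ifft2(num / (den + reg)))      # Tikhonov-regularized inverse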

  17. The CIFIST 3D model atmosphere grid.

    NASA Astrophysics Data System (ADS)

    Ludwig, H.-G.; Caffau, E.; Steffen, M.; Freytag, B.; Bonifacio, P.; Kučinskas, A.

    Grids of stellar atmosphere models and associated synthetic spectra are numerical products which have a large impact in astronomy due to their ubiquitous application in the interpretation of radiation from individual stars and stellar populations. 3D model atmospheres are now on the verge of becoming generally available for a wide range of stellar atmospheric parameters. We report on efforts to develop a grid of 3D model atmospheres for late-type stars within the CIFIST Team at Paris Observatory. The substantial demands in computational and human labor for the model production and post-processing render this apparently mundane task a challenging logistic exercise. At the moment the CIFIST grid comprises 77 3D model atmospheres with emphasis on dwarfs of solar and sub-solar metallicities. While the model production is still ongoing, first applications are already worked upon by the CIFIST Team and collaborators.

  18. 3D Printed Multimaterial Microfluidic Valve.

    PubMed

    Keating, Steven J; Gariboldi, Maria Isabella; Patrick, William G; Sharma, Sunanda; Kong, David S; Oxman, Neri

    2016-01-01

    We present a novel 3D printed multimaterial microfluidic proportional valve. The microfluidic valve is a fundamental primitive that enables the development of programmable, automated devices for controlling fluids in a precise manner. We discuss valve characterization results, as well as exploratory design variations in channel width, membrane thickness, and membrane stiffness. Compared to previous single material 3D printed valves that are stiff, these printed valves constrain fluidic deformation spatially, through combinations of stiff and flexible materials, to enable intricate geometries in an actuated, functionally graded device. Research presented marks a shift towards 3D printing multi-property programmable fluidic devices in a single step, in which integrated multimaterial valves can be used to control complex fluidic reactions for a variety of applications, including DNA assembly and analysis, continuous sampling and sensing, and soft robotics.

  19. Simple, portable, 3-D projection routine

    SciTech Connect

    Wagner, J.S.

    1987-04-01

    A 3-D projection routine is presented for use in computer graphics applications. The routine is simple enough to be considered portable, and easily modified for special problems. There is often the need to draw three-dimensional objects on a two-dimensional plotting surface. For the object to appear realistic, perspective effects must be included that allow near objects to appear larger than distant objects. Several 3-D projection routines are commercially available, but they are proprietary, not portable, and not easily changed by the user. Most are restricted to surfaces that are functions of two variables. This makes them unsuitable for viewing physical objects such as accelerator prototypes or propagating beams. This report develops a very simple algorithm for 3-D projections; the core routine is only 39 FORTRAN lines long. It can be easily modified for special problems. Software dependent calls are confined to simple drivers that can be exchanged when different plotting software packages are used.
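
    The original core routine is a short FORTRAN subroutine; the sketch below expresses the same basic idea in Python under assumed conventions (a camera at position eye, a world-to-camera rotation R, and a projection plane at distance d): transform points into camera coordinates and apply the perspective divide so that near objects appear larger than distant ones.

      import numpy as np

      def project(points, R, eye, d):
          """Perspective-project Nx3 world points onto a 2D image plane.

          R   : 3x3 rotation taking world axes to camera axes
          eye : camera position in world coordinates
          d   : distance from the eye to the projection plane
          Points are assumed to lie in front of the camera (positive camera-space z).
          """
          cam = (points - eye) @ R.T                           # world -> camera coordinates
          z = cam[:, 2]
          x2d = d * cam[:, 0] / z                              # perspective divide
          y2d = d * cam[:, 1] / z
          return np.column_stack([x2d, y2d])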

  20. Ames Lab 101: 3D Metals Printer

    SciTech Connect

    Ott, Ryan

    2014-02-13

    To meet one of the biggest energy challenges of the 21st century - finding alternatives to rare-earth elements and other critical materials - scientists will need new and advanced tools. The Critical Materials Institute at the U.S. Department of Energy's Ames Laboratory has a new one: a 3D printer for metals research. 3D printing technology, which has captured the imagination of both industry and consumers, enables ideas to move quickly from the initial design phase to final form using materials including polymers, ceramics, paper and even food. But the Critical Materials Institute (CMI) will apply the advantages of the 3D printing process in a unique way: for materials discovery.

  1. 3D Printed Multimaterial Microfluidic Valve

    PubMed Central

    Patrick, William G.; Sharma, Sunanda; Kong, David S.; Oxman, Neri

    2016-01-01

    We present a novel 3D printed multimaterial microfluidic proportional valve. The microfluidic valve is a fundamental primitive that enables the development of programmable, automated devices for controlling fluids in a precise manner. We discuss valve characterization results, as well as exploratory design variations in channel width, membrane thickness, and membrane stiffness. Compared to previous single material 3D printed valves that are stiff, these printed valves constrain fluidic deformation spatially, through combinations of stiff and flexible materials, to enable intricate geometries in an actuated, functionally graded device. Research presented marks a shift towards 3D printing multi-property programmable fluidic devices in a single step, in which integrated multimaterial valves can be used to control complex fluidic reactions for a variety of applications, including DNA assembly and analysis, continuous sampling and sensing, and soft robotics. PMID:27525809

  2. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high dynamic range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; then a structured light field is detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. This multidirectional depth estimation can effectively achieve high dynamic range 3D imaging. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method to perform high-quality 3D imaging for highly and lowly reflective surfaces. PMID:27607639
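
    One simple way to realize an independent mapping per ray is to fit a small model for every pixel against calibration captures at known depths and then apply it to new measurements. The sketch below assumes a linear depth = a*phase + b model per ray purely for illustration; the paper derives its own phase-depth mapping.

      import numpy as np

      def calibrate_per_ray(phases, depths):
          """Fit depth = a*phase + b independently for every ray (pixel).

          phases, depths : stacks of shape (n_planes, H, W) captured at known depths.
          Returns per-ray coefficient maps a and b from closed-form least squares.
          """
          n = phases.shape[0]
          sp, sd = phases.sum(0), depths.sum(0)
          spp, spd = (phases ** 2).sum(0), (phases * depths).sum(0)
          a = (n * spd - sp * sd) / (n * spp - sp ** 2)
          b = (sd - a * sp) / n
          return a, b

      def phase_to_depth(phase_map, a, b):
          """Apply the calibrated per-ray mapping to a new phase measurement."""
          return a * phase_map + b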

  3. 3D-printed microfluidic devices.

    PubMed

    Amin, Reza; Knowlton, Stephanie; Hart, Alexander; Yenilmez, Bekir; Ghaderinezhad, Fariba; Katebifar, Sara; Messina, Michael; Khademhosseini, Ali; Tasoglu, Savas

    2016-06-20

    Microfluidics is a flourishing field, enabling a wide range of biochemical and clinical applications such as cancer screening, micro-physiological system engineering, high-throughput drug testing, and point-of-care diagnostics. However, fabrication of microfluidic devices is often complicated, time consuming, and requires expensive equipment and sophisticated cleanroom facilities. Three-dimensional (3D) printing presents a promising alternative to traditional techniques such as lithography and PDMS-glass bonding, not only by enabling rapid design iterations in the development stage, but also by reducing the costs associated with institutional infrastructure, equipment installation, maintenance, and physical space. With the recent advancements in 3D printing technologies, highly complex microfluidic devices can be fabricated via single-step, rapid, and cost-effective protocols, making microfluidics more accessible to users. In this review, we discuss a broad range of approaches for the application of 3D printing technology to fabrication of micro-scale lab-on-a-chip devices.

  4. 3D Printed Multimaterial Microfluidic Valve.

    PubMed

    Keating, Steven J; Gariboldi, Maria Isabella; Patrick, William G; Sharma, Sunanda; Kong, David S; Oxman, Neri

    2016-01-01

    We present a novel 3D printed multimaterial microfluidic proportional valve. The microfluidic valve is a fundamental primitive that enables the development of programmable, automated devices for controlling fluids in a precise manner. We discuss valve characterization results, as well as exploratory design variations in channel width, membrane thickness, and membrane stiffness. Compared to previous single material 3D printed valves that are stiff, these printed valves constrain fluidic deformation spatially, through combinations of stiff and flexible materials, to enable intricate geometries in an actuated, functionally graded device. Research presented marks a shift towards 3D printing multi-property programmable fluidic devices in a single step, in which integrated multimaterial valves can be used to control complex fluidic reactions for a variety of applications, including DNA assembly and analysis, continuous sampling and sensing, and soft robotics. PMID:27525809

  5. 3-D Mesh Generation Nonlinear Systems

    SciTech Connect

    Christon, M. A.; Dovey, D.; Stillman, D. W.; Hallquist, J. O.; Rainsberger, R. B

    1994-04-07

    INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D, NIKE3D, FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. The primary new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition.

  6. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    Since many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems where a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, a huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps that are involved in the calibration of the system as well as the technique of generating human readable holoscopic images from the recorded data are discussed.

  7. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  8. 3-D Finite Element Heat Transfer

    1992-02-01

    TOPAZ3D is a three-dimensional implicit finite element computer code for heat transfer analysis. TOPAZ3D can be used to solve for the steady-state or transient temperature field on three-dimensional geometries. Material properties may be temperature-dependent and either isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions can be specified including temperature, flux, convection, and radiation. By implementing the user subroutine feature, users can model chemical reaction kinetics and allow for any type of functional representation of boundary conditions and internal heat generation. TOPAZ3D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in the material surrounding the enclosure. Additional features include thermal contact resistance across an interface, bulk fluids, phase change, and energy balances.
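
    TOPAZ3D itself is an implicit finite element code; purely as a toy illustration of the transient conduction problem it solves, the sketch below advances the 1D heat equation one backward-Euler step with fixed end temperatures. All names and the finite-difference discretization are illustrative assumptions, not TOPAZ3D's formulation.

      import numpy as np

      def heat_step_implicit_1d(T, alpha, dx, dt):
          """One backward-Euler step of dT/dt = alpha * d2T/dx2 with Dirichlet ends.

          Builds and solves the small linear system of the implicit update; the first
          and last rows are identity rows, so the end temperatures stay fixed.
          """
          n = T.size
          r = alpha * dt / dx ** 2
          A = np.eye(n)
          for i in range(1, n - 1):
              A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r, -r
          return np.linalg.solve(A, T)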

  9. Ames Lab 101: 3D Metals Printer

    ScienceCinema

    Ott, Ryan

    2016-07-12

    To meet one of the biggest energy challenges of the 21st century - finding alternatives to rare-earth elements and other critical materials - scientists will need new and advanced tools. The Critical Materials Institute at the U.S. Department of Energy's Ames Laboratory has a new one: a 3D printer for metals research. 3D printing technology, which has captured the imagination of both industry and consumers, enables ideas to move quickly from the initial design phase to final form using materials including polymers, ceramics, paper and even food. But the Critical Materials Institute (CMI) will apply the advantages of the 3D printing process in a unique way: for materials discovery.

  10. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor.

  11. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. This experiment uses multi-source data fusion technology for 3D scene reconstruction based on the principle of 3D laser scanning, taking the laser point cloud data as the basis and a Digital Ortho-photo Map as an auxiliary source, and using 3DsMAX software as the basic tool for building the three-dimensional scene. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the 3D scene is faithful to reality and its accuracy meets the needs of 3D scene construction.

  12. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiducial markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phone's display, as if the display was a window into this space. Besides lines, our prototype application also supports 3D geometry creation and geometry transformation operations, and it shows the location of the other user's phone.
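
    The pose-from-markers step can be sketched with standard tools: given the detected 2D corners of one square marker and the phone camera's intrinsics, a perspective-n-point solve returns the camera pose relative to the marker. The use of OpenCV's solvePnP below, and all parameter names, are assumptions for illustration; the paper does not state which library its Android prototype used.

      import numpy as np
      import cv2

      def phone_pose_from_marker(marker_size, corners_2d, camera_matrix, dist_coeffs):
          """Estimate the camera pose from one detected square fiducial marker.

          marker_size : edge length of the printed marker (world units)
          corners_2d  : 4x2 detected corners, ordered to match corners_3d below
          Returns the marker-to-camera rotation matrix and translation vector.
          """
          s = marker_size / 2.0
          corners_3d = np.array([[-s,  s, 0.0],                # marker corners in its own frame
                                 [ s,  s, 0.0],
                                 [ s, -s, 0.0],
                                 [-s, -s, 0.0]], dtype=np.float64)
          ok, rvec, tvec = cv2.solvePnP(corners_3d,
                                        np.asarray(corners_2d, dtype=np.float64),
                                        camera_matrix, dist_coeffs)
          R, _ = cv2.Rodrigues(rvec)                           # rotation vector -> 3x3 matrix
          return R, tvec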

  13. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor. PMID:26386332

  14. An Efficient 3D Imaging using Structured Light Systems

    NASA Astrophysics Data System (ADS)

    Lee, Deokwoo

    Structured light 3D surface imaging has been crucial in the fields of image processing and computer vision, particularly in reconstruction, recognition and other tasks. In this dissertation, we propose approaches to the development of an efficient 3D surface imaging system using structured light patterns, covering reconstruction, recognition and a sampling criterion. To achieve an efficient reconstruction system, we address the problem in its many dimensions. In the first, we extract the geometric 3D coordinates of an object which is illuminated by a set of concentric circular patterns and reflected to a 2D image plane. The relationship between the original and the deformed shape of the light patterns due to the surface shape provides sufficient 3D coordinate information. In the second, we consider system efficiency. The efficiency, which can be quantified by the size of the data, is improved by reducing the number of circular patterns to be projected onto an object of interest. Akin to the Shannon-Nyquist Sampling Theorem, we derive the minimum number of circular patterns which sufficiently represents the target object with no considerable information loss. Specific geometric information (e.g. the highest curvature) of an object is key to deriving the minimum sampling density. In the third, the object, represented using the minimum number of patterns, has incomplete color information (i.e. color information is given a priori along the curves). An interpolation is carried out to complete the photometric reconstruction. The object can only be approximately reconstructed, because the minimum number of patterns may not exactly reconstruct the original object, but the result does not show considerable information loss, and the performance of the approximate reconstruction is evaluated by performing recognition or classification. In object recognition, we use facial curves, which are deformed circular curves (patterns) on a target object. We simply carry out comparison between the

  15. Spatial watermarking of 3D triangle meshes

    NASA Astrophysics Data System (ADS)

    Cayre, Francois; Macq, Benoit M. M.

    2001-12-01

    Although it is obvious that watermarking has become of great interest for protecting audio, video, and still pictures, little work has been done on 3D meshes. We propose a new method for watermarking 3D triangle meshes. This method embeds the watermark as triangle deformations. The list of watermarked triangles is obtained in a way similar to that used in the TSPS (Triangle Strip Peeling Sequence) method. Unlike TSPS, our method is automatic and more secure. We also show that it is reversible.

  16. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck Camera, a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images on a computer, we can use the data for creating fashionable laser-engraved objects with a Q-switched Nd:YAG. In the medical field we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  17. Superplastic forming using NIKE3D

    SciTech Connect

    Puso, M.

    1996-12-04

    The superplastic forming process requires careful control of strain rates in order to avoid strain localizations. A load scheduler was developed and implemented into the nonlinear finite element code NIKE3D to provide strain rate control during forming simulation and process schedule output. Often the sheets being formed in SPF are very thin such that less expensive membrane elements can be used as opposed to shell elements. A large strain membrane element was implemented into NIKE3D to assist in SPF process modeling.

  18. The Galicia 3D experiment: an Introduction.

    NASA Astrophysics Data System (ADS)

    Reston, Timothy; Martinez Loriente, Sara; Holroyd, Luke; Merry, Tobias; Sawyer, Dale; Morgan, Julia; Jordan, Brian; Tesi Sanjurjo, Mari; Alexanian, Ara; Shillington, Donna; Gibson, James; Minshull, Tim; Karplus, Marianne; Bayracki, Gaye; Davy, Richard; Klaeschen, Dirk; Papenberg, Cord; Ranero, Cesar; Perez-Gussinye, Marta; Martinez, Miguel

    2014-05-01

    In June and July 2013, scientists from 8 institutions took part in the Galicia 3D seismic experiment, the first ever crustal-scale academic 3D MCS survey over a rifted margin. The aim was to determine the 3D structure of a critical portion of the west Galicia rifted margin. At this margin, well-defined tilted fault blocks, bound by west-dipping faults and capped by synrift sediments, are underlain by a bright reflection, undulating on time sections, termed the S reflector and thought to represent a major detachment fault of some kind. Moving west, the crust thins to zero thickness and mantle is unroofed, as evidenced by the "Peridotite Ridge" first reported at this margin but since observed at many other magma-poor margins. By imaging such a margin in detail, the experiment aimed to resolve the processes controlling crustal thinning and mantle unroofing at a type example of a magma-poor margin. The experiment set out to collect several key datasets: a 3D seismic reflection volume measuring ~20x64km and extending down to ~14s TWT, a 3D ocean bottom seismometer dataset suitable for full wavefield inversion (the recording of the complete 3D seismic shots by 70 ocean bottom instruments), the "mirror imaging" of the crust using the same grid of OBS, a single 2D combined reflection/refraction profile extending to the west to determine the transition from unroofed mantle to true oceanic crust, and the seismic imaging of the water column, calibrated by regular deployment of XBTs to measure the temperature structure of the water column. We collected 1280 km2 of seismic reflection data, consisting of 136533 shots recorded on 1920 channels, producing 260 million seismic traces, each ~14s long. This adds up to ~8 terabytes of data, representing, we believe, the largest ever academic 3D MCS survey in terms of both the area covered and the volume of data. The OBS deployment was the largest ever within an academic 3D survey.

  19. 3D Modeling Engine Representation Summary Report

    SciTech Connect

    Steven Prescott; Ramprasad Sampath; Curtis Smith; Timothy Yang

    2014-09-01

    Computers have been used for 3D modeling and simulation, but only recently have computational resources been able to give realistic results in a reasonable time frame for large complex models. This summary report addressed the methods, techniques, and resources used to develop a 3D modeling engine to represent risk analysis simulation for advanced small modular reactor structures and components. The simulations done for this evaluation were focused on external events, specifically tsunami floods, for a hypothetical nuclear power facility on a coastline.

  20. 3D printed diffractive terahertz lenses.

    PubMed

    Furlan, Walter D; Ferrando, Vicente; Monsoriu, Juan A; Zagrajek, Przemysław; Czerwińska, Elżbieta; Szustakowski, Mieczysław

    2016-04-15

    A 3D printer was used to realize custom-made diffractive THz lenses. After testing several materials, phase binary lenses with periodic and aperiodic radial profiles were designed and constructed in polyamide material to work at 0.625 THz. The nonconventional focusing properties of such lenses were assessed by computing and measuring their axial point spread function (PSF). Our results demonstrate that inexpensive 3D printed THz diffractive lenses can be reliably used in focusing and imaging THz systems. Diffractive THz lenses with unprecedented features, such as extended depth of focus or bifocalization, have been demonstrated. PMID:27082335