Digital relief generation from 3D models
NASA Astrophysics Data System (ADS)
Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian
2016-09-01
It is difficult to extend image-based relief generation to high-relief generation, as images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, the height fields must be extracted from the model, but this alone can only produce bas-reliefs. To overcome this problem, an efficient method is proposed to generate both bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is then used to enhance the visual features of the mesh, and average smoothing and Laplacian smoothing are applied to achieve better smoothing results. A nonlinear variable scaling scheme is finally employed to generate the bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions, with different poses, and from combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method thus provides an efficient and effective means of generating both high-reliefs and bas-reliefs under appropriate scaling factors.
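The pipeline the abstract describes, smoothing, 3D unsharp masking to boost features, then nonlinear scaling to compress the height range, can be sketched on a simple height field. This is a minimal numpy illustration of those generic operations, not the authors' mesh-based implementation; all function names and parameter values are assumptions.

```python
import numpy as np

def laplacian_smooth(h, iterations=1):
    """Average each sample with its 4-neighbours (simple Laplacian smoothing)."""
    for _ in range(iterations):
        padded = np.pad(h, 1, mode="edge")
        h = 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                    padded[1:-1, :-2] + padded[1:-1, 2:])
    return h

def unsharp_mask(h, strength=0.6, iterations=2):
    """Boost fine detail by adding back the high-frequency residual."""
    return h + strength * (h - laplacian_smooth(h, iterations))

def compress_heights(h, alpha=5.0):
    """Nonlinear scaling: compress the height range while preserving detail.
    Returns heights normalized to [0, 1]."""
    h = h - h.min()
    scale = h.max() or 1.0
    return np.log1p(alpha * h / scale) / np.log1p(alpha)
```

A larger `alpha` compresses tall features more aggressively (toward a bas-relief); a small `alpha` approaches a linear scaling (toward a high-relief).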
Automatic 3D Building Model Generation with Airborne LiDAR Data
NASA Astrophysics Data System (ADS)
Yastikli, N.; Cetin, Z.
2017-11-01
LiDAR systems have become increasingly popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, three-dimensional (3D) modelling and simulation, change detection, engineering works, map revision, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems, and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach to automatic 3D building model generation is needed for the many studies that involve building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is addressed. An approach is proposed that includes automatic point-based classification of the raw LiDAR point cloud, applying hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve the classification results, using different test areas identified in the study area. The proposed approach was tested in a study area in Zekeriyakoy, Istanbul, which contains partly open areas, forest areas and many types of buildings, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification.
The results obtained for the study area verified that 3D building models can be generated automatically and successfully from raw LiDAR point cloud data.
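A toy version of point-based classification with hierarchical rules might look like the sketch below. It falls well short of TerraScan's actual rule set; the thresholds, class names, and the per-cell planarity test are invented for illustration.

```python
import numpy as np

def classify_points(xyz, cell=2.0, low=0.5, high=2.5, rough_tol=0.15):
    """Toy hierarchical classification: ground -> low vegetation -> building/tree.

    1. Estimate a coarse ground level per grid cell (lowest point in the cell).
    2. Points near the ground level are 'ground'; slightly above, 'low_veg'.
    3. High points are 'building' if the cell's high points are planar
       (small height spread), otherwise 'vegetation'.
    """
    ij = np.floor(xyz[:, :2] / cell).astype(int)
    labels = np.empty(len(xyz), dtype=object)
    cells = {}
    for idx, key in enumerate(map(tuple, ij)):   # group point indices per cell
        cells.setdefault(key, []).append(idx)
    for key, members in cells.items():
        z = xyz[members, 2]
        ground = z.min()
        rel = z - ground                         # height above local ground
        planar = np.std(z[rel > high]) < rough_tol if np.any(rel > high) else False
        for m, r in zip(members, rel):
            if r < low:
                labels[m] = "ground"
            elif r < high:
                labels[m] = "low_veg"
            else:
                labels[m] = "building" if planar else "vegetation"
    return labels
```

Real rule sets also consider return number, intensity, and neighbourhood geometry, and refine the ground surface iteratively rather than taking a per-cell minimum.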
TLS for generating multi-LOD of 3D building model
NASA Astrophysics Data System (ADS)
Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.
2014-02-01
Terrestrial Laser Scanners (TLS) have been used widely to capture three-dimensional (3D) objects for various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualizing objects of a city environment in 3D can be useful for many applications, but different applications require different kinds of 3D models. Since buildings are important objects, CityGML has defined a standard for 3D building models at four different levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and the modelling process for the resulting point cloud are explored. TLS is used to capture all the building details in order to generate multiple LODs. In previous works, this task usually involved the integration of several sensors; in this research, however, the point cloud from TLS is processed to generate the LOD3 model, and LOD2 and LOD1 are then generalized from the resulting LOD3 model. The result of this research is a guided process for generating multiple LODs of a 3D building, starting from LOD3, using TLS. Lastly, the visualization of the multi-LOD model is also shown.
Thermal Texture Generation and 3D Model Reconstruction Using SfM and GAN
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Mizginov, V. A.
2018-05-01
Realistic 3D models with textures representing the thermal emission of an object are widely used in fields such as dynamic scene analysis, autonomous driving, and video surveillance. Structure from Motion (SfM) methods provide a robust approach for the generation of textured 3D models in the visible range. Still, automatic generation of 3D models from infrared imagery is challenging due to the absence of feature points and low sensor resolution. Recent advances in Generative Adversarial Networks (GANs) have proved that they can perform complex image-to-image transformations, such as the transformation of day to night or the generation of imagery in a different spectral range. In this paper, we propose a novel method for the generation of realistic 3D models with thermal textures using an SfM pipeline and a GAN. The proposed method uses visible-range images as input. The images are processed in two ways. Firstly, they are used for point matching and dense point cloud generation. Secondly, the images are fed into a GAN that performs the transformation from the visible range to the thermal range. We evaluate the proposed method using real infrared imagery captured with a FLIR ONE PRO camera. We generated a dataset of 2000 pairs of real images captured in the thermal and visible ranges. The dataset is used to train the GAN and to generate 3D models using SfM. The evaluation of the generated 3D models and infrared textures showed that they are similar to the ground-truth model in both thermal emissivity and geometrical shape.
NASA Technical Reports Server (NTRS)
Raju, I. S.
1992-01-01
A computer program that generates three-dimensional (3D) finite element models for cracked 3D solids was written. This computer program, gensurf, uses minimal input data to generate 3D finite element models for isotropic solids with elliptic or part-elliptic cracks. These models can be used with a 3D finite element program called surf3d. This report documents this mesh generator. In this manual the capabilities, limitations, and organization of gensurf are described. The procedures used to develop 3D finite element models and the input for and the output of gensurf are explained. Several examples are included to illustrate the use of this program. Several input data files are included with this manual so that the users can edit these files to conform to their crack configuration and use them with gensurf.
Multi-Sensor Data Integration for an Accurate 3D Model Generation
NASA Astrophysics Data System (ADS)
Chhatkuli, S.; Satoh, T.; Tachibana, K.
2015-05-01
The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and non-complex terrain. However, the 3D model automatically generated from aerial imagery generally lacks accuracy for roads under bridges, details under tree canopies, isolated trees, etc. Moreover, in many cases it also suffers from undulated road surfaces, non-conforming building shapes, and the loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each source's weaknesses and helps to create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise-free and without unnecessary details.
Photogrammetry for rapid prototyping: development of noncontact 3D reconstruction technologies
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.
2002-04-01
An important stage of rapid prototyping technology is generating a computer 3D model of the object to be reproduced. A wide variety of techniques for 3D model generation exists, ranging from manual model creation to fully automated reverse engineering systems. Progress in CCD sensors and computers provides the background for integrating photogrammetry, as an accurate source of 3D data, with CAD/CAM. This paper presents the results of developing photogrammetric methods for non-contact spatial coordinate measurement and generation of computer 3D models of real objects. The technology is based on processing convergent images of the object to calculate its 3D coordinates and reconstruct its surface. The hardware used for spatial coordinate measurement is based on a PC as the central processing unit and a video camera as the image acquisition device. The original software for Windows 9X implements the complete 3D reconstruction technology for rapid input of geometric data into CAD/CAM systems. Technical characteristics of the developed systems are given, along with the results of applying them to various 3D reconstruction tasks. The paper describes the techniques used for non-contact measurement and the methods providing the metric characteristics of the reconstructed 3D model. The results of applying the system to 3D reconstruction of complex industrial objects are also presented.
High-Fidelity Roadway Modeling and Simulation
NASA Technical Reports Server (NTRS)
Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit
2010-01-01
Roads are an essential feature of our daily lives. With advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditionally, road models were generated manually by professional artists using modeling software tools such as Maya and 3ds Max. This approach requires highly specialized and sophisticated skills as well as massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, dramatically reducing the tedious manual editing needed for road modeling. But most existing procedural modeling methods for road generation put the emphasis on the visual effects of the generated roads, not their geometrical and architectural fidelity. This limitation seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals. As a result, the generated roads can support not only general applications such as games and simulations, in which roads are used as 3D assets, but also demanding civil engineering applications, which require accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and the surrounding environment. The proposed method then generates, in real time, 3D roads that have both high visual and geometrical fidelity. This paper discusses in detail the procedures that convert 2D roads specified in shapefiles into 3D roads, as well as the civil engineering road design principles applied. The proposed method can be used in many applications that have stringent requirements for high-precision 3D models, such as driving simulation and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.
Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds
NASA Astrophysics Data System (ADS)
Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu
2016-06-01
3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, airborne laser scanning sensors or multi-view satellite images are generally used as the data source. However, close-range sensors (such as terrestrial laser scanners) and low-cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of the 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive, and trained personnel are needed to use them for point cloud acquisition. A potentially effective alternative is 3D modelling based on a low-cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone-based 3D model of an example structure with a terrestrial laser scanning point cloud of the same structure. This comparison gives us the opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations of each approach in data acquisition and processing. We also discuss how smartphone-based point clouds can help to solve further problems in 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings may be insightful for future studies in fast, easy and low-cost 3D urban model generation.
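A cloud-to-cloud comparison of the kind described, measuring how far each smartphone-derived point lies from the laser-scanned reference, can be sketched with nearest-neighbour distances. This is a brute-force illustration; a real comparison would register the clouds first and use an accelerated spatial index.

```python
import numpy as np

def cloud_to_cloud_distances(source, reference):
    """For each source point, the distance to its nearest reference point.

    Brute force, O(n*m): fine for small patches, not full city scans.
    """
    diffs = source[:, None, :] - reference[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

def compare_clouds(smartphone_pts, tls_pts):
    """Summary statistics of the deviation of one cloud from the other."""
    d = cloud_to_cloud_distances(smartphone_pts, tls_pts)
    return {"mean": float(d.mean()),
            "rms": float(np.sqrt((d ** 2).mean())),
            "p95": float(np.percentile(d, 95))}
```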
Combining 3D Volume and Mesh Models for Representing Complicated Heritage Buildings
NASA Astrophysics Data System (ADS)
Tsai, F.; Chang, H.; Lin, Y.-W.
2017-08-01
This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that takes advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, the 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove the points it encloses from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and the original mesh boundaries, integrating the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and to point clouds of a local historical structure. Preliminary results indicated that hybrid models reconstructed with the proposed method retain both the fundamental 3D volume characteristics and an accurate geometric appearance with fine details. The reconstructed hybrid models can also represent targets at different levels of detail according to user and system requirements in different applications.
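The plane-fitting step used in the second phase to derive a parametric element per point group is commonly done with RANSAC. The sketch below is the generic algorithm, not the authors' specific fitting code; the threshold and iteration count are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points: returns (unit normal n, d) with n.x + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    return normal, -normal @ centroid

def ransac_plane(points, threshold=0.05, iterations=200, seed=0):
    """RANSAC: fit planes to random 3-point samples, keep the one with the
    most inliers, then refine on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n, d = fit_plane(sample)
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    n, d = fit_plane(points[best_inliers])   # refine on the consensus set
    return n, d, best_inliers
```

Running this repeatedly, removing each consensus set, yields the planar groups from which the polyhedral "bare-bones" elements can be assembled.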
Generation of 3D templates of active sites of proteins with rigid prosthetic groups.
Nebel, Jean-Christophe
2006-05-15
With the increasing availability of protein structures, the generation of biologically meaningful 3D patterns from the simultaneous alignment of several protein structures is an exciting prospect: active sites could be better understood, and protein functions and protein 3D structures could be predicted more accurately. Although patterns can already be generated at the fold and topological levels, no system produces high-resolution 3D patterns including atom and cavity positions. To address this challenge, our research focuses on generating patterns from proteins with rigid prosthetic groups. Since these groups are key elements of protein active sites, the generated 3D patterns are expected to be biologically meaningful. In this paper, we present a new approach which allows the generation of 3D patterns from proteins with rigid prosthetic groups. Using 237 protein chains representing proteins containing porphyrin rings, our method was validated by comparing 3D templates generated from homologues with the 3D structures of the proteins they model. Atom positions were predicted reliably: 93% of them had an accuracy of 1.00 Å or less. Moreover, similar results were obtained for chemical group and cavity positions. The results also suggested our system could contribute to the validation of 3D protein models. Finally, a 3D template was generated for the active site of human cytochrome P450 CYP17, whose 3D structure is unknown. Its analysis showed that it is biologically meaningful: our method detected the main patterns of the cytochrome P450 superfamily and the motifs linked to catalytic reactions. The 3D template also suggested the position of a residue which could be involved in a hydrogen bond with CYP17 substrates, as well as the shape and location of a cavity. Comparisons with independently generated 3D models supported these hypotheses. The alignment software (Nestor3D) is available at http://www.kingston.ac.uk/~ku33185/Nestor3D.html
Procedural 3D Modelling for Traditional Settlements: The Case Study of Central Zagori
NASA Astrophysics Data System (ADS)
Kitsakis, D.; Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.
2017-02-01
Over the last decades, 3D modelling has been a fast-growing field in Geographic Information Science, extensively applied in various domains including the reconstruction and visualization of cultural heritage, especially monuments and traditional settlements. Technological advances in computer graphics allow the modelling of complex 3D objects with high precision and accuracy. Procedural modelling is an effective tool and a relatively novel method based on the concept of algorithmic modelling. It is utilized to generate accurate 3D models and composite facade textures from sets of rules called Computer Generated Architecture (CGA) grammars, which define the objects' detailed geometry, rather than altering or editing the model manually. In this paper, procedural modelling tools are exploited to generate the 3D model of a traditional settlement in the region of Central Zagori in Greece. The detailed geometries of the 3D models were derived from the application of shape grammars to selected footprints, and the process resulted in a final 3D model optimally describing the built environment of Central Zagori at three Levels of Detail (LoD). The final 3D scene was exported and published as a 3D web scene which can be viewed with the CityEngine 3D viewer, allowing a walkthrough of the whole model, as in virtual reality or game environments. This research addresses issues regarding texture precision, LoD for 3D objects and interactive visualization within one 3D scene, as well as the effectiveness of large-scale modelling, along with the benefits and drawbacks of procedural modelling techniques in the field of cultural heritage and, more specifically, the 3D modelling of traditional settlements.
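The rule-based idea behind CGA grammars, successively splitting a mass model into floors and facade tiles, can be illustrated with a toy grammar. The shape attributes and rule names here are invented for the sketch; real CGA rules in CityEngine are far richer (textures, conditional rules, stochastic variation).

```python
def extrude(footprint_w, footprint_d, height):
    """Mass model: a single box from a rectangular footprint (CGA-style 'extrude')."""
    return {"shape": "mass", "w": footprint_w, "d": footprint_d, "h": height}

def split_floors(mass, floor_height=3.0):
    """Repeat split along the vertical axis into floor shapes."""
    n = max(1, int(mass["h"] // floor_height))
    h = mass["h"] / n
    return [{"shape": "floor", "level": i, "z": i * h, "h": h,
             "w": mass["w"], "d": mass["d"]} for i in range(n)]

def split_facade(floor, tile_width=2.0):
    """Repeat split of the street-facing facade into window tiles."""
    n = max(1, int(floor["w"] // tile_width))
    w = floor["w"] / n
    return [{"shape": "tile", "x": j * w, "w": w, "z": floor["z"],
             "h": floor["h"]} for j in range(n)]
```

Each derivation step refines shapes into smaller shapes, so detail is generated by the rules rather than modeled by hand, which is exactly what makes the approach scale to whole settlements.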
Generative Modeling for Machine Learning on the D-Wave
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thulasidasan, Sunil
These are slides on Generative Modeling for Machine Learning on the D-Wave. The following topics are detailed: generative models; Boltzmann machines: a generative model; restricted Boltzmann machines; learning parameters: RBM training; practical ways to train RBM; D-Wave as a Boltzmann sampler; mapping RBM onto the D-Wave; Chimera restricted RBM; mapping binary RBM to Ising model; experiments; data; D-Wave effective temperature, parameters noise, etc.; experiments: contrastive divergence (CD) 1 step; after 50 steps of CD; after 100 steps of CD; D-Wave (experiments 1, 2, 3); D-Wave observations.
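The contrastive-divergence training listed in the slides can be illustrated for a conventional (non-quantum) binary RBM; the D-Wave's role in the work is to act as a Boltzmann sampler replacing the Gibbs step in the negative phase. A minimal numpy sketch of one CD-1 update (learning rate and sizes are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, rng, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    v0: batch of visible vectors; W: weights (updated in place);
    b: visible bias; c: hidden bias. Returns the reconstruction error.
    """
    # positive phase: sample hidden units given the data
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # negative phase: one Gibbs step back to the visibles
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # gradient approximation: data statistics minus model statistics
    n = len(v0)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return ((v0 - pv1) ** 2).mean()
```

With hardware sampling, `v1`/`ph1` would come from annealer reads (corrected for the effective temperature noted in the slides) instead of the single Gibbs step.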
Applications of Panoramic Images: From 720° Panoramas to Interior 3D Models for Augmented Reality
NASA Astrophysics Data System (ADS)
Lee, I.-C.; Tsai, F.
2015-05-01
A series of panoramic images is usually used to generate a 720° panorama. Although panoramic images are typically used to establish tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only a 720° panorama but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters, namely the focal length, principal point, and lens radial distortion, can also be estimated while generating the 720° panorama. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. Trimble SketchUp was used to build the model, and the 3D point cloud assisted in determining the locations of building objects using a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. The resulting 3D indoor model was used as an augmented reality model, replacing the guide map or floor plan commonly used in online tour guiding systems. The 3D indoor model generation procedure has been utilized in two projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system.
The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process, and research is being carried out to increase the degree of automation of these procedures.
Automatic Texture Reconstruction of 3D City Models from Oblique Images
NASA Astrophysics Data System (ADS)
Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang
2016-06-01
In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure comprising mesh segmentation and mesh unfolding is performed to reduce the geometric distortion in the process of mapping 2D textures to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images with known exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending the texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured without resampling. Experimental results show that our method can effectively mitigate the occurrence of texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.
Experiment for Integrating Dutch 3D Spatial Planning and BIM for Checking Building Permits
NASA Astrophysics Data System (ADS)
van Berlo, L.; Dijkmans, T.; Stoter, J.
2013-09-01
This paper presents a research project in the Netherlands in which several SMEs collaborated to create a 3D model of the national spatial planning information. This 2D information system, described in the IMRO data standard, holds implicit 3D information that can be used to generate an explicit 3D model. The project realized a proof of concept for generating a 3D spatial planning model. The team used the model to integrate several 3D Building Information Models (BIMs) described in the open data standard Industry Foundation Classes (IFC). The goals of the project were (1) to generate a 3D BIM model from spatial planning information to be used by the architect during the early design phase, and (2) to allow 3D checking of building permits. The team used several technologies such as CityGML, BIM clash detection and GeoBIM to explore the potential of this innovation. Within the project, a showcase was created with part of the spatial plan of the city of The Hague. Several BIM models were integrated into the 3D spatial plan of this area. A workflow is described that demonstrates the benefits of 3D collaboration between the spatial domain and the AEC industry. The research results in a showcase with conclusions and considerations for both national and international practice.
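The core of deriving an explicit 3D model from a 2D plan whose height information is implicit is footprint extrusion. A minimal sketch follows; which IMRO attributes actually supply the height is not specified here, so the height is taken as a plain parameter.

```python
def extrude_footprint(polygon, height):
    """Turn a 2D zoning footprint plus its permitted height into a 3D block.

    polygon: list of (x, y) vertices in counter-clockwise order.
    Returns (vertices, faces): side walls as triangles, top/bottom as
    vertex-index lists (n-gons left untriangulated).
    """
    n = len(polygon)
    bottom = [(x, y, 0.0) for x, y in polygon]
    top = [(x, y, float(height)) for x, y in polygon]
    vertices = bottom + top
    sides = []
    for i in range(n):
        j = (i + 1) % n
        # two triangles per wall quad: (bottom_i, bottom_j, top_j, top_i)
        sides.append((i, j, n + j))
        sides.append((i, n + j, n + i))
    return vertices, {"bottom": list(range(n)),
                      "top": list(range(n, 2 * n)),
                      "sides": sides}
```

Such blocks correspond roughly to CityGML LOD1 solids, onto which the IFC building models from the project can then be overlaid for permit checks.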
Exploring the Processes of Generating LOD (0-2) CityGML Models in the Greater Municipality of Istanbul
NASA Astrophysics Data System (ADS)
Buyuksalih, I.; Isikdag, U.; Zlatanova, S.
2013-08-01
3D models of cities, visualised and explored in 3D virtual environments, have been available for several years. A large number of impressive, realistic 3D models are now regularly presented at scientific, professional and commercial events. One of the most promising developments is the OGC standard CityGML. CityGML is an object-oriented model that supports 3D geometry and thematic semantics, attributes and relationships, and offers advanced options for realistic visualization. One very attractive characteristic of the model is its support for five levels of detail (LOD), starting from a less accurate 2.5D model (LOD0) and ending with a very detailed indoor model (LOD4). Different local government offices and municipalities have different needs when utilizing CityGML models, and the process of model generation depends on local and domain-specific needs. Although the processes (i.e. the tasks and activities) for generating the models differ depending on the utilization purpose, there are also some common tasks (i.e. common-denominator processes) in the generation of CityGML models. This paper focuses on defining the common tasks in the generation of LOD (0-2) CityGML models and on representing them formally with process modeling diagrams.
GIS Data Based Automatic High-Fidelity 3D Road Network Modeling
NASA Technical Reports Server (NTRS)
Wang, Jie; Shen, Yuzhong
2011-01-01
3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models have been generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially for roads existing in the real world. This paper presents a novel approach that can automatically produce high-fidelity 3D road network models from real 2D road GIS data containing mainly road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering road design rules (e.g., cross slope, superelevation, grade) is then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements of road networks.
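The surface-generation step, offsetting edge vertices from the fitted centerline and applying cross slope or superelevation, can be sketched as follows. This is a simplified 2D-polyline version with illustrative parameter values, not the paper's actual design-rule engine.

```python
import numpy as np

def road_edges(centerline, half_width=3.5, cross_slope=0.02, bank=None):
    """3D left/right edge vertices from a 2D centerline polyline.

    With bank=None a normal crown is applied: both edges drop by
    half_width * cross_slope for drainage. Otherwise `bank` is a per-vertex
    superelevation rate that raises the left edge and lowers the right one,
    as on a banked curve.
    """
    c = np.asarray(centerline, dtype=float)
    t = np.gradient(c, axis=0)                       # local tangents
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    normal = np.column_stack([-t[:, 1], t[:, 0]])    # left-hand normal
    n = len(c)
    if bank is None:
        zl = zr = np.full(n, -half_width * cross_slope)
    else:
        e = np.asarray(bank, dtype=float)
        zl, zr = half_width * e, -half_width * e
    left = np.column_stack([c + half_width * normal, zl])
    right = np.column_stack([c - half_width * normal, zr])
    return left, right
```

Triangulating consecutive left/right cross-sections then yields the road surface mesh; grade would come from a vertical profile applied along the arc length.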
Tactical 3D Model Generation using Structure-From-Motion on Video from Unmanned Systems
2015-04-01
...available SfM application known as VisualSFM. VisualSFM is an end-user, "off-the-shelf" implementation of SfM that is easy to configure and used for ... most 3D model generation applications from imagery. While the usual interface with VisualSFM is through its graphical user interface (GUI), we will be ... of our system. There are two types of 3D model generation available within VisualSFM: sparse and dense reconstruction. Sparse reconstruction begins ...
Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans
NASA Astrophysics Data System (ADS)
Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj
2016-06-01
This work deals with the development of algorithms for the physical replication of patient-specific human bones and the construction of corresponding RP models of implants/inserts, using a reverse engineering approach applied to non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e. voxel and triangular-facet-based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other hand, recent advances in Computer Aided Design (CAD) technology provide additional facilities for the design, prototyping and manufacturing of objects with freeform surfaces based on boundary representation techniques. This work presents a process for the physical replication of 3D rapid prototyping (RP) models of human bone using various CAD modelling techniques, developed from 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. The point cloud data are used to construct a 3D CAD model by fitting B-spline curves through the points and then fitting surfaces between these curve networks using swept-blend techniques. Alternatively, a triangular mesh can be generated directly from the 3D point cloud data, without developing any surface model, using commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process. A Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for the generation of the 3D RP model. A 3D physical model of the human bone is produced on a rapid prototyping machine, and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in the replication of human bone in the medical field.
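The final step of turning a triangulated surface into RP input is writing an STL file. Below is a minimal ASCII STL writer; the DICOM-to-point-cloud and Delaunay surface-extraction steps are omitted and the triangle mesh is assumed given.

```python
import numpy as np

def write_ascii_stl(path, vertices, faces, name="bone"):
    """Write a triangle mesh as ASCII STL (facet normal / outer loop syntax)."""
    v = np.asarray(vertices, dtype=float)
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in faces:
            # facet normal from the cross product of two edge vectors
            n = np.cross(v[b] - v[a], v[c] - v[a])
            norm = np.linalg.norm(n)
            n = n / norm if norm > 0 else n
            f.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
            f.write("    outer loop\n")
            for idx in (a, b, c):
                f.write("      vertex {0:.6e} {1:.6e} {2:.6e}\n".format(*v[idx]))
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")
```

Consistent winding order (counter-clockwise when viewed from outside) is what gives RP slicers a well-defined inside and outside; binary STL would be preferred for the large meshes typical of bone scans.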
NASA Astrophysics Data System (ADS)
Rasztovits, S.; Dorninger, P.
2013-07-01
Terrestrial Laser Scanning (TLS) is an established method for reconstructing the geometrical surface of a given object. Current systems allow for the fast and efficient determination of 3D models with high accuracy and richness of detail. Alternatively, 3D reconstruction services use images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, upcoming free software services as well as open source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user, while recent web services operate completely automatically. An indisputable advantage of image-based 3D modeling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models are lower than those of laser scanning data. Within this contribution, we investigate the results of automated web services for image-based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and using image series from different digital cameras. Two different web services, namely Arc3D and Autodesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economical comparison of the generation of the models is given, considering the interactive and processing time costs.
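A difference model of the kind used here can be sketched as nearest-neighbour distances from the evaluated model to the TLS reference; the point clouds and noise level below are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy.spatial import cKDTree

def difference_model(reference_pts, test_pts):
    """Unsigned nearest-neighbour distance from each evaluated point
    to the TLS reference cloud."""
    distances, _ = cKDTree(reference_pts).query(test_pts)
    return distances

rng = np.random.default_rng(1)
tls = rng.random((2000, 3))                          # reference scan (stand-in)
web = tls + rng.normal(scale=0.005, size=tls.shape)  # noisier web-service model
d = difference_model(tls, web)
summary = d.mean(), np.percentile(d, 95)             # summarize the deviation
```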
Coupled Dictionary Learning for the Detail-Enhanced Synthesis of 3-D Facial Expressions.
Liang, Haoran; Liang, Ronghua; Song, Mingli; He, Xiaofei
2016-04-01
The desire to reconstruct 3-D face models with expressions from 2-D face images fosters increasing interest in the problem of face modeling. This task is important and challenging in the field of computer animation. Facial contours and wrinkles are essential to generate a face with a certain expression; however, these details are generally ignored or not seriously considered in previous studies on face model reconstruction. Thus, we employ coupled radial basis function networks to derive an intermediate 3-D face model from a single 2-D face image. To further optimize the 3-D face model through landmarks, a coupled dictionary relating 3-D face models and their corresponding 3-D landmarks is learned from the given training set through local coordinate coding. Another coupled dictionary is then constructed to bridge the 2-D and 3-D landmarks for the transfer of vertices on the face model. As a result, the final 3-D face can be generated with the appropriate expression. In the testing phase, the 2-D input faces are converted into 3-D models that display different expressions. Experimental results indicate that the proposed approach to facial expression synthesis can obtain model details more effectively than previous methods.
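A radial basis function network of the general kind used to lift 2-D landmarks toward a 3-D model can be sketched as below; the Gaussian kernel, the landmark coordinates and the regularization constant are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def rbf_fit(X, Y, sigma=1.0, reg=1e-8):
    """Fit RBF weights so the network interpolates targets Y at centres X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))          # Gaussian kernel matrix
    return np.linalg.solve(K + reg * np.eye(len(X)), Y)

def rbf_eval(centres, W, Xq, sigma=1.0):
    d2 = ((Xq[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ W

# hypothetical 2-D landmarks and matching 3-D vertices
X2d = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y3d = np.array([[0, 0, 1.0], [1, 0, 2.0], [0, 1, 0.5], [1, 1, 1.5]])
W = rbf_fit(X2d, Y3d)
recovered = rbf_eval(X2d, W, X2d)  # network interpolates the training landmarks
```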
Research on complex 3D tree modeling based on L-system
NASA Astrophysics Data System (ADS)
Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li
2018-03-01
The L-system, as a fractal iterative system, can simulate complex geometric patterns. Based on field observation data of trees and the knowledge of forestry experts, this paper extracts modeling constraint rules and obtains an L-system rule set. Using self-developed L-system modeling software, the L-system rule set was parsed to generate complex 3D tree models. The results showed that the geometrical modeling method based on the L-system can be used to describe the morphological structure of complex trees and to generate 3D tree models.
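The core of any L-system modeler is parallel string rewriting; a minimal sketch is below, with a hypothetical bracketed rule in the style of botanical L-systems (where + and - would turn a drawing turtle and [ ] push and pop its state), not the rule set extracted in the paper.

```python
def expand(axiom, rules, iterations):
    """Apply all production rules in parallel, once per iteration."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)  # symbols without a rule pass through
    return s

rules = {"F": "F[+F]F[-F]F"}        # hypothetical branching rule
print(expand("F", rules, 1))        # F[+F]F[-F]F
```

A 3D interpreter would then walk the expanded string, turning bracketed segments into branch geometry.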
Customised 3D Printing: An Innovative Training Tool for the Next Generation of Orbital Surgeons.
Scawn, Richard L; Foster, Alex; Lee, Bradford W; Kikkawa, Don O; Korn, Bobby S
2015-01-01
Additive manufacturing, or 3D printing, is the process by which three-dimensional data fields are translated into real-life physical representations. 3D printers create physical printouts using heated plastics deposited in a layered fashion, resulting in a three-dimensional object. We present a technique for creating customised, inexpensive 3D orbit models for use in orbital surgical training using 3D printing technology. These models allow trainee surgeons to perform 'wet-lab' orbital decompressions and simulate upcoming surgeries on orbital models that replicate a patient's bony anatomy. We believe this represents an innovative training tool for the next generation of orbital surgeons.
NASA Astrophysics Data System (ADS)
Sharkawi, K.-H.; Abdul-Rahman, A.
2013-09-01
Entities in cities and urban areas, such as building structures, are becoming more complex as modern human civilization continues to evolve. The ability to plan and manage every territory, especially urban areas, is very important to every government in the world. Planning and managing cities and urban areas based on printed maps and 2D data is becoming insufficient and inefficient to cope with the complexity of new developments in big cities. The emergence of 3D city models has boosted efficiency in analysing and managing urban areas, as 3D data are proven to represent real-world objects more accurately. They have since been adopted as the new trend in building and urban management and planning applications. Nowadays, many countries around the world have been generating virtual 3D representations of their major cities. The growing interest in improving the usability of 3D city models has resulted in the development of various analysis tools based on 3D city models. Today, 3D city models are generated for various purposes such as tourism, location-based services, disaster management and urban planning. Meanwhile, modelling 3D objects is getting easier with the emergence of user-friendly 3D modelling tools available on the market. Generating 3D buildings with high accuracy has also become easier with the availability of airborne Lidar and terrestrial laser scanning equipment. The availability and accessibility of this technology make it more sensible to analyse buildings in urban areas using 3D data, as such data accurately represent real-world objects. The Open Geospatial Consortium (OGC) has accepted the CityGML specification as one of the international standards for representing and exchanging spatial data, making it easier to visualize, store and manage 3D city model data efficiently.
CityGML is able to represent the semantics, geometry, topology and appearance of 3D city models in five well-defined Levels of Detail (LoD), namely LoD0 to LoD4. The accuracy and structural complexity of the 3D objects increase with the LoD level, where LoD0 is the simplest (2.5D; Digital Terrain Model (DTM) plus building or roof print) while LoD4 is the most complex (architectural details with interior structures). Semantic information is one of the main components of CityGML and of 3D city models, and provides important information for any analysis. However, more often than not, semantic information is not available for a 3D city model due to unstandardized modelling processes. One example is where a building is generated as a single object (without specific feature layers such as Roof, Ground floor, Level 1, Level 2, Block A, Block B, etc.). This research attempts to develop a method to improve the semantic data updating process by segmenting the 3D building into simpler parts, which makes it easier for users to select and update the semantic information. The methodology is implemented for 3D buildings in LoD2, where the buildings are generated without architectural details but with distinct roof structures. This paper also introduces a hybrid semantic-geometric 3D segmentation method that deals with the hierarchical segmentation of a 3D building based on its semantic value and surface characteristics, fitted by one of the predefined primitives. For future work, the segmentation method will be implemented as part of a change detection module that can detect changes on the 3D buildings, store and retrieve semantic information of the changed structure, automatically update the 3D models, and visualize the results in a user-friendly graphical user interface (GUI).
Establishing a National 3d Geo-Data Model for Building Data Compliant to Citygml: Case of Turkey
NASA Astrophysics Data System (ADS)
Ates Aydar, S.; Stoter, J.; Ledoux, H.; Demir Ozbek, E.; Yomralioglu, T.
2016-06-01
This paper presents the generation of the 3D national building geo-data model of Turkey, which is compatible with the international OGC CityGML Encoding Standard. We prepared an Application Domain Extension (ADE), named CityGML-TRKBIS.BI, produced by extending the existing thematic modules of CityGML according to TRKBIS needs. All thematic data groups in the TRKBIS geo-data model have been remodelled in order to generate the national large-scale 3D geo-data model for Turkey. Specific attention has been paid to data groups that have a different class structure from the related CityGML data themes, such as the building data model. The current 2D geo-information model for the building data theme of Turkey (TRKBIS.BI) was established based on the INSPIRE specifications for buildings (Core 2D and Extended 2D profiles), ISO/TC 211 standards and OGC web services. The new version of TRKBIS.BI, established according to the semantic and geometric rules of CityGML, will represent 2D, 2.5D and 3D objects. After a short overview of the generic approach, this paper describes the extension of the CityGML building data theme according to TRKBIS.BI through several steps. First, the building models of both standards were compared according to their data structure, classes and attributes. Second, the CityGML building model was extended with respect to TRKBIS needs and the CityGML-TRKBIS Building ADE was established in UML. This study provides new insights into 3D applications in Turkey. The generated 3D geo-data model for the building thematic class will be used as a common exchange format that meets 2D, 2.5D and 3D implementation needs at the national level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhou, S; Cai, W; Hurwitz, M
Purpose: We develop a method to generate time-varying volumetric images (3D fluoroscopic images) using patient-specific motion models derived from four-dimensional cone-beam CT (4DCBCT). Methods: Motion models are derived by selecting one 4DCBCT phase as a reference image, and registering the remaining images to it. Principal component analysis (PCA) is performed on the resultant displacement vector fields (DVFs) to create a reduced set of PCA eigenvectors that capture the majority of respiratory motion. 3D fluoroscopic images are generated by optimizing the weights of the PCA eigenvectors iteratively through comparison of measured cone-beam projections and simulated projections generated from the motion model. This method was applied to images from five lung-cancer patients. The spatial accuracy of this method is evaluated by comparing landmark positions in the 3D fluoroscopic images to manually defined ground truth positions in the patient cone-beam projections. Results: 4DCBCT motion models were shown to accurately generate 3D fluoroscopic images when the patient cone-beam projections contained clearly visible structures moving with respiration (e.g., the diaphragm). When no moving anatomical structure was clearly visible in the projections, the 3D fluoroscopic images generated did not capture breathing deformations, and reverted to the reference image. For the subset of 3D fluoroscopic images generated from projections with visibly moving anatomy, the average tumor localization error and the 95th percentile were 1.6 mm and 3.1 mm, respectively. Conclusion: This study showed that 4DCBCT-based 3D fluoroscopic images can accurately capture respiratory deformations in a patient dataset, so long as the cone-beam projections used contain visible structures that move with respiration.
For clinical implementation of 3D fluoroscopic imaging for treatment verification, an imaging field of view (FOV) that contains visible structures moving with respiration should be selected. If no other appropriate structures are visible, the images should include the diaphragm. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc, Palo Alto, CA.
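The model-building and synthesis steps described above can be sketched as PCA over flattened displacement vector fields; this is a minimal illustration under assumed array shapes (phases × voxels × 3), not the clinical pipeline.

```python
import numpy as np

def build_motion_model(dvfs, k=3):
    """PCA of displacement vector fields: one flattened DVF per 4DCBCT phase."""
    X = np.stack([d.ravel() for d in dvfs])  # shape: phases x (3 * voxels)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                      # mean DVF + top-k eigenvectors

def synthesize_dvf(mean, eigenvectors, weights):
    """A candidate deformation is the mean plus a weighted eigenvector sum;
    the weights are what gets optimized against the measured projections."""
    return mean + weights @ eigenvectors

rng = np.random.default_rng(0)
phases = [rng.normal(size=(4, 4, 4, 3)) for _ in range(10)]  # toy DVFs
mean, pcs = build_motion_model(phases, k=3)
dvf = synthesize_dvf(mean, pcs, np.array([0.8, -0.3, 0.1]))
```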
Stereoscopic 3D graphics generation
NASA Astrophysics Data System (ADS)
Li, Zhi; Liu, Jianping; Zan, Y.
1997-05-01
Stereoscopic display technology is one of the key techniques in areas such as simulation, multimedia, entertainment and virtual reality. Moreover, stereoscopic 3D graphics generation is an important part of a stereoscopic 3D display system. In this paper, we first describe the principle of stereoscopic display and summarize several methods for generating stereoscopic 3D graphics. Second, to overcome the problems of user-defined model methods (such as inconvenience and long modification cycles), we put forward a method based on vector graphics file definitions. With it we can design more directly, modify the model simply and easily, generate graphics more conveniently, and make full use of the graphics accelerator card. Finally, we discuss how to speed up the generation.
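The underlying principle of stereoscopic generation, rendering the same geometry from two horizontally offset eye positions, can be sketched as follows; the interocular distance and focal length are illustrative values, not the paper's parameters.

```python
def project(point, eye_x, focal=1.0):
    """Perspective projection for a pinhole camera at (eye_x, 0, 0)
    looking down +z."""
    x, y, z = point
    return (focal * (x - eye_x) / z, focal * y / z)

def stereo_pair(point, ipd=0.065):
    """Left/right image coordinates for one scene point."""
    return project(point, -ipd / 2), project(point, ipd / 2)

left, right = stereo_pair((0.0, 0.0, 2.0))
disparity = left[0] - right[0]  # horizontal parallax, proportional to 1/z
```

Closer points yield larger disparity, which is what the viewer's visual system fuses into depth.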
A kinematic model for 3-D head-free gaze-shifts
Daemi, Mehdi; Crawford, J. Douglas
2015-01-01
Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision. PMID:26113816
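One constraint the model must respect, the non-commutativity of 3-D rotations, is easy to demonstrate with quaternions; this is a generic illustration (Hamilton product, arbitrary 30° and 40° rotations), not the paper's implementation.

```python
import numpy as np

def quat(axis, angle_deg):
    """Unit quaternion for a rotation about the given axis."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    h = np.radians(angle_deg) / 2
    return np.concatenate(([np.cos(h)], np.sin(h) * axis))

def qmul(a, b):
    """Hamilton product: rotation b followed by rotation a."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

pitch = quat([1, 0, 0], 30)   # 30 deg about the x axis
yaw = quat([0, 1, 0], 40)     # 40 deg about the y axis
# order matters: pitch-then-yaw differs from yaw-then-pitch
print(np.allclose(qmul(yaw, pitch), qmul(pitch, yaw)))  # False
```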
3D Face Modeling Using the Multi-Deformable Method
Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun
2012-01-01
In this paper, we focus on the accuracy of 3D face modeling techniques that use corresponding features in multiple views, which are quite sensitive to feature extraction errors. To address this, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976
NASA Technical Reports Server (NTRS)
Downward, James G.
1992-01-01
This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.
Simulating Local Area Network Protocols with the General Purpose Simulation System (GPSS)
1990-03-01
(Abstract not available; only table-of-contents and figure-list fragments were extracted, referencing frame generation and frame delivery in a Token Ring model, model artifices and variables, simulation results, external procedures used in the simulation, and timing parameters from the ANSI 802.3 standard.)
Modeling Images of Natural 3D Surfaces: Overview and Potential Applications
NASA Technical Reports Server (NTRS)
Jalobeanu, Andre; Kuehnel, Frank; Stutz, John
2004-01-01
Generative models of natural images have long been used in computer vision. However, since they only describe the appearance of 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is needed when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering. We focus on the computation of the posterior model parameter densities and on the critical aspects of the rendering. We also show how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.
Applicability of three-dimensional imaging techniques in fetal medicine
Werner Júnior, Heron; dos Santos, Jorge Lopes; Belmonte, Simone; Ribeiro, Gerson; Daltro, Pedro; Gasparetto, Emerson Leandro; Marchiori, Edson
2016-01-01
Objective To generate physical models of fetuses from images obtained with three-dimensional ultrasound (3D-US), magnetic resonance imaging (MRI), and, occasionally, computed tomography (CT), in order to guide additive manufacturing technology. Materials and Methods We used 3D-US images of 31 pregnant women, including 5 who were carrying twins. If abnormalities were detected by 3D-US, both MRI and in some cases CT scans were then immediately performed. The images were then exported to a workstation in DICOM format. A single observer performed slice-by-slice manual segmentation using a digital high resolution screen. Virtual 3D models were obtained from software that converts medical images into numerical models. Those models were then generated in physical form through the use of additive manufacturing techniques. Results Physical models based upon 3D-US, MRI, and CT images were successfully generated. The postnatal appearance of either the aborted fetus or the neonate closely resembled the physical models, particularly in cases of malformations. Conclusion The combined use of 3D-US, MRI, and CT could help improve our understanding of fetal anatomy. These three screening modalities can be used for educational purposes and as tools to enable parents to visualize their unborn baby. The images can be segmented and then applied, separately or jointly, in order to construct virtual and physical 3D models. PMID:27818540
NASA Astrophysics Data System (ADS)
Xu, Jiexin; Chen, Zhiwu; Xie, Jieshuo; Cai, Shuqun
2016-03-01
In this paper, the generation and evolution of seaward propagating internal solitary waves (ISWs) detected in satellite imagery in the northwestern South China Sea (SCS) are investigated by a fully nonlinear, non-hydrostatic, three-dimensional Massachusetts Institute of Technology general circulation model (MITgcm). The three-dimensional (3D) modeled ISWs agree favorably with those observed in the satellite imagery, indicating that the observed seaward propagating ISWs may be generated by the interaction of barotropic tidal flow with the arc-like continental slope south of Hainan Island. Though the tidal current is basically in the east-west direction, different types of internal waves are generated by tidal currents flowing over slopes with differently shaped shorelines. Over the slope where the shoreline is straight, only weak internal tides are generated; over the slope where the shoreline is seaward concave, large-amplitude internal bores are generated, and since the concave isobaths of the arc-like continental slope tend to focus the baroclinic tidal energy which is conveyed to the internal bores, the internal bores can efficiently disintegrate into a train of rank-ordered ISWs during their propagation away from the slope; while over the slope where the shoreline is seaward convex, no distinct internal tides are generated. It is also implied that the internal waves over the slope are generated by a mixed lee-wave mechanism. Furthermore, the effects of the 3D model, continental slope curvature, stratification, rotation and tidal forcing on the generation of ISWs are discussed, respectively.
It is shown that the amplitude and phase speed of ISWs derived from a two-dimensional (2D) model are smaller than those from the 3D one, and that the 3D model has an advantage over the 2D one in simulating ISWs generated by the interaction between tidal currents and the 3D curved continental slope; that reduced continental slope curvature hinders the extension of the ISW crestline; that both weaker stratification and weaker rotation suppress the generation of ISWs; and that the width of the ISW crestline generated by the K1 tidal harmonic is longer than that generated by the M2 tidal harmonic.
Synthesis of image sequences for Korean sign language using 3D shape model
NASA Astrophysics Data System (ADS)
Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon
1995-05-01
This paper proposes a method for offering information to, and realizing communication with, the deaf-mute. The deaf-mute communicate with other people by means of sign language, but most people are unfamiliar with it. This method makes it possible to convert text data into the corresponding image sequences for Korean sign language (KSL). Using a general 3D shape model of the upper body allows the generation of the 3D motions of KSL. It is necessary to construct the general 3D shape model considering the anatomical structure of the human body. To obtain a personal 3D shape model, the general model is adjusted to personal base images. Image synthesis for KSL consists of deforming a personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise the facial expressions and the 3D movements of the head, trunk, arms and hands, and are parameterized for easy deformation of the model. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and stored in a database. Editing the parameters according to the input text data yields the image sequences of 3D motions.
Summary on several key techniques in 3D geological modeling.
Mei, Gang
2014-01-01
Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
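As an illustration of the spatial interpolation step, a discrete surface can be obtained by estimating elevations on a planar mesh from scattered measurements; the sketch below uses inverse-distance weighting, one common choice, with made-up borehole depths.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation of scattered elevations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)        # avoid division by zero at data points
    w = 1.0 / d**power
    return (w @ z_known) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # borehole x, y
z = np.array([10.0, 12.0, 11.0, 13.0])                            # interface depths
grid = np.array([[0.5, 0.5]])                                     # mesh node to fill
print(idw(pts, z, grid))   # [11.5]
```

Evaluating `idw` over all nodes of a planar mesh yields the discrete geometric surface that subsequent intersection computations operate on.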
Generating Three-Dimensional Surface Models of Solid Objects from Multiple Projections.
1982-10-01
volume descriptions. The surface models are composed of curved, topologically rectangular, parametric patches. The data required to define these patches ... geometry directly from image data. This method generates 3D surface descriptions of only those parts of the object that are illuminated by the projected ... objects. Generation of such models inherently requires the acquisition and analysis of 3D surface data. In this context, acquisition refers to the
Wang, Zhongmin; Liu, Yuhao; Luo, Hongxing; Gao, Chuanyu; Zhang, Jing; Dai, Yuya
2017-11-01
Three-dimensional (3D) printing is a newly-emerged technology converting a series of two-dimensional images to a touchable 3D model, but no studies have investigated whether or not a 3D printing model is better than a traditional cardiac model for medical education. A 3D printing cardiac model was generated using multi-slice computed tomography datasets. Thirty-four medical students were randomized to either the 3D Printing Group, taught with the aid of a 3D printing cardiac model, or the Traditional Model Group, taught with a commonly used plastic cardiac model. Questionnaires with 10 medical questions and 3 evaluative questions were filled in by the students. A 3D printing cardiac model was successfully generated. Students in the 3D Printing Group were slightly quicker to answer all questions when compared with the Traditional Model Group (224.53 ± 44.13 s vs. 238.71 ± 68.46 s, p = 0.09), but the total score was not significantly different (6.24 ± 1.30 vs. 7.18 ± 1.70, p = 0.12). Neither the students' satisfaction (p = 0.48) nor their understanding of cardiac structures (p = 0.24) was significantly different between the two groups. More students in the 3D Printing Group believed that they had understood at least 90% of the teaching content (6 vs. 1). Both groups had 12 (70.6%) students who preferred a 3D printing model for medical education. A 3D printing model was not significantly superior to a traditional model in teaching cardiac diseases in our pilot randomized controlled study, yet more studies may be conducted to validate the real effect of 3D printing on medical education.
Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model
NASA Astrophysics Data System (ADS)
Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man
2017-03-01
Computer-generated hologram (CGH) techniques are becoming increasingly important for 3-D displays in various applications, including virtual reality. In CGH, holographic fringe patterns are generated by numerical calculation in computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of a 3D object. This paper proposes a new fast CGH generation method based on the sparsity of the CGH for a 3D point cloud model. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane using the sparse FFT (sFFT). We observe that the CGH of a layer of a 3D object is sparse, so that the dominant CGH can be rapidly generated from a small set of signals by sFFT. Experimental results have shown that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.
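A layer-based CGH accumulates, on the hologram plane, the field propagated from each depth layer. The dense-FFT sketch below illustrates that pipeline with the angular-spectrum propagator; the paper's sFFT acceleration would replace these FFTs, and the wavelength, pixel pitch and layer depths here are illustrative values.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field by distance z (angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent terms dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# slice the model into depth layers, propagate each to the CGH plane, accumulate
layers = {0.01: np.zeros((64, 64), complex), 0.02: np.zeros((64, 64), complex)}
layers[0.01][32, 32] = 1.0   # a point emitter on layer 1
layers[0.02][16, 48] = 1.0   # a point emitter on layer 2
cgh = sum(angular_spectrum(f, 633e-9, 8e-6, z) for z, f in layers.items())
```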
PubChem3D: Conformer generation
2011-01-01
Background PubChem, an open archive for the biological activities of small molecules, provides search and analysis tools to assist users in locating desired information. Many of these tools focus on the notion of chemical structure similarity at some level. PubChem3D enables similarity of chemical structure 3-D conformers to augment the existing similarity of 2-D chemical structure graphs. It is also desirable to relate theoretical 3-D descriptions of chemical structures to experimental biological activity. As such, it is important to be assured that the theoretical conformer models can reproduce experimentally determined bioactive conformations. In the present study, we investigate the effects of three primary conformer generation parameters (the fragment sampling rate, the energy window size, and force field variant) upon the accuracy of theoretical conformer models, and determined optimal settings for PubChem3D conformer model generation and conformer sampling. Results Using the software package OMEGA from OpenEye Scientific Software, Inc., theoretical 3-D conformer models were generated for 25,972 small-molecule ligands, whose 3-D structures were experimentally determined. Different values for primary conformer generation parameters were systematically tested to find optimal settings. Employing a greater fragment sampling rate than the default did not improve the accuracy of the theoretical conformer model ensembles. An ever increasing energy window did increase the overall average accuracy, with rapid convergence observed at 10 kcal/mol and 15 kcal/mol for model building and torsion search, respectively; however, subsequent study showed that an energy threshold of 25 kcal/mol for torsion search resulted in slightly improved results for larger and more flexible structures. Exclusion of coulomb terms from the 94s variant of the Merck molecular force field (MMFF94s) in the torsion search stage gave more accurate conformer models at lower energy windows. 
Overall average accuracy of reproducing bioactive conformations was remarkably linear with respect to both non-hydrogen atom count ("size") and effective rotor count ("flexibility"). Using these as independent variables, a regression equation was developed to predict the RMSD accuracy with which a theoretical ensemble reproduces bioactive conformations. The equation was modified to give a minimum RMSD conformer sampling value, chosen so that 90% of the sampled theoretical models contain at least one conformer within that RMSD of a "bioactive" conformation. Conclusion Optimal parameters for conformer generation using OMEGA were explored and determined. An equation was developed that provides an RMSD sampling value based on the relative accuracy of reproducing bioactive conformations. The optimal conformer generation parameters and RMSD sampling values determined are used by the PubChem3D project to generate theoretical conformer models. PMID:21272340
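The size/flexibility regression described above can be sketched as an ordinary least-squares fit. The training data and the resulting coefficients below are purely illustrative stand-ins, not the values derived by the PubChem3D study:

```python
import numpy as np

# Hypothetical training data: non-hydrogen atom count ("size"),
# effective rotor count ("flexibility"), and observed best-conformer
# RMSD (in Angstroms) to the bioactive conformation per ligand ensemble.
size = np.array([10, 18, 25, 33, 41, 52], dtype=float)
flexibility = np.array([1, 3, 5, 7, 9, 12], dtype=float)
rmsd = np.array([0.4, 0.7, 1.0, 1.3, 1.6, 2.1])

# Design matrix for the linear model: rmsd ~ b0 + b1*size + b2*flexibility.
X = np.column_stack([np.ones_like(size), size, flexibility])
coef, *_ = np.linalg.lstsq(X, rmsd, rcond=None)

def predicted_rmsd(n_heavy, n_rotor):
    """Predict ensemble RMSD accuracy from size and flexibility."""
    return coef[0] + coef[1] * n_heavy + coef[2] * n_rotor

print(predicted_rmsd(30, 6))
```

The abstract's "minimum RMSD conformer sampling value" would then be obtained by shifting this prediction so that 90% of sampled ensembles fall within it.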
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhou, S; Cai, W; Hurwitz, M
2015-06-15
Purpose: Respiratory-correlated cone-beam CT (4DCBCT) images acquired immediately prior to treatment have the potential to represent patient motion patterns and anatomy during treatment, including both intra- and inter-fractional changes. We develop a method to generate patient-specific motion models based on 4DCBCT images acquired with existing clinical equipment, and to use these models to generate time-varying volumetric images (3D fluoroscopic images) representing motion during treatment delivery. Methods: Motion models are derived by deformably registering each 4DCBCT phase to a reference phase and performing principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated by iteratively optimizing the resulting PCA coefficients through comparison of cone-beam projections simulating kV treatment imaging with digitally reconstructed radiographs generated from the motion model. Patient and physical phantom datasets are used to evaluate the method in terms of tumor localization error compared to manually defined ground truth positions. Results: 4DCBCT-based motion models were derived and used to generate 3D fluoroscopic images at treatment time. For the patient datasets, the average tumor localization error and the 95th percentile were 1.57 and 3.13, respectively, in subsets of four patient datasets. For the physical phantom datasets, the average tumor localization error and the 95th percentile were 1.14 and 2.78, respectively, in two datasets. 4DCBCT motion models are shown to perform well in the context of generating 3D fluoroscopic images due to their ability to reproduce anatomical changes at treatment time. Conclusion: This study showed the feasibility of deriving 4DCBCT-based motion models and using them to generate 3D fluoroscopic images at treatment time in real clinical settings.
4DCBCT-based motion models were found to account for the 3D non-rigid motion of the patient anatomy during treatment and have the potential to localize tumors and other patient anatomical structures at treatment time even when inter-fractional changes occur. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA. The project was also supported, in part, by Award Number R21CA156068 from the National Cancer Institute.
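The motion-model construction step (PCA on registration-derived displacement vector fields) can be sketched as follows. The data here are random stand-ins for real DVFs, and the array sizes and mode count are arbitrary:

```python
import numpy as np

# Hypothetical displacement vector fields (DVFs): one flattened field per
# respiratory phase, as would be produced by deformable registration of
# each 4DCBCT phase to a reference phase.
rng = np.random.default_rng(0)
n_phases, n_voxels = 10, 3000          # toy sizes
dvfs = rng.normal(size=(n_phases, n_voxels))

# PCA via SVD of the mean-centered DVF matrix.
mean_dvf = dvfs.mean(axis=0)
centered = dvfs - mean_dvf
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

# Keep the leading modes; a motion state is parameterized by a few PCA
# coefficients w, giving dvf(w) = mean + w @ modes. Treatment-time 3D
# fluoroscopic estimation then optimizes w against the kV projections.
n_modes = 3
modes = Vt[:n_modes]

def dvf_from_coefficients(w):
    return mean_dvf + np.asarray(w) @ modes

# Reconstructing a training phase from its own projected coefficients:
w0 = centered[0] @ modes.T
approx = dvf_from_coefficients(w0)
```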
Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter
NASA Technical Reports Server (NTRS)
Belknap, Shannon; Zhang, Michael
2013-01-01
The 3D Automated Thermal Tool for Damaged Acreage Tile Math Model builder was developed to quickly and accurately perform 3D thermal analyses on damaged lower surface acreage tiles and the structures beneath the damaged locations on a Space Shuttle Orbiter. The 3D model builder created both TRASYS geometric math models (GMMs) and SINDA thermal math models (TMMs) to simulate an idealized damaged cavity in the damaged tile(s). The GMMs are processed in TRASYS to generate radiation conductors between the surfaces in the cavity. The radiation conductors are inserted into the TMMs, which are processed in SINDA to generate temperature histories for all of the nodes on each layer of the TMM. The invention allows a thermal analyst to quickly and accurately create a 3D model of a damaged lower surface tile on the orbiter. The 3D model builder can generate a GMM and the corresponding TMM in one or two minutes, with the damaged cavity included in the tile material. A separate program creates a configuration file, which takes only a couple of minutes to edit. This configuration file is read by the model builder program to determine the location of the damage, the correct tile type, tile thickness, structure thickness, and SIP thickness at the damage site, so that the model builder can build an accurate model at the specified location. Once the models are built, they are processed by TRASYS and SINDA.
Summary on Several Key Techniques in 3D Geological Modeling
2014-01-01
Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized. PMID:24772029
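As one concrete instance of the spatial interpolation step named above, a minimal inverse-distance-weighting (IDW) sketch is shown below. IDW is only one of several interpolation methods such surveys cover, and the sample points are invented:

```python
import numpy as np

# Minimal IDW interpolation: estimate the elevation of a geological
# interface at query points from scattered observations (e.g. boreholes).
def idw(xy_obs, z_obs, xy_query, power=2.0, eps=1e-12):
    # Pairwise distances between query points and observations.
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    # Weights fall off with distance; eps avoids division by zero, so a
    # query exactly on an observation returns (almost) the observed value.
    w = 1.0 / (d ** power + eps)
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 12.0, 11.0, 13.0])
grid = np.array([[0.5, 0.5], [0.0, 0.0]])
print(idw(obs, z, grid))  # center of the square, then a known corner
```

The interpolated surface at grid nodes would then be meshed and intersected with other surfaces to form the geological volumes.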
3D Model Generation From the Engineering Drawing
NASA Astrophysics Data System (ADS)
Vaský, Jozef; Eliáš, Michal; Bezák, Pavol; Červeňanská, Zuzana; Izakovič, Ladislav
2010-01-01
The contribution deals with the transformation of engineering drawings in paper form into a 3D computer representation. A 3D computer model can be further processed in a CAD/CAM system; it can be modified and archived, and a technical drawing can then be generated from it as well. The transformation from paper to digital form is a complex and difficult process, particularly owing to the different types of drawings, the forms of the displayed objects, and the errors and deviations from technical standards encountered in practice. This contribution describes an algorithm for generating a 3D model from an orthogonal vector input representing a simplified technical drawing of a rotational part. The algorithm was experimentally implemented as an ObjectARX application in the AutoCAD system, and a test sample representing a rotational part was used for verification.
An integrated 3D log processing optimization system for small sawmills in central Appalachia
Wenshu Lin; Jingxin Wang
2013-01-01
An integrated 3D log processing optimization system was developed to perform 3D log generation, opening face determination, headrig log sawing simulation, flitch edging and trimming simulation, cant resawing, and lumber grading. A circular cross-section model, together with 3D modeling techniques, was used to reconstruct 3D virtual logs. Internal log defects (knots)...
SubductionGenerator: A program to build three-dimensional plate configurations
NASA Astrophysics Data System (ADS)
Jadamec, M. A.; Kreylos, O.; Billen, M. I.; Turcotte, D. L.; Knepley, M.
2016-12-01
Geologic, geochemical, and geophysical data from subduction zones indicate that a two-dimensional paradigm for plate tectonic boundaries is no longer adequate to explain the observations. Many open source software packages exist to simulate the viscous flow of the Earth, such as the dynamics of subduction. However, there are few open source programs that generate the three-dimensional model input. We present an open source software program, SubductionGenerator, that constructs the three-dimensional initial thermal structure and plate boundary structure. A 3D model mesh and tectonic configuration are constructed based on a user-specified model domain, slab surface, seafloor age grid file, and shear zone surface. The initial 3D thermal structure for the plates and mantle within the model domain is then constructed using a series of libraries within the code that apply a half-space cooling model, a plate cooling model, and smoothing functions. The code maps the initial 3D thermal structure and the 3D plate interface onto the mesh nodes using a series of libraries, including a k-d tree to increase efficiency. In this way, complicated geometries and multiple plates with variable thickness can be built onto a multi-resolution finite element mesh with a 3D thermal structure and 3D isotropic shear zones oriented at any angle with respect to the grid. SubductionGenerator is aimed at model set-ups more representative of the Earth, which can be particularly challenging to construct. Examples include subduction zones where the physical attributes vary in space, such as slab dip and temperature, and overriding plate temperature and thickness. Thus, the program can be used to construct initial tectonic configurations for triple junctions and plate boundary corners.
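The half-space cooling model mentioned above has a standard closed form, T(z, t) = Ts + (Tm - Ts) * erf(z / (2 * sqrt(kappa * t))), with depth z and plate age t taken from the seafloor age grid. A minimal sketch, with illustrative parameter values not taken from the paper:

```python
import math

# Half-space cooling temperature at depth z_m (meters) for plate age
# age_s (seconds). Ts: surface temperature, Tm: mantle temperature,
# kappa: thermal diffusivity. All defaults are typical textbook values.
def halfspace_temperature(z_m, age_s, Ts=273.0, Tm=1673.0, kappa=1e-6):
    if age_s <= 0.0:
        # Zero-age plate: mantle temperature everywhere below the surface.
        return Tm if z_m > 0 else Ts
    return Ts + (Tm - Ts) * math.erf(z_m / (2.0 * math.sqrt(kappa * age_s)))

MYR = 3.15576e13  # seconds per million years
# Temperature at 50 km depth in 60 Myr old oceanic lithosphere:
print(halfspace_temperature(50e3, 60 * MYR))
```

A generator like the one described would evaluate this at every mesh node under oceanic plates, then blend it with a plate cooling model and smoothing near the slab.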
Software for browsing sectioned images of a dog body and generating a 3D model.
Park, Jin Seo; Jung, Yong Wook
2016-01-01
The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images and a portable document format (PDF) file that includes three-dimensional (3D) models of an entire dog body and (2) to develop techniques for segmentation and 3D modeling that would enable an investigator to perform these tasks without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software. In this software, structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, they were exported into a PDF file, in which the 3D models can be manipulated freely. The browsing software and PDF file are available for study by students, lectures by teachers, and training of clinicians. These files will be helpful for the anatomical study and clinical training of veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who study two-dimensional images and 3D models. © 2015 Wiley Periodicals, Inc.
Development of a model of the coronary arterial tree for the 4D XCAT phantom
NASA Astrophysics Data System (ADS)
Fung, George S. K.; Segars, W. Paul; Gullberg, Grant T.; Tsui, Benjamin M. W.
2011-09-01
A detailed three-dimensional (3D) model of the coronary artery tree with cardiac motion has great potential for applications in a wide variety of medical imaging research areas. In this work, we first developed a computer-generated 3D model of the coronary arterial tree for the heart in the extended cardiac-torso (XCAT) phantom, thereby creating a realistic computer model of the human anatomy. The coronary arterial tree model was based on two datasets: (1) a gated cardiac dual-source computed tomography (CT) angiographic dataset obtained from a normal human subject and (2) statistical morphometric data of porcine hearts. The initial proximal segments of the vasculature and the anatomical details of the boundaries of the ventricles were defined by segmenting the CT data. An iterative rule-based generation method was developed and applied to extend the coronary arterial tree beyond the initial proximal segments. The algorithm was governed by three factors: (1) statistical morphometric measurements of the connectivity, lengths and diameters of the arterial segments; (2) avoidance forces from other vessel segments and the boundaries of the myocardium, and (3) optimality principles which minimize the drag force at the bifurcations of the generated tree. Using this algorithm, the 3D computational model of the largest six orders of the coronary arterial tree was generated, which spread across the myocardium of the left and right ventricles. The 3D coronary arterial tree model was then extended to 4D to simulate different cardiac phases by deforming the original 3D model according to the motion vector map of the 4D cardiac model of the XCAT phantom at the corresponding phases. As a result, a detailed and realistic 4D model of the coronary arterial tree was developed for the XCAT phantom by imposing constraints of anatomical and physiological characteristics of the coronary vasculature. 
This new 4D coronary artery tree model provides a unique simulation tool that can be used in the development and evaluation of instrumentation and methods for imaging normal and pathological hearts with myocardial perfusion defects.
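The optimality principle at bifurcations (minimizing drag/dissipation) is commonly expressed through Murray's law, r_parent^3 = r_1^3 + r_2^3. Below is a minimal sketch of that radius rule alone, with hypothetical radii; the actual generator described above also uses morphometric statistics and avoidance forces:

```python
# Murray's law sketch: at a bifurcation, minimizing viscous dissipation
# implies r_parent**3 == r_child1**3 + r_child2**3.
def child_radius(r_parent, r_sibling):
    """Radius of the second daughter branch implied by Murray's law."""
    return (r_parent**3 - r_sibling**3) ** (1.0 / 3.0)

r_parent = 2.0   # mm, hypothetical proximal segment radius
r1 = 1.6         # mm, hypothetical first daughter branch radius
r2 = child_radius(r_parent, r1)

# Cubed radii are conserved across the bifurcation:
assert abs(r_parent**3 - (r1**3 + r2**3)) < 1e-9
print(r2)
```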
FaceTOON: a unified platform for feature-based cartoon expression generation
NASA Astrophysics Data System (ADS)
Zaharia, Titus; Marre, Olivier; Prêteux, Françoise; Monjaux, Perrine
2008-02-01
This paper presents the FaceTOON system, a semi-automatic platform dedicated to the creation of verbal and emotional facial expressions within the applicative framework of 2D cartoon production. The proposed FaceTOON platform makes it possible to rapidly create 3D facial animations with a minimum amount of user interaction. In contrast with existing commercial 3D modeling software, which usually requires users to have advanced 3D graphics skills, the FaceTOON system is based exclusively on 2D interaction mechanisms, the 3D modeling stage being completely transparent to the user. The system takes as input a neutral 3D face model, free of any facial features, and a set of 2D drawings representing the desired facial features. A 2D/3D virtual mapping procedure makes it possible to obtain a ready-for-animation model which can be directly manipulated and deformed to generate expressions. The platform includes a complete set of dedicated tools for 2D/3D interactive deformation, pose management, key-frame interpolation, and MPEG-4 compliant animation and rendering. The proposed FaceTOON system is currently being considered for industrial evaluation and commercialization by the Quadraxis company.
Two-dimensional vocal tracts with three-dimensional behavior in the numerical generation of vowels.
Arnela, Marc; Guasch, Oriol
2014-01-01
Two-dimensional (2D) numerical simulations of vocal tract acoustics may provide a good balance between the high quality of three-dimensional (3D) finite element approaches and the low computational cost of one-dimensional (1D) techniques. However, 2D models are usually generated by considering the 2D vocal tract as a midsagittal cut of a 3D version, i.e., using the same radius function, wall impedance, glottal flow, and radiation losses as in 3D, which leads to strong discrepancies in the resulting vocal tract transfer functions. In this work, a four-step methodology is proposed to match the behavior of 2D simulations with that of 3D vocal tracts with circular cross-sections. First, the 2D vocal tract profile is modified to tune the formant locations. Second, the 2D wall impedance is adjusted to fit the formant bandwidths. Third, the 2D glottal flow is scaled to recover 3D pressure levels. Fourth and last, the 2D radiation model is tuned to match the 3D model following an optimization process. The procedure is tested for vowels /a/, /i/, and /u/, and the obtained results are compared with those of a full 3D simulation, a conventional 2D approach, and a 1D chain matrix model.
Medical 3D Printing for the Radiologist
Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A.; Cai, Tianrun; Kumamaru, Kanako K.; George, Elizabeth; Wake, Nicole; Caterson, Edward J.; Pomahac, Bohdan; Ho, Vincent B.; Grant, Gerald T.
2015-01-01
While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. ©RSNA, 2015 PMID:26562233
3D deformable organ model based liver motion tracking in ultrasound videos
NASA Astrophysics Data System (ADS)
Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong
2013-03-01
This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling, in which we generate a personalized 3D organ model from high quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we resolve the candidate that best matches the 3D US images according to vessel centerline and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to respiratory phase and is fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred based on respiratory phase. Testing our method on real patient data, we found that the 3D position accuracy is within 3.79 mm and the processing time during tracking is 5.4 ms.
Feasibility of fabricating personalized 3D-printed bone grafts guided by high-resolution imaging
NASA Astrophysics Data System (ADS)
Hong, Abigail L.; Newman, Benjamin T.; Khalid, Arbab; Teter, Olivia M.; Kobe, Elizabeth A.; Shukurova, Malika; Shinde, Rohit; Sipzner, Daniel; Pignolo, Robert J.; Udupa, Jayaram K.; Rajapakse, Chamith S.
2017-03-01
Current methods of bone graft treatment for critical size bone defects can give way to several clinical complications such as limited available bone for autografts, non-matching bone structure, lack of strength which can compromise a patient's skeletal system, and sterilization processes that can prevent osteogenesis in the case of allografts. We intend to overcome these disadvantages by generating a patient-specific 3D printed bone graft guided by high-resolution medical imaging. Our synthetic model allows us to customize the graft for the patients' macro- and microstructure and correct any structural deficiencies in the re-meshing process. These 3D-printed models can presumptively serve as the scaffolding for human mesenchymal stem cell (hMSC) engraftment in order to facilitate bone growth. We performed high-resolution CT imaging of a cadaveric human proximal femur at 0.030-mm isotropic voxels. We used these images to generate a 3D computer model that mimics bone geometry from micro to macro scale represented by STereoLithography (STL) format. These models were then reformatted to a format that can be interpreted by the 3D printer. To assess how much of the microstructure was replicated, 3D-printed models were re-imaged using micro-CT at 0.025-mm isotropic voxels and compared to original high-resolution CT images used to generate the 3D model in 32 sub-regions. We found a strong correlation between 3D-printed bone volume and volume of bone in the original images used for 3D printing (R2 = 0.97). We expect to further refine our approach with additional testing to create a viable synthetic bone graft with clinical functionality.
Visualization of the variability of 3D statistical shape models by animation.
Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter
2004-01-01
Models of the 3D shape of anatomical objects and knowledge about their statistical variability are of great benefit in many computer-assisted medical applications such as image analysis and therapy or surgery planning. Statistical shape models have been successfully applied to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes; this remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate the variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.
Integrated Use of Remote Sensed Data and Numerical Cartography for the Generation of 3d City Models
NASA Astrophysics Data System (ADS)
Bitelli, G.; Girelli, V. A.; Lambertini, A.
2018-05-01
3D city models are becoming increasingly popular and important because they constitute the base for visualization, planning, and management operations regarding urban infrastructure. These data are, however, not available in the majority of cities; in this paper, the possibility of using geospatial data of various kinds to generate 3D models of urban environments is investigated. In 3D modelling work, the starting data are frequently 3D point clouds, which can nowadays be collected by different sensors mounted on different platforms: LiDAR, imagery from satellite, airborne or unmanned aerial vehicles, and mobile mapping systems that integrate several sensors. The processing of the acquired data, and consequently the ability to obtain models that provide geometric accuracy and a good visual impact, is limited by time, cost, and logistic constraints. Nowadays, more and more innovative hardware and software solutions offer municipalities and public authorities the possibility of using available geospatial data, acquired for diverse aims, to generate 3D models of buildings and cities characterized by different levels of detail. In this paper two case studies are presented, both regarding surveys carried out in the Emilia-Romagna region, Italy, where 2D or 2.5D numerical maps are available. The first concerns the use of oblique aerial images acquired by the Municipality for systematic documentation of the built environment; the second concerns the use of LiDAR data acquired for other purposes. In the two tests, these data were used in conjunction with large-scale numerical maps to produce 3D city models.
Comprehending 3D Diagrams: Sketching to Support Spatial Reasoning.
Gagnier, Kristin M; Atit, Kinnari; Ormand, Carol J; Shipley, Thomas F
2017-10-01
Science, technology, engineering, and mathematics (STEM) disciplines commonly illustrate 3D relationships in diagrams, yet these are often challenging for students. Failing to understand diagrams can hinder success in STEM because scientific practice requires understanding and creating diagrammatic representations. We explore a new approach to improving student understanding of diagrams that convey 3D relations, based on students generating their own predictive diagrams. Participants' comprehension of 3D spatial diagrams was measured in a pre- and post-test design where students selected the correct 2D slice through 3D geologic block diagrams. Generating sketches that predicted the internal structure of a model led to greater improvement in diagram understanding than visualizing the interior of the model without sketching, or sketching the model without attempting to predict unseen spatial relations. In addition, we found a positive correlation between sketched diagram accuracy and improvement on the diagram comprehension measure. Results suggest that generating a predictive diagram facilitates students' abilities to make inferences about spatial relationships in diagrams. Implications for use of sketching in supporting STEM learning are discussed. Copyright © 2016 Cognitive Science Society, Inc.
To generate a finite element model of human thorax using the VCH dataset
NASA Astrophysics Data System (ADS)
Shi, Hui; Liu, Qian
2009-10-01
Purpose: To generate a three-dimensional (3D) finite element (FE) model of the human thorax that may provide a basis for biomechanical simulation, for example in studying the design and mechanism of safety belts in vehicle collisions. Methods: Using manual or semi-manual segmentation, the region of interest was extracted from the VCH (Visible Chinese Human) dataset. The 3D surface model of the thorax was visualized using VTK (Visualization Toolkit) and further translated into STL (Stereo Lithography) format, which approximates the geometry of a solid model by representing its boundaries with triangular facets. The data in STL format were then normalized into NURBS surfaces and IGES format using software such as Geomagic Studio to provide an archetype for reverse engineering. The 3D FE model was established using Ansys software. Results: The generated 3D FE model was an integrated thorax model reproducing the complicated structural morphology of the human thorax, including the clavicle, ribs, spine, and sternum. It consisted of 1,044,179 elements in total. Conclusions: Compared with previous thorax models, this FE model markedly improves the fidelity and precision of the analysis and can provide a sound basis for biomechanical research on the human thorax. Furthermore, using the method above, 3D FE models of other organs and tissues can also be established from the VCH dataset.
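The STL representation described above (solid boundaries as triangular facets) can be illustrated with a minimal ASCII STL writer. Real pipelines such as VTK handle this internally; the single-facet mesh below is a toy example:

```python
# Minimal ASCII STL writer: each facet stores a unit normal (computed
# here from the vertex winding) and three vertices.
def write_ascii_stl(path, triangles, name="thorax"):
    def normal(a, b, c):
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        n = [u[1]*v[2] - u[2]*v[1],
             u[2]*v[0] - u[0]*v[2],
             u[0]*v[1] - u[1]*v[0]]
        length = sum(x * x for x in n) ** 0.5 or 1.0
        return [x / length for x in n]

    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in triangles:
            nx, ny, nz = normal(a, b, c)
            f.write(f"  facet normal {nx:e} {ny:e} {nz:e}\n")
            f.write("    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# One facet of a toy mesh in the xy-plane:
write_ascii_stl("demo.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

Surface meshes in this faceted form are what tools like Geomagic Studio then fit with NURBS surfaces for export to IGES.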
Automatic 3D high-fidelity traffic interchange modeling using 2D road GIS data
NASA Astrophysics Data System (ADS)
Wang, Jie; Shen, Yuzhong
2011-03-01
3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models have been generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially for those existing in the real world. A real road network contains various elements such as road segments, road intersections, and traffic interchanges. Among them, traffic interchanges are the most challenging to model due to their complexity and the lack of height information (vertical position) for interchanges in existing road GIS data. This paper proposes a novel approach that can automatically produce 3D high-fidelity road network models, including traffic interchange models, from real 2D road GIS data that mainly contain road centerline information. The proposed method consists of several steps. The raw road GIS data are first preprocessed to extract road network topology, merge redundant links, and classify road types. Then overlapped points in the interchanges are detected and their elevations are determined based on a set of level estimation rules. Parametric representations of the road centerlines are then generated through link segmentation and fitting; these have the advantage of supporting arbitrary levels of detail with reduced memory usage. Finally, a set of civil engineering rules for road design (e.g., cross slope, superelevation) is applied to generate realistic road surfaces. In addition to traffic interchange modeling, the proposed method also applies to other, more general road elements. Preliminary results show that the proposed method is highly effective and useful in many applications.
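The overlapped-point detection step might be sketched as below. The spatial hashing and the link-priority rule are illustrative simplifications; the paper's level estimation rules are richer:

```python
from collections import defaultdict

# 2D road GIS data carries no elevation, so points where distinct links
# cross at (nearly) the same planar location must be found and assigned
# vertical levels. Here, overlap detection uses a coarse spatial hash and
# levels are assigned by link-id order as a stand-in priority rule.
def assign_levels(links, tol=1.0):
    # links: {link_id: [(x, y), ...]}; returns {(link_id, index): level}
    buckets = defaultdict(list)
    for lid, pts in links.items():
        for i, (x, y) in enumerate(pts):
            key = (round(x / tol), round(y / tol))
            buckets[key].append((lid, i))
    levels = {}
    for members in buckets.values():
        overlapping = len({m[0] for m in members}) > 1
        for level, (lid, i) in enumerate(sorted(members)):
            levels[(lid, i)] = level if overlapping else 0
    return levels

links = {"A": [(0.0, 0.0), (10.0, 0.0)],
         "B": [(10.0, 0.0), (10.0, 10.0)]}
print(assign_levels(links))
```

Links "A" and "B" meet at (10, 0), so that shared point is stacked onto two levels while all other points stay at ground level.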
Automated building of organometallic complexes from 3D fragments.
Foscato, Marco; Venkatraman, Vishwesh; Occhipinti, Giovanni; Alsberg, Bjørn K; Jensen, Vidar R
2014-07-28
A method for the automated construction of three-dimensional (3D) molecular models of organometallic species in design studies is described. Molecular structure fragments derived from crystallographic structures and accurate molecular-level calculations are used as 3D building blocks in the construction of multiple molecular models of analogous compounds. The method allows for precise control of stereochemistry and geometrical features that may otherwise be very challenging, or even impossible, to achieve with commonly available generators of 3D chemical structures. The new method was tested in the construction of three sets of active or metastable organometallic species of catalytic reactions in the homogeneous phase. The performance of the method was compared with those of commonly available methods for automated generation of 3D models, demonstrating higher accuracy of the prepared 3D models in general, and, in particular, a much wider range with respect to the kind of chemical structures that can be built automatically, with capabilities far beyond standard organic and main-group chemistry.
NASA Astrophysics Data System (ADS)
Gould, C. A.; Shammas, N. Y. A.; Grainger, S.; Taylor, I.; Simpson, K.
2012-06-01
This paper documents the 3D modeling and simulation of a three-couple thermoelectric module using the Synopsys Technology Computer-Aided Design (TCAD) semiconductor simulation software. Simulation results are presented for thermoelectric power generation, cooling, and heating, and they successfully demonstrate the basic thermoelectric principles. The 3D TCAD simulation model of a three-couple thermoelectric module can be used in the future to evaluate different thermoelectric materials and device structures, and to improve the efficiency and performance of thermoelectric modules.
Anatomical evaluation and stress distribution of intact canine femur.
Verim, Ozgur; Tasgetiren, Suleyman; Er, Mehmet S; Ozdemir, Vural; Yuran, Ahmet F
2013-03-01
In the biomedical field, three-dimensional (3D) modeling and analysis of bones and tissues have steadily gained in importance. The aim of this study was to produce more accurate 3D models of the canine femur from computed tomography (CT) data using several modeling software programs and two different methods. The accuracy of the analysis depends on the modeling process and on the correct boundary conditions. Solidworks, Rapidform, Inventor, and 3DsMax were used to create the 3D models. Data derived from CT were converted into 3D models using two methods: in the first, the 3D models were generated from boundary lines, while in the second, they were generated from point clouds. Stress analyses of the models were performed with ANSYS v12, taking into account the muscle forces acting on the canine femur. When stress values and statistical values were considered, the point cloud method produced the more accurate models. The maximum von Mises stress on the canine femur shaft was found to be 34.8 MPa. Stress and accuracy values were obtained from the model built with the Rapidform software, and the values obtained were similar to those reported in the literature. Copyright © 2012 John Wiley & Sons, Ltd.
A 3D stand generator for central Appalachian hardwood forests
Jingxin Wang; Yaoxiang Li; Gary W. Miller
2002-01-01
A 3-dimensional (3D) stand generator was developed for central Appalachian hardwood forests. It was designed for a harvesting simulator to examine the interactions among stand, harvest, and machine. The Component Object Model (COM) was used to design and implement the program. Input to the generator includes species composition, stand density, and spatial pattern. Output...
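The generator's input-to-output idea (composition, density, and spatial pattern in; tree records out) can be sketched as follows. This assumes a random (Poisson-type) spatial pattern, and the species names and diameter range are illustrative placeholders, not values from the paper:

```python
import random

def generate_stand(width_m, height_m, stems_per_ha, composition, seed=0):
    """Place trees at random (Poisson-pattern) positions in a plot.

    `composition` maps species name -> proportion (assumed to sum to 1).
    The DBH range is a placeholder, not the generator's actual
    diameter distribution.
    """
    rng = random.Random(seed)
    # Expected stem count for the plot area.
    n = round(stems_per_ha * width_m * height_m / 10000.0)
    species, weights = zip(*composition.items())
    stand = []
    for _ in range(n):
        sp = rng.choices(species, weights=weights)[0]
        stand.append({
            "x": rng.uniform(0, width_m),
            "y": rng.uniform(0, height_m),
            "species": sp,
            "dbh_cm": rng.uniform(10, 60),  # placeholder diameter range
        })
    return stand

# A 1 ha plot at 250 stems/ha with a two-species composition.
plot = generate_stand(100, 100, 250, {"red oak": 0.4, "yellow-poplar": 0.6})
```

Clustered or regular spatial patterns would replace the uniform draws with, e.g., parent-offspring or inhibition processes.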
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2017-11-01
Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined for generating a high-density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are then processed by the different software packages to generate point clouds and DSMs. To evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out, and the comparison results are reported.
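The geometric assessment step can be sketched as a cloud-to-cloud comparison: the mean distance from each generated point to its nearest neighbour in a reference cloud. This brute-force version is illustrative only, not the evaluation code used in the study:

```python
import math

def mean_nn_distance(cloud, reference):
    """Mean distance from each point in `cloud` to its nearest
    neighbour in `reference` (brute force; fine for small clouds,
    a spatial index would be used for millions of points)."""
    total = 0.0
    for p in cloud:
        total += min(math.dist(p, q) for q in reference)
    return total / len(cloud)

# A 3x3 reference grid and a test cloud offset by 0.1 m in z.
ref = [(float(x), float(y), 0.0) for x in range(3) for y in range(3)]
test = [(float(x), float(y), 0.1) for x in range(3) for y in range(3)]
err = mean_nn_distance(test, ref)
```

Here every test point sits 0.1 m above its reference counterpart, so the mean nearest-neighbour distance is 0.1 m.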
Sandia MEMS Visualization Tools v. 3.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yarberry, Victor; Jorgensen, Craig R.; Young, Andrew I.
This is a revision to the Sandia MEMS Visualization Tools. It replaces all previous versions. New in this version: support for AutoCAD 2014 and 2015. This CD contains an integrated set of electronic files that: a) provides a 2D Process Visualizer that generates cross-section images of devices constructed using the SUMMiT V fabrication process; b) provides a 3D Visualizer that generates 3D images of devices constructed using the SUMMiT V fabrication process; c) provides a MEMS 3D Model generator that creates 3D solid models of devices constructed using the SUMMiT V fabrication process. While some files on the CD are used in conjunction with the AutoCAD software package, these files are not intended for use independent of the CD. Note that the customer must purchase his/her own copy of AutoCAD to use with these files.
The Use of UAS for Rapid 3D Mapping in Geomatics Education
NASA Astrophysics Data System (ADS)
Teo, Tee-Ann; Tian-Yuan Shih, Peter; Yu, Sz-Cheng; Tsai, Fuan
2016-06-01
With the development of technology, UAS has become an advanced technology supporting rapid mapping for disaster response. The aim of this study is to develop educational modules for UAS data processing in rapid 3D mapping. The modules designed for this study focus on UAV data processing with freeware or trial software available for educational purposes. The key modules cover orientation modelling, 3D point cloud generation, image georeferencing, and visualization. The orientation modelling module adopts VisualSFM to determine the projection matrix for each image station; in addition, approximate ground control points measured from OpenStreetMap are used for absolute orientation. The second module uses SURE and the orientation files from the previous module for 3D point cloud generation. Ground point selection and digital terrain model generation can then be achieved with LAStools. The third module stitches individual rectified images into a mosaic image using Microsoft ICE (Image Composite Editor). The last module visualizes and measures the generated dense point clouds in CloudCompare. These comprehensive UAS processing modules allow students to gain the skills to process and deliver UAS photogrammetric products in rapid 3D mapping. Moreover, they can also apply the photogrammetric products for analysis in practice.
Geospatial Modelling Approach for 3D Urban Densification Developments
NASA Astrophysics Data System (ADS)
Koziatek, O.; Dragićević, S.; Li, S.
2016-06-01
With growing populations, economic pressures, and the need for sustainable practices, many urban regions are rapidly densifying in the vertical built dimension with mid- and high-rise buildings. The locations of these buildings can be projected based on key factors that are attractive to urban planners, developers, and potential buyers. Current research in this area includes various modelling approaches, such as cellular automata and agent-based modelling, but the results are mostly linked to raster grids as the smallest spatial units and operate in only two spatial dimensions. The objective of this research is therefore to develop a geospatial model that operates on irregular spatial tessellations to model mid- and high-rise buildings in three spatial dimensions (3D). The proposed model is based on the integration of GIS, fuzzy multi-criteria evaluation (MCE), and 3D GIS-based procedural modelling. Part of the City of Surrey, within the Metro Vancouver Region, Canada, has been used to present simulations of the generated 3D building objects. The proposed 3D modelling approach was developed using ESRI's CityEngine software and the Computer Generated Architecture (CGA) language.
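The fuzzy MCE component can be sketched as per-criterion fuzzy membership functions combined by a weighted sum. The criteria, break-points, and weights below are hypothetical illustrations, not the values calibrated in the study:

```python
def fuzzy_linear(value, lo, hi):
    """Linear fuzzy membership: 0 at `lo` or below, 1 at `hi` or above."""
    if hi == lo:
        return 1.0 if value >= hi else 0.0
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

def suitability(memberships, weights):
    """Weighted-sum fuzzy MCE score for one candidate parcel.
    `memberships` and `weights` are aligned lists; weights sum to 1."""
    return sum(m * w for m, w in zip(memberships, weights))

# Hypothetical parcel: proximity to transit (closer is better, so the
# membership is inverted) and permitted building height.
transit = 1.0 - fuzzy_linear(400, 0, 2000)  # 400 m from a station
zoning = fuzzy_linear(12, 4, 20)            # 12 permitted storeys
score = suitability([transit, zoning], [0.6, 0.4])
```

Parcels (irregular tessellation cells) ranked by such a score would then be candidates for procedural generation of mid- or high-rise building geometry.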
Ball, A D; Job, P A; Walker, A E L
2017-08-01
The method we present here uses a scanning electron microscope (SEM) programmed via macros to automatically capture dozens of images at suitable angles to generate accurate, detailed three-dimensional (3D) surface models with micron-scale resolution. We demonstrate that it is possible to use these SEM images in conjunction with commercially available software, originally developed for photogrammetric reconstruction from digital single-lens reflex (DSLR) cameras, to reconstruct 3D models of the specimen. These 3D models can then be exported as polygon meshes and eventually 3D printed. This technique offers the potential to obtain data suitable for reconstructing very tiny features (e.g. diatoms, butterfly scales and mineral fabrics) at nanometre resolution. Ultimately, we foresee this as being a useful tool for better understanding spatial relationships at very high resolution. However, our motivation is also to use it to produce 3D models for public outreach events and exhibitions, especially for the blind or partially sighted. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Nicholson, Daren T; Chalk, Colin; Funnell, W Robert J; Daniel, Sam J
2006-11-01
The use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear. We reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear. The intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001). Our findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.
Kim, Hui Taek; Ahn, Tae Young; Jang, Jae Hoon; Kim, Kang Hee; Lee, Sung Jae; Jung, Duk Young
2017-03-01
Three-dimensional (3D) computed tomography imaging is now being used to generate 3D models for planning orthopaedic surgery, but the process remains time consuming and expensive. For chronic radial head dislocation, we have designed a graphic overlay approach that employs selected 3D computer images and widely available software to simplify the process of osteotomy site selection. We studied 5 patients (2 traumatic and 3 congenital) with unilateral radial head dislocation. These patients were treated with surgery based on traditional radiographs, but full sets of 3D CT images were also acquired both before and after surgery; these 3D CT images form the basis for this study. From the 3D CT images, 3 sets of 3D-printed bone models were generated for each patient: 2 copies of the preoperative condition and 1 copy of the postoperative condition. One set of the preoperative models was then actually osteotomized and fixed in the manner suggested by our graphic technique. The arcs of rotation of the 3 sets of 3D-printed bone models were then compared. The arcs of rotation of the 3 groups of bone models were significantly different, with the models osteotomized according to our graphic technique having the widest arcs. For chronic radial head dislocation, our graphic overlay approach simplifies the selection of the osteotomy site(s). Three-dimensional-printed bone models suggest that this approach could improve the range of motion of the forearm in actual surgical practice. Level IV-therapeutic study.
Identifying novel sequence variants of RNA 3D motifs
Zirbel, Craig L.; Roll, James; Sweeney, Blake A.; Petrov, Anton I.; Pirrung, Meg; Leontis, Neocles B.
2015-01-01
Predicting RNA 3D structure from sequence is a major challenge in biophysics. An important sub-goal is accurately identifying recurrent 3D motifs from RNA internal and hairpin loop sequences extracted from secondary structure (2D) diagrams. We have developed and validated new probabilistic models for 3D motif sequences based on hybrid Stochastic Context-Free Grammars and Markov Random Fields (SCFG/MRF). The SCFG/MRF models are constructed using atomic-resolution RNA 3D structures. To parameterize each model, we use all instances of each motif found in the RNA 3D Motif Atlas and annotations of pairwise nucleotide interactions generated by the FR3D software. Isostericity relations between non-Watson–Crick basepairs are used in scoring sequence variants. SCFG techniques model nested pairs and insertions, while MRF ideas handle crossing interactions and base triples. We use test sets of randomly-generated sequences to set acceptance and rejection thresholds for each motif group and thus control the false positive rate. Validation was carried out by comparing results for four motif groups to RMDetect. The software developed for sequence scoring (JAR3D) is structured to automatically incorporate new motifs as they accumulate in the RNA 3D Motif Atlas when new structures are solved and is available free for download. PMID:26130723
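The threshold-setting step (randomly generated sequences used to control the false positive rate) can be sketched as a percentile cutoff on the score distribution of random sequences. The toy GC-content score below merely stands in for the real SCFG/MRF motif-group score:

```python
import random

def acceptance_threshold(score, alphabet, length, n_random=10000,
                         fpr=0.05, seed=1):
    """Set an acceptance cutoff so that roughly `fpr` of random
    sequences score above it, controlling the false positive rate."""
    rng = random.Random(seed)
    scores = sorted(
        score("".join(rng.choice(alphabet) for _ in range(length)))
        for _ in range(n_random)
    )
    # The (1 - fpr) empirical quantile of the null distribution.
    return scores[int((1.0 - fpr) * n_random) - 1]

# Toy score: fraction of G/C in the sequence.
gc = lambda seq: (seq.count("G") + seq.count("C")) / len(seq)
cutoff = acceptance_threshold(gc, "ACGU", 8, fpr=0.05)
```

A candidate sequence would then be accepted for the motif group only if its score exceeds `cutoff`, so random sequences pass at about the chosen false positive rate.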
Semi-Automatic Building Models and Façade Texture Mapping from Mobile Phone Images
NASA Astrophysics Data System (ADS)
Jeong, J.; Kim, T.
2016-06-01
Research on 3D urban modelling has been actively carried out for a long time, and recently the need for it has increased rapidly due to improved geo-web services and the popularity of smart devices. Current 3D urban models, such as those provided by Google Earth, use aerial photos for modelling, but they have some limitations: models cannot be updated immediately when buildings change, many buildings lack a 3D model and texture, and maintaining and updating the models requires large resources. To resolve these limitations, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images, and we analyze the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated by this method were compared with measurements of real buildings by comparing the ratios of corresponding edge lengths; the results showed an average length-ratio error of 5.8%. With this method, we could generate a simple building model with fine façade textures without expensive dedicated tools and datasets.
Turchini, John; Buckland, Michael E; Gill, Anthony J; Battye, Shane
2018-05-30
Three-dimensional (3D) photogrammetry is a method of image-based modeling in which data points in digital images, taken from offset viewpoints, are analyzed to generate a 3D model. This modeling technique has been widely used in the context of geomorphology and artificial imagery, but has yet to be used within the realm of anatomic pathology. We describe the application of a 3D photogrammetry system capable of producing high-quality 3D digital models, and its uses in routine surgical pathology practice as well as medical education. We modeled specimens received in the 2 participating laboratories. The capture and photogrammetry process was automated using user control software, a digital single-lens reflex camera, and a digital turntable, generating a 3D model with the output in a PDF file. The entity demonstrated in each specimen was well demarcated and easily identified, and adjacent normal tissue could also be easily distinguished. Colors were preserved. The concave shapes of cystic structures and normal convex rounded structures were discernible, and surgically important regions were identifiable. Macroscopic 3D modeling of specimens can thus be achieved with structure-from-motion photogrammetry technology and can be applied quickly and easily in routine laboratory practice. The use of 3D photogrammetry in pathology has numerous advantages, including improved clinicopathologic correlation for the surgeon and enhanced medical education, revolutionizing the digital pathology museum with virtual reality environments and 3D-printed specimen models.
Cone beam computed tomography of plastinated hearts for instruction of radiological anatomy.
Chang, Chih-Wei; Atkinson, Gregory; Gandhi, Niket; Farrell, Michael L; Labrash, Steven; Smith, Alice B; Norton, Neil S; Matsui, Takashi; Lozanoff, Scott
2016-09-01
Radiological anatomy education is an important aspect of the medical curriculum. The purpose of this study was to establish and demonstrate the use of plastinated anatomical specimens, specifically human hearts, in radiological anatomy education. Four human hearts were processed with routine plastination procedures at room temperature. The specimens were subjected to cone beam computed tomography (CBCT), and a graphics program (ER3D) was applied to generate 3D cardiac models. A comparison was conducted between the plastinated hearts and their corresponding computer models based on a list of morphological cardiac features commonly studied in the gross anatomy laboratory. Results showed significant correspondence between plastinations and CBCT-generated 3D models for external structures (98%; p < .01) and 100% correspondence for internal cardiac features, while 85% correspondence was achieved between plastinations and 2D CBCT slices. Complete correspondence (100%) was achieved between key observations on the plastinations and the internal radiological findings typically required of medical students. All pathologic features seen on the plastinated hearts were also visualized internally with the CBCT-generated models and 2D slices. These results suggest that CBCT-derived slices and models can be successfully generated from plastinated material and provide accurate representations for radiological anatomy education.
Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C
2003-01-01
The generation of three-dimensional (3-D) digital models produced by optical technologies can in some cases involve metric errors. This happens when small high-resolution 3-D images are assembled to model a large object. In some applications, such as 3-D modeling of cultural heritage, metric accuracy is a major issue, and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignment of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close-range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. These coordinates, set as reference points, allow the proper rigid motion of a few key range maps, each including a portion of the targets, into the global reference system defined by photogrammetry. The other 3-D images are then aligned to these locked images with the usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment methods, are reported.
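The target-based rigid motion can be sketched in 2D with the closed-form least-squares rotation-and-translation fit between corresponding target coordinates (the paper works in 3D; this 2D slice is illustrative only):

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping `src` points onto
    `dst` (e.g. targets in a range-map frame vs. the photogrammetric
    global frame). Closed form: centre both sets, then recover the
    angle from the cross/dot sums."""
    n = len(src)
    cx, cy = sum(x for x, _ in src) / n, sum(y for _, y in src) / n
    dx, dy = sum(x for x, _ in dst) / n, sum(y for _, y in dst) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        ax, ay = px - cx, py - cy
        bx, by = qx - dx, qy - dy
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    tx = dx - (c * cx - s * cy)
    ty = dy - (s * cx + c * cy)
    apply = lambda p: (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)
    return theta, (tx, ty), apply

# Targets in the range-map frame vs. the photogrammetric frame:
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(2.0, 1.0), (2.0, 2.0), (1.0, 1.0)]  # src rotated 90° CCW, then shifted
theta, t, apply = fit_rigid_2d(src, dst)
```

In 3D the rotation is usually recovered via an SVD (Kabsch/Horn), but the idea is identical: the targets lock a few key range maps into the global frame, and the remaining maps are aligned to them iteratively.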
Efficient 3D porous microstructure reconstruction via Gaussian random field and hybrid optimization.
Jiang, Z; Chen, W; Burkhart, C
2013-11-01
Obtaining an accurate three-dimensional (3D) structure of a porous microstructure is important for assessing material properties based on finite element analysis. Whereas directly obtaining 3D images of the microstructure is impractical under many circumstances, two sets of methods have been developed in the literature to generate (reconstruct) a 3D microstructure from its 2D images: one characterizes the microstructure with certain statistical descriptors, typically the two-point correlation function and the cluster correlation function, and then performs an optimization process to build a 3D structure that matches those descriptors; the other models the microstructure with a stochastic model, such as a Gaussian random field, and generates a 3D structure directly from the function. The former obtains a relatively accurate 3D microstructure, but the optimization process can be computationally very intensive, especially for large image sizes; the latter generates a 3D microstructure quickly but sacrifices accuracy due to issues in the numerical implementation. This paper proposes a hybrid optimization approach to modelling the 3D porous microstructure of random isotropic two-phase materials, which combines the two sets of methods and hence maintains the accuracy of the correlation-based method with improved efficiency. The proposed technique is verified on 3D reconstructions of silica polymer composite images with different volume fractions. A comparison of the reconstructed microstructures and the optimization histories for both the original correlation-based method and our hybrid approach demonstrates the improved efficiency of the approach. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
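The random-field route can be sketched in 2D: smooth white Gaussian noise to introduce spatial correlation, then threshold the field at the quantile that yields the target phase fraction. The box-blur smoothing below is a crude stand-in for a properly parameterized Gaussian random field covariance:

```python
import random

def gaussian_field_microstructure(n, volume_fraction, passes=4, seed=0):
    """Two-phase microstructure from a correlated random field:
    repeated 3-tap box blurs (periodic boundaries) approximately
    Gaussian-smooth white noise, then a quantile threshold sets the
    phase-1 volume fraction. A 2D stand-in for the paper's 3D GRF."""
    rng = random.Random(seed)
    f = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    for _ in range(passes):  # separable blur: rows, then columns
        f = [[(f[i][j] + f[i][(j + 1) % n] + f[i][j - 1]) / 3
              for j in range(n)] for i in range(n)]
        f = [[(f[i][j] + f[(i + 1) % n][j] + f[i - 1][j]) / 3
              for j in range(n)] for i in range(n)]
    flat = sorted(v for row in f for v in row)
    cut = flat[int((1.0 - volume_fraction) * n * n)]  # phase-1 quantile
    return [[1 if v >= cut else 0 for v in row] for row in f]

micro = gaussian_field_microstructure(32, 0.3)
frac = sum(map(sum, micro)) / 32**2  # realized phase-1 fraction
```

The hybrid approach of the paper would use such a field realization as a fast starting point and then refine it by correlation-function matching.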
MOD3D: a model for incorporating MODTRAN radiative transfer into 3D simulations
NASA Astrophysics Data System (ADS)
Berk, Alexander; Anderson, Gail P.; Gossage, Brett N.
2001-08-01
MOD3D, a rapid and accurate radiative transport algorithm, is being developed for application to 3D simulations. MOD3D couples to optical property databases generated by the MODTRAN4 Correlated-k (CK) band model algorithm. The Beer's Law dependence of the CK algorithm provides for proper coupling of illumination and line-of-sight paths. Full 3D spatial effects are modeled by scaling and interpolating optical data to local conditions. A C++ version of MOD3D has been integrated into JMASS for calculation of path transmittances, thermal emission and single scatter solar radiation. Results from initial validation efforts are presented.
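The Beer's-law dependence that allows proper coupling of illumination and line-of-sight paths can be sketched as multiplicative path transmittances. The absorption coefficients below are made-up numbers, not MODTRAN-derived optical properties:

```python
import math

def path_transmittance(segments):
    """Beer's-law transmittance along a path.
    `segments` holds (absorption_coeff_per_km, length_km) pairs;
    the coefficients stand in for band-model optical data scaled
    and interpolated to local conditions."""
    tau = sum(k * ds for k, ds in segments)  # total optical depth
    return math.exp(-tau)

# Beer's law makes separate paths combine multiplicatively, so an
# illumination path and a line-of-sight path couple correctly:
sun_to_point = path_transmittance([(0.2, 3.0), (0.05, 10.0)])
point_to_eye = path_transmittance([(0.1, 5.0)])
combined = sun_to_point * point_to_eye
```

The product of the two path transmittances equals the transmittance of the concatenated path, which is the property the abstract's Correlated-k formulation relies on.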
Accuracy assessment of building point clouds automatically generated from iPhone images
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R.
2014-06-01
Low-cost sensor-generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method that uses multi-view iPhone images or an iPhone video file as input. We register the automatically generated point cloud to a terrestrial laser scanning (TLS) point cloud of the same object to discuss the accuracy, advantages, and limitations of the iPhone-generated point clouds. For the chosen showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean point-to-point distance to the TLS point cloud as 0.11 m. Since a TLS point cloud may also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms were (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate that the proposed automatic 3D model generation framework can be used for 3D urban map updating, fusion, detail enhancement, and quick, real-time change detection. However, further insights are needed into the circumstances required to guarantee successful point cloud generation from smartphone images.
Kaminsky, Jan; Rodt, Thomas; Gharabaghi, Alireza; Forster, Jan; Brand, Gerd; Samii, Madjid
2005-06-01
The FE-modeling of complex anatomical structures has not yet been solved satisfactorily. Voxel-based algorithms, as opposed to contour-based ones, allow an automated mesh generation based on the image data; nonetheless, their geometric precision is limited. We developed an automated mesh generator that combines the advantages of voxel-based generation with an improved representation of the geometry obtained by displacing nodes onto the object surface. Models of an artificial 3D pipe section and a skull base were generated at different mesh densities using the newly developed geometric, unsmoothed, and smoothed voxel generators. Compared with the analytic calculation for the 3D pipe-section model, the normalized RMS error of the surface stress was 0.173-0.647 for the unsmoothed voxel models, 0.111-0.616 for the smoothed voxel models (with a small volume error), and 0.126-0.273 for the geometric models. The highest element-energy error, a criterion for mesh quality, was 2.61 × 10⁻² N mm, 2.46 × 10⁻² N mm, and 1.81 × 10⁻² N mm for the unsmoothed, smoothed, and geometric voxel models, respectively. The geometric model of the 3D skull base resulted in the lowest element-energy error and volume error; this algorithm also allowed the best representation of anatomical details. The presented geometric mesh generator is universally applicable and allows automated and accurate modeling by combining the advantages of the voxel technique with improved surface modeling.
The potential of 3D techniques for cultural heritage object documentation
NASA Astrophysics Data System (ADS)
Bitelli, Gabriele; Girelli, Valentina A.; Remondino, Fabio; Vittuari, Luca
2007-01-01
The generation of 3D models of objects has become an important research topic in many fields of application, such as industrial inspection, robotics, navigation, and body scanning. Recently, techniques for generating photo-textured 3D digital models have also attracted interest in the field of cultural heritage, due to their capability to combine high-precision metric information with a qualitative, photographic description of the objects. Indeed, this kind of product is a fundamental support for the documentation, study, and restoration of works of art, up to the production of replicas by rapid prototyping techniques. Close-range photogrammetric techniques are nowadays more and more frequently used for the generation of precise 3D models. With the advent of automated procedures and fully digital products in the 1990s, photogrammetry has become easier to use and cheaper, and a wide range of commercial software is now available to calibrate, orient, and reconstruct objects from images. This paper presents the complete process for the derivation of a photorealistic 3D model of an important basalt stela (about 70 x 60 x 25 cm) discovered in the archaeological site of Tilmen Höyük, Turkey, dating back to the 2nd millennium BC. We report the modeling performed using passive and active sensors and compare the achieved results.
Bioengineered silk scaffolds in 3D tissue modeling with focus on mammary tissues.
Maghdouri-White, Yas; Bowlin, Gary L; Lemmon, Christopher A; Dréau, Didier
2016-02-01
In vitro generation of three-dimensional (3D) biological tissues and organ-like structures is a promising strategy to study and closely model complex aspects of the molecular, cellular, and physiological interactions of tissue. In particular, in vitro 3D tissue modeling holds promises to further our understanding of breast development. Indeed, biologically relevant 3D structures that combine mammary cells and engineered matrices have improved our knowledge of mammary tissue growth, organization, and differentiation. Several polymeric biomaterials have been used as scaffolds to engineer 3D mammary tissues. Among those, silk fibroin-based biomaterials have many biologically relevant properties and have been successfully used in multiple medical applications. Here, we review the recent advances in engineered scaffolds with an emphasis on breast-like tissue generation and the benefits of modified silk-based scaffolds. Copyright © 2015 Elsevier B.V. All rights reserved.
Wood, Scott T; Dean, Brian C; Dean, Delphine
2013-04-01
This paper presents a novel computer vision algorithm to analyze 3D stacks of confocal images of fluorescently stained single cells. The goal of the algorithm is to create representative in silico model structures that can be imported into finite element analysis software for mechanical characterization. Segmentation of cell and nucleus boundaries is accomplished via standard thresholding methods. Using novel linear programming methods, a representative actin stress fiber network is generated by computing a linear superposition of fibers having minimum discrepancy compared with an experimental 3D confocal image. Qualitative validation is performed through analysis of seven 3D confocal image stacks of adherent vascular smooth muscle cells (VSMCs) grown in 2D culture. The presented method is able to automatically generate 3D geometries of the cell's boundary, nucleus, and representative F-actin network based on standard cell microscopy data. These geometries can be used for direct importation and implementation in structural finite element models for analysis of the mechanics of a single cell to potentially speed discoveries in the fields of regenerative medicine, mechanobiology, and drug discovery. Copyright © 2012 Elsevier B.V. All rights reserved.
Roth, Jeremy A; Wilson, Timothy D; Sandig, Martin
2015-01-01
Histology is a core subject in the anatomical sciences in which learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education, 3D models and learning tools have been associated with improved learning outcomes, but similar tools have not been created for histology education to visualize complex cellular structure-function relationships. This study outlines the steps in creating a virtual 3D model of the renal corpuscle from serial, semi-thin, histological sections obtained from epoxy resin-embedded kidney tissue. The virtual renal corpuscle model was generated by digital segmentation to identify: Bowman's capsule, nuclei of epithelial cells in the parietal capsule, afferent arteriole, efferent arteriole, proximal convoluted tubule, distal convoluted tubule, glomerular capillaries, podocyte nuclei, nuclei of extraglomerular mesangial cells, and nuclei of epithelial cells of the macula densa in the distal convoluted tubule. In addition to displaying the imported images of the original sections, the software generates, and allows for the visualization of, virtual sections in any desired orientation, thus serving as a "virtual microtome". These sections can be viewed separately or together with the 3D model in transparency. This approach allows for the development of interactive e-learning tools designed to enhance histology education on microscopic structures with complex cellular interrelationships. Future studies will focus on testing the efficacy of interactive virtual 3D models for histology education. © 2015 American Association of Anatomists.
A THREE-DIMENSIONAL BABCOCK-LEIGHTON SOLAR DYNAMO MODEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miesch, Mark S.; Dikpati, Mausumi, E-mail: miesch@ucar.edu
We present a three-dimensional (3D) kinematic solar dynamo model in which poloidal field is generated by the emergence and dispersal of tilted sunspot pairs (more generally bipolar magnetic regions, or BMRs). The axisymmetric component of this model functions similarly to previous 2.5-dimensional (2.5D, axisymmetric) Babcock-Leighton (BL) dynamo models that employ a double-ring prescription for poloidal field generation, but we generalize this prescription into a 3D flux emergence algorithm that places BMRs on the surface in response to the dynamo-generated toroidal field. In this way, the model can be regarded as a unification of BL dynamo models (2.5D in radius/latitude) and surface flux transport models (2.5D in latitude/longitude) into a more self-consistent framework that builds on the successes of each while capturing the full 3D structure of the evolving magnetic field. The model reproduces some basic features of the solar cycle including an 11 yr periodicity, equatorward migration of toroidal flux in the deep convection zone, and poleward propagation of poloidal flux at the surface. The poleward-propagating surface flux originates as trailing flux in BMRs, migrates poleward in multiple non-axisymmetric streams (made axisymmetric by differential rotation and turbulent diffusion), and eventually reverses the polar field, thus sustaining the dynamo. In this Letter we briefly describe the model, initial results, and future plans.
Reddy, M V; Eachempati, Krishnakiran; Gurava Reddy, A V; Mugalur, Aakash
2018-01-01
Rapid prototyping (RP) is used widely in dental and faciomaxillary surgery, with anecdotal uses in orthopedics. The purview of RP in orthopedics is vast. However, there is no error analysis reported in the literature on bone models generated using office-based RP. This study evaluates the accuracy of fused deposition modeling (FDM) using standard tessellation language (STL) files and the errors generated during the fabrication of bone models. Nine dry bones were selected and scanned using computed tomography (CT). STL files were procured from the CT scans, and three-dimensional (3D) models of the bones were printed using our in-house FDM-based 3D printer with acrylonitrile butadiene styrene (ABS) filament. Measurements were made on the bones and 3D models according to data collection procedures for forensic skeletal material. Statistical analysis was performed using SPSS version 13.0 software. Inter-observer reliability was established using the intraclass correlation coefficient for both the dry bones and the 3D models. The mean absolute difference was 0.4, which is minimal; the 3D models are comparable to the dry bones. STL-file-dependent FDM using ABS material produces near-anatomical 3D models. This high 3D accuracy holds promise in the clinical scenario for preoperative planning, mock surgery, and the choice of implants and prostheses, especially in complicated acetabular trauma and complex hip surgeries.
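The headline statistic of the study above reduces to a few lines of code. The measurement values below are invented for illustration, not taken from the paper:

```python
# Paired measurements (mm) on dry bones vs. printed models -- sample
# values are hypothetical, not the study's data.
dry_bone = [42.1, 55.0, 31.4, 60.2, 47.8]
printed  = [42.5, 54.6, 31.9, 60.0, 48.3]

# Mean absolute difference between the two sets of measurements,
# the accuracy figure reported in the abstract.
mad = sum(abs(d - p) for d, p in zip(dry_bone, printed)) / len(dry_bone)
```

With these sample values the mean absolute difference happens to equal the 0.4 reported in the abstract, purely for illustration.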
Maschio, Federico; Pandya, Mirali; Olszewski, Raphael
2016-03-22
The objective of this study was to investigate the accuracy of 3-dimensional (3D) plastic (ABS) models generated using a low-cost 3D fused deposition modelling printer. Two human dry mandibles were scanned with a cone beam computed tomography (CBCT) Accuitomo device. Preprocessing consisted of 3D reconstruction with Maxilim software and STL file repair with Netfabb software. Then, the data were used to print 2 plastic replicas with a low-cost 3D fused deposition modeling printer (Up plus 2®). Two independent observers performed the identification of 26 anatomic landmarks on the 4 mandibles (2 dry and 2 replicas) with a 3D measuring arm. Each observer repeated the identifications 20 times. The comparison between the dry and plastic mandibles was based on 13 distances: 8 distances less than 12 mm and 5 distances greater than 12 mm. The mean absolute difference (MAD) was 0.37 mm, and the mean dimensional error (MDE) was 3.76%. The MDE decreased to 0.93% for distances greater than 12 mm. Plastic models generated using the low-cost 3D printer UPplus2® provide dimensional accuracies comparable to other well-established rapid prototyping technologies. Validated low-cost 3D printers could represent a step toward the better accessibility of rapid prototyping technologies in the medical field.
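The two error measures quoted above follow simple definitions; the helper below is a sketch (the function name and sample distances are ours, not the authors'):

```python
def mean_dimensional_error(reference_mm, measured_mm):
    """Mean relative error in percent across paired distances -- the
    MDE of the abstract (mean absolute difference, MAD, is the same
    computation without dividing by the reference distance)."""
    errors = [abs(m - r) / r * 100.0
              for r, m in zip(reference_mm, measured_mm)]
    return sum(errors) / len(errors)
```

Because the error is relative, a fixed absolute printing error produces a smaller MDE on longer distances, which is consistent with the drop from 3.76% overall to 0.93% for distances greater than 12 mm reported above.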
3D Modelling of an Indoor Space Using a Rotating Stereo Frame Camera System
NASA Astrophysics Data System (ADS)
Kang, J.; Lee, I.
2016-06-01
Sophisticated indoor design and growing development in urban architecture are making indoor spaces more complex, and these spaces are easily connected to public transportation such as subway and train stations. These phenomena allow outdoor activities to be transferred to indoor spaces. Constant development of technology has a significant impact on people's awareness of services such as location-awareness services in indoor spaces. Thus, it is necessary to develop a low-cost system to create 3D models of indoor spaces for services based on indoor models. In this paper, we therefore introduce a rotating stereo frame camera system that has two cameras, and we generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day with different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of data and choose several suitable combinations as input data. Next, we generated the 3D model of the test site using commercial software with the previously chosen input data. The last part of the process was to evaluate the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and generate a 3D model using images acquired by the system. Through these experiments, we show that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.
Generation and use of human 3D-CAD models
NASA Astrophysics Data System (ADS)
Grotepass, Juergen; Speyer, Hartmut; Kaiser, Ralf
2002-05-01
Individualized products are one of the ten mega trends of the 21st century, with human modeling as the key issue for tomorrow's design and product development. The use of human modeling software for computer-based ergonomic simulations within the production process increases quality while reducing costs by 30-50 percent and shortening production time. This presentation focuses on the use of human 3D-CAD models for both the ergonomic design of working environments and made-to-measure garment production. Today, the entire production chain can be designed, and individualized models generated and analyzed, in 3D computer environments. Anthropometric design for ergonomics is matched to human needs, thus preserving health. Ergonomic simulation includes topics such as human vision, reachability, kinematics, force and comfort analysis, and international design capabilities. In Germany, more than 17 billion marks flow to other industries because clothes do not fit. Individual clothing tailored to the customer's preference means surplus value, pleasure, and perfect fit. Body scanning technology is the key to the generation and use of human 3D-CAD models for both the ergonomic design of working environments and made-to-measure garment production.
A 3D human neural cell culture system for modeling Alzheimer’s disease
Kim, Young Hye; Choi, Se Hoon; D’Avanzo, Carla; Hebisch, Matthias; Sliwinski, Christopher; Bylykbashi, Enjana; Washicosky, Kevin J.; Klee, Justin B.; Brüstle, Oliver; Tanzi, Rudolph E.; Kim, Doo Yeon
2015-01-01
Stem cell technologies have facilitated the development of human cellular disease models that can be used to study pathogenesis and test therapeutic candidates. These models hold promise for complex neurological diseases such as Alzheimer’s disease (AD) because existing animal models have been unable to fully recapitulate all aspects of pathology. We recently reported the characterization of a novel three-dimensional (3D) culture system that exhibits key events in AD pathogenesis, including extracellular aggregation of β-amyloid and accumulation of hyperphosphorylated tau. Here we provide instructions for the generation and analysis of 3D human neural cell cultures, including the production of genetically modified human neural progenitor cells (hNPCs) with familial AD mutations, the differentiation of the hNPCs in a 3D matrix, and the analysis of AD pathogenesis. The 3D culture generation takes 1–2 days. The aggregation of β-amyloid is observed after 6-weeks of differentiation followed by robust tau pathology after 10–14 weeks. PMID:26068894
NASA Astrophysics Data System (ADS)
Verhoeven, G. J.
2017-08-01
In recent years, structure-from-motion and multi-view stereo pipelines have become omnipresent in the cultural heritage domain. The fact that such Image-Based Modelling (IBM) approaches are capable of providing a photo-realistic texture along with the three-dimensional (3D) digital surface geometry is often considered a unique selling point, certainly for those cases that aim for a visually pleasing result. However, this texture can very often also obscure the underlying geometrical details of the surface, making it very hard to assess the morphological features of the digitised artefact or scene. Instead of constantly switching between the textured and untextured versions of the 3D surface model, this paper presents a new method to generate a morphology-enhanced colour texture for the 3D polymesh. The presented approach tries to overcome this switching between object visualisations by fusing the original colour texture data with a specific depiction of the surface normals. Whether applied to the original 3D surface model or a low-resolution derivative, this newly generated texture does not solely convey the colours in a proper way but also enhances the small- and large-scale spatial and morphological features that are hard or impossible to perceive in the original textured model. In addition, the technique is very useful for low-end 3D viewers, since no additional memory and computing capacity are needed to convey relief details properly. Apart from simple visualisation purposes, the textured 3D models are now also better suited for on-surface interpretative mapping and the generation of line drawings.
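A minimal sketch of the texture-fusion idea, assuming a per-pixel blend of the colour texture with Lambertian shading of the surface normals; the paper's actual fusion operator and normal depiction may differ:

```python
import numpy as np

def fuse_texture_with_normals(texture, normals, light=(0.0, 0.0, 1.0),
                              blend=0.5):
    """Blend an RGB texture (H, W, 3) with Lambertian shading of the
    per-pixel surface normals (H, W, 3, unit length) so relief detail
    shows through the colour.  Function name, light direction, and
    blend weight are illustrative assumptions."""
    l = np.asarray(light, float)
    l /= np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, 1.0)             # (H, W) in [0, 1]
    return (1.0 - blend) * texture + blend * shading[..., None] * texture
```

Baking such a blend into the texture is what makes the relief visible even in viewers with no per-pixel lighting, matching the "low-end 3D viewers" point above.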
NASA Astrophysics Data System (ADS)
Tsinnajinnie, L.; Frisbee, M. D.; Wilson, J. L.
2017-12-01
A conceptual model of hydrostratigraphic and structural influences on 3D streamflow generation processes is tested in the Whiskey Creek watershed, located in the Chuska Mountains of the Navajo Nation along the northern NM/AZ border. The role of hydrostratigraphy and structure in groundwater processes has been well studied; however, the influence of heterogeneity due to the geologic structure and stratigraphy of mountain blocks on 3D streamflow generation has received less attention. Three-dimensional flow in mountainous watersheds, such as Saguache Creek (CO) and Rio Hondo (NM), contributes significant amounts of groundwater from deep circulation to streamflow. This fully 3D conceptual model is fundamentally different from watersheds characterized as 2D, those dominated by surface and shallow subsurface runoff, because 3D watersheds can have much longer flowpaths and mean residence times (up to thousands of years). In contrast to Saguache Creek (volcanic bedrock) and Rio Hondo (crystalline metamorphic), the bedrock geology of the watersheds draining the Chuska Mountains is composed primarily of sedimentary bedrock capped by extrusive volcanics. We test this conceptual model using a combination of stream gauging, tritium analyses, and end-member mixing analysis (EMMA) on the general ion chemistry and stable isotope composition of water samples collected in 2013-2016. Springs that emerge from the Chuska Sandstone are tritium-dead, indicative of a large component of pre-bomb-pulse water in discharge and deeper 3D flow. EMMA indicates that most streamflow is generated from groundwater emerging from the Chuska Sandstone. Gaining/losing conditions in Whiskey Creek are strongly related to hydrostratigraphy, as evidenced by a transition from gaining conditions largely found in the Chuska Sandstone to losing conditions where the underlying Chinle Formation outcrops.
Although tritium in Whiskey Creek suggests that 3D interactions are present, hydrostratigraphic and structural controls may limit the occurrence of longer residence times and longer flow paths. Mountainous watersheds that fit this hydrostratigraphically and structurally controlled 3D model will exhibit different responses to perturbations, such as climate change, than watersheds that fit existing 2D and 3D conceptual models.
NASA Astrophysics Data System (ADS)
Themistocleous, K.; Agapiou, A.; Hadjimitsis, D.
2016-10-01
The documentation of architectural cultural heritage sites has traditionally been expensive and labor-intensive. New innovative technologies, such as Unmanned Aerial Vehicles (UAVs), provide an affordable, reliable, and straightforward method of capturing cultural heritage sites, thereby providing a more efficient and sustainable approach to the documentation of cultural heritage structures. In this study, hundreds of images of the Panagia Chryseleousa church in Foinikaria, Cyprus were taken using a UAV with an attached high-resolution camera. The images were processed to generate an accurate digital 3D model using Structure from Motion techniques. A Building Information Model (BIM) was then used to generate drawings of the church. The methodology described in the paper provides an accurate, simple, and cost-effective method of documenting cultural heritage sites and generating digital 3D models.
Newe, Axel
2015-01-01
The Portable Document Format (PDF) allows for embedding three-dimensional (3D) models and is therefore particularly suitable to communicate respective data, especially as regards scholarly articles. The generation of the necessary model data, however, is still challenging, especially for inexperienced users. This prevents an unrestrained proliferation of 3D PDF usage in scholarly communication. This article introduces a new solution for the creation of three of types of 3D geometry (point clouds, polylines and triangle meshes), that is based on MeVisLab, a framework for biomedical image processing. This solution enables even novice users to generate the model data files without requiring programming skills and without the need for an intensive training by simply using it as a conversion tool. Advanced users can benefit from the full capability of MeVisLab to generate and export the model data as part of an overall processing chain. Although MeVisLab is primarily designed for handling biomedical image data, the new module is not restricted to this domain. It can be used for all scientific disciplines.
Automatic 3D virtual scenes modeling for multisensors simulation
NASA Astrophysics Data System (ADS)
Latger, Jean; Le Goff, Alain; Cathala, Thierry; Larive, Mathieu
2006-05-01
SEDRIS, which stands for Synthetic Environment Data Representation and Interchange Specification, is a DoD/DMSO initiative to federate and make interoperable 3D mock-ups in the context of virtual reality and simulation. This paper shows an original application of the SEDRIS concept to research physical multi-sensor simulation, whereas SEDRIS is more classically known for training simulation. CHORALE (simulated Optronic Acoustic Radar battlefield) is used by the French DGA/DCE (Directorate for Test and Evaluation of the French Ministry of Defense) to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multi-spectral 3D scenes and to generate the physical signal received by a sensor, typically an IR sensor. In the scope of this CHORALE workshop, the French DGA has decided to introduce a new SEDRIS-based 3D terrain modeling tool that enables automatic creation of 3D databases directly usable by the CHORALE physical sensor simulation renderers. This AGETIM tool turns geographical source data (including GIS facilities) into meshed geometry enhanced with the sensors' physical extensions, fitted to the ray-tracing rendering of CHORALE for the infrared, electromagnetic, and acoustic spectra. The basic idea is to enhance the 2D source level directly with the physical data, rather than enhancing the 3D meshed level, which is more efficient (rapid database generation) and more reliable (the database can be regenerated many times, changing only some parameters). The paper concludes with the latest evolution of AGETIM in the scope of mission rehearsal for urban warfare using sensors. This evolution includes indoor modeling for automatic generation of the inner parts of buildings.
U.S. Geological Survey: A synopsis of Three-dimensional Modeling
Jacobsen, Linda J.; Glynn, Pierre D.; Phelps, Geoff A.; Orndorff, Randall C.; Bawden, Gerald W.; Grauch, V.J.S.
2011-01-01
The U.S. Geological Survey (USGS) is a multidisciplinary agency that provides assessments of natural resources (geological, hydrological, biological), the disturbances that affect those resources, and the disturbances that affect the built environment, natural landscapes, and human society. Until now, USGS map products have been generated and distributed primarily as 2-D maps, occasionally providing cross sections or overlays, but rarely allowing the ability to characterize and understand 3-D systems, how they change over time (4-D), and how they interact. And yet, technological advances in monitoring natural resources and the environment, the ever-increasing diversity of information needed for holistic assessments, and the intrinsic 3-D/4-D nature of the information obtained increase the need to generate, verify, analyze, interpret, confirm, store, and distribute scientific information and products using 3-D/4-D visualization, analysis, modeling tools, and information frameworks. Today, USGS scientists use 3-D/4-D tools to (1) visualize and interpret geological information, (2) verify the data, and (3) verify their interpretations and models. 3-D/4-D visualization can be a powerful quality-control tool in the analysis of large, multidimensional data sets. USGS scientists use 3-D/4-D technology for 3-D surface (i.e., 2.5-D) visualization as well as for 3-D volumetric analyses. Examples of geological mapping in 3-D include characterization of the subsurface for resource assessments, such as aquifer characterization in the central United States, and for input into process models, such as seismic hazards in the western United States.
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids adverse psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing, and integrating 3D content involving virtual and real scenes is required. This paper presents, for the first time, the procedures, factors, and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
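The SSD-based depth-from-disparity step can be illustrated with a toy block matcher. This greyscale, single-baseline version is an assumption-laden sketch; the paper's colour SSD additionally sums differences over the RGB channels, and its multiple-baseline method aggregates costs over several image pairs:

```python
import numpy as np

def ssd_disparity(left, right, x, y, half=1, max_disp=5):
    """Disparity at pixel (x, y): minimise the sum of squared
    differences between a (2*half+1)^2 window in the left image and
    windows shifted by candidate disparities in the right image."""
    win = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        if x - d - half < 0:                 # window would leave the image
            break
        cand = right[y - half:y + half + 1,
                     x - d - half:x - d + half + 1].astype(float)
        cost = np.sum((win - cand) ** 2)
        if cost < best_cost:
            best, best_cost = d, cost
    return best
```

With a synthetic right image that is the left image shifted by two pixels, the matcher recovers a disparity of 2 at interior pixels.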
2014-03-01
information modeling guide series: 03—GSA BIM guide for 3D imaging (Ver. 1). Retrieved from http://www.gsa.gov/graphics/pbs/GSA_BIM_Guide_Series_03... model during a KVA knowledge audit at FRC San Diego. The information used in the creation of his KVA models was generated from the SME-provided... Kenney then used the information gathered during SME interviews to reengineer the process to include 3D printing to form his “to-be” model. The
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.
2011-12-01
A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.
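The kind of local sensitivity coefficient such analyses compute can be sketched with a one-at-a-time central finite difference; the callable `model` and the parameter values below are placeholders for the actual reactive transport code, not part of the study:

```python
def sensitivity(model, params, i, rel_step=1e-2):
    """Scaled local sensitivity of a model output to parameter i,
    via central finite differences: p_i * dY/dp_i.  `model` is any
    callable taking a parameter list and returning a scalar output
    (e.g. a simulated concentration at one point in space and time)."""
    p = list(params)
    h = abs(p[i]) * rel_step or rel_step   # fall back to rel_step at p_i == 0
    p[i] += h
    up = model(p)
    p[i] -= 2 * h
    down = model(p)
    return params[i] * (up - down) / (2 * h)
```

Scaling by the parameter value makes coefficients comparable across parameters of different magnitudes, which is how parameters with the greatest potential contribution to predictive uncertainty are ranked.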
Automatic Reconstruction of Spacecraft 3D Shape from Imagery
NASA Astrophysics Data System (ADS)
Poelman, C.; Radtke, R.; Voorhees, H.
We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
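The silhouette-carving step above can be sketched as a viewpoint-consistency test over a voxel grid. The orthographic projections and hand-built silhouettes below are a toy stand-in for the calibrated cameras that the motion solution provides:

```python
from itertools import product

def carve(silhouettes, project, grid):
    """Keep only the voxels whose projection lands inside every view's
    silhouette (viewpoint consistency).  `project(v, k)` maps voxel v
    to a pixel in view k; each silhouette is a set of occupied pixels."""
    return [v for v in grid
            if all(project(v, k) in sil for k, sil in enumerate(silhouettes))]

# Toy scene: a 2x2x2 voxel grid seen from two orthographic views.
# View 0 looks down the z-axis (pixels are (x, y)); view 1 looks down
# the y-axis (pixels are (x, z)).
views = [{(0, 0), (1, 0)},   # silhouette in view 0
         {(0, 0)}]           # silhouette in view 1
project = lambda v, k: (v[0], v[1]) if k == 0 else (v[0], v[2])
solid = carve(views, project, list(product((0, 1), repeat=3)))
```

Only voxels consistent with both silhouettes survive, which is why the carved volume is denser than the sparse feature point cloud from factorization.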
Applications of 3D printing in cardiovascular diseases.
Giannopoulos, Andreas A; Mitsouras, Dimitris; Yoo, Shi-Joon; Liu, Peter P; Chatzizisis, Yiannis S; Rybicki, Frank J
2016-12-01
3D-printed models fabricated from CT, MRI, or echocardiography data provide the advantage of haptic feedback, direct manipulation, and enhanced understanding of cardiovascular anatomy and underlying pathologies. Reported applications of cardiovascular 3D printing span from diagnostic assistance and optimization of management algorithms in complex cardiovascular diseases, to planning and simulating surgical and interventional procedures. The technology has been used in practically the entire range of structural, valvular, and congenital heart diseases, and the added-value of 3D printing is established. Patient-specific implants and custom-made devices can be designed, produced, and tested, thus opening new horizons in personalized patient care and cardiovascular research. Physicians and trainees can better elucidate anatomical abnormalities with the use of 3D-printed models, and communication with patients is markedly improved. Cardiovascular 3D bioprinting and molecular 3D printing, although currently not translated into clinical practice, hold revolutionary potential. 3D printing is expected to have a broad influence in cardiovascular care, and will prove pivotal for the future generation of cardiovascular imagers and care providers. In this Review, we summarize the cardiovascular 3D printing workflow, from image acquisition to the generation of a hand-held model, and discuss the cardiovascular applications and the current status and future perspectives of cardiovascular 3D printing.
Plot Scale Factor Models for Standard Orthographic Views
ERIC Educational Resources Information Center
Osakue, Edward E.
2007-01-01
Geometric modeling provides graphic representations of real or abstract objects. Realistic representation requires three dimensional (3D) attributes since natural objects have three principal dimensions. CAD software gives the user the ability to construct realistic 3D models of objects, but often prints of these models must be generated on two…
NASA Astrophysics Data System (ADS)
Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.
2017-05-01
These studies have been conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were executed. In the research, the OSM Bundler, VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
Joint sparse learning for 3-D facial expression generation.
Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Bu, Jiajun
2013-08-01
3-D facial expression generation, including synthesis and retargeting, has received intensive attention in recent years because it is important to produce realistic 3-D faces with specific expressions in modern film production and computer games. In this paper, we present joint sparse learning (JSL) to learn mapping functions and their respective inverses to model the relationship between high-dimensional 3-D faces (of different expressions and identities) and their corresponding low-dimensional representations. Based on JSL, we can effectively and efficiently generate various expressions of a 3-D face by either synthesizing or retargeting. Furthermore, JSL is able to restore 3-D faces with holes by learning a mapping function between incomplete and intact data. Experimental results on a wide range of 3-D faces demonstrate the effectiveness of the proposed approach by comparing with representative ones in terms of quality, time cost, and robustness.
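As a rough illustration of learning paired mapping functions between face data and a low-dimensional representation, here is a dense least-squares stand-in; JSL itself imposes sparsity on the maps, which this sketch deliberately omits:

```python
import numpy as np

def learn_maps(high, low):
    """Fit a linear map high -> low and a companion map low -> high by
    ordinary least squares.  `high` is an (n_samples, d_high) matrix of
    face vectors, `low` an (n_samples, d_low) matrix of codes; both are
    placeholders for real face data."""
    fwd, *_ = np.linalg.lstsq(high, low, rcond=None)   # high @ fwd ~ low
    inv, *_ = np.linalg.lstsq(low, high, rcond=None)   # low @ inv ~ high
    return fwd, inv
```

With noiseless synthetic data the forward map reproduces the codes exactly; the companion map is only a least-squares reconstruction when the code discards information, which is where sparsity constraints and nonlinearity earn their keep.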
3D Building Reconstruction by Multiview Images and the Integrated Application with Augmented Reality
NASA Astrophysics Data System (ADS)
Hwang, Jin-Tsong; Chu, Ting-Chen
2016-10-01
This study presents an approach wherein photographs with a high degree of overlap are captured using a digital camera and used to generate three-dimensional (3D) point clouds via feature point extraction and matching. To reconstruct a building model, an unmanned aerial vehicle (UAV) is used to capture photographs from vertical shooting angles above the building. Multiview images are taken from the ground to eliminate the shielding effect on UAV images caused by trees. Point clouds from the UAV and multiview images are generated via Pix4Dmapper. By merging the two sets of point clouds via tie points, the complete building model is reconstructed. The 3D models are reconstructed using AutoCAD 2016 to generate vectors from the point clouds; SketchUp Make 2016 is used to rebuild a complete building model with textures. To apply 3D building models in urban planning and design, a modern approach is to rebuild the digital models; however, replacing the landscape design and building distribution in real time is difficult as the frequency of building replacement increases. One potential solution to these problems is augmented reality (AR). Using Unity3D and Vuforia to design and implement a smartphone application service, a markerless AR view of the building model can be built. This study is aimed at providing technical and design skills related to urban planning, urban design, and building information retrieval using AR.
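Merging two point clouds via tie points amounts to estimating a rigid transform between the shared points. A standard Kabsch/Procrustes sketch follows (this is the generic algorithm, not the Pix4Dmapper internals):

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t taking tie points `src`
    onto `dst` (Kabsch algorithm).  The (n, 3) tie-point arrays stand
    in for the common points shared by two point clouds."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Applying `p @ R.T + t` to every point of the source cloud brings it into the destination cloud's frame; real pipelines additionally estimate a scale factor when the clouds are not metrically consistent.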
NASA Astrophysics Data System (ADS)
Li, W.; Su, Y.; Harmon, T. C.; Guo, Q.
2013-12-01
Light Detection and Ranging (lidar) is an optical remote sensing technology that measures properties of scattered light to find the range and/or other information of a distant object. Due to its ability to generate 3-dimensional data with high spatial resolution and accuracy, lidar technology is being increasingly used in ecology, geography, geology, geomorphology, seismology, remote sensing, and atmospheric physics. In this study, we construct a 3-dimensional (3D) radiative transfer model (RTM) using lidar data to simulate the spatial distribution of solar radiation (direct and diffuse) on the surface of water and mountain forests. The model includes three sub-models: a light model simulating the light source, a sensor model simulating the camera, and a scene model simulating the landscape. We use ground-based and airborne lidar data to characterize the 3D structure of the study area and generate a detailed 3D scene model. The interactions between light and objects are simulated using the Monte Carlo Ray Tracing (MCRT) method. A large number of rays are generated from the light source. For each individual ray, the full traveling path is traced until it is absorbed or escapes from the scene boundary. By locating the sensor at different positions and directions, we can simulate the spatial distribution of solar energy at the ground, vegetation, and water surfaces. These outputs can then be incorporated into meteorological drivers for hydrologic and energy balance models to improve our understanding of hydrologic processes and ecosystem functions.
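The MCRT idea reduces to a toy example: launch many rays, test each against the scene, and count where they land. Here the "scene" is a 1-D ground strip shaded by one opaque slab, a drastic simplification of the lidar-derived 3-D forest the model actually traces:

```python
import random

def ground_hit_fraction(n_rays, block=(0.3, 0.6), seed=42):
    """Monte Carlo estimate of the fraction of straight-down sun rays
    that reach a unit-length ground strip shadowed by one opaque slab
    whose footprint spans `block`.  The real MCRT traces each ray's
    full path, including scattering, until absorption or escape."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_rays)
               if not (block[0] <= rng.random() <= block[1]))
    return hits / n_rays
```

With the slab shadowing 30% of the strip, the estimate converges toward 0.7 as the ray count grows, illustrating why a large number of rays is needed for smooth irradiance maps.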
Patient-specific indirectly 3D printed mitral valves for pre-operative surgical modelling
NASA Astrophysics Data System (ADS)
Ginty, Olivia; Moore, John; Xia, Wenyao; Bainbridge, Dan; Peters, Terry
2017-03-01
Significant mitral valve regurgitation affects over 2% of the population. Over the past few decades, mitral valve (MV) repair has become the preferred treatment option, producing better patient outcomes than MV replacement, but requiring more expertise. Recently, 3D printing has been used to assist surgeons in planning optimal treatments for complex surgery, thus increasing the experience of surgeons and the success of MV repairs. However, while commercially available 3D printers are capable of printing soft, tissue-like material, they cannot replicate the demanding combination of echogenicity, physical flexibility and strength of the mitral valve. In this work, we propose the use of trans-esophageal echocardiography (TEE) 3D image data and inexpensive 3D printing technology to create patient specific mitral valve models. Patient specific 3D TEE images were segmented and used to generate a profile of the mitral valve leaflets. This profile was 3D printed and integrated into a mold to generate a silicone valve model that was placed in a dynamic heart phantom. Our primary goal is to use silicone models to assess different repair options prior to surgery, in the hope of optimizing patient outcomes. As a corollary, a database of patient specific models can then be used as a trainer for new surgeons, using a beating heart simulator to assess success. The current work reports preliminary results, quantifying basic morphological properties. The models were assessed using 3D TEE images, as well as 2D and 3D Doppler images for comparison to the original patient TEE data.
FacetModeller: Software for manual creation, manipulation and analysis of 3D surface-based models
NASA Astrophysics Data System (ADS)
Lelièvre, Peter G.; Carter-McAuslan, Angela E.; Dunham, Michael W.; Jones, Drew J.; Nalepa, Mariella; Squires, Chelsea L.; Tycholiz, Cassandra J.; Vallée, Marc A.; Farquharson, Colin G.
2018-01-01
The creation of 3D models is commonplace in many disciplines. Models are often built from a collection of tessellated surfaces. To apply numerical methods to such models it is often necessary to generate a mesh of space-filling elements that conforms to the model surfaces. While there are meshing algorithms that can do so, they place restrictive requirements on the surface-based models that are rarely met by existing 3D model building software. Hence, we have developed a Java application named FacetModeller, designed for efficient manual creation, modification and analysis of 3D surface-based models destined for use in numerical modelling.
Virtual 3d City Modeling: Techniques and Applications
NASA Astrophysics Data System (ADS)
Singh, S. P.; Jain, K.; Mandla, V. R.
2013-08-01
A 3D city model is a digital representation of the Earth's surface and related urban objects such as buildings, trees, vegetation, and other man-made features. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is essentially a computerized or digital model of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Three main Geomatics approaches are generally used for virtual 3D city model generation: the first uses conventional techniques such as vector map data, DEMs, and aerial images; the second is based on high-resolution satellite images with laser scanning; and the third uses terrestrial images through close-range photogrammetry with DSM and texture mapping. This paper starts with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and the other based on data input techniques (photogrammetry and laser techniques). After a detailed study, conclusions are drawn, along with a short justification, analysis, and discussion of present trends in 3D city modeling. The paper gives an overview of techniques for the generation of virtual 3D city models using Geomatics techniques and of the applications of such models. Photogrammetry (close-range, aerial, and satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques plays a major role in creating a virtual 3D city model. Every technique and method has advantages and drawbacks. The point cloud model is a modern trend for virtual 3D city modeling.
Photo-realistic, scalable, geo-referenced virtual 3D city models are very useful for many kinds of applications, such as navigation planning, tourism, disaster management, transportation, municipal administration, urban environmental management, and the real-estate industry. The construction of virtual 3D city models has therefore been one of the most interesting research topics of recent years.
NASA Astrophysics Data System (ADS)
Bittner, K.; d'Angelo, P.; Körner, M.; Reinartz, P.
2018-05-01
Three-dimensional building reconstruction from remote sensing imagery is one of the most difficult and important 3D modeling problems for complex urban environments. The main data sources in remote sensing that provide a digital representation of the Earth's surface and related natural, cultural, and man-made objects of urban areas are digital surface models (DSMs). DSMs can be obtained either by light detection and ranging (LIDAR), by SAR interferometry, or from stereo images. Our approach relies on automatic global 3D building shape refinement from stereo DSMs using deep learning techniques. This refinement is necessary because DSMs extracted from image matching point clouds suffer from occlusions, outliers, and noise. Though most previous works have shown promising results for building modeling, this topic remains an open research area. We present a new methodology which not only generates images with continuous values representing the elevation models but, at the same time, enhances the 3D object shapes, buildings in our case. Specifically, we train a conditional generative adversarial network (cGAN) to generate accurate LIDAR-like DSM height images from the noisy stereo DSM input. The obtained results demonstrate the strong potential of creating large-area remote sensing depth images in which the buildings exhibit better-quality shapes and roof forms.
Carreau, Joseph H; Bastrom, Tracey; Petcharaporn, Maty; Schulte, Caitlin; Marks, Michelle; Illés, Tamás; Somoskeöy, Szabolcs; Newton, Peter O
2014-03-01
Reproducibility study of SterEOS 3-dimensional (3D) software in large, idiopathic scoliosis (IS) spinal curves. To determine the accuracy and reproducibility of various 3D, software-generated radiographic measurements acquired from a 2-dimensional (2D) imaging system. SterEOS software allows a user to reconstruct a 3D spinal model from an upright, biplanar, low-dose, X-ray system. The validity and internal consistency of this system have not been tested in large IS curves. EOS images from 30 IS patients with curves greater than 50° were collected for analysis. Three observers blinded to the study protocol conducted repeated, randomized, manual 2D measurements, and 3D software generated measurements from biplanar images acquired from an EOS Imaging system. Three-dimensional measurements were repeated using both the Full 3D and Fast 3D guided processes. A total of 180 (120 3D and 60 2D) sets of measurements were obtained of coronal (Cobb angle) and sagittal (T1-T12 and T4-T12 kyphosis; L1-S1 and L1-L5; and pelvic tilt, pelvic incidence, and sacral slope) parameters. Intra-class correlation coefficients were compared, as were the calculated differences in values generated by SterEOS 3D software and manual 2D measurements. The 95% confidence intervals of the mean differences in measures were calculated as an estimate of reproducibility. Average intra-class correlation coefficients were excellent: 0.97, 0.97, and 0.93 for Full 3D, Fast 3D, and 2D measures, respectively (p = .11). Measurement errors for some sagittal measures were significantly lower with the 3D techniques. Both the Full 3D and Fast 3D techniques provided consistent measurements of axial plane vertebral rotation. SterEOS 3D reconstruction spine software creates reproducible measurements in all 3 planes of deformity in curves greater than 50°. Advancements in 3D scoliosis imaging are expected to improve our understanding and treatment of idiopathic scoliosis. Copyright © 2014 Scoliosis Research Society. 
Published by Elsevier Inc. All rights reserved.
Effective visibility analysis method in virtual geographic environment
NASA Astrophysics Data System (ADS)
Li, Yi; Zhu, Qing; Gong, Jianhua
2008-10-01
Visibility analysis in virtual geographic environments has broad applications in many aspects of social life, but in practical use its efficiency and accuracy need to be improved, and the restrictions of human vision must be considered. This paper first introduces a highly efficient 3D data modeling method that generates and organizes 3D model data using R-tree and LOD techniques. A new visibility algorithm is then presented that realizes real-time viewshed calculation, accounting for the sheltering effects of the DEM and 3D building models and for the restrictions of the human eye in viewshed generation. Finally, an experiment demonstrates that the visibility analysis is fast and accurate enough to meet the demands of digital city applications.
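The real-time viewshed idea above can be illustrated with the classic rising-angle test along a single DEM transect: a cell is visible when its sight-line slope from the viewer exceeds every slope encountered before it. The function below is a generic sketch with unit cell spacing, not the paper's algorithm:

```python
def visible_along_profile(elev, viewer_height=1.7):
    """Viewshed along a 1-D DEM transect. The viewer stands on cell 0;
    cell i is visible when the slope of the sight line to it is at
    least the maximum slope seen so far (otherwise it is sheltered)."""
    eye = elev[0] + viewer_height
    visible = [True]                 # the viewer's own cell
    max_slope = float("-inf")
    for i in range(1, len(elev)):
        slope = (elev[i] - eye) / i  # rise over run, unit cell size
        visible.append(slope >= max_slope)
        max_slope = max(max_slope, slope)
    return visible
```

A full 2-D viewshed repeats this test along rays from the viewer to every cell; the paper additionally shelters rays with 3D building models.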
Algebraic Structure of tt * Equations for Calabi-Yau Sigma Models
NASA Astrophysics Data System (ADS)
Alim, Murad
2017-08-01
The tt* equations define a flat connection on the moduli spaces of 2d, N = 2 quantum field theories. For conformal theories with c = 3d, which can be realized as nonlinear sigma models into Calabi-Yau d-folds, this flat connection is equivalent to special geometry for threefolds and to its analogs in other dimensions. We show that the non-holomorphic content of the tt* equations, restricted to the conformal directions, in the cases d = 1, 2, 3 is captured in terms of finitely many generators of special functions, which close under derivatives. The generators are understood as coordinates on a larger moduli space. This space parameterizes a freedom in choosing representatives of the chiral ring while preserving a constant topological metric. Geometrically, the freedom corresponds to a choice of forms on the target space respecting the Hodge filtration and having a constant pairing. Linear combinations of vector fields on that space are identified with the generators of a Lie algebra. This Lie algebra replaces the non-holomorphic derivatives of tt* and provides these with a finer and algebraic meaning. For sigma models into lattice-polarized K3 manifolds, the differential ring of special functions on the moduli space is constructed, extending known structures for d = 1 and 3. The generators of the differential rings of special functions are given by quasi-modular forms for d = 1 and their generalizations in d = 2, 3. Some explicit examples are worked out, including the case of the mirror of the quartic in P^3, where, due to further algebraic constraints, the differential ring coincides with quasi-modular forms.
3D-Lab: a collaborative web-based platform for molecular modeling.
Grebner, Christoph; Norrby, Magnus; Enström, Jonatan; Nilsson, Ingemar; Hogner, Anders; Henriksson, Jonas; Westin, Johan; Faramarzi, Farzad; Werner, Philip; Boström, Jonas
2016-09-01
The use of 3D information has shown impact in numerous applications in drug design. However, it is often under-utilized and traditionally limited to specialists. We want to change that, and present an approach making 3D information and molecular modeling accessible and easy-to-use 'for the people'. A user-friendly and collaborative web-based platform (3D-Lab) for 3D modeling, including a blazingly fast virtual screening capability, was developed. 3D-Lab provides an interface to automatic molecular modeling, like conformer generation, ligand alignments, molecular dockings and simple quantum chemistry protocols. 3D-Lab is designed to be modular, and to facilitate sharing of 3D-information to promote interactions between drug designers. Recent enhancements to our open-source virtual reality tool Molecular Rift are described. The integrated drug-design platform allows drug designers to instantaneously access 3D information and readily apply advanced and automated 3D molecular modeling tasks, with the aim to improve decision-making in drug design projects.
NASA Astrophysics Data System (ADS)
Nissen-Meyer, T.; Luo, Y.; Morency, C.; Tromp, J.
2008-12-01
Seismic-wave propagation in exploration-industry settings has seen major research and development efforts for decades, yet large-scale applications have often been limited to 2D or 3D finite-difference, (visco-)acoustic wave propagation due to computational limitations. We explore the possibility of including all relevant physical signatures in the wavefield using the spectral-element method (SPECFEM3D, SPECFEM2D), thereby accounting for acoustic, (visco-)elastic, poroelastic, anisotropic wave propagation in meshes which honor all crucial discontinuities. Mesh design is the crux of the problem, and we use CUBIT (Sandia Laboratories) to generate unstructured quadrilateral 2D and hexahedral 3D meshes for these complex background models. While general hexahedral mesh generation is an unresolved problem, we are able to accommodate most of the relevant settings (e.g., layer-cake models, salt bodies, overthrusting faults, and strong topography) with respectively tailored workflows. 2D simulations show localized, characteristic wave effects due to these features that shall be helpful in designing survey acquisition geometries in a relatively economic fashion. We address some of the fundamental issues this comprehensive modeling approach faces regarding its feasibility: Assessing geological structures in terms of the necessity to honor the major structural units, appropriate velocity model interpolation, quality control of the resultant mesh, and computational cost for realistic settings up to frequencies of 40 Hz. The solution to this forward problem forms the basis for subsequent 2D and 3D adjoint tomography within this context, which is the subject of a companion paper.
Newe, Axel; Ganslandt, Thomas
2013-01-01
The usefulness of the 3D Portable Document Format (PDF) for clinical, educational, and research purposes has recently been shown. However, the lack of a simple tool for converting biomedical data into model data in the necessary Universal 3D (U3D) file format is a drawback for the broad acceptance of this new technology. A new module for the image processing and rapid prototyping framework MeVisLab not only provides a platform-independent means of creating surface meshes from biomedical/DICOM and other data and exporting them to U3D; it also lets the user add metadata to these meshes to predefine colors and names that can be processed by PDF authoring software while generating 3D PDF files. Furthermore, the source code of the module is available and well documented, so it can easily be modified for one's own purposes.
Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs
NASA Astrophysics Data System (ADS)
Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.
2016-06-01
Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated by using images acquired from a system camera mounted in an underwater housing and the popular GoPro cameras respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
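The relative-accuracy analysis above, comparing generated point clouds against reference measurements, can be sketched as a brute-force nearest-neighbour RMS distance between two clouds. The function name and the exhaustive search are our illustrative assumptions, not the study's actual workflow (which used a calibration frame and reference objects):

```python
import math

def cloud_to_cloud_rmse(test_cloud, ref_cloud):
    """Relative accuracy of a point cloud: root-mean-square of each
    test point's distance to its nearest neighbour in the reference
    cloud (brute force; fine for small clouds)."""
    def nn_dist(p):
        return min(math.dist(p, q) for q in ref_cloud)
    sq = [nn_dist(p) ** 2 for p in test_cloud]
    return math.sqrt(sum(sq) / len(sq))
```

For the cloud sizes typical of photogrammetric surveys, a k-d tree would replace the brute-force search, but the accuracy metric is the same.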
Koban, K C; Leitsch, S; Holzbach, T; Volkmer, E; Metz, P M; Giunta, R E
2014-04-01
A new approach using photographs from smartphones for three-dimensional (3D) imaging has been introduced alongside the standard high-quality 3D camera systems. In this work, we investigated different capture preferences and compared the accuracy of this 3D reconstruction method with manual tape measurement and an established commercial 3D camera system. The facial region of one plastic mannequin head was labelled with 21 landmarks. A 3D reference model was captured with the Vectra 3D Imaging System®. In addition, 3D imaging was performed with the Autodesk 123D Catch® application using 16, 12, 9, 6 and 3 pictures from an Apple® iPhone 4S® and an iPad® 3rd generation. The accuracy of 3D reconstruction was measured in 2 steps. First, 42 distance measurements from manual tape measurement and the 2 digital systems were compared. Second, the surface-to-surface deviation of different aesthetic units from the Vectra® reference model to the Catch®-generated models was analysed. For each 3D system the capturing and processing time was measured. The measurements showed no significant (p>0.05) difference between manual tape measurement and the digital distances from the Catch® application and Vectra®. Surface-to-surface deviation from the Vectra® reference model showed sufficient results for the 3D reconstruction with Catch® using 16, 12 and 9 picture sets. Use of 6 and 3 pictures resulted in large deviations. Lateral aesthetic units showed higher deviations than central units. Catch® needed 5 times longer to capture and compute 3D models (average 10 min vs. 2 min). The models computed by Autodesk 123D Catch® suggest good accuracy of 3D reconstruction for a standard mannequin model, in comparison with manual tape measurement and surface-to-surface analysis against a 3D reference model. However, the prolonged capture time with multiple pictures is prone to errors. Further studies are needed to investigate its application and quality in capturing volunteer models.
Soon, mobile applications may offer plastic surgeons an alternative to today's cost-intensive, stationary 3D camera systems. © Georg Thieme Verlag KG Stuttgart · New York.
Tsao, Liuxing; Ma, Liang
2016-11-01
Digital human modelling enables ergonomists and designers to consider ergonomic concerns and design alternatives in a timely and cost-efficient manner in the early stages of design. However, the reliability of the simulation could be limited due to the percentile-based approach used in constructing the digital human model. To enhance the accuracy of the size and shape of the models, we proposed a framework to generate digital human models using three-dimensional (3D) anthropometric data. The 3D scan data from specific subjects' hands were segmented based on the estimated centres of rotation. The segments were then driven in forward kinematics to perform several functional postures. The constructed hand models were then verified, thereby validating the feasibility of the framework. The proposed framework helps generate accurate subject-specific digital human models, which can be utilised to guide product design and workspace arrangement. Practitioner Summary: Subject-specific digital human models can be constructed under the proposed framework based on three-dimensional (3D) anthropometry. This approach enables more reliable digital human simulation to guide product design and workspace arrangement.
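Driving the scanned hand segments "in forward kinematics", as described above, amounts to chaining rotations and translations along the segments from the estimated centres of rotation. A minimal planar sketch (not the authors' 3D hand model; segment lengths and relative joint angles are hypothetical inputs) looks like this:

```python
import math

def forward_kinematics(lengths, angles):
    """Planar forward kinematics: each joint angle is relative to the
    previous segment; returns the (x, y) position of every joint,
    starting from the base at the origin."""
    x = y = theta = 0.0
    points = [(x, y)]
    for seg_len, a in zip(lengths, angles):
        theta += a                      # accumulate relative rotations
        x += seg_len * math.cos(theta)
        y += seg_len * math.sin(theta)
        points.append((x, y))
    return points
```

Posing a scanned finger then means assigning joint angles for a functional posture and transforming each scan segment by the accumulated rotation, in 3D rather than in this 2D toy.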
a Quadtree Organization Construction and Scheduling Method for Urban 3d Model Based on Weight
NASA Astrophysics Data System (ADS)
Yao, C.; Peng, G.; Song, Y.; Duan, M.
2017-09-01
The increasing precision and data volume of urban 3D models place higher demands on the real-time rendering of digital city models. Improving the organization, management, and scheduling of 3D model data in a 3D digital city can improve rendering quality and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and scheduling method for rendering urban 3D models. Urban 3D models are divided into different rendering weights according to certain rules, and quadtree construction and scheduled rendering are performed according to those weights. An algorithm is also proposed for extracting bounding boxes based on model drawing primitives, so as to generate LOD models automatically. Using the proposed algorithm, a 3D urban planning and management software package was developed; practice has shown that the algorithm is efficient and feasible, with the render frame rate of both large and small scenes stable at around 25 frames per second.
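A capacity-based quadtree with weight-filtered retrieval, in the spirit of (but not identical to) the weight-based organization and scheduling described above, can be sketched as follows. The capacity rule and the (x, y, weight) item format are our assumptions for illustration:

```python
class QuadNode:
    """Quadtree over a square region; a node subdivides when it holds
    more than `capacity` models. Each model is a tuple (x, y, weight),
    where weight stands in for the rendering weight of the model."""
    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size = x, y, size
        self.capacity = capacity
        self.items = []
        self.children = None

    def insert(self, item):
        px, py, _ = item
        if not (self.x <= px < self.x + self.size and
                self.y <= py < self.y + self.size):
            return False                      # outside this node's region
        if self.children is None:
            self.items.append(item)
            if len(self.items) > self.capacity:
                self._subdivide()
            return True
        return any(c.insert(item) for c in self.children)

    def _subdivide(self):
        half = self.size / 2
        self.children = [QuadNode(self.x + dx, self.y + dy, half, self.capacity)
                         for dx in (0, half) for dy in (0, half)]
        for it in self.items:                 # push items down to children
            any(c.insert(it) for c in self.children)
        self.items = []

    def query_by_weight(self, min_weight):
        """Collect models at or above a rendering weight, e.g. so a
        scheduler can draw heavy models first."""
        found = [it for it in self.items if it[2] >= min_weight]
        if self.children:
            for c in self.children:
                found.extend(c.query_by_weight(min_weight))
        return found
```

A scheduler could call `query_by_weight` with a threshold derived from the camera distance, rendering high-weight models at full detail and deferring or simplifying the rest.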
DOE Office of Scientific and Technical Information (OSTI.GOV)
N. A. Anderson; P. Sabharwall
2014-01-01
The Next Generation Nuclear Plant project is aimed at the research and development of a helium-cooled high-temperature gas reactor that could generate both electricity and process heat for the production of hydrogen. The heat from the high-temperature primary loop must be transferred via an intermediate heat exchanger to a secondary loop. Using RELAP5-3D, a model was developed for two of the heat exchanger options: a printed-circuit heat exchanger and a helical-coil steam generator. The RELAP5-3D models were used to simulate an exponential decrease in pressure over a 20-second period. The results of this loss-of-coolant analysis indicate that heat is initially transferred from the primary loop to the secondary loop, but after the decrease in pressure in the primary loop, heat is transferred from the secondary loop to the primary loop. A high-temperature gas reactor model should be developed and connected to the heat transfer component to simulate other transients.
Development of a 3D log sawing optimization system for small sawmills in central Appalachia, US
Wenshu Lin; Jingxin Wang; Edward Thomas
2011-01-01
A 3D log sawing optimization system was developed to perform log generation, opening face determination, sawing simulation, and lumber grading using 3D modeling techniques. Heuristic and dynamic programming algorithms were used to determine opening face and grade sawing optimization. Positions and shapes of internal log defects were predicted using a model developed by...
3D laser scanning and modelling of the Dhow heritage for the Qatar National Museum
NASA Astrophysics Data System (ADS)
Wetherelt, A.; Cooper, J. P.; Zazzaro, C.
2014-08-01
Curating boats can be difficult. They are complex structures, often demanding to conserve whether in or out of the water; they are usually large, difficult to move on land, and demanding of gallery space. Communicating life on board to a visiting public in the terra firma context of a museum can be difficult. Boats in their native environment are inherently dynamic artifacts. In a museum they can be static and divorced from the maritime context that might inspire engagement. New technologies offer new approaches to these problems. 3D laser scanning and digital modeling offers museums a multifaceted means of recording, monitoring, studying and communicating watercraft in their care. In this paper we describe the application of 3D laser scanning and subsequent digital modeling. Laser scans were further developed using computer-generated imagery (CGI) modeling techniques to produce photorealistic 3D digital models for development into interactive, media-based museum displays. The scans were also used to generate 2D naval lines and orthographic drawings as a lasting curatorial record of the dhows held by the National Museum of Qatar.
A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form: forward modeling, data functionals, sensitivity computations, and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
NASA Astrophysics Data System (ADS)
Wu, Bo; Xie, Linfu; Hu, Han; Zhu, Qing; Yau, Eric
2018-05-01
Photorealistic three-dimensional (3D) models are fundamental to the spatial data infrastructure of a digital city, and have numerous potential applications in areas such as urban planning, urban management, urban monitoring, and urban environmental studies. Recent developments in aerial oblique photogrammetry based on aircraft or unmanned aerial vehicles (UAVs) offer promising techniques for 3D modeling. However, 3D models generated from aerial oblique imagery in urban areas with densely distributed high-rise buildings may show geometric defects and blurred textures, especially on building façades, due to problems such as occlusion and large camera tilt angles. Meanwhile, mobile mapping systems (MMSs) can capture terrestrial images of close-range objects from a complementary view on the ground at a high level of detail, but do not offer full coverage. The integration of aerial oblique imagery with terrestrial imagery offers promising opportunities to optimize 3D modeling in urban areas. This paper presents a novel method of integrating these two image types through automatic feature matching and combined bundle adjustment between them, and based on the integrated results to optimize the geometry and texture of the 3D models generated from aerial oblique imagery. Experimental analyses were conducted on two datasets of aerial and terrestrial images collected in Dortmund, Germany and in Hong Kong. The results indicate that the proposed approach effectively integrates images from the two platforms and thereby improves 3D modeling in urban areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard, M.A.; Sommer, S.C.
1995-04-01
AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.
Peterson, Lenna X.; Kim, Hyungrae; Esquivel-Rodriguez, Juan; Roy, Amitava; Han, Xusi; Shin, Woong-Hee; Zhang, Jian; Terashi, Genki; Lee, Matt; Kihara, Daisuke
2016-01-01
We report the performance of protein-protein docking predictions by our group for recent rounds of the Critical Assessment of Prediction of Interactions (CAPRI), a community-wide assessment of state-of-the-art docking methods. Our prediction procedure uses a protein-protein docking program named LZerD developed in our group. LZerD represents a protein surface with 3D Zernike descriptors (3DZD), which are based on a mathematical series expansion of a 3D function. The appropriate soft representation of protein surface with 3DZD makes the method more tolerant to conformational change of proteins upon docking, which adds an advantage for unbound docking. Docking was guided by interface residue prediction performed with BindML and cons-PPISP as well as literature information when available. The generated docking models were ranked by a combination of scoring functions, including PRESCO, which evaluates the native-likeness of residues’ spatial environments in structure models. First, we discuss the overall performance of our group in the CAPRI prediction rounds and investigate the reasons for unsuccessful cases. Then, we examine the performance of several knowledge-based scoring functions and their combinations for ranking docking models. It was found that the quality of a pool of docking models generated by LZerD, i.e. whether or not the pool includes near-native models, can be predicted by the correlation of multiple scores. Although the current analysis used docking models generated by LZerD, findings on scoring functions are expected to be universally applicable to other docking methods. PMID:27654025
Getting in touch--3D printing in forensic imaging.
Ebert, Lars Chr; Thali, Michael J; Ross, Steffen
2011-09-10
With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
IRFK2D: a computer program for simulating intrinsic random functions of order k
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Dowd, Peter A.
2003-07-01
IRFK2D is an ANSI Fortran-77 program that generates realizations of an intrinsic random function of order k (with k equal to 0, 1 or 2) with a permissible polynomial generalized covariance model. The realizations may be non-conditional or conditioned to the experimental data. The turning bands method is used to generate realizations in 2D and 3D from simulations of an intrinsic random function of order k along lines that span the 2D or 3D space. The program generates two output files, the first containing the simulated values and the second containing the theoretical generalized variogram for different directions together with the theoretical model. The experimental variogram is calculated from the simulated values, while the theoretical variogram is the specified generalized covariance model. The generalized variogram is used to assess the quality of the simulation, as measured by the extent to which the generalized covariance is reproduced by the simulation. The examples given in this paper indicate that IRFK2D is an efficient implementation of the methodology.
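A minimal flavour of the turning-bands idea, building a 2-D field by summing 1-D random processes along random line directions, can be sketched as below. This simplified random-cosine variant is illustrative only; it does not implement IRFK2D's intrinsic-random-function or generalized-covariance machinery, and the Gaussian frequency choice is our assumption:

```python
import math
import random

def turning_bands_2d(points, n_lines=64, scale=1.0, rng=None):
    """Approximate a 2-D stationary Gaussian field by summing 1-D
    random-cosine processes along n_lines random directions (a
    simplified spectral turning-bands variant)."""
    rng = rng or random.Random(0)
    lines = []
    for _ in range(n_lines):
        theta = rng.uniform(0.0, math.pi)      # direction of this band
        freq = rng.gauss(0.0, 1.0) / scale     # random spectral frequency
        phase = rng.uniform(0.0, 2.0 * math.pi)
        lines.append((math.cos(theta), math.sin(theta), freq, phase))
    norm = math.sqrt(2.0 / n_lines)            # unit-variance normalization
    values = []
    for (x, y) in points:
        # project the point onto each line and sum the 1-D processes
        s = sum(math.cos(freq * (x * ux + y * uy) + phase)
                for ux, uy, freq, phase in lines)
        values.append(norm * s)
    return values
```

The full method simulates each 1-D line process with the covariance induced by the target generalized covariance model and, for conditional realizations, corrects the field with kriging at the data points.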
Mittag, U.; Kriechbaumer, A.; Rittweger, J.
2017-01-01
The authors propose a new 3D interpolation algorithm for the generation of digital geometric 3D models of bones from existing image stacks obtained by peripheral Quantitative Computed Tomography (pQCT) or Magnetic Resonance Imaging (MRI). The technique is based on the interpolation of radial gray-value profiles of the pQCT cross sections. The method was validated using an ex-vivo human tibia, by comparing interpolated pQCT images with images from scans taken at the same positions. A diversity index of <0.4 (1 meaning maximal diversity), even for the structurally complex region of the epiphysis, along with good agreement of the mineral-density-weighted cross-sectional moment of inertia (CSMI), demonstrates the high quality of the interpolation approach. Thus, the authors demonstrate that this interpolation scheme can substantially improve the generation of 3D models from sparse scan sets, not only with respect to the outer shape but also with respect to the internal, gray-value-derived material property distribution. PMID:28574415
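The central idea of radial-profile interpolation can be sketched as follows. This is a simplified illustration under stated assumptions, not the authors' pipeline: profiles are sampled by nearest-pixel lookup around a known center, and an intermediate slice is obtained by linearly blending the profiles of two neighboring slices.

```python
import numpy as np

def radial_profiles(image, center, n_angles=90, n_radii=50):
    """Sample gray values along rays from `center`, one profile per angle.
    Nearest-pixel sampling is an assumption made for brevity."""
    cy, cx = center
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(0, min(image.shape) / 2 - 1, n_radii)
    profiles = np.empty((n_angles, n_radii))
    for i, a in enumerate(angles):
        ys = np.clip((cy + radii * np.sin(a)).astype(int), 0, image.shape[0] - 1)
        xs = np.clip((cx + radii * np.cos(a)).astype(int), 0, image.shape[1] - 1)
        profiles[i] = image[ys, xs]
    return profiles

def interpolate_slice(profiles_a, profiles_b, t):
    """Linear blend of two stacks of radial profiles, 0 <= t <= 1."""
    return (1 - t) * profiles_a + t * profiles_b

# Two synthetic cross sections: bright rings of slightly different radius,
# standing in for cortical bone at two scan positions.
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(yy - 32, xx - 32)
slice_a = np.exp(-(r - 10) ** 2 / 8.0)
slice_b = np.exp(-(r - 14) ** 2 / 8.0)
pa = radial_profiles(slice_a, (32, 32))
pb = radial_profiles(slice_b, (32, 32))
mid = interpolate_slice(pa, pb, 0.5)
```

Blending in profile space rather than pixel space is what lets the ring "move" smoothly between the two radii instead of cross-fading, which is the motivation the abstract gives for the approach.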
Automatic image database generation from CAD for 3D object recognition
NASA Astrophysics Data System (ADS)
Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.
1993-06-01
The development and evaluation of multiple-view 3D object recognition systems is based on a large set of model images. Owing to the various advantages of using CAD, it is becoming more and more practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for the automatic generation of various aspects (views) of the objects in a model-based 3D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file sharing (NFS) system, the images can be stored directly in a database located on a file server. This paper presents the image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3D polyhedron recognition. An overview of the results, the advantages and limitations of using CAD data, and conclusions drawn from using such a scheme are also presented.
Campana, Lorenzo; Breitbeck, Robert; Bauer-Kreuz, Regula; Buck, Ursula
2016-05-01
This study evaluated the feasibility of documenting patterned injuries in three dimensions and true colour without complex 3D surface documentation methods. The method is based on a 3D surface model generated from radiologic slice images (CT), while the colour information is derived from photographs taken with commercially available cameras. The external patterned injuries were documented in 16 cases using digital photography as well as highly precise photogrammetry-supported 3D structured light scanning. The internal findings of these deceased were recorded using CT and MRI. For registration of the internal with the external data, two different types of radiographic markers were used and compared. The 3D surface model generated from CT slice images was linked with the photographs, and thereby digital true-colour 3D models of the patterned injuries could be created (image projection onto CT/IprojeCT). In addition, these external models were merged with the models of the somatic interior. We demonstrated that 3D documentation and visualization of external injury findings by integration of digital photography into CT/MRI data sets is suitable for the 3D documentation of individual patterned injuries to a body. Nevertheless, this documentation method is not a substitute for photogrammetry and surface scanning, especially when the entire body surface is to be recorded in three dimensions including all external findings, and when precise data are required for comparing highly detailed injury features with the injury-inflicting tool.
3D abnormal behavior recognition in power generation
NASA Astrophysics Data System (ADS)
Wei, Zhenhua; Li, Xuesen; Su, Jie; Lin, Jie
2011-06-01
Most research on human behavior recognition to date has focused on simple individual behaviors such as waving, crouching, jumping and bending. This paper focuses on abnormal behaviors involving carried objects in power generation settings, such as using a mobile communication device in the main control room, taking a helmet off during work, and lying down in a high place. Because the colors and shapes of these objects are fixed, edge detection with color tracking is adopted to recognize the objects carried by workers. The paper introduces a method that uses the geometric character of the skeleton and its joint angles to represent sequences of three-dimensional human behavior data. A semi-joined critical-step Hidden Markov Model is then adopted, weighting the output probability of critical steps to reduce the computational complexity. A model is trained for every behavior, and skeleton frames selected from the 3D behavior samples form a critical-step set. This set is a bridge linking 2D observed behavior with 3D human joint features, so 3D reconstruction is not required during the 2D behavior recognition phase. At the beginning of the recognition process, the best match for every frame of a 2D observed sample is found in the 3D skeleton set; the sequence of 2D observed skeleton frames is then identified as a specific 3D behavior by a behavior classifier. The effectiveness of the proposed algorithm is demonstrated with experiments in an environment similar to a power generation setting.
3D printing from cardiovascular CT: a practical guide and review
Birbara, Nicolette S.; Hussain, Tarique; Greil, Gerald; Foley, Thomas A.; Pather, Nalini
2017-01-01
Current cardiovascular imaging techniques allow anatomical relationships and pathological conditions to be captured in three dimensions. Three-dimensional (3D) printing, or rapid prototyping, has also become readily available and made it possible to transform virtual reconstructions into physical 3D models. This technology has been utilised to demonstrate cardiovascular anatomy and disease in clinical, research and educational settings. In particular, 3D models have been generated from cardiovascular computed tomography (CT) imaging data for purposes such as surgical planning and teaching. This review summarises applications, limitations and practical steps required to create a 3D printed model from cardiovascular CT. PMID:29255693
Finite-element 3D simulation tools for high-current relativistic electron beams
NASA Astrophysics Data System (ADS)
Humphries, Stanley; Ekdahl, Carl
2002-08-01
The DARHT second-axis injector is a challenge for computer simulations. Electrons are subject to strong beam-generated forces. The fields are fully three-dimensional and accurate calculations at surfaces are critical. We describe methods applied in OmniTrak, a 3D finite-element code suite that can address DARHT and the full range of charged-particle devices. The system handles mesh generation, electrostatics, magnetostatics and self-consistent particle orbits. The MetaMesh program generates meshes of conformal hexahedrons to fit any user geometry. The code has the unique ability to create structured conformal meshes with cubic logic. Organized meshes offer advantages in speed and memory utilization in the orbit and field solutions. OmniTrak is a versatile charged-particle code that handles 3D electric and magnetic field solutions on independent meshes. The program can update both 3D field solutions from the calculated beam space-charge and current-density. We shall describe numerical methods for orbit tracking on a hexahedron mesh. Topics include: 1) identification of elements along the particle trajectory, 2) fast searches and adaptive field calculations, 3) interpolation methods to terminate orbits on material surfaces, 4) automatic particle generation on multiple emission surfaces to model space-charge-limited emission and field emission, 5) flexible Child law algorithms, 6) implementation of the dual potential model for 3D magnetostatics, and 7) assignment of charge and current from model particle orbits for self-consistent fields.
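As a worked example of one ingredient named above, the "Child law algorithms" for space-charge-limited emission reduce, in the textbook planar case, to J = (4*eps0/9) * sqrt(2*e/m) * V^1.5 / d^2. The sketch below computes this nonrelativistic planar form only; OmniTrak's flexible algorithms generalize it to curved emission surfaces and, at DARHT voltages, relativistic corrections matter, so treat this as a baseline estimate, not the code's method.

```python
import math

EPS0 = 8.854187817e-12        # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19    # elementary charge, C
M_ELECTRON = 9.1093837015e-31 # electron rest mass, kg

def child_law_current_density(voltage, gap):
    """Space-charge-limited current density (A/m^2) for an ideal planar
    diode: J = (4*eps0/9) * sqrt(2*e/m) * V**1.5 / d**2 (nonrelativistic)."""
    k = (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_ELECTRON)
    return k * voltage ** 1.5 / gap ** 2

# Example: 100 kV across a 1 cm gap -> roughly 7.4e5 A/m^2 (74 A/cm^2).
j = child_law_current_density(1.0e5, 0.01)
```

A simulation code would apply such a law cell by cell over each emission surface, adjusting the emitted macro-particle charge until the surface field vanishes; that iteration is what makes the implementation "flexible" rather than a single formula.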
High Accuracy 3D Processing of Satellite Imagery
NASA Technical Reports Server (NTRS)
Gruen, A.; Zhang, L.; Kocaman, S.
2007-01-01
Automatic DSM/DTM generation reproduces not only the general features but also the detailed features of the terrain relief, with a height accuracy of around 1 pixel in cooperative terrain: RMSE values of 1.3-1.5 m (1.0-2.0 pixels) for IKONOS and 2.9-4.6 m (0.5-1.0 pixels) for SPOT5 HRS. For 3D city modeling, the manual and semi-automatic feature extraction capability of SAT-PP provides a good basis. The tools of SAT-PP allowed the stereo-measurement of points on roofs in order to generate a 3D city model with CCM. The results show that building models with main roof structures can be successfully extracted from HRSI. As expected, more details are visible with QuickBird.
Atalay, Hasan Anıl; Ülker, Volkan; Alkan, İlter; Canat, Halil Lütfi; Özkuvancı, Ünsal; Altunrende, Fatih
2016-10-01
To investigate the impact of three-dimensional (3D) printed pelvicaliceal system models on residents' understanding of pelvicaliceal system anatomy before percutaneous nephrolithotripsy (PCNL). Patients with unilateral complex renal stones for whom PCNL was indicated were selected. Usable patient data were obtained from CT scans in Digital Imaging and Communications in Medicine (DICOM) format. Mimics software version 16.0 (Materialise, Belgium) was used for segmentation and extraction of pelvicaliceal systems (PCSs). All DICOM-formatted files were converted to the stereolithography file format. Finally, fused deposition modeling was used to create plasticine 3D models of PCSs. A questionnaire was designed so that residents could assess the 3D models' effects on their understanding of the anatomy of the pelvicaliceal system before PCNL (Fig. 3). Anatomically accurate models of the human renal collecting system were effectively generated for five patients (Figs. 1 and 2). After presentation of the 3D models, residents were 86% and 88% better at determining the number of anterior and posterior calices, respectively, 60% better at understanding stone location, and 64% better at determining the optimal entry calix into the collecting system (Fig. 5). Generating kidney models of PCSs using 3D printing technology is feasible, and the models were accepted by residents as aids in surgical planning and understanding of pelvicaliceal system anatomy before PCNL.
NASA Astrophysics Data System (ADS)
Kunz, Robert; Haworth, Daniel; Dogan, Gulkiz; Kriete, Andres
2006-11-01
Three-dimensional, unsteady simulations of multiphase flow, gas exchange, and particle/aerosol deposition in the human lung are reported. Surface data for human tracheo-bronchial trees are derived from CT scans and used to generate three-dimensional CFD meshes for the first several generations of branching. One-dimensional meshes for the remaining generations down to the respiratory units are generated using branching algorithms based on those proposed in the literature, and a zero-dimensional respiratory unit (pulmonary acinus) model is attached at the end of each terminal bronchiole. The process is automated to facilitate rapid model generation. The model is exercised through multiple breathing cycles to compute the spatial and temporal variations in flow, gas exchange, and particle/aerosol deposition. The depth of the 3D/1D transition (at branching generation n) is a key parameter and can be varied. High-fidelity models (large n) are run on massively parallel distributed-memory clusters and are used to generate physical insight and to calibrate/validate the 1D and 0D models. Suitably validated lower-order models (small n) can be run on single-processor PCs with run times that allow model-based clinical intervention for individual patients.
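The 1D tree-generation step can be sketched as follows. This is a deliberately simplified, symmetric version: the per-generation diameter ratio of 2^(-1/3) (Murray's law) and the 18 mm tracheal diameter are common literature assumptions, not necessarily the branching rules the authors used.

```python
# Hedged sketch: recursively generate the symmetric 1D airway tree hanging
# off one outlet of the 3D CFD region, from generation n_3d down to the
# terminal bronchioles at generation n_total - 1.

def build_airway_tree(n_3d, n_total, d0=0.018):
    """Return a list of (generation, diameter_m) segments. A 0D acinus model
    would be attached at each terminal (deepest-generation) segment."""
    ratio = 2.0 ** (-1.0 / 3.0)  # Murray's law diameter ratio (assumption)
    segments = []

    def grow(gen, diameter):
        if gen >= n_total:
            return
        segments.append((gen, diameter))
        # symmetric bifurcation: two daughters per parent
        grow(gen + 1, diameter * ratio)
        grow(gen + 1, diameter * ratio)

    # start at the first 1D generation, scaled down from the trachea d0
    grow(n_3d, d0 * ratio ** n_3d)
    return segments

tree = build_airway_tree(n_3d=3, n_total=10)
terminals = [s for s in tree if s[0] == 9]
```

With the 3D/1D transition at generation 3 and ten generations in total, a single outlet carries 2^6 = 64 terminal bronchioles; varying `n_3d` is exactly the trade-off the abstract describes between high-fidelity (large n) and fast lower-order (small n) models.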
Bas-Relief Modeling from Normal Images with Intuitive Styles.
Ji, Zhongping; Ma, Weiyin; Sun, Xianfang
2014-05-01
Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of the depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a target height for each layer. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tool for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.
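The core step of turning a normal image into a depth field can be sketched as a least-squares gradient integration. This is a plain formulation for illustration only: the paper's actual system additionally builds in depth compression and style control, which are omitted here.

```python
import numpy as np

def depth_from_normals(normals):
    """normals: (H, W, 3) unit normals. Returns an (H, W) depth field by
    solving, in least squares, the finite-difference equations
    z[i, j+1] - z[i, j] = p and z[i+1, j] - z[i, j] = q,
    where p = -nx/nz and q = -ny/nz are the slopes implied by the normals."""
    h, w, _ = normals.shape
    p = -normals[..., 0] / normals[..., 2]
    q = -normals[..., 1] / normals[..., 2]
    n = h * w
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    def idx(i, j):
        return i * w + j
    for i in range(h):
        for j in range(w - 1):
            rows += [eq, eq]; cols += [idx(i, j + 1), idx(i, j)]
            vals += [1.0, -1.0]; rhs.append(p[i, j]); eq += 1
    for i in range(h - 1):
        for j in range(w):
            rows += [eq, eq]; cols += [idx(i + 1, j), idx(i, j)]
            vals += [1.0, -1.0]; rhs.append(q[i, j]); eq += 1
    # anchor one pixel: the system is otherwise defined only up to a constant
    rows.append(eq); cols.append(0); vals.append(1.0); rhs.append(0.0); eq += 1
    A = np.zeros((eq, n))  # dense for brevity; real systems stay sparse
    A[rows, cols] = vals
    z, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return z.reshape(h, w)

# A tilted plane z = 0.1 * x has the constant normal (-0.1, 0, 1), normalized.
nrm = np.tile(np.array([-0.1, 0.0, 1.0]) / np.hypot(0.1, 1.0), (8, 8, 1))
z = depth_from_normals(nrm)
```

Working in normal image space means any image editor that can paint or composite normal maps becomes a bas-relief design tool, with the linear solve converting the edited normals back to a consistent height field.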
The NIH 3D Print Exchange: A Public Resource for Bioscientific and Biomedical 3D Prints.
Coakley, Meghan F; Hurt, Darrell E; Weber, Nick; Mtingwa, Makazi; Fincher, Erin C; Alekseyev, Vsevelod; Chen, David T; Yun, Alvin; Gizaw, Metasebia; Swan, Jeremy; Yoo, Terry S; Huyen, Yentram
2014-09-01
The National Institutes of Health (NIH) has launched the NIH 3D Print Exchange, an online portal for discovering and creating bioscientifically relevant 3D models suitable for 3D printing, to provide both researchers and educators with a trusted source to discover accurate and informative models. There are a number of online resources for 3D prints, but there is a paucity of scientific models, and the expertise required to generate and validate such models remains a barrier. The NIH 3D Print Exchange fills this gap by providing novel, web-based tools that empower users with the ability to create ready-to-print 3D files from molecular structure data, microscopy image stacks, and computed tomography scan data. The NIH 3D Print Exchange facilitates open data sharing in a community-driven environment, and also includes various interactive features, as well as information and tutorials on 3D modeling software. As the first government-sponsored website dedicated to 3D printing, the NIH 3D Print Exchange is an important step forward to bringing 3D printing to the mainstream for scientific research and education.
A 3D visualization and simulation of the individual human jaw.
Muftić, Osman; Keros, Jadranka; Baksa, Sarajko; Carek, Vlado; Matković, Ivo
2003-01-01
A new biomechanical three-dimensional (3D) model of the human mandible, based on a computer-generated virtual model, is proposed. Using maps obtained from special photographs of the face of a real subject, it is possible to attribute personality to the virtual character, while computer animation provides movements and characteristics within the spatial and temporal confines of the virtual world. A simple two-dimensional model of the jaw cannot explain its biomechanics, in which the muscular forces, acting through the occlusion and condylar surfaces, are in a state of 3D equilibrium. In the model, all forces are resolved into components according to a selected coordinate system. The muscular forces act on the jaw with the force level necessary for chewing, maintaining a kind of mandibular balance that prevents dislocation and loading of nonarticular tissues. The work uses a new approach to computer-generated animation of virtual 3D characters (called "Body SABA"), packaged as a single low-cost, easy-to-operate tool.
NASA Astrophysics Data System (ADS)
Rodríguez-Ruiz, Alejandro; Agasthya, Greeshma A.; Sechopoulos, Ioannis
2017-09-01
To characterize and develop a patient-based 3D model of the compressed breast undergoing mammography and breast tomosynthesis. During this IRB-approved, HIPAA-compliant study, 50 women were recruited to undergo 3D breast surface imaging with structured light (SL) during breast compression, along with simultaneous acquisition of a tomosynthesis image. A pair of SL systems were used to acquire 3D surface images by projecting 24 different patterns onto the compressed breast and capturing their reflection off the breast surface in approximately 12-16 s. The 3D surface was characterized and modeled via principal component analysis. The resulting surface model was combined with a previously developed 2D model of projected compressed breast shapes to generate a full 3D model. Data from ten patients were discarded due to technical problems during image acquisition. The maximum breast thickness (found at the chest-wall) had an average value of 56 mm, and decreased 13% towards the nipple (breast tilt angle of 5.2°). The portion of the breast not in contact with the compression paddle or the support table extended on average 17 mm, 18% of the chest-wall to nipple distance. The outermost point along the breast surface lies below the midline of the total thickness. A complete 3D model of compressed breast shapes was created and implemented as a software application available for download, capable of generating new random realistic 3D shapes of breasts undergoing compression. Accurate characterization and modeling of the breast curvature and shape was achieved and will be used for various image processing and clinical tasks.
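The modeling step described above, principal component analysis over a training set of surfaces followed by random generation of new plausible shapes, can be sketched as follows. The 50-patient 3D surfaces are replaced here by synthetic 1D profiles for brevity; the PCA-and-sample pattern is the same, but every numeric detail is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_points = 40, 120
# Synthetic training "shapes": a base curve plus two variation modes + noise.
t = np.linspace(0, np.pi, n_points)
base = np.sin(t)
modes = np.stack([np.cos(t), np.sin(2 * t)])
weights = rng.standard_normal((n_subjects, 2))
shapes = base + weights @ modes + 0.01 * rng.standard_normal((n_subjects, n_points))

# PCA via SVD of the mean-centered data matrix.
mean_shape = shapes.mean(axis=0)
u, s, vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
# Keep enough components to explain 95% of the variance (threshold assumed).
n_keep = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1

def random_shape(rng):
    """Draw a new plausible shape: mean plus randomly weighted components,
    with weights scaled by the training-set standard deviations."""
    b = rng.standard_normal(n_keep) * s[:n_keep] / np.sqrt(n_subjects - 1)
    return mean_shape + b @ vt[:n_keep]

new = random_shape(rng)
```

Sampling the component weights from the observed spread is what lets such a model generate "new random realistic" shapes rather than replaying the training cases.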
3D Digitization and Prototyping of the Skull for Practical Use in the Teaching of Human Anatomy.
Lozano, Maria Teresa Ugidos; Haro, Fernando Blaya; Diaz, Carlos Molino; Manzoor, Sadia; Ugidos, Gonzalo Ferrer; Mendez, Juan Antonio Juanes
2017-05-01
The development of new rapid prototyping techniques and low-cost 3D printers, together with new software for these techniques, has made it possible to create 3D models of bones and to apply them in the teaching of anatomy in Health Sciences faculties. The full-scale 3D model of the cranium created in the present work presents accurate reliefs and anatomical details that are easily identifiable by undergraduate students in their study of human anatomy. This article reports the process of scanning the skull and the subsequent treatment of the images with specific software, through to the generation of the 3D model on a 3D printer.
Gis-Based Smart Cartography Using 3d Modeling
NASA Astrophysics Data System (ADS)
Malinverni, E. S.; Tassetti, A. N.
2013-08-01
3D city models have evolved into important tools for urban decision processes and information systems, especially in planning, simulation, analysis, documentation and heritage management. On the other hand, existing numerical cartography in current use is often not suitable for GIS because it is not geometrically and topologically correctly structured. The aim of this research is to structure and organize numerical cartography in 3D for GIS and to turn it into standardized CityGML features. The work is framed around a first phase of methodological analysis aimed at underlining which existing standards (such as ISO and OGC rules) can be used to improve the quality requirements of a cartographic structure. Subsequently, from these technical specifications, the translation into formal contents was investigated using a proprietary interchange software package (SketchUp) to support guideline implementations for generating a 3D GIS structured in GML3. A three-dimensional numerical test cartography (scale 1:500, generated from range data captured by a 3D laser scanner) was therefore prepared, tested for quality according to the above standards, and edited when and where necessary. CAD files and shapefiles were converted into a final 3D model (Google SketchUp model) and then exported into a 3D city model (CityGML LoD1/LoD2). The 3D GIS structure was managed in a GIS environment to run further spatial analyses and energy performance estimates that are not achievable in a 2D environment. In particular, geometrical building parameters (footprint, volume, etc.) were computed and building envelope thermal characteristics derived from them. Lastly, a simulation was carried out dealing with asbestos and home renovation charges to show how the built 3D city model can support municipal managers in diagnosing risks in the present situation and in developing strategies for sustainable redevelopment.
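The geometrical building parameters mentioned above can be computed very simply for a CityGML LoD1 building, modeled as a footprint polygon extruded to a fixed height. The sketch below uses the shoelace formula; the coordinates and height are made-up example values, not data from the study.

```python
def footprint_area(polygon):
    """Shoelace formula; polygon is a list of (x, y) vertices in order."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def lod1_volume(polygon, height):
    """LoD1 volume = footprint area times the extrusion height."""
    return footprint_area(polygon) * height

# Hypothetical L-shaped footprint (meters), 10 m eaves height.
poly = [(0, 0), (12, 0), (12, 6), (6, 6), (6, 10), (0, 10)]
area = footprint_area(poly)
volume = lod1_volume(poly, 10.0)
```

Quantities like these (footprint, volume, envelope surface) are exactly what a GIS needs from a topologically correct 3D structure to drive downstream estimates such as energy performance, which is why the geometric and topological cleanliness of the cartography matters.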
Automatic Texture Mapping of Architectural and Archaeological 3d Models
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Stallmann, D.
2012-07-01
Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models from digital photographs with software packages such as Maxon Cinema 4D, Autodesk 3ds Max or Maya still requires a complex and time-consuming workflow, so procedures for the automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates textured 3D surface models by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and is developed in the programming language C++. The studies show that visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem, the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.
Maffioletti, Sara Martina; Sarcar, Shilpita; Henderson, Alexander B H; Mannhardt, Ingra; Pinton, Luca; Moyle, Louise Anne; Steele-Stallard, Heather; Cappellari, Ornella; Wells, Kim E; Ferrari, Giulia; Mitchell, Jamie S; Tyzack, Giulia E; Kotiadis, Vassilios N; Khedr, Moustafa; Ragazzi, Martina; Wang, Weixin; Duchen, Michael R; Patani, Rickie; Zammit, Peter S; Wells, Dominic J; Eschenhagen, Thomas; Tedesco, Francesco Saverio
2018-04-17
Generating human skeletal muscle models is instrumental for investigating muscle pathology and therapy. Here, we report the generation of three-dimensional (3D) artificial skeletal muscle tissue from human pluripotent stem cells, including induced pluripotent stem cells (iPSCs) from patients with Duchenne, limb-girdle, and congenital muscular dystrophies. 3D skeletal myogenic differentiation of pluripotent cells was induced within hydrogels under tension to provide myofiber alignment. Artificial muscles recapitulated characteristics of human skeletal muscle tissue and could be implanted into immunodeficient mice. Pathological cellular hallmarks of incurable forms of severe muscular dystrophy could be modeled with high fidelity using this 3D platform. Finally, we show generation of fully human iPSC-derived, complex, multilineage muscle models containing key isogenic cellular constituents of skeletal muscle, including vascular endothelial cells, pericytes, and motor neurons. These results lay the foundation for a human skeletal muscle organoid-like platform for disease modeling, regenerative medicine, and therapy development. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Chen, H F; Dong, X C; Zen, B S; Gao, K; Yuan, S G; Panaye, A; Doucet, J P; Fan, B T
2003-08-01
An efficient virtual and rational drug design method is presented. It combines virtual bioactive compound generation with a 3D-QSAR model and docking. Using this method, it is possible to generate a large number of highly diverse molecules and find virtual active lead compounds. The method was validated by the study of a set of anti-tumor drugs. With the constraints of the pharmacophore obtained by DISCO implemented in SYBYL 6.8, 97 virtual bioactive compounds were generated, and their anti-tumor activities were predicted by CoMFA. Eight structures with high activity were selected and screened by the 3D-QSAR model. The most active generated structure was further investigated by modifying its structure in order to increase the activity. A comparative docking study with the telomeric receptor was carried out, and the results showed that the generated structures could form more stable complexes with the receptor than the reference compound selected from experimental data. This investigation showed that the proposed method is a feasible approach to rational drug design with high screening efficiency.
Phase aided 3D imaging and modeling: dedicated systems and case studies
NASA Astrophysics Data System (ADS)
Yin, Yongkai; He, Dong; Liu, Zeyi; Liu, Xiaoli; Peng, Xiang
2014-05-01
Dedicated prototype systems for 3D imaging and modeling (3DIM) are presented. The 3D imaging systems are based on the principle of phase-aided active stereo and have been developed in our laboratory over the past few years. The reported 3D imaging prototypes range from a single 3D sensor to an optical measurement network composed of multiple 3D sensor nodes. To enable these 3D imaging systems, we briefly discuss the corresponding calibration techniques for both the single sensor and the multi-sensor optical measurement network, which allow good performance of the 3DIM prototype systems in terms of measurement accuracy and repeatability. Furthermore, two case studies, the generation of a high-quality color model of movable cultural heritage and a photo booth based on body scanning, are presented to demonstrate our approach.
Peterson, Lenna X; Kim, Hyungrae; Esquivel-Rodriguez, Juan; Roy, Amitava; Han, Xusi; Shin, Woong-Hee; Zhang, Jian; Terashi, Genki; Lee, Matt; Kihara, Daisuke
2017-03-01
We report the performance of protein-protein docking predictions by our group for recent rounds of the Critical Assessment of Prediction of Interactions (CAPRI), a community-wide assessment of state-of-the-art docking methods. Our prediction procedure uses a protein-protein docking program named LZerD developed in our group. LZerD represents a protein surface with 3D Zernike descriptors (3DZD), which are based on a mathematical series expansion of a 3D function. The appropriately soft representation of the protein surface with 3DZD makes the method more tolerant to conformational change of proteins upon docking, which is an advantage for unbound docking. Docking was guided by interface residue prediction performed with BindML and cons-PPISP, as well as literature information when available. The generated docking models were ranked by a combination of scoring functions, including PRESCO, which evaluates the native-likeness of residues' spatial environments in structure models. First, we discuss the overall performance of our group in the CAPRI prediction rounds and investigate the reasons for unsuccessful cases. Then, we examine the performance of several knowledge-based scoring functions and their combinations for ranking docking models. It was found that the quality of a pool of docking models generated by LZerD, that is, whether or not the pool includes near-native models, can be predicted by the correlation of multiple scores. Although the current analysis used docking models generated by LZerD, the findings on scoring functions are expected to be universally applicable to other docking methods. Proteins 2017; 85:513-527. © 2016 Wiley Periodicals, Inc.
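The pool-quality signal described above, agreement among several scoring functions over a pool of docking models, can be sketched as a mean pairwise correlation. The scores below are synthetic stand-ins generated for illustration; a real pool would use functions such as PRESCO and other knowledge-based scores.

```python
import numpy as np

def mean_pairwise_correlation(scores):
    """scores: (n_functions, n_models) array. Returns the mean off-diagonal
    Pearson correlation between scoring functions across the model pool."""
    c = np.corrcoef(scores)
    n = c.shape[0]
    off = c[~np.eye(n, dtype=bool)]
    return float(off.mean())

rng = np.random.default_rng(2)
n_models = 200
# "Good" pool: three scores all track a shared underlying quality signal.
quality = rng.standard_normal(n_models)
good = quality + 0.5 * rng.standard_normal((3, n_models))
# "Bad" pool: three scores are mutually independent noise.
bad = rng.standard_normal((3, n_models))
r_good = mean_pairwise_correlation(good)
r_bad = mean_pairwise_correlation(bad)
```

When independent scoring functions rank the same models highly, a genuine near-native cluster is a plausible common cause, which is the intuition behind using score correlation as a pool-quality predictor.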
ERIC Educational Resources Information Center
Chen, Jian; Smith, Andrew D.; Khan, Majid A.; Sinning, Allan R.; Conway, Marianne L.; Cui, Dongmei
2017-01-01
Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including facial skull, nasal…
Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models
NASA Astrophysics Data System (ADS)
Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.
2011-09-01
We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. This prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We focus our work in particular on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes a probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting results on industrial data sets.
3D reconstruction of SEM images by use of optical photogrammetry software.
Eulitz, Mona; Reiss, Gebhard
2015-08-01
Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science, and many biological questions require information about the true 3D structure of the specimen. For scanning electron microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a stereoscopic 3D reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close-range photogrammetry, in which a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special requirements of SEM: instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction, suitable for various applications in research and teaching. Copyright © 2015 Elsevier Inc. All rights reserved.
The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors
NASA Astrophysics Data System (ADS)
Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.
2015-12-01
Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in photogrammetry and computer vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach; thus, although range-based methods are generally very accurate, image-based methods are low-cost and can easily be used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source tools and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method for generating three-dimensional models. Much research has been done to identify suitable software and algorithms to achieve an accurate and complete model, but little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is the deliberation and introduction of an appropriate combination of sensor and software to provide a complete model with the highest accuracy.
To do this, the software packages used in previous studies were compared and the most popular ones in each category were selected (Arc 3D, Visual SfM, Sure, Agisoft). Four small objects with distinct geometric properties and special complexities were chosen, and accurate models of them were created as reliable ground truth using an ATOS Compact Scan 2M 3D scanner. Images were taken with a Fujifilm Real 3D stereo camera, an Apple iPhone 5 and a Nikon D3200 professional camera, and three-dimensional models of the objects were obtained with each software package. Finally, a comprehensive comparison of the detailed results on the dataset showed that the best combination of software and sensor for generating three-dimensional models is directly related to the object shape as well as the expected accuracy of the final model. Generally, better quantitative and qualitative results were obtained with the Nikon D3200 professional camera, while the Fujifilm Real 3D stereo camera and the Apple iPhone 5 ranked second and third, respectively. On the other hand, the three packages Visual SfM, Sure and Agisoft competed closely for the most accurate and complete model of the objects, and the best software differed according to the geometric properties of the object.
3D molecular models of whole HIV-1 virions generated with cellPACK
Goodsell, David S.; Autin, Ludovic; Forli, Stefano; Sanner, Michel F.; Olson, Arthur J.
2014-01-01
As knowledge of individual biological processes grows, it becomes increasingly useful to frame new findings within their larger biological contexts in order to generate new systems-scale hypotheses. This report highlights two major iterations of a whole virus model of HIV-1, generated with the cellPACK software. cellPACK integrates structural and systems biology data with packing algorithms to assemble comprehensive 3D models of cell-scale structures in molecular detail. This report describes the biological data, modeling parameters and cellPACK methods used to specify and construct editable models for HIV-1. Anticipating that cellPACK interfaces under development will enable researchers from diverse backgrounds to critique and improve the biological models, we discuss how cellPACK can be used as a framework to unify different types of data across all scales of biology. PMID:25253262
Physical modeling of 3D and 4D laser imaging
NASA Astrophysics Data System (ADS)
Anna, Guillaume; Hamoir, Dominique; Hespel, Laurent; Lafay, Fabien; Rivière, Nicolas; Tanguy, Bernard
2010-04-01
Laser imaging offers potential for observation, for 3D terrain mapping and classification, and for target identification, including behind vegetation, camouflage or glass windows, by day and night, and under all-weather conditions. First-generation systems deliver 3D point clouds. Threshold detection is strongly affected by the local opto-geometric characteristics of the objects, leading to inaccuracies in the measured distances, and by partial occultation, leading to multiple echoes. Second-generation systems circumvent these limitations by recording the temporal waveforms received by the system, so that data processing can improve the telemetry and make the point cloud better match reality. Future algorithms may exploit the full potential of the 4D full-waveform data. Hence, being able to simulate point-cloud (3D) and full-waveform (4D) laser imaging is key. We have developed a numerical model for predicting the output data of 3D or 4D laser imagers. The model accounts for the temporal and transverse characteristics of the laser pulse (i.e. of the "laser bullet") emitted by the system, its propagation through a turbulent and scattering atmosphere, its interaction with the objects present in the field of view, and the characteristics of the optoelectronic reception path of the system.
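The multiple-echo behaviour described above can be illustrated with a toy simulation. The sketch below models a received full waveform as the sum of range-delayed copies of a Gaussian emitted pulse reflected by two surfaces (e.g. foliage in front of a target); all pulse and scene parameters are assumed for illustration only and do not come from the paper's model, which additionally treats atmospheric propagation and the receiver path.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def gaussian_pulse(t, fwhm):
    """Gaussian emitted pulse centred at t = 0."""
    sigma = fwhm / 2.355
    return np.exp(-0.5 * (t / sigma) ** 2)

def waveform(ranges, reflectances, t, fwhm=4e-9):
    """Received waveform: sum of pulses delayed by each round-trip time."""
    w = np.zeros_like(t)
    for r, rho in zip(ranges, reflectances):
        delay = 2.0 * r / C
        w += rho * gaussian_pulse(t - delay, fwhm) / r ** 2  # 1/r^2 falloff
    return w

t = np.linspace(0, 200e-9, 4000)
w = waveform([10.0, 15.0], [0.3, 0.8], t)
# A threshold detector would report only the first echo; the full waveform
# keeps both, so later processing can recover the partially occluded surface.
```

A simple peak search on `w` recovers both round-trip delays, which is the point of keeping the full waveform rather than a single thresholded range.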
Large-scale building scenes reconstruction from close-range images based on line and plane feature
NASA Astrophysics Data System (ADS)
Ding, Yi; Zhang, Jianqing
2007-11-01
Automatically generating 3D models of buildings and other man-made structures from images has become a topic of increasing importance; such models may be used in applications such as virtual reality, the entertainment industry and urban planning. In this paper we address the main problems of, and available solutions for, the generation of 3D models from terrestrial images. We first generate a coarse planar model of the principal scene planes and then reconstruct windows to refine the building model. There are several points of novelty: first, we reconstruct the coarse wire-frame model using line segment matching under the epipolar geometry constraint; secondly, we detect the position of all windows in the image and reconstruct them by establishing corner-point correspondences between images, then add the windows to the coarse model to refine the building model. The strategy is illustrated on an image triple of a college building.
3D Printing of Protein Models in an Undergraduate Laboratory: Leucine Zippers
ERIC Educational Resources Information Center
Meyer, Scott C.
2015-01-01
An upper-division undergraduate laboratory experiment is described that explores the structure/function relationship of protein domains, namely leucine zippers, through a molecular graphics computer program and physical models fabricated by 3D printing. By generating solvent accessible surfaces and color-coding hydrophobic, basic, and acidic amino…
Landslide Spreading, Impulse Water Waves and Modelling of the Vajont Rockslide
NASA Astrophysics Data System (ADS)
Crosta, Giovanni B.; Imposimato, Silvia; Roddeman, Dennis
2016-06-01
Landslides can occur in different environments and can interact with or fall into water reservoirs or the open sea with different characteristics. The subaerial evolution and the transition from subaerial to subaqueous conditions can strongly control the landslide evolution and the generated impulse waves, and consequently the final hazard zonation. We model the landslide spreading, the impact with the water surface and the generation of the impulse wave under different 2D and 3D conditions and settings. We verify the capabilities of a fully 2D and 3D FEM ALE approach to model and analyse the near-field evolution. To this aim we validate the code against 2D laboratory experiments for different Froude number conditions (Fr = 1.4, 3.2). Then the Vajont rockslide (Fr = 0.26-0.75) and the consequent impulse wave are simulated in 2D and 3D. The sliding mass is simulated as an elasto-plastic Mohr-Coulomb material and the lake water as a fully inviscid low-compressibility fluid. The rockslide model is validated against field observations, including the total duration, the profile and internal geometry of the final deposit, and the maximum water run-up on the opposite valley flank and on the rockslide mass. 2D models are presented for both the case of a dry valley and that of the impounded lake. The set of fully 3D simulations is the first available that considers the rockslide evolution, propagation and interaction with the water reservoir. Advantages and disadvantages of the modelling approach are discussed.
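The Froude numbers quoted above follow the standard slide Froude number definition, Fr = v / sqrt(g·h), the ratio of slide impact velocity to the shallow-water wave speed. The sketch below applies that formula with assumed (illustrative) velocity and water-depth values; only the formula itself is standard, the numbers are not from the paper.

```python
import math

def froude(v_impact, h_water, g=9.81):
    """Slide Froude number: impact velocity over shallow-water wave speed."""
    return v_impact / math.sqrt(g * h_water)

# Assumed values for illustration: a 25 m/s impact into 200 m of water
# gives Fr ~ 0.56, inside the 0.26-0.75 range cited for Vajont.
fr = froude(v_impact=25.0, h_water=200.0)
```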
A step-by-step development of real-size chest model for simulation of thoracoscopic surgery.
Morikawa, Toshiaki; Yamashita, Makoto; Odaka, Makoto; Tsukamoto, Yo; Shibasaki, Takamasa; Mori, Shohei; Asano, Hisatoshi; Akiba, Tadashi
2017-08-01
For the purpose of simulating thoracoscopic surgery, we have conducted stepwise development of a life-like chest model including the thorax and intrathoracic organs. First, CT data of the human chest were obtained. First-generation model: based on the CT data, each component of the chest was made with a 3D printer. A hard resin was used for the bony thorax and a rubber-like resin for the vessels and bronchi. Lung parenchyma, muscles and skin were not created. Second-generation model: in addition to the 3D printer, a cast moulding method was used. Each part was cast using a 3D printed master and then assembled. The vasculature and bronchi were cast using silicon resin. The lung parenchyma and mediastinum organs were cast using urethane foam. The chest wall and bony thorax were also cast using a silicon resin. Third-generation model: foamed polyvinyl alcohol (PVA) was newly developed and cast to form the lung parenchyma. The vasculature and bronchi were developed using a soft resin. A PVA plate was made as the mediastinum, and all were combined. The first-generation model showed the real distribution of the vasculature and bronchi; it enabled an understanding of the anatomy within the lung. The second-generation model is a total chest dry model, which enabled observation of the total anatomy of the organs and thorax. The third-generation model is a wet organ model. It allowed for realistic simulation of surgical procedures, such as cutting, suturing, stapling and energy device use. This single-use model achieved realistic simulation of thoracoscopic surgery. As the generation advances, the model provides a more realistic simulation of thoracoscopic surgery. Further improvement of the model is needed. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
A quantitative evaluation of the three dimensional reconstruction of patients' coronary arteries.
Klein, J L; Hoff, J G; Peifer, J W; Folks, R; Cooke, C D; King, S B; Garcia, E V
1998-04-01
Through extensive training and experience, angiographers learn to mentally reconstruct the three-dimensional (3D) relationships of the coronary arterial branches. Graphic computer technology can assist angiographers to more quickly visualize the coronary 3D structure from limited initial views and then help to determine additional helpful views by predicting subsequent angiograms before they are obtained. A new computer method for facilitating 3D reconstruction and visualization of human coronary arteries was evaluated by reconstructing biplane left coronary angiograms from 30 patients. The accuracy of the reconstruction was assessed in two ways: 1) by comparing the vessels' centerlines in the actual angiograms with the centerlines of a 2D projection of the 3D model projected at the exact angle of the actual angiogram; and 2) by comparing two 3D models generated from different simultaneous pairs of angiograms. The inter- and intraobserver variability of reconstruction were evaluated by mathematically comparing the 3D model centerlines of repeated reconstructions. The average absolute corrected displacement of 14,662 vessel centerline points in 2D from 30 patients was 1.64 +/- 2.26 mm. The average corrected absolute displacement of 3D models generated from different biplane pairs was 7.08 +/- 3.21 mm. The intraobserver variability of absolute 3D corrected displacement was 5.22 +/- 3.39 mm. The interobserver variability was 6.6 +/- 3.1 mm. The centerline analyses show that the reconstruction algorithm is mathematically accurate and reproducible. The figures presented in this report put these measurement errors into clinical perspective, showing that they yield an accurate representation of the clinically relevant information seen on the actual angiograms. These data show that this technique can be clinically useful by accurately displaying in three dimensions the complex relationships of the branches of the coronary arterial tree.
Bim Automation: Advanced Modeling Generative Process for Complex Structures
NASA Astrophysics Data System (ADS)
Banfi, F.; Fai, S.; Brumana, R.
2017-08-01
The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These technologies help store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric building information models with Non-Uniform Rational B-Splines (NURBS) and multiple levels of detail (Mixed and Reverse LoD), based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure, the courtyard of the West Block on Parliament Hill in Ottawa (Ontario), and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.
Fusion of 3D models derived from TLS and image-based techniques for CH enhanced documentation
NASA Astrophysics Data System (ADS)
Bastonero, P.; Donadio, E.; Chiabrando, F.; Spanò, A.
2014-05-01
Recognizing the various advantages offered by new 3D metric survey technologies in the cultural heritage documentation phase, this paper presents some tests of 3D model generation using different methods, and of their possible fusion. With the aim of defining the potentialities and problems deriving from the integration or fusion of metric data acquired with different survey techniques, the chosen test case is an outstanding cultural heritage item presenting both widespread and specific complexities connected to the conservation of historical buildings. The site is the Staffarda Abbey, the most relevant evidence of medieval architecture in Piedmont. This application faced one of the most topical architectural issues: the opportunity to study and analyze an object as a whole, from two acquisition positions, terrestrial and aerial. In particular, the work evaluates the possibilities deriving from a simple union or from the fusion of different 3D cloud models of the abbey, achieved by multi-sensor techniques. The aerial survey is based on a photogrammetric RPAS (remotely piloted aircraft system) flight, while the terrestrial acquisition was fulfilled by a laser scanning survey. Both techniques allowed different point clouds to be extracted and processed, and consequent continuous 3D models to be generated, which are characterized by different scales, that is, different resolutions and diverse contents of detail and precision. Starting from these models, the proposed process, applied to a sample area of the building, aimed to test the generation of a unique 3D model through a fusion of the different sensor point clouds. The descriptive potential and the metric and thematic gains achievable with the final model exceeded those offered by the two detached models.
MC2-3 / DIF3D Analysis for the ZPPR-15 Doppler and Sodium Void Worth Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Micheal A.; Lell, Richard M.; Lee, Changho
This manuscript covers validation efforts for our deterministic codes at Argonne National Laboratory. The experimental results come from the ZPPR-15 work in 1985-1986, which focused on the accuracy of physics data for the integral fast reactor concept. Results for six loadings are studied in this document, focusing on Doppler sample worths and sodium void worths. The ZPPR-15 loadings are modeled using the MC2-3/DIF3D codes developed and maintained at ANL and the MCNP code from LANL. The deterministic models are generated by processing the as-built geometry information, i.e. the MCNP input, and generating MC2-3 cross section generation instructions and a drawer-homogenized equivalence problem. The Doppler reactivity worth measurements involve small heated samples which insert very small amounts of reactivity into the system (< 2 pcm). The results generated by the MC2-3/DIF3D codes were excellent for ZPPR-15A and ZPPR-15B and good for ZPPR-15D, compared to the MCNP solutions. In all cases, notable improvements were made over the analysis techniques applied to the same problems in 1987. The sodium void worth from MC2-3/DIF3D was quite good at 37.5 pcm, while the MCNP result was 33 pcm and the measured result was 31.5 pcm. Copyright © (2015) by the American Nuclear Society. All rights reserved.
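For orientation on the units above: reactivity worths of this size are conventionally expressed in pcm (per cent mille, 1e-5), obtained from a pair of multiplication factors via the standard conversion. The sketch below applies that conversion; the eigenvalues used are illustrative, not from the ZPPR-15 analysis.

```python
def worth_pcm(k_ref, k_perturbed):
    """Reactivity worth in pcm: rho = (k2 - k1) / (k1 * k2) * 1e5."""
    return (k_perturbed - k_ref) / (k_ref * k_perturbed) * 1e5

# Assumed eigenvalues for illustration: a void that raises k by 33 e-5
# corresponds to roughly 33 pcm, the scale of the voided cases above.
dw = worth_pcm(1.00000, 1.00033)
```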
Using 3D modeling techniques to enhance teaching of difficult anatomical concepts
Pujol, Sonia; Baldwin, Michael; Nassiri, Joshua; Kikinis, Ron; Shaffer, Kitt
2016-01-01
Rationale and Objectives: Anatomy is an essential component of medical education, as it is critical for accurate diagnosis in organs and human systems. The mental representation of the shape and organization of different anatomical structures is a crucial step in the learning process. The purpose of this pilot study is to demonstrate the feasibility and benefits of developing innovative teaching modules for anatomy education of first-year medical students based on 3D reconstructions from actual patient data. Materials and Methods: A total of 196 models of anatomical structures from 16 anonymized CT datasets were generated using the 3D Slicer open-source software platform. The models focused on three anatomical areas: the mediastinum, the upper abdomen and the pelvis. Optional online quizzes were offered to first-year medical students to assess their comprehension in the areas of interest. Specific tasks were designed for students to complete using the 3D models. Results: Scores of the quizzes confirmed a lack of understanding of the 3D spatial relationships of anatomical structures despite standard instruction including dissection. Written task material and qualitative review by students suggested that interaction with the 3D models led to a better understanding of the shape and spatial relationships among structures, and helped illustrate anatomical variations from one body to another. Conclusion: The study demonstrates the feasibility of one possible approach to generating 3D models of the anatomy from actual patient data. The educational materials developed have the potential to supplement the teaching of complex anatomical regions and help demonstrate anatomic variation among patients. PMID:26897601
A Regularized Volumetric Fusion Framework for Large-Scale 3D Reconstruction
NASA Astrophysics Data System (ADS)
Rajput, Asif; Funk, Eugen; Börner, Anko; Hellwich, Olaf
2018-07-01
Modern computational resources combined with low-cost depth sensing systems have enabled mobile robots to reconstruct 3D models of surrounding environments in real time. Unfortunately, low-cost depth sensors are prone to producing undesirable estimation noise in depth measurements, which results in depth outliers or introduces surface deformations in the reconstructed model. Conventional 3D fusion frameworks integrate multiple error-prone depth measurements over time to reduce noise effects; therefore, additional constraints such as steady sensor movement and high frame rates are required for high-quality 3D models. In this paper we propose a generic 3D fusion framework with a controlled regularization parameter which inherently reduces noise at the time of data fusion. This allows the proposed framework to generate high-quality 3D models without enforcing additional constraints. Evaluation of the reconstructed 3D models shows that the proposed framework outperforms state-of-the-art techniques in terms of both absolute reconstruction error and processing time.
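The general idea of fusing noisy depth measurements with a regularization knob can be sketched as a per-voxel weighted running average (the classic TSDF update) with a weight cap. This is an illustrative stand-in for volumetric fusion in general, not the paper's specific regularized framework; all parameters are assumptions.

```python
import numpy as np

def fuse(tsdf, weight, new_tsdf, new_weight, w_max=20.0):
    """Weighted running average of per-voxel signed distances.

    Capping the accumulated weight at w_max bounds the inertia of old
    data, acting as a crude regularization/forgetting parameter.
    """
    w = weight + new_weight
    fused = (tsdf * weight + new_tsdf * new_weight) / w
    return fused, np.minimum(w, w_max)

# Fusing repeated noisy measurements of a true surface distance of 0.0:
rng = np.random.default_rng(0)
d, w = np.zeros(1), np.zeros(1)
for _ in range(100):
    d, w = fuse(d, w, rng.normal(0.0, 0.05, 1), np.ones(1))
# d converges toward the true distance as observations accumulate.
```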
Generation of Functional Thyroid Tissue Using 3D-Based Culture of Embryonic Stem Cells.
Antonica, Francesco; Kasprzyk, Dominika Figini; Schiavo, Andrea Alex; Romitti, Mírian; Costagliola, Sabine
2017-01-01
During the last decade three-dimensional (3D) cultures of pluripotent stem cells have been intensively used to understand morphogenesis and molecular signaling important for the embryonic development of many tissues. In addition, pluripotent stem cells have been shown to be a valid tool for the in vitro modeling of several congenital or chronic human diseases, opening new possibilities to study their physiopathology without using animal models. Even more interestingly, 3D culture has proved to be a powerful and versatile tool to successfully generate functional tissues ex vivo. Using similar approaches, we here describe a protocol for the generation of functional thyroid tissue using mouse embryonic stem cells and give all the details and references for its characterization and analysis both in vitro and in vivo. This model is a valid approach to study the expression and the function of genes involved in the correct morphogenesis of thyroid gland, to elucidate the mechanisms of production and secretion of thyroid hormones and to test anti-thyroid drugs.
NASA Astrophysics Data System (ADS)
Bai, Linge; Widmann, Thomas; Jülicher, Frank; Dahmann, Christian; Breen, David
2013-01-01
Quantifying and visualizing the shape of developing biological tissues provide information about the morphogenetic processes in multicellular organisms. The size and shape of biological tissues depend on the number, size, shape, and arrangement of the constituting cells. To better understand the mechanisms that guide tissues into their final shape, it is important to investigate the cellular arrangement within tissues. Here we present a data processing pipeline to generate 3D volumetric surface models of epithelial tissues, as well as geometric descriptions of the tissues' apical cell cross-sections. The data processing pipeline includes image acquisition, editing, processing and analysis, 2D cell mesh generation, 3D contour-based surface reconstruction, cell mesh projection, followed by geometric calculations and color-based visualization of morphological parameters. As a first application, we have used these procedures to construct a 3D volumetric surface model at cellular resolution of the wing imaginal disc of Drosophila melanogaster. The ultimate goal of the reported effort is to produce tools for the creation of detailed 3D geometric models of the individual cells in epithelial tissues. To date, 3D volumetric surface models of the whole wing imaginal disc have been created, and the apicolateral cell boundaries have been identified, allowing for the calculation and visualization of cell parameters, e.g. apical cross-sectional area of cells. The calculation and visualization of morphological parameters show position-dependent patterns of cell shape in the wing imaginal disc. Our procedures should offer a general data processing pipeline for the construction of 3D volumetric surface models of a wide variety of epithelial tissues.
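One of the geometric descriptors mentioned above, the apical cross-sectional area of a cell, can be computed from the cell's projected 2D boundary polygon with the shoelace formula. The sketch below uses an idealized hexagonal cell for illustration; it is a generic geometric calculation, not the pipeline's exact code.

```python
import math

def polygon_area(pts):
    """Area of a closed 2D polygon from the shoelace formula."""
    a = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        a += x1 * y2 - x2 * y1
    return abs(a) / 2.0

# A regular hexagonal apical cross-section, roughly the shape epithelial
# cells show in packed tissue (circumradius 1, illustrative coordinates):
hexagon = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3))
           for k in range(6)]
area = polygon_area(hexagon)  # area of the unit regular hexagon
```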
Defining Simple nD Operations Based on Prismatic nD Objects
NASA Astrophysics Data System (ADS)
Arroyo Ohori, K.; Ledoux, H.; Stoter, J.
2016-10-01
An alternative to the traditional approaches of separately modelling 2D/3D space, time, scale and other parametrisable characteristics in GIS lies in the higher-dimensional modelling of geographic information, in which a chosen set of non-spatial characteristics, e.g. time and scale, are modelled as extra geometric dimensions perpendicular to the spatial ones, thus creating a higher-dimensional model. While higher-dimensional models are undoubtedly powerful, they are also hard to create and manipulate due to our lack of an intuitive understanding of dimensions higher than three. As a solution to this problem, this paper proposes a methodology that makes nD object generation easier by splitting the creation and manipulation process into three steps: (i) constructing simple nD objects based on nD prismatic polytopes, analogous to prisms in 3D; (ii) defining simple modification operations at the vertex level; and (iii) simple postprocessing to fix errors introduced in the model. As a use case, we show how two sets of operations can be defined and implemented in a dimension-independent manner using this methodology: the most common transformations (i.e. translation, scaling and rotation) and the collapse of objects. The nD objects generated in this manner can then be used as a basis for an nD GIS.
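The dimension-independent, vertex-level transformations mentioned above can be sketched with plain array arithmetic. One detail worth noting: a rotation in nD is defined on a pair of coordinate axes (a plane), which is how rotation generalises beyond 3D. The 4D prism below (a cube extruded along an assumed fourth axis, e.g. time) is illustrative and not taken from the paper.

```python
import numpy as np

def translate(vertices, offset):
    """Shift all nD vertices by a constant offset vector."""
    return vertices + np.asarray(offset)

def scale(vertices, factors):
    """Scale each coordinate axis independently."""
    return vertices * np.asarray(factors)

def rotate(vertices, axis_a, axis_b, angle):
    """Rotate in the plane spanned by two coordinate axes."""
    n = vertices.shape[1]
    r = np.eye(n)
    c, s = np.cos(angle), np.sin(angle)
    r[axis_a, axis_a] = c; r[axis_a, axis_b] = -s
    r[axis_b, axis_a] = s; r[axis_b, axis_b] = c
    return vertices @ r.T

# A 4D "prism": the unit cube extruded along a fourth dimension.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                float)
prism = np.vstack([np.column_stack([cube, np.zeros(8)]),
                   np.column_stack([cube, np.ones(8)])])
moved = rotate(translate(prism, [1, 0, 0, 0]), 0, 3, np.pi / 2)
```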
Probabilistic 3D data fusion for multiresolution surface generation
NASA Technical Reports Server (NTRS)
Manduchi, R.; Johnson, A. E.
2002-01-01
In this paper we present an algorithm for adaptive-resolution integration of 3D data collected from multiple distributed sensors. The input to the algorithm is a set of 3D surface points and associated sensor models. Using a probabilistic rule, a surface probability function is generated that represents the probability that a particular volume of space contains the surface. The surface probability function is represented using an octree data structure; regions of space with samples of large covariance are stored at a coarser level than regions of space containing samples with smaller covariance. The algorithm outputs an adaptive-resolution surface generated by connecting points that lie on the ridge of surface probability with triangles scaled to match the local discretization of space given by the octree. Finally, we present results from 3D data generated by scanning lidar and structure from motion.
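The covariance-to-resolution mapping described above (large-covariance samples stored at coarser octree levels) can be sketched as a simple depth-selection rule: halve the voxel size per level and stop descending once the voxel would be finer than the sample's positional uncertainty. The root size and the specific rule below are assumptions for illustration, not the paper's exact scheme.

```python
def octree_level(sigma, root_size=10.0, max_depth=10):
    """Deepest octree level whose voxel edge still exceeds ~2*sigma."""
    level = 0
    size = root_size
    while level < max_depth and size / 2.0 >= 2.0 * sigma:
        size /= 2.0   # each level halves the voxel edge length
        level += 1
    return level

# A precise sample (sigma = 1 cm) lands deeper than a noisy one (0.5 m):
deep = octree_level(0.01)
coarse = octree_level(0.5)
```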
3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models
Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.
2015-01-01
3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722
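Motion models of the kind referred to above are commonly built by reducing the displacement fields of the breathing phases to a mean plus a few principal components, with a new anatomy state parametrised by mode weights fitted to the kV projections. The sketch below illustrates that generic PCA construction on synthetic data; it is not the authors' exact pipeline, and all sizes and numbers are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_phases, n_voxels = 10, 300

# Synthetic per-voxel displacement over a breathing cycle (flattened fields):
# one dominant sinusoidal mode plus a small amount of noise.
phases = np.sin(np.linspace(0, 2 * np.pi, n_phases))[:, None]
fields = (phases * rng.normal(size=(1, n_voxels))
          + 0.01 * rng.normal(size=(n_phases, n_voxels)))

mean = fields.mean(axis=0)
u, s, vt = np.linalg.svd(fields - mean, full_matrices=False)
modes = vt[:2]  # keep the two dominant eigenmodes

def deform(weights):
    """Displacement field for given mode weights (the model's parameters)."""
    return mean + weights @ modes

# With one dominant mode in the data, the first component should explain
# nearly all of the variance:
explained = s[0] ** 2 / np.sum(s ** 2)
```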
Computer 3D site model generation based on aerial images
NASA Astrophysics Data System (ADS)
Zheltov, Sergey Y.; Blokhinov, Yuri B.; Stepanov, Alexander A.; Skryabin, Sergei V.; Sibiriakov, Alexandre V.
1997-07-01
The technology for 3D model design of real-world scenes and their photorealistic rendering is a current topic of investigation. Development of such technology is attractive for a vast variety of applications: military mission planning, crew training, civil engineering, architecture and virtual reality entertainment, to mention just a few. 3D photorealistic models of urban areas are often discussed now as an upgrade of existing 2D geographic information systems. The possibility of generating site models with small details depends on two main factors: the available source dataset and computing resources. In this paper a PC-based technology is presented, so that scenes of medium resolution (scale 1:1000) can be constructed. The datasets are gray-level aerial stereo pairs of photographs (scale 1:14000) and true-color ground photographs of buildings (scale ca. 1:1000). True-color terrestrial photographs are also necessary for photorealistic rendering, which greatly improves human perception of the scene.
Integration of Point Clouds Dataset from Different Sensors
NASA Astrophysics Data System (ADS)
Abdullah, C. K. A. F. Che Ku; Baharuddin, N. Z. S.; Ariff, M. F. M.; Majid, Z.; Lau, C. L.; Yusoff, A. R.; Idris, K. M.; Aspuri, A.
2017-02-01
Laser scanner technology has become an option in the data collection process nowadays. It comprises airborne laser scanning (ALS) and terrestrial laser scanning (TLS). An ALS such as the Phoenix AL3-32 can provide accurate information from the rooftop viewpoint, while a TLS such as the Leica C10 can provide complete data for the building facade. However, if both are integrated, more accurate data can be produced. The focus of this study is to integrate both types of data acquisition, ALS and TLS, and to determine the accuracy of the data obtained. The final results acquired are used to generate three-dimensional (3D) building models. The scope of this study is data acquisition of the UTM Eco-home through laser scanning methods: ALS scanning of the roof and TLS scanning of the building facade. Both devices are used to ensure that no part of the building is left unscanned. In the data integration process, both datasets are registered using points selected among man-made features that are clearly visible, in the Cyclone 7.3 software. The accuracy of the integrated data is determined by an accuracy assessment carried out using man-made registration methods. The integration process achieves an accuracy below 0.04 m. The integrated data is then used to generate a 3D model of the UTM Eco-home building using the SketchUp software. In conclusion, the combination of ALS and TLS data acquisition produces accurate integrated data that can be used to generate a 3D model of the UTM Eco-home. For visualization purposes, the generated 3D building model is prepared at Level of Detail 3 (LOD3), as recommended by the City Geography Markup Language (CityGML).
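Registering two point-cloud datasets from manually selected common points, as described above, typically reduces to estimating a rigid transform from matched tie points. The sketch below uses the standard SVD-based (Kabsch) least-squares solution on synthetic correspondences; the data are illustrative and this is not necessarily what the Cyclone software does internally.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    h = (src - cs).T @ (dst - cd)          # cross-covariance of centred points
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

# Synthetic tie points: a known rotation about z plus a translation.
rng = np.random.default_rng(2)
src = rng.normal(size=(6, 3))
theta = 0.3
r_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ r_true.T + np.array([1.0, -2.0, 0.5])
r, t = rigid_align(src, dst)   # recovers r_true and the translation
```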
Meisner, Eric M; Hager, Gregory D; Ishman, Stacey L; Brown, David; Tunkel, David E; Ishii, Masaru
2013-11-01
To evaluate the accuracy of three-dimensional (3D) airway reconstructions obtained using quantitative endoscopy (QE). We developed this novel technique to reconstruct precise 3D representations of airway geometries from endoscopic video streams. The method, based on machine vision methodologies, post-processes the standard videos obtained during routine laryngoscopy and bronchoscopy. We hypothesize that this method is precise and will generate assessments of airway size and shape similar to those obtained using computed tomography (CT). This study was approved by the institutional review board (IRB). We analyzed video sequences from pediatric patients receiving rigid bronchoscopy. We generated 3D scaled airway models of the subglottis, trachea, and carina using QE. These models were compared to 3D airway models generated from CT. We used the CT data as the gold-standard measure of airway size, and used a mixed linear model to estimate the average error in cross-sectional area and effective diameter for QE. The average error in cross-sectional area (area sliced perpendicular to the long axis of the airway) was 7.7 mm² (variance 33.447 mm⁴). The average error in effective diameter was 0.38775 mm (variance 2.45 mm²), approximately 9% error. Our pilot study suggests that QE can be used to generate precise 3D reconstructions of airways. This technique is atraumatic, does not require ionizing radiation, and integrates easily into standard airway assessment protocols. We conjecture that this technology will be useful for staging airway disease and assessing surgical outcomes. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
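The effective diameter reported above is the diameter of a circle with the same cross-sectional area, d_eff = 2·sqrt(A/π). The sketch below also shows how an area error of 7.7 mm² maps to a diameter shift around an assumed (illustrative) true airway area; only the formula is standard, the example area is not from the study.

```python
import math

def effective_diameter(area_mm2):
    """Diameter of the circle with the same cross-sectional area."""
    return 2.0 * math.sqrt(area_mm2 / math.pi)

a_true = 60.0                      # assumed true cross-sectional area, mm^2
d_true = effective_diameter(a_true)
# Diameter shift produced by the reported 7.7 mm^2 average area error:
d_err = effective_diameter(a_true + 7.7) - d_true
```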
Creation of a 3D printed temporal bone model from clinical CT data.
Cohen, Joss; Reyes, Samuel A
2015-01-01
To generate and describe the process of creating a 3D printed, rapid prototype temporal bone model from clinical-quality CT images. We describe a technique to create an accurate, alterable, and reproducible rapid prototype temporal bone model using freely available software to segment clinical CT data and generate three different 3D models composed of ABS plastic. Each model was evaluated based on the appearance and size of anatomical structures and its response to surgical drilling. Mastoid air cells retained scaffolding material in the initial versions, which required modifying the model to allow drainage of the scaffolding material. External auditory canal dimensions were similar to those measured from the clinical data. The malleus, incus, oval window, round window, promontory, horizontal semicircular canal, and mastoid segment of the facial nerve canal were identified in all models. The stapes was only partially formed in two models and absent in the third. The qualitative feel of the ABS plastic was softer than bone. The pate produced by drilling was similar to bone dust when appropriate irrigation was used. We present a rapid prototype temporal bone model based on clinical CT data using 3D printing technology. The model can be made quickly and inexpensively enough to have potential applications for educational training. Copyright © 2015 Elsevier Inc. All rights reserved.
Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images
NASA Astrophysics Data System (ADS)
Amami, Amal; Ben Azouz, Zouhour
2013-12-01
Automatic segmentation and 3D modeling of the knee joint from MR images is a challenging task. Most of the existing techniques require the tedious manual segmentation of a training set of MRIs. We present an approach that requires the manual segmentation of only one MR image. It is based on a volumetric active appearance model (AAM). First, a dense tetrahedral mesh is automatically created on a reference MR image that is arbitrarily selected. Second, a pairwise non-rigid registration between each MRI from a training set and the reference MRI is computed. The non-rigid registration is based on a piece-wise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all the MR images into correspondence. An average image and tetrahedral mesh, as well as a set of main modes of variation, are generated using the established correspondence. Any manual segmentation of the average MRI can be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D reconstructions of the surface as well as a 3D solid model of the knee joint. The generated surfaces and tetrahedral meshes have the interesting property of fulfilling a correspondence between different MR images. This paper shows preliminary results of the proposed approach. It demonstrates the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.
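The core of a piece-wise affine deformation over a tetrahedral mesh is that each point is expressed in barycentric coordinates of its enclosing tetrahedron and then carried to the deformed tetrahedron's vertices. A minimal sketch of that mapping (not the paper's implementation):

```python
import numpy as np

def barycentric(p, tet):
    """Barycentric coordinates of point p in tetrahedron tet (4x3 vertex
    array). Solving T w = p - v0 gives the weights of vertices 1..3; the
    weight of vertex 0 is the remainder so that all four sum to one."""
    tet = np.asarray(tet, dtype=float)
    T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    w = np.linalg.solve(T, np.asarray(p, dtype=float) - tet[0])
    return np.array([1.0 - w.sum(), *w])

def map_point(p, tet_src, tet_dst):
    """Piece-wise affine map: carry p's barycentric weights in the source
    tetrahedron over to the deformed (destination) tetrahedron."""
    return barycentric(p, tet_src) @ np.asarray(tet_dst, dtype=float)
```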
Unstructured 3D Delaunay mesh generation applied to planes, trains and automobiles
NASA Technical Reports Server (NTRS)
Blake, Kenneth R.; Spragle, Gregory S.
1993-01-01
Technical issues associated with domain-tessellation production, including initial boundary node triangulation and volume mesh refinement, are presented for the 'TGrid' 3D Delaunay unstructured grid generation program. The approach employed is noted to be capable of preserving predefined triangular surface facets in the final tessellation. The capabilities of the approach are demonstrated by generating grids about an entire fighter aircraft configuration, a train, and a wind tunnel model of an automobile.
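A minimal, unconstrained 3D Delaunay tetrahedralization of a node cloud can be produced with SciPy (unlike TGrid, this sketch does not preserve predefined triangular surface facets or perform volume refinement):

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((50, 3))       # 50 random nodes in the unit cube
tess = Delaunay(pts)            # Delaunay tetrahedralization of the nodes
# tess.simplices has shape (n_tets, 4): vertex indices of each tetrahedron
```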
Astashkina, Anna; Grainger, David W
2014-04-01
Drug failure due to toxicity indicators remains among the primary reasons for staggering drug attrition rates during clinical studies and post-marketing surveillance. Broader validation and use of next-generation, improved 3-D cell culture models are expected to improve the predictive power and effectiveness of drug toxicological predictions. However, after decades of promising research, significant gaps remain in our collective ability to extract quality human toxicity information from in vitro data using 3-D cell and tissue models. Issues, challenges, and future directions for the field to improve drug assay predictive power and the reliability of 3-D models are reviewed. Copyright © 2014 Elsevier B.V. All rights reserved.
Using Openstreetmap Data to Generate Building Models with Their Inner Structures for 3d Maps
NASA Astrophysics Data System (ADS)
Wang, Z.; Zipf, A.
2017-09-01
With the development of Web 2.0, more and more data related to indoor environments has been collected within the volunteered geographic information (VGI) framework, which creates a need for constructing indoor environments from VGI. In this study, we focus on generating 3D building models from OpenStreetMap (OSM) data, and provide an approach to support the construction and visualization of indoor environments on 3D maps. In this paper, we present an algorithm which can extract building information from OSM data and construct building structures as well as inner building components (e.g., doors, rooms, and windows). A web application is built to support the processing and visualization of the building models on a 3D map. We test our approach with an indoor dataset collected from the field. The results show the feasibility of our approach and its potential to support a wide range of applications, such as indoor and outdoor navigation, urban planning, and incident management.
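A first step of such an extraction is separating building footprints from indoor components by their OSM tags. A simplified sketch (the element dicts mimic OSM's JSON shape; the indoor tag values follow OSM's Simple Indoor Tagging convention, and the paper's actual filtering rules may differ):

```python
def extract_buildings(osm_elements):
    """Split OSM-style elements into building footprints (tagged 'building')
    and indoor components (tagged indoor=room/door/window/corridor)."""
    buildings, indoor = [], []
    for el in osm_elements:
        tags = el.get("tags", {})
        if "building" in tags:
            buildings.append(el)
        elif tags.get("indoor") in {"room", "door", "window", "corridor"}:
            indoor.append(el)
    return buildings, indoor
```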
Villa, C; Olsen, K B; Hansen, S H
2017-09-01
Post-mortem CT scanning (PMCT) was introduced at several forensic medicine institutions many years ago and has proved to be a useful tool. 3D models of bones, skin, internal organs, and bullet paths can rapidly be generated using post-processing software. These 3D models reflect the individual physiognomy and can be used to create whole-body 3D virtual animations. In this way, virtual reconstructions of the probable ante-mortem postures of victims can be constructed and contribute to understanding the sequence of events. This procedure is demonstrated in two victims of gunshot injuries. Case #1 was a man showing three perforating gunshot wounds, who died due to the injuries of the incident. Whole-body PMCT was performed, and 3D reconstructions of bones, relevant internal organs, and bullet paths were generated. Using 3ds Max software and a human anatomy 3D model, a virtual animated body was built and probable ante-mortem postures visualized. Case #2 was a man presenting three perforating gunshot wounds, who survived the incident: one in the left arm and two in the thorax. Only CT scans of the thorax, abdomen, and the injured arm were provided by the hospital. Therefore, a whole-body 3D model reflecting the anatomical proportions of the patient was made by combining the actual bones of the victim with those obtained from the human anatomy 3D model. The resulting 3D model was used for the animation process. Several probable postures were also visualized in this case. It has been shown that in Case #1 the lesions and the bullet path were not consistent with an upright standing position; instead, the victim was slightly bent forward, i.e., he was sitting or running when he was shot. In Case #2, one of the bullets could have passed through the arm and continued into the thorax. In conclusion, specialized 3D modelling and animation techniques allow for the reconstruction of ante-mortem postures based on both PMCT and clinical CT. Copyright © 2017 Elsevier B.V. All rights reserved.
Spear, Ashley D.; Hochhalter, Jacob D.; Cerrone, Albert R.; ...
2016-04-27
In an effort to reproduce computationally the observed evolution of microstructurally small fatigue cracks (MSFCs), a method is presented for generating conformal, finite-element (FE), volume meshes from 3D measurements of MSFC propagation. The resulting volume meshes contain traction-free surfaces that conform to incrementally measured 3D crack shapes. Grain morphologies measured using near-field high-energy X-ray diffraction microscopy are also represented within the FE volume meshes. Proof-of-concept simulations are performed to demonstrate the utility of the mesh-generation method. The proof-of-concept simulations employ a crystal-plasticity constitutive model and are performed using the conformal FE meshes corresponding to successive crack-growth increments. Although the simulations for each crack increment are currently independent of one another, they need not be, and transfer of material-state information among successive crack-increment meshes is discussed. The mesh-generation method was developed using post-mortem measurements, yet it is general enough that it can be applied to in-situ measurements of 3D MSFC propagation.
NASA Astrophysics Data System (ADS)
Kuchler, Klaus; Westhoff, Daniel; Feinauer, Julian; Mitsch, Tim; Manke, Ingo; Schmidt, Volker
2018-04-01
It is well known that the microstructure of electrodes in lithium-ion batteries strongly affects their performance. Vice versa, the microstructure can exhibit strong changes during the usage of the battery due to aging effects. For a better understanding of these effects, mathematical analysis and modeling have turned out to be of great help. In particular, stochastic 3D microstructure models have proven to be a powerful and very flexible tool to generate various kinds of particle-based structures. Recently, such models have been proposed for the microstructure of anodes in lithium-ion energy and power cells. In the present paper, we describe a stochastic modeling approach for the 3D microstructure of cathodes in a lithium-ion energy cell, which differs significantly from that observed in anodes. The model for the cathode data extends the ideas of the anode models developed so far. It is calibrated using 3D tomographic image data from pristine as well as two aged cathodes. A validation based on morphological image characteristics shows that the model is able to realistically describe the microstructures of both pristine and aged cathodes. Thus, we conclude that the model is suitable to generate virtual, but realistic, microstructures of lithium-ion cathodes.
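A far simpler toy than the paper's calibrated model, but in the same spirit of stochastic particle-based structure generation, is a boolean sphere model: random particle centres with random radii, voxelized to estimate the solid volume fraction (all parameters below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, box = 40, 1.0                                 # particles, box edge length
centers = rng.random((n, 3)) * box               # uniform random centres
radii = rng.gamma(shape=5.0, scale=0.01, size=n) # particle size distribution

# voxelize on a 32^3 grid and estimate the solid volume fraction
g = np.linspace(0.0, box, 32)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
solid = np.zeros(X.shape, dtype=bool)
for c, r in zip(centers, radii):
    solid |= (X - c[0])**2 + (Y - c[1])**2 + (Z - c[2])**2 <= r**2
phi = solid.mean()                               # solid phase volume fraction
```

Morphological characteristics such as this volume fraction are the kind of image statistics used to validate the model against tomographic data.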
NASA Astrophysics Data System (ADS)
Rodgers, A. J.; Pitarka, A.; Petersson, N. A.; Sjogreen, B.; McCallen, D.; Miah, M.
2016-12-01
Simulation of earthquake ground motions is becoming more widely used due to improvements in numerical methods, development of ever more efficient computer programs (codes), and growth in and access to High-Performance Computing (HPC). We report on how SW4 can be used for accurate and efficient simulations of earthquake strong motions. SW4 is an anelastic finite difference code based on a fourth-order summation-by-parts displacement formulation. It is parallelized and can run on one or many processors. SW4 has many desirable features for seismic strong motion simulation: incorporation of surface topography; automatic mesh generation; mesh refinement; attenuation; and supergrid boundary conditions. It also has several ways to introduce 3D models and sources (including the Standard Rupture Format for extended sources). We are using SW4 to simulate strong ground motions for several applications. We are performing parametric studies of near-fault motions: from moderate earthquakes to investigate basin-edge-generated waves, and from large earthquakes to provide motions for engineers studying building response. We show that 3D propagation near basin edges can generate significant amplifications relative to 1D analysis. SW4 is also being used to model earthquakes in the San Francisco Bay Area. This includes modeling moderate (M3.5-5) events to evaluate the United States Geological Survey's 3D model of regional structure, as well as strong motions from the 2014 South Napa earthquake and possible large scenario events. Recently SW4 was built on a Commodity Technology Systems-1 (CTS-1) machine at LLNL, one of the new systems for capacity computing at the DOE National Labs. We find SW4 scales well and runs faster on these systems compared to the previous generation of Linux clusters.
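The displacement-formulation finite difference idea can be illustrated with a toy 1D, second-order stencil, which is a drastically simplified analogue of SW4's fourth-order, 3D, anelastic scheme (grid size, wave speed, and CFL factor below are arbitrary choices):

```python
import numpy as np

nx, nt, c, dx = 200, 300, 1.0, 1.0
dt = 0.5 * dx / c                     # CFL-stable time step (factor 0.5)
u_prev = np.zeros(nx)
u = np.zeros(nx)
u[nx // 2] = 1.0                      # initial displacement pulse

for _ in range(nt):
    lap = np.zeros(nx)                # discrete Laplacian, fixed boundaries
    lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
    # leapfrog update of the displacement wave equation u_tt = c^2 u_xx
    u_next = 2.0 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next
```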
Deep neural network using color and synthesized three-dimensional shape for face recognition
NASA Astrophysics Data System (ADS)
Rhee, Seon-Min; Yoo, ByungIn; Han, Jae-Joon; Hwang, Wonjun
2017-03-01
We present an approach for face recognition using synthesized three-dimensional (3-D) shape information together with two-dimensional (2-D) color in a deep convolutional neural network (DCNN). As 3-D facial shape is hardly affected by the extrinsic 2-D texture changes caused by illumination, make-up, and occlusions, it could provide more reliable complementary features in harmony with the 2-D color feature in face recognition. Unlike other approaches that use 3-D shape information with the help of an additional depth sensor, our approach generates a personalized 3-D face model by using only face landmarks in the 2-D input image. Using the personalized 3-D face model, we generate a frontalized 2-D color facial image as well as 3-D facial images (e.g., a depth image and a normal image). In our DCNN, we first feed 2-D and 3-D facial images into independent convolutional layers, where the low-level kernels are successfully learned according to their own characteristics. Then, we merge them and feed into higher-level layers under a single deep neural network. Our proposed approach is evaluated with the Labeled Faces in the Wild dataset, and the results show that the error rate of the verification rate at a false acceptance rate of 1% is improved by up to 32.1% compared with the baseline where only a 2-D color image is used.
de Hoogt, Ronald; Estrada, Marta F; Vidic, Suzana; Davies, Emma J; Osswald, Annika; Barbier, Michael; Santo, Vítor E; Gjerde, Kjersti; van Zoggel, Hanneke J A A; Blom, Sami; Dong, Meng; Närhi, Katja; Boghaert, Erwin; Brito, Catarina; Chong, Yolanda; Sommergruber, Wolfgang; van der Kuip, Heiko; van Weerden, Wytske M; Verschuren, Emmy W; Hickman, John; Graeser, Ralph
2017-11-21
Two-dimensional (2D) culture of cancer cells in vitro does not recapitulate the three-dimensional (3D) architecture, heterogeneity and complexity of human tumors. More representative models are required that better reflect key aspects of tumor biology. These are essential for studies of cancer biology and immunology, as well as for target validation and drug discovery. The Innovative Medicines Initiative (IMI) consortium PREDECT (www.predect.eu) characterized in vitro models of three solid tumor types with the goal to capture elements of tumor complexity and heterogeneity. 2D culture and 3D mono- and stromal co-cultures of increasing complexity, and precision-cut tumor slice models were established. Robust protocols for the generation of these platforms are described. Tissue microarrays were prepared from all the models, permitting immunohistochemical analysis of individual cells, capturing heterogeneity. 3D cultures were also characterized using image analysis. Detailed step-by-step protocols, exemplary datasets from the 2D, 3D, and slice models, and refined analytical methods were established and are presented.
Receptor-based 3D-QSAR in Drug Design: Methods and Applications in Kinase Studies.
Fang, Cheng; Xiao, Zhiyan
2016-01-01
Receptor-based 3D-QSAR strategy represents a superior integration of structure-based drug design (SBDD) and three-dimensional quantitative structure-activity relationship (3D-QSAR) analysis. It combines the accurate prediction of ligand poses by the SBDD approach with the good predictability and interpretability of statistical models derived from the 3D-QSAR approach. Extensive efforts have been devoted to the development of receptor-based 3D-QSAR methods, and two alternative approaches have been exploited. One involves computing the binding interactions between a receptor and a ligand to generate structure-based descriptors for QSAR analyses. The other concerns the application of various docking protocols to generate optimal ligand poses so as to provide reliable molecular alignments for the conventional 3D-QSAR operations. This review highlights new concepts and methodologies recently developed in the field of receptor-based 3D-QSAR and, in particular, covers its application in kinase studies.
US National Large-scale City Orthoimage Standard Initiative
Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.
2003-01-01
The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (developed in the early 1920s), quarter-quadrangle-centered (3.75 minutes of longitude and latitude in geographic extent) 1:40,000 aerial photographs, and 2.5D digital elevation models. However, large-scale city orthophotos produced with these early procedures have revealed many shortcomings, e.g., ghost images, occlusions, and shadows. Thus, providing the technical base (algorithms, procedures) and the experience needed for large-scale city digital orthophoto creation is essential for the forthcoming national deployment of large-scale digital orthophotos and for the revision of the Standards for National Large-scale City Digital Orthophoto in the NDOP. This paper reports our initial research results as follows: (1) high-precision 3D city DSM generation through LIDAR data processing; (2) spatial object/feature extraction using surface material information and high-accuracy 3D DSM data; (3) 3D city model development; (4) algorithm development for generation of DTM-based and DBM-based orthophotos; (5) true orthophoto generation by merging DBM-based and DTM-based orthophotos; and (6) automatic mosaicking by optimizing and combining imagery from many perspectives.
NASA Astrophysics Data System (ADS)
Xu, Yi; Rose, Kenneth A.; Chai, Fei; Chavez, Francisco P.; Ayón, Patricia
2015-11-01
We used a 3-dimensional individual-based model (3-D IBM) of Peruvian anchovy to examine how spatial variation in environmental conditions affects larval and juvenile growth, survival, and recruitment. Temperature, velocity, and phytoplankton and zooplankton concentrations generated from a coupled hydrodynamic Nutrients-Phytoplankton-Zooplankton-Detritus (NPZD) model, mapped to a three-dimensional rectangular grid, were used to simulate anchovy populations. The IBM simulated individuals as they progressed from eggs to recruitment at 10 cm. Eggs and yolk-sac larvae were followed hourly through the processes of development, mortality, and movement (advection), and larvae and juveniles were followed daily through the processes of growth, mortality, and movement (advection plus behavior). A bioenergetics model was used to grow larvae and juveniles. The NPZD model provided prey fields, which influence both food consumption rate and behavior-mediated movement, with individuals moving to grid cells having optimal growth conditions. We compared predicted recruitment for monthly cohorts for 1990 through 2004 between the full 3-D IBM and a point (0-D) model that used spatially averaged environmental conditions. The 3-D and 0-D versions generated similar interannual patterns in monthly recruitment for 1991-2004, with the 3-D results yielding consistently higher survivorship. Both versions successfully captured the very poor recruitment during the 1997-1998 El Niño event. Higher recruitment in the 3-D simulations was due to higher survival during the larval stage, resulting from individuals searching for more favorable temperatures that led to faster growth rates. The strong effect of temperature arose because both model versions provided saturating food conditions for larval and juvenile anchovies.
We conclude with a discussion of how explicit treatment of spatial variation affected simulated recruitment, other examples of fisheries modeling analyses that have used a similar approach to assess the influence of spatial variation, and areas for further model development.
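The behavior-mediated movement rule, individuals relocating toward grid cells with locally optimal growth conditions, can be sketched as a search over the 3x3x3 neighbourhood of a cell (the function and the growth field are illustrative assumptions; the paper's actual movement sub-model is not specified here):

```python
import numpy as np

def best_neighbor(growth_field, i, j, k):
    """Index of the cell in the 3x3x3 neighbourhood of (i, j, k) with the
    highest predicted growth; a sketch of behaviour-mediated movement."""
    best, best_idx = -np.inf, (i, j, k)
    ni, nj, nk = growth_field.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                a, b, c = i + di, j + dj, k + dk
                if 0 <= a < ni and 0 <= b < nj and 0 <= c < nk:
                    if growth_field[a, b, c] > best:
                        best, best_idx = growth_field[a, b, c], (a, b, c)
    return best_idx
```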
3D texture analysis for classification of second harmonic generation images of human ovarian cancer
NASA Astrophysics Data System (ADS)
Wen, Bruce; Campbell, Kirby R.; Tilbury, Karissa; Nadiarnykh, Oleg; Brewer, Molly A.; Patankar, Manish; Singh, Vikas; Eliceiri, Kevin W.; Campagnola, Paul J.
2016-10-01
Remodeling of the collagen architecture in the extracellular matrix (ECM) has been implicated in ovarian cancer. To quantify these alterations, we implemented a form of 3D texture analysis to delineate the fibrillar morphology observed in 3D Second Harmonic Generation (SHG) microscopy image data of normal (1) and high risk (2) ovarian stroma, benign ovarian tumors (3), low grade (4) and high grade (5) serous tumors, and endometrioid tumors (6). We developed a tailored set of 3D filters which extract textural features in the 3D image sets to build (or learn) statistical models of each tissue class. By applying k-nearest neighbor classification using these learned models, we achieved 83-91% accuracies for the six classes. The 3D method outperformed the analogous 2D classification on the same tissues, where we suggest this is due to the increased information content. This classification based on ECM structural changes will complement conventional classification based on genetic profiles and can serve as an additional biomarker. Moreover, the texture analysis algorithm is quite general, as it does not rely on single morphological metrics such as fiber alignment, length, and width but their combined convolution with a customizable basis set.
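The k-nearest-neighbor step can be sketched with a hand-rolled classifier over texture-feature vectors (the feature vectors and metric below are stand-ins; the paper's actual features come from its tailored 3D filter bank):

```python
import numpy as np

def knn_classify(train_X, train_y, query, k=5):
    """Majority vote among the k training vectors nearest to the query
    (Euclidean distance in texture-feature space)."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```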
Borrel, Alexandre; Fourches, Denis
2017-12-01
There is a growing interest for the broad use of Augmented Reality (AR) and Virtual Reality (VR) in the fields of bioinformatics and cheminformatics to visualize complex biological and chemical structures. AR and VR technologies allow for stunning and immersive experiences, offering untapped opportunities for both research and education purposes. However, preparing 3D models ready to use for AR and VR is time-consuming and requires a technical expertise that severely limits the development of new contents of potential interest for structural biologists, medicinal chemists, molecular modellers and teachers. Herein we present the RealityConvert software tool and associated website, which allow users to easily convert molecular objects to high quality 3D models directly compatible for AR and VR applications. For chemical structures, in addition to the 3D model generation, RealityConvert also generates image trackers, useful to universally call and anchor that particular 3D model when used in AR applications. The ultimate goal of RealityConvert is to facilitate and boost the development and accessibility of AR and VR contents for bioinformatics and cheminformatics applications. http://www.realityconvert.com. dfourch@ncsu.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Sander, Ian M; McGoldrick, Matthew T; Helms, My N; Betts, Aislinn; van Avermaete, Anthony; Owers, Elizabeth; Doney, Evan; Liepert, Taimi; Niebur, Glen; Liepert, Douglas; Leevy, W Matthew
2017-07-01
Advances in three-dimensional (3D) printing allow for digital files to be turned into a "printed" physical product. For example, complex anatomical models derived from clinical or pre-clinical X-ray computed tomography (CT) data of patients or research specimens can be constructed using various printable materials. Although 3D printing has the potential to advance learning, many academic programs have been slow to adopt its use in the classroom despite increased availability of the equipment and digital databases already established for educational use. Herein, a protocol is reported for the production of enlarged bone core and accurate representation of human sinus passages in a 3D printed format using entirely consumer-grade printers and a combination of free-software platforms. The comparative resolutions of three surface rendering programs were also determined using the sinuses, a human body, and a human wrist data files to compare the abilities of different software available for surface map generation of biomedical data. Data shows that 3D Slicer provided highest compatibility and surface resolution for anatomical 3D printing. Generated surface maps were then 3D printed via fused deposition modeling (FDM printing). In conclusion, a methodological approach that explains the production of anatomical models using entirely consumer-grade, fused deposition modeling machines, and a combination of free software platforms is presented in this report. The methods outlined will facilitate the incorporation of 3D printed anatomical models in the classroom. Anat Sci Educ 10: 383-391. © 2017 American Association of Anatomists.
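The handoff between surface map generation and FDM printing is typically a triangle mesh file such as STL. A minimal ASCII STL writer (a sketch; real pipelines export binary STL with computed facet normals, and the function name here is an assumption):

```python
def write_ascii_stl(path, triangles, name="model"):
    """Write triangles [(v0, v1, v2), ...], each vertex an (x, y, z) tuple,
    as an ASCII STL file, a common interchange format for FDM printing.
    Facet normals are left as zeros; most slicers recompute them."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for v0, v1, v2 in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")
```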
Urschler, Martin; Höller, Johannes; Bornik, Alexander; Paul, Tobias; Giretzlehner, Michael; Bischof, Horst; Yen, Kathrin; Scheurer, Eva
2014-08-01
The increasing use of CT/MR devices in forensic analysis motivates the need to present forensic findings from different sources in an intuitive reference visualization, with the aim of combining 3D volumetric images along with digital photographs of external findings into a 3D computer graphics model. This model allows a comprehensive presentation of forensic findings in court and enables comparative evaluation studies correlating data sources. The goal of this work was to investigate different methods to generate anonymous and patient-specific 3D models which may be used as reference visualizations. The issue of registering 3D volumetric as well as 2D photographic data to such 3D models is addressed to provide an intuitive context for injury documentation from arbitrary modalities. We present an image processing and visualization work-flow, discuss the major parts of this work-flow, compare the different reference models investigated, and show a number of case studies that underline the suitability of the proposed work-flow for presenting forensically relevant information in 3D visualizations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Symbolic modeling of human anatomy for visualization and simulation
NASA Astrophysics Data System (ADS)
Pommert, Andreas; Schubert, Rainer; Riemer, Martin; Schiemann, Thomas; Tiede, Ulf; Hoehne, Karl H.
1994-09-01
Visualization of human anatomy in a 3D atlas requires both spatial and more abstract symbolic knowledge. Within our 'intelligent volume' model, which integrates these two levels, we developed and implemented a semantic network model for describing human anatomy. Concepts for structuring (abstraction levels, domains, views, generic and case-specific modeling, inheritance) are introduced. The model, tools for its generation and exploration, and applications in our 3D anatomical atlas are presented and discussed.
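A semantic network with inheritance can be sketched as concept nodes linked by "is-a" (attribute inheritance) and "part-of" (structural decomposition) relations; the class design and the anatomical names below are illustrative assumptions, not the authors' data model:

```python
class Concept:
    """Node in a tiny semantic network: 'is-a' links give attribute
    inheritance; 'part-of' links give structural decomposition."""
    def __init__(self, name, is_a=None, **attrs):
        self.name, self.is_a, self.attrs = name, is_a, attrs
        self.parts = []

    def add_part(self, part):
        self.parts.append(part)          # 'part-of' relation

    def attribute(self, key):
        """Look up an attribute, walking up the is-a chain (inheritance)."""
        node = self
        while node is not None:
            if key in node.attrs:
                return node.attrs[key]
            node = node.is_a
        return None

# toy fragment of an anatomical ontology
bone = Concept("bone", tissue="osseous")
temporal_bone = Concept("temporal bone", is_a=bone)  # inherits tissue type
skull = Concept("skull")
skull.add_part(temporal_bone)
```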
Template-free modeling by LEE and LEER in CASP11.
Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-free modeling of human targets of CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as the input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on the clustering of the server models aided by a community detection method of a server-model network. Restraining energy terms generated from the selected templates together with physical and statistical energy terms were used to build 3D models. Side-chains of the 3D models were rebuilt using target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was due to efficient server model screening. The average backbone accuracy of selected server models was similar to that of top 30% server models. The second factor was that a proper energy function along with our optimization method guided us, so that we successfully generated better quality models than the input template models. In 10 out of 24 cases, better backbone structures than the best of input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereo-chemistry compared to LEE models, and they were comparable to LEE models in the backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc.
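Clustering server models via a similarity network can be sketched as grouping models into connected components of a thresholded similarity graph; this is a simplified stand-in for the modularity-based community detection the protocol actually uses, and the threshold value is an arbitrary assumption:

```python
def cluster_models(similarity, threshold=0.7):
    """Cluster models by connected components of the graph whose edges join
    model pairs with similarity >= threshold (union-find over pairs)."""
    n = len(similarity)
    labels = list(range(n))

    def find(i):
        while labels[i] != i:
            labels[i] = labels[labels[i]]   # path compression
            i = labels[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if similarity[i][j] >= threshold:
                labels[find(i)] = find(j)   # union the two components
    return [find(i) for i in range(n)]
```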
Space Partitioning for Privacy Enabled 3D City Models
NASA Astrophysics Data System (ADS)
Filippovska, Y.; Wichmann, A.; Kada, M.
2016-10-01
Due to recent technological progress, the capture and processing of highly detailed 3D data has become widespread. Despite all the prospects of potential uses, data that include personal living spaces and public buildings can also be considered a serious intrusion into people's privacy and a threat to security. This becomes especially critical if the data are visible to the general public. Thus, a compromise is needed between open access to data and privacy requirements, which can differ greatly between applications. As privacy is a complex and versatile topic, the focus of this work lies particularly on the visualization of 3D urban data sets. For privacy enabled visualizations of 3D city models, we propose partitioning the (living) spaces into privacy regions, each featuring its own level of anonymity. Within each region, the depicted 2D and 3D geometry and imagery are anonymized with cartographic generalization techniques. The underlying spatial partitioning is realized as a 2D map generated as a straight skeleton of the open space between buildings. The resulting privacy cells are then merged according to the privacy requirements associated with each building to form larger regions, their borderlines are smoothed, and transition zones are established between privacy regions for a harmonious visual appearance. We demonstrate by example how the proposed method generates privacy enabled 3D city models.
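One generalization step in the spirit of the above can be sketched simply: coarsening a footprint by snapping its vertices to a grid whose spacing grows with the region's privacy level. This is not the paper's straight-skeleton method, just a toy illustration; the coordinates and levels are hypothetical.

```python
# Toy cartographic generalization: snap footprint vertices to a grid
# whose spacing doubles with each privacy level (illustrative only).

def generalize(footprint, privacy_level, base=1.0):
    # Higher privacy level -> coarser grid -> stronger anonymization.
    step = base * (2 ** privacy_level)
    snapped = [(round(x / step) * step, round(y / step) * step)
               for x, y in footprint]
    # Drop consecutive duplicates introduced by snapping.
    out = []
    for p in snapped:
        if not out or p != out[-1]:
            out.append(p)
    return out

poly = [(0.2, 0.1), (3.9, 0.2), (4.1, 2.1), (0.1, 1.8)]
print(generalize(poly, privacy_level=1))   # snapped to a 2 m grid
```

At higher levels the footprint degenerates toward a coarse block, which is exactly the anonymization effect one wants per region.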
NASA Technical Reports Server (NTRS)
Turner, Mark G.; Reed, John A.; Ryder, Robert; Veres, Joseph P.
2004-01-01
A Zero-D cycle simulation of the GE90-94B high bypass turbofan engine has been achieved utilizing mini-maps generated from a high-fidelity simulation. The simulation utilizes the Numerical Propulsion System Simulation (NPSS) thermodynamic cycle modeling system coupled to a high-fidelity full-engine model represented by a set of coupled 3D computational fluid dynamics (CFD) component models. Boundary conditions from the balanced, steady-state cycle model are used to define component boundary conditions in the full-engine model. Operating characteristics of the 3D component models are integrated into the cycle model via partial performance maps generated from the CFD flow solutions using one-dimensional mean-line turbomachinery programs. This paper highlights the generation of the high-pressure compressor, booster, and fan partial performance maps, as well as maps for the high-pressure and low-pressure turbines. These are "mini-maps" in the sense that they are developed only for a narrow operating range of the component. Results at a take-off condition are compared between actual cycle data and the comparable condition computed utilizing these mini-maps. The mini-maps are also compared to actual component data where possible.
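A mini-map lookup amounts to interpolating a component characteristic that is tabulated only over a narrow operating band. The sketch below shows the idea with invented fan-map numbers; it is not the NPSS implementation.

```python
# Sketch of a "mini-map" lookup: linear interpolation of pressure
# ratio vs corrected flow over a narrow tabulated range, clamped at
# the map edges. Sample values are invented for illustration.

from bisect import bisect_left

def minimap_lookup(corrected_flow, flows, pressure_ratios):
    # flows must be sorted ascending; clamp to the narrow map range.
    if corrected_flow <= flows[0]:
        return pressure_ratios[0]
    if corrected_flow >= flows[-1]:
        return pressure_ratios[-1]
    i = bisect_left(flows, corrected_flow)
    x0, x1 = flows[i - 1], flows[i]
    y0, y1 = pressure_ratios[i - 1], pressure_ratios[i]
    t = (corrected_flow - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

# Narrow band of a hypothetical fan map near the take-off condition:
flows = [560.0, 570.0, 580.0, 590.0]
prs   = [1.52, 1.55, 1.58, 1.60]
print(minimap_lookup(575.0, flows, prs))  # midway between 1.55 and 1.58
```

The clamping reflects why mini-maps are only valid near the operating point they were generated for.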
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meckel, T. A.; Trevisan, L.; Krishnamurthy, P. G.
2017-08-23
Small-scale (mm to m) sedimentary structures (e.g. ripple lamination, cross-bedding) have received a great deal of attention in sedimentary geology. The influence of depositional heterogeneity on subsurface fluid flow is now widely recognized, but incorporating these features in physically-rational bedform models at various scales remains problematic. The current investigation expands the capability of an existing set of open-source codes, allowing generation of high-resolution 3D bedform architecture models. The implemented modifications enable the generation of 3D digital models consisting of laminae and matrix (binary field) with characteristic depositional architecture. The binary model is then populated with petrophysical properties using a textural approach for additional analysis such as statistical characterization, property upscaling, and single and multiphase fluid flow simulation. One example binary model with corresponding threshold capillary pressure field and the scripts used to generate them are provided, but the approach can be used to generate dozens of previously documented common facies models and a variety of property assignments. An application using the example model is presented simulating buoyant fluid (CO2) migration and resulting saturation distribution.
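The binary-field-then-properties workflow can be sketched in miniature: build a small 2D field of laminae and matrix with a dipping geometry, then map each facies to a threshold capillary pressure. Geometry and values below are invented, not from the released codes.

```python
# Toy version of the binary bedform idea: a 2D laminae (1) / matrix (0)
# field with dipping periodic bands, then per-facies assignment of
# threshold capillary pressure [Pa]. Illustrative parameters only.

def binary_bedform(nx, nz, dip=2, spacing=4):
    # A cell is lamina where (x + dip*z) falls on a periodic band.
    return [[1 if (x + dip * z) % spacing == 0 else 0
             for x in range(nx)] for z in range(nz)]

def assign_pc(binary, pc_lamina=5.0e3, pc_matrix=1.0e3):
    # Populate the binary field with threshold capillary pressure.
    return [[pc_lamina if v else pc_matrix for v in row] for row in binary]

field = binary_bedform(8, 4)
pc = assign_pc(field)
print(field[0])
```

In the real workflow this property field would then feed statistical characterization, upscaling, or invasion-percolation-style CO2 migration simulation.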
Scaffold Free Bio-orthogonal Assembly of 3-Dimensional Cardiac Tissue via Cell Surface Engineering
NASA Astrophysics Data System (ADS)
Rogozhnikov, Dmitry; O'Brien, Paul J.; Elahipanah, Sina; Yousaf, Muhammad N.
2016-12-01
There has been tremendous interest in constructing in vitro cardiac tissue for a range of fundamental studies of cardiac development and disease and as a commercial system to evaluate therapeutic drug discovery prioritization and toxicity. Although there has been progress towards studying 2-dimensional cardiac function in vitro, there remain challenging obstacles to generating rapid and efficient scaffold-free, 3-dimensional, multiple-cell-type co-culture cardiac tissue models. Herein, we develop a programmed rapid self-assembly strategy to induce specific and stable cell-cell contacts among multiple cell types found in heart tissue to generate 3D tissues, through cell-surface engineering based on liposome delivery and fusion to display bio-orthogonal functional groups from cell membranes. We generate, for the first time, a scaffold-free and stable self-assembled three-cell-line co-culture 3D cardiac tissue model by assembling cardiomyocytes, endothelial cells, and cardiac fibroblast cells via a rapid inter-cell click-ligation process. We compare and analyze the function of the 3D cardiac tissue chips with 2D co-culture monolayers by assessing cardiac-specific markers, electromechanical cell coupling, and beating rates, and by evaluating drug toxicity.
Improving 3D Genome Reconstructions Using Orthologous and Functional Constraints
Diament, Alon; Tuller, Tamir
2015-01-01
The study of the 3D architecture of chromosomes has been advancing rapidly in recent years. While a number of methods for 3D reconstruction of genomic models based on Hi-C data have been proposed, most of the analyses in the field have been performed on different representation forms (such as graphs). Here, we reproduce most of the previous results on the 3D genomic organization of the eukaryote Saccharomyces cerevisiae using analysis of 3D reconstructions. We show that many of these results can be reproduced in sparse reconstructions, generated from a small fraction of the experimental data (5% of the data), and study the properties of such models. Finally, we propose for the first time a novel approach for improving the accuracy of 3D reconstructions by introducing additional predicted physical interactions to the model, based on orthologous interactions in an evolutionarily related organism and on predicted functional interactions between genes. We demonstrate that this approach indeed leads to the reconstruction of improved models. PMID:26000633
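Distance-constrained 3D reconstruction of the kind described can be sketched as plain gradient descent on pairwise distance targets; adding predicted orthologous or functional interactions simply adds more entries to the target set. This is a generic stress-minimization toy, not the authors' reconstruction pipeline.

```python
# Simplified constraint-based reconstruction: place n points in 3D so
# that pairwise distances match targets (e.g. Hi-C-derived distances
# plus predicted interactions, which just extend `targets`).

import random

def reconstruct(n, targets, steps=2000, lr=0.01, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(n)]
    for _ in range(steps):
        for (i, j), d in targets.items():
            diff = [pos[i][k] - pos[j][k] for k in range(3)]
            dist = max(1e-9, sum(c * c for c in diff) ** 0.5)
            g = (dist - d) / dist          # gradient of 0.5*(dist-d)^2
            for k in range(3):
                step = lr * g * diff[k]
                pos[i][k] -= step
                pos[j][k] += step
    return pos

# Three loci with target distances forming an equilateral triangle:
targets = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
pos = reconstruct(3, targets)
d01 = sum((a - b) ** 2 for a, b in zip(pos[0], pos[1])) ** 0.5
print(round(d01, 2))  # close to the 1.0 target
```

Weighting the added predicted interactions differently from the experimental ones would be the natural next refinement.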
A new method to generate large order low temperature expansions for discrete spin models
NASA Astrophysics Data System (ADS)
Bhanot, Gyan
1993-03-01
I describe work done in collaboration with Michael Creutz at BNL and Jan Lacki at IAS Princeton. We have developed a method to generate very high order low temperature (weak coupling) expansions for discrete spin systems. For the 3-d and 4-d Ising models, we give results for the low temperature expansion of the average free energy to 50 and 44 excited bonds, respectively.
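The expansion variable here, the number of excited bonds above the ground state, can be made concrete by brute force on a tiny lattice. The enumeration below tabulates how many configurations of a small periodic 2D Ising lattice have each excited-bond count (each bond is visited twice in this naive loop, which only rescales the count axis); it is a didactic check, not the large-order method of the talk.

```python
# Brute-force tabulation of excited (anti-aligned) bond counts for a
# tiny periodic 2D Ising lattice. The coefficients of the partition
# function's low-temperature series are exactly such counts.

from itertools import product

def excited_bond_counts(L):
    counts = {}
    for spins in product([1, -1], repeat=L * L):
        grid = [spins[i * L:(i + 1) * L] for i in range(L)]
        excited = 0
        for i in range(L):
            for j in range(L):
                if grid[i][j] != grid[i][(j + 1) % L]:
                    excited += 1
                if grid[i][j] != grid[(i + 1) % L][j]:
                    excited += 1
        counts[excited] = counts.get(excited, 0) + 1
    return counts

c = excited_bond_counts(2)
print(c[0])  # the two ground states (all up, all down)
```

High-order expansion methods replace this exponential enumeration with systematic counting of excitation diagrams, which is what makes 50 excited bonds reachable.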
Projecting 2D gene expression data into 3D and 4D space.
Gerth, Victor E; Katsuyama, Kaori; Snyder, Kevin A; Bowes, Jeff B; Kitayama, Atsushi; Ueno, Naoto; Vize, Peter D
2007-04-01
Video games typically generate virtual 3D objects by texture mapping an image onto a 3D polygonal frame. The feeling of movement is then achieved by mathematically simulating camera movement relative to the polygonal frame. We have built customized scripts that adapt video game authoring software to texture mapping images of gene expression data onto b-spline-based embryo models. This approach, known as UV mapping, associates two-dimensional (U and V) coordinates within images to the three dimensions (X, Y, and Z) of a b-spline model. B-spline model frameworks were built either from confocal data or de novo extracted from 2D images, once again using video game authoring approaches. This system was then used to build 3D models of 182 genes expressed in developing Xenopus embryos and to implement these in a web-accessible database. Models can be viewed via simple Internet browsers and utilize OpenGL hardware acceleration via a Shockwave plugin. Not only does this database display static data in a dynamic and scalable manner, the UV mapping system also serves as a method to align different images to a common framework, an approach that may make high-throughput automated comparisons of gene expression patterns possible. Finally, video game systems also have elegant methods for handling movement, allowing biomechanical algorithms to drive the animation of models. With further development, these biomechanical techniques offer practical methods for generating virtual embryos that recapitulate morphogenesis.
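The core of UV mapping, taking (u, v) image coordinates to a point on a 3D surface, is easy to show on the simplest possible patch. A bilinear patch stands in here for the b-spline surfaces the paper uses; the corner points are arbitrary.

```python
# Minimal UV-mapping sketch: (u, v) in [0, 1]^2 mapped onto a 3D
# bilinear patch defined by four corner points (a stand-in for the
# b-spline surfaces described above).

def uv_to_3d(u, v, p00, p10, p01, p11):
    # Bilinear interpolation of the four 3D corners.
    return tuple(
        (1 - u) * (1 - v) * a + u * (1 - v) * b + (1 - u) * v * c + u * v * d
        for a, b, c, d in zip(p00, p10, p01, p11)
    )

# A flat unit patch in the z = 0 plane:
corners = ((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0))
print(uv_to_3d(0.5, 0.5, *corners))  # the patch centre
```

Because every image pixel gets a well-defined surface point, two images mapped to the same patch are automatically aligned, which is the basis of the cross-image comparison the abstract mentions.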
Gas Hydrate Petroleum System Modeling in western Nankai Trough Area
NASA Astrophysics Data System (ADS)
Tanaka, M.; Aung, T. T.; Fujii, T.; Wada, N.; Komatsu, Y.
2017-12-01
Since 2003, we have been constructing gas hydrate (GH) petroleum system models covering the eastern Nankai Trough, Japan, and the resource potential estimated from the regional model shows a good match with values derived from seismic and log data. This year, we applied the method to explore the GH potential of a new study area. In our study area, GH prospects have been identified with the aid of the bottom simulating reflector (BSR) and the presence of high-velocity anomalies above the BSR, interpreted from 3D migration seismic and high-density velocity cubes. To understand the pathways of biogenic methane from source to GH prospects, 1D, 2D, and 3D GH petroleum system models were built and investigated. The study interval comprises lower Miocene to Pleistocene, deep- to shallow-marine sedimentary successions, with Pliocene and Pleistocene layers overlying the basement. The BSRs were interpreted in the Pliocene and Pleistocene layers. Based on 6 sequence boundaries interpreted from the 3D migration seismic and velocity data, a 3D depth framework model was constructed and populated with a conceptual submarine-fan depositional facies model derived from seismic facies analysis and an existing geological report. 1D models were created to analyze lithology sensitivity against temperature and vitrinite data from an exploratory well drilled in the vicinity of the study area. The resulting petroleum system modeling parameters were applied in the 2D and 3D modeling and simulation. An existing report on the exploratory well suggests that hydrocarbons of thermogenic origin are also present. For this reason, simulation scenarios including source formations for both biogenic and thermogenic reaction models were also investigated. Simulation results show that the lower boundary of the GH saturation zone at pseudo-wells is reproduced to within a few tens of meters of the interpreted BSR. Sensitivity analysis indicates that the simulated temperature was controlled by the different peak-generation-temperature models and geochemical parameters.
Progressive folding and updipping layers, including paleostructures, can effectively assist the upward migration of biogenic gas. The biogenic and thermogenic mixing model shows that only the kitchen center has the potential to generate thermogenic hydrocarbons. The prospects based on seismic interpretation are consistent with the high GH saturation areas from the 3D modeling results.
Seismic modeling of Earth's 3D structure: Recent advancements
NASA Astrophysics Data System (ADS)
Ritsema, J.
2008-12-01
Global models of Earth's seismic structure continue to improve due to the growth of seismic data sets, implementation of advanced wave propagation theories, and increased computational power. In my presentation, I will summarize seismic tomography results from the past 5-10 years. I will compare the most recent P and S velocity models, discuss model resolution and model interpretation, and present an admittedly biased list of research directions required to develop the next generation of 3D models.
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-06-24
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
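The calibration step above rests on a simple geometric idea: a detected person's foot-to-head pixel span, together with an assumed average human height, gives a local pixels-per-metre scale that lets object sizes be normalized across heterogeneous cameras. The sketch below illustrates that idea with invented numbers; it is not the paper's full scene-calibration algorithm.

```python
# Sketch of foot-to-head scale estimation for metadata normalization.
# Assumes an average person height; all pixel values are illustrative.

def scale_from_person(foot_px, head_px, person_height_m=1.7):
    # Pixel distance between the foot and head image points.
    dx = head_px[0] - foot_px[0]
    dy = head_px[1] - foot_px[1]
    pixel_height = (dx * dx + dy * dy) ** 0.5
    return pixel_height / person_height_m   # pixels per metre

def normalize_height(obj_px_height, scale):
    # Convert an object's pixel height back to metres.
    return obj_px_height / scale

s = scale_from_person(foot_px=(320, 400), head_px=(320, 230))
print(round(normalize_height(170, s), 2))
```

Repeating the estimate for many detections at different image locations is what allows a per-camera calibration rather than a single global scale.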
Evaluation of 3D-Jury on CASP7 models.
Kaján, László; Rychlewski, Leszek
2007-08-21
3D-Jury, the structure prediction consensus method publicly available in the Meta Server http://meta.bioinfo.pl/, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. The performance of 3D-Jury was analysed in three respects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues; the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, especially important for models that were not assigned one by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models in the new instant scoring feature http://meta.bioinfo.pl/compare_your_model_example.pl available in the Meta Server.
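A consensus score of this kind can be caricatured in a few lines: score each model by its average structural similarity to all other models in the set. The similarity measure below (fraction of residues whose 1D coordinates agree within a cutoff) is a drastic simplification of real superposition-based scores; the models are invented.

```python
# Simplified consensus scoring in the spirit of 3D-Jury: a model's
# score is its average similarity to all other models in the set.

def similarity(a, b, cutoff=3.5):
    # Fraction of residues whose (1D, for brevity) coords agree.
    close = sum(1 for x, y in zip(a, b) if abs(x - y) <= cutoff)
    return close / len(a)

def consensus_scores(models):
    scores = {}
    for name, coords in models.items():
        others = [c for n, c in models.items() if n != name]
        scores[name] = sum(similarity(coords, c) for c in others) / len(others)
    return scores

models = {
    "m1": [0.0, 1.0, 2.0, 3.0],
    "m2": [0.1, 1.1, 2.2, 3.1],
    "m3": [9.0, 9.0, 9.0, 9.0],   # outlier
}
s = consensus_scores(models)
print(max(s, key=s.get))  # a model close to the consensus wins
```

The outlier scores lowest, which mirrors why such consensus scores work as generic reliability measures.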
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
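The "narrowest view angle that does not lose any part of the object region" is a small geometry problem if the object region is modelled as a bounding sphere. The sketch below uses that simplification with invented numbers; the actual system works from the master camera's subject position and view angle.

```python
# Narrowest full view angle covering an object region, modelled as a
# bounding sphere of radius r seen from distance d (illustrative).

import math

def min_view_angle(distance, radius):
    # Full apex angle (radians) subtended by the bounding sphere.
    if radius >= distance:
        raise ValueError("camera is inside the bounding sphere")
    return 2.0 * math.asin(radius / distance)

# A region of radius 1 m seen from 4 m away:
angle = math.degrees(min_view_angle(4.0, 1.0))
print(round(angle, 1))
```

Zooming each reference camera to exactly this angle maximizes the pixel resolution on the subject, which is the point of the cooperative control.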
CFL3D, FUN3d, and NSU3D Contributions to the Fifth Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Park, Michael A.; Laflin, Kelly R.; Chaffin, Mark S.; Powell, Nicholas; Levy, David W.
2013-01-01
Results presented at the Fifth Drag Prediction Workshop using CFL3D, FUN3D, and NSU3D are described. These are calculations on the workshop-provided grids and drag-adapted grids. The NSU3D results have been updated to reflect an improvement to skin friction calculation on skewed grids. FUN3D results generated after the workshop are included for custom participant-generated grids and a grid from a previous workshop. Uniform grid refinement at the design condition shows a tight grouping in calculated drag, where the variation in the pressure component of drag is larger than in the skin friction component. At this design condition, a fine-grid drag value was predicted with a smaller drag-adjoint-adapted grid via tetrahedral adaption to a metric and mixed-element subdivision. The buffet study produced larger variation than the design case, which is attributed to large differences in the predicted extent of side-of-body separation. Various modeling and discretization approaches had a strong impact on predicted side-of-body separation. This large wing-root separation bubble was not observed in wind tunnel tests, indicating that more work is necessary in modeling wing-root juncture flows to predict experiments.
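Uniform grid refinement studies of this kind are commonly summarized with Richardson extrapolation, which estimates the continuum drag value from results on three systematically refined grids. The drag counts below are invented and this is a generic convergence tool, not a result from the workshop itself.

```python
# Richardson extrapolation of a drag value from three uniformly
# refined grids with refinement ratio r (invented sample values).

import math

def richardson(f_coarse, f_medium, f_fine, r=2.0):
    # Observed order of convergence, then the extrapolated value.
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    return p, f_exact

# Hypothetical drag counts on coarse / medium / fine grids:
p, cd = richardson(280.0, 272.0, 270.0)
print(round(p, 2), round(cd, 2))
```

An observed order near the scheme's formal order is the usual sanity check that the grids are in the asymptotic range.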
Petroleum system modeling capabilities for use in oil and gas resource assessments
Higley, Debra K.; Lewan, Michael; Roberts, Laura N.R.; Henry, Mitchell E.
2006-01-01
Summary: Petroleum resource assessments are among the most highly visible and frequently cited scientific products of the U.S. Geological Survey. The assessments integrate diverse and extensive information on the geologic, geochemical, and petroleum production histories of provinces and regions of the United States and the world. Petroleum systems modeling incorporates these geoscience data in ways that strengthen the assessment process, and results are presented visually and numerically. The purpose of this report is to outline the requirements, advantages, and limitations of one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) petroleum systems modeling that can be applied to the assessment of oil and gas resources. Primary focus is on the application of the Integrated Exploration Systems (IES) PetroMod software because of familiarity with that program as well as the emphasis by the USGS Energy Program on standardizing to one modeling application. The Western Canada Sedimentary Basin (WCSB) is used to demonstrate the use of the PetroMod software. Petroleum systems modeling quantitatively extends the 'total petroleum systems' (TPS) concept (Magoon and Dow, 1994; Magoon and Schmoker, 2000) that is employed in USGS resource assessments. Modeling allows integration of state-of-the-art analysis techniques, and provides the means to test and refine understanding of oil and gas generation, migration, and accumulation. Results of modeling are presented visually, numerically, and statistically, which enhances interpretation of the processes that affect TPSs through time.
Modeling also provides a framework for the input and processing of many kinds of data essential in resource assessment, including (1) petroleum system elements such as reservoir, seal, and source rock intervals; (2) timing of depositional, hiatus, and erosional events and their influences on petroleum systems; (3) incorporation of vertical and lateral distribution and lithologies of strata that compose the petroleum systems; and (4) calculations of pressure-volume-temperature (PVT) histories. As digital data on petroleum systems continue to expand, the models can integrate these data into USGS resource assessments by building and displaying, through time, areas of petroleum generation, migration pathways, accumulations, and relative contributions of source rocks to the hydrocarbon components. IES PetroMod 1-D, 2-D, and 3-D models are integrated such that each uses the same variables for petroleum systems modeling. 1-D burial history models are point locations, mainly wells. Maps and cross-sections model geologic information in two dimensions and can incorporate direct input of 2-D seismic data and interpretations using various formats. Both 1-D and 2-D models use data essential for assessments and, following data compilation, they can be completed in hours and retested in minutes. Such models should be built early in the geologic assessment process, inasmuch as they incorporate the petroleum system elements of reservoir, source, and seal rock intervals with associated lithologies and depositional and erosional ages. The models can be used to delineate the petroleum systems. A number of 1-D and 2-D models can be constructed across a geologic province and used by the assessment geologists as a 3-D framework of processes that control petroleum generation, migration, and accumulation. The primary limitation of these models is that they only represent generation, migration, and accumulation in two dimensions. 3-D models are generally built at reservoir to basin scales.
They provide a much more detailed and realistic representation of petroleum systems than 1-D or 2-D models because they portray more fully the temporal and physical relations among (1) burial history; (2) lithologies and associated changes through burial in porosity, permeability, and compaction; (3) hydrodynamic effects; and (4) other parameters that influence petroleum generation.
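A 1-D burial history model of the kind described reduces, at its simplest, to tracking a source rock's depth through time, converting depth to temperature with a geothermal gradient, and accumulating a maturity index. The sketch below uses a Lopatin-style time-temperature index with illustrative parameters; these are not PetroMod's defaults or kinetics.

```python
# Toy 1-D burial-history step: depth -> temperature via a constant
# geothermal gradient, then a simple time-temperature index (TTI)
# as a maturity proxy. All parameters are illustrative.

def burial_temperatures(depths_m, surface_temp_c=20.0, gradient_c_per_km=30.0):
    return [surface_temp_c + gradient_c_per_km * d / 1000.0 for d in depths_m]

def tti(temps_c, dt_my):
    # Lopatin-style index: rate doubles every 10 degC above 100 degC.
    return sum(dt_my * 2.0 ** ((t - 100.0) / 10.0) for t in temps_c)

# Depth of a source rock at four successive 10-My steps:
depths = [1000.0, 2000.0, 3000.0, 4000.0]
temps = burial_temperatures(depths)
print(temps)
print(round(tti(temps, dt_my=10.0), 2))
```

Full petroleum system modeling replaces the constant gradient with heat-flow histories and the TTI with calibrated reaction kinetics, but the structure of the calculation is the same.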
3d printing of 2d N=(0,2) gauge theories
NASA Astrophysics Data System (ADS)
Franco, Sebastián; Hasan, Azeem
2018-05-01
We introduce 3d printing, a new algorithm for generating 2d N=(0,2) gauge theories on D1-branes probing singular toric Calabi-Yau 4-folds using 4d N=1 gauge theories on D3-branes probing toric Calabi-Yau 3-folds as starting points. Equivalently, this method produces brane brick models starting from brane tilings. 3d printing represents a significant improvement with respect to previously available tools, allowing a straightforward determination of gauge theories for geometries that until now could only be tackled using partial resolution. We investigate the interplay between triality, an IR equivalence between different 2d N=(0,2) gauge theories, and the freedom in 3d printing given an underlying Calabi-Yau 4-fold. Finally, we present the first discussion of the consistency and reduction of brane brick models.
TouchTerrain: A simple web-tool for creating 3D-printable topographic models
NASA Astrophysics Data System (ADS)
Hasiuk, Franciszek J.; Harding, Chris; Renner, Alex Raymond; Winer, Eliot
2017-12-01
An open-source web-application, TouchTerrain, was developed to simplify the production of 3D-printable terrain models. Direct Digital Manufacturing (DDM) using 3D Printers can change how geoscientists, students, and stakeholders interact with 3D data, with the potential to improve geoscience communication and environmental literacy. No other manufacturing technology can convert digital data into tangible objects quickly at relatively low cost; however, the expertise necessary to produce a 3D-printed terrain model can be a substantial burden: knowledge of geographical information systems, computer aided design (CAD) software, and 3D printers may all be required. Furthermore, printing models larger than the build volume of a 3D printer can pose further technical hurdles. The TouchTerrain web-application simplifies DDM for elevation data by generating digital 3D models customized for a specific 3D printer's capabilities. The only required user input is the selection of a region-of-interest using the provided web-application with a Google Maps-style interface. Publicly available digital elevation data is processed via the Google Earth Engine API. To allow the manufacture of 3D terrain models larger than a 3D printer's build volume the selected area can be split into multiple tiles without third-party software. This application significantly reduces the time and effort required for a non-expert like an educator to obtain 3D terrain models for use in class. The web application is deployed at http://touchterrain.geol.iastate.edu/.
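The tiling step amounts to fitting a scaled terrain region onto a printer's build plate. A minimal sketch, with hypothetical dimensions and without any of TouchTerrain's DEM processing:

```python
# Sketch of the tiling calculation: how many equal tiles are needed
# so each fits the printable area at a given print scale.

import math

def tile_grid(region_m, build_mm, scale):
    # region_m: (width, height) of the terrain region in metres
    # build_mm: (width, height) of the printable area in millimetres
    # scale: denominator of the print scale, e.g. 10000 for 1:10000
    printed = [dim * 1000.0 / scale for dim in region_m]  # mm at scale
    tiles = [math.ceil(p / b) for p, b in zip(printed, build_mm)]
    return tuple(tiles)

# A 4 km x 3 km region, 200 mm x 200 mm build plate, 1:10000 scale:
print(tile_grid((4000.0, 3000.0), (200.0, 200.0), 10000))
```

Each tile would then be exported as its own mesh, which is how models larger than the build volume become printable.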
A Method for Automatic Surface Inspection Using a Model-Based 3D Descriptor.
Madrigal, Carlos A; Branch, John W; Restrepo, Alejandro; Mery, Domingo
2017-10-02
Automatic visual inspection allows for the identification of surface defects in manufactured parts. Nevertheless, when defects are on a sub-millimeter scale, detection and recognition are a challenge. This is particularly true when the defect generates topological deformations that are not shown with strong contrast in the 2D image. In this paper, we present a method for recognizing surface defects in 3D point clouds. Firstly, we propose a novel 3D local descriptor called the Model Point Feature Histogram (MPFH) for defect detection. Our descriptor is inspired from earlier descriptors such as the Point Feature Histogram (PFH). To construct the MPFH descriptor, the models that best fit the local surface and their normal vectors are estimated. For each surface model, its contribution weight to the formation of the surface region is calculated and from the relative difference between models of the same region a histogram is generated representing the underlying surface changes. Secondly, through a classification stage, the points on the surface are labeled according to five types of primitives and the defect is detected. Thirdly, the connected components of primitives are projected to a plane, forming a 2D image. Finally, 2D geometrical features are extracted and by a support vector machine, the defects are recognized. The database used is composed of 3D simulated surfaces and 3D reconstructions of defects in welding, artificial teeth, indentations in materials, ceramics and 3D models of defects. The quantitative and qualitative results showed that the proposed method of description is robust to noise and the scale factor, and it is sufficiently discriminative for detecting some surface defects. The performance evaluation of the proposed method was performed for a classification task of the 3D point cloud in primitives, reporting an accuracy of 95%, which is higher than for other state-of-the-art descriptors. The rate of recognition of defects was close to 94%.
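A much-simplified cousin of the histogram idea behind MPFH: summarize local surface change as a histogram of angles between neighbouring surface normals, so that flat, curved, and defective patches produce different signatures. The real descriptor fits surface models per region; this sketch assumes normals are already available, and all data are invented.

```python
# Simplified surface-change histogram: bin the angles between
# consecutive neighbouring normals (real MPFH fits local surface
# models; this is only an illustration of the histogram idea).

import math

def angle_histogram(normals, bins=4, max_deg=90.0):
    hist = [0] * bins
    for a, b in zip(normals, normals[1:]):
        dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
        deg = math.degrees(math.acos(dot))
        idx = min(bins - 1, int(deg / (max_deg / bins)))
        hist[idx] += 1
    return hist

flat = [(0, 0, 1)] * 5                       # planar patch
bumpy = [(0, 0, 1), (0, 0.6, 0.8), (0, 0, 1), (0.6, 0, 0.8), (0, 0, 1)]
print(angle_histogram(flat))   # all angles land in the near-0 bin
print(angle_histogram(bumpy))
```

Feeding such histograms to a classifier is the same downstream structure the paper uses (descriptor, primitive labeling, then SVM recognition).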
A Method for Automatic Surface Inspection Using a Model-Based 3D Descriptor
Branch, John W.
2017-01-01
Automatic visual inspection allows for the identification of surface defects in manufactured parts. Nevertheless, when defects are on a sub-millimeter scale, detection and recognition are a challenge. This is particularly true when the defect generates topological deformations that are not shown with strong contrast in the 2D image. In this paper, we present a method for recognizing surface defects in 3D point clouds. Firstly, we propose a novel 3D local descriptor called the Model Point Feature Histogram (MPFH) for defect detection. Our descriptor is inspired from earlier descriptors such as the Point Feature Histogram (PFH). To construct the MPFH descriptor, the models that best fit the local surface and their normal vectors are estimated. For each surface model, its contribution weight to the formation of the surface region is calculated and from the relative difference between models of the same region a histogram is generated representing the underlying surface changes. Secondly, through a classification stage, the points on the surface are labeled according to five types of primitives and the defect is detected. Thirdly, the connected components of primitives are projected to a plane, forming a 2D image. Finally, 2D geometrical features are extracted and by a support vector machine, the defects are recognized. The database used is composed of 3D simulated surfaces and 3D reconstructions of defects in welding, artificial teeth, indentations in materials, ceramics and 3D models of defects. The quantitative and qualitative results showed that the proposed method of description is robust to noise and the scale factor, and it is sufficiently discriminative for detecting some surface defects. The performance evaluation of the proposed method was performed for a classification task of the 3D point cloud in primitives, reporting an accuracy of 95%, which is higher than for other state-of-art descriptors. The rate of recognition of defects was close to 94%. 
PMID:28974037
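As an illustration of the descriptor idea, the following is a minimal, simplified sketch of a PFH-style histogram over surface-normal angles in plain NumPy. The paper's actual MPFH additionally weights fitted surface models; all names and parameters here are illustrative:

```python
import numpy as np

def normal_angle_histogram(normals, n_bins=8):
    """Histogram of pairwise angles between unit surface normals.

    A simplified, PFH-inspired descriptor: the MPFH of the paper also
    weights fitted surface models, which is omitted here.
    """
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    # Pairwise dot products, clipped for numerical safety before arccos
    dots = np.clip(normals @ normals.T, -1.0, 1.0)
    iu = np.triu_indices(len(normals), k=1)   # each pair counted once
    angles = np.arccos(dots[iu])
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, np.pi))
    return hist / max(hist.sum(), 1)          # normalize to a distribution

# Example: normals of a flat patch concentrate all mass in the first bin
flat = np.tile([0.0, 0.0, 1.0], (10, 1))
h = normal_angle_histogram(flat)
```

A curved or defective patch spreads mass across bins, which is what makes such histograms discriminative for surface-change detection.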
CAD-Based Modeling of Advanced Rotary Wing Structures for Integrated 3-D Aeromechanics Analysis
NASA Astrophysics Data System (ADS)
Staruk, William
This dissertation describes the first comprehensive use of integrated 3-D aeromechanics modeling, defined as the coupling of 3-D solid finite element method (FEM) structural dynamics with 3-D computational fluid dynamics (CFD), for the analysis of a real helicopter rotor. The development of this new methodology (a departure from how rotor aeroelastic analysis has been performed for 40 years), its execution on a real rotor, and the fundamental understanding of aeromechanics gained from it, are the key contributions of this dissertation. This work also presents the first CFD/CSD analysis of a tiltrotor in edgewise flight, revealing many of its unique loading mechanisms. The use of 3-D FEM, integrated with a trim solver and aerodynamics modeling, has the potential to enhance the design of advanced rotors by overcoming fundamental limitations of current generation beam-based analysis tools and offering integrated internal dynamic stress and strain predictions for design. Two primary goals drove this research effort: 1) developing a methodology to create 3-D CAD-based brick finite element models of rotors including multibody joints, controls, and aerodynamic interfaces, and 2) refining X3D, the US Army's next generation rotor structural dynamics solver featuring 3-D FEM within a multibody formulation with integrated aerodynamics, to model a tiltrotor in the edgewise conversion flight regime, which drives critical proprotor structural loads. Prior tiltrotor analysis has primarily focused on hover aerodynamics with rigid blades or forward flight whirl-flutter stability with simplified aerodynamics. The first goal was met with the development of a detailed methodology for generating multibody 3-D structural models, starting from CAD geometry, continuing to higher-order hexahedral finite element meshing, to final assembly of the multibody model by creating joints, assigning material properties, and defining the aerodynamic interface. 
Several levels of verification and validation were carried out systematically, covering the formulation, model accuracy, and the physics of the many complex coupled aeromechanical phenomena that characterize the behavior of a tiltrotor in the conversion corridor. Compatibility of the new structural analysis models with X3D is demonstrated using analytical test cases, including 90° twisted beams, thick composite plates, and a notional bearingless rotor. Prediction of deformations and stresses in composite beams and plates is validated and verified against experimental measurements, theory, and state-of-the-art beam models. The second goal was met through integrated analysis of the Tilt Rotor Aeroacoustic Model (TRAM) proprotor using X3D coupled to Helios--the US Army's next generation CFD framework featuring a high-fidelity Reynolds-averaged Navier-Stokes (RANS) structured/unstructured overset solver--as well as low-order aerodynamic models. Although development of CFD was not part of this work, coupling X3D with Helios was, including establishing consistent interface definitions for blade deformations (for CFD mesh motion), aerodynamic interfaces (for loads transfer), and rotor control angles (for trim). It is expected that this method and solver will henceforth be an integral part of the Helios framework, providing an equal fidelity of representation for fluids and structures in the development of future advanced rotor systems. Structural dynamics analysis of the TRAM model shows accurate prediction of the lower natural frequencies, demonstrating the ability to model advanced rotors from first principles using 3-D structural dynamics, and a study of how joint properties affect these frequencies reveals how X3D can be used as a detailed design tool. The CFD/CSD analysis reveals accurate prediction of rotor performance and airloads in edgewise flight when compared to wind tunnel test data. 
Structural blade loads trends are well predicted at low thrust, but a 3/rev component of flap and lag bending moment appearing in test data at high thrust remains a mystery. Efficiently simulating a gimbaled rotor is not trivial; a time-domain method with only a single blade model is proposed and tested. The internal stress in the blade, particularly at its root where the gimbal action has major influence, is carefully examined, revealing complex localized loading patterns.
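The loose CFD/CSD coupling described above can be illustrated, in heavily simplified form, by a fixed-point iteration between a "structural" solve and an "aerodynamic" load update. This is a toy one-degree-of-freedom stand-in, not the X3D/Helios interface; all constants are illustrative:

```python
def coupled_trim(k=10.0, alpha0=0.1, n_iter=50, tol=1e-10):
    """Fixed-point CFD/CSD-style coupling on a toy 1-DoF problem.

    Structure: k * u = L(u)  (static spring under aerodynamic load)
    'Aero'   : L(u) = 5.0 * (alpha0 - 0.5 * u), i.e. the load relaxes
               as the structure deflects -- a stand-in for a CFD
               airload update with the current blade deformation.
    """
    u = 0.0
    for _ in range(n_iter):
        load = 5.0 * (alpha0 - 0.5 * u)   # aero solve at current deformation
        u_new = load / k                  # structural solve at updated load
        converged = abs(u_new - u) < tol
        u = u_new
        if converged:
            break
    return u

u_star = coupled_trim()
# Converged deflection satisfies k*u = 5*(alpha0 - 0.5*u)
```

Real rotor CFD/CSD coupling exchanges full airload and deformation fields once per revolution and trims control angles simultaneously, but the converged-exchange structure is the same.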
Foveated model observers to predict human performance in 3D images
NASA Astrophysics Data System (ADS)
Lago, Miguel A.; Abbey, Craig K.; Eckstein, Miguel P.
2017-03-01
We evaluate whether predicting human observer performance in 3D search requires model observers that take into account peripheral human visual processing (foveated models). We show that two different 3D tasks, free search and location-known detection, influence the relative human visual detectability of two signals of different sizes in synthetic backgrounds mimicking the noise found in 3D digital breast tomosynthesis. One of the signals resembled a microcalcification (a small, bright sphere), while the other was designed to look like a mass (a larger Gaussian blob). We evaluated current standard model observers (Hotelling; Channelized Hotelling; non-prewhitening matched filter with eye filter, NPWE; and non-prewhitening matched filter, NPW) and showed that they incorrectly predict the relative detectability of the two signals in 3D search. We propose a new model observer (3D Foveated Channelized Hotelling Observer) that incorporates the properties of the visual system over a large visual field (fovea and periphery). We show that the foveated model observer can accurately predict the rank order of detectability of the signals in 3D images for each task. Together, these results motivate the use of a new generation of foveated model observers for predicting image quality for search tasks in 3D imaging modalities such as digital breast tomosynthesis or computed tomography.
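The non-prewhitening (NPW) matched filter mentioned among the standard observers can be sketched as follows. This sketch uses white-noise backgrounds only, which is a simplification; the evaluations above use correlated tomosynthesis-like backgrounds, and the signal shapes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def npw_dprime(signal, n_trials=2000, noise_sigma=1.0):
    """Detectability index d' of a non-prewhitening matched filter.

    The template is the signal itself; d' is the separation of the
    template responses to signal-present vs signal-absent trials.
    """
    t = signal.ravel()
    r_sig, r_noise = [], []
    for _ in range(n_trials):
        noise = rng.normal(0.0, noise_sigma, size=t.shape)
        r_sig.append(t @ (t + noise))     # response to signal + noise
        r_noise.append(t @ noise)         # response to noise alone
    r_sig, r_noise = np.array(r_sig), np.array(r_noise)
    pooled_std = np.sqrt(0.5 * (r_sig.var() + r_noise.var()))
    return (r_sig.mean() - r_noise.mean()) / pooled_std

# In white noise, the larger/brighter signal is always easier for NPW
small = np.zeros((8, 8)); small[3:5, 3:5] = 1.0
large = np.zeros((8, 8)); large[2:6, 2:6] = 1.0
d_small = npw_dprime(small)
d_large = npw_dprime(large)
```

In white noise d' reduces to signal energy over noise level, which is precisely why such non-foveated observers can mis-rank signals once search over a large, peripherally viewed field is involved.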
3D Surface Generation from Aerial Thermal Imagery
NASA Astrophysics Data System (ADS)
Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.
2015-12-01
Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are some challenges in precise 3D measurement of objects. In this paper, the potential of thermal video for 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video; then, tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor and is equipped with a 25 mm lens, mounted on an Unmanned Aerial Vehicle (UAV). The obtained results show that the accuracy of the 3D model generated from thermal images is comparable to that of the DSM generated from visible images; however, the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparing the generated DSM with 9 measured GCPs in the area shows that the Root Mean Square Error (RMSE) is smaller than 5 decimetres in both the X and Y directions and 1.6 metres in the Z direction.
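Once bundle adjustment has recovered the camera poses, each dense match yields a 3D point by triangulation. A minimal linear (DLT) two-view triangulation can be sketched as below; this is illustrative and not the authors' multi-resolution matcher, and the camera matrices are toy values:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: pixel coordinates (u, v).
    Returns the 3D point minimizing the algebraic reprojection error.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null vector of A (smallest singular value)
    return X[:3] / X[3]        # de-homogenize

# Two translated pinhole cameras observing the point (0, 0, 5)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)
```

With noisy matches the SVD solution remains a least-squares estimate, which is why dense matching pipelines can tolerate the low contrast of thermal frames to some degree.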
3D Modelling and Rapid Prototyping for Cardiovascular Surgical Planning - Two Case Studies
NASA Astrophysics Data System (ADS)
Nocerino, E.; Remondino, F.; Uccheddu, F.; Gallo, M.; Gerosa, G.
2016-06-01
In recent years, cardiovascular diagnosis, surgical planning and intervention have taken advantage of 3D modelling and rapid prototyping techniques. The starting data for the whole process is medical imagery, in particular, but not exclusively, computed tomography (CT) or multi-slice CT (MCT) and magnetic resonance imaging (MRI). On the medical imagery, regions of interest, i.e. heart chambers, valves, aorta, coronary vessels, etc., are segmented and converted into 3D models, which can finally be converted into physical replicas through a 3D printing procedure. In this work, an overview of modern approaches for automatic and semi-automatic segmentation of medical imagery for 3D surface model generation is provided. The issue of accuracy checking of surface models is also addressed, together with the critical aspects of converting digital models into physical replicas through 3D printing techniques. A patient-specific 3D modelling and printing procedure (Figure 1) for surgical planning in cases of complex heart disease was developed. The procedure was applied to two case studies, for which MCT scans of the chest were available. In the article, a detailed description of the implemented patient-specific modelling procedure is provided, along with a general discussion of the potential and future developments of personalized 3D modelling and printing for surgical planning and surgeons' practice.
Evaluation of 3D-Jury on CASP7 models
Kaján, László; Rychlewski, Leszek
2007-01-01
Background 3D-Jury, the structure prediction consensus method publicly available in the Meta Server, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. Results The performance of 3D-Jury was analysed for three aspects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues; the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. Conclusion The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, especially important for models that were not assigned one by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models with the new instant scoring feature available in the Meta Server. PMID:17711571
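The consensus idea behind such meta-prediction can be sketched by scoring each model by its average agreement with the rest of the set. This is a simplification: the real 3D-Jury uses MaxSub-style superposition scores, while the toy below assumes pre-aligned, equal-length models and uses a plain distance cutoff:

```python
import numpy as np

def consensus_scores(models, cutoff=3.5):
    """3D-Jury-style consensus: score each model by its average structural
    agreement with all other models in the set.

    Agreement here is the fraction of corresponding C-alpha positions
    within `cutoff` Angstroms -- a stand-in for the MaxSub-style score
    used by the real 3D-Jury.
    """
    scores = []
    for i, mi in enumerate(models):
        sims = [np.mean(np.linalg.norm(mi - mj, axis=1) < cutoff)
                for j, mj in enumerate(models) if j != i]
        scores.append(np.mean(sims))
    return np.array(scores)

rng = np.random.default_rng(1)
native = rng.normal(size=(30, 3)) * 10
# Two models near a shared conformation, plus one unrelated outlier
models = [native + rng.normal(scale=0.5, size=native.shape),
          native + rng.normal(scale=0.5, size=native.shape),
          rng.normal(size=(30, 3)) * 10]
s = consensus_scores(models)
```

Models that agree with the consensus receive high scores and the outlier a low one, which is the mechanism behind using the score as a generic reliability measure.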
Lounnas, Valère; Wedler, Henry B; Newman, Timothy; Schaftenaar, Gijs; Harrison, Jason G; Nepomuceno, Gabriella; Pemberton, Ryan; Tantillo, Dean J; Vriend, Gert
2014-11-01
In molecular sciences, articles tend to revolve around 2D representations of 3D molecules, and sighted scientists often resort to 3D virtual reality software to study these molecules in detail. Blind and visually impaired (BVI) molecular scientists have access to a series of audio devices that can help them read the text in articles and work with computers. Reading articles published in this journal, though, is nearly impossible for them because they need to generate mental 3D images of molecules, and the article-reading software cannot do that for them. We previously designed AsteriX, a web server that fully automatically decomposes articles, detects 2D plots of low-molecular-weight molecules, removes metadata and annotations from these plots, and converts them into 3D atomic coordinates. AsteriX-BVI goes one step further and converts the 3D representation into a 3D-printable, haptic-enhanced format that includes Braille annotations. These Braille-annotated physical 3D models allow BVI scientists to generate a complete mental model of the molecule. AsteriX-BVI uses Molden to convert the metadata of quantum chemistry experiments into BVI-friendly formats, so that the entire line of scientific information that sighted people take for granted (from published articles, via printed results of computational chemistry experiments, to 3D models) is now available to BVI scientists too. The possibilities offered by AsteriX-BVI are illustrated by a project on the isomerization of a sterol, executed by the blind co-author of this article (HBW).
Quantification of tumor morphology via 3D histology: application to oral cavity cancers
NASA Astrophysics Data System (ADS)
Doyle, Scott; Brandwein-Gensler, Margaret; Tomaszewski, John
2016-03-01
Traditional histopathology quantifies disease through the study of glass slides, i.e. two-dimensional samples that are representative of the overall process. We hypothesize that 3D reconstruction can enhance our understanding of histopathologic interpretations. To test this hypothesis, we perform a pilot study of the risk model for oral cavity cancer (OCC), which stratifies patients into low-, intermediate-, and high-risk for locoregional disease-free survival. Classification is based on study of hematoxylin and eosin (H and E) stained tissues sampled from the resection specimens. In this model, the Worst Pattern of Invasion (WPOI) is assessed, representing specific architectural features at the interface between cancer and non-cancer tissue. Currently, assessment of WPOI is based on 2D sections of tissue, representing complex 3D structures of tumor growth. We believe that by reconstructing a 3D model of tumor growth and quantifying the tumor-host interface, we can obtain important diagnostic information that is difficult to assess in 2D. Therefore, we introduce a pilot study framework for visualizing tissue architecture and morphology in 3D from serial sections of histopathology. This framework can be used to enhance predictive models for diseases where severity is determined by 3D biological structure. In this work we utilize serial H and E-stained OCC resections obtained from 7 patients exhibiting WPOI-3 (low risk of recurrence) through WPOI-5 (high risk of recurrence). A supervised classifier automatically generates a map of tumor regions on each slide, which are then co-registered using an elastic deformation algorithm. A smooth 3D model of the tumor region is generated from the registered maps, which is suitable for quantitative tumor interface morphology feature extraction. We report our preliminary models created with this system and suggest further enhancements to traditional histology scoring mechanisms that take spatial architecture into consideration.
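As a simplified stand-in for the slide co-registration step (the study above uses an elastic deformation algorithm), the rigid translation between consecutive sections can be estimated by FFT phase correlation. Everything below is an illustrative sketch on synthetic data:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) shift such that rolling image `a`
    by (dy, dx) best matches image `b`, via FFT phase correlation.

    A rigid stand-in for the elastic registration used on real serial
    sections; assumes a circular shift and a unique correlation peak.
    """
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.maximum(np.abs(F), 1e-12)         # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts into the signed range [-N/2, N/2)
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(2)
section = rng.random((64, 64))                       # synthetic "slide"
shifted = np.roll(section, shift=(5, -3), axis=(0, 1))
dy, dx = phase_correlation_shift(section, shifted)
```

In a real pipeline this rigid estimate would only initialize the elastic registration; the deformable step is what makes smooth 3D tumor-surface reconstruction possible.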
A Novel Approach For Ankle Foot Orthosis Developed By Three Dimensional Technologies
NASA Astrophysics Data System (ADS)
Belokar, R. M.; Banga, H. K.; Kumar, R.
2017-12-01
This study presents a novel approach for testing the mechanical properties of a medical orthosis developed by three dimensional (3D) technologies. A hand-held 3D laser scanner is used to generate 3D mesh geometry directly from the patient's limb. Subsequently, a 3D-printable orthotic design is produced from the crude input model by means of Computer Aided Design (CAD) software. The Fused Deposition Modelling (FDM) method from Additive Manufacturing (AM) technologies is used to fabricate the 3D-printable Ankle Foot Orthosis (AFO) prototype so that the mechanical properties of the printout can be tested. According to the test results, the printed Acrylonitrile Butadiene Styrene (ABS) AFO prototype has a sufficient elastic modulus and durability for a patient-specific medical device manufactured by 3D technologies.
Charge-spin Transport in Surface-disordered Three-dimensional Topological Insulators
NASA Astrophysics Data System (ADS)
Peng, Xingyue
As one of the most promising candidates for the building block of novel spintronic circuits, the topological insulator (TI) has attracted worldwide research interest. Robust topological order protected by time-reversal symmetry (TRS) makes charge transport and spin generation in TIs significantly different from traditional three-dimensional (3D) or two-dimensional (2D) electronic systems. However, to date, charge transport and spin generation in 3D TIs are still primarily modeled as single-surface phenomena, happening independently on the top and bottom surfaces. In this dissertation, I will demonstrate via both experimental findings and theoretical modeling that this "single surface" theory neither correctly describes a realistic 3D TI-based device nor reveals the distinct physical picture of spin transport dynamics in 3D TIs. Instead, I present a new viewpoint of the spin transport dynamics in which the role of the insulating yet topologically non-trivial bulk of a 3D TI becomes explicit. Within this new theory, many mysterious transport and magneto-transport anomalies can be naturally explained. The 3D TI system turns out to be more similar to its low-dimensional sibling, the 2D TI, than to other systems sharing the Dirac dispersion, such as graphene. This work not only provides valuable fundamental physical insights into charge-spin transport in 3D TIs, but also offers important guidance for the design of 3D TI-based spintronic devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Drzymala, R
Purpose: The purpose of this project was to devise a practical fabrication process for passive scatter proton beam compensation filters (CF) that is competitive in time, cost and effort using 3D printing. Methods: DICOM compensator filter files for a proton beam were generated by our Eclipse (Varian, Inc.) treatment planning system. The compensator thickness specifications were extracted with in-house software written in Matlab (MathWorks, Inc.) and written to a text file that could be read by the Rhinoceros 5 computer-aided design (CAD) package (Robert McNeel and Associates), which subsequently generated a smoothed model in a STereoLithography (STL) file, also known as a Standard Tessellation Language file. The model in the STL file was then refined using Netfabb software and converted to printing instructions using Cura version 15.02.1 for our 3D printer. The Airwolf3D model HD2x fused filament fabrication (FFF) 3D printer (Airwolf3D.com) was used for our fabrication system, with a print speed of 150 mm per second. It can print in over 22 different plastic filament materials in a build volume of 11” x 8” x 12”. We chose ABS plastic to print the 3D model of the imprint for our CFs. Results: Prints of the CF could be performed at a print speed of 70 mm per second. The time to print the 3D topology of the CF for the 14 cm diameter snout of our Mevion 250 proton accelerator was less than 3 hours. The printed model is intended to be used as a mold to imprint a molten wax cylinder, forming the compensator after cooling. The whole process should be achievable for a typical 3-beam treatment plan within a day. Conclusion: Use of 3D printing is practical and can be used to print a 3D model of a CF within a few hours.
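The mold-surface step of such a workflow can be sketched by writing a thickness map directly as an ASCII STL using only the standard library. The file name and demo thickness values below are illustrative, and the real pipeline smooths the model in CAD before printing:

```python
def heightmap_to_stl(heights, path, cell=1.0):
    """Write a rectangular height map as an ASCII STL surface.

    Each grid cell is split into two triangles. This sketches only the
    surface-export step of a compensator-like workflow; a watertight
    printable solid would also need side walls and a base.
    """
    rows, cols = len(heights), len(heights[0])

    def vertex(r, c):
        return (c * cell, r * cell, heights[r][c])

    with open(path, "w") as f:
        f.write("solid compensator\n")
        for r in range(rows - 1):
            for c in range(cols - 1):
                quad = [vertex(r, c), vertex(r, c + 1),
                        vertex(r + 1, c + 1), vertex(r + 1, c)]
                for tri in ([quad[0], quad[1], quad[2]],
                            [quad[0], quad[2], quad[3]]):
                    # Normals left as zero; most slicers recompute them
                    f.write("  facet normal 0 0 0\n    outer loop\n")
                    for x, y, z in tri:
                        f.write(f"      vertex {x:.6f} {y:.6f} {z:.6f}\n")
                    f.write("    endloop\n  endfacet\n")
        f.write("endsolid compensator\n")

# 3x3 demo thickness map (arbitrary values, in mm)
demo = [[2.0, 2.5, 3.0], [2.2, 2.8, 3.1], [2.4, 2.9, 3.3]]
heightmap_to_stl(demo, "compensator.stl")
```

ASCII STL is the least compact but most portable variant; production pipelines typically emit binary STL, which downstream tools such as Netfabb and Cura read equally well.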
Gledhill, Karl; Guo, Zongyou; Umegaki-Arao, Noriko; Higgins, Claire A; Itoh, Munenari; Christiano, Angela M
2015-01-01
The current utility of 3D skin equivalents is limited by the fact that existing models fail to recapitulate the cellular complexity of human skin. They often contain few cell types and no appendages, in part because many cells found in the skin are difficult to isolate from intact tissue and cannot be expanded in culture. Induced pluripotent stem cells (iPSCs) present an avenue by which we can overcome this issue due to their ability to be differentiated into multiple cell types in the body and their unlimited growth potential. We previously reported generation of the first human 3D skin equivalents from iPSC-derived fibroblasts and iPSC-derived keratinocytes, demonstrating that iPSCs can provide a foundation for modeling a complex human organ such as skin. Here, we have increased the complexity of this model by including additional iPSC-derived melanocytes. Epidermal melanocytes, which are largely responsible for skin pigmentation, represent the second most numerous cell type found in normal human epidermis and as such represent a logical next addition. We report efficient melanin production from iPSC-derived melanocytes and transfer within an entirely iPSC-derived epidermal-melanin unit and generation of the first functional human 3D skin equivalents made from iPSC-derived fibroblasts, keratinocytes and melanocytes.
Generation Of A Mouse Model For Schwannomatosis
2010-09-01
TITLE: Generation of a Mouse Model for Schwannomatosis. PRINCIPAL INVESTIGATOR: Long-Sheng Chang, Ph.D. CONTRACTING ORGANIZATION: The... DATES COVERED: 1 Sep 2009 - 31 Aug 2010. The project tests the hypothesis that inactivation of both the INI1/SNF5 and NF2 tumor suppressor genes is involved in the formation of schwannomatosis-associated tumors.
Salazar-Gamarra, Rodrigo; Seelaus, Rosemary; da Silva, Jorge Vicente Lopes; da Silva, Airton Moreira; Dib, Luciano Lauria
2016-05-25
The aim of this study is to present the development of a new technique to obtain 3D models using photogrammetry with a mobile device and free software, as a method for making digital facial impressions of patients with maxillofacial defects for the final purpose of 3D printing of facial prostheses. With the use of a mobile device, free software and a photo capture protocol, 2D captures of the anatomy of a patient with a facial defect were transformed into a 3D model. The resulting digital models were evaluated for visual and technical integrity. The technical process and resulting models were described and analyzed for technical and clinical usability. Generating 3D models to make digital face impressions was possible using photogrammetry with photos taken by a mobile device. The facial anatomy of the patient was reproduced as a *.3dp and an *.stl file with no major irregularities. 3D printing was possible. An alternative method for capturing facial anatomy is thus possible using a mobile device for the purpose of obtaining and designing 3D models for facial rehabilitation. Further studies are needed to compare 3D modeling among different techniques and systems. Free software and low-cost equipment could be a feasible way to obtain 3D models for making digital face impressions for maxillofacial prostheses, improving access for clinical centers that do not have high-cost technology.
Progress in the Development of a Global Quasi-3-D Multiscale Modeling Framework
NASA Astrophysics Data System (ADS)
Jung, J.; Konor, C. S.; Randall, D. A.
2017-12-01
The Quasi-3-D Multiscale Modeling Framework (Q3D MMF) is a second-generation MMF, which has the following advances over the first-generation MMF: 1) the cloud-resolving models (CRMs) that replace conventional parameterizations are not confined to the large-scale dynamical-core grid cells and are seamlessly connected to each other, 2) the CRMs sense the three-dimensional large- and cloud-scale environment, 3) two perpendicular sets of CRM channels are used, and 4) the CRMs can resolve steep surface topography along the channel direction. The basic design of the Q3D MMF has been developed and successfully tested in a limited-area modeling framework. Currently, global versions of the Q3D MMF are being developed for both weather and climate applications. The dynamical cores governing the large-scale circulation in the global Q3D MMF are selected from two cube-based global atmospheric models. The CRM used in the model is the 3-D nonhydrostatic anelastic Vector-Vorticity Model (VVM), which has been tested with the limited-area version for its suitability for this framework. As a first step of the development, the VVM has been reconstructed on the cubed-sphere grid so that it can be applied to global channel domains and easily fitted to the large-scale dynamical cores. We have successfully tested the new VVM by advecting a bell-shaped passive tracer and simulating the evolution of waves resulting from idealized barotropic and baroclinic instabilities. To improve the model, we also modified the tracer advection scheme to yield positive-definite results and plan to implement a new physics package that includes double-moment microphysics and aerosol physics. The interface coupling the large-scale dynamical core and the VVM is under development. In this presentation, we shall describe the recent progress in the development and show some test results.
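A positive-definite tracer advection scheme of the kind mentioned above can be illustrated by first-order upwinding, the classic positive-definite baseline. This is a 1-D sketch with an illustrative bell-shaped tracer, not the Q3D MMF's actual scheme:

```python
import numpy as np

def upwind_advect(q, u, dx, dt, steps):
    """1-D first-order upwind advection with periodic boundaries (u > 0).

    For 0 <= u*dt/dx <= 1 each update is a convex combination of
    neighboring values, so a non-negative field stays non-negative --
    the positive-definite property, and total mass is conserved.
    """
    c = u * dt / dx
    assert 0.0 <= c <= 1.0, "CFL condition violated"
    for _ in range(steps):
        q = (1.0 - c) * q + c * np.roll(q, 1)
    return q

x = np.linspace(0.0, 1.0, 100, endpoint=False)
bell = np.exp(-200.0 * (x - 0.3) ** 2)   # bell-shaped passive tracer
out = upwind_advect(bell, u=1.0, dx=0.01, dt=0.005, steps=200)
```

First-order upwinding is diffusive, which is why production models prefer higher-order schemes with positivity limiters; the convex-combination argument above is nevertheless the standard way to verify positive-definiteness.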
NASA Technical Reports Server (NTRS)
Raju, I. S.; Newman, J. C., Jr.
1993-01-01
A computer program, surf3d, that uses the 3D finite-element method to calculate the stress-intensity factors for surface, corner, and embedded cracks in finite-thickness plates with and without circular holes, was developed. The cracks are assumed to be either elliptic or part-elliptic in shape. The computer program uses eight-noded hexahedral elements to model the solid and uses a skyline storage and solver. The stress-intensity factors are evaluated using the force method, the crack-opening displacement method, and the 3-D virtual crack closure method. In the manual, the input to and the output of the surf3d program are described. The manual also demonstrates the use of the program and describes the calculation of the stress-intensity factors. Several examples with sample data files are included. To facilitate modeling of the user's crack configuration and loading, a companion preprocessor program, gensurf, that generates the data for surf3d was also developed. The gensurf program is a three-dimensional mesh generator that requires minimal input and builds a complete data file for surf3d. The program surf3d is operational on Unix machines such as the CRAY Y-MP, CRAY-2, and Convex C-220.
Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.
Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun
2014-01-01
A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
PMID:25101321
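The voxel-based flag map used above to remove redundant points can be sketched as follows. Parameters and data are illustrative; the full system also maintains separate ground/nonground databases and a GPU texturing stage, which are omitted:

```python
import numpy as np

def voxel_filter(points, voxel_size=0.1):
    """Incremental voxel 'flag map': keep the first point per occupied voxel.

    Mirrors the redundant-point removal step: each incoming point is
    quantized to a voxel key; points landing in an already-flagged voxel
    are discarded, so the stored cloud stays bounded as scans accumulate.
    """
    flags = set()
    kept = []
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))
        if key not in flags:          # voxel not yet occupied
            flags.add(key)
            kept.append(p)
    return np.array(kept)

rng = np.random.default_rng(3)
# 1000 points inside the unit cube; many fall into the same voxel
cloud = rng.random((1000, 3))
reduced = voxel_filter(cloud, voxel_size=0.25)
```

Because the flag set is updated incrementally, the same loop works when point batches arrive over time from a moving robot, which is what makes the representation real-time friendly.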
Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A
2017-09-01
Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.
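The Kalman-filtering step used above to enforce temporal consistency can be illustrated on a single landmark coordinate with a scalar constant-position filter. This is a minimal sketch with illustrative noise parameters; the paper applies filtering within a full groupwise label-fusion and medial-modeling pipeline:

```python
import numpy as np

def kalman_smooth(measurements, q=1e-3, r=1e-2):
    """Scalar constant-position Kalman filter over a time series.

    q: process noise variance (how much the landmark may truly move
       between frames), r: measurement noise variance.
    """
    x, p = measurements[0], 1.0
    out = [x]
    for z in measurements[1:]:
        p = p + q                 # predict (position modeled as ~constant)
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the new frame's measurement
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 50)
truth = 1.0 + 0.1 * np.sin(2 * np.pi * t)        # slowly moving landmark
noisy = truth + rng.normal(scale=0.05, size=t.size)
smoothed = kalman_smooth(noisy)
```

The filtered trajectory is visibly less jittery frame to frame, which is the practical meaning of "temporal consistency" for segmentations of a beating valve.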
An approach to 3D model fusion in GIS systems and its application in a future ECDIS
NASA Astrophysics Data System (ADS)
Liu, Tao; Zhao, Depeng; Pan, Mingyang
2016-04-01
Three-dimensional (3D) computer graphics technology is widely used in various areas and causes profound changes. As an information carrier, 3D models are becoming increasingly important. The use of 3D models greatly helps to improve cartographic expression and design. 3D models are more visually efficient, quicker and easier to understand, and they can express more detailed geographical information. However, it is hard to efficiently and precisely fuse 3D models in local systems. The purpose of this study is to propose an automatic and precise approach to fuse 3D models in geographic information systems (GIS). It is the basic premise for subsequent uses of 3D models in local systems, such as attribute searching, spatial analysis, and so on. The basic steps of our research are: (1) pose adjustment by principal component analysis (PCA); (2) silhouette extraction by simple mesh silhouette extraction and silhouette merger; (3) size adjustment; (4) position matching. Finally, we implement the above methods in our system Automotive Intelligent Chart (AIC) 3D Electronic Chart Display and Information Systems (ECDIS). The fusion approach we propose is a common method and each calculation step is carefully designed. This approach solves the problem of cross-platform model fusion. 3D models can be from any source. They may be stored in the local cache or retrieved from the Internet, or may be manually created by different tools or automatically generated by different programs. The system can be any kind of 3D GIS system.
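Step (1), pose adjustment by PCA, can be sketched as follows in plain NumPy. This is illustrative only: axis-sign ambiguity and the subsequent silhouette, size, and position steps are ignored:

```python
import numpy as np

def pca_pose_adjust(points):
    """Rotate a 3D model into its principal-axes frame.

    Centers the vertices and aligns the dominant axes of variation with
    x, y, z, so models from different sources share a canonical pose
    before silhouette extraction and matching.
    """
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]      # largest variance first
    return centered @ eigvecs[:, order]

rng = np.random.default_rng(5)
# An elongated box-shaped "model" in an arbitrary orientation
box = rng.random((500, 3)) * np.array([10.0, 2.0, 1.0])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
aligned = pca_pose_adjust(box @ R.T)
```

After adjustment the longest extent of the model lies along x regardless of its original orientation, which is what makes the later silhouette comparison between models meaningful.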
Faghih Shojaei, M; Mohammadi, V; Rajabi, H; Darvizeh, A
2012-12-01
In this paper, a new numerical technique is presented to accurately model the geometrical and mechanical features of mollusk shells as a three-dimensional (3D) integrated volume. For this purpose, the Newton method is used to solve the nonlinear equations of the shell surfaces. The points of intersection on the shell surface are identified and the extra interior parts are removed. The meshing process is carried out with respect to the coordinates of each point of intersection. The final generated 3D mesh models accurately describe the spatial configuration of the mollusk shells. Moreover, the computational model closely matches the actual interior geometry of the shells as well as their exterior architecture. The direct generation technique is employed to generate a 3D finite element (FE) model in ANSYS 11. X-ray images are taken to show the close similarity between the interior geometry of the models and the actual samples. A scanning electron microscope (SEM) is used to provide information on the microstructure of the shells. In addition, a set of compression tests was performed on gastropod shell specimens to obtain their ultimate compressive strength. Close agreement between the experimental data and the relevant numerical results is demonstrated. Copyright © 2012 Elsevier Ltd. All rights reserved.
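The core numerical step, using Newton's method to solve the nonlinear surface equations and locate intersection points, can be sketched generically with a finite-difference Jacobian. The example system below is illustrative, not the actual shell-surface equations:

```python
import numpy as np

def newton_solve(f, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Multivariate Newton iteration with a forward-difference Jacobian,
    e.g. to locate a point satisfying several implicit surface equations."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = np.asarray(f(x), dtype=float)
        if np.linalg.norm(fx) < tol:
            return x
        n, m = fx.size, x.size
        J = np.empty((n, m))
        for j in range(m):  # build the Jacobian column by column
            xp = x.copy()
            xp[j] += h
            J[:, j] = (np.asarray(f(xp), dtype=float) - fx) / h
        x = x - np.linalg.lstsq(J, fx, rcond=None)[0]
    return x
```

For instance, intersecting a unit sphere with the plane z = 0.5 along the diagonal x = y reduces to a three-equation system solvable with this routine.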
Spectral self-action of THz emission from ionizing two-color laser pulses in gases
NASA Astrophysics Data System (ADS)
Cabrera-Granado, Eduardo; Chen, Yxing; Babushkin, Ihar; Bergé, Luc; Skupin, Stefan
2015-02-01
The spectrum of terahertz (THz) emission in gases via ionizing two-color femtosecond pulses is analyzed by means of a semi-analytic model and numerical simulations in 1D, 2D and 3D geometries taking into account propagation effects of both pump and THz fields. We show that produced THz signals interact with free electron trajectories and thus significantly influence further THz generation upon propagation, i.e., make the process inherently nonlocal. This self-action contributes to the observed strong spectral broadening of the generated THz field. We show that diffraction of the generated THz radiation is the limiting factor for the co-propagating low frequency amplitudes and thus for the self-action mechanism in 2D and 3D geometries.
Fast Geometric Consensus Approach for Protein Model Quality Assessment
Adamczak, Rafal; Pillardy, Jaroslaw; Vallat, Brinda K.
2011-01-01
Abstract Model quality assessment (MQA) is an integral part of protein structure prediction methods that typically generate multiple candidate models. The challenge lies in ranking and selecting the best models using a variety of physical, knowledge-based, and geometric consensus (GC)-based scoring functions. In particular, 3D-Jury and related GC methods assume that well-predicted (sub-)structures are more likely to occur frequently in a population of candidate models, compared to incorrectly folded fragments. While this approach is very successful in the context of diversified sets of models, identifying similar substructures is computationally expensive, since all pairs of models need to be superimposed using MaxSub or related heuristics for structure-to-structure alignment. Here, we consider a fast alternative in which structural similarity is assessed using 1D profiles, e.g., consisting of relative solvent accessibilities and secondary structures of equivalent amino acid residues in the respective models. We show that the new approach, dubbed 1D-Jury, makes it possible to implicitly compare and rank N models in O(N) time, as opposed to the quadratic complexity of 3D-Jury and related clustering-based methods. In addition, 1D-Jury avoids computationally expensive 3D superposition of pairs of models. At the same time, structural similarity scores based on 1D profiles are shown to correlate strongly with those obtained using MaxSub. In terms of the ability to select the best models as top candidates, 1D-Jury performs on par with other GC methods. Other potential applications of the new approach, including fast clustering of large numbers of intermediate structures generated by folding simulations, are discussed as well. PMID:21244273
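The 1D-Jury idea of scoring N models against a column-wise consensus of their 1D profiles, rather than superimposing all pairs, can be sketched as follows. This is a simplified consensus-scoring scheme to convey the O(N) structure, not the published algorithm:

```python
from collections import Counter

def jury_1d(profiles):
    """Rank models by agreement of their 1D profiles (e.g. per-residue
    secondary-structure states) with the column-wise consensus.

    Runs in O(N*L) for N models of profile length L, instead of the
    O(N^2) all-pairs superposition of 3D-Jury."""
    length = len(profiles[0])
    # one pass: count how often each state occurs at each position
    columns = [Counter(p[i] for p in profiles) for i in range(length)]
    # second pass: a model scores high if its states are popular
    return [sum(columns[i][p[i]] for i in range(length)) / (length * len(profiles))
            for p in profiles]
```

Models whose profiles agree with the majority at most positions score highest, mirroring the GC assumption that frequent substructures are likely correct.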
Saliency Detection for Stereoscopic 3D Images in the Quaternion Frequency Domain
NASA Astrophysics Data System (ADS)
Cai, Xingyu; Zhou, Wujie; Cen, Gang; Qiu, Weiwei
2018-06-01
Recent studies have shown that a remarkable distinction exists between human binocular and monocular viewing behaviors. Compared with two-dimensional (2D) saliency detection models, stereoscopic three-dimensional (S3D) image saliency detection is a more challenging task. In this paper, we propose a saliency detection model for S3D images. The final saliency map of this model is constructed from the local quaternion Fourier transform (QFT) sparse feature and global QFT log-Gabor feature. More specifically, the local QFT feature measures the saliency map of an S3D image by analyzing the location of a similar patch. The similar patch is chosen using a sparse representation method. The global saliency map is generated by applying the wake edge-enhanced gradient QFT map through a band-pass filter. The results of experiments on two public datasets show that the proposed model outperforms existing computational saliency models for estimating S3D image saliency.
Wang, Yunsheng; Weinacker, Holger; Koch, Barbara
2008-01-01
A procedure for both vertical canopy structure analysis and 3D single tree modelling based on Lidar point clouds is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud, whose point heights represent the absolute heights of the ground objects, is generated from the original Lidar raw point cloud. The main tree canopy layers and the height ranges of the layers are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, individual trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at different height levels is then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal process through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at different height levels can be derived. PMID:27879916
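The statistical layer detection, finding canopy layers from the height distribution of the normalized points, can be approximated with a simple density-threshold scan over a height histogram. The bin width and density threshold here are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def detect_layers(heights, bin_width=1.0, min_density=0.02):
    """Return contiguous height ranges where the normalized point density
    exceeds a threshold -- a simplified stand-in for the statistical
    canopy-layer detection described above."""
    bins = np.arange(0.0, heights.max() + bin_width, bin_width)
    counts, edges = np.histogram(heights, bins=bins)
    dens = counts / counts.sum()
    layers, start = [], None
    for i, d in enumerate(dens):
        if d >= min_density and start is None:
            start = edges[i]                  # layer begins
        elif d < min_density and start is not None:
            layers.append((start, edges[i]))  # layer ends
            start = None
    if start is not None:
        layers.append((start, edges[-1]))
    return layers
```

A low understorey and a distinct upper canopy then appear as two separate height ranges.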
3D Thermal and Mechanical Analysis of a Single Event Burnout
NASA Astrophysics Data System (ADS)
Peretti, Gabriela; Demarco, Gustavo; Romero, Eduardo; Tais, Carlos
2015-08-01
This paper presents a study of the thermal and mechanical behavior of power DMOS transistors during a Single Event Burnout (SEB) process. We use a cylindrical heat generation region to emulate the thermal and mechanical phenomena related to the SEB, which avoids the complexity of the mathematical treatment of the ion-device interaction. This work considers locating the heat generation region in positions that are more realistic than the ones used in previous work. To perform the study, we formulate and validate a new 3D model of the transistor that keeps the computational cost at a reasonable level. The resulting mathematical models are solved by means of the Finite Element Method. The simulation results show that the failure dynamics is dominated by the mechanical stress in the metal layer. Additionally, the time to failure depends on the heat source position for a given power and dimension of the generation region. The results suggest that 3D modeling should be considered for a detailed study of thermal and mechanical effects induced by SEBs.
Dynamic three-dimensional model of the coronary circulation
NASA Astrophysics Data System (ADS)
Lehmann, Glen; Gobbi, David G.; Dick, Alexander J.; Starreveld, Yves P.; Quantz, M.; Holdsworth, David W.; Drangova, Maria
2001-05-01
A realistic numerical three-dimensional (3D) model of the dynamics of human coronary arteries has been developed. High-resolution 3D images of the coronary arteries of an excised human heart were obtained using a C-arm based computed tomography (CT) system. Cine bi-plane coronary angiograms were then acquired from a patient with similar coronary anatomy. These angiograms were used to determine the vessel motion, which was applied to the static 3D coronary tree. Corresponding arterial bifurcations were identified in the 3D CT image and in the 2D angiograms. The 3D positions of the angiographic landmarks, which were known throughout the cardiac cycle, were used to warp the 3D image via a non-linear thin-plate spline algorithm. The result was a set of 30 dynamic volumetric images sampling a complete cardiac cycle. To the best of our knowledge, the model presented here is the first dynamic 3D model that provides a true representation of both the geometry and motion of a human coronary artery tree. In the future, similar models can be generated to represent different coronary anatomy and motion. Such models are expected to become an invaluable tool during the development of dynamic imaging techniques such as MRI, multi-slice CT and 3D angiography.
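The landmark-driven warping step can be sketched as a thin-plate-spline displacement field fitted to corresponding bifurcation landmarks. This uses SciPy's `RBFInterpolator` as a stand-in for the authors' implementation; the landmark correspondences are assumed given:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(points, src_landmarks, dst_landmarks):
    """Warp a 3D point set with a thin-plate-spline displacement field
    fitted to corresponding landmarks (sketch of the warping step)."""
    # fit a smooth displacement field from landmark correspondences
    disp = RBFInterpolator(src_landmarks, dst_landmarks - src_landmarks,
                           kernel='thin_plate_spline')
    # apply the interpolated displacement to every point
    return points + disp(points)
```

Evaluating the fitted field at every voxel position of the static CT volume, once per cardiac phase, yields the sequence of warped volumetric images.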
Generation, recognition, and consistent fusion of partial boundary representations from range images
NASA Astrophysics Data System (ADS)
Kohlhepp, Peter; Hanczak, Andrzej M.; Li, Gang
1994-10-01
This paper presents SOMBRERO, a new system for recognizing and locating 3D, rigid, non-moving objects from range data. The objects may be polyhedral or curved, partially occluding, touching or lying flush with each other. For data collection, we employ 2D time-of-flight laser scanners mounted on a moving gantry robot. By combining sensor and robot coordinates, we obtain 3D Cartesian coordinates. Boundary representations (Brep's) provide view-independent geometry models that are both efficiently recognizable and derivable automatically from sensor data. SOMBRERO's methods for generating, matching and fusing Brep's are highly synergetic. A split-and-merge segmentation algorithm with dynamic triangulation builds a partial (2.5D) Brep from scattered data. The recognition module matches this scene description with a model database and outputs recognized objects, their positions and orientations, and possibly surfaces corresponding to unknown objects. We present preliminary results in scene segmentation and recognition. Partial Brep's corresponding to different range sensors or viewpoints can be merged into a consistent, complete and irredundant 3D object or scene model. This fusion algorithm itself uses the recognition and segmentation methods.
How to Create a 3D Model from Scanned Data in 5 Easy Steps
NASA Technical Reports Server (NTRS)
Hagen, Richard
2017-01-01
Additive manufacturing is a cost-effective way to generate copies of damaged parts for demonstrations. Integrating scanned data of a damaged area into an existing model may be challenging. However, using the relatively inexpensive Nettfab software, one can generate a "watertight" model that is easy to print.
Scalable Multi-Platform Distribution of Spatial 3d Contents
NASA Astrophysics Data System (ADS)
Klimke, J.; Hagedorn, B.; Döllner, J.
2013-09-01
Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data throughout a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which makes them severely limited in terms of the size and complexity of the models they can handle. In this paper, we introduce a new approach for the provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. Generating image tiles with this service shifts the 3D rendering process away from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from the data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; and (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
First Prismatic Building Model Reconstruction from Tomosar Point Clouds
NASA Astrophysics Data System (ADS)
Sun, Y.; Shahzad, M.; Zhu, X.
2016-06-01
This paper demonstrates for the first time the potential of explicitly modelling individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007), and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. A coarse outline of each roof segment is then reconstructed and later refined using a quadtree-based regularization plus zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated on a large building (a convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images with the Tomo-GENESIS software developed at DLR.
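The height-jump gradient map that seeds the watershed segmentation can be sketched as follows; the jump threshold is an illustrative assumption:

```python
import numpy as np

def height_jump_map(dsm, jump=2.0):
    """Gradient-magnitude map of a DSM; cells exceeding `jump` metres per
    cell mark probable roof/facade discontinuities (a sketch of the
    gradient-map step that precedes watershed segmentation)."""
    gy, gx = np.gradient(dsm.astype(float))  # per-cell height differences
    mag = np.hypot(gx, gy)
    return mag, mag > jump
```

The binary jump mask localises building edges, while the continuous magnitude map serves as the relief on which watershed regions grow.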
Peters, James R; Campbell, Robert M; Balasubramanian, Sriram
2017-10-03
Generalized Procrustes Analysis (GPA) is a superimposition method used to generate size-invariant distributions of homologous landmark points. Several studies have used GPA to assess the three-dimensional (3D) shapes of, or to evaluate sex-related differences in, the human brain, skull, rib cage, pelvis and lower limbs. Previous studies of the pediatric thoracic vertebrae suggest that they may undergo changes in shape as a result of normative growth. This study uses GPA and second-order polynomial equations to model growth and age- and sex-related changes in shape of the pediatric thoracic spine. We present a thorough analysis of the normative 3D shape, size, and orientation of the pediatric thoracic spine and vertebrae, as well as equations which can be used to generate models of the thoracic spine and vertebrae for any age between 1 and 19 years. Such models could be used to create more accurate 3D reconstructions of the thoracic spine, generate improved age-specific geometries for finite element models (FEMs), and assist clinicians with patient-specific planning and surgical interventions for spine deformity. Copyright © 2017 Elsevier Ltd. All rights reserved.
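The GPA superimposition itself, centring, scaling and rotating each landmark configuration onto an evolving mean shape, can be sketched in NumPy. This is a minimal version of the classical algorithm, without the study's polynomial growth models:

```python
import numpy as np

def gpa(shapes, n_iter=10):
    """Generalized Procrustes Analysis: centre, scale and rotate each
    landmark configuration onto the evolving mean shape (minimal sketch)."""
    # centre each configuration and normalise its centroid size
    aligned = []
    for s in shapes:
        c = s - s.mean(axis=0)
        aligned.append(c / np.linalg.norm(c))
    aligned = np.array(aligned)
    for _ in range(n_iter):
        mean = aligned.mean(axis=0)
        mean /= np.linalg.norm(mean)
        for i, s in enumerate(aligned):
            # optimal rotation of s onto mean (orthogonal Procrustes)
            U, _, Vt = np.linalg.svd(s.T @ mean)
            R = U @ Vt
            if np.linalg.det(R) < 0:  # forbid reflections
                U[:, -1] *= -1
                R = U @ Vt
            aligned[i] = s @ R
    mean = aligned.mean(axis=0)
    mean /= np.linalg.norm(mean)
    return aligned, mean
```

The resulting size-invariant landmark distributions are what the age- and sex-specific polynomial models are then fitted to.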
Statistical modeling of 4D respiratory lung motion using diffeomorphic image registration.
Ehrhardt, Jan; Werner, René; Schmidt-Richberg, Alexander; Handels, Heinz
2011-02-01
Modeling of respiratory motion has become increasingly important in various applications of medical imaging (e.g., radiation therapy of lung cancer). Current modeling approaches are usually confined to intra-patient registration of 3D image data representing the individual patient's anatomy at different breathing phases. We propose an approach to generate a mean motion model of the lung based on thoracic 4D computed tomography (CT) data of different patients to extend the motion modeling capabilities. Our modeling process consists of three steps: an intra-subject registration to generate subject-specific motion models, the generation of an average shape and intensity atlas of the lung as anatomical reference frame, and the registration of the subject-specific motion models to the atlas in order to build a statistical 4D mean motion model (4D-MMM). Furthermore, we present methods to adapt the 4D mean motion model to a patient-specific lung geometry. In all steps, a symmetric diffeomorphic nonlinear intensity-based registration method was employed. The Log-Euclidean framework was used to compute statistics on the diffeomorphic transformations. The presented methods are then used to build a mean motion model of respiratory lung motion using thoracic 4D CT data sets of 17 patients. We evaluate the model by applying it for estimating respiratory motion of ten lung cancer patients. The prediction is evaluated with respect to landmark and tumor motion, and the quantitative analysis results in a mean target registration error (TRE) of 3.3 ±1.6 mm if lung dynamics are not impaired by large lung tumors or other lung disorders (e.g., emphysema). With regard to lung tumor motion, we show that prediction accuracy is independent of tumor size and tumor motion amplitude in the considered data set. However, tumors adhering to non-lung structures degrade local lung dynamics significantly and the model-based prediction accuracy is lower in these cases. 
The statistical respiratory motion model is capable of providing valuable prior knowledge in many fields of applications. We present two examples of possible applications in radiation therapy and image guided diagnosis.
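The Log-Euclidean framework used above computes statistics by averaging transformations in the matrix-logarithm domain and mapping back with the exponential. A minimal sketch for transformation matrices; the paper applies the idea to diffeomorphic transformations, not to plain matrices:

```python
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_mean(transforms):
    """Log-Euclidean mean of transformation matrices: average in the
    matrix-logarithm domain, then map back with the matrix exponential."""
    logs = [logm(T) for T in transforms]
    return np.real_if_close(expm(np.mean(logs, axis=0)))
```

Unlike a naive arithmetic mean, this averaging respects the multiplicative structure of transformations; e.g. the Log-Euclidean mean of scalings by 2 and by 8 is a scaling by 4, their geometric mean.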
Towards a 3d Spatial Urban Energy Modelling Approach
NASA Astrophysics Data System (ADS)
Bahu, J.-M.; Koch, A.; Kremers, E.; Murshed, S. M.
2013-09-01
Today's need to reduce the environmental impact of energy use imposes dramatic changes on energy infrastructure and existing demand patterns (e.g. buildings) corresponding to their specific context. In addition, future energy systems are expected to integrate a considerable share of fluctuating power sources and equally a high share of distributed generation of electricity. Energy system models capable of describing such future systems and allowing the simulation of the impact of these developments thus require a spatial representation in order to reflect the local context and the boundary conditions. This paper describes two recent research approaches developed at EIFER in the fields of (a) geo-localised simulation of heat energy demand in cities based on 3D morphological data and (b) spatially explicit Agent-Based Models (ABM) for the simulation of smart grids. 3D city models were used to assess the solar potential and heat energy demand of residential buildings, which enables cities to target building refurbishment potentials. Distributed energy systems require innovative modelling techniques in which individual components are represented and can interact. With this approach, several smart grid demonstrators were simulated in which heterogeneous models are spatially represented. Coupling 3D geodata with energy system ABMs holds different advantages for both approaches. On the one hand, energy system models can be enhanced with high-resolution data from 3D city models and their semantic relations. Furthermore, they allow for spatial analysis and visualisation of the results, with emphasis on spatial and structural correlations among the different layers (e.g. infrastructure, buildings, administrative zones), to provide an integrated approach. On the other hand, 3D models can benefit from a more detailed system description of energy infrastructure, representing dynamic phenomena and high-resolution models of energy use at the component level.
The proposed modelling strategies conceptually and practically integrate urban spatial and energy planning approaches. The combined modelling approach that will be developed based on the described sectorial models holds the potential to represent hybrid energy systems coupling distributed generation of electricity with thermal conversion systems.
3D Model of Al Zubarah Fortress in Qatar - Terrestrial Laser Scanning vs. Dense Image Matching
NASA Astrophysics Data System (ADS)
Kersten, T.; Mechelke, K.; Maziull, L.
2015-02-01
In September 2011 the fortress Al Zubarah, built in 1938 as a typical Arabic fortress and restored in 1987 as a museum, was recorded by the HafenCity University Hamburg using terrestrial laser scanning with the IMAGER 5006h and digital photogrammetry for the Qatar Museum Authority within the framework of the Qatar Islamic Archaeology and Heritage Project. One goal of the object recording was to provide detailed 2D/3D documentation of the fortress, which was used to complete specific detailed restoration work in recent years. From the registered laser scanning point clouds, several cuttings and 2D plans were generated, as well as a 3D surface model by triangle meshing. Additionally, point clouds and surface models were automatically generated from digital imagery from a Nikon D70 using the open-source software Bundler/PMVS2, the free software VisualSFM, the Autodesk web service 123D Catch beta, and the low-cost software Agisoft PhotoScan. These outputs were compared with the results from terrestrial laser scanning. The point clouds and surface models derived from imagery could not achieve the same geometrical accuracy as laser scanning (i.e. 1-2 cm).
3D gaze tracking method using Purkinje images on eye optical model and pupil
NASA Astrophysics Data System (ADS)
Lee, Ji Woo; Cho, Chul Woo; Shin, Kwang Yong; Lee, Eui Chul; Park, Kang Ryoung
2012-05-01
Gaze tracking detects the position a user is looking at. Most research on gaze estimation has focused on calculating the X, Y gaze position on a 2D plane. However, as the importance of stereoscopic displays and 3D applications has increased greatly, research into 3D gaze estimation of not only the X, Y gaze position but also the Z gaze position has gained attention for the development of next-generation interfaces. In this paper, we propose a new method for estimating the 3D gaze position based on the illuminative reflections (Purkinje images) on the surfaces of the cornea and lens, considering the 3D optical structure of the human eye model. This research is novel in the following four ways compared with previous work. First, we theoretically analyze the generated models of Purkinje images based on the 3D human eye model for 3D gaze estimation. Second, the relative positions of the first and fourth Purkinje images to the pupil center, the inter-distance between these two Purkinje images, and the pupil size are used as the features for calculating the Z gaze position. The pupil size is used on the basis of the fact that pupil accommodation happens according to the gaze position in the Z direction. Third, with these features as inputs, the final Z gaze position is calculated using a multi-layered perceptron (MLP). Fourth, the X, Y gaze position on the 2D plane is calculated from the position of the pupil center by a geometric transform that takes the calculated Z gaze position into account. Experimental results showed that the average errors of the 3D gaze estimation were about 0.96° (0.48 cm) on the X-axis, 1.60° (0.77 cm) on the Y-axis, and 4.59 cm along the Z-axis in 3D space.
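The regression from Purkinje-image features to the Z gaze position uses a multi-layered perceptron. Below is a generic one-hidden-layer MLP trained by plain gradient descent, sketched in NumPy under the assumption of four input features; the real system's architecture, features and training details are not specified here:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.05, epochs=2000, seed=0):
    """One-hidden-layer MLP regressor trained with full-batch gradient
    descent; a generic stand-in for the MLP mapping Purkinje-image
    features to the Z gaze position."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((X.shape[1], hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)) * 0.5
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)   # hidden activations
        pred = h @ W2 + b2         # linear output layer
        err = pred - y[:, None]
        # backpropagation of the mean-squared-error gradient
        gW2 = h.T @ err / len(X)
        gb2 = err.mean(axis=0)
        gh = (err @ W2.T) * (1 - h**2)
        gW1 = X.T @ gh / len(X)
        gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()
```

In practice such a regressor would be trained on calibration data with known Z gaze positions and then evaluated per frame.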
Comparison of Image Generation And Processing Techniques For 3D Reconstruction of The Human Skull
2001-10-25
[Fragmentary abstract] …inexpensive Microscribe (3D digitizer) with a standard, widely used and expensive CT-Scan and/or MRI for 3D reconstruction of a human skull, which will be… Microscribe 3D digitizing unit, and another one using the CT-Scans (2D cross-sections) obtained from a GE scanner. Both models were then subjected to stress… these methods are still elaborate, expensive and not readily accessible. Using the hand-held digitizer, the Microscribe, X, Y and Z coordinates…
Numerical simulation of aerodynamic characteristics of multi-element wing with variable flap
NASA Astrophysics Data System (ADS)
Lv, Hongyan; Zhang, Xinpeng; Kuang, Jianghong
2017-10-01
Based on the Reynolds-averaged Navier-Stokes equations, the mesh generation technique and the geometric modeling method, the influence of the Spalart-Allmaras turbulence model on the aerodynamic characteristics is investigated. In order to study a typical aircraft configuration, a wing similar to the DLR-F11 is selected. First, the 3D model of the wing is established, and 3D models for the flight, take-off and landing states are established. The mesh structure of the flow field is constructed and the mesh is generated by mesh generation software. Second, by comparing the numerical simulation with the experimental data, the prediction of the aerodynamic characteristics of the multi-section airfoil in the take-off and landing stages is validated. Finally, the two flap deflection angles for take-off and landing are calculated, which provides useful guidance for the aerodynamic characteristics of the wing and the design of the wing's flap angle.
A Deformable Generic 3D Model of Haptoral Anchor of Monogenean
Teo, Bee Guan; Dhillon, Sarinder Kaur; Lim, Lee Hong Susan
2013-01-01
In this paper, a digital 3D model which allows for visualisation in three dimensions and interactive manipulation is explored as a tool to help us understand the structural morphology and elucidate the functions of morphological structures of fragile microorganisms which defy live studies. We developed a deformable generic 3D model of the haptoral anchor of dactylogyridean monogeneans that can subsequently be deformed into different desired anchor shapes by using a direct manipulation deformation technique. We used point primitives to construct the rectangular building blocks to develop our deformable 3D model. Point primitives are manually marked on a 2D illustration of an anchor on Cartesian graph paper, and a set of Cartesian coordinates for each point primitive is manually extracted from the graph paper. A Python script is then written in Blender to construct 3D rectangular building blocks based on the Cartesian coordinates. The rectangular building blocks are stacked on top of or beside each other following the Cartesian coordinates of their respective point primitives. More point primitives are added at the sites in the 3D model where more structural variations are likely to occur, in order to generate complex anchor structures. We used the Catmull-Clark subdivision surface modifier to smooth the surface and edges of the generic 3D model to obtain a smoother and more natural 3D shape, and the antialiasing option to reduce the jagged edges of the 3D model. This deformable generic 3D model can be deformed into different desired 3D anchor shapes through the direct manipulation deformation technique by aligning the vertices (pilot points) of the newly developed deformable generic 3D model onto 2D illustrations of the desired shapes and moving the vertices until the desired 3D shapes are formed. In this generic 3D model, all the vertices present are deployed for displacement during deformation. PMID:24204903
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. In order to display the real object in the 360-degree viewing zone, multiple depth cameras are utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed according to the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display a real object in a 360-degree viewing zone.
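Combining the per-camera point clouds into a single synthetic model requires estimating rigid transforms between views. Below is a sketch using the Kabsch least-squares alignment, assuming point correspondences between views are known; the paper's "special point cloud registration method" is not detailed in the abstract:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst:
    the core step when merging point clouds from multiple depth cameras
    with known correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # avoid reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def merge_clouds(clouds, correspondences):
    """Register every additional cloud onto the first one via shared
    correspondences and concatenate into one synthetic model."""
    merged = [clouds[0]]
    for cloud, (src_pts, dst_pts) in zip(clouds[1:], correspondences):
        R, t = rigid_register(src_pts, dst_pts)
        merged.append(cloud @ R.T + t)
    return np.vstack(merged)
```

Each additional camera's cloud is transformed into the reference frame before the elemental image arrays are rendered from the merged model.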
NASA Astrophysics Data System (ADS)
Gong, K.; Fritsch, D.
2018-05-01
Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized public multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-processing step, we filter all the possible image pairs according to incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply a median filter to generate the fused point cloud and DSM. By comparison with the reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs the surface including small structures, and that the fused DSM generated by our pipeline is accurate and robust.
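The median-filter fusion of the per-pair DSMs can be sketched as a per-cell median over the co-registered DSM stack, ignoring missing cells (NaN). The median makes the fused surface robust to per-pair matching outliers; array shapes here are illustrative assumptions:

```python
import numpy as np

def fuse_dsms(dsm_stack):
    """Fuse co-registered per-pair DSMs into one surface with a per-cell
    median, ignoring missing cells marked as NaN (sketch of the fusion
    step after quasi-ground-plane alignment)."""
    stack = np.stack(dsm_stack)        # shape: (n_pairs, rows, cols)
    return np.nanmedian(stack, axis=0)
```

A gross matching error in one stereo pair leaves the fused height unchanged as long as the majority of pairs agree.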
MO-A-9A-01: Innovation in Medical Physics Practice: 3D Printing Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehler, E; Perks, J; Rasmussen, K
2014-06-15
3D printing, also called additive manufacturing, has great potential to advance the field of medicine. Many medical uses have been exhibited, from facial reconstruction to the repair of pulmonary obstructions. The strength of 3D printing is to quickly convert a 3D computer model into a physical object. Medical use of 3D models is already ubiquitous with technologies such as computed tomography and magnetic resonance imaging. Thus tailoring 3D printing technology to medical functions has the potential to impact patient care. This session will discuss applications to the field of Medical Physics. Topics discussed will include an introduction to 3D printing methods as well as examples of real-world uses of 3D printing spanning clinical and research practice in diagnostic imaging and radiation therapy. The session will also compare 3D printing to other manufacturing processes and discuss a variety of uses of 3D printing technology outside the field of Medical Physics. Learning Objectives: Understand the technologies available for 3D printing; understand methods to generate 3D models; identify the benefits and drawbacks of rapid prototyping / 3D printing; understand the potential issues related to clinical use of 3D printing.
Use of 3D models of vascular rings and slings to improve resident education.
Jones, Trahern W; Seckeler, Michael D
2017-09-01
Three-dimensional (3D) printing is a manufacturing method by which an object is created in an additive process; it can be used with medical imaging data to generate accurate physical reproductions of organs and tissues for a variety of applications. We hypothesized that using 3D printed models of congenital cardiovascular lesions to supplement an educational lecture would improve learners' scores on a board-style examination. Patients with normal and abnormal aortic arches were selected and anonymized to generate 3D printed models. A cohort of pediatric and combined pediatric/emergency medicine residents were then randomized to intervention and control groups. Each participant was given a subjective survey and an objective board-style pretest. Each group received the same 20-minute lecture on vascular rings and slings. During the intervention group's lecture, 3D printed physical models of each lesion were distributed for inspection. After each lecture, both groups completed the same subjective survey and objective board-style test to assess their comfort with and post-lecture knowledge of vascular rings. There were no differences in the basic demographics of the two groups. After the lectures, both groups' subjective comfort levels increased. Both groups' scores on the objective test improved, but the intervention group scored higher on the posttest. This study demonstrated a measurable gain in knowledge about vascular rings and pulmonary artery slings with the addition of 3D printed models of the defects. Future applications of this teaching modality could extend to other congenital cardiac lesions and different learners. © 2017 Wiley Periodicals, Inc.
Jung, Joon-Hee
2016-10-11
Here, the global atmospheric models based on the Multi-scale Modeling Framework (MMF) are able to explicitly resolve subgrid-scale processes by using embedded 2-D Cloud-Resolving Models (CRMs). Up to now, however, those models do not include the orographic effects on the CRM grid scale. This study shows that the effects of CRM grid-scale orography can be simulated reasonably well by the Quasi-3-D MMF (Q3D MMF), which has been developed as a second-generation MMF. In the Q3D framework, the surface topography can be included in the CRM component by using a block representation of the mountains, so that no smoothing of the topographic height is necessary. To demonstrate the performance of such a model, the orographic effects over a steep mountain are simulated in an idealized experimental setup with each of the Q3D MMF and the full 3-D CRM. The latter is used as a benchmark. Comparison of the results shows that the Q3D MMF is able to reproduce the horizontal distribution of orographic precipitation and the flow changes around mountains as simulated by the 3-D CRM, even though the embedded CRMs of the Q3D MMF recognize only some aspects of the complex 3-D topography. It is also shown that the use of 3-D CRMs in the Q3D framework, rather than 2-D CRMs, has positive impacts on the simulation of wind fields but does not substantially change the simulated precipitation.
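The "block representation" idea above can be illustrated with a toy sketch (not the Q3D MMF code): the raw terrain height is kept unsmoothed, and each grid cell is simply marked as solid mountain or air.

```python
import numpy as np

def block_mask(surface_height, z_levels):
    """surface_height: (nx,) terrain height per column [m]
    z_levels: (nz,) cell-center heights [m]
    Returns an (nz, nx) boolean mask, True where a cell lies inside the
    mountain block. No smoothing of the terrain is applied."""
    return z_levels[:, None] < surface_height[None, :]

terrain = np.array([0.0, 500.0, 1500.0, 500.0, 0.0])   # idealized steep peak
levels = np.array([250.0, 750.0, 1250.0])              # CRM cell centers
mask = block_mask(terrain, levels)
print(mask.astype(int))   # 1 = solid cell, 0 = air
```

The central column stays solid up to the highest level, giving the stair-stepped "block mountain" that a smoothed terrain representation would round off.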
NASA Astrophysics Data System (ADS)
Santos-Filho, Osvaldo A.; Mishra, Rama K.; Hopfinger, A. J.
2001-09-01
Free energy force field (FEFF) 3D-QSAR analysis was used to construct ligand-receptor binding models for a set of 18 structurally diverse antifolates including pyrimethamine, cycloguanil, methotrexate, aminopterin and trimethoprim, and 13 pyrrolo[2,3-d]pyrimidines. The molecular target (`receptor') used was a 3D-homology model of a specific mutant type of Plasmodium falciparum (Pf) dihydrofolate reductase (DHFR). The dependent variable of the 3D-QSAR models is the IC50 inhibition constant for the specific mutant type of PfDHFR. The independent variables of the 3D-QSAR models (the descriptors) are scaled energy terms of a modified first-generation AMBER force field combined with a hydration shell aqueous solvation model, and a collection of 2D-QSAR descriptors often used in QSAR studies. Multiple-temperature molecular dynamics simulation (MDS) and the genetic function approximation (GFA) were employed, using partial least squares (PLS) and multidimensional linear regressions as the fitting functions, to develop FEFF 3D-QSAR models for the binding process. The significant FEFF energy terms in the best 3D-QSAR models include energy contributions of the direct ligand-receptor interaction. Some changes in conformational energy terms of the ligand due to binding to the enzyme are also found to be important descriptors. The FEFF 3D-QSAR models indicate some structural features perhaps relevant to the mechanism of resistance of the PfDHFR to current antimalarials. The FEFF 3D-QSAR models are also compared to receptor-independent (RI) 4D-QSAR models developed in an earlier study and subsequently refined using recently developed generalized alignment rules.
Technical Note: A 3-D rendering algorithm for electromechanical wave imaging of a beating heart.
Nauleau, Pierre; Melki, Lea; Wan, Elaine; Konofagou, Elisa
2017-09-01
Arrhythmias can be treated by ablating the heart tissue in the regions of abnormal contraction. The current clinical standard provides electroanatomic 3-D maps to visualize the electrical activation and locate the arrhythmogenic sources. However, the procedure is time-consuming and invasive. Electromechanical wave imaging is an ultrasound-based noninvasive technique that can provide 2-D maps of the electromechanical activation of the heart. In order to fully visualize the complex 3-D pattern of activation, several 2-D views are acquired and processed separately. They are then manually registered with 3-D rendering software to generate a pseudo-3-D map. However, this last step is operator-dependent and time-consuming. This paper presents a method to generate a full 3-D map of the electromechanical activation using multiple 2-D images. Two canine models were considered to illustrate the method: one in normal sinus rhythm and one paced from the lateral region of the heart. Four standard echographic views of each canine heart were acquired. Electromechanical wave imaging was applied to generate four 2-D activation maps of the left ventricle. The radial positions and activation timings of the walls were automatically extracted from those maps. In each slice, from apex to base, these values were interpolated around the circumference to generate a full 3-D map. In both cases, a 3-D activation map and a cine-loop of the propagation of the electromechanical wave were automatically generated. The 3-D map, showing the electromechanical activation timings overlaid on realistic anatomy, assists with the visualization of the sources of earlier activation (which are potential arrhythmogenic sources). The earliest sources of activation corresponded to the expected ones: septum for the normal rhythm and lateral for the pacing case. The proposed technique automatically provides a 3-D electromechanical activation map with realistic anatomy.
This represents a step towards a noninvasive tool to efficiently localize arrhythmias in 3-D. © 2017 American Association of Physicists in Medicine.
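The circumferential interpolation step described above can be sketched numerically. This is a minimal illustration under assumed geometry (four views at evenly spaced azimuths, timings only); the actual method also interpolates the wall's radial position per slice.

```python
import numpy as np

def interpolate_ring(view_angles_deg, view_timings_ms, n_out=360):
    """Periodic linear interpolation of activation timings around one
    short-axis slice, given sparse samples from a few echographic views."""
    out_ang = np.arange(n_out) * 360.0 / n_out
    # period=360 makes np.interp wrap around the circumference
    return np.interp(out_ang, view_angles_deg, view_timings_ms, period=360.0)

# Hypothetical timings sampled by four views at 0, 90, 180 and 270 degrees
timings = interpolate_ring([0, 90, 180, 270], [20.0, 40.0, 60.0, 40.0])
print(timings[0], timings[90], timings[45])  # 20.0 40.0 30.0
```

Stacking such rings from apex to base yields the full 3-D activation map; the earliest-activated azimuths then stand out directly on the rendered anatomy.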
Training-Image Based Geostatistical Inversion Using a Spatial Generative Adversarial Neural Network
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Jacques, Diederik; Linde, Niklas
2018-01-01
Probabilistic inversion within a multiple-point statistics framework is often computationally prohibitive for high-dimensional problems. To partly address this, we introduce and evaluate a new training-image based inversion approach for complex geologic media. Our approach relies on a deep neural network of the generative adversarial network (GAN) type. After training using a training image (TI), our proposed spatial GAN (SGAN) can quickly generate 2-D and 3-D unconditional realizations. A key characteristic of our SGAN is that it defines a (very) low-dimensional parameterization, thereby allowing for efficient probabilistic inversion using state-of-the-art Markov chain Monte Carlo (MCMC) methods. In addition, available direct conditioning data can be incorporated within the inversion. Several 2-D and 3-D categorical TIs are first used to analyze the performance of our SGAN for unconditional geostatistical simulation. Training our deep network can take several hours. After training, realizations containing a few million pixels/voxels can be produced in a matter of seconds. This makes it especially useful for simulating many thousands of realizations (e.g., for MCMC inversion), as the relative cost of the training per realization diminishes with the considered number of realizations. Synthetic inversion case studies involving 2-D steady state flow and 3-D transient hydraulic tomography, with and without direct conditioning data, are used to illustrate the effectiveness of our proposed SGAN-based inversion. For the 2-D case, the inversion rapidly explores the posterior model distribution. For the 3-D case, the inversion recovers model realizations that fit the data close to the target level and visually resemble the true model well.
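The core idea, MCMC in the generator's low-dimensional latent space, can be illustrated with a conceptual toy. Everything below is a stand-in: the quadratic "generator", the 8-dimensional latent space, the noise level and the synthetic data are assumptions, not the paper's SGAN or its case studies.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    """Stub for G(z) -> model realization; a trained SGAN would map a short
    latent vector to a full 2-D/3-D geologic realization."""
    return np.tanh(z)  # keeps outputs bounded, like categorical proportions

def log_likelihood(z, data, sigma=0.05):
    resid = generator(z) - data
    return -0.5 * np.sum(resid**2) / sigma**2

z_true = rng.normal(size=8)          # "true" latent model
data = generator(z_true)             # noise-free synthetic observations

# Random-walk Metropolis entirely in latent space: cheap because each
# proposal only perturbs 8 numbers, not millions of voxels.
z, accepted = np.zeros(8), 0
for _ in range(5000):
    z_prop = z + 0.1 * rng.normal(size=8)
    if np.log(rng.random()) < log_likelihood(z_prop, data) - log_likelihood(z, data):
        z, accepted = z_prop, accepted + 1

print(accepted, float(np.max(np.abs(generator(z) - data))))
```

The chain converges to latent vectors whose generated "realizations" fit the data, which is exactly why a very low-dimensional parameterization makes MCMC inversion tractable.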
NASA Astrophysics Data System (ADS)
Dahdouh, S.; Varsier, N.; Serrurier, A.; De la Plata, J.-P.; Anquez, J.; Angelini, E. D.; Wiart, J.; Bloch, I.
2014-08-01
Fetal dosimetry studies require the development of accurate numerical 3D models of the pregnant woman and the fetus. This paper proposes a 3D articulated fetal growth model covering the main phases of pregnancy and a pregnant woman model combining the utero-fetal structures and a deformable non-pregnant woman body envelope. The structures of interest were automatically or semi-automatically (depending on the stage of pregnancy) segmented from a database of images and surface meshes were generated. By interpolating linearly between fetal structures, each one can be generated at any age and in any position. A method is also described to insert the utero-fetal structures in the maternal body. A validation of the fetal models is proposed, comparing a set of biometric measurements to medical reference charts. The usability of the pregnant woman model in dosimetry studies is also investigated, with respect to the influence of the abdominal fat layer.
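The age interpolation described above can be sketched in a few lines, assuming, as in the paper's matched surface meshes, that corresponding structures at two stages share a vertex correspondence. The toy "meshes" and gestational ages below are made up.

```python
import numpy as np

def interpolate_mesh(verts_a, age_a, verts_b, age_b, age):
    """Linearly blend two corresponding vertex arrays to an intermediate age."""
    t = (age - age_a) / (age_b - age_a)
    return (1.0 - t) * verts_a + t * verts_b

mesh_20w = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])   # toy mesh, week 20
mesh_30w = np.array([[0.0, 2.0, 0.0], [14.0, 0.0, 0.0]])   # toy mesh, week 30
mesh_25w = interpolate_mesh(mesh_20w, 20, mesh_30w, 30, 25)
print(mesh_25w)   # vertex-wise midpoint of the two stages
```

With real surface meshes the same blend, applied per structure, yields a fetal model at any intermediate gestational age.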
Fast, Automated, Scalable Generation of Textured 3D Models of Indoor Environments
2014-12-18
...expensive travel and on-site visits. Different applications require models of different complexities, both with and without furniture geometry. ... the environment and to localize the system in the environment over time. The datasets shown in this paper were generated by a backpack-mounted system that uses 2D ... [A] voxel is found to intersect the line segment from a scanner to a corresponding scan point. If a laser passes through a voxel, that voxel is considered ...
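The last fragment describes a standard free-space carving rule for occupancy mapping. A minimal sketch of that rule under assumed geometry (a sampling-based traversal rather than an exact voxel walk; grid size, labels and points are illustrative):

```python
import numpy as np

def carve(grid_shape, voxel_size, scanner, point, n_samples=200):
    """Label voxels along the segment scanner -> scan point as free (the
    laser passed through them) and the endpoint voxel as occupied.
    0 = unknown, 1 = free, 2 = occupied."""
    labels = np.zeros(grid_shape, dtype=np.int8)
    for t in np.linspace(0.0, 1.0, n_samples):
        p = scanner + t * (point - scanner)
        i, j, k = (p // voxel_size).astype(int)
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1] and 0 <= k < grid_shape[2]:
            labels[i, j, k] = 1
    end = (point // voxel_size).astype(int)
    labels[tuple(end)] = 2          # the hit voxel itself is occupied
    return labels

labels = carve((8, 8, 8), 1.0, np.array([0.5, 0.5, 0.5]), np.array([6.5, 0.5, 0.5]))
print(labels[0, 0, 0], labels[3, 0, 0], labels[6, 0, 0])  # 1 1 2
```

Accumulating these labels over all scans yields the free/occupied voxel map from which surface geometry can be extracted.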
Crossing the Virtual World Barrier with OpenAvatar
NASA Technical Reports Server (NTRS)
Joy, Bruce; Kavle, Lori; Tan, Ian
2012-01-01
There are multiple standards and formats for 3D models in virtual environments. The problem is that there is no open source platform for generating models out of discrete parts; as a result, new games, virtual worlds and simulations must "reinvent the wheel" when they want to enable their users to create their own avatars or easily customize in-world objects. OpenAvatar is designed to provide a framework that allows artists and programmers to create reusable assets which end users can combine to generate vast numbers of complete models that are unique and functional. OpenAvatar thus facilitates the modularization of 3D models, allowing parts to be interchanged within a set of logical constraints.
A database for reproducible manipulation research: CapriDB - Capture, Print, Innovate.
Pokorny, Florian T; Bekiroglu, Yasemin; Pauwels, Karl; Butepage, Judith; Scherer, Clara; Kragic, Danica
2017-04-01
We present a novel approach and database which combines the inexpensive generation of 3D object models via monocular or RGB-D camera images with 3D printing and a state-of-the-art object tracking algorithm. Unlike recent efforts towards the creation of 3D object databases for robotics, our approach does not require expensive and controlled 3D scanning setups, and aims to enable anyone with a camera to scan, print and track complex objects for manipulation research. The proposed approach results in detailed textured mesh models whose 3D printed replicas provide close approximations of the originals. A key motivation for utilizing 3D printed objects is the ability to precisely control and vary object properties such as size, material properties and mass distribution in the 3D printing process, to obtain reproducible conditions for robotic manipulation research. We present CapriDB, an extensible database resulting from this approach, initially containing 40 textured and 3D printable mesh models together with tracking features to facilitate the adoption of the proposed approach.
Tam, Matthew David; Laycock, Stephen David; Jayne, David; Babar, Judith; Noble, Brendon
2013-08-01
This report concerns a 67-year-old male patient with known advanced relapsing polychondritis complicated by tracheobronchial chondromalacia, who is increasingly symptomatic; therapeutic options such as tracheostomy and stenting procedures are being considered. The DICOM files from the patient's dynamic chest CT, in its inspiratory and expiratory phases, were used to generate stereolithography (STL) files and hence to print 3-D models of the patient's trachea and central airways. The four full-sized models allowed better understanding of the extent and location of any stenosis or malacic change, and should aid any planned future stenting procedures. The future possibility of using the models as scaffolding to generate a new cartilaginous upper airway using regenerative medical techniques is also discussed.
NASA Astrophysics Data System (ADS)
Bognot, J. R.; Candido, C. G.; Blanco, A. C.; Montelibano, J. R. Y.
2018-05-01
Monitoring the progress of a building's construction is critical in construction management. However, measuring construction progress is still a manual, time-consuming, error-prone and tedious process of analysis, leading to delays, additional costs and effort. The main goal of this research is to develop a methodology for building construction progress monitoring based on a 3D as-built model of the building from unmanned aerial system (UAS) images, a 4D as-planned model (with the construction schedule integrated) and GIS analysis. Monitoring was done by capturing videos of the building with a camera-equipped UAS. Still images were extracted, filtered and bundle-adjusted, and the 3D as-built model was generated using open source photogrammetric software. The as-planned model was generated from digitized CAD drawings using GIS. The 3D as-built model was aligned with the 4D as-planned model of the building, formed by extrusion of building elements and integration of the construction's planned schedule. The construction progress is visualized by color-coding the building elements in the 3D model. The developed methodology was applied to data obtained from an actual construction site. Accuracy in detecting `built' or `not built' building elements ranges from 82 % to 84 %, with a precision of 50-72 %. Quantified progress in terms of the number of building elements is 21.31 % (November 2016), 26.84 % (January 2017) and 44.19 % (March 2017). The results can be used as an input for monitoring the progress of construction projects and improving the related decision-making process.
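The accuracy and precision figures quoted above follow the usual definitions for a binary `built'/`not built' classification. A small illustration with made-up counts (not the paper's data):

```python
def accuracy_precision(tp, fp, tn, fn):
    """tp: elements correctly detected as built; fp: detected built but not;
    tn: correctly detected as not built; fn: built but missed."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    return accuracy, precision

acc, prec = accuracy_precision(tp=42, fp=18, tn=60, fn=2)
print(round(acc, 3), round(prec, 3))  # 0.836 0.7
```

A high accuracy with a much lower precision, as reported in the study, indicates many false `built' detections relative to the true ones.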
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souris, K; Barragan Montero, A; Di Perri, D
Purpose: The shift in the mean position of a moving tumor, also known as "baseline shift", has been modeled in order to automatically generate uncertainty scenarios for the assessment and robust optimization of proton therapy treatments in lung cancer. Methods: An average CT scan and a mid-position CT scan (MidPCT) of the patient at planning time are first generated from 4D-CT data. The mean position of the tumor along the breathing cycle is represented by the GTV contour in the MidPCT. Several studies have reported both systematic and random variations of the mean tumor position from fraction to fraction. Our model can simulate this baseline shift by generating a local deformation field that moves the tumor on all phases of the 4D-CT without creating any non-physical artifact. The deformation field comprises normal and tangential components with respect to the lung wall, in order to allow the tumor to slip within the lung instead of deforming the lung surface. The deformation field is finally smoothed in order to enforce its continuity. Two 4D-CT series acquired at a one-week interval were used to validate the model. Results: Based on the first 4D-CT set, the model was able to generate a third 4D-CT that reproduced the 5.8 mm baseline shift measured in the second 4D-CT. The water equivalent thickness (WET) of the voxels was computed for the three average CTs. The root mean square deviation of the WET in the GTV is 0.34 mm between week 1 and week 2, and 0.08 mm between the simulated data and week 2. Conclusion: Our model can be used to automatically generate uncertainty scenarios for robustness analysis of a proton therapy plan. The generated scenarios can also feed a TPS equipped with a robust optimizer. Kevin Souris, Ana Barragan, and Dario Di Perri are financially supported by Televie Grants from F.R.S.-FNRS.
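The normal/tangential decomposition described in the Methods can be sketched with toy vectors (not the actual deformation model): the tangential part lets the tumor slide along the lung wall, while the normal part is what would deform the surface.

```python
import numpy as np

def split_normal_tangential(shift, wall_normal):
    """Decompose a baseline-shift vector into components normal and
    tangential to the lung wall at a given point."""
    n = wall_normal / np.linalg.norm(wall_normal)
    normal_part = np.dot(shift, n) * n
    tangential_part = shift - normal_part
    return normal_part, tangential_part

shift = np.array([3.0, 4.0, 0.0])      # mm, hypothetical baseline shift
normal = np.array([0.0, 1.0, 0.0])     # local lung-wall normal (assumed)
n_part, t_part = split_normal_tangential(shift, normal)
print(n_part, t_part)                  # [0. 4. 0.] and [3. 0. 0.]
```

In the full model this split is done per voxel near the wall, and the resulting field is smoothed to keep the deformation continuous.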
Platania, Chiara Bianca Maria; Salomone, Salvatore; Leggio, Gian Marco; Drago, Filippo; Bucolo, Claudio
2012-01-01
Dopamine (DA) receptors, a class of G-protein coupled receptors (GPCRs), have been targeted for drug development for the treatment of neurological, psychiatric and ocular disorders. The lack of structural information about GPCRs and their ligand complexes has prompted the development of homology models of these proteins aimed at structure-based drug design. Crystal structure of human dopamine D3 (hD3) receptor has been recently solved. Based on the hD3 receptor crystal structure we generated dopamine D2 and D3 receptor models and refined them with molecular dynamics (MD) protocol. Refined structures, obtained from the MD simulations in membrane environment, were subsequently used in molecular docking studies in order to investigate potential sites of interaction. The structure of hD3 and hD2L receptors was differentiated by means of MD simulations and D3 selective ligands were discriminated, in terms of binding energy, by docking calculation. Robust correlation of computed and experimental Ki was obtained for hD3 and hD2L receptor ligands. In conclusion, the present computational approach seems suitable to build and refine structure models of homologous dopamine receptors that may be of value for structure-based drug discovery of selective dopaminergic ligands. PMID:22970199
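The "robust correlation of computed and experimental Ki" can be illustrated with a small sketch. The affinity values below are synthetic, and comparing on a -log10(Ki) (pKi) scale is a common convention, not necessarily the exact metric of the study.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

exp_pki = [6.1, 7.4, 8.0, 5.2, 6.9]    # experimental -log10(Ki), synthetic
calc_pki = [5.8, 7.1, 8.3, 5.5, 6.6]   # docking-predicted, synthetic
r = pearson(exp_pki, calc_pki)
print(round(r, 3))  # 0.956
```

A correlation this strong between predicted and measured affinities is what justifies using the refined homology models for ligand ranking.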
3D foot shape generation from 2D information.
Luximon, Ameersing; Goonetilleke, Ravindra S; Zhang, Ming
2005-05-15
Two methods to generate an individual 3D foot shape from 2D information are proposed. A standard foot shape was first generated and then scaled based on known 2D information. In the first method, the foot outline and the foot height were used; in the second, the foot outline and the foot profile were used. The models were developed using 40 participants and then validated using a different set of 40 participants. Results show that an individual foot shape can be predicted within a mean absolute error of 1.36 mm for the left foot and 1.37 mm for the right foot using the first method, and within a mean absolute error of 1.02 mm for both the left and right foot using the second method. The second method shows somewhat improved accuracy, even though it requires two images. Both methods are considerably cheaper than using a scanner to determine the 3D foot shape for custom footwear design.
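The template-scaling idea can be sketched as below. This is a deliberate simplification: a uniform per-axis scale of a standard shape to the individual's measured length, width and height, whereas the paper's methods scale locally from outline/profile curves. Template points and measurements are made up.

```python
import numpy as np

def scale_template(template_pts, meas_length, meas_width, meas_height):
    """Scale a standard foot shape per axis so its bounding-box extents
    match the individual's 2D-derived measurements (all in mm)."""
    extents = template_pts.max(axis=0) - template_pts.min(axis=0)
    factors = np.array([meas_length, meas_width, meas_height]) / extents
    return template_pts * factors

# Toy template: just the two extreme corners of a standard foot's bounding box
template = np.array([[0.0, 0.0, 0.0], [250.0, 90.0, 60.0]])
fitted = scale_template(template, meas_length=265.0, meas_width=99.0, meas_height=66.0)
print(fitted[1])  # [265.  99.  66.]
```

Applying the same per-axis factors to every vertex of a full template mesh yields the individualized 3D foot shape.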
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong
2005-01-01
Two efficient workflows are developed for the reconstruction of 3D full-color building models. One uses a point-wise sensing device to sample an unknown object densely, with color textures attached separately from a digital camera. The other uses an image-based approach that reconstructs the model with color texture attached automatically. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are adapted to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv-mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in each group are then adjusted using the uv map as guidance. The final assembled image is glued back onto the 3D mesh to present a fully colored building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view-planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles; the pictures taken at these angles guarantee that each model face appears in at least two, and no more than three, of the pictures. The 3D model can then be reconstructed with a minimum amount of labor spent correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects. 
All the test cases show exactly the same topology and a reasonably low dimensional error ratio, again demonstrating the applicability of the algorithm.
NASA Astrophysics Data System (ADS)
Deliś, Paulina; Kędzierski, Michał; Fryśkowska, Anna; Wilińska, Michalina
2013-12-01
The article describes the process of creating 3D models of architectural objects on the basis of video images acquired with a Sony NEX-VG10E fixed-focal-length video camera. It was assumed that, based on video and Terrestrial Laser Scanning (TLS) data, it is possible to develop 3D models of architectural objects. The acquisition of video data was preceded by calibration of the video camera; the mathematical model of the camera was based on the perspective projection. The process of creating 3D models from video data involves the following steps: selection of video frames for the orientation process, orientation of the video frames using points with known coordinates from TLS, and generation of a TIN model using automatic image matching methods. The objects were measured with an impulse laser scanner, a Leica ScanStation 2. The created 3D models of architectural objects were compared with 3D models of the same objects for which a self-calibration bundle adjustment had been performed; for this purpose, PhotoModeler software was used. To assess the accuracy of the developed 3D models of architectural objects, points with known coordinates from Terrestrial Laser Scanning were used, applying a shortest-distance method. The accuracy analysis showed that the 3D models generated from video images differ by about 0.06-0.13 m from the TLS data.
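The shortest-distance accuracy check can be sketched as a nearest-neighbour comparison. The points below are synthetic, and a brute-force search stands in for the spatial index one would use on real point clouds:

```python
import numpy as np

# Synthetic TLS reference cloud and model vertices (metres)
tls_cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
model_verts = np.array([[0.05, 0.0, 0.0], [1.0, 0.1, 0.0]])

# For each model vertex, the distance to its nearest TLS point
diffs = model_verts[:, None, :] - tls_cloud[None, :, :]
dists = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
print(dists, dists.mean())  # per-vertex shortest distances and their mean
```

Summary statistics of these shortest distances (mean, RMS) give the 0.06-0.13 m type of deviation figures reported above.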
3D MHD Models of Active Region Loops
NASA Technical Reports Server (NTRS)
Ofman, Leon
2004-01-01
Present imaging and spectroscopic observations of active region loops allow many physical parameters of coronal loops to be determined, such as the density, temperature, velocity of flows in loops, and the magnetic field. However, due to projection effects many of these parameters remain ambiguous. Three-dimensional imaging in EUV by the STEREO spacecraft will help to resolve the projection ambiguities, and the observations can be used to set up 3D MHD models of active region loops to study the dynamics and stability of active regions. Here, results of 3D MHD models of active region loops are presented, together with progress towards more realistic 3D MHD models of active regions. In particular, the effects of impulsive events on the excitation of active region loop oscillations, and the generation, propagation and reflection of EIT waves, are shown. It is shown how 3D MHD models together with 3D EUV observations can be used as a diagnostic tool for the physical parameters of active region loops, and to advance the science of the sources of solar coronal activity.
3D Volume Rendering and 3D Printing (Additive Manufacturing).
Katkar, Rujuta A; Taft, Robert M; Grant, Gerald T
2018-07-01
Three-dimensional (3D) volume-rendered images allow 3D insight into the anatomy, facilitating surgical treatment planning and teaching. 3D printing, additive manufacturing, and rapid prototyping techniques are being used with satisfactory accuracy, mostly for diagnosis and surgical planning, followed by direct manufacture of implantable devices. The major limitation is the time and money spent generating 3D objects. Printer type, material, and build thickness are known to influence the accuracy of printed models. In implant dentistry, the use of 3D-printed surgical guides is strongly recommended to facilitate planning and reduce risk of operative complications. Copyright © 2018 Elsevier Inc. All rights reserved.
Analysis of 3d Building Models Accuracy Based on the Airborne Laser Scanning Point Clouds
NASA Astrophysics Data System (ADS)
Ostrowski, W.; Pilarska, M.; Charyton, J.; Bakuła, K.
2018-05-01
Creating 3D building models at large scale is becoming more popular and finds many applications. Nowadays, the broad term "3D building models" covers several types of products: the well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysis of statistical parameters of the normal heights between the reference point cloud and the tested planes, combined with segmentation of the point cloud, provides a tool that can indicate which buildings, and which roof planes, do not fulfill the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
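The per-roof-plane check can be sketched as a plane fit plus residual statistics. The points below are synthetic and the plane parameterization (z = ax + by + c) is an assumption for illustration, not the paper's exact test statistic.

```python
import numpy as np

def plane_residuals(points):
    """Least-squares fit of z = a*x + b*y + c to ALS points segmented on one
    roof face; large residuals flag an inaccurate model plane."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return coeffs, residuals

# Four synthetic ALS returns lying exactly on a tilted roof plane
pts = np.array([[0, 0, 10.0], [1, 0, 10.5], [0, 1, 10.0], [1, 1, 10.5]])
coeffs, res = plane_residuals(pts)
print(np.round(coeffs, 3), float(np.abs(res).max()))
```

Comparing such residual statistics (mean, standard deviation, maximum) per roof plane against a tolerance is what lets the inspection flag individual planes rather than whole buildings.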
Simulation of dense amorphous polymers by generating representative atomistic models
NASA Astrophysics Data System (ADS)
Curcó, David; Alemán, Carlos
2003-08-01
A method for generating atomistic models of dense amorphous polymers is presented. The generated models can be used as starting structures for Monte Carlo and molecular dynamics simulations, but are also suitable for the direct evaluation of physical properties. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, an iterative algorithm is applied to relax the nonbonding interactions. In order to check the performance of the method, we examined structure-dependent properties for three polymeric systems: polyethylene (ρ=0.85 g/cm3), poly(L,D-lactic) acid (ρ=1.25 g/cm3) and polyglycolic acid (ρ=1.50 g/cm3). The method successfully generated representative packings for such dense systems using minimal computational resources.
NASA Astrophysics Data System (ADS)
Nishino, Hitoshi; Rajpoot, Subhash
2016-05-01
We present electric-magnetic (EM) duality formulations for non-Abelian gauge groups with N=1 supersymmetry in D=3+3 and 5+5 space-time dimensions. We show that these systems generate self-dual N=1 supersymmetric Yang-Mills (SDSYM) theory in D=2+2. For an N=2 supersymmetric EM-dual system in D=3+3, we have the Yang-Mills multiplet (A_μ^I, λ_A^I) and a Hodge-dual multiplet (B_μνρ^I, χ_A^I), with auxiliary tensors C_μνρσ^I and K_μν. Here, I is the adjoint index, while A is for the doublet of Sp(1). The EM-duality conditions are F_μν^I = (1/4!) ε_μνρστλ G^{ρστλ I}, with the superpartner duality condition λ_A^I = -χ_A^I. Upon appropriate dimensional reduction, this system generates SDSYM in D=2+2. This system is further generalized to D=5+5 with the EM-duality condition F_μν^I = (1/8!) ε_{μν ρ1⋯ρ8} G^{ρ1⋯ρ8 I}, with the superpartner condition λ^I = -χ^I. Upon appropriate dimensional reduction, this theory also generates SDSYM in D=2+2. As long as we maintain Lorentz covariance, D=5+5 seems to be the maximal space-time dimensionality that generates SDSYM in D=2+2. Namely, the EM-dual system in D=5+5 serves as the Master Theory of all supersymmetric integrable models in dimensions 1 ≤ D ≤ 3.
Fisheye-Based Method for GPS Localization Improvement in Unknown Semi-Obstructed Areas
Moreau, Julien; Ambellouis, Sébastien; Ruichek, Yassine
2017-01-01
A precise GNSS (Global Navigation Satellite System) localization is vital for autonomous road vehicles, especially in cluttered or urban environments where satellites are occluded, preventing accurate positioning. We propose to fuse GPS (Global Positioning System) data with fisheye stereovision to face this problem independently to additional data, possibly outdated, unavailable, and needing correlation with reality. Our stereoscope is sky-facing with 360° × 180° fisheye cameras to observe surrounding obstacles. We propose a 3D modelling and plane extraction through following steps: stereoscope self-calibration for decalibration robustness, stereo matching considering neighbours epipolar curves to compute 3D, and robust plane fitting based on generated cartography and Hough transform. We use these 3D data with GPS raw data to estimate NLOS (Non Line Of Sight) reflected signals pseudorange delay. We exploit extracted planes to build a visibility mask for NLOS detection. A simplified 3D canyon model allows to compute reflections pseudorange delays. In the end, GPS positioning is computed considering corrected pseudoranges. With experimentations on real fixed scenes, we show generated 3D models reaching metric accuracy and improvement of horizontal GPS positioning accuracy by more than 50%. The proposed procedure is effective, and the proposed NLOS detection outperforms CN0-based methods (Carrier-to-receiver Noise density). PMID:28106746
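As a toy illustration of the simplified canyon model, the extra path length of a single-bounce NLOS signal can be computed by mirroring the receiver across a facade plane. This sketch is an assumption-laden simplification (unit-normal plane, pure geometric range, no attenuation), not the authors' implementation.

```python
import numpy as np

def nlos_extra_path(receiver, satellite, n, d):
    """Extra geometric path of a single-bounce reflection off the facade plane
    n . x = d (n a unit normal): mirror the receiver across the plane and
    compare the reflected range with the direct line-of-sight range."""
    mirrored = receiver - 2.0 * (receiver @ n - d) * n
    direct = np.linalg.norm(satellite - receiver)
    reflected = np.linalg.norm(satellite - mirrored)
    # The result is the pseudorange delay to subtract once NLOS is detected.
    return reflected - direct
```

For a receiver at the origin, a facade plane x = 1 and a satellite along the negative x-axis, the reflected path is 2 m longer than the direct one.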
NASA Astrophysics Data System (ADS)
Oniga, E.; Chirilă, C.; Stătescu, F.
2017-02-01
Nowadays, Unmanned Aerial Systems (UASs) are a widely used technique for image acquisition aimed at creating 3D building models, providing a high number of images at very high resolution, or video sequences, in a very short time. Since low-cost UASs are preferred, the accuracy of a building 3D model created using these platforms must be evaluated. To this end, the dean's office building of the Faculty of "Hydrotechnical Engineering, Geodesy and Environmental Engineering" of Iasi, Romania, was chosen; it is a complex-shaped building whose roof is formed of two hyperbolic paraboloids. Seven points were placed on the ground around the building, three of them being used as GCPs, while the remaining four served as check points (CPs) for the accuracy assessment. Additionally, the coordinates of 10 natural CPs representing characteristic points of the building were measured with a Leica TCR 405 total station. The building 3D model was created as a point cloud, automatically generated from digital images acquired with the low-cost UAS using image matching algorithms and different software packages: 3DF Zephyr, VisualSFM, PhotoModeler Scanner and Drone2Map for ArcGIS. Except for the PhotoModeler Scanner software, the interior and exterior orientation parameters were determined simultaneously by solving a self-calibrating bundle adjustment. Based on the UAS point clouds automatically generated with the above-mentioned software, and on GNSS data respectively, the parameters of the east-side hyperbolic paraboloid were calculated using the least-squares method and statistical blunder detection. Then, in order to assess the accuracy of the building 3D model, several comparisons were made for the facades and the roof against reference data considered to have minimal errors: a TLS mesh for the facades and a GNSS mesh for the roof.
Finally, the front facade of the building was created in 3D from its characteristic points using the PhotoModeler Scanner software, resulting in a CAD (Computer Aided Design) model. The results showed the high potential of using low-cost UASs for building 3D model creation; moreover, if the building 3D model is created from its characteristic points, the accuracy is significantly improved.
Hemodynamics model of fluid–solid interaction in internal carotid artery aneurysms
Fu-Yu, Wang; Lei, Liu; Xiao-Jun, Zhang; Hai-Yue, Ju
2010-01-01
The objective of this study is to present a relatively simple method to reconstruct cerebral aneurysms as 3D numerical grids. The method accurately duplicates the geometry to provide computer simulations of the blood flow. Initial images were obtained using CT angiography and 3D digital subtraction angiography in DICOM format. The images were processed with the MIMICS software, and the 3D fluid model (blood flow) and 3D solid model (wall) were generated. The output was then exported to the ANSYS Workbench software to generate the volumetric mesh for the subsequent hemodynamic study. The fluid model was defined and simulated in the CFX software, while the solid model was calculated in the ANSYS software. The force data calculated first in the CFX software were transferred to the ANSYS software; after receiving the force data, the total mesh displacement data were calculated in the ANSYS software. The mesh displacement data were then transferred back to the CFX software. The data exchange was handled in the Workbench software, and the simulation results could be visualized in CFX-Post. Two examples of grid reconstruction and blood flow simulation for patients with internal carotid artery aneurysms are presented. The wall shear stress, wall total pressure and von Mises stress could be visualized. This method seems relatively simple and suitable for direct use by neurosurgeons or neuroradiologists, and may be a practical tool for planning treatment and follow-up of patients after neurosurgical or endovascular interventions with 3D angiography. PMID:20812022
Grammar-Supported 3d Indoor Reconstruction from Point Clouds for As-Built Bim
NASA Astrophysics Data System (ADS)
Becker, S.; Peter, M.; Fritsch, D.
2015-03-01
The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar, an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms, as is the case in most office buildings and in public buildings such as schools, hospitals or hotels. The grammar is designed in such a way that it can be embedded in an iterative automatic learning process, providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. In this way, the knowledge content of the initial grammar is enriched, leading to a grammar of increased quality. This higher-level grammar can then be applied to predict realistic geometries for building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated on a real-world example.
Gledhill, Karl; Guo, Zongyou; Umegaki-Arao, Noriko; Higgins, Claire A.; Itoh, Munenari; Christiano, Angela M.
2015-01-01
The current utility of 3D skin equivalents is limited by the fact that existing models fail to recapitulate the cellular complexity of human skin. They often contain few cell types and no appendages, in part because many cells found in the skin are difficult to isolate from intact tissue and cannot be expanded in culture. Induced pluripotent stem cells (iPSCs) present an avenue by which we can overcome this issue due to their ability to be differentiated into multiple cell types in the body and their unlimited growth potential. We previously reported generation of the first human 3D skin equivalents from iPSC-derived fibroblasts and iPSC-derived keratinocytes, demonstrating that iPSCs can provide a foundation for modeling a complex human organ such as skin. Here, we have increased the complexity of this model by including additional iPSC-derived melanocytes. Epidermal melanocytes, which are largely responsible for skin pigmentation, represent the second most numerous cell type found in normal human epidermis and as such represent a logical next addition. We report efficient melanin production from iPSC-derived melanocytes and transfer within an entirely iPSC-derived epidermal-melanin unit and generation of the first functional human 3D skin equivalents made from iPSC-derived fibroblasts, keratinocytes and melanocytes. PMID:26308443
Vance, Marina E; Pegues, Valerie; Van Montfrans, Schuyler; Leng, Weinan; Marr, Linsey C
2017-09-05
Three-dimensional (3D) printers are known to emit aerosols, but questions remain about their composition and the fundamental processes driving emissions. The objective of this work was to characterize the aerosol emissions from the operation of a fused-deposition modeling 3D printer. We modeled the time- and size-resolved emissions of submicrometer aerosols from the printer in a chamber study, gained insight into the chemical composition of emitted aerosols using Raman spectroscopy, and measured the potential for exposure to the aerosols generated by 3D printers under real-use conditions in a variety of indoor environments. The average aerosol emission rates ranged from ~10^8 to ~10^11 particles min^-1, and the rates varied over the course of a print job. Acrylonitrile butadiene styrene (ABS) filaments generated the largest number of aerosols, and wood-infused polylactic acid (PLA) filaments generated the smallest. The emission factors ranged from 6 × 10^8 to 6 × 10^11 particles per gram of printed part, depending on the type of filament used. For ABS, the Raman spectra of the filament and the printed part were indistinguishable, while the aerosol spectra lacked important peaks corresponding to styrene and acrylonitrile, which are both present in ABS. This observation suggests that the aerosols are not a result of volatilization and subsequent nucleation of ABS or of direct release of ABS aerosols.
Producing genome structure populations with the dynamic and automated PGS software.
Hua, Nan; Tjong, Harianto; Shin, Hanjun; Gong, Ke; Zhou, Xianghong Jasmine; Alber, Frank
2018-05-01
Chromosome conformation capture technologies such as Hi-C are widely used to investigate the spatial organization of genomes. Because genome structures can vary considerably between individual cells of a population, interpreting ensemble-averaged Hi-C data can be challenging, in particular for long-range and interchromosomal interactions. We pioneered a probabilistic approach for the generation of a population of distinct diploid 3D genome structures consistent with all the chromatin-chromatin interaction probabilities from Hi-C experiments. Each structure in the population is a physical model of the genome in 3D. Analysis of these models yields new insights into the causes and the functional properties of the genome's organization in space and time. We provide a user-friendly software package, called PGS, which runs on local machines (for practice runs) and high-performance computing platforms. PGS takes a genome-wide Hi-C contact frequency matrix, along with information about genome segmentation, and produces an ensemble of 3D genome structures entirely consistent with the input. The software automatically generates an analysis report, and provides tools to extract and analyze the 3D coordinates of specific domains. Basic Linux command-line knowledge is sufficient for using this software. A typical running time of the pipeline is ∼3 d with 300 cores on a computer cluster to generate a population of 1,000 diploid genome structures at topological-associated domain (TAD)-level resolution.
3D-QSAR studies on 1,2,4-triazolyl 5-azaspiro [2.4]-heptanes as D3R antagonists
NASA Astrophysics Data System (ADS)
Zhang, Xin; Zhang, Hui
2018-07-01
Dopamine D3 receptor has become an attractive target in the treatment of abused drugs. 3D-QSAR studies were performed on a novel series of D3 receptor antagonists, 1,2,4-triazolyl 5-azaspiro [2.4]-heptanes, using CoMFA and CoMSIA methods. Two predictive 3D-QSAR models have been generated for the modified design of D3R antagonists. Based on the steric, electrostatic, hydrophobic and hydrogen-bond acceptor information of contour maps, key structural factors affecting the bioactivity were explored. This work gives helpful suggestions on the design of novel D3R antagonists with increased activities.
The 3-dimensional cellular automata for HIV infection
NASA Astrophysics Data System (ADS)
Mo, Youbin; Ren, Bin; Yang, Wencao; Shuai, Jianwei
2014-04-01
The HIV infection dynamics is discussed in detail with a 3-dimensional cellular automata model in this paper. The model can reproduce the three-phase development observed in HIV-infected patients in the clinic, i.e., the acute period, the asymptomatic period and the AIDS period. We show that the 3D HIV model is more robust to the model parameters than 2D cellular automata. Furthermore, we reveal that the occurrence of a perpetual source that successively generates infectious waves spreading through the whole system drives the model from the asymptomatic state to the AIDS state.
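A minimal 3D cellular-automaton update in the spirit of such models can be sketched as below. The three states, the 6-face-neighbour rule, the periodic boundaries implied by np.roll and the one-step infected-to-dead transition are all simplifying assumptions for illustration, not the paper's exact rules.

```python
import numpy as np

HEALTHY, INFECTED, DEAD = 0, 1, 2

def step(grid, threshold=1):
    """One synchronous update of a toy 3D CA: a healthy cell becomes infected
    when at least `threshold` of its 6 face neighbours are infected, and an
    infected cell dies. np.roll gives periodic (wrap-around) boundaries."""
    inf = (grid == INFECTED).astype(int)
    neigh = np.zeros_like(inf)
    for axis in range(3):
        for shift in (1, -1):
            neigh += np.roll(inf, shift, axis=axis)
    new = grid.copy()
    new[(grid == HEALTHY) & (neigh >= threshold)] = INFECTED
    new[grid == INFECTED] = DEAD
    return new
```

Seeding a single infected cell produces an expanding spherical wave of infection trailed by dead cells, the basic mechanism behind the wave dynamics discussed in the abstract.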
NASA Astrophysics Data System (ADS)
Crosta, G.; Imposimato, S.; Roddeman, D.; Frattini, P.
2012-04-01
Fast-moving landslides can originate along slopes in mountainous terrains with natural or artificial lakes, or fjords, at the slope foot. These landslides can reach extremely high speeds, and their impact with the still reservoir water is influenced by the local topography and the landslide mass profile. The impact can generate large impulse waves and landslide tsunamis. Initiation, propagation and runup are the three phases that need to be considered. The landslide evolution and the consequent wave are controlled by the initial mass position (subaerial, partially or completely submerged), the landslide speed, the type of material, the subaerial and subaqueous slope geometry, the landslide depth and length at impact, and the water depth. Extreme events have been caused by subaerial landslides: the 1963 Vajont rockslide (Italy), the 1958 Lituya Bay event (Alaska), the Tafjord and Loen multiple events (Norway), as well as by volcanic collapses (Hawaii and the Canary Islands). Various researchers have completed systematic experimental work on 2D and 3D wave generation and propagation (Kamphuis and Bowering, 1970; Huber, 1980; Müller, 1995; Huber and Hager, 1997; Fritz, 2002; Zweifel, 2004; Panizzo et al., 2005; Heller, 2007; Heller and Kinnear, 2010; Sælevik et al., 2009), using both rigid blocks and deformable "granular" masses. Model data and results have been used to calibrate and validate numerical modelling tools (Harbitz, 1992; Jiang and LeBlond, 1993; Grilli et al., 2002; Grilli and Watts, 2005; Lynett and Liu, 2005; Tinti et al., 2006; Abadie et al., 2010), generally considering simplified rheologies (e.g. viscous rheologies) for subaerial and subaqueous spreading. We use a FEM code (Roddeman, 2011; Crosta et al., 2006, 2009, 2010, 2011) adopting an Eulerian-Lagrangian approach to give accurate results for large deformations. We model both 2D and fully 3D events considering different settings.
The material is treated as a fully deformable elasto-plastic continuum and the water as nearly incompressible. In particular, we modelled the Vajont rockslide in both 2D and 3D, considering the landslide-water interaction. Further simulations have been performed to validate the model against 2D and 3D tank experiments with different slope geometries and water depths.
Hepatic differentiation of human iPSCs in different 3D models: A comparative study
Brzeszczynska, Joanna; Knöspel, Fanny; Armstrong, Lyle; Lako, Majlinda; Greuel, Selina; Damm, Georg; Ludwig-Schwellinger, Eva; Deschl, Ulrich; Ross, James A.
2017-01-01
Human induced pluripotent stem cells (hiPSCs) are a promising source from which to derive distinct somatic cell types for in vitro or clinical use. Existing protocols for hepatic differentiation of hiPSCs are primarily based on 2D cultivation of the cells. In the present study, the authors investigated the generation of hiPSC-derived hepatocyte-like cells using two different 3D culture systems: a 3D scaffold-free microspheroid culture system and a 3D hollow-fiber perfusion bioreactor. The differentiation outcome in these 3D systems was compared with that in conventional 2D cultures, using primary human hepatocytes as a control. The evaluation was based on specific mRNA expression, protein secretion, antigen expression and metabolic activity. The expression of α-fetoprotein was lower, while cytochrome P450 1A2 and 3A4 activities were higher, in the 3D culture systems than in the 2D differentiation system. Cells differentiated in the 3D bioreactor showed increased expression of albumin and hepatocyte nuclear factor 4α, as well as secretion of α-1-antitrypsin, compared with the 2D differentiation system, suggesting a higher degree of maturation. In contrast, the 3D scaffold-free microspheroid culture provides an easy and robust method to generate spheroids of a defined size for screening applications, while the bioreactor culture model provides an instrument for complex investigations under physiological-like conditions. In conclusion, the present study introduces two 3D culture systems for stem cell-derived hepatic differentiation, each demonstrating advantages for individual applications as well as benefits in comparison with 2D cultures. PMID:29039463
NASA Astrophysics Data System (ADS)
Capocchiano, F.; Ravanelli, R.; Crespi, M.
2017-11-01
Within the construction sector, Building Information Models (BIMs) are used more and more, thanks to the several benefits they offer in the design of new buildings and the management of existing ones. Frequently, however, BIMs are not available for already built constructions; at the same time, range camera technology nowadays provides a cheap, intuitive and effective tool for automatically collecting the 3D geometry of indoor environments. It is thus essential to find new strategies able to perform the first step of the scan-to-BIM process by extracting the geometrical information contained in the 3D models that are so easily collected with range cameras. In this work, a new algorithm to extract planimetries from 3D models of rooms acquired by means of a range camera is therefore presented. The algorithm was tested on two rooms, characterized by different shapes and dimensions, whose 3D models were captured with the Occipital Structure Sensor™. The preliminary results are promising: the developed algorithm is able to model effectively the 2D shape of the investigated rooms, with an accuracy level within the range of 5-10 cm. It can potentially be used by non-expert users in the first step of BIM generation, when the building geometry is reconstructed, for collecting crowdsourced indoor information in the framework of BIM Volunteered Geographic Information (VGI) generation.
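One simple way to begin extracting a room's 2D planimetry from a range-camera point cloud is to project the points onto the floor plane and rasterise an occupancy grid. This is only an illustrative first step, not the paper's algorithm; the cell size and point-count threshold are assumed values.

```python
import numpy as np

def room_footprint(points, cell=0.05, min_pts=5):
    """Project a room's 3D point cloud onto the floor plane (z dropped) and
    rasterise it into a boolean occupancy grid approximating the 2D shape.
    `cell` is the grid resolution in metres; a cell is occupied when it
    collects at least `min_pts` projected points."""
    xy = points[:, :2]
    lo = xy.min(axis=0)
    idx = np.floor((xy - lo) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    grid = np.zeros(shape, dtype=int)
    np.add.at(grid, (idx[:, 0], idx[:, 1]), 1)   # unbuffered per-cell counts
    return grid >= min_pts
```

The occupied cells outline walls and floor area; a contour-tracing or line-fitting step would then turn the grid into a vector planimetry.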
3D SPH numerical simulation of the wave generated by the Vajont rockslide
NASA Astrophysics Data System (ADS)
Vacondio, R.; Mignosa, P.; Pagani, S.
2013-09-01
A 3D numerical model of the wave generated by the Vajont slide, one of the most destructive ever to occur, is presented in this paper. A meshless Lagrangian Smoothed Particle Hydrodynamics (SPH) technique was adopted to simulate the highly fragmented, violent flow generated by the slide falling into the artificial reservoir. The speed-up achievable via General Purpose Graphics Processing Units (GP-GPU) made it possible to adopt a resolution adequate to describe the phenomenon. The comparison with the data available in the literature showed that the results of the numerical simulation satisfactorily reproduce the maximum run-up, as well as the water surface elevation in the residual lake after the event. Moreover, the 3D velocity field of the flow during the event and the discharge hydrograph overtopping the dam were obtained.
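The core of any SPH scheme is the kernel-weighted density summation. The following minimal sketch (standard 3D cubic-spline kernel, brute-force O(N²) pair interactions) illustrates the idea only and is far removed from the GPU-accelerated solver used in the paper.

```python
import numpy as np

def w_cubic(r, h):
    """Cubic spline SPH smoothing kernel in 3D with support radius 2h
    (Monaghan's standard normalisation sigma = 1 / (pi h^3))."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2, 0.25 * (2 - q)**3, 0.0))
    return sigma * w

def density(positions, masses, h):
    """SPH density summation: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * w_cubic(r, h)).sum(axis=1)
```

A production SPH code would add neighbour search, pressure and viscous forces, and time integration on top of this summation.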
Fibroblasts Lead the Way: A Unified View of 3D Cell Motility.
Petrie, Ryan J; Yamada, Kenneth M
2015-11-01
Primary human fibroblasts are remarkably adaptable, able to migrate in differing types of physiological 3D tissue and on rigid 2D tissue culture surfaces. The crawling behavior of these and other vertebrate cells has been studied intensively, which has helped generate the concept of the cell motility cycle as a comprehensive model of 2D cell migration. However, this model fails to explain how cells force their large nuclei through the confines of a 3D matrix environment and why primary fibroblasts can use more than one mechanism to move in 3D. Recent work shows that the intracellular localization of myosin II activity is governed by cell-matrix interactions to both force the nucleus through the extracellular matrix (ECM) and dictate the type of protrusions used to migrate in 3D. Published by Elsevier Ltd.
Online coupled camera pose estimation and dense reconstruction from video
Medioni, Gerard; Kang, Zhuoliang
2016-11-01
A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update the 3D model of at least a portion of the scene following the receipt of each video image, and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
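The "consistent projection transformation" test above amounts to projecting candidate model points with a candidate pose and counting how many land near their matched image features. A bare-bones sketch follows; the pinhole intrinsics K and the 2-pixel tolerance are assumptions, not details from the patent text.

```python
import numpy as np

def project(points_3d, R, t, K):
    """Project 3D model points into the image with pose (R, t) and a pinhole
    intrinsics matrix K."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                     # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

def count_inliers(model_pts, image_pts, R, t, K, tol=2.0):
    """A candidate pose is 'consistent' when most projected model points land
    within `tol` pixels of their matched image feature points."""
    proj = project(model_pts, R, t, K)
    err = np.linalg.norm(proj - image_pts, axis=1)
    return int((err < tol).sum())
```

A RANSAC-style loop over candidate correspondences and poses would then keep the combination maximising this inlier count.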
Modelling of aortic aneurysm and aortic dissection through 3D printing.
Ho, Daniel; Squelch, Andrew; Sun, Zhonghua
2017-03-01
The aim of this study was to assess whether the complex anatomy of aortic aneurysm and aortic dissection can be accurately reproduced from a contrast-enhanced computed tomography (CT) scan as a three-dimensional (3D) printed model. Contrast-enhanced cardiac CT scans from two patients were post-processed and produced as 3D printed thoracic aorta models of aortic aneurysm and aortic dissection. The transverse diameter was measured at five anatomical landmarks for both models and compared across three stages: the original contrast-enhanced CT images, the stereolithography (STL) format computerised model prepared for 3D printing, and the contrast-enhanced CT of the 3D printed model. For the model with aortic dissection, measurements of the true and false lumen were taken and compared at two points on the descending aorta. Three-dimensional printed models were generated in a strong, flexible plastic material, successfully replicating the anatomical details of the aortic structures and pathologies. The mean difference in transverse vessel diameter between the contrast-enhanced CT images before and after 3D printing was 1.0 and 1.2 mm for the first and second models, respectively (standard deviation: 1.0 mm and 0.9 mm). Additionally, for the second model, the mean luminal diameter difference between the 3D printed model and the CT images was 0.5 mm. Encouraging results were achieved with regard to reproducing 3D models depicting aortic aneurysm and aortic dissection. Variances in vessel diameter measurement outside a standard deviation of 1 mm tolerance indicate that further work is required on the assessment and accuracy of 3D model reproduction. © 2017 The Authors. Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.
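The accuracy metric used in the study above (mean and standard deviation of diameter differences across anatomical landmarks) is straightforward to reproduce. The diameter values below are made up for illustration and are not the study's measurements.

```python
import numpy as np

# Hypothetical transverse diameters (mm) at five anatomical landmarks,
# measured on the original CT and on a CT of the 3D printed model.
original_ct = np.array([30.1, 28.5, 26.0, 24.2, 22.8])
printed_ct = np.array([31.0, 29.9, 26.8, 25.5, 23.5])

# Per-landmark absolute difference, then its mean and sample SD.
diff = np.abs(printed_ct - original_ct)
print(f"mean difference: {diff.mean():.1f} mm, SD: {diff.std(ddof=1):.1f} mm")
```

Note the use of the sample standard deviation (ddof=1), the usual choice when the landmarks are treated as a sample of the vessel.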
Three-Dimensional Modeling of Quasi-Homologous Solar Jets
NASA Technical Reports Server (NTRS)
Pariat, E.; Antiochos, S. K.; DeVore, C. R.
2010-01-01
Recent solar observations (e.g., obtained with Hinode and STEREO) have revealed that coronal jets are a more frequent phenomenon than previously believed. This higher frequency results, in part, from the fact that jets exhibit a homologous behavior: successive jets recur at the same location with similar morphological features. We present the results of three-dimensional (3D) numerical simulations of our model for coronal jets. This study demonstrates the ability of the model to generate recurrent 3D untwisting quasi-homologous jets when a stress is constantly applied at the photospheric boundary. The homology results from the property of the 3D null-point system to relax to a state topologically similar to its initial configuration. In addition, we find two distinct regimes of reconnection in the simulations: an impulsive 3D mode involving a helical rotating current sheet that generates the jet, and a quasi-steady mode that occurs in a 2D-like current sheet located along the fan between the sheared spines. We argue that these different regimes can explain the observed link between jets and plumes.
3D printed renal cancer models derived from MRI data: application in pre-surgical planning.
Wake, Nicole; Rude, Temitope; Kang, Stella K; Stifelman, Michael D; Borin, James F; Sodickson, Daniel K; Huang, William C; Chandarana, Hersh
2017-05-01
To determine whether patient-specific 3D printed renal tumor models change pre-operative planning decisions made by urological surgeons in preparation for complex renal mass surgical procedures. From our ongoing IRB approved study on renal neoplasms, ten renal mass cases were retrospectively selected based on Nephrometry Score greater than 5 (range 6-10). A 3D post-contrast fat-suppressed gradient-echo T1-weighted sequence was used to generate 3D printed models. The cases were evaluated by three experienced urologic oncology surgeons in a randomized fashion using (1) imaging data on PACS alone and (2) 3D printed model in addition to the imaging data. A questionnaire regarding surgical approach and planning was administered. The presumed pre-operative approaches with and without the model were compared. Any change between the presumed approaches and the actual surgical intervention was recorded. There was a change in planned approach with the 3D printed model for all ten cases with the largest impact seen regarding decisions on transperitoneal or retroperitoneal approach and clamping, with changes seen in 30%-50% of cases. Mean parenchymal volume loss for the operated kidney was 21.4%. Volume losses >20% were associated with increased ischemia times and surgeons tended to report a different approach with the use of the 3D model compared to that with imaging alone in these cases. The 3D printed models helped increase confidence regarding the chosen operative procedure in all cases. Pre-operative physical 3D models created from MRI data may influence surgical planning for complex kidney cancer.
The study of early human embryos using interactive 3-dimensional computer reconstructions.
Scarborough, J; Aiton, J F; McLachlan, J C; Smart, S D; Whiten, S C
1997-07-01
Tracings of serial histological sections from 4 human embryos at different Carnegie stages were used to create 3-dimensional (3D) computer models of the developing heart. The models were constructed using commercially available software developed for graphic design and the production of computer generated virtual reality environments. They are available as interactive objects which can be downloaded via the World Wide Web. This simple method of 3D reconstruction offers significant advantages for understanding important events in morphological sciences.
3D Model Generation from UAV: Historical Mosque (Masjid Lama Nilai)
NASA Astrophysics Data System (ADS)
Nasir, N. H. Mohd; Tahar, K. N.
2017-08-01
Preserving cultural heritage and historic sites is an important issue. These sites are subject to erosion and vandalism, and, as long-lived artifacts, they have gone through many phases of construction, damage and repair. It is important to keep an accurate record of these sites as they currently are, using 3-D model-building technology, so that preservationists can track changes, foresee structural problems, and allow a wider audience to "virtually" see and tour them. Due to the complexity of these sites, building 3-D models is time-consuming and difficult, usually involving much manual effort. This study discusses new methods that can reduce the time needed to build a model by using an Unmanned Aerial Vehicle (UAV), and aims to develop a 3D model of a historical mosque using UAV photogrammetry. To achieve this, the data acquisition set of Masjid Lama Nilai, Negeri Sembilan was captured using a UAV. In addition, an accuracy assessment between the actual and measured values was made. Besides that, a comparison between the rendered 3D model and the textured 3D model is also carried out through this study.
POI Summarization by Aesthetics Evaluation From Crowd Source Social Media.
Qian, Xueming; Li, Cheng; Lan, Ke; Hou, Xingsong; Li, Zhetao; Han, Junwei
2018-03-01
Place-of-Interest (POI) summarization by aesthetics evaluation can recommend a set of POI images to the user and is significant in image retrieval. In this paper, we propose a system that summarizes a collection of POI images with regard to both aesthetics and the diversity of the camera distribution. First, we generate visual albums by a coarse-to-fine POI clustering approach and then generate 3D models for each album from the images collected from social media. Second, based on the 3D-to-2D projection relationship, we select candidate photos in terms of the proposed crowd-sourced saliency model. Third, to improve the performance of the aesthetic measurement model, we propose a crowd-sourced saliency detection approach that explores the distribution of salient regions in the 3D model. We then measure the composition aesthetics of each image and explore crowd-sourced salient features to yield a saliency map, based on which we propose an adaptive image adoption approach. Finally, we combine diversity and aesthetics to recommend aesthetically pleasing pictures. Experimental results show that the proposed POI summarization approach returns images with diverse camera distributions and high aesthetics.
More-Realistic Digital Modeling of a Human Body
NASA Technical Reports Server (NTRS)
Rogge, Renee
2010-01-01
A MATLAB computer program has been written to enable improved (relative to an older program) modeling of a human body for purposes of designing space suits and other hardware with which an astronaut must interact. The older program implements a kinematic model based on traditional anthropometric measurements that do not provide important volume and surface information. The present program generates a three-dimensional (3D) whole-body model from 3D body-scan data. The program utilizes thin-plate spline theory to reposition the model without need for additional scans.
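The thin-plate spline repositioning mentioned above can be illustrated with a minimal 2D sketch. The program itself is MATLAB and operates on full 3D body-scan data; the kernel U(r) = r^2 log r and the control-point linear system below are the standard thin-plate spline formulation, not the program's actual code:

```python
import numpy as np

def tps_kernel(r):
    # U(r) = r^2 log r, with U(0) defined as 0
    with np.errstate(divide="ignore", invalid="ignore"):
        u = r ** 2 * np.log(r)
    return np.nan_to_num(u)

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping src control points to dst."""
    n = len(src)
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = tps_kernel(d)
    P = np.hstack([np.ones((n, 1)), src])      # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)               # kernel weights + affine coeffs

def tps_apply(params, src, pts):
    """Evaluate the fitted spline at arbitrary points."""
    d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    K = tps_kernel(d)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ params[:len(src)] + P @ params[len(src):]

# Sanity check: mapping control points to themselves reproduces them.
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
params = tps_fit(src, src)
out = tps_apply(params, src, src)
```

In practice the spline is fitted between landmark positions in the original and repositioned postures, then applied to every scan vertex to warp the whole surface smoothly.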
LayTracks3D: A new approach for meshing general solids using medial axis transform
Quadros, William Roshan
2015-08-22
This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and represent the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near-cube-shaped elements at the boundary, a structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.
3D thermal modeling of TRISO fuel coupled with neutronic simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Jianwei; Uddin, Rizwan
2010-01-01
The Very High Temperature Gas Reactor (VHTR) is widely considered one of the top candidates identified in the Next Generation Nuclear Plant (NGNP) Technology Roadmap under the U.S. Department of Energy's Generation IV program. The TRISO particle is a common element among different VHTR designs and its performance is critical to the safety and reliability of the whole reactor. A TRISO particle experiences complex thermo-mechanical changes during reactor operation under high-temperature and high-burnup conditions. TRISO fuel performance analysis requires evaluation of these changes on the micro scale. Since most of these changes are temperature dependent, 3D thermal modeling of TRISO fuel is a crucial step of the whole analysis package. In this paper, a 3D numerical thermal model was developed to calculate the temperature distribution inside a TRISO particle and a pebble under different scenarios. 3D simulation is required because pebbles and TRISO particles are always subjected to asymmetric thermal conditions, since they are randomly packed together. The numerical model was developed using the finite difference method and was benchmarked against 1D analytical results and results reported in the literature. Monte Carlo models were set up to calculate the radial power density profile. A complex convective boundary condition was applied on the pebble outer surface. Three reactors were simulated using this model to calculate temperature distributions under different power levels. Two asymmetric boundary conditions were applied to the pebble to test the 3D capabilities. A gas bubble was hypothesized inside the TRISO kernel and a 3D simulation was also carried out under this scenario. Results consistent with physical intuition were obtained and are reported in this paper.
Sergeyev, Ivan; Moyna, Guillermo
2005-05-02
A novel method for the determination of the three-dimensional (3D) structure of oligosaccharides in the solid state using experimental 13C NMR data is presented. The approach employs this information, combined with 13C chemical shift surfaces (CSSs) for the glycosidic bond carbons, in the generation of NMR pseudopotential energy functions suitable for use as constraints in molecular modeling simulations. Application of the method to trehalose, cellobiose, and cellotetraose produces 3D models that agree remarkably well with the reported X-ray structures, with phi and psi dihedral angles within 10 degrees of the ones observed in the crystals. The usefulness of the approach is further demonstrated in the determination of the 3D structure of cellohexaose, a hexasaccharide for which no X-ray data have been reported, as well as in the generation of accurate structural models for cellulose II and amylose V6.
Real-time physics-based 3D biped character animation using an inverted pendulum model.
Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee
2010-01-01
We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
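The abstract does not give the controller's equations, but the core of any inverted-pendulum-based biped model is the linear inverted pendulum, whose closed-form state propagation can be sketched as follows (a generic textbook formulation, not the authors' implementation; CoM height, gravity and step time below are illustrative values):

```python
import math

def lipm_step(x0, v0, z=0.9, g=9.81, t=0.3):
    """Linear inverted pendulum: centre-of-mass position and velocity
    after time t, measured relative to the stance-foot (pivot) point."""
    Tc = math.sqrt(z / g)                      # pendulum time constant
    x = x0 * math.cosh(t / Tc) + Tc * v0 * math.sinh(t / Tc)
    v = (x0 / Tc) * math.sinh(t / Tc) + v0 * math.cosh(t / Tc)
    return x, v

# A CoM starting directly above the pivot with no velocity stays put...
x, v = lipm_step(0.0, 0.0)
# ...while any forward offset diverges exponentially, which is the
# instability that online foot placement must correct.
x2, v2 = lipm_step(0.05, 0.0)
```

Propagating this model forward at each frame gives the adjusted reference trajectory that the tracking controllers then follow in dynamics simulation.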
Tissue vascularization through 3D printing: Will technology bring us flow?
Paulsen, S J; Miller, J S
2015-05-01
Though in vivo models provide the most physiologically relevant environment for studying tissue function, in vitro studies provide researchers with explicit control over experimental conditions and the potential to develop high-throughput testing methods. In recent years, advancements in developmental biology research and imaging techniques have significantly improved our understanding of the processes involved in vascular development. However, the task of recreating the complex, multi-scale vasculature seen in in vivo systems remains elusive. 3D bioprinting offers a potential method to generate controlled vascular networks with hierarchical structure approaching that of in vivo networks. Bioprinting is an interdisciplinary field that relies on advances in 3D printing technology along with advances in imaging and computational modeling, which allow researchers to monitor cellular function and to better understand the cellular environment within the printed tissue. As bioprinting technologies improve with regard to resolution, printing speed, available materials, and automation, 3D printing could be used to generate highly controlled vascularized tissues in a high-throughput manner for use in regenerative medicine and the development of in vitro tissue models for research in developmental biology and vascular diseases. © 2015 Wiley Periodicals, Inc.
Development of a High Resolution 3D Infant Stomach Model for Surgical Planning
NASA Astrophysics Data System (ADS)
Chaudry, Qaiser; Raza, S. Hussain; Lee, Jeonggyu; Xu, Yan; Wulkan, Mark; Wang, May D.
Medical surgical procedures have not changed much during the past century due to the lack of an accurate, low-cost workbench for testing any new improvement. Increasingly cheap and powerful computer technologies have made computer-based surgery planning and training feasible. In our work, we have developed an accurate 3D stomach model, which aims to improve the surgical procedure that treats pediatric and neonatal gastro-esophageal reflux disease (GERD). We generate the 3D infant stomach model based on in vivo computed tomography (CT) scans of an infant. CT is a widely used clinical imaging modality that is cheap, but has low spatial resolution. To improve the model accuracy, we use the high-resolution Visible Human Project (VHP) data in model building. Next, we add soft muscle material properties to make the 3D model deformable. Then we use virtual reality techniques such as haptic devices to make the 3D stomach model deform upon touching force. This accurate 3D stomach model provides a workbench for testing new GERD treatment surgical procedures. It has the potential to reduce or eliminate the extensive cost associated with animal testing when improving any surgical procedure and, ultimately, to reduce the risk associated with infant GERD surgery.
Saxton, Michael J
2007-01-01
Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
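A minimal version of the obstructed-random-walk simulation described above can be sketched as follows. Lattice size, obstacle concentrations, step counts and the rejection rule (a blocked tracer simply waits) are illustrative choices in the spirit of the chapter, not its actual parameters:

```python
import random

def obstructed_walk(conc, steps, size=64, seed=1):
    """Squared displacement of a single tracer performing a random walk
    on a square lattice with immobile point obstacles placed at area
    fraction `conc` (periodic boundaries, blocked moves rejected)."""
    rng = random.Random(seed)
    obstacles = {(i, j) for i in range(size) for j in range(size)
                 if rng.random() < conc}
    x = y = size // 2
    obstacles.discard((x, y))                  # ensure a free start site
    x0, y0 = x, y
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        dx, dy = rng.choice(moves)
        # a move onto an obstacle is rejected: the tracer waits in place
        if ((x + dx) % size, (y + dy) % size) not in obstacles:
            x, y = x + dx, y + dy
    return (x - x0) ** 2 + (y - y0) ** 2

# Mean squared displacement drops sharply as the obstacle fraction
# approaches the percolation threshold.
msd_free = sum(obstructed_walk(0.0, 300, seed=s) for s in range(100)) / 100
msd_obst = sum(obstructed_walk(0.45, 300, seed=s) for s in range(100)) / 100
```

Averaging the squared displacement over many tracers and obstacle configurations, and examining how it scales with time, is exactly the kind of measurement the chapter's more sophisticated event-driven algorithms are designed to speed up.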
Fusion of laser and image sensory data for 3-D modeling of the free navigation space
NASA Technical Reports Server (NTRS)
Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.
1994-01-01
A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data are generated by a vision camera and a laser scanner. The problem of the different resolutions of these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.
A HWIL test facility of infrared imaging laser radar using direct signal injection
NASA Astrophysics Data System (ADS)
Wang, Qian; Lu, Wei; Wang, Chunhui; Wang, Qi
2005-01-01
Laser radar has been widely used in recent years, and hardware-in-the-loop (HWIL) testing of laser radar has become important because of its low cost relative to on-the-fly testing and its high fidelity relative to all-digital simulation. Scene generation and projection are two key technologies of hardware-in-the-loop testing of laser radar, and they pose a complicated problem because the 3D images result from time delay. The scene generation process begins with the definition of the target geometry, reflectivity and range. The real-time 3D scene generation computer is PC-based hardware, and the 3D target models were modeled using 3dsMAX. The scene generation software was written in C and OpenGL and is executed to extract the Z-buffer from the bit planes to main memory as a range image. These pixels contain each target position x, y, z and its respective intensity and range value. Work on expensive optical injection technologies for scene projection, such as LDP arrays, VCSEL arrays and DMDs, and on the associated scene generation is ongoing, but optical scene projection is complicated and often unaffordable. In this paper a cheaper test facility is described that uses direct electronic injection to provide range images for laser radar testing. The electronic delay and pulse-shaping circuits inject the scenes directly into the seeker's signal processing unit.
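The abstract describes extracting the OpenGL Z-buffer as a range image. Because a standard perspective projection stores depth nonlinearly, the buffer values must be linearized using the projection's near and far planes before they can serve as metric range; a sketch of that conversion, assuming the default OpenGL window-space depth range [0, 1]:

```python
import numpy as np

def zbuffer_to_range(zbuf, near, far):
    """Convert window-space depth-buffer values in [0, 1] from a
    standard OpenGL perspective projection to metric eye-space range."""
    z_ndc = 2.0 * zbuf - 1.0                   # window [0,1] -> NDC [-1,1]
    # invert the perspective projection's depth mapping
    return 2.0 * near * far / (far + near - z_ndc * (far - near))

# Samples exactly on the near and far planes map back to those distances.
zbuf = np.array([0.0, 1.0])
rng = zbuffer_to_range(zbuf, near=1.0, far=100.0)
```

Applied pixel-wise to the extracted Z-buffer, this yields the per-pixel range values that the electronic injection circuits convert into signal delays.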
Implementation of augmented reality to model Sultan Deli
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Lumbantobing, N. P.; Siregar, B.; Rahmat, R. F.; Andayani, U.
2018-03-01
Augmented reality is a technology that can provide visualization in the form of a 3D virtual model. With augmented reality technology, image-based modeling can be applied to restore photographs of the Sultan of Deli at Istana Maimun into a three-dimensional model. This is needed because the Sultan of Deli, one of the important figures in the history of the development of the city of Medan, is little known by the public, since the surviving images of the Deli Sultanate are unclear and very old. To achieve this goal, augmented reality applications are used with an image-processing methodology that turns images into 3D models through several toolkits. The output generated from this method is visitors' photos at Maimun Palace combined with a 3D model of the Sultan of Deli, with marker detection at distances of 20-60 cm, making it easy for the public to recognize the Sultan of Deli who once ruled at Maimun Palace.
Compressible magma/mantle dynamics: 3-D, adaptive simulations in ASPECT
NASA Astrophysics Data System (ADS)
Dannberg, Juliane; Heister, Timo
2016-12-01
Melt generation and migration are an important link between surface processes and the thermal and chemical evolution of the Earth's interior. However, their vastly different timescales make it difficult to study mantle convection and melt migration in a unified framework, especially for 3-D global models. And although experiments suggest an increase in melt volume of up to 20 per cent from the depth of melt generation to the surface, previous computations have neglected the individual compressibilities of the solid and the fluid phase. Here, we describe our extension of the finite element mantle convection code ASPECT that adds melt generation and migration. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. Applying adaptive mesh refinement to this type of problem is particularly advantageous, as the resolution can be increased in areas where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. Together with a high-performance, massively parallel implementation, this allows for high-resolution, 3-D, compressible, global mantle convection simulations coupled with melt migration. We evaluate the functionality and potential of this method using a series of benchmarks and model setups, compare results of the compressible and incompressible formulations, and show the effectiveness of adaptive mesh refinement when applied to melt migration. Our model of magma dynamics provides a framework for modelling processes on different scales and investigating links between processes occurring in the deep mantle and melt generation and migration. This approach could prove particularly useful when applied to modelling the generation of komatiites or other melts originating at greater depths. The implementation is available in the open-source ASPECT repository.
A smartphone photogrammetry method for digitizing prosthetic socket interiors.
Hernandez, Amaia; Lemaire, Edward
2017-04-01
Prosthetic CAD/CAM systems require accurate 3D limb models; however, difficulties arise when working from the person's socket, since current 3D scanners have difficulty scanning socket interiors. While dedicated scanners exist, they are expensive and the cost may be prohibitive for a limited number of scans per year. A low-cost and accessible photogrammetry method for socket interior digitization is proposed, using a smartphone camera and cloud-based photogrammetry services. Fifteen two-dimensional images of the socket's interior are captured using a smartphone camera, and a 3D model is generated using cloud-based software. Linear measurements were compared between the sockets and the related 3D models. 3D reconstruction accuracy averaged 2.6 ± 2.0 mm and 0.086 ± 0.078 L, which was less accurate than models obtained by high-quality 3D scanners. However, this method provides a viable 3D digital socket reproduction, after processing in prosthetic CAD software, that is accessible and low-cost. Clinical relevance: The described method provides a low-cost and accessible means to digitize a socket interior for use in prosthetic CAD/CAM systems, employing a smartphone camera and cloud-based photogrammetry software.
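The reported accuracy figure is a mean and standard deviation of paired differences between physical measurements of the socket and the same distances taken on the reconstructed model; the computation can be sketched as follows (the numbers below are hypothetical, not the study's data):

```python
import statistics

def reconstruction_error(reference_mm, model_mm):
    """Mean and standard deviation of the absolute differences between
    caliper measurements and the corresponding model measurements."""
    diffs = [abs(r - m) for r, m in zip(reference_mm, model_mm)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Hypothetical paired linear measurements, in millimetres.
ref = [112.0, 98.5, 76.2, 64.0]
mod = [114.8, 96.1, 78.9, 65.2]
mean_err, sd_err = reconstruction_error(ref, mod)
```

The same paired-difference approach applies to the volume comparison, with litres in place of millimetres.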
Pairwise domain adaptation module for CNN-based 2-D/3-D registration.
Zheng, Jiannan; Miao, Shun; Jane Wang, Z; Liao, Rui
2018-04-01
Accurate two-dimensional to three-dimensional (2-D/3-D) registration of preoperative 3-D data and intraoperative 2-D x-ray images is a key enabler for image-guided therapy. Recent advances in 2-D/3-D registration formulate the problem as a learning-based approach and exploit the modeling power of convolutional neural networks (CNN) to significantly improve the accuracy and efficiency of 2-D/3-D registration. However, for surgery-related applications, collecting a large clinical dataset with accurate annotations for training can be very challenging or impractical. Therefore, deep learning-based 2-D/3-D registration methods are often trained with synthetically generated data, and a performance gap is often observed when testing the trained model on clinical data. We propose a pairwise domain adaptation (PDA) module to adapt the model trained on the source domain (i.e., synthetic data) to the target domain (i.e., clinical data) by learning domain-invariant features with only a few paired real and synthetic data. The PDA module is designed to be flexible for different deep learning-based 2-D/3-D registration frameworks, and it can be plugged into any pretrained CNN model like a simple Batch-Norm layer. The proposed PDA module has been quantitatively evaluated on two clinical applications using different frameworks of deep networks, demonstrating its significant advantages of generalizability and flexibility for 2-D/3-D medical image registration when a small number of paired real-synthetic data can be obtained.
Teaching and Learning Structural Geology Using SketchUp
NASA Astrophysics Data System (ADS)
Rey, Patrice
2017-04-01
The books and maps we read, the posters we pin on our walls, the TV sets and computer monitors we spend hours watching, the white (or black) boards we use to teach, all reduce our world into planar images. As a result, and through years of oblivious practice, our brain is conditioned to understand the world in two dimensions (2D) only. As structural geologists, we know that the most challenging aspect of teaching and learning structural geology is that we need to be able to mentally manipulate 2D and three-dimensional (3D) objects. Although anyone can learn through practice the art of spatial visualisation, the fact remains that the initial stages of learning structural geology are for many students very challenging, as we naively use 2D images to teach 3D concepts. While interactive 3D holography is not far away, some inexpensive tools already exist allowing us to generate interactive computer images, the free rotation, scaling and manipulation of which can help students to quickly grasp the geometry and internal architecture of 3D objects. Recently, I have experimented with SketchUp (works on Mac and Windows). SketchUp was initially released in 2000 by @Last Software, as a 3D modelling tool for architects, designers and filmmakers. It was acquired by Google in 2006 to further the development of GoogleEarth. Google released SketchUp for free, and provided a portal named 3D Warehouse for users to share their models. Google sold SketchUp to Trimble Navigation in 2012, which added Extension Warehouse for users to distribute add-ons. SketchUp models can be exported in a number of formats including .dae (digital asset exchange) useful to embed interactive 3D models into iBooks and html5 documents, and .kmz (keyhole markup language zipped) to embed interactive 3D models and cross-sections into GoogleEarth. SketchUp models can be exported into 3D pdf through the add-on SimLab, and .stl for 3D printing through the add-on SketchUp STL. 
A free licence is available for students and educators (SketchUp Make), and a few hundred Euros will give you access to SketchUp Pro. Having the capacity to use 3D interactive sketches instead of static 2D images, and to generate serial cross-sections through 3D structures, is a major step forward, which not only enhances students' experience but also nurtures deeper learning. Explaining why on 2D sections upright folds can appear strongly asymmetric, or why a dextral fault can result in an apparent sinistral offset, can be a very challenging thing to do. Tools like SketchUp can help make the learning process far more immediate and easier. My collection of 3D SketchUp models is available at: https://3dwarehouse.sketchup.com/user.html?id=1151977671192710697351083 See also an interactive 3D model embedded into an eBook: https://itunes.apple.com/au/book/introduction-to-structural/id1085911016?mt=13
3D numerical investigation on landslide generated tsunamis around a conical island
NASA Astrophysics Data System (ADS)
Montagna, Francesca; Bellotti, Giorgio
2010-05-01
This paper presents numerical computations of tsunamis generated by subaerial and submerged landslides falling along the flank of a conical island. The study is inspired by the tsunamis that on 30th December 2002 attacked the coast of the volcanic island of Stromboli (South Tyrrhenian sea, Italy). In particular, this paper analyzes the important feature of the lateral spreading of landslide-generated tsunamis and the associated flooding hazard. The numerical model used in this study is the fully three-dimensional commercial code FLOW-3D. The model has already been successfully used (Choi et al., 2007, 2008; Chopakatla et al., 2008) to study the interaction of waves and structures. In the simulations carried out in this work a particular feature of the code has been employed: the GMO (General Moving Object) algorithm. It allows the interaction between moving objects, such as a landslide, and the water to be reproduced. FLOW-3D was first validated using available 3D experiments reproducing tsunamis generated by landslides at the flank of a conical island. The experiments were carried out in the LIC laboratory of the Polytechnic of Bari, Italy (Di Risio et al., 2009). Numerical and experimental time series of run-up and sea level recorded at gauges located at the flanks of the island and offshore have been successfully compared. This analysis shows that the model can accurately represent the generation, propagation and inundation of landslide-generated tsunamis and suggests the use of the numerical model as a tool for preparing inundation maps. At the conference we will present the validation of the model and parametric analyses aimed at investigating how wave properties depend on the landslide kinematics and on further parameters such as the landslide volume and shape, as well as the radius of the island. The expected final results of the research are precomputed inundation maps that depend on the characteristics of the landslide and of the island.
Finally, we will try to apply the code to a real-life case, i.e. the landslide tsunamis at the coast of Stromboli island (Italy). SELECTED REFERENCES: Choi, B.H., D.C. Kim, E. Pelinovsky and S.B. Woo, 2007. Three dimensional simulation of tsunami run-up around conical island. Coastal Engineering 54, pp. 618-629. Chopakatla, S.C., T.C. Lippmann and J.E. Richardson, 2008. Field verification of a computational fluid dynamics model for wave transformation and breaking in the surf zone. Journal of Waterway, Port, Coastal, and Ocean Engineering 134(2), pp. 71-80. Di Risio, M., P. De Girolamo, G. Bellotti, A. Panizzo, F. Aristodemo, M.G. Molfetta, and A.F. Petrillo, 2009. Landslide-generated tsunamis run-up at the coast of a conical island: New physical model experiments. J. Geophys. Res., 114, C01009, doi:10.1029/2008JC004858. Flow Science, Inc., 2007. FLOW-3D User's Manual.
Capturing PM2.5 Emissions from 3D Printing via Nanofiber-based Air Filter.
Rao, Chengchen; Gu, Fu; Zhao, Peng; Sharmin, Nusrat; Gu, Haibing; Fu, Jianzhong
2017-09-04
This study investigated the feasibility of using polycaprolactone (PCL) nanofiber-based air filters to capture PM2.5 particles emitted from fused deposition modeling (FDM) 3D printing. Generation and aggregation of emitted particles were investigated under different testing environments. The results show that: (1) the PCL nanofiber membranes are capable of capturing particle emissions from 3D printing, (2) relative humidity plays a significant role in the aggregation of the captured particles, (3) the generation and aggregation of particles from 3D printing can be divided into four stages: the PM2.5 concentration and particle size increase slowly (first stage), small particles are continuously generated and their concentration increases rapidly (second stage), small particles aggregate into larger particles and the growth of the concentration slows down (third stage), and the PM2.5 concentration and particle aggregation sizes increase rapidly (fourth stage), and (4) the ultrafine particles denoted as "building units" act as the fundamental components of the aggregated particles. This work has tremendous implications in providing measures for controlling the particle emissions from 3D printing, which would facilitate the extensive application of 3D printing. In addition, this study provides a potential application scenario for nanofiber-based air filters beyond laboratory theoretical investigation.
NASA Astrophysics Data System (ADS)
Lounnas, Valère; Wedler, Henry B.; Newman, Timothy; Schaftenaar, Gijs; Harrison, Jason G.; Nepomuceno, Gabriella; Pemberton, Ryan; Tantillo, Dean J.; Vriend, Gert
2014-11-01
In molecular sciences, articles tend to revolve around 2D representations of 3D molecules, and sighted scientists often resort to 3D virtual reality software to study these molecules in detail. Blind and visually impaired (BVI) molecular scientists have access to a series of audio devices that can help them read the text in articles and work with computers. Reading articles published in this journal, though, is nearly impossible for them because they need to generate mental 3D images of molecules, but the article-reading software cannot do that for them. We have previously designed AsteriX, a web server that fully automatically decomposes articles, detects 2D plots of low molecular weight molecules, removes meta data and annotations from these plots, and converts them into 3D atomic coordinates. AsteriX-BVI goes one step further and converts the 3D representation into a 3D printable, haptic-enhanced format that includes Braille annotations. These Braille-annotated physical 3D models allow BVI scientists to generate a complete mental model of the molecule. AsteriX-BVI uses Molden to convert the meta data of quantum chemistry experiments into BVI friendly formats so that the entire line of scientific information that sighted people take for granted—from published articles, via printed results of computational chemistry experiments, to 3D models—is now available to BVI scientists too. The possibilities offered by AsteriX-BVI are illustrated by a project on the isomerization of a sterol, executed by the blind co-author of this article (HBW).
Ghafouri, Hamidreza; Ranjbar, Mohsen; Sakhteman, Amirhossein
2017-08-01
A great challenge in medicinal chemistry is to develop different methods for structural design based on the pattern of previously synthesized compounds. In this study, two different QSAR methods were established and compared for a series of piperidine acetylcholinesterase inhibitors. In one novel approach, PC-LS-SVM and PLS-LS-SVM were used for modeling 3D interaction descriptors, and in the other method the same nonlinear techniques were used to build QSAR equations based on field descriptors. Different validation methods were used to evaluate the models, and the results revealed the greater applicability and predictive ability of the model generated by field descriptors (Q²(LOO-CV) = 1, R²(ext) = 0.97). External validation criteria revealed that both methods can be used to generate reasonable QSAR models. It was concluded that, due to the ability of interaction descriptors to predict the binding mode, this approach can be implemented in future 3D-QSAR software. Copyright © 2017 Elsevier Ltd. All rights reserved.
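The Q² statistic quoted above is the leave-one-out cross-validated coefficient, Q² = 1 - PRESS/TSS, where PRESS sums the squared errors of predictions made with each compound held out in turn. A generic sketch with an ordinary least-squares model standing in for the LS-SVM used in the paper (toy data, not the study's descriptors):

```python
import numpy as np

def q2_loo(X, y):
    """Leave-one-out cross-validated Q2 = 1 - PRESS / TSS for an
    ordinary least-squares regression model."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        # refit the model with compound i held out
        Xi = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(Xi, y[mask], rcond=None)
        pred = np.concatenate([[1.0], X[i]]) @ coef
        press += (y[i] - pred) ** 2
    tss = ((y - y.mean()) ** 2).sum()
    return 1.0 - press / tss

# Perfectly linear toy data should give Q2 very close to 1.
X = np.arange(8, dtype=float).reshape(-1, 1)
y = 2.0 * X[:, 0] + 1.0
q2 = q2_loo(X, y)
```

A Q² near 1 only indicates strong internal predictivity; as the abstract notes, external validation on compounds never used in model building is still required.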
A new approach towards image based virtual 3D city modeling by using close range photogrammetry
NASA Astrophysics Data System (ADS)
Singh, S. P.; Jain, K.; Mandla, V. R.
2014-05-01
3D city model is a digital representation of the Earth's surface and it's related objects such as building, tree, vegetation, and some manmade feature belonging to urban area. The demand of 3D city modeling is increasing day to day for various engineering and non-engineering applications. Generally three main image based approaches are using for virtual 3D city models generation. In first approach, researchers used Sketch based modeling, second method is Procedural grammar based modeling and third approach is Close range photogrammetry based modeling. Literature study shows that till date, there is no complete solution available to create complete 3D city model by using images. These image based methods also have limitations This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. This approach is divided into three sections. First, data acquisition process, second is 3D data processing, and third is data combination process. In data acquisition process, a multi-camera setup developed and used for video recording of an area. Image frames created from video data. Minimum required and suitable video image frame selected for 3D processing. In second section, based on close range photogrammetric principles and computer vision techniques, 3D model of area created. In third section, this 3D model exported to adding and merging of other pieces of large area. Scaling and alignment of 3D model was done. After applying the texturing and rendering on this model, a final photo-realistic textured 3D model created. This 3D model transferred into walk-through model or in movie form. Most of the processing steps are automatic. So this method is cost effective and less laborious. Accuracy of this model is good. For this research work, study area is the campus of department of civil engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for city. 
Aerial photography is restricted in many countries, and high resolution satellite images are costly. The proposed method is based only on simple video recording of an area and is therefore well suited for 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for many kinds of applications, such as planning for navigation, tourism, disaster management, transportation, municipal administration, urban and environmental management, and the real-estate industry. This study thus provides a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using close range photogrammetry.
A Web-based Visualization System for Three Dimensional Geological Model using Open GIS
NASA Astrophysics Data System (ADS)
Nemoto, T.; Masumoto, S.; Nonogaki, S.
2017-12-01
A three-dimensional geological model is an important source of information in various fields such as environmental assessment, urban planning, resource development, waste management and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open source software. The system has been successfully implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer plays the role of mapping horizontal cross sections of the 3D geological model and a topographic map. GRASS provides the core components for management, analysis and image processing of the geological model. Online access to GRASS functions has been enabled using PyWPS, an implementation of the Open Geospatial Consortium (OGC) WPS (Web Processing Service) standard. The system has two main functions. The two-dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model. These images are delivered via the WMS (Web Map Service) and WPS OGC standards. Horizontal cross sections are overlaid on the topographic map, and a vertical cross section is generated by clicking a start point and an end point on the map. The three-dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram, which can be viewed from various angles by mouse operation. WebGL, a web technology that brings hardware-accelerated 3D graphics to the browser without installing additional software, is utilized for 3D visualization. The geological boundary surfaces can be downloaded so that the geological structure can be incorporated into CAD designs and models for various simulations. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
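The map delivery described above relies on standard OGC WMS requests. As an illustrative sketch only, a GetMap query for a horizontal cross-section overlay can be assembled as a plain URL; the endpoint, layer names and bounding box below are hypothetical stand-ins, not those of the actual system.

```python
from urllib.parse import urlencode

# Hypothetical MapServer endpoint and layer names, for illustration only.
base_url = "https://example.org/cgi-bin/mapserv"
params = {
    "SERVICE": "WMS",            # OGC Web Map Service
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "topo,geo_slice",  # topographic map + horizontal cross section
    "CRS": "EPSG:4326",
    "BBOX": "35.0,139.0,36.0,140.0",
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",
}
getmap_url = base_url + "?" + urlencode(params)
```

Fetching such a URL returns a PNG map image that a browser client can overlay on the base map.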
NASA Astrophysics Data System (ADS)
Saldaña-Martínez, M. I.; Guzmán-González, J. V.; Barajas-González, O. G.; Guzman-Ramos, V.; García-Garza, A. K.; González-García, R. B.; García-Ramírez, M. A.
2017-03-01
It is quite common that patients with ligamentous ruptures, tendonitis, tenosynovitis or sprains are prescribed ad hoc splints for a swift recovery. In this paper, we propose a rehabilitation splint focused on upper-limb injuries. Considering that each upper-limb patient presents a different set of characteristics, our proposal personalizes and prints a custom-made splint through a digital model generated by a commercial 3D scanner. Stereolithography (SLA) material is used to fabricate the 3D scanned model because of the properties it offers. To complement the recovery process, an electronic system is implemented within the splint design. Based on the transcutaneous electrical nerve stimulation (TENS) principle, this system generates a set of pulses for a fixed period of time, focused mainly on a certain group of muscles, to enable a fast recovery.
Model-based adaptive 3D sonar reconstruction in reverberating environments.
Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le
2015-10-01
In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments such as shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction-of-arrival trajectories of multiple echoes impinging on the array. Echo tracking is treated as a model-based processing stage that incorporates prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness-of-fit tests and demonstrates the importance of model-based processing for bathymetry reconstruction.
Yang, Hao; Xu, Xiangyang; Neumann, Ingo
2014-11-19
Terrestrial laser scanning (TLS) technology is a new technique for quickly acquiring three-dimensional information. In this paper we investigate the health assessment of concrete structures using a Finite Element Method (FEM) model based on TLS. The goal focuses on the benefits of 3D TLS in the generation and calibration of FEM models, in order to build a convenient, efficient and intelligent model that can be widely used for the detection and assessment of bridges, buildings, subways and other structures. After comparing the finite element simulation with surface-based measurement data from TLS, the FEM model is determined to be acceptable, with an error of less than 5%. The benefit of TLS lies mainly in the possibility of a surface-based validation of results predicted by the FEM model.
2016-02-10
Data were acquired using bolt hole eddy current (BHEC) techniques for a wide range of crack sizes and shapes, including mid-bore, corner and through-thickness cracks, in order to select the most appropriate VIC-3D surrogate model for the subsequent crack sizing inversion step. Subject terms: bolt hole eddy current (BHEC); mid-bore, corner and through-thickness crack types; VIC-3D generated surrogate models.
NASA Astrophysics Data System (ADS)
Rodríguez Miranda, Á.; Valle Melón, J. M.
2017-02-01
Three-dimensional models with photographic textures have become a common product for the study and dissemination of heritage elements. Interest in cultural heritage also includes its evolution over time; therefore, apart from 3D models of the current state, it is interesting to be able to generate models representing how the elements were in the past. To that end, it is necessary to resort to archive information corresponding to the moments that we want to visualize. This text analyses the possibilities of generating 3D models of surfaces with photographic textures from old collections of analog negatives coming from terrestrial stereoscopic photogrammetry surveys of historic buildings. The case studies presented refer to the geometric documentation of a small hermitage (done in 1996) and two sections of a wall (year 2000). The procedure starts with the digitization of the film negatives and the processing of the generated images, after which a combination of different methods for 3D reconstruction and texture wrapping is applied: techniques working simultaneously with several images (such as the algorithms of Structure from Motion, SfM) and single-image techniques (such as reconstruction based on vanishing points). The features of the obtained models are then described in terms of geometric accuracy, completeness and aesthetic quality. In this way, it is possible to establish the real applicability of the models for the aforementioned historical studies and dissemination purposes. The text also draws attention to the importance of preserving the documentary heritage available in collections of negatives in archival custody, and to the increasing difficulty of using them due to: (1) problems of access and physical conservation, (2) obsolescence of the equipment for scanning and stereoplotting and (3) the fact that the software for processing digitized photographs is discontinued.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafer, Morgan W; Battaglia, D. J.; Unterberg, Ezekial A
A new tangential 2D Soft X-Ray Imaging System (SXRIS) is being designed to examine the edge magnetic island structure in the lower X-point region of DIII-D. A synthetic diagnostic calculation coupled to 3D emissivity estimates is used to generate phantom images. Phillips-Tikhonov regularization is used to invert the phantom images for comparison to the original emissivity model. Noise level, island size, and equilibrium accuracy are scanned to assess the feasibility of detecting edge island structures. Models of typical DIII-D discharges indicate integration times > 1 ms with accurate equilibrium reconstruction are needed for small island (< 3 cm) detection.
Gabara, Grzegorz; Sawicki, Piotr
2018-03-06
The paper presents the results of testing a proposed image-based point clouds measuring method for geometric parameters determination of a railway track. The study was performed based on a configuration of digital images and reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition of measurements and inspection of the rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011.
Using Parameters of Dynamic Pulse Function for 3d Modeling in LOD3 Based on Random Textures
NASA Astrophysics Data System (ADS)
Alizadehashrafi, B.
2015-12-01
The pulse function (PF) is a procedural preprocessing technique that generates a computerized virtual photo of a façade within a fixed-size square (Alizadehashrafi et al., 2009; Musliman et al., 2010). The Dynamic Pulse Function (DPF) is an enhanced version of the PF that creates the final photo in proportion to the real geometry, which avoids distortion when the computerized photo is projected onto the generated 3D model (Alizadehashrafi and Rahman, 2013). The aim achieved in this paper is the challenging step from a 3D model in LoD2 to one in LoD3. In the DPF-based technique, the geometries of the windows and doors are saved in an XML file schema that has no connection with the 3D model in LoD2 and CityGML format. In this research, the parameters of the Dynamic Pulse Function are utilized via the Ruby programming language in Trimble SketchUp to generate the windows and doors (exact position and depth) automatically in LoD3, based on the same concept as the DPF. The advantage of this technique is the automatic generation of large numbers of similar geometries, e.g. windows, by utilizing DPF parameters along with defined entities and window layers. When the SKP file is converted to CityGML via FME software or CityGML plugins, the 3D model contains the semantic database about the entities and window layers, which can connect the CityGML to MySQL (Alizadehashrafi and Baig, 2014). The concept behind the DPF is to use logical operations to project the texture onto the background image in a way that is dynamically proportional to the real geometry. The projection is based on two dynamic pulses, one vertical and one horizontal, starting from the upper-left corner of the background wall and running in the down and right directions, respectively, in the image coordinate system. A logical one or zero at the intersection of the vertical and horizontal pulses determines whether the texture is projected onto the background image at that location.
It is possible to define a priority for each layer. For instance, the priority of the door layer can be set higher than that of the window layer, which means that a window texture cannot be projected onto the door layer. Orthogonal, rectified, perpendicular symmetric photos of the 3D objects, proportional to the real façade geometry, must be used to generate the output frame for the DPF. The DPF produces output image files of very high quality and small data size, in considerably smaller dimensions than the photorealistic texturing method. The disadvantage of the DPF is that it is a preprocessing method that generates an output image file, rather than an online process that generates the texture within the 3D environment such as CityGML; furthermore, its result can only be utilized for 3D models in LoD2 rather than LoD3. In the current work, random textures of the window layers are created based on DPF parameters within the Ruby console of Trimble SketchUp, to generate the deeper geometries of the windows and their exact positions on the façade automatically, along with random textures to increase the Level of Realism (LoR) (Scarpino, 2010). Since the output frame in the DPF is proportional to the real geometry (height and width of the façade), it is possible to query the XML database and convert the values to units such as metres automatically. In this technique, the perpendicular terrestrial photo of the façade is rectified by a projective transformation based on a frame that is in constrained proportion to the real geometry. The rectified photos, which are not suitable for texturing but are necessary for measuring, can be resized in constrained proportion to the real geometry before the measuring process. The heights and widths of windows and doors, and the horizontal and vertical distances between windows measured from the upper-left corner of the photo, are the parameters that must be measured to run the program as a plugin in SketchUp.
The system can use these parameters, together with texture file names and file paths, to create the façade semi-automatically. To avoid leaning geometry, the textures of windows, doors, etc. should be cropped and rectified from perpendicular photos so that they can be used in the program to create the whole façade along with its geometries. Texture enhancement, such as removing disturbing objects, exposure adjustment and left-right or up-down transformation, should be done in advance. In fact, the quality, small data size, scale and semantic database for each façade are the prominent advantages of this method.
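The pulse-based projection logic described above can be sketched in a few lines. This is an illustrative reimplementation in Python/NumPy, not the authors' Ruby plugin; the layer-priority machinery is omitted and all names are hypothetical.

```python
import numpy as np

def pulse(length, starts, sizes):
    """1-D pulse train: 1 inside each interval [start, start+size), else 0."""
    p = np.zeros(length, dtype=bool)
    for s, w in zip(starts, sizes):
        p[s:s + w] = True
    return p

def project_layer(wall, texture, xs, ws, ys, hs):
    """Project `texture` tiles onto `wall` wherever the horizontal and
    vertical dynamic pulses intersect (logical AND), as in the DPF concept.
    The pulses start from the upper-left corner of the wall image."""
    h_pulse = pulse(wall.shape[1], xs, ws)   # along the façade width
    v_pulse = pulse(wall.shape[0], ys, hs)   # along the façade height
    mask = np.outer(v_pulse, h_pulse)        # 1 where both pulses are 1
    for y, hh in zip(ys, hs):
        for x, ww in zip(xs, ws):
            tile = np.resize(texture, (hh, ww))       # tile texture to fit
            region = mask[y:y + hh, x:x + ww]
            wall[y:y + hh, x:x + ww] = np.where(
                region, tile, wall[y:y + hh, x:x + ww])
    return wall
```

For a 100x200 wall with one 20-pixel-wide, 10-pixel-tall window at (10, 5), only that rectangle receives the texture; everywhere else the background wall is left untouched.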
Shading of a computer-generated hologram by zone plate modulation.
Kurihara, Takayuki; Takaki, Yasuhiro
2012-02-13
We propose a hologram calculation technique that enables reconstructing a shaded three-dimensional (3D) image. The amplitude distributions of zone plates, which generate the object points that constitute a 3D object, were two-dimensionally modulated. Two-dimensional (2D) amplitude modulation was determined on the basis of the Phong reflection model developed for computer graphics, which considers the specular, diffuse, and ambient reflection light components. The 2D amplitude modulation added variable and constant modulations: the former controlled the specular light component and the latter controlled the diffuse and ambient components. The proposed calculation technique was experimentally verified. The reconstructed image showed specular reflection that varied depending on the viewing position.
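The 2D amplitude modulation above is driven by the Phong reflection model. A minimal sketch of the per-point Phong weight, split into the constant part (ambient + diffuse) and the view-dependent variable part (specular) that modulate each zone plate, might look like this; the coefficients are illustrative, not taken from the paper.

```python
import numpy as np

def phong_amplitude(normal, light_dir, view_dir,
                    ka=0.1, kd=0.6, ks=0.3, shininess=16):
    """Phong reflection weight for one object point, split into the
    constant part (ambient + diffuse) and the variable part (specular)
    used to modulate the point's zone plate amplitude."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = kd * max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l           # mirror reflection of the light
    specular = ks * max(np.dot(r, v), 0.0) ** shininess
    constant = ka + diffuse                  # same for every viewing angle
    return constant + specular               # total amplitude weight
```

Because only the specular term depends on the view direction, reconstructing the hologram from different positions changes the highlight while the diffuse and ambient shading stays fixed, matching the behaviour reported in the experiment.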
3D Modelling of the Lusatian Borough in Biskupin Using Archival Data
NASA Astrophysics Data System (ADS)
Zawieska, D.; Markiewicz, J. S.; Kopiasz, J.; Tazbir, J.; Tobiasz, A.
2017-02-01
The paper presents the results of 3D modelling of the Lusatian borough in Biskupin using archival data. Pre-war photographs were acquired from different heights, e.g. from a captive balloon (maximum height up to 150 m), from a blimp (at a height of 50-110 m) and from an aeroplane (at heights of 200 m, 300 m and up to 3 km). To generate the 3D models, AgiSoft tools were applied, as they allow shapes to be restored using triangular meshes. Individual photographs were processed using Google SketchUp software and the "shape from shadow" method. The usefulness of these models in archaeological research was also analysed.
Yan, Yuanwei; Bejoy, Julie; Xia, Junfei; Guan, Jingjiao; Zhou, Yi; Li, Yan
2016-09-15
Appropriate neural patterning of human induced pluripotent stem cells (hiPSCs) is critical to generate specific neural cells/tissues and even mini-brains that are physiologically relevant to model neurological diseases. However, the capacity of signaling factors that regulate 3-D neural tissue patterning in vitro and the differential responses of the resulting neural populations to various biomolecules have not yet been fully understood. By tuning neural patterning of hiPSCs with small molecules targeting sonic hedgehog (SHH) signaling, this study generated different 3-D neuronal cultures that were mainly composed of either cortical glutamatergic neurons or motor neurons. Abundant glutamatergic neurons were observed following treatment with an antagonist of SHH signaling, cyclopamine, while Islet-1 and HB9-expressing motor neurons were enriched by an SHH agonist, purmorphamine. In neurons derived with different neural patterning factors, whole-cell patch clamp recordings showed similar voltage-gated Na(+)/K(+) currents, depolarization-evoked action potentials and spontaneous excitatory post-synaptic currents. Moreover, these different neuronal populations exhibited differential responses to three classes of biomolecules, including (1) matrix metalloproteinase inhibitors that affect extracellular matrix remodeling; (2) N-methyl-d-aspartate that induces general neurotoxicity; and (3) amyloid β (1-42) oligomers that cause neuronal subtype-specific neurotoxicity. This study should advance our understanding of hiPSC self-organization and neural tissue development and provide a transformative approach to establish 3-D models for neurological disease modeling and drug discovery.
However, the capability of sonic hedgehog-related small molecules to tune different neuronal subtypes in 3-D differentiation from hiPSCs and the differential cellular responses of region-specific neuronal subtypes to various biomolecules have not been fully investigated. By tuning neural patterning of hiPSCs with small molecules targeting sonic hedgehog signaling, this study provides knowledge on the differential susceptibility of region-specific neuronal subtypes derived from hiPSCs to different biomolecules in extracellular matrix remodeling and neurotoxicity. The findings are significant for understanding 3-D neural patterning of hiPSCs for the applications in brain organoid formation, neurological disease modeling, and drug discovery.
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2013-02-01
This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.
3D virtual character reconstruction from projections: a NURBS-based approach
NASA Astrophysics Data System (ADS)
Triki, Olfa; Zaharia, Titus B.; Preteux, Francoise J.
2004-05-01
This work was carried out within the framework of the industrial project TOON, supported by the French government. TOON aims at developing tools for automating traditional 2D cartoon content production. This paper presents preliminary results from the TOON platform. The proposed methodology addresses the issues of 2D/3D reconstruction from a limited number of drawn projections, and of 2D/3D manipulation, deformation and refinement of virtual characters. Specifically, we show that the NURBS-based modeling approach developed here offers a well-suited framework for generating deformable 3D virtual characters from incomplete 2D information. Furthermore, crucial functionalities such as animation and non-rigid deformation can also be efficiently handled. Note that user interaction is enabled exclusively in 2D through a multiview constraint specification method. This is fully consistent with traditional cartoon-creation practice and avoids the use of 3D modeling software packages, which are generally complex to manipulate.
3D-printed orthodontic brackets - proof of concept.
Krey, Karl-Friedrich; Darkazanly, Nawras; Kühnert, Rolf; Ruge, Sebastian
Today, orthodontic treatment with fixed appliances is usually carried out using preprogrammed straight-wire brackets made of metal or ceramics. The goal of this study was to determine the possibility of clinically implementing a fully digital workflow with individually designed and three-dimensionally printed (3D-printed) brackets. Edgewise brackets were designed using computer-aided design (CAD) software for demonstration purposes. After segmentation of the malocclusion model generated based on intraoral scan data, the brackets were digitally positioned on the teeth and a target occlusion model was created. The thus-defined tooth position was used to generate a template for an individualized arch form in the horizontal plane. The base contours of the brackets were modified to match the shape of the tooth surfaces, and a positioning guide (fabricated beforehand) was used to ensure that the brackets were bonded at the correct angle and position. The brackets, positioning guide, and retainer splint, digitally designed on the target occlusion model, were 3D printed using a Digital Light Processing (DLP) 3D printer. The archwires were individually pre-bent using the template. In the treatment sequence, it was shown for the first time that, in principle, it is possible to perform treatment with an individualized 3D-printed bracket system by using the proposed fully digital workflow. Technical aspects of the system, problems encountered in treatment, and possible future developments are discussed in this article.
NASA Astrophysics Data System (ADS)
Zidane, A.; Firoozabadi, A.
2017-12-01
We present an efficient and accurate numerical model for multicomponent compressible single-phase flow in 2D and 3D fractured media based on higher-order discretization. The numerical model accounts for heterogeneity and anisotropy in unstructured gridding with low mesh dependency. The efficiency of our model is demonstrated by having comparable CPU time between fractured and unfractured media. The fracture cross-flow equilibrium approach (FCFE) is applied on triangular finite elements (FE) in 2D. This allows simulating fractured reservoirs with all possible orientations of fractures as opposed to rectangular FE. In 3D we apply the FCFE approach on the prism FE. The prism FE with FCFE allows simulating realistic fractured domains compared to hexahedron FE. In addition, when using FCFE on triangular and prism FE there is no limitation on the number of intersecting fractures, whereas in rectangular and hexahedron FE the number is limited to 2 in 2D and 3 in 3D. To generate domains with complicated boundaries, we have developed a computer-aided design (CAD) interface in our model. The advances introduced in this work are demonstrated through various examples.
NASA Astrophysics Data System (ADS)
Di Giulio, R.; Maietti, F.; Piaia, E.; Medici, M.; Ferrari, F.; Turillazzi, B.
2017-02-01
The generation of high quality 3D models can still be very time-consuming and expensive, and the outcome of digital reconstructions is frequently provided in formats that are not interoperable and therefore cannot be easily accessed. This challenge is even more crucial for complex architectures and large heritage sites, which involve a large amount of data to be acquired, managed and enriched with metadata. In this framework, the ongoing EU-funded project INCEPTION (Inclusive Cultural Heritage in Europe through 3D semantic modelling) proposes a workflow aimed at efficient 3D digitization methods, post-processing tools for enriched semantic modelling, and web-based solutions and applications to ensure wide access for experts and non-experts. In order to face these challenges and to begin solving the issue of the large amount of captured data and time-consuming processes in the production of 3D digital models, an Optimized Data Acquisition Protocol (DAP) has been set up. Its purpose is to guide the digitization of cultural heritage, respecting the needs, requirements and specificities of cultural assets.
NASA Astrophysics Data System (ADS)
Lalush, D. S.; Tsui, B. M. W.
1998-06-01
We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
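For readers unfamiliar with the ordered-subsets update compared above, a minimal dense-matrix OS-EM sketch is shown below. It omits attenuation, detector response and scatter modeling (unlike the study) and uses an interleaved subset choice purely for illustration.

```python
import numpy as np

def os_em(A, y, n_subsets=4, n_iter=10):
    """Minimal ordered-subsets EM (OS-EM) sketch for Poisson data y ~ A @ x.
    A: (m, n) system matrix, y: (m,) measured projections.
    Each sub-iteration updates x using only one subset of the projection
    rows, which is what gives OS its speed-up over plain ML-EM."""
    m, n = A.shape
    x = np.ones(n)                                   # positive initial image
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]
            sens = As.sum(axis=0)                    # subset sensitivity image
            fwd = As @ x                             # forward projection
            ratio = np.where(fwd > 0, y[idx] / fwd, 0.0)
            x = x * (As.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

With n_subsets=1 this reduces to the classical ML-EM update; increasing the number of subsets multiplies the effective number of updates per pass through the data, at the cost of the sensitivity to model errors noted in the abstract.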
A hybrid experimental-numerical technique for determining 3D velocity fields from planar 2D PIV data
NASA Astrophysics Data System (ADS)
Eden, A.; Sigurdson, M.; Mezić, I.; Meinhart, C. D.
2016-09-01
Knowledge of 3D, three component velocity fields is central to the understanding and development of effective microfluidic devices for lab-on-chip mixing applications. In this paper we present a hybrid experimental-numerical method for the generation of 3D flow information from 2D particle image velocimetry (PIV) experimental data and finite element simulations of an alternating current electrothermal (ACET) micromixer. A numerical least-squares optimization algorithm is applied to a theory-based 3D multiphysics simulation in conjunction with 2D PIV data to generate an improved estimation of the steady state velocity field. This 3D velocity field can be used to assess mixing phenomena more accurately than would be possible through simulation alone. Our technique can also be used to estimate uncertain quantities in experimental situations by fitting the gathered field data to a simulated physical model. The optimization algorithm reduced the root-mean-squared difference between the experimental and simulated velocity fields in the target region by more than a factor of 4, resulting in an average error less than 12% of the average velocity magnitude.
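The least-squares fitting step described above can be illustrated with a toy analytic velocity model standing in for the finite element ACET simulation; the vortex model, its parameters and the synthetic "PIV" plane below are purely hypothetical, chosen only to show the fit of simulation parameters to planar 2D data.

```python
import numpy as np
from scipy.optimize import least_squares

def simulated_velocity(params, x, y):
    """Hypothetical analytic stand-in for the 3D multiphysics simulation:
    a decaying planar vortex with two free parameters."""
    strength, decay = params
    r2 = x**2 + y**2
    u = -strength * y * np.exp(-decay * r2)
    v = strength * x * np.exp(-decay * r2)
    return u, v

def residuals(params, x, y, u_piv, v_piv):
    """Difference between simulated and measured (PIV) in-plane velocity."""
    u, v = simulated_velocity(params, x, y)
    return np.concatenate([(u - u_piv).ravel(), (v - v_piv).ravel()])

# Synthetic "PIV" data generated from known parameters, then recovered.
x, y = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
u_piv, v_piv = simulated_velocity([2.0, 0.5], x, y)
fit = least_squares(residuals, x0=[1.0, 1.0], args=(x, y, u_piv, v_piv))
```

Once the parameters are recovered from the 2D plane, the same parametric model can be evaluated off-plane to estimate the full 3D field, which is the essence of the hybrid approach.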
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, T.A.
1998-11-01
The objectives of this task are to: Develop a model (paper) to estimate the cost and waste generation of cleanup within the Environmental Management (EM) complex; Identify technologies applicable to decontamination and decommissioning (D and D) operations within the EM complex; Develop a database of facility information as linked to project baseline summaries (PBSs). The above objectives are carried out through the following four subtasks: Subtask 1--D and D Model Development, Subtask 2--Technology List; Subtask 3--Facility Database, and Subtask 4--Incorporation into a User Model.
Sensitivity of Attitude Determination on the Model Assumed for ISAR Radar Mappings
NASA Astrophysics Data System (ADS)
Lemmens, S.; Krag, H.
2013-09-01
Inverse synthetic aperture radars (ISAR) are valuable instruments for assessing the state of a large object in low Earth orbit. The images generated by these radars can reach a sufficient quality to be used during launch support or contingency operations, e.g. for confirming the deployment of structures, determining the structural integrity, or analysing the dynamic behaviour of an object. However, the direct interpretation of ISAR images can be a demanding task due to the nature of the range-Doppler space in which these images are produced. Recently, a tool has been developed by the European Space Agency's Space Debris Office to generate radar mappings of a target in orbit. Such mappings are a 3D-model-based simulation of how an ideal ISAR image would be generated by a ground-based radar under given processing conditions. These radar mappings can be used to support a data interpretation process. For example, by processing predefined attitude scenarios during an observation sequence and comparing them with actual observations, one can detect non-nominal behaviour. Conversely, one can also estimate the attitude states of the target by fitting the radar mappings to the observations. It has been demonstrated for the latter use case that a coarse approximation of the target through a 3D model is already sufficient to derive the attitude information from the generated mappings. The level of detail required for the 3D model is determined by the process of generating ISAR images, which is based on the theory of scattering bodies. Therefore, a complex surface can return an intrinsically noisy ISAR image; for example, when many instruments on a satellite are visible to the observer, the ISAR image can suffer from multipath reflections. In this paper, we will further analyse the sensitivity of the attitude fitting algorithms to variations in the dimensions and the level of detail of the underlying 3D model.
Moreover, we investigate the ability to estimate the orientations of different spacecraft components with respect to each other from the fitting procedure.
Improving Perceptual Skills with 3-Dimensional Animations.
ERIC Educational Resources Information Center
Johns, Janet Faye; Brander, Julianne Marie
1998-01-01
Describes three-dimensional computer aided design (CAD) models for every component in a representative mechanical system; the CAD models made it easy to generate 3-D animations that are ideal for teaching perceptual skills in multimedia computer-based technical training. Fifteen illustrations are provided. (AEF)
Practical computational toolkits for dendrimers and dendrons structure design.
Martinho, Nuno; Silva, Liana C; Florindo, Helena F; Brocchini, Steve; Barata, Teresa; Zloh, Mire
2017-09-01
Dendrimers and dendrons offer an excellent platform for developing novel drug delivery systems and medicines. The rational design and further development of these repetitively branched systems are restricted by difficulties in scalable synthesis and structural determination, which can be overcome by judicious use of molecular modelling and molecular simulations. A major difficulty in utilising in silico studies to design dendrimers lies in the laborious generation of their structures. Current modelling tools utilise automated assembly of simpler dendrimers or the inefficient manual assembly of monomer precursors to generate more complicated dendrimer structures. Herein we describe two novel graphical user interface toolkits written in Python that provide an improved degree of automation for rapid assembly of dendrimers and generation of their 2D and 3D structures. Our first toolkit uses the RDKit library, SMILES nomenclature of monomers and SMARTS reaction nomenclature to generate SMILES and mol files of dendrimers without 3D coordinates. These files are used for simple graphical representations and storing their structures in databases. The second toolkit assembles complex-topology dendrimers from monomers to construct 3D dendrimer structures to be used as starting points for simulation using existing and widely available software and force fields. Both tools were validated for ease of use in prototyping dendrimer structures, and the second toolkit was especially relevant for dendrimers of high complexity and size.
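As a toy illustration of the kind of automation described (not the authors' toolkits, which rely on RDKit and SMARTS reaction nomenclature), a dendrimer-like string can be assembled generation by generation by recursively substituting open attachment sites with a branching monomer. The core and AB2-type branch below are hypothetical placeholders, and the output is a schematic string rather than guaranteed-valid SMILES.

```python
def grow_dendrimer(core, branch, generations, site="*"):
    """Recursively replace each open attachment site with a branching monomer.
    `core` and `branch` are SMILES-like strings; `site` marks open valences.
    After the last generation, leftover sites are capped (removed)."""
    s = core
    for _ in range(generations):
        # str.replace does not rescan replacement text, so this is one pass per generation
        s = s.replace(site, branch)
    return s.replace("(" + site + ")", "").replace(site, "")  # cap leftover sites

# Hypothetical monomers: a triamine-like core with 3 sites, an AB2-type branch with 2.
core = "N(*)(*)*"
branch = "CCN(*)*"
g2 = grow_dendrimer(core, branch, 2)
print(g2)  # generation-2 structure: 1 core N + 3 gen-1 + 6 gen-2 nitrogens
```

Real toolkits would perform the same generation-by-generation assembly at the molecule level (e.g. with RDKit reactions), which also validates valences and produces mol files.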
NASA Astrophysics Data System (ADS)
Rose, D. V.; Welch, D. R.; Clark, R. E.; Thoma, C.; Zimmerman, W. R.; Bruner, N.; Rambo, P. K.; Atherton, B. W.
2011-09-01
Streamer and leader formation in high-pressure devices is a dynamic process involving a broad range of physical phenomena. These include elastic and inelastic particle collisions in the gas, radiation generation, transport and absorption, and electrode interactions. Accurate modeling of these physical processes is essential for a number of applications, including high-current, laser-triggered gas switches. Towards this end, we present a new 3D implicit particle-in-cell simulation model of gas breakdown leading to streamer formation in electronegative gases. The model uses a Monte Carlo treatment for all particle interactions and includes discrete photon generation, transport, and absorption for ultra-violet and soft x-ray radiation. Central to the realization of this fully kinetic particle treatment is an algorithm that manages the total particle count by species while preserving the local momentum distribution functions and conserving charge [D. R. Welch, T. C. Genoni, R. E. Clark, and D. V. Rose, J. Comput. Phys. 227, 143 (2007)]. The simulation model is fully electromagnetic, making it capable of following, for example, the evolution of a gas switch from the point of laser-induced localized breakdown of the gas between electrodes through the successive stages of streamer propagation, initial electrode current connection, and high-current conduction channel evolution, where self-magnetic field effects are likely to be important. We describe the model details and underlying assumptions used and present sample results from 3D simulations of streamer formation and propagation in SF6.
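The particle-count management idea can be sketched in a deliberately simplified form: merging two macro-particles of one species into a single particle while conserving charge (statistical weight) and momentum exactly. This is only a stand-in for the cited algorithm of Welch et al., which additionally preserves the local momentum distribution functions rather than collapsing them.

```python
import numpy as np

def merge_pair(w1, v1, w2, v2):
    """Merge two same-species macro-particles into one.
    Total weight (hence charge) and total momentum are conserved exactly;
    the merged velocity is the weight-averaged velocity."""
    w = w1 + w2
    v = (w1 * np.asarray(v1, dtype=float) + w2 * np.asarray(v2, dtype=float)) / w
    return w, v

# Two macro-electrons with weights 2.0 and 1.0 and different 3D velocities.
w, v = merge_pair(2.0, [1.0, 0.0, 0.0], 1.0, [0.0, 3.0, 0.0])
print(w, v)  # weight 3.0; w*v reproduces the original total momentum (2, 3, 0)
```

A production merge step would select pairs that are close in phase space so that the kinetic energy error stays small, which pairwise weight averaging alone does not guarantee.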
3D Visualization Development of SIUE Campus
NASA Astrophysics Data System (ADS)
Nellutla, Shravya
Geographic Information Systems (GIS) have progressed from traditional map-making to modern technology where information can be created, edited, managed and analyzed. Like any other model, maps are simplified representations of the real world. Hence, visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS suite and its extensions for 3D modeling and visualization and use them to depict a real-world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus model for Southern Illinois University Edwardsville is demonstrated.
Geological evolution of the North Sea: a dynamic 3D model including petroleum system elements
NASA Astrophysics Data System (ADS)
Sabine, Heim; Rüdiger, Lutz; Dirk, Kaufmann; Lutz, Reinhardt
2013-04-01
This study investigates the sedimentary basin evolution of the German North Sea with a focus on petroleum generation, migration and accumulation. The study is conducted within the framework of the project "Geoscientific Potential of the German North Sea (GPDN)", a joint project of federal (BGR, BSH) and state authorities (LBEG) with partners from industry and scientific institutions. Based on the structural model of the "Geotektonischer Atlas 3D" (GTA3D, LBEG), this dynamic 3D model additionally contains the northwestern part ("Entenschnabel" area) of the German North Sea. Geological information, e.g. lithostratigraphy, facies and structural data, was provided by industry or taken from published research projects and literature such as the Southern Permian Basin Atlas (SPBA; Doornenbal et al., 2010). Numerical modeling was carried out for a sedimentary succession containing 17 stratigraphic layers and several sublayers, representing sedimentary deposition from the Devonian until the present. Structural details have been considered in terms of simplified faults and salt structures, as well as the main erosion and salt movement events. Lithology, facies and the boundary conditions, e.g. heat flow, paleo water depth and sediment-water interface temperature, were assigned. The system calibration is based on geochemical and petrological data, such as the maturity of organic matter (VRr) and present-day temperature. Based on the maturity of the sedimentary organic matter, Carboniferous layers are the major source rocks for gas generation. The main reservoir rocks are the Rotliegend sandstones; furthermore, sandstones of the Lower Triassic and Jurassic can serve as reservoir rocks in areas where the Zechstein salts are absent. The model provides information on the temperature and maturity distribution within the main source rock layers as well as information on potential hydrocarbon generation based on kinetic data for gas liberation.
Finally, this dynamic 3D model offers a first interpretation of the current data base and an estimation of the structural and burial evolution of the German North Sea area, including information on the petroleum system elements. It includes information about possible migration pathways, oil and gas accumulations, as well as the types of generated hydrocarbons and non-hydrocarbons. References: Doornenbal, J.C. and Stevenson, A.G. (editors), 2010. Petroleum Geological Atlas of the Southern Permian Basin Area. EAGE Publications b.v. (Houten).
NASA Astrophysics Data System (ADS)
Bolick, Leslie; Harguess, Josh
2016-05-01
An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. There are several existing tools, such as VisualSFM and the open-source project OpenSfM, to assist in this process; however, it is well known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAV) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.
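A controlled degradation of the kind studied can be sketched as blur plus additive noise applied with fixed, repeatable parameters. The abstract does not specify the degradations used, so the box-blur kernel, noise level, and seed below are hypothetical illustrations only.

```python
import numpy as np

def degrade(img, noise_sigma=0.05, blur=1, seed=0):
    """Apply a repeatable, controlled degradation to a grayscale image in [0, 1]:
    a separable box blur of radius `blur`, then additive Gaussian noise."""
    out = img.astype(float)
    if blur > 0:
        k = 2 * blur + 1
        kernel = np.ones(k) / k
        # blur rows, then columns (separable box filter)
        out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)
        out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    rng = np.random.default_rng(seed)         # fixed seed -> reproducible degradation
    out = out + rng.normal(0.0, noise_sigma, out.shape)
    return np.clip(out, 0.0, 1.0)

# Toy 8x8 image with a bright square, degraded in a controlled manner.
img = np.zeros((8, 8))
img[3:5, 3:5] = 1.0
print(degrade(img).round(2))
```

Running a SfM package on image sets degraded at several parameter settings then lets one chart reconstruction quality against degradation severity.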
The Implications of 3D Thermal Structure on 1D Atmospheric Retrieval
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Dobbs-Dixon, Ian; Greene, Thomas
2017-10-01
Using the atmospheric structure from a 3D global radiation-hydrodynamic simulation of HD 189733b and the open-source Bayesian Atmospheric Radiative Transfer (BART) code, we investigate the difference between the secondary-eclipse temperature structure produced with a 3D simulation and the best-fit 1D retrieved model. Synthetic data are generated by integrating the 3D models over the Spitzer, the Hubble Space Telescope (HST), and the James Webb Space Telescope (JWST) bandpasses, covering the wavelength range between 1 and 11 μm where most spectroscopically active species have pronounced features. Using the data from different observing instruments, we present detailed comparisons between the temperature-pressure profiles recovered by BART and those from the 3D simulations. We calculate several averages of the 3D thermal structure and explore which particular thermal profile matches the retrieved temperature structure. We implement two temperature parameterizations that are commonly used in retrieval to investigate different thermal profile shapes. To assess which part of the thermal structure is best constrained by the data, we generate contribution functions for our theoretical model and each of our retrieved models. Our conclusions are strongly affected by the spectral resolution of the instruments included, their wavelength coverage, and the number of data points combined. We also see some limitations in each of the temperature parameterizations, as they are not able to fully match the complex curvatures that are usually produced in hydrodynamic simulations. The results show that our 1D retrieval is recovering a temperature and pressure profile that most closely matches the arithmetic average of the 3D thermal structure.
When we use a higher resolution, more data points, and a parameterized temperature profile that allows more flexibility in the middle part of the atmosphere, we find a better match between the retrieved temperature and pressure profile and the arithmetic average. The Spitzer and HST simulated observations sample deep parts of the planetary atmosphere and provide fewer constraints on the temperature and pressure profile, while the JWST observations sample the middle part of the atmosphere, providing a good match with the middle and most complex part of the arithmetic average of the 3D temperature structure.
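The arithmetic average that the 1D retrieval was found to match can be written down directly: average the 3D temperature field over the two horizontal dimensions at each pressure level. The toy field below is a hypothetical stand-in for the radiation-hydrodynamic output.

```python
import numpy as np

def average_tp_profile(T):
    """Collapse a 3D temperature field T[p, lat, lon] into a 1D T(p) profile
    by taking the arithmetic mean over the horizontal grid at each level."""
    return T.mean(axis=(1, 2))

# Toy field: 5 pressure levels on a 4 x 8 horizontal grid,
# with random horizontal variation standing in for day/night contrast.
T = 1000.0 + 100.0 * np.random.default_rng(1).random((5, 4, 8))
profile = average_tp_profile(T)
print(profile.shape)  # one averaged temperature per pressure level
```

Other choices (e.g. weighting by the visible hemisphere or by contribution functions) give different 1D profiles, which is exactly the comparison the paper explores.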
Amaral, Robson L F; Miranda, Mariza; Marcato, Priscyla D; Swiech, Kamilla
2017-01-01
Introduction: Cell-based assays using three-dimensional (3D) cell cultures may reflect the antitumor activity of compounds more accurately, since these models reproduce the tumor microenvironment better. Methods: Here, we report a comparative analysis of cell behavior in the two most widely employed methods for 3D spheroid culture, forced floating (Ultra-low Attachment, ULA, plates), and hanging drop (HD) methods, using the RT4 human bladder cancer cell line as a model. The morphology parameters and growth/metabolism of the spheroids generated were first characterized, using four different cell-seeding concentrations (0.5, 1.25, 2.5, and 3.75 × 10⁴ cells/mL), and then subjected to drug resistance evaluation. Results: Both methods generated spheroids with a smooth surface and round shape in a spheroidization time of about 48 h, regardless of the cell-seeding concentration used. Reduced cell growth and metabolism was observed in 3D cultures compared to two-dimensional (2D) cultures. The optimal range of spheroid diameter (300-500 μm) was obtained using cultures initiated with 0.5 and 1.25 × 10⁴ cells/mL for the ULA method and 2.5 and 3.75 × 10⁴ cells/mL for the HD method. RT4 cells cultured under 3D conditions also exhibited a higher resistance to doxorubicin (IC50 of 1.00 and 0.83 μg/mL for the ULA and HD methods, respectively) compared to 2D cultures (IC50 ranging from 0.39 to 0.43). Conclusions: Comparing the results, we concluded that the forced floating method using ULA plates was more suitable and straightforward for generating RT4 spheroids for drug screening/cytotoxicity assays. The results presented here also contribute to the improvement of the standardization of the 3D cultures required for widespread application.
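An IC50 of the kind reported can be estimated from dose-response data by finding where viability crosses 50%. The sketch below uses simple log-linear interpolation and an entirely hypothetical doxorubicin dose-response series; it is a stand-in for, not a reproduction of, the study's fitting procedure.

```python
import numpy as np

def ic50(conc, viability):
    """Estimate IC50 by log-linear interpolation of a dose-response curve
    at 50% viability (a simple stand-in for a full logistic-curve fit)."""
    logc = np.log10(conc)
    for i in range(len(conc) - 1):
        v0, v1 = viability[i], viability[i + 1]
        if (v0 - 50.0) * (v1 - 50.0) <= 0:  # 50% crossing bracketed here
            t = (50.0 - v0) / (v1 - v0)
            return 10 ** (logc[i] + t * (logc[i + 1] - logc[i]))
    raise ValueError("50% viability not bracketed by the data")

# Hypothetical dose-response: concentration (ug/mL) vs. % viability.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
viab = np.array([95.0, 80.0, 50.0, 25.0, 10.0])
print(round(float(ic50(conc, viab)), 2))  # 1.0 by construction of this toy data
```

A higher interpolated IC50 for 3D cultures than for 2D cultures, as in the abstract, indicates the increased drug resistance of spheroids.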
NASA Astrophysics Data System (ADS)
Partovi, T.; Fraundorfer, F.; Azimi, S.; Marmanis, D.; Reinartz, P.
2017-05-01
3D building reconstruction from satellite remote sensing imagery is still an active research topic and very valuable for 3D city modelling. The roof model is the most important component for reconstructing a building at Level of Detail 2 (LoD2) in 3D modelling. While the general solution for roof modelling relies on detailed cues (such as lines, corners and planes) extracted from a Digital Surface Model (DSM), the correct detection of the roof type and its modelling can fail due to the low quality of DSMs generated by dense stereo matching. To reduce the dependency of roof modelling on DSMs, pansharpened satellite images are used in addition as a rich source of information. In this paper, two strategies are employed for roof type classification. In the first, building roof types are classified with a state-of-the-art supervised, pre-trained convolutional neural network (CNN) framework. In the second strategy, deep features are extracted from deep layers of different pre-trained CNN models, and an SVM with an RBF kernel is then employed to classify the building roof type. Based on the roof complexity of the scene, a roof library including seven types of roofs is defined. A new semi-automatic method is proposed to generate training and test patches of each roof type in the library. Using the pre-trained CNN model not only decreases the computation time for training significantly but also increases the classification accuracy.
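The second strategy (deep features plus an RBF-kernel classifier) can be sketched in miniature. The snippet computes an RBF kernel over feature vectors and uses a kernel nearest-class-mean rule as a simplified stand-in for a trained SVM; the 2D "deep features" and two roof classes are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2),
    the similarity measure applied to CNN deep features."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_nearest_mean(X_train, y_train, X_test, gamma=0.5):
    """Simplified stand-in for an RBF-SVM: assign each test feature to the
    class whose training features have the highest mean kernel similarity."""
    classes = sorted(set(y_train))
    K = rbf_kernel(X_test, X_train, gamma)
    scores = np.stack(
        [K[:, [i for i, y in enumerate(y_train) if y == c]].mean(1) for c in classes],
        axis=1)
    return [classes[i] for i in scores.argmax(1)]

# Hypothetical 2D "deep features" for two roof types.
Xtr = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.0, 0.9]])
ytr = ["flat", "flat", "gable", "gable"]
pred = kernel_nearest_mean(Xtr, ytr, np.array([[0.05, 0.0], [0.95, 1.0]]))
print(pred)  # each test patch lands with its nearby class
```

In practice the features would be thousands-dimensional activations from a pre-trained CNN layer, and a max-margin SVM replaces the nearest-mean rule.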
Gümrükçü, Zeynep; Korkmaz, Yavuz Tolga; Korkmaz, Fatih Mehmet
2017-07-01
The purpose of this study is to evaluate and compare the bone stress generated by vertical implants with simultaneous sinus augmentation against that generated by tilted implants without sinus augmentation in the atrophic maxilla. Six three-dimensional (3D) finite element (FE) models of the atrophic maxilla were generated with SolidWorks software. The maxilla models were varied for two bone types: Models 2a, 2b and 2c represent maxilla models with D2 bone, and Models 3a, 3b and 3c represent maxilla models with D3 bone. Five implants were embedded in each model in three configurations: vertical implant insertion with sinus augmentation (Models 2a and 3a), 30° tilted insertion (Models 2b and 3b), and 45° tilted insertion (Models 2c and 3c). A 150 N load was applied obliquely on the hybrid prosthesis. The maximum von Mises stress values were comparatively evaluated using color scales. The von Mises stress values predicted by the FE models were higher for all D3 bone models in both cortical and cancellous bone. For the vertical implant models, lower stress values were found in cortical bone. Tilting of the distal implants by 30° increased the stress in the cortical layer compared to the vertical implant models. Tilting of the distal implants by 45° decreased the stress in the cortical bone compared to the 30° models, but higher stress values were detected in the 45° models compared to the vertical implant models. Augmentation should be the first treatment option in the atrophic maxilla in terms of biomechanics. Tilted posterior implants can create higher stress values than vertical posterior implants. When tilted implants are planned, a 45° tilted implant results in better biomechanical performance in peri-implant bone than a 30° tilted implant, due to the decrease in cantilever length. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects included a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656
A survey among Brazilian thoracic surgeons about the use of preoperative 2D and 3D images
Cipriano, Federico Enrique Garcia; Arcêncio, Livia; Dessotte, Lycio Umeda; Rodrigues, Alfredo José; Vicente, Walter Villela de Andrade
2016-01-01
Background: To describe how thoracic surgeons use 2D/3D medical imaging for surgical planning, clinical practice, and teaching in thoracic surgery, and to assess Brazilian thoracic surgeons' initial and final preferences for 2D images versus 3D models before and after acquiring theoretical knowledge on the generation, manipulation, and interactive viewing of 3D images. Methods: A descriptive cross-sectional survey of Brazilian thoracic surgeons (members of the Brazilian Society of Thoracic Surgery) who completed an online questionnaire on their computers or personal devices. Results: Of the 395 invitations distributed by email, 107 surgeons completed the survey. There was no statistically significant difference when comparing 2D images vs. 3D models for the following purposes: diagnosis, assessment of the extent of disease, preoperative surgical planning, communication among physicians, resident training, and undergraduate medical education. Surgeons were asked which type of tomographic image display they routinely used in clinical practice (2D images alone, or 3D models combined with 2D images) and which they preferred at the end of the questionnaire. Exclusive use of 2D images: initial choice = 50.47%, final preference = 14.02%. Use of 3D models in combination with 2D images: initial choice = 48.60%, final preference = 85.05%. The shift toward 3D models used together with 2D images was significant (P<0.0001). Conclusions: There is a lack of knowledge of 3D imaging, as well as of its use and interactive manipulation in dedicated 3D applications, with a consequent lack of uniformity in surgical planning based on CT images. These findings support a change in thoracic surgeons' preference from 2D views toward 3D imaging technologies. PMID:27621874
Liu, Peng; Liu, Rijing; Zhang, Yan; Liu, Yingfeng; Tang, Xiaoming; Cheng, Yanzhen
The objective of this study was to assess the clinical feasibility of generating 3D printed models of the left atrial appendage (LAA) using real-time 3D transesophageal echocardiogram (TEE) data for preoperative reference in LAA occlusion. Percutaneous LAA occlusion can effectively reduce the risk of stroke in patients with atrial fibrillation. However, the anatomical structure of the LAA is so complicated that adequate information about its structure is essential for successful LAA occlusion. Emerging 3D printing technology has demonstrated the potential to capture structure more accurately than conventional imaging modalities by creating tangible patient-specific models. Typically, 3D printing data sets are acquired from CT and MRI, which may involve intravenous contrast, sedation, and ionizing radiation. It has been reported that 3D models of the LAA were successfully created from data acquired by CT. However, 3D printing of the LAA using real-time 3D TEE data has not yet been explored. Acquisition of 3D transesophageal echocardiographic data from 8 patients with atrial fibrillation was performed using the Philips EPIQ7 ultrasound system. Raw echocardiographic image data were opened in Philips QLAB, converted to 'Cartesian DICOM' format, and imported into Mimics® software to create 3D models of the LAA, which were printed using a rubber-like material. The printed 3D models were then used for preoperative reference and procedural simulation in LAA occlusion. We successfully printed the LAAs of 8 patients. Each LAA costs approximately CNY 800-1,000 and the total process takes 16-17 h. Seven of the 8 Watchman devices predicted by preprocedural 2D TEE images were of the same sizes as those placed in the real operation. Interestingly, the 3D printed models were highly reflective of the shape and size of the LAAs, and all device sizes predicted by the 3D printed models were fully consistent with those placed in the real operation.
The 3D printed models could also predict operating difficulty and the presence of a peridevice leak. 3D printing of the LAA using real-time 3D transesophageal echocardiographic data can be applied rapidly and reliably in LAA occlusion to assist with physician planning and decision making. © 2016 S. Karger AG, Basel.
The Production of 3D Tumor Spheroids for Cancer Drug Discovery
Sant, Shilpa; Johnston, Paul A.
2017-01-01
New cancer drug approval rates are ≤ 5% despite significant investments in cancer research, drug discovery and development. One strategy to improve the rate of success of new cancer drugs transitioning into the clinic would be to more closely align the cellular models used in early lead discovery with pre-clinical animal models and patient tumors. For solid tumors, this would mandate the development and implementation of three-dimensional (3D) in vitro tumor models that more accurately recapitulate human solid tumor architecture and biology. Recent advances in tissue engineering and regenerative medicine have provided new techniques for 3D spheroid generation, and a variety of in vitro 3D cancer models are being explored for cancer drug discovery. Although homogeneous assay methods and high-content imaging approaches to assess tumor spheroid morphology, growth and viability have been developed, the implementation of 3D models in high-throughput screening (HTS) remains challenging for reasons that we discuss in this review. Perhaps the biggest obstacle to achieving acceptable HTS assay performance metrics occurs in 3D tumor models that produce spheroids with highly variable morphologies and/or sizes. We highlight two methods that produce uniform, size-controlled 3D multicellular tumor spheroids that are compatible with cancer drug research and HTS: tumor spheroids formed in ultra-low attachment microplates, or in polyethylene glycol dimethacrylate hydrogel microwell arrays. PMID:28647083
Meshing of a Spiral Bevel Gearset with 3D Finite Element Analysis
NASA Technical Reports Server (NTRS)
Bibel, George D.; Handschuh, Robert
1996-01-01
Recent advances in spiral bevel gear geometry and finite element technology make it practical to conduct a structural analysis and analytically roll the gearset through mesh. With the advent of user-specific programming linked to 3D solid modelers and mesh generators, model generation has become greatly automated. Contact algorithms available in general purpose finite element codes eliminate the need for the use and alignment of gap elements. Once the gearset is placed in mesh, user subroutines attached to the FE code easily roll the gearset through mesh. The method is described in detail. Preliminary results for a gearset segment showing the progression of the contact line load are given as the gears roll through mesh.
NASA Astrophysics Data System (ADS)
Kozak, J.; Gulbinowicz, D.; Gulbinowicz, Z.
2009-05-01
The need for complex and accurate three-dimensional (3-D) microcomponents is increasing rapidly for many industrial and consumer products. The electrochemical machining (ECM) process has the potential to generate the desired crack-free and stress-free surfaces of microcomponents. This paper reports a study of pulse electrochemical micromachining (PECMM) using ultrashort (nanosecond) pulses for generating complex 3-D microstructures of high accuracy. A mathematical model of the microshaping process, taking into consideration unsteady phenomena in the electrical double layer, has been developed. Software for computer simulation of PECM has been developed, and the effects of machining parameters on anodic localization and the final shape of the machined surface are presented.
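The localization that ultrashort pulses provide is often rationalized with a lumped RC picture of double-layer charging: the charging time constant grows with the interelectrode gap, so only regions with a small gap polarize fully within a nanosecond pulse and dissolve. The sketch below illustrates that reasoning only; it is not the paper's full unsteady model, and the capacitance and conductivity values are assumed round numbers.

```python
import math

def charging_fraction(gap_m, pulse_s, c_dl=0.2, kappa=10.0):
    """Fraction of full double-layer polarization reached by the end of a
    voltage pulse, for a simple RC model with time constant
    tau = c_dl * gap / kappa.
    c_dl : double-layer capacitance per unit area (F/m^2), assumed value
    kappa: electrolyte conductivity (S/m), assumed value"""
    tau = c_dl * gap_m / kappa
    return 1.0 - math.exp(-pulse_s / tau)

# A 1 um gap charges substantially during a 20 ns pulse, while a 10 um
# gap barely polarizes: machining localizes to the closest surface.
near = charging_fraction(1e-6, 20e-9)   # tau = 20 ns here
far = charging_fraction(10e-6, 20e-9)   # tau = 200 ns here
```

With these assumed parameters the 1 μm gap reaches 1 − e⁻¹ ≈ 63% polarization while the 10 μm gap stays below 10%, which is the qualitative mechanism behind nanosecond-pulse localization.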
Cognitive/emotional models for human behavior representation in 3D avatar simulations
NASA Astrophysics Data System (ADS)
Peterson, James K.
2004-08-01
Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.
Improving Visibility of Stereo-Radiographic Spine Reconstruction with Geometric Inferences.
Kumar, Sampath; Nayak, K Prabhakar; Hareesha, K S
2016-04-01
Complex deformities of the spine, like scoliosis, are evaluated more precisely using stereo-radiographic 3D reconstruction techniques. Primarily, this uses six stereo-corresponding points available on the vertebral body for the 3D reconstruction of each vertebra. The wireframe structure obtained in this process visualizes poorly and is hence difficult to use for diagnosis. In this paper, a novel method is proposed to improve the visibility of this wireframe structure using a deformation of a generic spine model in accordance with the 3D-reconstructed corresponding points. Then, geometric inferences such as vertebral orientations are automatically extracted from the radiographs to improve the visibility of the 3D model. Biplanar radiographs are acquired from five scoliotic subjects on a specifically designed calibration bench. The stereo-corresponding point reconstruction method is used to build six-point wireframe vertebral structures and thus the entire spine model. Using the 3D spine midline and automatically extracted vertebral orientation features, a more realistic 3D spine model is generated. To validate the method, the 3D spine model is back-projected on the biplanar radiographs and the error difference is computed. Though this difference is within the error limits available in the literature, the proposed work is simple and economical. The proposed method does not require more corresponding points and image features to improve the visibility of the model; hence, it reduces the computational complexity. Expensive 3D digitizers and vertebral CT scan models are also excluded from this study. Thus, the visibility of stereo-corresponding point reconstruction is improved to obtain a low-cost spine model for a better diagnosis of spinal deformities.
Geospatial database for heritage building conservation
NASA Astrophysics Data System (ADS)
Basir, W. N. F. W. A.; Setan, H.; Majid, Z.; Chong, A.
2014-02-01
Heritage buildings are icons from the past that exist in the present. Through heritage architecture, we can learn about the economic issues and social activities of the past. Nowadays, heritage buildings are under threat from natural disasters, severe weather, pollution and other factors. In order to preserve this heritage for future generations, recording and documenting of heritage buildings are required. With the development of information systems and data collection techniques, it is possible to create a 3D digital model. This 3D information plays an important role in recording and documenting heritage buildings. 3D modeling and virtual reality techniques have demonstrated the ability to visualize the real world in 3D, and can provide a better platform for communication and understanding of heritage buildings. Combining 3D modelling with Geographic Information System (GIS) technology will create a database that supports various analyses of spatial data in the form of a 3D model. The objectives of this research are to determine the reliability of the Terrestrial Laser Scanning (TLS) technique for data acquisition of heritage buildings and to develop a geospatial database for heritage building conservation purposes. The result of the data acquisition will become a guideline for 3D model development. This 3D model will be exported to GIS format in order to develop a database for heritage building conservation. In this database, requirements for the heritage building conservation process are included. Through this research, a proper database for storing and documenting heritage building conservation data will be developed.
Cognitive Virtualization: Combining Cognitive Models and Virtual Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuan Q. Tran; David I. Gertman; Donald D. Dudenhoeffer
2007-08-01
3D manikins are often used in visualizations to model human activity in complex settings. Manikins assist in developing understanding of human actions, movements and routines in a variety of different environments representing new conceptual designs. One such environment is a nuclear power plant control room, where they have the potential to be used to simulate more precise ergonomic assessments of human work stations. Next generation control rooms will pose numerous challenges for system designers. The manikin modeling approach by itself, however, may be insufficient for dealing with the desired technical advancements and challenges of next generation automated systems. Uncertainty regarding effective staffing levels and the potential for negative human performance consequences in the presence of advanced automated systems (e.g., reduced vigilance, poor situation awareness, mistrust or blind faith in automation, higher information load and increased complexity) call for further research. Baseline assessment of novel control room equipment and configurations needs to be conducted. These design uncertainties can be reduced through complementary analysis that merges ergonomic manikin models with models of higher cognitive functions, such as attention, memory, decision-making, and problem-solving. This paper will discuss recent advancements in merging a theory-driven cognitive modeling framework with a 3D visualization modeling tool to evaluate next generation control room human factors and ergonomic assessments. Though this discussion primarily focuses on control room design, such a merger between 3D visualization and cognitive modeling can be extended to various areas of focus such as training and scenario planning.
Elliptic surface grid generation on minimal and parametrized surfaces
NASA Technical Reports Server (NTRS)
Spekreijse, S. P.; Nijhuis, G. H.; Boerstoel, J. W.
1995-01-01
An elliptic grid generation method is presented which generates excellent boundary conforming grids in domains in 2D physical space. The method is based on the composition of an algebraic and an elliptic transformation. The composite mapping obeys the familiar Poisson grid generation system with control functions specified by the algebraic transformation. New expressions are given for the control functions. Grid orthogonality at the boundary is achieved by modification of the algebraic transformation. It is shown that grid generation on a minimal surface in 3D physical space is in fact equivalent to grid generation in a domain in 2D physical space. A second elliptic grid generation method is presented which generates excellent boundary conforming grids on smooth surfaces. It is assumed that the surfaces are parametrized and that the grid only depends on the shape of the surface and is independent of the parametrization. Concerning surface modeling, it is shown that bicubic Hermite interpolation is an excellent method to generate a smooth surface which passes through a given discrete set of control points. In contrast to bicubic spline interpolation, there is extra freedom to model the tangent and twist vectors such that spurious oscillations are prevented.
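In the simplest case, with zero control functions, the Poisson grid generation system reduces to Laplace's equation for the grid coordinates, which can be solved by a point-iterative sweep that averages each interior node's four neighbours while boundary nodes stay fixed. The toy below illustrates only that degenerate case, not the paper's composite algebraic-elliptic method with orthogonality control:

```python
import numpy as np

def laplace_grid(x, y, iters=500):
    """Elliptic grid smoothing in its simplest form: solve Laplace's
    equation for the physical coordinates (x, y) of a structured grid
    by repeatedly replacing each interior node with the average of its
    four neighbours. Boundary nodes are held fixed, so the result is a
    boundary-conforming grid."""
    x, y = x.copy(), y.copy()
    for _ in range(iters):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y

# A unit square with a deliberately collapsed interior relaxes back
# to a smooth (here uniform) boundary-conforming grid.
X, Y = np.meshgrid(np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5),
                   indexing='ij')
x0, y0 = X.copy(), Y.copy()
x0[1:-1, 1:-1] = 0.5
y0[1:-1, 1:-1] = 0.5
xs, ys = laplace_grid(x0, y0)
```

For a square with linear boundary distributions the harmonic solution is itself linear, so the smoothed grid converges to the uniform one; nonzero control functions (as in the paper) would instead cluster grid lines where specified.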
3DNOW: Image-Based 3D Reconstruction and Modeling via Web
NASA Astrophysics Data System (ADS)
Tefera, Y.; Poiesi, F.; Morabito, D.; Remondino, F.; Nocerino, E.; Chippendale, P.
2018-05-01
This paper presents a web-based 3D imaging pipeline, namely 3Dnow, that can be used by anyone without the need to install software other than a browser. By uploading a set of images through the web interface, 3Dnow can generate sparse and dense point clouds as well as mesh models. 3D reconstructed models can be downloaded in standard formats or previewed directly in the web browser through an embedded visualisation interface. In addition to reconstructing objects, 3Dnow offers the possibility to evaluate and georeference point clouds. Reconstruction statistics, such as minimum, maximum and average intersection angles, point redundancy and density, can also be accessed. The paper describes all features available in the web service and provides an analysis of the computational performance using servers with different GPU configurations.
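Per-point quality statistics like the intersection angles mentioned above follow directly from the camera centres and the triangulated point: each pair of viewing rays subtends an angle, and small angles indicate poorly conditioned triangulation. The helper below is a hypothetical sketch of that computation, not 3Dnow's actual code:

```python
import numpy as np

def intersection_angles(point, cam_centers):
    """Min, max and mean angle (degrees) between all pairs of viewing
    rays from the camera centres to a triangulated 3D point. This is
    the kind of per-point statistic (intersection angle, redundancy)
    a photogrammetry service can report for a reconstruction."""
    rays = np.asarray(cam_centers, float) - np.asarray(point, float)
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    angles = []
    for i in range(len(rays)):
        for j in range(i + 1, len(rays)):
            c = np.clip(np.dot(rays[i], rays[j]), -1.0, 1.0)
            angles.append(float(np.degrees(np.arccos(c))))
    return min(angles), max(angles), sum(angles) / len(angles)
```

For a point seen by only two nearly collinear cameras the minimum angle approaches zero, flagging depth estimates that are unreliable.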
NASA Astrophysics Data System (ADS)
Paramonov, Guennaddi K.; Saalfrank, Peter
2018-05-01
The non-Born-Oppenheimer quantum dynamics of ppμ and pdμ molecular ions excited by ultrashort, superintense VUV laser pulses polarized along the molecular axis (z) is studied by the numerical solution of the time-dependent Schrödinger equation within a three-dimensional (3D) model, including the internuclear distance R and the muon coordinates z and ρ, a transversal degree of freedom. It is shown that in both ppμ and pdμ, muons approximately follow the applied laser field out of phase. After the end of the laser pulse, expectation values
Engineering cancer microenvironments for in vitro 3-D tumor models
Asghar, Waseem; El Assal, Rami; Shafiee, Hadi; Pitteri, Sharon; Paulmurugan, Ramasamy; Demirci, Utkan
2017-01-01
The natural microenvironment of tumors is composed of extracellular matrix (ECM), blood vasculature, and supporting stromal cells. The physical characteristics of the ECM as well as the cellular components play a vital role in controlling cancer cell proliferation, apoptosis, metabolism, and differentiation. To mimic the tumor microenvironment outside the human body for drug testing, two-dimensional (2-D) and murine tumor models are routinely used. Although these conventional approaches are employed in preclinical studies, they still present challenges. For example, murine tumor models are expensive and difficult to adopt for routine drug screening. On the other hand, 2-D in vitro models are simple to perform, but they do not recapitulate the natural tumor microenvironment, because they do not capture important three-dimensional (3-D) cell–cell and cell–matrix signaling pathways, or multi-cellular heterogeneous components of the tumor microenvironment such as stromal and immune cells. Three-dimensional (3-D) in vitro tumor models aim to closely mimic cancer microenvironments and have emerged as an alternative to routinely used methods for drug screening. Herein, we review recent advances in 3-D tumor model generation and highlight directions for future applications in drug testing. PMID:28458612
Biological and medical applications of a brain-on-a-chip
2016-01-01
The desire to develop and evaluate drugs as potential countermeasures for biological and chemical threats requires test systems that can also substitute for the clinical trials normally crucial for drug development. Current animal models have limited predictivity for drug efficacy in humans, as the large majority of drugs fail in clinical trials. We have limited understanding of the function of the central nervous system and the complexity of the brain, especially during development and neuronal plasticity. Simple in vitro systems do not represent the physiology and function of the brain. Moreover, the difficulty of studying interactions between human genetics and environmental factors leads to a lack of knowledge about the events that induce neurological diseases. Microphysiological systems (MPS) promise to generate more complex in vitro human models that better simulate the organ's biology and function. MPS combine different cell types in a specific three-dimensional (3D) configuration to simulate organs with a concrete function. The final aim of these MPS is to combine different "organoids" to generate a human-on-a-chip, an approach that would allow studies of complex physiological organ interactions. The recent discovery of induced pluripotent stem cells (iPSCs) opens a range of possibilities for cellular studies of individuals with different genetic backgrounds (e.g., human disease models). Application of iPSCs from different donors in MPS gives the opportunity to better understand mechanisms of disease and can be a novel tool in drug development, toxicology, and medicine. In order to generate a brain-on-a-chip, we have established a 3D model from human iPSCs based on our experience with a 3D rat primary aggregating brain model. After four weeks of differentiation, human 3D aggregates stain positive for different neuronal markers and show higher gene expression of various neuronal differentiation markers compared to 2D cultures.
Here we present the applications and challenges of this emerging technology. PMID:24912505
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Henry; Wang, Cong; Winterfeld, Philip
An efficient modeling approach is described for incorporating arbitrary 3D discrete fractures, such as hydraulic fractures or faults, into modeling of fracture-dominated fluid flow and heat transfer in fractured geothermal reservoirs. This technique allows 3D discrete fractures to be discretized independently of the surrounding rock volume and inserted explicitly into a primary fracture/matrix grid that is generated without including the 3D discrete fractures a priori. An effective computational algorithm is developed to discretize these 3D discrete fractures and construct local connections between the 3D fractures and the fracture/matrix grid blocks representing the surrounding rock volume. The constructed gridding information on the 3D fractures is then added to the primary grid. This embedded fracture modeling approach can be directly implemented into a developed geothermal reservoir simulator via the integral finite difference (IFD) method or with TOUGH2 technology. This embedded fracture modeling approach is very promising and computationally efficient for handling realistic 3D discrete fractures with complicated geometries, connections, and spatial distributions. Compared with other fracture modeling approaches, it avoids cumbersome 3D unstructured, local refining procedures, and increases computational efficiency by simplifying Jacobian matrix size and sparsity, while keeping sufficient accuracy. Several numerical simulations are presented to demonstrate the utility and robustness of the proposed technique. Our numerical experiments show that this approach captures all the key patterns of fluid flow and heat transfer dominated by fractures in these cases. Thus, this approach is readily applicable to the simulation of fractured geothermal reservoirs with both artificial and natural fractures.
Jo, Junghyun; Xiao, Yixin; Sun, Alfred Xuyang; Cukuroglu, Engin; Tran, Hoang-Dai; Göke, Jonathan; Tan, Zi Ying; Saw, Tzuen Yih; Tan, Cheng-Peow; Lokman, Hidayat; Lee, Younghwan; Kim, Donghoon; Ko, Han Seok; Kim, Seong-Oh; Park, Jae Hyeon; Cho, Nam-Joon; Hyde, Thomas M; Kleinman, Joel E; Shin, Joo Heon; Weinberger, Daniel R; Tan, Eng King; Je, Hyunsoo Shawn; Ng, Huck-Hui
2016-08-04
Recent advances in 3D culture systems have led to the generation of brain organoids that resemble different human brain regions; however, a 3D organoid model of the midbrain containing functional midbrain dopaminergic (mDA) neurons has not been reported. We developed a method to differentiate human pluripotent stem cells into a large multicellular organoid-like structure that contains distinct layers of neuronal cells expressing characteristic markers of human midbrain. Importantly, we detected electrically active and functionally mature mDA neurons and dopamine production in our 3D midbrain-like organoids (MLOs). In contrast to human mDA neurons generated using 2D methods or MLOs generated from mouse embryonic stem cells, our human MLOs produced neuromelanin-like granules that were structurally similar to those isolated from human substantia nigra tissues. Thus our MLOs bearing features of the human midbrain may provide a tractable in vitro system to study the human midbrain and its related diseases. Copyright © 2016 Elsevier Inc. All rights reserved.
4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties
NASA Astrophysics Data System (ADS)
Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.
2018-05-01
4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment ('2C3K') model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvement in the bias and noise measures for five of the eight combinations of the four kinetic parameters for which parametric maps were created, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral-model-based 4D reconstruction produced the best results for 2/8.
2C3K model-based 4D reconstruction generated the most biased parametric maps. Inclusion of a temporal roughness penalty function improved the performance of 4D reconstruction based on the cubic B-spline, spectral and spline-residue models.
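The spline-residue description of a TAC, a weighted sum of convolutions of the arterial input function (AIF) with cubic B-spline basis functions, can be sketched in a few lines of numpy. The uniform knot spacing and the weights below are illustrative placeholders; in the paper the weights are estimated during the 4D reconstruction itself.

```python
import numpy as np

def cubic_bspline(u):
    """Cardinal cubic B-spline kernel, nonzero on |u| < 2."""
    u = np.abs(np.asarray(u, float))
    out = np.zeros_like(u)
    m = u <= 1
    out[m] = 2.0 / 3.0 - u[m] ** 2 + u[m] ** 3 / 2.0
    m = (u > 1) & (u < 2)
    out[m] = (2.0 - u[m]) ** 3 / 6.0
    return out

def spline_residue_tac(t, aif, knots, weights):
    """Time-activity curve modeled as a weighted sum of convolutions of
    the arterial input function with cubic B-spline basis functions.
    Assumes a uniform time grid t and uniformly spaced knots."""
    dt = t[1] - t[0]
    h = knots[1] - knots[0]                    # knot spacing
    tac = np.zeros_like(t)
    for k, w in zip(knots, weights):
        basis = cubic_bspline((t - k) / h)     # B-spline centred at knot k
        # discrete approximation of the continuous convolution (AIF * B_k)
        tac += w * np.convolve(aif, basis)[:len(t)] * dt
    return tac
```

Using a discrete delta as a stand-in AIF makes the construction easy to check: the convolution then just returns each basis function, so the TAC reduces to the weighted B-spline sum itself.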
Defect modelling in an interactive 3-D CAD environment
NASA Astrophysics Data System (ADS)
Reilly, D.; Potts, A.; McNab, A.; Toft, M.; Chapman, R. K.
2000-05-01
This paper describes enhancement of the NDT Workbench, as presented at QNDE '98, to include theoretical models for the ultrasonic inspection of smooth planar defects, developed by British Energy and BNFL-Magnox Generation. The Workbench is a PC-based software package for the reconstruction, visualization and analysis of 3-D ultrasonic NDT data in an interactive CAD environment. This extension of the Workbench now provides the user with a well established modelling approach, coupled with a graphical user interface for: a) configuring the model for flaw size, shape, orientation and location; b) flexible specification of probe parameters; c) selection of scanning surface and scan pattern on the CAD component model; d) presentation of the output as a simulated ultrasound image within the component, or as graphical or tabular displays. The defect modelling facilities of the Workbench can be used for inspection procedure assessment and confirmation of data interpretation, by comparison of overlay images generated from real and simulated data. The modelling technique currently implemented is based on the Geometrical Theory of Diffraction, for simulation of strip-like, circular or elliptical crack responses in the time harmonic or time dependent cases. Eventually, the Workbench will also allow modelling using elastodynamic Kirchhoff theory.
Guo, Hong-Chang; Wang, Yang; Dai, Jiang; Ren, Chang-Wei; Li, Jin-Hua; Lai, Yong-Qiang
2018-02-01
The aim of this study was to evaluate the effect of 3-dimensional (3D) printing in the treatment of hypertrophic obstructive cardiomyopathy (HOCM) and its role in doctor-patient communication. 3D-printed models were constructed preoperatively and postoperatively in seven HOCM patients who received surgical treatment. Based on multi-slice computed tomography (CT) images, regions of disorder were segmented using the Mimics 19.0 software (Materialise, Leuven, Belgium). After generating an STL file (StereoLithography file) from the patients' data, the 3D printer (Objet350 Connex3, Stratasys Ltd., USA) created a 3D model. The pre- and post-operative 3D-printed models were used to make the surgical plan preoperatively and evaluate the outcome postoperatively. Meanwhile, a questionnaire was designed for patients and their relatives to assess the effectiveness of the 3D-printed prototypes in preoperative conversations. The heart anatomies were accurately printed with 3D technology. The 3D-printed prototypes were useful for preoperative evaluation, surgical planning, and practice. Preoperative and postoperative echocardiographic evaluation showed that the left ventricular outflow tract (LVOT) obstruction was adequately relieved (82.71±31.63 to 14.91±6.89 mmHg, P<0.001), the septal thickness was reduced from 21.57±4.65 to 17.42±5.88 mm (P<0.001), and the systolic anterior motion (SAM) disappeared completely after the operation. Patients highly appreciated the role of the 3D model in preoperative conversations, and the communication score was 9.11±0.38 points. A 3D-printed model is a useful tool for individualized planning of myectomies and represents a useful aid for physician-patient communication.
A two-level generative model for cloth representation and shape from shading.
Han, Feng; Zhu, Song-Chun
2007-07-01
In this paper, we present a two-level generative model for representing the images and surface depth maps of drapery and clothes. The upper level consists of a number of folds which generate the high contrast (ridge) areas with a dictionary of shading primitives (for 2D images) and fold primitives (for 3D depth maps). These primitives are represented in parametric forms and are learned in a supervised learning phase using 3D surfaces of clothes acquired through photometric stereo. The lower level consists of the remaining flat areas which fill in between the folds with a smoothness prior (Markov random field). We show that the classical ill-posed problem of shape from shading (SFS) can be much improved by this two-level model, owing to its reduced dimensionality and its incorporation of middle-level visual knowledge, i.e., the dictionary of primitives. Given an input image, we first infer the folds and compute a sketch graph using a sketch pursuit algorithm as in the primal sketch [10], [11]. The 3D folds are estimated by parameter fitting using the fold dictionary, and they form the "skeleton" of the drapery/cloth surfaces. Then, the lower level is computed by a conventional SFS method using the fold areas as boundary conditions. The two levels interact at the final stage by optimizing a joint Bayesian posterior probability on the depth map. We show a number of experiments which demonstrate more robust results in comparison with state-of-the-art work. In a broader scope, our representation can be viewed as a two-level inhomogeneous MRF model which is applicable to general shape-from-X problems. Our study is an attempt to revisit Marr's idea [23] of computing the 2½-D sketch from the primal sketch. In a companion paper [2], we study shape from stereo based on a similar two-level generative sketch representation.
3D modeling of electric fields in the LUX detector
NASA Astrophysics Data System (ADS)
Akerib, D. S.; Alsum, S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Brás, P.; Byram, D.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; Dobi, A.; Druszkiewicz, E.; Edwards, B. N.; Fallon, S. R.; Fan, A.; Fiorucci, S.; Gaitskell, R. J.; Genovesi, J.; Ghag, C.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Jacobsen, R. G.; Ji, W.; Kamdin, K.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Manalaysay, A.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O'Sullivan, K.; Oliver-Mallory, K. C.; Palladino, K. J.; Pease, E. K.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Solmaz, M.; Solovov, V. N.; Sorensen, P.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W. C.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Velan, V.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Xu, J.; Yazdani, K.; Young, S. K.; Zhang, C.
2017-11-01
This work details the development of a three-dimensional (3D) electric field model for the LUX detector. The detector took data to search for weakly interacting massive particles (WIMPs) during two periods. After the first period completed, a time-varying non-uniform negative charge developed in the polytetrafluoroethylene (PTFE) panels that define the radial boundary of the detector's active volume. This caused electric field variations in the detector in time, depth and azimuth, generating an electrostatic radially-inward force on electrons on their way upward to the liquid surface. To map this behavior, 3D electric field maps of the detector's active volume were generated on a monthly basis. This was done by fitting a model built in COMSOL Multiphysics to the uniformly distributed calibration data that were collected on a regular basis. The modeled average PTFE charge density increased over the course of the exposure from -3.6 to -5.5 μC/m². From our studies, we deduce that the electric field magnitude varied locally while the mean value of the field of ~200 V/cm remained constant throughout the exposure. As a result of this work the varying electric fields and their impact on event reconstruction and discrimination were successfully modeled.
Jerath, Ravinder; Crawford, Molly W.; Barnes, Vernon A.
2015-01-01
The Global Workspace Theory and Information Integration Theory are two of the most currently accepted consciousness models; however, these models do not address many aspects of conscious experience. We compare these models to our previously proposed consciousness model in which the thalamus fills-in processed sensory information from corticothalamic feedback loops within a proposed 3D default space, resulting in the recreation of the internal and external worlds within the mind. This 3D default space is composed of all cells of the body, which communicate via gap junctions and electrical potentials to create this unified space. We use 3D illustrations to explain how both visual and non-visual sensory information may be filled-in within this dynamic space, creating a unified seamless conscious experience. This neural sensory memory space is likely generated by baseline neural oscillatory activity from the default mode network, other salient networks, brainstem, and reticular activating system. PMID:26379573
Grossberg, Stephen
2016-01-01
The FACADE model, and its laminar cortical realization and extension in the 3D LAMINART model, have explained, simulated, and predicted many perceptual and neurobiological data about how the visual cortex carries out 3D vision and figure-ground perception, and how these cortical mechanisms enable 2D pictures to generate 3D percepts of occluding and occluded objects. In particular, these models have proposed how border ownership occurs, but have not yet explicitly explained the correlation between multiple properties of border ownership neurons in cortical area V2 that were reported in a remarkable series of neurophysiological experiments by von der Heydt and his colleagues; namely, border ownership, contrast preference, binocular stereoscopic information, selectivity for side-of-figure, Gestalt rules, and strength of attentional modulation, as well as the time course during which such properties arise. This article shows how, by combining 3D LAMINART properties that were discovered in two parallel streams of research, a unified explanation of these properties emerges. This explanation proposes, moreover, how these properties contribute to the generation of consciously seen 3D surfaces. The first research stream models how processes like 3D boundary grouping and surface filling-in interact in multiple stages within and between the V1 interblob—V2 interstripe—V4 cortical stream and the V1 blob—V2 thin stripe—V4 cortical stream, respectively. Of particular importance for understanding figure-ground separation is how these cortical interactions convert computationally complementary boundary and surface mechanisms into a consistent conscious percept, including the critical use of surface contour feedback signals from surface representations in V2 thin stripes to boundary representations in V2 interstripes. Remarkably, key figure-ground properties emerge from these feedback interactions. 
The second research stream shows how cells that compute absolute disparity in cortical area V1 are transformed into cells that compute relative disparity in cortical area V2. Relative disparity is a more invariant measure of an object's depth and 3D shape, and is sensitive to figure-ground properties. PMID:26858665
Grossberg, Stephen
2015-01-01
The FACADE model, and its laminar cortical realization and extension in the 3D LAMINART model, have explained, simulated, and predicted many perceptual and neurobiological data about how the visual cortex carries out 3D vision and figure-ground perception, and how these cortical mechanisms enable 2D pictures to generate 3D percepts of occluding and occluded objects. In particular, these models have proposed how border ownership occurs, but have not yet explicitly explained the correlation between multiple properties of border ownership neurons in cortical area V2 that were reported in a remarkable series of neurophysiological experiments by von der Heydt and his colleagues; namely, border ownership, contrast preference, binocular stereoscopic information, selectivity for side-of-figure, Gestalt rules, and strength of attentional modulation, as well as the time course during which such properties arise. This article shows how, by combining 3D LAMINART properties that were discovered in two parallel streams of research, a unified explanation of these properties emerges. This explanation proposes, moreover, how these properties contribute to the generation of consciously seen 3D surfaces. The first research stream models how processes like 3D boundary grouping and surface filling-in interact in multiple stages within and between the V1 interblob-V2 interstripe-V4 cortical stream and the V1 blob-V2 thin stripe-V4 cortical stream, respectively. Of particular importance for understanding figure-ground separation is how these cortical interactions convert computationally complementary boundary and surface mechanisms into a consistent conscious percept, including the critical use of surface contour feedback signals from surface representations in V2 thin stripes to boundary representations in V2 interstripes. Remarkably, key figure-ground properties emerge from these feedback interactions. 
The second research stream shows how cells that compute absolute disparity in cortical area V1 are transformed into cells that compute relative disparity in cortical area V2. Relative disparity is a more invariant measure of an object's depth and 3D shape, and is sensitive to figure-ground properties.
Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling
NASA Astrophysics Data System (ADS)
Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.
2016-04-01
Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. 
Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured with the device's digital camera, and an interface is available for annotating (interpreting) the image using lines and polygons. Image-to-geometry registration is then performed using a developed algorithm, initialised using the coarse pose from the on-board orientation and positioning sensors. The annotations made on the captured images are then available in the 3D model coordinate system for overlay and export. This workflow allows geologists to make interpretations and conceptual models in the field, which can then be linked to and refined in office workflows for later MPS property modelling.
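The geometric core of this image-to-geometry workflow can be sketched with a pinhole projection, initialised by a coarse camera pose; this is a minimal illustration under assumed pose and focal-length values, not the developed registration algorithm itself:

```python
import numpy as np

# Hedged sketch: with a (coarse) camera pose R, t and focal length f,
# a 3D model point projects into the image by the pinhole model; the
# registration refines R, t so that projected model features align with
# the annotated image. All numbers here are illustrative.
def project(points, R, t, f):
    cam = (R @ points.T).T + t          # world -> camera coordinates
    return f * cam[:, :2] / cam[:, 2:]  # perspective division

pts = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.0]])
R, t, f = np.eye(3), np.zeros(3), 1000.0  # identity pose, 1000 px focal
print(project(pts, R, t, f))  # -> [[0. 0.] [100. 0.]]
```

Once the pose is refined, the same model maps 2D annotations back onto the 3D geometry by inverting this projection along each pixel's ray.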
Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods
NASA Astrophysics Data System (ADS)
Liu, Qinya; Tromp, Jeroen
2008-07-01
We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed as δχ = ∫V Km δlnm d³x + ∫Σ Kd δlnd d²x + ∫ΣFS K∇d ⋅ ∇Σδlnd d²x, where δlnm = δm/m denotes relative model perturbations in the volume V, δlnd denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇Σδlnd denotes surface gradients in relative topographic variations on fluid-solid boundaries ΣFS. The 3-D Fréchet kernel Km determines the sensitivity to model perturbations δlnm, and the 2-D kernels Kd and K∇d determine the sensitivity to topographic variations δlnd. We also demonstrate how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.
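The simultaneous forward-adjoint computation starts from an adjoint source; for a cross-correlation traveltime misfit this is, in one common formulation, the time-reversed and normalized velocity seismogram. A minimal numpy sketch with toy data (illustrative only, not the spectral-element implementation):

```python
import numpy as np

# Hedged sketch: the traveltime adjoint source is taken here as the
# time-reversed velocity seismogram divided by the normalization
# N = int (ds/dt)^2 dt (rectangle-rule integration on toy data).
def traveltime_adjoint_source(s, dt):
    v = np.gradient(s, dt)        # velocity seismogram ds/dt
    N = (v * v).sum() * dt        # normalization integral
    return v[::-1] / N            # f(t) = v(T - t) / N

dt = 0.01
t = np.arange(0.0, 10.0, dt)
s = np.exp(-((t - 5.0) / 0.5) ** 2)   # toy Gaussian "seismogram"
f = traveltime_adjoint_source(s, dt)
v = np.gradient(s, dt)
print(round((f[::-1] * v).sum() * dt, 6))  # -> 1.0 by construction
```

Injecting this source at the receiver and propagating it backward through the model, while the regular wavefield is reconstructed, yields the sensitivity kernel by correlation of the two fields.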
Hepatic differentiation of human iPSCs in different 3D models: A comparative study.
Meier, Florian; Freyer, Nora; Brzeszczynska, Joanna; Knöspel, Fanny; Armstrong, Lyle; Lako, Majlinda; Greuel, Selina; Damm, Georg; Ludwig-Schwellinger, Eva; Deschl, Ulrich; Ross, James A; Beilmann, Mario; Zeilinger, Katrin
2017-12-01
Human induced pluripotent stem cells (hiPSCs) are a promising source from which to derive distinct somatic cell types for in vitro or clinical use. Existing protocols for hepatic differentiation of hiPSCs are primarily based on 2D cultivation of the cells. In the present study, the authors investigated the generation of hiPSC-derived hepatocyte-like cells using two different 3D culture systems: a 3D scaffold-free microspheroid culture system and a 3D hollow-fiber perfusion bioreactor. The differentiation outcome in these 3D systems was compared with that in conventional 2D cultures, using primary human hepatocytes as a control. The evaluation was made based on specific mRNA expression, protein secretion, antigen expression and metabolic activity. The expression of α-fetoprotein was lower, while cytochrome P450 1A2 or 3A4 activities were higher in the 3D culture systems as compared with the 2D differentiation system. Cells differentiated in the 3D bioreactor showed an increased expression of albumin and hepatocyte nuclear factor 4α, as well as secretion of α-1-antitrypsin as compared with the 2D differentiation system, suggesting a higher degree of maturation. In contrast, the 3D scaffold-free microspheroid culture provides an easy and robust method to generate spheroids of a defined size for screening applications, while the bioreactor culture model provides an instrument for complex investigations under physiological-like conditions. In conclusion, the present study introduces two 3D culture systems for stem cell-derived hepatic differentiation, each demonstrating advantages for individual applications as well as benefits in comparison with 2D cultures.
Nietzer, Sarah; Baur, Florentin; Sieber, Stefan; Hansmann, Jan; Schwarz, Thomas; Stoffer, Carolin; Häfner, Heide; Gasser, Martin; Waaga-Gasser, Ana Maria; Walles, Heike; Dandekar, Gudrun
2016-07-01
Tumor models based on cancer cell lines cultured two-dimensionally (2D) on plastic lack histological complexity and functionality compared to the native microenvironment. Xenogenic mouse tumor models display higher complexity but often do not predict human drug responses accurately due to species-specific differences. We present here a three-dimensional (3D) in vitro colon cancer model based on a biological scaffold derived from decellularized porcine jejunum (small intestine submucosa+mucosa, SISmuc). Two different cell lines were used in monoculture or in coculture with primary fibroblasts. After 14 days of culture, we demonstrated a close contact of human Caco2 colon cancer cells with the preserved basement membrane on an ultrastructural level, as well as morphological characteristics of a well-differentiated epithelium. To generate a tissue-engineered tumor model, we chose human SW480 colon cancer cells, a reportedly malignant cell line. Malignant characteristics were confirmed in 2D cell culture: SW480 cells showed higher vimentin and lower E-cadherin expression than Caco2 cells. In contrast to Caco2, SW480 cells displayed cancerous characteristics such as delocalized E-cadherin and nuclear location of β-catenin in a subset of cells. One central drawback of 2D cultures, especially with regard to drug testing, is their artificially high proliferation. In our 3D tissue-engineered tumor model, both cell lines showed decreased numbers of proliferating cells, thus correlating more precisely with observations of primary colon cancer in all stages (UICC I-IV). Moreover, vimentin decreased in SW480 colon cancer cells, indicating a mesenchymal-to-epithelial transition process attributed to metastasis formation. Only SW480 cells cocultured with fibroblasts induced the formation of tumor-like aggregates surrounded by fibroblasts, whereas in Caco2 cocultures a separate Caco2 cell layer was formed, separated from the fibroblast compartment beneath. 
To foster tissue generation, a bioreactor was constructed for dynamic culture approaches. This induced a close tissue-like association of cultured tumor cells with fibroblasts reflecting tumor biopsies. Therapy with 5-fluorouracil (5-FU) was effective only in 3D coculture. In conclusion, our 3D tumor model reflects human tissue-related tumor characteristics, including lower tumor cell proliferation. It is now available for drug testing in metastatic context-especially for substances targeting tumor-stroma interactions.
NASA Astrophysics Data System (ADS)
Dong, H.; Kun, Z.; Zhang, L.
2015-12-01
This magnetotelluric (MT) system comprises static shift correction and 3D inversion. The correction method is based on studies of 3D forward modeling and field tests. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed during inversion. The method is an automatic, zero-cost computer processing technique that avoids additional field work and indoor processing, with good results shown in Figure 1a-e. Figure 1a shows a normal model (I) without any local heterogeneity. Figure 1b shows a static-shifted model (II) with two local heterogeneous bodies (10 and 1000 ohm.m). Figure 1c is the inversion result (A) for the synthetic data generated from model I. Figure 1d is the inversion result (B) for the static-shifted data generated from model II. Figure 1e is the inversion result (C) for the static-shifted data from model II, but with static shift correction. The results show that the correction method is useful. The 3D inversion algorithm improves on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001): we added a frequency-based parallel structure, improved computational efficiency, reduced memory requirements, added topographic and marine factors, and added geological and geophysical constraints. As a result, the 3D inversion can even run on a pad (tablet) with high efficiency and accuracy. An application example of theoretical assessment in oil and gas exploration is shown in Figure 1f-i. The synthetic geophysical model consists of the following layers (from top to bottom): shale, limestone, gas, oil, groundwater and limestone, overlying a basement rock. Figures 1f-g show the 3D model and its central profile. Figure 1h shows the central section of the 3D inversion; the result reproduces the synthetic model closely. 
Figure 1i shows that the seismic waveform reflects the interfaces of every layer overall, but the relative positions of the interfaces in two-way travel time vary, and the interface between limestone and oil at the sides of the section is not imaged. Thus 3D MT can compensate for deficiencies of the seismic results, such as false sync-phase axes and multiple waves.
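The correction idea described above, a frequency-independent multiplicative offset in apparent resistivity detected against the high-frequency range, can be sketched as follows; the reference curve and the width of the high-frequency band are assumptions for illustration:

```python
import numpy as np

# Hedged sketch of a static shift correction: static shift multiplies
# apparent resistivity by a frequency-independent factor (a constant
# vertical offset in log10 space), while the impedance phase is
# unaffected. Here the factor is estimated over the highest-frequency
# samples against a reference curve (e.g. an unshifted nearby site).
def correct_static_shift(rho_obs, rho_ref, n_hf=5):
    offset = np.median(np.log10(rho_obs[:n_hf]) - np.log10(rho_ref[:n_hf]))
    return rho_obs / 10.0 ** offset

freqs = np.logspace(3, -2, 20)            # Hz, ordered high to low
rho_true = 100.0 * np.ones_like(freqs)    # toy half-space, 100 ohm.m
rho_shifted = 3.5 * rho_true              # distorted by local heterogeneity
rho_corr = correct_static_shift(rho_shifted, rho_true)
print(round(rho_corr[0], 6))              # -> 100.0
```

In the system described by the abstract the correction is folded into the inversion itself rather than applied as a separate preprocessing step; this sketch only shows the detection-and-shift principle.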
Use of laser 3D surface digitizer in data collection and 3D modeling of anatomical structures
NASA Astrophysics Data System (ADS)
Tse, Kelly; Van Der Wall, Hans; Vu, Dzung H.
2006-02-01
A laser digitizer (Konica-Minolta Vivid 910) is used to obtain 3-dimensional surface scans of anatomical structures with a maximum resolution of 0.1 mm. Placing the specimen on a turntable allows multiple scans all around, because the scanner only captures data from the portion facing its lens. A computer model is generated using 3D modeling software such as Geomagic. The 3D model can be manipulated on screen for repeated analysis of anatomical features, a useful capability when the specimens are rare or inaccessible (museum collections, fossils, imprints in rock formations). Because accurate measurements can be performed on the computer model, instead of only on the actual specimens at, for example, an archaeological excavation site, a variety of quantitative data can later be obtained on the computer model in the laboratory as new ideas come to mind. Our group had used a mechanical contact digitizer (Microscribe) for this purpose, but with the surface digitizer we have been obtaining data sets more accurately and more quickly.
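The kind of on-model measurement described above reduces to distances between landmark coordinates in the scan's coordinate system. A toy sketch, with hypothetical landmark names and coordinate values:

```python
import numpy as np

# Illustrative sketch: once a specimen is digitized, linear measurements
# are taken between landmark coordinates on the model rather than with
# calipers on the specimen. These landmarks and values are made up.
landmarks = {
    "glabella":       np.array([0.0,  92.1, 35.0]),
    "opisthocranion": np.array([0.0, -88.4, 30.2]),
}

def distance(a, b):
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

# a chord length in the same units as the scan (mm)
print(round(distance("glabella", "opisthocranion"), 1))  # -> 180.6
```

Because the model persists, new measurements can be defined and taken long after the specimen itself is out of reach.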
Artificial Intelligence - Research and Applications
1975-05-01
[Garbled OCR of the report's author list and table of contents; recoverable section headings include: The Procedural Net; Task-Specific Knowledge; The Planning Algorithm; The Execution Algorithm; The Semantics of Assembly; Querying State Description Models (Truth Values; Generators Instead of Backtracking; The Query Functions).]
Sonification for geoscience: Listening to faults from the inside
NASA Astrophysics Data System (ADS)
Barrett, Natasha; Mair, Karen
2014-05-01
Here we investigate the use of sonification for geoscience by sonifying the data generated in computer models of earthquake processes. Using mainly parameter mapping sonification, we explore data from our recent 3D DEM (discrete element method) models where granular debris is sheared between rough walls to simulate an evolving fault (e.g. Mair and Abe, 2011). To best appreciate the inherently 3D nature of the crushing and sliding events (continuously tracked in our models) that occur as faults slip, we use Ambisonics (a sound field recreation technology). This allows the position of individual events to be preserved, generating a virtual 3D soundscape so we can explore faults from the inside. The addition of 3D audio to the sonification tool palette further allows us to more accurately connect to spatial data in a novel and engaging manner. During sonification, events such as grain scale fracturing, grain motions and interactions are mapped to specific sounds whose pitch, timbre, and volume reflect properties such as the depth, character, and size of the individual events. Our interactive and real-time approaches allow the listener to actively explore the data in time and space, listening to evolving processes by navigating through the spatial data via a 3D mouse controller. The soundscape can be heard either through an array of speakers or using a pair of headphones. Emergent phenomena in the models generate clear sound patterns that are easily spotted. Also, because our ears are excellent signal-to-noise filters, events are recognizable above the background noise. Although these features may be detectable visually, using a different sense (and part of the brain) gives a fresh perspective and facilitates a rapid appreciation of 'signals' through audio awareness, rather than specific scientific training. 
For this reason we anticipate significant potential for the future use of sonification in the presentation, interpretation and communication of geoscience datasets to both experts and the general public.
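A minimal parameter-mapping sketch, with an assumed mapping (not the authors') of event size and depth to amplitude and pitch:

```python
import numpy as np

# Hypothetical parameter-mapping sonification: each simulated fault
# event is rendered as a short sine grain whose pitch falls with depth
# and whose amplitude grows with event size. Mapping constants are
# illustrative assumptions.
SR = 44100  # samples per second

def sonify_event(size, depth_km, dur=0.1):
    freq = 880.0 / (1.0 + depth_km / 10.0)   # deeper -> lower pitch
    amp = min(1.0, size)                      # bigger -> louder (clipped)
    t = np.linspace(0.0, dur, int(SR * dur), endpoint=False)
    return amp * np.sin(2.0 * np.pi * freq * t)

grain = sonify_event(size=0.5, depth_km=10.0)
print(len(grain), round(float(abs(grain).max()), 2))  # -> 4410 0.5
```

In a spatialized (Ambisonic) rendering, each grain would additionally be panned according to the event's 3D position in the model.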
NASA Astrophysics Data System (ADS)
MacFarlane, J. J.; Golovkin, I. E.; Wang, P.; Woodruff, P. R.; Pereyra, N. A.
2007-05-01
SPECT3D is a multi-dimensional collisional-radiative code used to post-process the output from radiation-hydrodynamics (RH) and particle-in-cell (PIC) codes to generate diagnostic signatures (e.g. images, spectra) that can be compared directly with experimental measurements. This ability to post-process simulation code output plays a pivotal role in assessing the reliability of RH and PIC simulation codes and their physics models. SPECT3D has the capability to operate on plasmas in 1D, 2D, and 3D geometries. It computes a variety of diagnostic signatures that can be compared with experimental measurements, including: time-resolved and time-integrated spectra, space-resolved spectra and streaked spectra; filtered and monochromatic images; and X-ray diode signals. Simulated images and spectra can include the effects of backlighters, as well as the effects of instrumental broadening and time-gating. SPECT3D also includes a drilldown capability that shows where frequency-dependent radiation is emitted and absorbed as it propagates through the plasma towards the detector, thereby providing insights on where the radiation seen by a detector originates within the plasma. SPECT3D has the capability to model a variety of complex atomic and radiative processes that affect the radiation seen by imaging and spectral detectors in high energy density physics (HEDP) experiments. LTE (local thermodynamic equilibrium) or non-LTE atomic level populations can be computed for plasmas. Photoabsorption rates can be computed using either escape probability models or, for selected 1D and 2D geometries, multi-angle radiative transfer models. The effects of non-thermal (i.e. non-Maxwellian) electron distributions can also be included. To study the influence of energetic particles on spectra and images recorded in intense short-pulse laser experiments, the effects of both relativistic electrons and energetic proton beams can be simulated. 
SPECT3D is a user-friendly software package that runs on Windows, Linux, and Mac platforms. A parallel version of SPECT3D is supported for Linux clusters for large-scale calculations. We will discuss the major features of SPECT3D, and present example results from simulations and comparisons with experimental data.
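One ingredient such collisional-radiative post-processing needs is atomic level populations; in the LTE limit these follow a Boltzmann distribution. A toy sketch with made-up two-level data (not SPECT3D's atomic model):

```python
import numpy as np

# Hedged sketch of LTE level populations from the Boltzmann
# distribution. Degeneracies g_i and energies E_i here are invented
# two-level values, not real atomic data.
def lte_populations(g, E_eV, T_eV):
    w = np.asarray(g) * np.exp(-np.asarray(E_eV) / T_eV)
    return w / w.sum()   # normalized fractional populations

pop = lte_populations([2.0, 2.0], [0.0, 10.2], T_eV=10.2)
print(np.round(pop, 3))  # -> [0.731 0.269]
```

Non-LTE populations, by contrast, require solving the full collisional-radiative rate equations, which is where codes of this class do most of their work.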
Computations underlying the visuomotor transformation for smooth pursuit eye movements
Murdison, T. Scott; Leclercq, Guillaume; Lefèvre, Philippe
2014-01-01
Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103–2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit. PMID:25475344
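The eye- and head-dependent gain modulation mentioned above can be illustrated with a single toy hidden unit whose visual tuning is multiplicatively scaled by eye orientation; this is an assumed minimal form for illustration, not the paper's trained network:

```python
import numpy as np

# Toy gain-modulation sketch: a hidden unit's visual response to retinal
# motion direction is multiplicatively scaled by an eye-position gain,
# which is the basic operation reference-frame transformations are
# thought to use. The tuning and gain slope are invented values.
def hidden_response(retinal_dir, preferred_dir, eye_angle, gain_slope=0.02):
    visual = np.cos(np.radians(retinal_dir - preferred_dir))  # direction tuning
    gain = 1.0 + gain_slope * eye_angle                        # eye-position gain
    return visual * gain

# identical retinal stimulus, two eye orientations -> different drive,
# letting downstream units read out a spatially correct command
print(round(float(hidden_response(0.0, 0.0, eye_angle=0.0)), 2))   # -> 1.0
print(round(float(hidden_response(0.0, 0.0, eye_angle=20.0)), 2))  # -> 1.4
```

A population of such units with varied preferred directions and gain fields can, in principle, encode the same retinal signal in several reference frames at once.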
3D culture models of Alzheimer's disease: a road map to a "cure-in-a-dish".
Choi, Se Hoon; Kim, Young Hye; Quinti, Luisa; Tanzi, Rudolph E; Kim, Doo Yeon
2016-12-09
Alzheimer's disease (AD) transgenic mice have been used as a standard AD model for basic mechanistic studies and drug discovery. These mouse models showed hallmark AD pathologies including β-amyloid (Aβ) plaques, gliosis and memory deficits, but failed to fully recapitulate AD pathogenic cascades, including robust phospho-tau (p-tau) accumulation, clear neurofibrillary tangles (NFTs) and neurodegeneration, solely driven by familial AD (FAD) mutation(s). Recent advances in human stem cell and three-dimensional (3D) culture technologies made it possible to generate novel 3D neural cell culture models that recapitulate AD pathologies including robust Aβ deposition and Aβ-driven NFT-like tau pathology. These new 3D human cell culture models of AD hold promise as a novel platform that can be used for mechanistic studies in a human brain-like environment and for high-throughput drug screening (HTS). In this review, we will summarize the current progress in recapitulating AD pathogenic cascades in human neural cell culture models using AD patient-derived induced pluripotent stem cells (iPSCs) or genetically modified human stem cell lines. We will also explain how new 3D culture technologies were applied to accelerate Aβ and p-tau pathologies in human neural cell cultures, as compared with standard two-dimensional (2D) culture conditions. Finally, we will discuss the potential impact of 3D human neural cell culture models on the AD drug-development process. These revolutionary 3D culture models of AD will help to accelerate the discovery of novel AD drugs.
Delta: a new web-based 3D genome visualization and analysis platform.
Tang, Bixia; Li, Feifei; Li, Jing; Zhao, Wenming; Zhang, Zhihua
2018-04-15
Delta is an integrative visualization and analysis platform to facilitate visually annotating and exploring the 3D physical architecture of genomes. Delta takes Hi-C or ChIA-PET contact matrix as input and predicts the topologically associating domains and chromatin loops in the genome. It then generates a physical 3D model which represents the plausible consensus 3D structure of the genome. Delta features a highly interactive visualization tool which enhances the integration of genome topology/physical structure with extensive genome annotation by juxtaposing the 3D model with diverse genomic assay outputs. Finally, by visually comparing the 3D model of the β-globin gene locus and its annotation, we speculated a plausible transitory interaction pattern in the locus. Experimental evidence was found to support this speculation by literature survey. This served as an example of intuitive hypothesis testing with the help of Delta. Delta is freely accessible from http://delta.big.ac.cn, and the source code is available at https://github.com/zhangzhwlab/delta. zhangzhihua@big.ac.cn. Supplementary data are available at Bioinformatics online.
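Topologically associating domains can be called from a contact matrix with an insulation-score-style heuristic; the sketch below is a generic illustration of that idea, not necessarily Delta's actual predictor:

```python
import numpy as np

# Sketch of a common TAD-calling idea: slide a square window along the
# diagonal of the contact matrix and score contacts crossing each bin.
# Domain boundaries appear as local minima of the score. Window size
# and the toy matrix are illustrative assumptions.
def insulation(mat, w=2):
    n = mat.shape[0]
    score = np.full(n, np.nan)
    for i in range(w, n - w):
        score[i] = mat[i - w:i, i:i + w].mean()  # upstream x downstream block
    return score

# toy matrix: two 5-bin domains, dense intra- and sparse inter-domain contacts
m = np.full((10, 10), 1.0)
m[:5, :5] = 10.0
m[5:, 5:] = 10.0
s = insulation(m)
print(int(np.nanargmin(s)))  # -> 5 (the domain boundary bin)
```

Chromatin loops are typically called separately, as local enrichments far off the diagonal, and both feed into the consensus 3D structure that the browser juxtaposes with annotation tracks.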
Monte Carlo generators for studies of the 3D structure of the nucleon
Avakian, Harut; D'Alesio, U.; Murgia, F.
2015-01-23
In this study, extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self-consistent analysis framework, accounting for evolution effects and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations, will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.
3D Reconstruction of Static Human Body with a Digital Camera
NASA Astrophysics Data System (ADS)
Remondino, Fabio
2003-01-01
Nowadays, the 3D reconstruction and modeling of real humans is one of the most challenging problems and a topic of great interest. The human models are used for movies, video games or ergonomics applications, and they are usually created with 3D scanner devices. In this paper a new method to reconstruct the shape of a static human is presented. Our approach is based on photogrammetric techniques and uses a sequence of images acquired around a standing person with a digital still video camera or a camcorder. First the images are calibrated and oriented using a bundle adjustment. After the establishment of a stable adjusted image block, an image matching process is performed between consecutive triplets of images. Finally the 3D coordinates of the matched points are computed with a mean accuracy of ca. 2 mm by forward ray intersection. The obtained point cloud can then be triangulated to generate a surface model of the body, or a virtual human model can be fitted to the recovered 3D data. Results of the 3D human point cloud with pixel color information are presented.
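The forward ray intersection step can be sketched as a least-squares intersection of calibrated rays; this is a minimal illustration with invented camera geometry, not the paper's full bundle workflow:

```python
import numpy as np

# Hedged sketch of forward ray intersection: each calibrated image
# contributes a ray (camera centre + direction through the matched
# pixel); the 3D point minimizes the sum of squared distances to all
# rays, solved via orthogonal projectors.
def intersect_rays(origins, dirs):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# two toy cameras on the x-axis, both sighting the point (0, 0, 5)
origins = [np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
dirs = [np.array([1.0, 0.0, 5.0]), np.array([-1.0, 0.0, 5.0])]
print(np.round(intersect_rays(origins, dirs), 6) + 0.0)  # -> [0. 0. 5.]
```

With matched points seen in three images (as in the triplet matching above), the same solve simply accumulates a third projector, and redundancy improves the accuracy of the intersected point.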
NASA Technical Reports Server (NTRS)
Bailey, R. T.; Shih, T. I.-P.; Nguyen, H. L.; Roelke, R. J.
1990-01-01
An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coon's interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. 
In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. The theory and method used in GRID2D/3D is described.
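The transfinite interpolation at the heart of the grid generator can be sketched in 2D as a Coons-style blend of four boundary curves; stretching functions and the spline-described boundaries GRID2D/3D uses are omitted here:

```python
import numpy as np

# Minimal sketch of algebraic grid generation by transfinite (Coons)
# interpolation: interior grid points are blended from the four
# boundary point arrays (bottom/top run along xi, left/right along eta).
def transfinite_grid(bottom, top, left, right):
    n, m = len(bottom), len(left)
    xi = np.linspace(0.0, 1.0, n)
    eta = np.linspace(0.0, 1.0, m)
    grid = np.zeros((m, n, 2))
    for j, e in enumerate(eta):
        for i, x in enumerate(xi):
            grid[j, i] = ((1 - e) * bottom[i] + e * top[i]
                          + (1 - x) * left[j] + x * right[j]
                          - ((1 - x) * (1 - e) * bottom[0]
                             + x * (1 - e) * bottom[-1]
                             + (1 - x) * e * top[0]
                             + x * e * top[-1]))   # subtract corner terms
    return grid

# unit-square boundaries -> a uniform 5x5 grid
s = np.linspace(0.0, 1.0, 5)
bottom = np.stack([s, np.zeros(5)], axis=1)
top    = np.stack([s, np.ones(5)], axis=1)
left   = np.stack([np.zeros(5), s], axis=1)
right  = np.stack([np.ones(5), s], axis=1)
g = transfinite_grid(bottom, top, left, right)
print(g[2, 2])  # centre point -> [0.5 0.5]
```

Replacing the uniform xi and eta arrays with stretching functions clusters grid lines toward chosen boundaries, which is how such generators control point distribution.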
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.
1990-01-01
An efficient computer program, called GRID2D/3D was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coon's interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. 
In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. This technical memorandum describes the theory and method used in GRID2D/3D.
NASA Astrophysics Data System (ADS)
Wang, Yanxing; Brasseur, James G.
2017-06-01
We evaluate the potential for physiological control of intestinal absorption by the generation of "micromixing layers" (MMLs) induced by coordinated motions of mucosal villi coupled with lumen-scale "macro" eddying motions generated by gut motility. To this end, we apply a three-dimensional (3D) multigrid lattice-Boltzmann model of a lid-driven macroscale cavity flow with microscale fingerlike protuberances at the lower surface. Building on a previous 2D study of leaflike villi, we generalize to 3D the 2D mechanisms found there to enhance nutrient absorption by controlled villi motility. In three dimensions, increased lateral spacing between villi within groups that move axially with the macroeddy reduces MML strength and absorptive enhancement relative to two dimensions. However, lateral villi motions create helical 3D particle trajectories that enhance the absorption rate to the level of axially moving 2D leaflike villi. The 3D enhancements are associated with interesting fundamental adjustments to 2D micro-macro-motility coordination mechanisms and imply a refined potential for physiological or pharmaceutical control of intestinal absorption.
Technical note: 3D from standard digital photography of human crania-a preliminary assessment.
Katz, David; Friess, Martin
2014-05-01
This study assessed three-dimensional (3D) photogrammetry as a tool for capturing and quantifying human skull morphology. While virtual reconstruction with 3D surface scanning technology has become an accepted part of the paleoanthropologist's tool kit, recent advances in 3D photogrammetry make it a potential alternative to dedicated surface scanners. The principal advantages of photogrammetry are more rapid raw data collection, simplicity and portability of setup, and reduced equipment costs. We tested the precision and repeatability of 3D photogrammetry by comparing digital models of human crania reconstructed from conventional, 2D digital photographs to those generated using a 3D surface scanner. Overall, the photogrammetry and scanner meshes showed low degrees of deviation from one another. Surface area estimates derived from photogrammetry models tended to be slightly larger. Landmark configurations generally did not cluster together based upon whether the reconstruction was created with photogrammetry or surface scanning technology. Average deviations of landmark coordinates recorded on photogrammetry models were within the generally allowable range of error in osteometry. Thus, while dependent upon the needs of the particular research project, 3D photogrammetry appears to be a suitable, lower-cost alternative to 3D imaging and scanning options. Copyright © 2014 Wiley Periodicals, Inc.
Numerical fatigue 3D-FE modeling of indirect composite-restored posterior teeth.
Ausiello, Pietro; Franciosa, Pasquale; Martorelli, Massimo; Watts, David C
2011-05-01
In restored teeth, stresses at the tooth-restoration interface during masticatory processes may fracture the teeth or the restoration, and cracks may grow and propagate. The aim was to apply numerical methodologies to simulate the behavior of a restored tooth and to evaluate fatigue lifetimes before crack failure. Using a CAD-FEM procedure and fatigue mechanics laws, the fatigue damage of a restored molar was numerically estimated. Tessellated surfaces of enamel and dentin were extracted by applying segmentation and classification algorithms to sets of 2D image data. A user-friendly GUI, which enables selection and visualization of 3D tessellated surfaces, was developed in a MatLab(®) environment. The tooth-boundary surfaces of enamel and dentin were then created by sweeping operations through cross-sections. A class II MOD cavity preparation was then added into the 3D model and tetrahedral mesh elements were generated. Fatigue simulation was performed by combining a preliminary static FEA simulation with classical fatigue mechanics laws. Regions with the shortest fatigue life were located around the fillets of the class II MOD cavity, where the static stress was highest. The described method can be successfully adopted to generate detailed 3D-FE models of molar teeth, with different cavities and restorative materials. This method could be quickly implemented for other dental or biomechanical applications. Copyright © 2010 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
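The "classical fatigue mechanics laws" step above can be illustrated with Basquin's stress-life relation, which maps a computed static stress amplitude to cycles to failure. The coefficients below (fatigue strength coefficient and exponent) are hypothetical placeholders, not values from the study:

```python
def basquin_life(stress_amp, sigma_f=600.0, b=-0.09):
    """Cycles to failure N from Basquin's law: sigma_a = sigma_f' * (2N)^b.

    stress_amp: stress amplitude at the critical location (e.g. a cavity
    fillet from the static FEA), in the same units as sigma_f.
    sigma_f, b: illustrative material constants, not from the paper.
    """
    return 0.5 * (stress_amp / sigma_f) ** (1.0 / b)
```

Because b is negative, higher stress amplitudes at the cavity fillets yield shorter predicted fatigue lives, matching the qualitative finding reported above.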
Raffan, Hazel; Guevar, Julien; Poyade, Matthieu; Rea, Paul M.
2017-01-01
Current methods used to communicate and present the complex arrangement of vasculature related to the brain and spinal cord are limited in undergraduate veterinary neuroanatomy training. Traditionally it is taught with 2-dimensional (2D) diagrams, photographs and medical imaging scans which show a fixed viewpoint. 2D representations of 3-dimensional (3D) objects, however, lead to loss of spatial information, which can present problems when translating this to the patient. Computer-assisted learning packages with interactive 3D anatomical models have become established in medical training, yet equivalent resources are scarce in veterinary education. For this reason, we set out to develop a workflow methodology creating an interactive model depicting the vasculature of the canine brain that could be used in undergraduate education. Using MR images of a dog and several commonly available software programs, we set out to show how combining image editing, segmentation and surface generation, 3D modeling and texturing can result in the creation of a fully interactive application for veterinary training. In addition to clearly identifying a workflow methodology for the creation of this dataset, we have also demonstrated how an interactive tutorial and self-assessment tool can be incorporated into this. In conclusion, we present a workflow which has been successful in developing a 3D reconstruction of the canine brain and associated vasculature through segmentation, surface generation and post-processing of readily available medical imaging data. The reconstructed model was implemented into an interactive application for veterinary education that has been designed to target the problems associated with learning neuroanatomy, primarily the inability to visualise complex spatial arrangements from 2D resources. The lack of similar resources in this field suggests this workflow is original within a veterinary context.
There is great potential to explore this method, and introduce a new dimension into veterinary education and training. PMID:28192461
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
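The rate-distortion-optimal idea described above (allocate each increment of the bit budget to whichever slice currently buys the largest distortion reduction) can be sketched as a greedy marginal-return loop. The per-slice distortion model below is a generic high-rate approximation, D_i(r) = var_i * 2^(-2r), not the paper's mixed model:

```python
import numpy as np

def allocate_bits(variances, total_bits, step=0.25):
    """Greedy marginal-return bit allocation across volume slices.

    variances: per-slice signal variances (stand-ins for measured R-D curves)
    total_bits: total rate budget, spent in increments of `step` bits
    Returns the per-slice rates (bits) summing to total_bits.
    """
    rates = np.zeros(len(variances))
    dist = lambda v, r: v * 2.0 ** (-2.0 * r)   # hypothetical R-D model
    budget = total_bits
    while budget >= step:
        # Distortion drop each slice would gain from one more increment.
        gains = [dist(v, r) - dist(v, r + step)
                 for v, r in zip(variances, rates)]
        rates[int(np.argmax(gains))] += step
        budget -= step
    return rates
```

High-variance slices receive more bits, and at convergence the marginal returns are approximately equalized across slices, which is the Lagrangian optimality condition the paper's first algorithm enforces exactly.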
Usta, Taner A; Ozkaynak, Aysel; Kovalak, Ebru; Ergul, Erdinc; Naki, M Murat; Kaya, Erdal
2015-08-01
Two-dimensional (2D) view is known to cause practical difficulties for surgeons in conventional laparoscopy. Our goal was to evaluate whether the new-generation Three-Dimensional Laparoscopic Vision System (3D LVS) provides greater benefit in terms of execution time and error number during the performance of surgical tasks. This study tests the hypothesis that the use of the new-generation 3D LVS can significantly improve technical ability on complex laparoscopic tasks in an experimental model. Twenty-four participants (8 experienced, 8 minimally experienced, and 8 inexperienced) were evaluated for 10 different tasks in terms of total execution time and error number. A 4-point Likert scale was used for subjective assessment of the two imaging modalities. All tasks were completed by all participants. A statistically significant difference was determined between 3D and 2D systems in the tasks of bead transfer and drop, suturing, and pick-and-place in the inexperienced group; in the task of passing through two circles with the needle in the minimally experienced group; and in the tasks of bead transfer and drop, suturing, and passing through two circles with the needle in the experienced group. Three-dimensional imaging was preferred over 2D in 6 of the 10 subjective criteria questions on the 4-point Likert scale. The majority of the tasks were completed in a shorter time using 3D LVS compared to 2D LVS. The subjective Likert-scale ratings from each group also demonstrated a clear preference for 3D LVS. The new 3D LVS has the potential to improve the learning curve and reduce the operating time and error rate of laparoscopic surgeons. Our results suggest that the new-generation 3D HD LVS will be helpful for surgeons in laparoscopy (Clinical Trial ID: NCT01799577, Protocol ID: BEHGynobs-4).
Scaling depth-induced wave-breaking in two-dimensional spectral wave models
NASA Astrophysics Data System (ADS)
Salmon, J. E.; Holthuijsen, L. H.; Zijlema, M.; van Vledder, G. Ph.; Pietrzak, J. D.
2015-03-01
Wave breaking in shallow water is still poorly understood and needs to be better parameterized in 2D spectral wave models. Significant wave heights over horizontal bathymetries are typically under-predicted in locally generated wave conditions and over-predicted in non-locally generated conditions. A joint scaling dependent on both local bottom slope and normalized wave number is presented and is shown to resolve these issues. Compared to the 12 wave breaking parameterizations considered in this study, this joint scaling demonstrates significant improvements, up to ∼50% error reduction, over 1D horizontal bathymetries for both locally and non-locally generated waves. In order to account for the inherent differences between uni-directional (1D) and directionally spread (2D) wave conditions, an extension of the wave breaking dissipation models is presented. By including the effects of wave directionality, rms-errors for the significant wave height are reduced for the best performing parameterizations in conditions with strong directional spreading. With this extension, our joint scaling improves modeling skill for significant wave heights over a verification data set of 11 different 1D laboratory bathymetries, 3 shallow lakes and 4 coastal sites. The corresponding averaged normalized rms-error for significant wave height in the 2D cases varied between 8% and 27%. In comparison, using the default setting with a constant scaling, as used in most presently operating 2D spectral wave models, gave equivalent errors between 15% and 38%.
Anatomic modeling using 3D printing: quality assurance and optimization.
Leng, Shuai; McGee, Kiaran; Morris, Jonathan; Alexander, Amy; Kuhlmann, Joel; Vrieze, Thomas; McCollough, Cynthia H; Matsumoto, Jane
2017-01-01
The purpose of this study is to provide a framework for the development of a quality assurance (QA) program for use in medical 3D printing applications. An interdisciplinary QA team was built with expertise from all aspects of 3D printing. A systematic QA approach was established to assess the accuracy and precision of each step during the 3D printing process, including: image data acquisition, segmentation and processing, and 3D printing and cleaning. Validation of printed models was performed by qualitative inspection and quantitative measurement. The latter was achieved by scanning the printed model with a high resolution CT scanner to obtain images of the printed model, which were registered to the original patient images and the distance between them was calculated on a point-by-point basis. A phantom-based QA process, with two QA phantoms, was also developed. The phantoms went through the same 3D printing process as that of the patient models to generate printed QA models. Physical measurement, fit tests, and image based measurements were performed to compare the printed 3D model to the original QA phantom, with its known size and shape, providing an end-to-end assessment of errors involved in the complete 3D printing process. Measured differences between the printed model and the original QA phantom ranged from -0.32 mm to 0.13 mm for the line pair pattern. For a radial-ulna patient model, the mean distance between the original data set and the scanned printed model was -0.12 mm (ranging from -0.57 to 0.34 mm), with a standard deviation of 0.17 mm. A comprehensive QA process from image acquisition to completed model has been developed. Such a program is essential to ensure the required accuracy of 3D printed models for medical applications.
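The quantitative validation step above (registering the scanned printed model to the original patient images and computing distance point by point) can be sketched as a nearest-neighbour deviation between two registered point clouds. This is a simplified illustration, not the study's actual registration pipeline:

```python
import numpy as np

def surface_deviation(model_pts, reference_pts):
    """Per-point deviation between two already-registered point clouds.

    model_pts: (N, 3) points sampled from the scanned printed model
    reference_pts: (M, 3) points from the original patient surface
    Returns the distance from each model point to its nearest reference
    point, from which mean / SD / range statistics can be reported.
    """
    # Brute-force nearest neighbour; adequate for small QA point sets.
    diff = model_pts[:, None, :] - reference_pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)
    return dist
```

Summaries such as the mean, standard deviation, and min/max of the returned distances correspond to the kind of figures quoted above (e.g. a mean deviation with an SD and a range). Note this unsigned distance ignores deviation direction; signed statistics would need surface normals.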
NASA Astrophysics Data System (ADS)
Moussaoui, H.; Debayle, J.; Gavet, Y.; Delette, G.; Hubert, M.; Cloetens, P.; Laurencin, J.
2017-03-01
A strong correlation exists between the performance of Solid Oxide Cells (SOCs), working either in fuel cell or electrolysis mode, and their electrode microstructure. However, the basic relationships between the three-dimensional characteristics of the microstructure and the electrode properties are still not precisely understood. Thus, several studies have recently been proposed in an attempt to improve the knowledge of such relations, which is essential before optimizing the microstructure and, hence, designing more efficient SOC electrodes. In that frame, an original model has been adapted to generate virtual 3D microstructures of typical SOC electrodes. Both the oxygen electrode, which is made of porous LSCF, and the hydrogen electrode, made of porous Ni-YSZ, have been studied. In this work, the synthetic microstructures are generated by the so-called 3D Gaussian 'Random Field model'. The morphological representativeness of the virtual porous media has been validated on real 3D electrode microstructures of a commercial cell, obtained by X-ray nano-tomography at the European Synchrotron Radiation Facility (ESRF). This validation step includes the comparison of morphological parameters like the phase covariance function and granulometry, as well as physical parameters like the 'apparent tortuosity'. Finally, this validated tool will be used, in forthcoming studies, to identify the optimal microstructure of SOCs.
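The Gaussian random field approach named above can be sketched generically: smooth white noise to impose a correlation length, then threshold at the quantile matching a target phase fraction. This is an illustration of the general technique, not the authors' calibrated model:

```python
import numpy as np

def gaussian_field_microstructure(shape=(32, 32, 32), corr=3.0,
                                  porosity=0.4, seed=0):
    """Two-phase 3D microstructure from a thresholded Gaussian random field.

    Smooths white noise with a Gaussian kernel in Fourier space (correlation
    length `corr` in voxels), then thresholds at the quantile matching the
    target pore fraction. Returns a boolean array (True = pore phase).
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    freqs = [np.fft.fftfreq(n) for n in shape]
    kx, ky, kz = np.meshgrid(*freqs, indexing="ij")
    # Gaussian low-pass filter sets the field's correlation length.
    kernel = np.exp(-2.0 * (np.pi * corr) ** 2 * (kx**2 + ky**2 + kz**2))
    field = np.real(np.fft.ifftn(np.fft.fftn(noise) * kernel))
    # Level-cut: the quantile guarantees the prescribed volume fraction.
    return field < np.quantile(field, porosity)
```

Two levels of cutting (or two independent fields) extend the same idea to three-phase Ni-YSZ-pore composites; validation against tomography would then compare covariance functions and granulometry as described above.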
A New Approach for On-Demand Generation of Various Oxygen Tensions for In Vitro Hypoxia Models
Li, Chunyan; Chaung, Wayne; Mozayan, Cameron; Chabra, Ranjeev; Wang, Ping; Narayan, Raj K.
2016-01-01
The development of in vitro disease models closely mimicking the functions of human disease has captured increasing attention in recent years. Oxygen tensions and gradients play essential roles in modulating biological systems in both physiologic and pathologic events. Thus, controlling oxygen tension is critical for mimicking physiologically relevant in vivo environments for cell, tissue and organ research. We present a new approach for on-demand generation of various oxygen tensions for in vitro hypoxia models. Proof-of-concept prototypes have been developed for conventional cell culture microplate by immobilizing a novel oxygen-consuming biomaterial on the 3D-printed insert. For the first time, rapid (~3.8 minutes to reach 0.5% O2 from 20.9% O2) and precisely controlled oxygen tensions/gradients (2.68 mmHg per 50 μm distance) were generated by exposing the biocompatible biomaterial to the different depth of cell culture media. In addition, changing the position of 3D-printed inserts with immobilized biomaterials relative to the cultured cells resulted in controllable and rapid changes in oxygen tensions (<130 seconds). Compared to the current technologies, our approach allows enhanced spatiotemporal resolution and accuracy of the oxygen tensions. Additionally, it does not interfere with the testing environment while maintaining ease of use. The elegance of oxygen tension manipulation introduced by our new approach will drastically improve control and lower the technological barrier of entry for hypoxia studies. Since the biomaterials can be immobilized in any devices, including microfluidic devices and 3D-printed tissues or organs, it will serve as the basis for a new generation of experimental models previously impossible or very difficult to implement. PMID:27219067
Yuan, Peng; Mai, Huaming; Li, Jianfu; Ho, Dennis Chun-Yu; Lai, Yingying; Liu, Siting; Kim, Daeseung; Xiong, Zixiang; Alfi, David M; Teichgraeber, John F; Gateno, Jaime; Xia, James J
2017-12-01
There are many proven problems associated with traditional surgical planning methods for orthognathic surgery. To address these problems, we developed a computer-aided surgical simulation (CASS) system, the AnatomicAligner, to plan orthognathic surgery following our streamlined clinical protocol. The system includes six modules: image segmentation and three-dimensional (3D) reconstruction, registration and reorientation of models to neutral head posture, 3D cephalometric analysis, virtual osteotomy, surgical simulation, and surgical splint generation. The accuracy of the system was validated in a stepwise fashion: first to evaluate the accuracy of AnatomicAligner using 30 sets of patient data, then to evaluate the fitting of splints generated by AnatomicAligner using 10 sets of patient data. The industrial gold standard system, Mimics, was used as the reference. When comparing the results of segmentation, virtual osteotomy and transformation achieved with AnatomicAligner to the ones achieved with Mimics, the absolute deviation between the two systems was clinically insignificant. The average surface deviation between the two models after 3D model reconstruction in AnatomicAligner and Mimics was 0.3 mm with a standard deviation (SD) of 0.03 mm. All the average surface deviations between the two models after virtual osteotomy and transformations were smaller than 0.01 mm with an SD of 0.01 mm. In addition, the fitting of splints generated by AnatomicAligner was at least as good as that of the ones generated by Mimics. We successfully developed a CASS system, the AnatomicAligner, for planning orthognathic surgery following the streamlined planning protocol. The system has been proven accurate. AnatomicAligner will soon be freely available to the broader clinical and research communities.
Multi-Fidelity Uncertainty Propagation for Cardiovascular Modeling
NASA Astrophysics Data System (ADS)
Fleeter, Casey; Geraci, Gianluca; Schiavazzi, Daniele; Kahn, Andrew; Marsden, Alison
2017-11-01
Hemodynamic models are successfully employed in the diagnosis and treatment of cardiovascular disease with increasing frequency. However, their widespread adoption is hindered by our inability to account for uncertainty stemming from multiple sources, including boundary conditions, vessel material properties, and model geometry. In this study, we propose a stochastic framework which leverages three cardiovascular model fidelities: 3D, 1D and 0D models. 3D models are generated from patient-specific medical imaging (CT and MRI) of aortic and coronary anatomies using the SimVascular open-source platform, with fluid structure interaction simulations and Windkessel boundary conditions. 1D models consist of a simplified geometry automatically extracted from the 3D model, while 0D models are obtained from equivalent circuit representations of blood flow in deformable vessels. Multi-level and multi-fidelity estimators from Sandia's open-source DAKOTA toolkit are leveraged to reduce the variance in our estimated output quantities of interest while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for a variety of output quantities of interest, including global and local hemodynamic indicators. Sandia National Labs is a multimission laboratory managed and operated by NTESS, LLC, for the U.S. DOE under contract DE-NA0003525. Funding for this project provided by NIH-NIBIB R01 EB018302.
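The variance-reduction idea behind the multi-fidelity estimators described above can be sketched with a two-fidelity control variate: a few expensive high-fidelity evaluations are corrected by many cheap low-fidelity ones. The `hi`/`lo` callables stand in for 3D and 0D model evaluations; this is a minimal sketch of the estimator family, not the DAKOTA implementation:

```python
import numpy as np

def two_fidelity_estimate(hi, lo, n_hi=50, n_lo=5000, seed=0):
    """Control-variate estimate of E[hi(X)] for standard-normal input X.

    hi: expensive high-fidelity model (few evaluations)
    lo: cheap, correlated low-fidelity model (many evaluations)
    """
    rng = np.random.default_rng(seed)
    x_shared = rng.standard_normal(n_hi)   # inputs run at both fidelities
    x_extra = rng.standard_normal(n_lo)    # extra low-fidelity-only runs
    q_hi, q_lo = hi(x_shared), lo(x_shared)
    # Optimal control-variate weight from the sampled covariance.
    alpha = np.cov(q_hi, q_lo)[0, 1] / np.var(q_lo, ddof=1)
    # Correct the small-sample high-fidelity mean with the cheap estimate.
    return q_hi.mean() + alpha * (lo(x_extra).mean() - q_lo.mean())
```

The stronger the correlation between fidelities, the more variance the correction removes for the same high-fidelity budget, which is exactly the trade-off studied for the hemodynamic quantities of interest above.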
3D conditional generative adversarial networks for high-quality PET image estimation at low dose.
Wang, Yan; Yu, Biting; Wang, Lei; Zu, Chen; Lalush, David S; Lin, Weili; Wu, Xi; Zhou, Jiliu; Shen, Dinggang; Zhou, Luping
2018-07-01
Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET images for clinical needs. This inevitably raises concerns about potential health hazards. On the other hand, dose reduction may increase noise in the reconstructed PET images, which impacts the image quality to a certain extent. In this paper, in order to reduce the radiation exposure while maintaining the high quality of PET images, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET images from low-dose ones. Generative adversarial networks (GANs) include a generator network and a discriminator network which are trained simultaneously with the goal of one beating the other. Similar to GANs, in the proposed 3D c-GANs, we condition the model on an input low-dose PET image and generate a corresponding output full-dose PET image. Specifically, to render the same underlying information between the low-dose and full-dose PET images, a 3D U-net-like deep architecture which can combine hierarchical features by using skip connections is designed as the generator network to synthesize the full-dose image. In order to guarantee that the synthesized PET image is close to the real one, we take into account the estimation error loss in addition to the discriminator feedback to train the generator network. Furthermore, a concatenated 3D c-GANs based progressive refinement scheme is also proposed to further improve the quality of estimated images. Validation was done on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI).
Experimental results show that our proposed 3D c-GANs method outperforms the benchmark methods and achieves much better performance than the state-of-the-art methods in both qualitative and quantitative measures. Copyright © 2018 Elsevier Inc. All rights reserved.
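The generator objective described above (discriminator feedback plus an estimation error term) has the general conditional-GAN form of an adversarial loss combined with a weighted voxel-wise penalty. The toy sketch below uses numpy arrays in place of network outputs; the L1 form and the weight `lam` are illustrative assumptions, as the abstract does not give the exact loss:

```python
import numpy as np

def generator_loss(d_fake, fake, target, lam=10.0):
    """Conditional-GAN generator objective (illustrative sketch).

    d_fake: discriminator probabilities on generated volumes, in (0, 1)
    fake, target: generated and ground-truth full-dose volumes
    lam: hypothetical weight balancing the two terms
    """
    adv = -np.mean(np.log(d_fake + 1e-12))   # push D(G(x)) toward "real"
    est = np.mean(np.abs(fake - target))     # estimation error (L1) term
    return adv + lam * est
```

Minimizing `adv` alone encourages realistic textures; the estimation error term anchors the output to the true full-dose image, which is why the paper adds it on top of the discriminator feedback.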
Design and fabrication of complete dentures using CAD/CAM technology
Han, Weili; Li, Yanfeng; Zhang, Yue; Lv, Yuan; Zhang, Ying; Hu, Ping; Liu, Huanyue; Ma, Zheng; Shen, Yi
2017-01-01
The aim of the study was to test the feasibility of using commercially available computer-aided design and computer-aided manufacturing (CAD/CAM) technology including 3Shape Dental System 2013 trial version, WIELAND V2.0.049 and WIELAND ZENOTEC T1 milling machine to design and fabricate complete dentures. The modeling process of full denture available in the trial version of 3Shape Dental System 2013 was used to design virtual complete dentures on the basis of 3-dimensional (3D) digital edentulous models generated from the physical models. The virtual complete dentures designed were exported to CAM software of WIELAND V2.0.049. A WIELAND ZENOTEC T1 milling machine controlled by the CAM software was used to fabricate physical dentitions and baseplates by milling acrylic resin composite plates. The physical dentitions were bonded to the corresponding baseplates to form the maxillary and mandibular complete dentures. Virtual complete dentures were successfully designed using the software through several steps including generation of 3D digital edentulous models, model analysis, arrangement of artificial teeth, trimming relief area, and occlusal adjustment. Physical dentitions and baseplates were successfully fabricated according to the designed virtual complete dentures using a milling machine controlled by CAM software. Bonding physical dentitions to the corresponding baseplates generated the final physical complete dentures. Our study demonstrated that complete dentures could be successfully designed and fabricated by using CAD/CAM. PMID:28072686
NASA Astrophysics Data System (ADS)
Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.
2016-02-01
A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.
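The successive-generation bookkeeping described above can be sketched as a toy Monte Carlo: each neutral either charge-exchanges (spawning a next-generation halo) or is lost to ionization or escape from the box. The branching probability below is illustrative, not a value from the TRANSP or FIDAsim models:

```python
import numpy as np

def halo_generations(n_beam=10000, p_cx=0.4, seed=1):
    """Toy Monte Carlo of successive halo-neutral generations.

    Each neutral charge-exchanges with probability p_cx, producing one
    next-generation halo; otherwise it is ionized or exits the box.
    Returns the halo count in each generation until the chain dies out.
    """
    rng = np.random.default_rng(seed)
    counts, current = [], n_beam
    while current > 0:
        # Draw the fate of every neutral in the current generation.
        current = int((rng.random(current) < p_cx).sum())
        if current:
            counts.append(current)
    return counts
```

With a per-generation charge-exchange probability of this order, the summed halo population is a sizable fraction of the beam population (geometric series p/(1-p)), consistent with the finding above that halo neutral density can be comparable to beam neutral density.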
A novel technique for reference point generation to aid in intraoral scan alignment.
Renne, Walter G; Evans, Zachary P; Mennito, Anthony; Ludlow, Mark
2017-11-12
When using a completely digital workflow on larger prosthetic cases, it is often difficult to communicate the provisional prosthetic information to the laboratory or chairside Computer Aided Design and Computer Aided Manufacturing system. The problem arises when common hard tissue data points are limited or non-existent, such as in complete arch cases in which the 3D model of the complete arch provisional restorations must be aligned perfectly with the 3D model of the complete arch preparations. In these instances, soft tissue is not enough to ensure an accurate automatic or manual alignment due to a lack of well-defined reference points. A new technique is proposed for the proper digital alignment of the 3D virtual model of the provisional prosthetic to the 3D virtual model of the prepared teeth in cases where common and coincident hard tissue data points are limited. Clinical considerations: A technique is described in which fiducial composite resin dots are temporarily placed on the intraoral keratinized tissue in strategic locations prior to final impressions. These fiducial dots provide coincident and clear 3D data points that, when scanned into a digital impression, allow superimposition of the 3D models. Composite resin dots on keratinized tissue were successful at allowing accurate merging of provisional restoration and post-preparation 3D models for the purpose of using the provisional restorations as a guide for final restoration design. CLINICAL SIGNIFICANCE: Composite resin dots placed temporarily on attached tissue were successful at allowing accurate merging of the provisional restoration 3D models to the preparation 3D models for the purposes of using the provisional restorations as a guide for final restoration design and manufacturing. In this case, they allowed precise superimposition of the 3D models made in the absence of any other hard tissue reference points, resulting in the fabrication of ideal final restorations. © 2017 Wiley Periodicals, Inc.
Olszewski, Raphael; Szymor, Piotr; Kozakiewicz, Marcin
2014-12-01
Our study aimed to determine the accuracy of a low-cost, paper-based 3D printer by comparing a dry human mandible to its corresponding three-dimensional (3D) model using a 3D measuring arm. One dry human mandible and its corresponding printed model were evaluated. The model was produced using DICOM data from cone beam computed tomography. The data were imported into Maxilim software, wherein automatic segmentation was performed, and the STL file was saved. These data were subsequently analysed, repaired, cut and prepared for printing with netfabb software. These prepared data were used to create a paper-based model of a mandible with an MCor Matrix 300 printer. Seventy-six anatomical landmarks were chosen and measured 20 times on the mandible and the model using a MicroScribe G2X 3D measuring arm. The distances between all the selected landmarks were measured and compared. Only landmarks with a point inaccuracy less than 30% were used in further analyses. The mean absolute difference for the selected 2016 measurements was 0.36 ± 0.29 mm. The mean relative difference was 1.87 ± 3.14%; however, the measurement length significantly influenced the relative difference. The accuracy of the 3D model printed using the paper-based, low-cost 3D Matrix 300 printer was acceptable. The average error was no greater than that measured with other types of 3D printers. The mean relative difference should not be considered the best way to compare studies. The point inaccuracy methodology proposed in this study may be helpful in future studies concerned with evaluating the accuracy of 3D rapid prototyping models. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
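The accuracy statistics reported above (mean absolute and mean relative difference between paired inter-landmark distances) can be sketched directly. The values in the test are illustrative, not the study's 2016 measurements:

```python
def accuracy_stats(real, model):
    """Mean absolute difference (mm) and mean relative difference (%)
    between paired distance measurements taken on a specimen and on
    its 3D-printed replica. Toy sketch; inputs are illustrative."""
    assert len(real) == len(model) and real, "paired, non-empty inputs"
    abs_diff = [abs(r - m) for r, m in zip(real, model)]
    rel_diff = [100.0 * abs(r - m) / r for r, m in zip(real, model)]
    n = len(real)
    return sum(abs_diff) / n, sum(rel_diff) / n
```

Note how the relative difference depends on measurement length, which is why the abstract cautions against using it to compare studies: the same 0.5 mm error yields 5% on a 10 mm distance but only 2.5% on a 20 mm distance.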
San José, Verónica; Bellot-Arcís, Carlos; Tarazona, Beatriz; Zamora, Natalia; O Lagravère, Manuel
2017-01-01
Background To compare the reliability and accuracy of direct and indirect dental measurements derived from two types of 3D virtual models, generated by intraoral laser scanning (ILS) and by segmented cone beam computed tomography (CBCT), comparing these with a 2D digital model. Material and Methods One hundred patients were selected. All patients' records included initial plaster models, an intraoral scan and a CBCT. Patients' dental arches were scanned with the iTero® intraoral scanner, while the CBCTs were segmented to create three-dimensional models. To obtain 2D digital models, plaster models were scanned using a conventional 2D scanner. When digital models had been obtained using these three methods, direct dental measurements were taken and indirect measurements were calculated. Differences between methods were assessed by means of paired t-tests and regression models. Intra- and inter-observer error were analyzed using Dahlberg's d and coefficients of variation. Results Intraobserver and interobserver error for the ILS model was less than 0.44 mm, while for segmented CBCT models the error was less than 0.97 mm. ILS models provided statistically and clinically acceptable accuracy for all dental measurements, while CBCT models showed a tendency to underestimate measurements in the lower arch, although within the limits of clinical acceptability. Conclusions ILS and CBCT segmented models are both reliable and accurate for dental measurements. Integrating ILS with CBCT scans would provide dental and skeletal information together. Key words: CBCT, intraoral laser scanner, 2D digital models, 3D models, dental measurements, reliability. PMID:29410764
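Dahlberg's d, used above for the observer-error analysis, is computed from paired repeated measurements as d = sqrt(Σ(x1 − x2)² / 2n). A minimal sketch with toy numbers, not the study's data:

```python
import math

def dahlberg(first, second):
    """Dahlberg's formula for intra-/inter-observer measurement error:
    d = sqrt(sum((x1 - x2)^2) / (2n)), where (x1, x2) are repeated
    measurements of the same quantity. Toy sketch with mm inputs."""
    assert len(first) == len(second) and first, "paired, non-empty inputs"
    diffs = [a - b for a, b in zip(first, second)]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
```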
NASA Astrophysics Data System (ADS)
Jackson, Amiee; Ray, Lawrence A.; Dangi, Shusil; Ben-Zikri, Yehuda K.; Linte, Cristian A.
2017-03-01
With increasing resolution in image acquisition, this project explores the capability of 3D printing to faithfully reproduce the detail and features depicted in medical images. To improve the safety and efficiency of orthopedic surgery and spatial conceptualization in training and education, the project focused on generating virtual models of orthopedic anatomy from clinical-quality computed tomography (CT) image datasets and manufacturing life-size physical models of the anatomy using 3D printing tools. Beginning with raw micro-CT data, several image segmentation techniques, including thresholding, edge recognition, and region-growing algorithms available in packages such as ITK-SNAP, MITK, or Mimics, were utilized to separate bone from surrounding soft tissue. After converting the resulting data to a standard 3D printing format, stereolithography (STL), the STL file was edited using Meshlab, Netfabb, and Meshmixer. The editing process was necessary to ensure a fully connected surface (no loose elements), positive volume with manifold geometry (geometry realizable in the 3D physical world), and a single, closed shell. The resulting surface was then imported into a "slicing" software to scale and orient the model for printing on a Flashforge Creator Pro. In printing, relationships between orientation, print bed volume, model quality, material use and cost, and print time were considered. We generated anatomical models of the hand, elbow, knee, ankle, and foot from low-dose high-resolution cone-beam CT images acquired using the soon-to-be-released scanner developed by Carestream, as well as scaled models of the skeletal anatomy of the arm and leg, together with life-size models of the hand and foot.
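The "manifold geometry, single closed shell" requirement described above reduces, for the watertightness part, to an edge-counting check: in a closed manifold triangle mesh, every edge is shared by exactly two triangles. A simplified sketch (it ignores shell counting and self-intersection, which repair tools such as Netfabb also test):

```python
from collections import Counter

def is_closed_manifold(triangles):
    """Watertightness check used before 3D printing: every undirected
    edge of the triangle mesh must be shared by exactly two triangles.
    Triangles are index triples into an (implicit) vertex list."""
    edge_counts = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_counts[tuple(sorted((u, v)))] += 1
    return all(count == 2 for count in edge_counts.values())
```

A tetrahedron (four faces) passes; a lone triangle has three boundary edges and fails.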
NASA Astrophysics Data System (ADS)
Marques, Luís.; Roca Cladera, Josep; Tenedório, José António
2017-10-01
The use of multiple sets of images with a high level of overlap to extract 3D point clouds has increased progressively in recent years. Two main factors underlie this progress. First, image matching algorithms have been optimised, and the software supporting these techniques has been constantly developed. Second, the emergent paradigm of smart cities has been promoting the virtualization of urban spaces and their elements. The creation of 3D models of urban elements is extremely relevant for urbanists to constitute digital archives of urban elements, and it is especially useful for enriching maps and databases or for reconstructing and analysing objects/areas through time, building and recreating scenarios, and implementing intuitive methods of interaction. These characteristics support, for example, broader public participation, creating a completely collaborative solution system and envisioning processes, simulations and results. This paper is organized in two main topics. The first deals with technical data modelling from terrestrial photographs: planning criteria for obtaining photographs, approving or rejecting photos based on their quality, editing photos, creating masks, aligning photos, generating tie points, extracting point clouds, generating meshes, building textures and exporting results. The application of these procedures results in 3D models for the visualization of urban elements of the city of Barcelona. The second concerns the use of Augmented Reality through mobile platforms, allowing users to understand the city origins and their relation to the present city morphology, (en)visioning solutions, processes and simulations, and making it possible for agents in several domains to ground their decisions (and understand them), achieving a faster and wider consensus.
NASA Astrophysics Data System (ADS)
Hawes, Frederick T.; Berk, Alexander; Richtsmeier, Steven C.
2016-05-01
A validated, polarimetric 3-dimensional simulation capability, P-MCScene, is being developed by generalizing Spectral Sciences' Monte Carlo-based synthetic scene simulation model, MCScene, to include calculation of all four Stokes components. P-MCScene polarimetric optical databases will be generated by a new version (MODTRAN7) of the government-standard MODTRAN radiative transfer algorithm. The conversion of MODTRAN6 to a polarimetric model is being accomplished by (1) introducing polarimetric data, (2) vectorizing the MODTRAN radiation calculations, and (3) integrating the newly revised and validated vector discrete ordinate model VDISORT3. Early results, presented here, demonstrate a clear pathway to the long-term goal of fully validated polarimetric models.
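For readers unfamiliar with the four Stokes components, the degree of polarization that a polarimetric scene simulator of this kind would report per pixel follows directly from the Stokes vector (I, Q, U, V). A minimal sketch of that standard relation (illustrative, not P-MCScene code):

```python
import math

def degree_of_polarization(stokes):
    """Degree of polarization DOP = sqrt(Q^2 + U^2 + V^2) / I for a
    4-component Stokes vector (I, Q, U, V); 0 = unpolarized light,
    1 = fully polarized light."""
    i, q, u, v = stokes
    return math.sqrt(q * q + u * u + v * v) / i
```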
Application of CART3D to Complex Propulsion-Airframe Integration with Vehicle Sketch Pad
NASA Technical Reports Server (NTRS)
Hahn, Andrew S.
2012-01-01
Vehicle Sketch Pad (VSP) is an easy-to-use modeler used to generate aircraft geometries for conceptual design and analysis. It has been used in the past to generate metageometries for aerodynamic analyses ranging from handbook methods to Navier-Stokes computational fluid dynamics (CFD). As desirable as it is to bring high-order analyses, such as CFD, into the conceptual design process, this has been difficult and time consuming in practice due to the manual nature of both surface and volume grid generation. Over the last couple of years, VSP has had a major upgrade of its surface triangulation and export capability. This has enhanced its ability to work with Cart3D, an inviscid, three-dimensional fluid flow toolset. The combination of VSP and Cart3D makes it possible to perform inviscid CFD on complex geometries with relatively high productivity. This paper illustrates the use of VSP with Cart3D through an example case of a complex propulsion-airframe integration (PAI) of an over-wing nacelle (OWN) airliner configuration.
Optimization and Validation of Rotating Current Excitation with GMR Array Sensors for Riveted
2016-09-16
distribution. Simulation results, using both an optimized coil and a conventional coil, are generated using the finite element method (FEM) model. The signal magnitude for an optimized coil is seen to be … optimized coil. 4. Model Based Performance Analysis: A 3D finite element model (FEM) is used to analyze the performance of the optimized coil and …
Attribute classification for generating GPR facies models
NASA Astrophysics Data System (ADS)
Tronicke, Jens; Allroggen, Niklas
2017-04-01
Ground-penetrating radar (GPR) is an established geophysical tool to explore near-surface sedimentary environments. It has been successfully used, for example, to reconstruct past depositional environments, to investigate sedimentary processes, to aid hydrogeological investigations, and to assist in hydrocarbon reservoir analog studies. Interpreting such 2D/3D GPR data usually relies on concepts known as GPR facies analysis, in which GPR facies are defined as units composed of characteristic reflection patterns (in terms of reflection amplitude, continuity, geometry, and internal configuration). The resulting facies models are then interpreted in terms of depositional processes, sedimentary environments, litho-, and hydrofacies. Typically, such GPR facies analyses are implemented as a manual workflow that is laborious and rather inefficient, especially for 3D data sets. In addition, such a subjective strategy bears the potential for inconsistency, because the outcome depends on the expertise and experience of the interpreter. In this presentation, we investigate the feasibility of delineating GPR facies in an objective and largely automated manner. Our proposed workflow relies on a three-step procedure. First, we calculate a variety of geometrical and physical attributes from processed 2D and 3D GPR data sets. Then, we analyze and evaluate this attribute data base (e.g., using statistical tools such as principal component analysis) to reduce its dimensionality and to avoid redundant information. Finally, we integrate the reduced data base using tools such as composite imaging, cluster analysis, and neural networks. Using field examples acquired across different depositional environments, we demonstrate that the resulting 2D/3D facies models ease and improve the interpretation of GPR data.
We conclude that our interpretation strategy allows GPR facies models to be generated in a consistent and largely automated manner and might be helpful in a variety of near-surface applications.
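The integration step above mentions cluster analysis of the reduced attribute set. A pure-Python k-means sketch of that idea, run on toy 2D attribute vectors (a real workflow would use a vetted library implementation and the actual GPR attributes):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means clustering: assign each attribute vector to its
    nearest center, then move each center to its cluster mean.
    Toy sketch for grouping attribute vectors into facies-like clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)

    def nearest(p):
        return min(range(k),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest(p)].append(p)
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return [nearest(p) for p in points], centers
```

On two well-separated groups of attribute vectors, the labels recover the grouping regardless of which points seed the centers.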
Shutdown Dose Rate Analysis for the long-pulse D-D Operation Phase in KSTAR
NASA Astrophysics Data System (ADS)
Park, Jin Hun; Han, Jung-Hoon; Kim, D. H.; Joo, K. S.; Hwang, Y. S.
2017-09-01
KSTAR is a medium-size, fully superconducting tokamak. The deuterium-deuterium (D-D) reaction in the KSTAR tokamak generates neutrons with a peak yield of 3.5 × 10^16 per second through a pulse operation of 100 seconds. The effects of neutron generation from the full D-D high-power KSTAR operation mode on the machine, such as activation, shutdown dose rate, and nuclear heating, are estimated to ensure safety during operation, maintenance, and machine upgrades. The nuclear heating of the in-vessel components and the neutron activation of the surrounding materials have been investigated. The dose rates during operation and after shutdown of KSTAR have been calculated using a 3D CAD model of KSTAR with the Monte Carlo code MCNP5 (neutron flux and decay photons), the inventory code FISPACT (activation and decay photons) and the FENDL 2.1 nuclear data library.
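As a back-of-the-envelope bound on the source term above: if the peak yield of 3.5 × 10^16 n/s were (hypothetically) sustained for the full 100 s pulse, one pulse would produce 3.5 × 10^18 neutrons. A one-line sketch of that arithmetic:

```python
def neutrons_per_pulse(peak_rate=3.5e16, pulse_s=100.0, duty=1.0):
    """Upper-bound neutron production per D-D pulse, assuming the peak
    yield (n/s) is sustained for a fraction `duty` of the pulse length.
    Illustrative arithmetic only, not a transport calculation."""
    return peak_rate * pulse_s * duty
```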
Construction of a three-dimensional interactive model of the skull base and cranial nerves.
Kakizawa, Yukinari; Hongo, Kazuhiro; Rhoton, Albert L
2007-05-01
The goal was to develop an interactive three-dimensional (3-D) computerized anatomic model of the skull base for teaching microneurosurgical anatomy and for operative planning. The 3-D model was constructed using commercially available software (Maya 6.0 Unlimited; Alias Systems Corp., Delaware, MD), a personal computer, four cranial specimens, and six dry bones. Photographs from at least two angles of the superior and lateral views were imported to the 3-D software. Many photographs were needed to produce the model in anatomically complex areas. Careful dissection was needed to expose important structures in the two views. Landmarks, including foramina, bone, and dura mater, were used as reference points. The 3-D model of the skull base and related structures was constructed using more than 300,000 remodeled polygons. The model can be viewed from any angle. It can be rotated 360 degrees in any plane using any structure as the focal point of rotation. The model can be reduced or enlarged using the zoom function. Variable transparencies can be assigned to any structures so that the structures at any level can be seen. Anatomic labels can be attached to the structures in the 3-D model for educational purposes. This computer-generated 3-D model can be observed and studied repeatedly without the time limitations and stresses imposed by surgery. This model may offer the potential to create interactive surgical exercises useful in evaluating multiple surgical routes to specific target areas in the skull base.
Three-dimensional effects for radio frequency antenna modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, M.D.; Batchelor, D.B.; Stallings, D.C.
1993-09-01
Electromagnetic field calculations for radio frequency (rf) antennas in two dimensions (2-D) neglect finite antenna length effects as well as the feeders leading to the main current strap. Comparisons with experiments indicate that these 2-D calculations can overestimate the loading of the antenna and fail to give the correct reactive behavior. To study the validity of the 2-D approximation, the Multiple Antenna Implementation System (MAntIS) has been used to perform 3-D modeling of the power spectrum, plasma loading, and inductance for a relevant loop antenna design. Effects on antenna performance caused by feeders to the main current strap, conducting sidewalls, and finite phase velocity are considered. The plasma impedance matrix for the loading calculation is generated by use of the ORION-1D code. The 3-D model is benchmarked with the 2-D model in the 2-D limit. For finite-length antennas, inductance calculations are found to be in much more reasonable agreement with experiments for 3-D modeling than for the 2-D estimates. The modeling shows that the feeders affect the launched power spectrum in an indirect way by forcing the driven rf current to return in the antenna sidewalls rather than in the plasma as in the 2-D model. Thus, the feeders have much more influence than the plasma on the currents that return in the sidewall. It has also been found that poloidal dependencies in the plasma impedance matrix can reduce the loading from that predicted in the 2-D model. For some plasma parameters, the combined 3-D effects can lead to a reduction in the predicted loading by as much as a factor of 2 from that given by the 2-D model.
Perica, Elizabeth; Sun, Zhonghua
2017-12-01
Recently, three-dimensional (3D) printing has attracted great interest in medicine, and 3D printed models may be produced as part of the pre-surgical planning process in order to better understand the complexities of an individual's anatomy. The aim of this study is to investigate the feasibility of utilising 3D printed liver models as clinical tools in pre-operative planning for resectable hepatocellular carcinoma (HCC) lesions. High-resolution contrast-enhanced computed tomography (CT) images were acquired and utilized to generate a patient-specific 3D printed liver model. Hepatic structures were segmented and edited to produce a printable model delineating intrahepatic anatomy and a resectable HCC lesion. Quantitative assessment of 3D model accuracy compared measurements of critical anatomical landmarks acquired from the original CT images, standard tessellation language (STL) files, and the 3D printed liver model. Comparative analysis of surveys completed by two radiologists investigated the clinical value of 3D printed liver models in radiology. The application of utilizing 3D printed liver models as tools in surgical planning for resectable HCC lesions was evaluated through kappa analysis of questionnaires completed by two abdominal surgeons. A scaled-down multi-material 3D liver model delineating patient-specific hepatic anatomy and pathology was produced, requiring a total production time of 25.25 hours and costing a total of AUD $1,250. A discrepancy was found in the total mean of measurements at each stage of production, with a total mean of 18.28±9.31 mm for measurements acquired from the original CT data, 15.63±8.06 mm for the STL files, and 14.47±7.71 mm for the 3D printed liver model. The 3D liver model did not enhance the radiologists' perception of patient-specific anatomy or pathology. Kappa analysis of the surgeons' responses to survey questions yielded a percentage agreement of 80% and a κ value of 0.38 (P=0.24), indicating fair agreement.
Study outcomes indicate that there is minimal value in utilizing the 3D printed models in diagnostic radiology. The potential usefulness of utilizing patient-specific 3D printed liver models as tools in surgical planning and intraoperative guidance for HCC treatment is verified. However, the feasibility of this application is currently challenged by identified limitations in 3D model production, including the cost and time required for model production, and inaccuracies potentially introduced at each stage of model fabrication.
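The kappa analysis above uses Cohen's kappa, which corrects raw percentage agreement between two raters for the agreement expected by chance. A minimal sketch with toy responses (not the study's survey data):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical responses:
    kappa = (p_obs - p_exp) / (1 - p_exp), where p_obs is observed
    agreement and p_exp is chance agreement from marginal frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a, "paired, non-empty"
    n = len(rater_a)
    categories = sorted(set(rater_a) | set(rater_b))
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)
```

This is why the abstract can report 80% raw agreement yet only κ = 0.38: much of the raw agreement is attributable to chance under the raters' marginal response frequencies.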
The 5th Generation model of Particle Physics
NASA Astrophysics Data System (ADS)
Lach, Theodore
2009-05-01
The Standard Model of particle physics is able to account for all known HEP phenomena, yet it is not able to predict the masses of the quarks or leptons, nor can it explain why they have their respective values. The Checker Board Model (CBM) predicts that there are five generations of quarks and leptons and shows a pattern to those masses, namely that each set of three quarks or leptons (within adjacent generations or within a generation) is related by a geometric mean relationship. If the 2D structure of the nucleus is imagined as a 2D plate spinning on its axis, it would for all practical purposes appear to be a 3D object. The masses of the hypothesized ``up'' and ``dn'' quarks determined by the CBM are 237.31 MeV and 42.392 MeV respectively. These new quarks, in addition to a lepton of 7.4 MeV, make up one of the missing generations. The details of this new particle physics model can be found at the web site: checkerboard.dnsalias.net. The only area where this theory conflicts with existing dogma is the value of the mass of the Top quark; the particle found at Fermilab must be some sort of composite particle containing Top quarks.
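The geometric-mean pattern the CBM posits between mass triplets is simply m2 = sqrt(m1 · m3). A one-line sketch of that relation (the numbers in the test are arbitrary illustrations, not claimed particle masses):

```python
import math

def geometric_mean(m1, m3):
    """The CBM's claimed triplet pattern: the middle mass of a triplet
    is the geometric mean of its neighbours, m2 = sqrt(m1 * m3).
    Illustrative relation only; inputs are arbitrary positive masses."""
    return math.sqrt(m1 * m3)
```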
3D Deep Learning Angiography (3D-DLA) from C-arm Conebeam CT.
Montoya, J C; Li, Y; Strother, C; Chen, G-H
2018-05-01
Deep learning is a branch of artificial intelligence that has demonstrated unprecedented performance in many medical imaging applications. Our purpose was to develop a deep learning angiography method to generate 3D cerebral angiograms from a single contrast-enhanced C-arm conebeam CT acquisition in order to reduce image artifacts and radiation dose. A set of 105 3D rotational angiography examinations were randomly selected from an internal database. All were acquired using a clinical system in conjunction with a standard injection protocol. More than 150 million labeled voxels from 35 subjects were used for training. A deep convolutional neural network was trained to classify each image voxel into 3 tissue types (vasculature, bone, and soft tissue). The trained deep learning angiography model was then applied for tissue classification into a validation cohort of 8 subjects and a final testing cohort of the remaining 62 subjects. The final vasculature tissue class was used to generate the 3D deep learning angiography images. To quantify the generalization error of the trained model, we calculated the accuracy, sensitivity, precision, and Dice similarity coefficients for vasculature classification in relevant anatomy. The 3D deep learning angiography and clinical 3D rotational angiography images were subjected to a qualitative assessment for the presence of intersweep motion artifacts. Vasculature classification accuracy and 95% CI in the testing dataset were 98.7% (98.3%-99.1%). No residual signal from osseous structures was observed for any 3D deep learning angiography testing case, except for small regions in the otic capsule and nasal cavity, compared with 37% (23/62) of the 3D rotational angiographies. Deep learning angiography accurately recreated the vascular anatomy of the 3D rotational angiography reconstructions without a mask.
Deep learning angiography reduced misregistration artifacts induced by intersweep motion, and it reduced radiation exposure required to obtain clinically useful 3D rotational angiography. © 2018 by American Journal of Neuroradiology.
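The Dice similarity coefficient used above to score vasculature classification compares predicted and reference voxel sets: DSC = 2|A∩B| / (|A| + |B|). A minimal sketch over toy voxel identifiers:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between predicted and reference
    voxel sets; 1.0 = perfect overlap, 0.0 = disjoint. Inputs are
    any iterables of voxel identifiers (toy sketch)."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both empty: define as perfect agreement
    return 2 * len(pred & truth) / (len(pred) + len(truth))
```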
Comparative modeling without implicit sequence alignments.
Kolinski, Andrzej; Gront, Dominik
2007-10-01
The number of known protein sequences is about a thousand times larger than the number of experimentally solved 3D structures. For more than half of the known protein sequences, a close or distant structural analog can be identified. The key starting point in classical comparative modeling is to generate the best possible sequence alignment with a template or templates. With decreasing sequence similarity, the number of errors in the alignments increases, and these errors are the main cause of the decreasing accuracy of the molecular models generated. Here we propose a new approach to comparative modeling that does not require the implicit alignment: the model-building phase explores geometric, evolutionary and physical properties of a template (or templates). The proposed method requires prior identification of a template, although the initial sequence alignment is ignored. The model is built using a very efficient reduced-representation search engine, CABS, to find the best possible superposition of the query protein onto the template represented as a 3D multi-featured scaffold. The criteria used include: sequence similarity, predicted secondary structure consistency, local geometric features and hydrophobicity profile. For more difficult cases, the new method qualitatively outperforms existing schemes of comparative modeling. The algorithm unifies de novo modeling, 3D threading and sequence-based methods. The main idea is general and could easily be combined with other efficient modeling tools such as Rosetta, UNRES and others.
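The superposition search described above combines several criteria into a single objective. A toy weighted-sum sketch of that idea; the feature names and weights here are hypothetical placeholders, not the CABS scoring function:

```python
def composite_score(candidate, weights=None):
    """Toy weighted combination of the criteria the method lists:
    sequence similarity, predicted secondary-structure consistency,
    local geometry, and hydrophobicity agreement. All names and
    weights are hypothetical placeholders for illustration."""
    weights = weights or {
        "seq_similarity": 1.0,
        "ss_consistency": 1.0,
        "local_geometry": 0.5,
        "hydrophobicity": 0.5,
    }
    return sum(weights[k] * candidate[k] for k in weights)
```

A search engine would evaluate this score for many candidate superpositions of the query onto the template scaffold and keep the highest-scoring ones.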
New Directions in 3D Medical Modeling: 3D-Printing Anatomy and Functions in Neurosurgical Planning
Árnadóttir, Íris; Gíslason, Magnús; Ólafsson, Ingvar
2017-01-01
This paper illustrates the feasibility and utility of combining cranial anatomy and brain function on the same 3D-printed model, as evidenced by a neurosurgical planning case study of a 29-year-old female patient with a low-grade frontal-lobe glioma. We herein report the rapid prototyping methodology utilized in conjunction with surgical navigation to prepare and plan a complex neurosurgery. The method introduced here combines CT and MRI images with DTI tractography, while using various image segmentation protocols to 3D model the skull base, tumor, and five eloquent fiber tracts. This 3D model is rapid-prototyped and coregistered with patient images and a reported surgical navigation system, establishing a clear link between the printed model and surgical navigation. This methodology highlights the potential for advanced neurosurgical preparation, which can begin before the patient enters the operation theatre. Moreover, the work presented here demonstrates the workflow developed at the National University Hospital of Iceland, Landspitali, focusing on the processes of anatomy segmentation, fiber tract extrapolation, MRI/CT registration, and 3D printing. Furthermore, we present a qualitative and quantitative assessment for fiber tract generation in a case study where these processes are applied in the preparation of brain tumor resection surgery. PMID:29065569
A Downloadable Three-Dimensional Virtual Model of the Visible Ear
Wang, Haobing; Merchant, Saumil N.; Sorensen, Mads S.
2008-01-01
Purpose To develop a three-dimensional (3-D) virtual model of a human temporal bone and surrounding structures. Methods A fresh-frozen human temporal bone was serially sectioned and digital images of the surface of the tissue block were recorded (the ‘Visible Ear’). The image stack was resampled at a final resolution of 50 × 50 × 50/100 µm/voxel, registered in custom software and segmented in PhotoShop® 7.0. The segmented image layers were imported into Amira® 3.1 to generate smooth polygonal surface models. Results The 3-D virtual model presents the structures of the middle, inner and outer ears in their surgically relevant surroundings. It is packaged within a cross-platform freeware, which allows for full rotation, visibility and transparency control, as well as the ability to slice the 3-D model open at any section. The appropriate raw image can be superimposed on the cleavage plane. The model can be downloaded at https://research.meei.harvard.edu/Otopathology/3dmodels/ PMID:17124433
Thievessen, Ingo; Fakhri, Nikta; Steinwachs, Julian; Kraus, Viola; McIsaac, R Scott; Gao, Liang; Chen, Bi-Chang; Baird, Michelle A; Davidson, Michael W; Betzig, Eric; Oldenbourg, Rudolf; Waterman, Clare M; Fabry, Ben
2015-11-01
Vinculin is a filamentous (F)-actin-binding protein enriched in integrin-based adhesions to the extracellular matrix (ECM). Whereas studies in 2-dimensional (2D) tissue culture models have suggested that vinculin negatively regulates cell migration by promoting cytoskeleton-ECM coupling to strengthen and stabilize adhesions, its role in regulating cell migration in more physiologic, 3-dimensional (3D) environments is unclear. To address the role of vinculin in 3D cell migration, we analyzed the morphodynamics, migration, and ECM remodeling of primary murine embryonic fibroblasts (MEFs) with cre/loxP-mediated vinculin gene disruption in 3D collagen I cultures. We found that vinculin promoted 3D cell migration by increasing directional persistence. Vinculin was necessary for persistent cell protrusion, cell elongation, and stable cell orientation in 3D collagen, but was dispensable for lamellipodia formation, suggesting that vinculin-mediated cell adhesion to the ECM is needed to convert actin-based cell protrusion into persistent cell shape change and migration. Consistent with this finding, vinculin was necessary for efficient traction force generation in 3D collagen without affecting myosin II activity and promoted 3D collagen fiber alignment and macroscopic gel contraction. Our results suggest that vinculin promotes directionally persistent cell migration and tension-dependent ECM remodeling in complex 3D environments by increasing cell-ECM adhesion and traction force generation. © FASEB.
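Directional persistence, the quantity vinculin is reported to increase, is commonly summarized as the directionality ratio: net displacement divided by total path length (1 = perfectly straight migration). A minimal sketch over a toy 2D track (not the study's tracking analysis):

```python
import math

def directional_persistence(track):
    """Directionality ratio of a migration track: straight-line
    distance between first and last positions divided by the total
    path length. Track is a list of (x, y) positions over time."""
    path = sum(math.dist(track[i], track[i + 1])
               for i in range(len(track) - 1))
    net = math.dist(track[0], track[-1])
    return net / path if path else 0.0
```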
Development of an atmospheric infrared radiation model with high clouds for target detection
NASA Astrophysics Data System (ADS)
Bellisario, Christophe; Malherbe, Claire; Schweitzer, Caroline; Stein, Karin
2016-10-01
In the field of target detection, the simulation of the camera FOV (field of view) background is a significant issue, as the presence of heterogeneous clouds can have a strong impact on a target detection algorithm. To address this issue, we present the construction of the CERAMIC package (Cloudy Environment for RAdiance and MIcrophysics Computation), which combines cloud microphysical computation and 3D radiance computation to produce a 3D atmospheric infrared radiance in the presence of clouds. The input to CERAMIC starts with an observer with a spatial position and a defined FOV (by means of a zenith angle and an azimuth angle). We introduce a 3D cloud generator provided by the French LaMP for a statistical, simplified-physics approach. The cloud generator is driven by atmospheric profiles including a heterogeneity factor for 3D fluctuations. CERAMIC also includes a cloud database from the French CNRM for a physical approach. We present statistics on the spatial and temporal evolution of the clouds. Molecular optical properties are provided by the model MATISSE (Modélisation Avancée de la Terre pour l'Imagerie et la Simulation des Scènes et de leur Environnement). The 3D radiance is computed with the model LUCI (for LUminance de CIrrus). It takes into account 3D microphysics with a resolution of 5 cm⁻¹ over a SWIR bandwidth. To keep computation time low, most of the radiance contributors are calculated with analytical expressions. The multiple scattering phenomena are more difficult to model; here, a discrete ordinate method with correlated-k precision is used to compute the average radiance. We add a 3D fluctuation model (based on a behavioral model) taking into account microphysics variations. Finally, the following quantities are calculated: transmission, thermal radiance, single scattering radiance, radiance observed through the cloud and multiple scattering radiance.
Spatial images are produced with a dimension of 10 km x 10 km and a resolution of 0.1 km, with each contribution to the radiance separated. We present first results with examples of typical scenarios. A 1D comparison of results is made against the MATISSE model, separating each calculated radiance component in order to validate the outputs. The 3D performance of the code is shown by comparing LUCI to the SHDOM model, a reference code that uses the Spherical Harmonic Discrete Ordinate Method for 3D atmospheric radiative transfer. The results obtained by the different codes are in strong agreement, and the sources of the small differences are considered. An important gain in computation time is observed for LUCI versus SHDOM. We finally conclude with various scenarios for case analysis.
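As a minimal, hypothetical illustration of two of the separated contributors (transmission and thermal radiance), a homogeneous, non-scattering cloud layer obeying Beer-Lambert attenuation can be modeled as follows; this is a textbook sketch, not the actual LUCI formulation:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody [W m^-2 sr^-1 m^-1]."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / np.expm1(b)

def layer_transmission(tau):
    """Beer-Lambert direct transmission through optical depth tau."""
    return np.exp(-tau)

def layer_thermal_radiance(wavelength_m, temp_k, tau):
    """Thermal emission of a homogeneous, non-scattering layer:
    emissivity (1 - exp(-tau)) times the Planck function."""
    return (1.0 - np.exp(-tau)) * planck_radiance(wavelength_m, temp_k)

wl = 2.0e-6   # a SWIR wavelength, 2.0 um (assumed for illustration)
tau = 1.5     # assumed cloud optical depth
print(layer_transmission(tau))                 # ~0.223
print(layer_thermal_radiance(wl, 250.0, tau))  # positive, in W m^-2 sr^-1 m^-1
```

The single- and multiple-scattering contributions the abstract lists require the phase function and a solver such as the discrete ordinate method, which is beyond this sketch.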
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Shie, C.-H.; Simpson, J.; Starr, D.; Johnson, D.; Sud, Y.
2003-01-01
Real clouds and cloud systems are inherently three-dimensional (3D). Because of limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing; in these 3D simulations, the model domain was small and the integration time was 6 hours. Only recently have 3D experiments been performed over multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research. The results indicate that surface precipitation and latent heating profiles are very similar between the 2D and 3D simulations of the same cases. The reason for the strong similarity between the 2D and 3D CRM simulations is that the observed large-scale advective tendencies of potential temperature, water vapor mixing ratio, and horizontal momentum were used as the main forcing in both the 2D and 3D models. Interestingly, the 2D and 3D versions of the CRMs used at CSU and the U.K. Met Office showed significant differences in rainfall and cloud statistics for three ARM cases. The major objectives of this project are to calculate and examine: (1) the surface energy and water budgets; (2) the precipitation processes in the convective and stratiform regions; (3) the cloud upward and downward mass fluxes in the convective and stratiform regions; (4) cloud characteristics such as size, updraft intensity and lifetime; and (5) the entrainment and detrainment rates associated with clouds and cloud systems that developed in TOGA COARE, GATE, SCSMEX, ARM and KWAJEX. Of special note is that the analyzed (model-generated) data sets are all produced by the same current version of the GCE model, i.e., with consistent model physics and configurations. Trajectory analyses and inert tracer calculations will be conducted to identify the differences and similarities in the organization of convection between simulated 2D and 3D cloud systems.
NASA Astrophysics Data System (ADS)
Shen, Yi; Diplas, Panayiotis
2008-01-01
Complex flow patterns generated by irregular channel topography, such as boulders, submerged large woody debris, riprap and spur dikes, provide unique habitat for many aquatic organisms. Numerical modeling of the flow structures surrounding these obstructions is challenging, yet it represents an important tool for aquatic habitat assessment. In this study, the ability of two- (2-D) and three-dimensional (3-D) computational fluid dynamics models to reproduce these localized complex flow features is examined. The 3-D model is validated with laboratory data obtained from the literature for the case of a flow around a hemisphere under emergent and submerged conditions. The performance of the 2-D and 3-D models is then evaluated by comparing the numerical results with field measurements of flow around several boulders located at a reach of the Smith River, a regulated mountainous stream, obtained at base and peak flows. Close agreement between measured values and the velocity profiles predicted by the two models is obtained outside the wakes behind the hemisphere and boulders. However, the results suggest that in the vicinity of these obstructions the 3-D model is better suited for reproducing the circulation flow behavior at both low and high discharges. Application of the 2-D and 3-D models to meso-scale stream flows of ecological significance is furthermore demonstrated by using a recently developed spatial hydraulic metric to quantify flow complexity surrounding a number of brown trout spawning sites. It is concluded that the 3-D model can provide a much more accurate description of the heterogeneous velocity patterns favored by many aquatic species over a broad range of flows, especially under deep flow conditions when the various obstructions are submerged. Issues pertaining to selection of appropriate models for a variety of flow regimes and potential implication of the 3-D model on the development of better habitat suitability criteria are discussed. 
The research suggests ways of improving the modeling practices for ecosystem management studies.
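The abstract does not name the spatial hydraulic metric it applies. As a hypothetical stand-in for quantifying flow complexity on a measurement grid, one can compute the vertical vorticity field from 2D velocity data:

```python
import numpy as np

def vorticity_z(u, v, dx, dy):
    """Vertical vorticity dv/dx - du/dy on a uniform grid.

    u, v: 2D arrays of velocity components, indexed [y, x].
    A hypothetical stand-in for a flow-complexity measure; the
    paper's actual spatial hydraulic metric is not specified here.
    """
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    return dvdx - dudy

# Solid-body rotation u = -omega*y, v = omega*x has vorticity 2*omega.
omega = 0.5
y, x = np.meshgrid(np.arange(5.0), np.arange(5.0), indexing="ij")
u, v = -omega * y, omega * x
curl = vorticity_z(u, v, dx=1.0, dy=1.0)
print(curl.mean())  # 1.0 (= 2*omega)
```

High-vorticity patches behind boulders are exactly the recirculation features the 3-D model reproduced better than the 2-D model.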
Cao, Yongqiang; Grossberg, Stephen
2012-02-01
A laminar cortical model of stereopsis and 3D surface perception is developed and simulated. The model shows how spiking neurons that interact in hierarchically organized laminar circuits of the visual cortex can generate analog properties of 3D visual percepts. The model describes how monocular and binocular oriented filtering interact with later stages of 3D boundary formation and surface filling-in in the LGN and cortical areas V1, V2, and V4. It proposes how interactions between layers 4, 3B, and 2/3 in V1 and V2 contribute to stereopsis, and how binocular and monocular information combine to form 3D boundary and surface representations. The model suggests how surface-to-boundary feedback from V2 thin stripes to pale stripes helps to explain how computationally complementary boundary and surface formation properties lead to a single consistent percept, eliminate redundant 3D boundaries, and trigger figure-ground perception. The model also shows how false binocular boundary matches may be eliminated by Gestalt grouping properties. In particular, the disparity filter, which helps to solve the correspondence problem by eliminating false matches, is realized using inhibitory interneurons as part of the perceptual grouping process by horizontal connections in layer 2/3 of cortical area V2. The 3D sLAMINART model simulates 3D surface percepts that are consciously seen in 18 psychophysical experiments. These percepts include contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, Panum's limiting case, the Venetian blind illusion, stereopsis with polarity-reversed stereograms, da Vinci stereopsis, and perceptual closure. The model hereby illustrates a general method of unlumping rate-based models that use the membrane equations of neurophysiology into models that use spiking neurons, and which may be embodied in VLSI chips that use spiking neurons to minimize heat production. 
Copyright © 2011 Elsevier Ltd. All rights reserved.
Yan, Yuanwei; Song, Liqing; Tsai, Ang-Chen; Ma, Teng; Li, Yan
2016-01-01
Conventional two-dimensional (2-D) culture systems cannot provide the large numbers of human pluripotent stem cells (hPSCs) and their derivatives that are required for commercial and clinical applications in in vitro drug screening, disease modeling, and potentially cell therapy. Technologies that support three-dimensional (3-D) suspension culture, such as stirred bioreactors, are generally considered promising approaches to produce the required cells. Recently, suspension bioreactors have also been used to generate mini-brain-like structures from hPSCs for disease modeling, showing the important role of bioreactors in stem cell culture. This chapter describes a detailed culture protocol for neural commitment of hPSCs into neural progenitor cell (NPC) spheres using a spinner bioreactor. The basic steps to prepare hPSCs for bioreactor inoculation are illustrated, from cell thawing to cell propagation. The method for generating NPCs from hPSCs in the spinner bioreactor, along with a static control, is then described. The protocol in this study can be applied to the generation of NPCs from hPSCs for further neural subtype specification, 3-D neural tissue development, or potential preclinical studies or clinical applications in neurological diseases.
3D CSEM inversion based on goal-oriented adaptive finite element method
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method, which efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme in which independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt a dual-grid approach in which the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine mesh generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multi-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the mapping of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. 
We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled source EM data generated for a complex 3D offshore model with significant seafloor topography.
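One regularized Gauss-Newton update of the kind used in Occam-style inversion can be sketched as follows; the forward model, Jacobian and roughness operator here are toy stand-ins, not the MARE3DEM finite-element machinery:

```python
import numpy as np

def gauss_newton_step(m, d_obs, forward, jacobian, R, lam):
    """One regularized Gauss-Newton update for least-squares inversion.

    The new model minimizes the linearized, model-space Occam objective
      ||d_obs - f(m) - J (m_new - m)||^2 + lam * ||R m_new||^2,
    whose normal equations are
      (J^T J + lam R^T R) m_new = J^T (d_obs - f(m) + J m).
    """
    J = jacobian(m)
    r = d_obs - forward(m)
    A = J.T @ J + lam * (R.T @ R)
    b = J.T @ (r + J @ m)   # Occam form: the penalty acts on the *new* model
    return np.linalg.solve(A, b)

# Toy linear problem: f(m) = G m, so one step recovers the regularized solution.
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 5))
m_true = np.arange(1.0, 6.0)
d = G @ m_true
R = np.eye(5)
m = gauss_newton_step(np.zeros(5), d, lambda m: G @ m, lambda m: G, R, lam=1e-6)
print(np.round(m, 3))  # approximately [1. 2. 3. 4. 5.]
```

In the real code the Jacobian is never formed column-by-column from scratch; the adjoint-reciprocity trick and the reused direct-solver factorization make assembling J the cheap part, as the abstract describes.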
Icing Analysis of a Swept NACA 0012 Wing Using LEWICE3D Version 3.48
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.
2014-01-01
Icing calculations were performed for a NACA 0012 swept wing tip using LEWICE3D Version 3.48 coupled with the ANSYS CFX flow solver. The calculated ice shapes were compared to experimental data generated in the NASA Glenn Icing Research Tunnel (IRT). The IRT tests were designed to test the performance of the LEWICE3D ice void density model, which was developed to improve the prediction of swept wing ice shapes. Icing tests were performed for a range of temperatures at two different droplet inertia parameters and two different sweep angles. The predicted mass agreed well with the experiment, with an average difference of 12%. The LEWICE3D ice void density model under-predicted void density by an average of 30% for the large inertia parameter cases and by 63% for the small inertia parameter cases. This under-prediction in void density resulted in an over-prediction of ice area by an average of 115%. The LEWICE3D ice void density model produced a larger average area difference with experiment than the standard LEWICE density model, which does not account for the voids in the swept wing ice shape (115% and 75%, respectively), but it produced ice shapes that were deemed more appropriate because they were conservative (larger than experiment). Major contributors to the overly conservative ice shape predictions were deficiencies in the leading edge heat transfer and the sensitivity of the void ice density model to the particle inertia parameter. The scallop features present on the ice shapes were thought to generate interstitial flow and horseshoe vortices that enhance the leading edge heat transfer. A set of changes to improve the leading edge heat transfer and the void density model was tested. The changes improved the ice shape predictions considerably. More work needs to be done to evaluate the performance of these modifications for a wider range of geometries and icing conditions.
3D in vitro technology for drug discovery.
Hosseinkhani, Hossein
2012-02-01
Three-dimensional (3D) in vitro systems that can mimic organ and tissue structure and function in vivo will be of great benefit for a variety of biological applications, from basic biology to toxicity testing and drug discovery. There have been several attempts to generate 3D tissue models, but most require costly equipment, and their most serious disadvantage is that they remain far from mature human organs in vivo. Because of these problems, research and development in the drug discovery, toxicity testing and biotech industries are highly expensive, involve the sacrifice of countless animals, and take several years to bring a single drug/product to market or to determine the toxicity of chemical entities. Our group has been actively working on several alternative models, merging biomaterials science, nanotechnology and biological principles to generate 3D in vitro living organs, called "Human Organs-on-Chip", that mimic natural organs/tissues, in order to reduce animal testing and clinical trials. We have fabricated a novel type of mechanically and biologically bio-mimicking collagen-based hydrogel that provides interconnected mini-wells in which 3D cell/organ culture of human samples, together with extracellular matrix (ECM) molecules, is possible in a manner similar to human organs. These products mimic the physical, chemical, and biological properties of natural organs and tissues at different scales. This paper reviews the outcomes of our experiments in this direction and future perspectives.
a Smartphone-Based 3d Pipeline for the Creative Industry - the Replicate EU Project
NASA Astrophysics Data System (ADS)
Nocerino, E.; Lago, F.; Morabito, D.; Remondino, F.; Porzi, L.; Poiesi, F.; Rota Bulo, S.; Chippendale, P.; Locher, A.; Havlena, M.; Van Gool, L.; Eder, M.; Fötschl, A.; Hilsmann, A.; Kausch, L.; Eisert, P.
2017-02-01
During the last two decades we have witnessed great improvements in ICT hardware and software technologies. Three-dimensional content is now becoming commonplace in many applications. Although 3D technologies have for many years been used by researchers and experts in the generation of assets, these tools are now becoming commercially available to every citizen. This is especially the case for smartphones, which are powerful enough and sufficiently widespread to perform a huge variety of activities (e.g. paying, calling, communication, photography, navigation, localization, etc.), including, very recently, running 3D reconstruction pipelines. The REPLICATE project is tackling this particular issue, with the ambitious vision of enabling ubiquitous 3D creativity via the development of tools for mobile 3D-asset generation on smartphones/tablets. This article presents the REPLICATE project's concept and some of its ongoing activities, with particular attention paid to advances made in the first year of work. The article thus focuses on the system architecture definition, selection of optimal frames for 3D cloud reconstruction, automated generation of sparse and dense point clouds, mesh modelling techniques and post-processing actions. Experiments so far have concentrated on indoor objects and some simple heritage artefacts; in the long term, however, we will target a larger variety of scenarios and communities.
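The "selection of optimal frames" step can be illustrated with a common sharpness heuristic (variance of the image Laplacian); this is an assumed criterion for illustration, not necessarily the one used in REPLICATE:

```python
import numpy as np

def sharpness(gray):
    """Variance of a discrete Laplacian as a focus/sharpness score.

    gray: 2D float array (grayscale frame). Higher = sharper.
    """
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def select_keyframes(frames, k):
    """Return indices of the k sharpest frames, a simple heuristic for
    choosing which frames to feed into a 3D reconstruction pipeline."""
    scores = [sharpness(f) for f in frames]
    return sorted(np.argsort(scores)[-k:].tolist())

smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # featureless ramp
noisy = np.random.default_rng(1).normal(size=(64, 64))           # high-frequency detail
print(select_keyframes([smooth, noisy, smooth], k=1))  # [1]
```

A production pipeline would combine such a score with baseline and overlap criteria so that the kept frames also cover the scene well.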
New insights into insect's silent flight. Part II: sound source and noise control
NASA Astrophysics Data System (ADS)
Xue, Qian; Geng, Biao; Zheng, Xudong; Liu, Geng; Dong, Haibo
2016-11-01
The flapping flight of aerial animals achieves excellent aerodynamic performance while generating little noise. In this study, the unsteady flow and acoustic characteristics of the flapping wing are numerically investigated for three-dimensional (3D) models of the cicada Tibicen linnei in free forward flight. A single cicada wing is modelled as a membrane with prescribed motion reconstructed by Wan et al. (2015). The flow and acoustic fields around the flapping wing are solved with an immersed-boundary-method-based incompressible flow solver and a linearized-perturbed-compressible-equations-based acoustic solver. The 3D simulation allows examination of both the directivity and the frequency composition of the produced sound in full space. The mechanism of sound generation by the flapping wing is analyzed through correlations between acoustic signals and flow features. Along with a flexible wing model, a rigid wing model is also simulated; the results from these two cases are compared to investigate the effects of wing flexibility on sound generation. This study is supported by NSF CBET-1313217 and AFOSR FA9550-12-1-0071.
Collective cell behavior on basement membranes floating in space
NASA Astrophysics Data System (ADS)
Ellison, Sarah; Bhattacharjee, Tapomoy; Morley, Cameron; Sawyer, W.; Angelini, Thomas
The basement membrane is an essential part of the polarity of endothelial and epithelial tissues. In tissue culture and organ-on-chip devices, monolayer polarity can be established by coating flat surfaces with extracellular matrix proteins and tuning the trans-substrate permeability. In epithelial 3D culture, spheroids spontaneously establish inside-out polarity, morphing into hollow shell-like structures called acini, generating their own basement membrane on the inner radius of the shell. However, 3D culture approaches generally lack the high degree of control provided by the 2D culture plate or organ-on-chip devices, making it difficult to create more faithful in vitro tissue models with complex surface curvature and morphology. Here we present a method for 3D printing complex basement membranes covered in cells. We 3D print collagen-I and Matrigel into a 3D growth medium made from jammed microgels. This soft, yielding material allows extracellular matrix to be formed as complex surfaces and shapes, floating in space. We then distribute MCF10A epithelial cells across the polymerized surface. We envision employing this strategy to study 3D collective cell behavior in numerous model tissue layers, beyond this simple epithelial model.
NASA Astrophysics Data System (ADS)
Szilagyi, John; Parchamy, Homaira; Masnavi, Majid; Richardson, Martin
2017-01-01
The absolute spectral irradiances of laser-plasmas produced from planar zinc targets are determined over a wavelength region of 150 to 250 nm. Strong spectral radiation is generated using 60 ns full-width-at-half-maximum, 1.0 μm wavelength laser pulses with incident laser intensities as low as ~5 × 10⁸ W cm⁻². A typical radiation conversion efficiency of ~2%/2π sr is measured. Numerical calculations using a comprehensive radiation-hydrodynamics model reveal the strong experimental spectra to originate mainly from 3d⁹4s4p-3d⁹4s², 3d⁹4s4d-3d⁹4s4p, and 3d⁹4p-3d⁹4s, 3d⁹4d-3d⁹4p unresolved-transition arrays in singly and doubly ionized zinc, respectively.
NASA Astrophysics Data System (ADS)
Ragno, Rino; Ballante, Flavio; Pirolli, Adele; Wickersham, Richard B.; Patsilinakos, Alexandros; Hesse, Stéphanie; Perspicace, Enrico; Kirsch, Gilbert
2015-08-01
Vascular endothelial growth factor receptor-2 (VEGFR-2) is a key element in angiogenesis, the process by which new blood vessels are formed, and is thus an important pharmaceutical target. Here, 3-D quantitative structure-activity relationship (3-D QSAR) modeling was used to build a quantitative screening and pharmacophore model of the VEGFR-2 receptor for the design of inhibitors with improved activities. Most of the available experimental data was used as a training set to derive eight optimized and fully cross-validated mono-probe models and one multi-probe quantitative model. Notable is the use of 262 molecules, aligned following both structure-based and ligand-based protocols, as an external test set, confirming the 3-D QSAR models' predictive capability and their usefulness in designing new VEGFR-2 inhibitors. To our knowledge from a survey of the literature, this is the first wide-ranging computational medicinal chemistry application to VEGFR-2 inhibitors.
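The cross-validation underlying a "fully cross-validated" QSAR model can be sketched with a leave-one-out q² on a plain linear model; the descriptors and activities below are synthetic stand-ins for the probe-based interaction fields the authors actually use:

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2 for a linear QSAR-style model.

    X: (n, p) descriptor matrix; y: (n,) activities.
    q^2 = 1 - PRESS / SS_total; values near 1 indicate good predictivity.
    """
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)  # fit without sample i
        press += (y[i] - X[i] @ coef) ** 2                        # predict held-out sample
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - press / ss_tot

rng = np.random.default_rng(2)
X = np.column_stack([rng.normal(size=(30, 3)), np.ones(30)])  # 3 descriptors + intercept
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.01 * rng.normal(size=30)
print(round(loo_q2(X, y), 3))  # close to 1.0
```

An external test set, as used here with 262 molecules, remains the stronger check, since leave-one-out statistics can be optimistic.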
Skin tissue generation by laser cell printing.
Koch, Lothar; Deiwick, Andrea; Schlie, Sabrina; Michael, Stefanie; Gruene, Martin; Coger, Vincent; Zychlinski, Daniela; Schambach, Axel; Reimers, Kerstin; Vogt, Peter M; Chichkov, Boris
2012-07-01
For the aim of ex vivo engineering of functional tissue substitutes, Laser-assisted BioPrinting (LaBP) is under investigation for the arrangement of living cells in predefined patterns. So far, three-dimensional (3D) arrangements of single cell types or two-dimensional (2D) patterning of different cell types have been presented, and it has been shown that cells are not harmed by the printing procedure. We now demonstrate for the first time the 3D arrangement of vital cells by LaBP as multicellular grafts analogous to a native archetype, and the formation of tissue by these cells. For this purpose, fibroblasts and keratinocytes embedded in collagen were printed in 3D as a simple example of skin tissue. To study cell functions and the tissue formation process in 3D, characteristics such as cell localisation and proliferation were investigated. We further analysed the formation of adherens and gap junctions, which are fundamental for tissue morphogenesis and cohesion. This study demonstrates that LaBP is an outstanding tool for the generation of multicellular 3D constructs mimicking tissue functions. These findings are promising for the realisation of 3D in vitro models and tissue substitutes for many applications in tissue engineering. Copyright © 2012 Wiley Periodicals, Inc.
MO-B-BRD-01: Creation of 3D Printed Phantoms for Clinical Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehler, E.
This session is designed so that the learning objectives are practical. The intent is that the attendee may take home an understanding of not just the technology, but also the logistical steps necessary to execute these 3D printing techniques in the clinic. Four practical 3D printing topics will be discussed: (i) Creating bolus and compensators for photon machines; (ii) tools for proton therapy; (iii) clinical applications in imaging; (iv) custom phantom design for clinic and research use. The use of 3D printers within the radiation oncology setting is proving to be a useful tool for creating patient specific bolus and compensators with the added benefit of cost savings. Creating the proper protocol is essential to ensuring that the desired effect is achieved and modeled in the treatment planning system. The critical choice of printer material (since it determines the interaction with the radiation) will be discussed. Selection of 3D printer type, design methods, verification of dose calculation, and the printing process will be detailed to give the basis for establishing your own protocol for electron and photon fields. A practical discussion of likely obstacles that may be encountered will be included. The diversity of systems and techniques in proton facilities leads to different facilities having very different requirements for beam modifying hardware and quality assurance devices. Many departments find the need to design and fabricate facility-specific equipment, making 3D printing an attractive technology. 3D printer applications in proton therapy will be discussed, including beam filters and compensators, and the design of proton therapy specific quality assurance tools. Quality control specific to 3D printing in proton therapy will be addressed. Advantages and disadvantages of different printing technology for these applications will also be discussed. 3D printing applications using high-resolution radiology-based imaging data will be presented. 
This data is used to 3D print individualized physical models of a patient’s unique anatomy to aid in planning complex and challenging surgical procedures. Methods, techniques and imaging requirements for 3D printing anatomic models from imaging data will be discussed. Specific applications currently being used in the radiology clinic will be detailed. Standardized phantoms for radiation therapy are abundant. However, custom phantom designs can be advantageous for both clinical tasks and research. 3D printing is a useful method of custom fabrication that allows one to construct custom objects relatively quickly. Possibilities for custom radiotherapy phantoms range from 3D printing a hollow shell and filling the shell with tissue equivalent materials to fully printing the entire phantom with materials that are tissue equivalent as well as suitable for 3D printing. A range of materials is available for use in radiotherapy phantoms, and in the case of phantoms for dosimetric measurements this choice is critical. The necessary steps will be discussed, including: modalities of 3D model generation, 3D model requirements for 3D printing, generation of machine instructions from the 3D model, 3D printing techniques, choice of phantom material, and troubleshooting techniques for each step in the process. Case examples of 3D printed phantoms will be shown. Learning Objectives: Understand the types of 3D modeling software required to design your device, the file formats required for data transfer from design software to 3D printer, and general troubleshooting techniques for each step of the process. Learn the differences between materials and design for photons vs. electrons vs. protons. Understand the importance of material choice and design geometries for your custom phantoms. Learn specific steps of quality assurance and quality control for 3D printed beam filters and compensators for proton therapy. Learn of special 3D printing applications for imaging. 
Cunha: Research support from Phillips Healthcare.
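The hand-off from a 3D model to a printer typically passes through a triangle-mesh file such as STL. A minimal, hypothetical example writes an ASCII STL for a rectangular bolus blank; real clinical workflows export from modeling software, and the facet normals here are left zero for the slicer to recompute:

```python
def write_box_stl(path, lx, ly, lz):
    """Write a minimal ASCII STL of a rectangular slab (e.g. a flat
    bolus blank); a hypothetical example of handing a model to a slicer."""
    # 8 corner vertices, index = 4*z + 2*y + x over the unit choices below
    v = [(x, y, z) for z in (0.0, lz) for y in (0.0, ly) for x in (0.0, lx)]
    # 12 triangles (two per face), as index triples into v
    faces = [(0, 2, 1), (1, 2, 3), (4, 5, 6), (5, 7, 6),
             (0, 1, 4), (1, 5, 4), (2, 6, 3), (3, 6, 7),
             (0, 4, 2), (2, 4, 6), (1, 3, 5), (3, 7, 5)]
    with open(path, "w") as f:
        f.write("solid bolus\n")
        for a, b, c in faces:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for i in (a, b, c):
                f.write("      vertex %.6f %.6f %.6f\n" % v[i])
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid bolus\n")

write_box_stl("bolus.stl", 100.0, 100.0, 10.0)  # 10 cm x 10 cm x 1 cm slab
print(open("bolus.stl").read().count("facet normal"))  # 12
```

The slicer then converts this mesh into machine instructions (G-code), which is the "generation of machine instructions from the 3D model" step named in the session outline.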
MO-B-BRD-03: Principles, Pitfalls and Techniques of 3D Printing for Bolus and Compensators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, J.
This session is designed so that the learning objectives are practical. The intent is that the attendee may take home an understanding of not just the technology, but also the logistical steps necessary to execute these 3D printing techniques in the clinic. Four practical 3D printing topics will be discussed: (i) Creating bolus and compensators for photon machines; (ii) tools for proton therapy; (iii) clinical applications in imaging; (iv) custom phantom design for clinic and research use. The use of 3D printers within the radiation oncology setting is proving to be a useful tool for creating patient specific bolus andmore » compensators with the added benefit of cost savings. Creating the proper protocol is essential to ensuring that the desired effect is achieved and modeled in the treatment planning system. The critical choice of printer material (since it determines the interaction with the radiation) will be discussed. Selection of 3D printer type, design methods, verification of dose calculation, and the printing process will be detailed to give the basis for establishing your own protocol for electron and photon fields. A practical discussion of likely obstacles that may be encountered will be included. The diversity of systems and techniques in proton facilities leads to different facilities having very different requirements for beam modifying hardware and quality assurance devices. Many departments find the need to design and fabricate facility-specific equipment, making 3D printing an attractive technology. 3D printer applications in proton therapy will be discussed, including beam filters and compensators, and the design of proton therapy specific quality assurance tools. Quality control specific to 3D printing in proton therapy will be addressed. Advantages and disadvantages of different printing technology for these applications will also be discussed. 3D printing applications using high-resolution radiology-based imaging data will be presented. 
This data is used to 3D print individualized physical models of a patient's unique anatomy to aid in planning complex and challenging surgical procedures. Methods, techniques and imaging requirements for 3D printing anatomic models from imaging data will be discussed. Specific applications currently being used in the radiology clinic will be detailed. Standardized phantoms for radiation therapy are abundant. However, custom phantom designs can be advantageous for both clinical tasks and research. 3D printing is a useful method of custom fabrication that allows one to construct custom objects relatively quickly. Possibilities for custom radiotherapy phantoms range from 3D printing a hollow shell and filling the shell with tissue equivalent materials to fully printing the entire phantom with materials that are tissue equivalent as well as suitable for 3D printing. A range of materials is available for use in radiotherapy phantoms, and in the case of phantoms for dosimetric measurements this choice is critical. The necessary steps will be discussed, including: modalities of 3D model generation, 3D model requirements for 3D printing, generation of machine instructions from the 3D model, 3D printing techniques, choice of phantom material, and troubleshooting techniques for each step in the process. Case examples of 3D printed phantoms will be shown. Learning Objectives: Understand the types of 3D modeling software required to design your device, the file formats required for data transfer from design software to 3D printer, and general troubleshooting techniques for each step of the process. Learn the differences between materials and design for photons vs. electrons vs. protons. Understand the importance of material choice and design geometries for your custom phantoms. Learn specific steps of quality assurance and quality control for 3D printed beam filters and compensators for proton therapy. Learn of special 3D printing applications for imaging.
Cunha: Research support from Philips Healthcare.
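The session outline mentions the file formats needed to move a design from modeling software to the printer; in practice this is usually STL. As an illustrative sketch (not part of the session material), a binary STL file can be written with nothing but the standard `struct` module, following the published layout: an 80-byte header, a little-endian triangle count, then 50 bytes per triangle (normal, three vertices, attribute count).

```python
import struct

def write_binary_stl(path, triangles):
    """Write triangles, each ((normal), (v1), (v2), (v3)), to a binary STL file."""
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                       # 80-byte header (unused)
        f.write(struct.pack("<I", len(triangles)))  # little-endian triangle count
        for normal, v1, v2, v3 in triangles:
            for vec in (normal, v1, v2, v3):
                f.write(struct.pack("<3f", *vec))  # 3 x float32
            f.write(struct.pack("<H", 0))          # attribute byte count

# a single right triangle in the z = 0 plane
tri = [((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
write_binary_stl("tri.stl", tri)
```

Each triangle record is 12 float32 values plus a uint16, so the file size is 80 + 4 + 50 per triangle, which makes malformed exports easy to spot.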
MO-B-BRD-00: Clinical Applications of 3D Printing
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
MO-B-BRD-04: Sterilization for 3D Printed Brachytherapy Applicators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunha, J.
MO-B-BRD-02: 3D Printing in the Clinic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remmes, N.
Generating patient-specific pulmonary vascular models for surgical planning
NASA Astrophysics Data System (ADS)
Murff, Daniel; Co-Vu, Jennifer; O'Dell, Walter G.
2015-03-01
Each year in the U.S., 7.4 million surgical procedures involving the major vessels are performed. Many of these patients require multiple surgeries, and many of the procedures include "surgical exploration". Procedures of this kind carry a significant amount of risk, with up to a 17.4% predicted mortality rate. This is especially concerning for our target population of pediatric patients with congenital abnormalities of the heart and major pulmonary vessels. This paper offers a novel approach to surgical planning that includes studying virtual and physical models of an individual patient's pulmonary vasculature before operation, obtained from conventional 3D X-ray computed tomography (CT) scans of the chest. These models would provide clinicians with a non-invasive, intricately detailed representation of patient anatomy, and could reduce the need for invasive planning procedures such as exploratory surgery. Researchers involved in the AirPROM project have already demonstrated the utility of virtual and physical models in treatment planning for the airways of the chest, and clinicians have acknowledged the potential benefit of such a technology. A method for creating patient-derived physical models is demonstrated on pulmonary vasculature extracted from a contrast-enhanced CT scan of an adult human. Using a modified version of the NIH ImageJ program, a series of image processing functions is used to extract and mathematically reconstruct the vascular tree structures of interest. An auto-generated STL file is sent to a 3D printer to create a physical model of the major pulmonary vasculature generated from 3D CT scans of patients.
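The vessel-extraction step described above can be illustrated with the simplest possible approach: threshold the contrast-enhanced volume, then keep the largest connected component so isolated bright noise is discarded. This is a hedged toy sketch in plain NumPy, not the authors' ImageJ pipeline:

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Return the largest 6-connected component of a 3D boolean mask (BFS)."""
    visited = np.zeros(mask.shape, dtype=bool)
    best, best_size = np.zeros(mask.shape, dtype=bool), 0
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    for seed in zip(*np.nonzero(mask)):
        if visited[seed]:
            continue
        comp, queue = [], deque([seed])
        visited[seed] = True
        while queue:
            v = queue.popleft()
            comp.append(v)
            for d in offsets:
                n = tuple(v[i] + d[i] for i in range(3))
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not visited[n]:
                    visited[n] = True
                    queue.append(n)
        if len(comp) > best_size:
            best_size = len(comp)
            best = np.zeros(mask.shape, dtype=bool)
            for v in comp:
                best[v] = True
    return best

# toy CT volume: a 3-voxel "vessel" segment plus one isolated bright voxel
vol = np.zeros((5, 5, 5))
vol[1:4, 2, 2] = 200.0   # vessel segment
vol[0, 0, 0] = 200.0     # noise voxel
vessel = largest_component(vol > 100)
print(int(vessel.sum()))  # 3
```

A real pipeline would follow this with surface extraction (e.g. marching cubes) to produce the STL mesh the abstract mentions.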
First Experiences with Kinect v2 Sensor for Close Range 3d Modelling
NASA Astrophysics Data System (ADS)
Lachat, E.; Macher, H.; Mittet, M.-A.; Landes, T.; Grussenmeyer, P.
2015-02-01
RGB-D cameras, also known as range imaging cameras, are a recent generation of sensors. As they are suitable for measuring distances to objects at a high frame rate, such sensors are increasingly used for 3D acquisitions, and more generally for applications in robotics or computer vision. This kind of sensor became popular especially after the Kinect v1 (Microsoft) arrived on the market in November 2010. In July 2014, Microsoft released a new sensor, the Kinect for Windows v2, based on a different technology (time-of-flight) than its first device. However, because it was initially developed for video games, the quality assessment of this new device for 3D modelling represents a major investigation axis. In this paper, first experiences with the Kinect v2 sensor are related, and its suitability for close-range 3D modelling is investigated. For this purpose, error sources on output data as well as a calibration approach are presented.
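A basic operation when assessing any such range camera is back-projecting its depth image into a 3D point cloud with the pinhole model, X = (u - cx)Z/fx and Y = (v - cy)Z/fy. The sketch below assumes illustrative intrinsics, not calibrated Kinect v2 values:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to an N x 3 point cloud
    using the pinhole model; zero-depth pixels are treated as invalid."""
    v, u = np.indices(depth.shape)        # pixel row (v) and column (u) grids
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1)
    return pts[z > 0]                     # drop invalid pixels

# toy 2x2 depth map with one invalid pixel; intrinsics are assumptions
d = np.array([[1.0, 1.0],
              [0.0, 2.0]])
pts = depth_to_points(d, fx=365.0, fy=365.0, cx=0.5, cy=0.5)
print(pts.shape)  # (3, 3)
```

Comparing such back-projected clouds against a reference geometry is one way the error sources mentioned in the abstract can be quantified.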
Automatic Generation of Building Models with Levels of Detail 1-3
NASA Astrophysics Data System (ADS)
Nguatem, W.; Drauschke, M.; Mayer, H.
2016-06-01
We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start with orienting unsorted image sets employing (Mayer et al., 2012), we compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.
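The plane segmentation underlying the roof-model selection step is commonly done with RANSAC on the dense point cloud. The following is a minimal hedged sketch of that general technique, not the authors' stochastic selection method:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_plane(points, n_iter=200, tol=0.05):
    """Find the inliers of the dominant plane in a point cloud via RANSAC:
    repeatedly fit a plane to 3 random points and keep the best consensus."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= norm
        dist = np.abs((points - p0) @ n)  # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# synthetic "roof facet": 100 coplanar points (z = 0) plus 10 outliers above
plane = np.c_[rng.uniform(0, 1, (100, 2)), np.zeros(100)]
noise = rng.uniform(0, 1, (10, 3)) + np.array([0.0, 0.0, 1.0])
pts = np.vstack([plane, noise])
print(ransac_plane(pts).sum() >= 100)  # True
```

Segmenting planes one at a time (remove inliers, repeat) yields the plane set that model-selection methods like the one cited then reason over.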
Fully kinetic particle simulations of high pressure streamer propagation
NASA Astrophysics Data System (ADS)
Rose, David; Welch, Dale; Thoma, Carsten; Clark, Robert
2012-10-01
Streamer and leader formation in high pressure devices is a dynamic process involving a hierarchy of physical phenomena. These include elastic and inelastic particle collisions in the gas, radiation generation, transport and absorption, and electrode interactions. We have developed 2D and 3D fully electromagnetic (EM) implicit particle-in-cell simulation models of gas breakdown leading to streamer formation under DC and RF fields. The model uses a Monte Carlo treatment for all particle interactions and includes discrete photon generation, transport, and absorption for ultra-violet and soft x-ray radiation. Central to the realization of this fully kinetic particle treatment is an algorithm [D. R. Welch, et al., J. Comp. Phys. 227, 143 (2007)] that manages the total particle count by species while preserving the local momentum distribution functions and conserving charge. These models are being applied to the analysis of high-pressure gas switches [D. V. Rose, et al., Phys. Plasmas 18, 093501 (2011)] and gas-filled RF accelerator cavities [D. V. Rose, et al., Proc. IPAC12, to appear].
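The particle-count management the abstract cites can be illustrated by the simplest possible merge of two macro-particles. This toy version conserves only total weight (charge) and momentum; the cited Welch et al. algorithm is considerably more sophisticated, also preserving the local momentum distribution functions:

```python
import numpy as np

def merge_pair(w1, v1, w2, v2):
    """Merge two macro-particles of the same species into one.
    Total weight w1 + w2 (hence charge) and total momentum w1*v1 + w2*v2
    are conserved exactly; kinetic energy generally is not."""
    w = w1 + w2
    v = (w1 * np.asarray(v1) + w2 * np.asarray(v2)) / w
    return w, v

# weight-2 particle at vx = 1 merged with weight-1 particle at vx = 4:
# momentum 2*1 + 1*4 = 6 over weight 3 gives vx = 2
w, v = merge_pair(2.0, [1.0, 0.0, 0.0], 1.0, [4.0, 0.0, 0.0])
print(w, v[0])  # 3.0 2.0
```

The energy loss of naive pairwise merging is exactly why production PIC codes select merge candidates so that the local distribution function is preserved.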
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Sanetrik, Mark D.; Chwalowski, Pawel; Connolly, Joseph; Kopasakis, George
2016-01-01
An overview of recent applications of the FUN3D CFD code to computational aeroelastic, sonic boom, and aeropropulsoservoelasticity (APSE) analyses of a low-boom supersonic configuration is presented. The overview includes details of the computational models developed, including multiple unstructured CFD grids suitable for aeroelastic and sonic boom analyses. In addition, aeroelastic Reduced-Order Models (ROMs) are generated and used to rapidly compute the aeroelastic response and flutter boundaries at multiple flight conditions.
Hoelting, Lisa; Scheinhardt, Benjamin; Bondarenko, Olesja; Schildknecht, Stefan; Kapitza, Marion; Tanavde, Vivek; Tan, Betty; Lee, Qian Yi; Mecking, Stefan; Leist, Marcel; Kadereit, Suzanne
2013-04-01
Nanoparticles (NPs) have been shown to accumulate in organs, cross the blood-brain barrier and placenta, and have the potential to elicit developmental neurotoxicity (DNT). Here, we developed a human embryonic stem cell (hESC)-derived 3-dimensional (3-D) in vitro model that allows for testing of potential developmental neurotoxicants. Early central nervous system PAX6(+) precursor cells were generated from hESCs and differentiated further within 3-D structures. The 3-D model was characterized for neural marker expression revealing robust differentiation toward neuronal precursor cells, and gene expression profiling suggested a predominantly forebrain-like development. Altered neural gene expression due to exposure to non-cytotoxic concentrations of the known developmental neurotoxicant, methylmercury, indicated that the 3-D model could detect DNT. To test for specific toxicity of NPs, chemically inert polyethylene NPs (PE-NPs) were chosen. They penetrated deep into the 3-D structures and impacted gene expression at non-cytotoxic concentrations. NOTCH pathway genes such as HES5 and NOTCH1 were reduced in expression, as well as downstream neuronal precursor genes such as NEUROD1 and ASCL1. FOXG1, a patterning marker, was also reduced. As loss of function of these genes results in severe nervous system impairments in mice, our data suggest that the 3-D hESC-derived model could be used to test for Nano-DNT.
Efficient view based 3-D object retrieval using Hidden Markov Model
NASA Astrophysics Data System (ADS)
Jain, Yogendra Kumar; Singh, Roshan Kumar
2013-12-01
Recent research effort has been dedicated to view-based 3-D object retrieval, because 3-D objects are highly discriminative and have a multi-view representation. State-of-the-art methods depend heavily on their own camera array settings for capturing views of the 3-D object and use complex Zernike descriptors and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. To move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View-Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning views can be captured from any direction without any camera array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with the HMM. The HMM is used in a twofold manner: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
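The retrieval step, scoring a query's sequence of view-cluster ids against a candidate object's HMM, can be illustrated with the standard scaled forward algorithm for a discrete HMM. The toy parameters below are illustrative, not trained EVBOR models:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of an observation sequence under a discrete HMM.
    pi: initial state probs, A: transition matrix, B: emission probs,
    obs: sequence of observed symbol ids. Scaled to avoid underflow."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()     # rescale; log of scale accumulates
    return loglik

# 2-state toy model; observations are quantized view-cluster ids (3 symbols)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
obs = [0, 1, 2]
print(round(forward_loglik(pi, A, B, obs), 3))  # -3.316
```

Ranking candidate objects by this log-likelihood of the query's view sequence is the essence of HMM-decode retrieval; training (HMM estimation) would fit pi, A and B from view clusters via Baum-Welch.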
NASA Astrophysics Data System (ADS)
Sharif, Harlina Md; Hazumi, Hazman; Hafizuddin Meli, Rafiq
2018-01-01
3D imaging technologies have undergone a massive revolution in recent years. Despite this rapid development, documentation of 3D cultural assets in Malaysia is still very much reliant upon conventional techniques such as measured drawings and manual photogrammetry. There is very little progress towards exploring new methods or advanced technologies to convert 3D cultural assets into 3D visual representations and visualization models that are easily accessible for information sharing. In recent years, however, the advent of computer vision (CV) algorithms makes it possible to reconstruct the 3D geometry of objects from image sequences taken with digital cameras, which are then processed by web services and freeware applications. This paper presents a completed stage of an exploratory study that investigates the potential of using automated image-based open-source software and web services to reconstruct and replicate cultural assets. By selecting an intricate wooden boat, Petalaindera, this study attempts to evaluate the efficiency of CV systems and compare it with the application of 3D laser scanning, which is known for its accuracy, efficiency and high cost. This study aims to compare the visual accuracy of 3D models generated by the CV system with 3D models produced by 3D scanning and manual photogrammetry for an intricate subject such as the Petalaindera. The ultimate objective is to explore cost-effective methods that could provide fundamental guidelines on best-practice approaches for digital heritage in Malaysia.
4D Cone-beam CT reconstruction using a motion model based on principal component analysis
Staub, David; Docef, Alen; Brock, Robert S.; Vaman, Constantin; Murphy, Martin J.
2011-01-01
Purpose: To provide a proof-of-concept validation of a novel 4D cone-beam CT (4DCBCT) reconstruction algorithm and to determine the best methods to train and optimize the algorithm. Methods: The algorithm animates a patient fan-beam CT (FBCT) with a patient-specific parametric motion model in order to generate a time series of deformed CTs (the reconstructed 4DCBCT) that track the motion of the patient anatomy on a voxel-by-voxel scale. The motion model is constrained by requiring that projections cast through the deformed CT time series match the projections of the raw patient 4DCBCT. The motion model uses a basis of eigenvectors that are generated via principal component analysis (PCA) of a training set of displacement vector fields (DVFs) that approximate patient motion. The eigenvectors are weighted by a parameterized function of the patient breathing trace recorded during 4DCBCT. The algorithm is demonstrated and tested via numerical simulation. Results: The algorithm is shown to produce accurate reconstruction results for the most complicated simulated motion, in which voxels move with a pseudo-periodic pattern and relative phase shifts exist between voxels. The tests show that principal component eigenvectors trained on DVFs from a novel 2D/3D registration method give substantially better results than eigenvectors trained on DVFs obtained by conventionally registering 4DCBCT phases reconstructed via filtered backprojection. Conclusions: Proof-of-concept testing has validated the 4DCBCT reconstruction approach for the types of simulated data considered. In addition, the authors found the 2D/3D registration approach to be the best choice for generating the DVF training set, and the Nelder-Mead simplex algorithm the most robust optimization routine. PMID:22149852
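The eigenvector basis described above comes from PCA of a DVF training set: flatten each DVF to a column, subtract the mean, and take the left singular vectors. The sketch below uses synthetic rank-3 "motion" vectors standing in for registration-derived DVFs; it illustrates the PCA step only, not the full reconstruction algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Training set: 20 displacement vector fields (DVFs), each flattened to a
# vector (a 300-component toy field standing in for a full 3D DVF)
base = rng.normal(size=(300, 3))      # 3 underlying motion patterns
weights = rng.normal(size=(3, 20))
dvfs = base @ weights                 # columns are training DVFs

# PCA: subtract the mean DVF, then take left singular vectors of the
# centered matrix as the motion eigenvectors
mean = dvfs.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(dvfs - mean, full_matrices=False)
var = s**2 / (s**2).sum()             # fraction of variance per mode
print(var[:3].sum() > 0.999)  # True: 3 modes explain the toy motion
```

In the reconstruction itself, the low-dimensional mode weights (parameterized by the breathing trace) are what the projection-matching optimization actually searches over, which is what makes the problem tractable.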
NASA Astrophysics Data System (ADS)
Lo Brutto, M.; Ebolese, D.; Dardanelli, G.
2018-05-01
The photogrammetric survey of architectural Cultural Heritage is a very useful and standard process for obtaining accurate 3D data for the documentation and visualization of historical buildings. In particular, the integration of terrestrial close-range photogrammetry and Remotely Piloted Aircraft Systems (RPAS) photogrammetry makes it possible to create accurate and reliable 3D models of buildings and to monitor their state of conservation. The use of RPASs has indeed become more popular in Cultural Heritage survey to measure and detect areas that cannot normally be covered using terrestrial photogrammetry or a terrestrial laser scanner. The paper presents the results of a photogrammetric survey executed to document the monumental complex of Villa Lampedusa ai Colli in Palermo (Italy), one of the most important historical buildings of the town. An integrated survey by close-range photogrammetry and RPAS photogrammetry was planned and carried out to reconstruct the 3D digital model of the monumental complex. Different image configurations (terrestrial, aerial nadiral, aerial parallel and oblique to the façades) were acquired; the data were processed to verify the accuracy of the photogrammetric survey with regard to the camera calibration parameters and the number of Ground Control Points (GCPs) measured on the building façades. A very detailed 3D digital model and high-resolution ortho-images of the façades were obtained in order to carry out further analysis for historical studies and conservation and restoration projects. The final 3D model of Villa Lampedusa ai Colli was compared with a laser scanner 3D model to evaluate the quality of the photogrammetric approach. Beyond a purely metric assessment, the 3D textured model was employed to generate 2D representations, useful for documentation purposes and for highlighting the most significantly damaged areas.
3D digital models and 2D representations can effectively contribute to monitoring the state of conservation of historical buildings and become a very useful support for preliminary restoration work.
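Comparing the photogrammetric model against the laser-scanner reference is typically done with cloud-to-cloud nearest-neighbour distances. The sketch below is a brute-force toy version on a synthetic grid "reference" shifted by 2 mm, not the survey's actual data or software:

```python
import numpy as np

def cloud_to_cloud(a, b):
    """Mean nearest-neighbour distance from each point of cloud a to cloud b.
    Brute force O(N*M); real surveys would use a k-d tree (e.g. CloudCompare)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean()

# reference "scan": a 5x5x5 grid of points in a unit cube (0.25 m spacing);
# photogrammetric "model": the same points offset by 2 mm along z
g = np.linspace(0, 1, 5)
ref = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
model = ref + np.array([0.0, 0.0, 0.002])
print(round(cloud_to_cloud(model, ref), 4))  # 0.002
```

Histograms of these per-point distances, rather than the mean alone, are what usually reveal systematic deformation (e.g. façade bending from weak camera geometry or too few GCPs).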
Three-dimensional elliptic grid generation for an F-16
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.
1988-01-01
A case history depicting the effort to generate a computational grid for the simulation of transonic flow about an F-16 aircraft at realistic flight conditions is presented. The flow solver for which this grid is designed is a zonal one, using the Reynolds-averaged Navier-Stokes equations near the surface of the aircraft, and the Euler equations in regions removed from the aircraft. A body-conforming global grid, suitable for the Euler equations, is first generated using 3-D Poisson equations having inhomogeneous terms modeled after the 2-D GRAPE code. Regions of the global grid are then designated for zonal refinement as appropriate to accurately model the flow physics. Grid spacing suitable for solution of the Navier-Stokes equations is generated in the refinement zones by simple subdivision of the given coarse grid intervals. That grid generation project is described, with particular emphasis on the global coarse grid.
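The elliptic grid generation described above solves Poisson equations for the node coordinates; with the inhomogeneous (control) terms dropped, this reduces to Laplace smoothing of the interior nodes with fixed boundaries. The sketch below is that simplified Laplace case on a 2-D grid, a stand-in for the full 3-D Poisson system with GRAPE-style forcing terms:

```python
import numpy as np

def laplace_grid(x, y, n_iter=500):
    """Relax interior grid-node coordinates toward the solution of
    Laplace's equation by point-Jacobi iteration; boundary nodes are fixed.
    (The inhomogeneous Poisson terms used for spacing control are omitted.)"""
    for _ in range(n_iter):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] +
                                x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] +
                                y[1:-1, 2:] + y[1:-1, :-2])
    return x, y

# 5x5 grid on the unit square with one interior node pushed out of place
u = np.linspace(0, 1, 5)
x, y = np.meshgrid(u, u)
x[2, 2] += 0.3                    # distort the centre node
x, y = laplace_grid(x, y)
print(round(float(x[2, 2]), 3))   # 0.5: interior relaxes back to uniform
```

The Poisson source terms that GRAPE adds are precisely what lets the generator cluster grid lines near the body surface instead of relaxing to this uniform distribution.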
Fekkes, Stein; Swillens, Abigail E S; Hansen, Hendrik H G; Saris, Anne E C M; Nillesen, Maartje M; Iannaccone, Francesco; Segers, Patrick; de Korte, Chris L
2016-10-01
Three-dimensional (3-D) strain estimation might improve the detection and localization of high-strain regions in the carotid artery (CA) for identification of vulnerable plaques. This paper compares 2-D versus 3-D displacement estimation in terms of radial and circumferential strain using simulated ultrasound (US) images of a patient-specific 3-D atherosclerotic CA model at the bifurcation, embedded in surrounding tissue, generated with ABAQUS software. Global longitudinal motion was superimposed on the model based on literature data. A Philips L11-3 linear array transducer was simulated, which transmitted plane waves at three alternating angles at a pulse repetition rate of 10 kHz. Interframe (IF) radio-frequency US data were simulated in Field II for 191 equally spaced longitudinal positions of the internal CA. Accumulated radial and circumferential displacements were estimated by tracking the IF displacements estimated with a two-step normalized cross-correlation method and displacement compounding. Least-squares strain estimation was performed to determine accumulated radial and circumferential strain. The performance of the 2-D and 3-D methods was compared by calculating the root-mean-squared error of the estimated strains with respect to the reference strains obtained from the model. More accurate strain images were obtained using the 3-D displacement estimation for the entire cardiac cycle. The 3-D technique clearly outperformed the 2-D technique in phases with high IF longitudinal motion. In fact, the large IF longitudinal motion rendered it impossible to accurately track the tissue and cumulate strains over the entire cardiac cycle with the 2-D technique.
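The displacement-tracking core of such methods, normalized cross-correlation (NCC) between a reference and a current frame, can be illustrated in 1-D: slide the current window over the reference and take the lag that maximizes NCC. This is a minimal sketch of the general NCC idea, not the paper's two-step compounded estimator:

```python
import numpy as np

def ncc_shift(ref, cur, max_lag):
    """Estimate the integer displacement of window `cur` inside `ref`
    by maximizing the normalized cross-correlation over candidate lags.
    `ref` must be len(cur) + 2*max_lag samples long."""
    best_lag, best_ncc = 0, -np.inf
    n = len(cur)
    b = cur - cur.mean()
    for lag in range(-max_lag, max_lag + 1):
        seg = ref[max_lag + lag : max_lag + lag + n]
        a = seg - seg.mean()
        ncc = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if ncc > best_ncc:
            best_lag, best_ncc = lag, ncc
    return best_lag

# speckle-like signal; the current-frame window is displaced by +4 samples
rng = np.random.default_rng(4)
sig = rng.normal(size=200)
ref = sig
cur = sig[50 + 4 : 50 + 4 + 40]
print(ncc_shift(ref[50 - 8 : 50 + 40 + 8], cur, max_lag=8))  # 4
```

Real implementations refine this integer lag to subsample precision (e.g. parabolic peak interpolation) and, as in the paper, combine estimates from multiple steering angles to recover lateral/elevational components.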
Building generic anatomical models using virtual model cutting and iterative registration.
Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W
2010-02-08
Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting a sub-volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Our method is flexible and easy to use, allowing anyone to create models from image stacks and retrieve sub-regions from them with ease.
Java-based implementation allows our method to be used on various visualization systems including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of their interest quickly and accurately.
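Step (vi), intensity averaging of the registered stacks, can be sketched minimally, assuming the stacks are already co-registered and equal-sized (the registration itself, which the pipeline delegates to existing algorithms, is omitted):

```python
import numpy as np

def average_model(stacks):
    """Voxel-wise intensity average of co-registered image stacks,
    producing the generic model volume. Illustrative sketch of the
    averaging step only; inputs must share one shape."""
    vol = np.stack([np.asarray(s, dtype=float) for s in stacks])
    return vol.mean(axis=0)
```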
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benson, D.J.; Hallquist, J.O.; Stillman, D.W.
1985-04-01
Crashworthiness engineering has always been a high priority at Lawrence Livermore National Laboratory because of its role in the safe transport of radioactive material for the nuclear power industry and military. As a result, the authors have developed an integrated, interactive set of finite element programs for crashworthiness analysis. The heart of the system is DYNA3D, an explicit, fully vectorized, large deformation structural dynamics code. DYNA3D has the following four capabilities that are critical for the efficient and accurate analysis of crashes: (1) fully nonlinear solid, shell, and beam elements for representing a structure, (2) a broad range of constitutive models for representing the materials, (3) sophisticated contact algorithms for the impact interactions, and (4) a rigid body capability to represent the bodies away from the impact zones at a greatly reduced cost without sacrificing any accuracy in the momentum calculations. To generate the large and complex data files for DYNA3D, INGRID, a general purpose mesh generator, is used. It runs on everything from IBM PCs to CRAYs, and can generate 1000 nodes/minute on a PC. With its efficient hidden line algorithms and many options for specifying geometry, INGRID also doubles as a geometric modeller. TAURUS, an interactive post processor, is used to display DYNA3D output. In addition to the standard monochrome hidden line display, time history plotting, and contouring, TAURUS generates interactive color displays on 8 color video screens by plotting color bands superimposed on the mesh which indicate the value of the state variables. For higher quality color output, graphic output files may be sent to the DICOMED film recorders. We have found that color is every bit as important as hidden line removal in aiding the analyst in understanding his results. In this paper the basic methodologies of the programs are presented along with several crashworthiness calculations.
A statistical shape model of the human second cervical vertebra.
Clogenson, Marine; Duff, John M; Luethi, Marcel; Levivier, Marc; Meuli, Reto; Baur, Charles; Henein, Simon
2015-07-01
Statistical shape and appearance models play an important role in reducing the segmentation processing time of a vertebra and in improving results for 3D model development. Here, we describe the different steps in generating a statistical shape model (SSM) of the second cervical vertebra (C2) and provide the shape model for general use by the scientific community. The main difficulties in its construction are the morphological complexity of the C2 and its variability in the population. The input dataset is composed of manually segmented anonymized patient computerized tomography (CT) scans. The alignment of the different datasets is done with Procrustes alignment on surface models, and the registration is then cast as a model-fitting problem using a Gaussian process. A principal component analysis (PCA)-based model is generated which includes the variability of the C2. The SSM was generated using 92 CT scans. The resulting SSM was evaluated for specificity, compactness and generalization ability. The SSM of the C2 is freely available to the scientific community in Slicer (an open source software for image analysis and scientific visualization) with a module created to visualize the SSM using Statismo, a framework for statistical shape modeling. The SSM of the vertebra allows the shape variability of the C2 to be represented. Moreover, the SSM will enable semi-automatic segmentation and 3D model generation of the vertebra, which would greatly benefit surgery planning.
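The PCA modeling step can be sketched as follows, assuming shapes are already in point correspondence and aligned (the Gaussian-process registration the paper uses to establish correspondence is omitted; `build_ssm` and `sample_shape` are illustrative names):

```python
import numpy as np

def build_ssm(shapes):
    """PCA shape model from aligned shapes.
    `shapes` is (n_samples, n_points * dim). Returns the mean shape,
    the principal modes (rows of Vt), and the per-mode variances."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # SVD of the centered data matrix yields the modes of variation.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    variances = s**2 / (shapes.shape[0] - 1)
    return mean, Vt, variances

def sample_shape(mean, modes, variances, coeffs):
    """Instantiate a shape from mode coefficients given in standard
    deviations along each retained mode."""
    k = len(coeffs)
    return mean + (np.asarray(coeffs) * np.sqrt(variances[:k])) @ modes[:k]
```

Specificity, compactness, and generalization can then be evaluated by sampling shapes, truncating modes, and leave-one-out reconstruction, respectively.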
Laser irradiated fluorescent perfluorocarbon microparticles in 2-D and 3-D breast cancer cell models
NASA Astrophysics Data System (ADS)
Niu, Chengcheng; Wang, Long; Wang, Zhigang; Xu, Yan; Hu, Yihe; Peng, Qinghai
2017-03-01
Perfluorocarbon (PFC) droplets have been studied as new-generation ultrasound contrast agents via acoustic or optical droplet vaporization (ADV or ODV). Little is known about the ODV vaporization mechanisms of PFC-microparticle complexes and the stability of the newly produced bubbles. In this study, fluorescent perfluorohexane (PFH) poly(lactic-co-glycolic acid) (PLGA) particles were used as a model to study the process of particle vaporization and bubble stability following excitation in two-dimensional (2-D) and three-dimensional (3-D) cell models. We observed localization of the fluorescent agent on the microparticle coating material initially and after vaporization under fluorescence microscopy. Furthermore, the stability and growth dynamics of the newly created bubbles were observed for 11 min following vaporization. The particles were co-cultured with 2-D cells to form 3-D spheroids and could be vaporized via laser irradiation even when encapsulated within the spheroids, which provides an effective basis for further work.
A systematic review of 3-D printing in cardiovascular and cerebrovascular diseases
Sun, Zhonghua; Lee, Shen-Yuan
2017-01-01
Objective: The application of 3-D printing has been increasingly used in medicine, with research showing many applications in cardiovascular disease. This systematic review analyzes studies published on the applications of 3-D-printed, patient-specific models in cardiovascular and cerebrovascular diseases. Methods: A search of the PubMed/Medline and Scopus databases was performed to identify studies investigating 3-D printing in cardiovascular and cerebrovascular diseases. Only studies based on patients' medical images were eligible for review, while reports on in vitro phantoms and review articles were excluded. Results: A total of 48 studies met the selection criteria for inclusion in the review. A range of patient-specific 3-D printed models of different cardiovascular and cerebrovascular diseases were generated in these studies, most of them developed using cardiac CT and MRI data and less commonly with 3-D invasive angiographic or echocardiographic images. The review of these studies showed high accuracy of 3-D printed, patient-specific models in representing the complex anatomy of the cardiovascular and cerebrovascular system and depicting various abnormalities, especially congenital heart diseases and valvular pathologies. Further, 3-D printing can serve as a useful education tool for both parents and clinicians, and a valuable tool for pre-surgical planning and simulation. Conclusion: This systematic review shows that 3-D printed models based on medical imaging modalities can accurately replicate complex anatomical structures and pathologies of the cardiovascular and cerebrovascular system. 3-D printing is a useful tool for both education and surgical planning in these diseases. PMID:28430115
Unger, Bertram J; Kraut, Jay; Rhodes, Charlotte; Hochman, Jordan
2014-01-01
Physical models of complex bony structures can be used for surgical skills training. Current models focus on surface rendering but suffer from a lack of internal accuracy due to limitations in the manufacturing process. We describe a technique for generating internally accurate rapid-prototyped anatomical models with solid and hollow structures from clinical and microCT data using a 3D printer. In a face validation experiment, otolaryngology residents drilled a cadaveric bone and its corresponding printed model. The printed bone models were deemed highly realistic representations across all measured parameters and the educational value of the models was strongly appreciated.
Applications of patient-specific 3D printing in medicine.
Heller, Martin; Bauer, Heide-Katharina; Goetze, Elisabeth; Gielisch, Matthias; Roth, Klaus E; Drees, Philipp; Maier, Gerrit S; Dorweiler, Bernhard; Ghazy, Ahmed; Neufurth, Meik; Müller, Werner E G; Schröder, Heinz C; Wang, Xiaohong; Vahl, Christian-Friedrich; Al-Nawas, Bilal
Already three decades ago, the potential of medical 3D printing (3DP) or rapid prototyping for improved patient treatment began to be recognized. Since then, more and more medical indications in different surgical disciplines have been improved by using this technique. Numerous examples have demonstrated the enormous benefit of 3DP in the medical care of patients by, for example, planning complex surgical interventions preoperatively, reducing implantation steps and anesthesia times, and helping with intraoperative orientation. Every individual 3D model begins with patient-specific data from computed tomography (CT), magnetic resonance imaging (MRI), or ultrasound, which is then digitized and processed using computer-aided design/computer-aided manufacturing (CAD/CAM) software. Finally, the resulting data sets are used to generate 3D-printed models or even implants. There is a variety of application areas in the various medical fields, e.g., drill or positioning templates and surgical guides in maxillofacial surgery, or patient-specific implants in orthopedics. Furthermore, in vascular surgery it is possible to visualize pathologies such as aortic aneurysms so as to improve the planning of surgical treatment. Although rapid prototyping of individual models and implants is already applied very successfully in regenerative medicine, most of the materials used for 3DP are not yet suitable for implantation in the body. Therefore, it will be necessary in the future to develop novel therapy approaches and design new materials in order to completely reconstruct natural tissue.
Bioprinting 3D microfibrous scaffolds for engineering endothelialized myocardium and heart-on-a-chip
Zhang, Yu Shrike; Arneri, Andrea; Bersini, Simone; Shin, Su-Ryon; Zhu, Kai; Goli-Malekabadi, Zahra; Aleman, Julio; Colosi, Cristina; Busignani, Fabio; Dell'Erba, Valeria; Bishop, Colin; Shupe, Thomas; Demarchi, Danilo; Moretti, Matteo; Rasponi, Marco; Dokmeci, Mehmet Remzi; Atala, Anthony; Khademhosseini, Ali
2016-01-01
Engineering cardiac tissues and organ models remains a great challenge due to the hierarchical structure of the native myocardium. The need of integrating blood vessels brings additional complexity, limiting the available approaches that are suitable to produce integrated cardiovascular organoids. In this work we propose a novel hybrid strategy based on 3D bioprinting, to fabricate endothelialized myocardium. Enabled by the use of our composite bioink, endothelial cells directly bioprinted within microfibrous hydrogel scaffolds gradually migrated towards the peripheries of the microfibers to form a layer of confluent endothelium. Together with controlled anisotropy, this 3D endothelial bed was then seeded with cardiomyocytes to generate aligned myocardium capable of spontaneous and synchronous contraction. We further embedded the organoids into a specially designed microfluidic perfusion bioreactor to complete the endothelialized-myocardium-on-a-chip platform for cardiovascular toxicity evaluation. Finally, we demonstrated that such a technique could be translated to human cardiomyocytes derived from induced pluripotent stem cells to construct endothelialized human myocardium. We believe that our method for generation of endothelialized organoids fabricated through an innovative 3D bioprinting technology may find widespread applications in regenerative medicine, drug screening, and potentially disease modeling. PMID:27710832
Preserving Differential Privacy in Degree-Correlation based Graph Generation
Wang, Yue; Wu, Xintao
2014-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging, since graph features such as the clustering coefficient often have high sensitivity, unlike traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for release using the graph model with the private parameters. In particular, we develop a differential privacy preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller-magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of the utility and privacy tradeoff. Empirical evaluations show that the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model. PMID:24723987
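The perturbation idea can be sketched minimally: count edges by endpoint-degree pair (the dK-2 statistics) and add Laplace noise to each count. This simplified version takes a caller-supplied sensitivity bound; the paper's contribution is calibrating the noise to the smooth sensitivity, which this sketch does not compute:

```python
import numpy as np

def dk2_counts(edges, degree):
    """Count edges by the (sorted) degree pair of their endpoints."""
    counts = {}
    for u, v in edges:
        key = tuple(sorted((degree[u], degree[v])))
        counts[key] = counts.get(key, 0) + 1
    return counts

def perturb_counts(counts, epsilon, sensitivity, rng):
    """Add Laplace(sensitivity / epsilon) noise to each dK-2 count.
    Simplified stand-in for the paper's smooth-sensitivity calibration."""
    scale = sensitivity / epsilon
    return {k: c + rng.laplace(0.0, scale) for k, c in counts.items()}
```

A generator would then synthesize a graph matching the perturbed counts, e.g., via a dK-series construction.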
Whitcomb, Mary Beth; Doval, John; Peters, Jason
2011-01-01
Ultrasonography has gained increased utility in diagnosing pelvic fractures in horses; however, internal pelvic contours can be difficult to appreciate from external palpable landmarks. We developed three-dimensional (3D) simulations of the pelvic ultrasonographic examination to assist with the translation of pelvic contours into two-dimensional (2D) images. Contiguous 1 mm transverse computed tomography (CT) images were acquired through an equine femur and hemipelvis using a single-slice helical scanner. 3D surface models were created using a DICOM reader and imported into a 3D modeling and animation program. The bone models were combined with a purchased 3D horse model and the skin made translucent to visualize pelvic surface contours. 3D models of ultrasound transducers were made from reference photos, and a thin sector shape was created to depict the ultrasound beam. Ultrasonographic examinations were simulated by moving transducers on the skin surface and rectally to produce images of pelvic structures. Camera angles were manipulated to best illustrate the transducer-beam-bone interface. Fractures were created in multiple configurations. Animations were exported as QuickTime movie files for use in presentations coupled with corresponding ultrasound videoclips. 3D models provide a link between ultrasonographic technique and image generation by depicting the interaction of the transducer, ultrasound beam, and structure of interest. The horse model was important to facilitate understanding of the location of pelvic structures relative to the skin surface. While CT acquisition time was brief, manipulation within the 3D software program was time intensive. Results were worthwhile from an instructional standpoint based on user feedback. © 2011 Veterinary Radiology & Ultrasound.
Performance evaluation of an automatic MGRF-based lung segmentation approach
NASA Astrophysics Data System (ADS)
Soliman, Ahmed; Khalifa, Fahmi; Alansary, Amir; Gimel'farb, Georgy; El-Baz, Ayman
2013-10-01
The segmentation of the lung tissues in chest Computed Tomography (CT) images is an important step for developing any Computer-Aided Diagnostic (CAD) system for lung cancer and other pulmonary diseases. In this paper, we introduce a new framework for validating the accuracy of our developed joint Markov-Gibbs based lung segmentation approach using 3D realistic synthetic phantoms. These phantoms are created using a 3D Generalized Gauss-Markov Random Field (GGMRF) model of voxel intensities with pairwise interaction to model the 3D appearance of the lung tissues. Then, the appearance of the generated 3D phantoms is simulated based on iterative minimization of an energy function that is based on the learned 3D-GGMRF image model. These 3D realistic phantoms can be used to evaluate the performance of any lung segmentation approach. The performance of our segmentation approach is evaluated using three metrics, namely, the Dice Similarity Coefficient (DSC), the modified Hausdorff distance, and the Average Volume Difference (AVD) between our segmentation and the ground truth. Our approach achieves mean values of 0.994±0.003, 8.844±2.495 mm, and 0.784±0.912 mm³ for the DSC, Hausdorff distance, and AVD, respectively.
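Two of the reported metrics are straightforward to compute on binary masks; a minimal sketch (the AVD definition here, an absolute volume difference, is one common reading and may differ from the authors' exact formulation):

```python
import numpy as np

def dice(seg, ref):
    """Dice Similarity Coefficient between two binary volumes:
    2 |A ∩ B| / (|A| + |B|)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

def avd(seg, ref, voxel_volume=1.0):
    """Absolute volume difference between segmentation and ground truth,
    scaled by the physical voxel volume."""
    diff = int(seg.astype(bool).sum()) - int(ref.astype(bool).sum())
    return abs(diff) * voxel_volume
```

The modified Hausdorff distance additionally needs surface-point extraction and nearest-neighbour distances, so it is omitted here.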
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, B; Peter, D; Covellone, B
2009-07-02
Efforts to update current wave speed models of the Middle East require a thoroughly tested database of sources and recordings. Recordings of seismic waves traversing the region from Tibet to the Red Sea will be the principal metric in guiding improvements to the current wave speed model. Precise characterizations of the earthquakes, specifically depths and faulting mechanisms, are essential to avoid mapping source errors into the refined wave speed model. Errors associated with the source are manifested in amplitude and phase changes. Source depths and paths near nodal planes are particularly error prone as small changes may severely affect the resulting wavefield. Once sources are quantified, regions requiring refinement will be highlighted using adjoint tomography methods based on spectral element simulations [Komatitsch and Tromp (1999)]. An initial database of 250 regional Middle Eastern events from 1990-2007 was inverted for depth and focal mechanism using teleseismic arrivals [Kikuchi and Kanamori (1982)] and regional surface and body waves [Zhao and Helmberger (1994)]. From this initial database, we reinterpreted a large, well recorded subset of 201 events through a direct comparison between data and synthetics based upon a centroid moment tensor inversion [Liu et al. (2004)]. Evaluation was done using both a 1D reference model [Dziewonski and Anderson (1981)] at periods greater than 80 seconds and a 3D model [Kustowski et al. (2008)] at periods of 25 seconds and longer. The final source reinterpretations will be within the 3D model, as this is the initial starting point for the adjoint tomography. Transitioning from a 1D to a 3D wave speed model shows dramatic improvements when comparisons are done at shorter periods (25 s). Synthetics from the 1D model were created through mode summations while those from the 3D simulations were created using the spectral element method.
To further assess errors in source depth and focal mechanism, comparisons between the three methods were made. These comparisons help to identify problematic stations and sources which may bias the final solution. Estimates of standard errors were generated for each event's source depth and focal mechanism to identify poorly constrained events. A final, well characterized set of sources and stations will then be used to iteratively improve the wave speed model of the Middle East. After a few iterations during the adjoint inversion process, the sources will be reexamined and relocated to further reduce mapping of source errors into structural features. Finally, efforts continue in developing the infrastructure required to 'quickly' generate event kernels at the n-th iteration and invert for a new, (n+1)-th, wave speed model of the Middle East. While development of the infrastructure proceeds, initial tests using a limited number of events show that the 3D model, while vastly improved compared to the 1D model, still requires substantial modifications. Employing our new, full source set and iterating the adjoint inversions at successively shorter periods will lead to significant changes and refined wave speed structures of the Middle East.
3D Digital Surveying and Modelling of Cave Geometry: Application to Paleolithic Rock Art.
González-Aguilera, Diego; Muñoz-Nieto, Angel; Gómez-Lahoz, Javier; Herrero-Pascual, Jesus; Gutierrez-Alonso, Gabriel
2009-01-01
3D digital surveying and modelling of cave geometry represents a relevant approach for the research, management and preservation of our cultural and geological legacy. In this paper, a multi-sensor approach based on a terrestrial laser scanner, a high-resolution digital camera and a total station is presented. Two emblematic caves of Paleolithic human occupation situated in northern Spain, "Las Caldas" and "Peña de Candamo", have been chosen to put this approach into practice. As a result, an integral and multi-scalable 3D model is generated which may allow other scientists (prehistorians, geologists, etc.) to work on two different levels, integrating different Paleolithic Art datasets: (1) a basic level based on the accurate and metric support provided by the laser scanner; and (2) an advanced level using range- and image-based modelling.
Modeling ECM fiber formation: structure information extracted by analysis of 2D and 3D image sets
NASA Astrophysics Data System (ADS)
Wu, Jun; Voytik-Harbin, Sherry L.; Filmer, David L.; Hoffman, Christoph M.; Yuan, Bo; Chiang, Ching-Shoei; Sturgis, Jennis; Robinson, Joseph P.
2002-05-01
Recent evidence supports the notion that the biological functions of the extracellular matrix (ECM) are highly correlated to its structure. Understanding this fibrous structure is crucial in tissue engineering for developing the next generation of biomaterials for the restoration of tissues and organs. In this paper, we integrate confocal microscopy imaging and image-processing techniques to analyze the structural properties of the ECM. We describe a 2D fiber middle-line tracing algorithm and apply it via Euclidean distance maps (EDM) to extract accurate fibrous structure information, such as fiber diameter, length, orientation, and density, from single slices. Based on the 2D tracing algorithm, we extend our analysis to 3D tracing via Euclidean distance maps to extract 3D fibrous structure information. We use computer simulation to construct the 3D fibrous structure, which is subsequently used to test our tracing algorithms. After further image processing, these models are then applied to a variety of ECM constructions, from which the results of 2D and 3D traces are statistically analyzed.
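The EDM underpinning the tracing can be illustrated with a brute-force sketch on a small binary image (didactic only; real implementations use fast distance transforms, and estimating diameter from the peak centre-line distance is a simplifying assumption):

```python
import numpy as np

def edm(mask):
    """Brute-force Euclidean distance map: for each foreground pixel,
    the distance to the nearest background pixel. O(n^2), so only
    suitable for tiny images; `mask` must be a boolean array."""
    fy, fx = np.nonzero(mask)
    by, bx = np.nonzero(~mask)
    out = np.zeros(mask.shape)
    for y, x in zip(fy, fx):
        out[y, x] = np.sqrt(((by - y)**2 + (bx - x)**2).min())
    return out

def fiber_diameter(mask):
    """Estimate fiber diameter as twice the peak distance-map value,
    i.e., twice the radius at the fiber middle line."""
    return 2.0 * edm(mask).max()
```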
Metastatic melanoma moves on: translational science in the era of personalized medicine.
Levesque, Mitchell P; Cheng, Phil F; Raaijmakers, Marieke I G; Saltari, Annalisa; Dummer, Reinhard
2017-03-01
Progress in understanding and treating metastatic melanoma is the result of decades of basic and translational research as well as the development of better in vitro tools for modeling the disease. Here, we review the latest therapeutic options for metastatic melanoma and the known genetic and non-genetic mechanisms of resistance to these therapies, as well as the in vitro toolbox that has provided the greatest insights into melanoma progression. These include next-generation sequencing technologies and more complex 2D and 3D cell culture models to functionally test the data generated by genomics approaches. The combination of hypothesis generating and hypothesis testing paradigms reviewed here will be the foundation for the next phase of metastatic melanoma therapies in the coming years.
Generalization Technique for 2D+SCALE Dhe Data Model
NASA Astrophysics Data System (ADS)
Karim, Hairi; Rahman, Alias Abdul; Boguslawski, Pawel
2016-10-01
Different users and applications need models at different scales, especially in computer applications such as game visualization and GIS modelling. Issues have been raised about fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension, such as scale and/or time, to a 3D model, but the implementation of a scale dimension faces problems due to the limitations and availability of data structures and data models. Various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting a scale dimension. Generally, the Dual Half-Edge (DHE) data structure was designed to work with any perfect 3D spatial object, such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with a scale dimension. The description of the concept and implementation of generating 3D-scale (2D spatial + scale dimension) models with the DHE data structure forms the major discussion of this paper. We believe that advantages such as local modification and topological operations (navigation, query and semantic information) in the scale dimension could serve future 3D-scale applications.
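The half-edge idea underlying the DHE can be sketched minimally: each undirected edge is stored as two directed half-edges linked by `twin`, and `nxt` pointers walk a face loop. This is a generic 2-D half-edge illustration, not the dual structure or the scale extension discussed in the paper:

```python
class HalfEdge:
    """Minimal half-edge record with origin vertex, twin, and next."""
    __slots__ = ("origin", "twin", "nxt")
    def __init__(self, origin):
        self.origin = origin
        self.twin = None
        self.nxt = None

def make_face(vertices):
    """Build one closed face loop (and its twin loop) from a vertex cycle;
    returns the half-edge leaving the first vertex."""
    n = len(vertices)
    inner = [HalfEdge(v) for v in vertices]
    outer = [HalfEdge(vertices[(i + 1) % n]) for i in range(n)]
    for i in range(n):
        inner[i].nxt = inner[(i + 1) % n]
        inner[i].twin = outer[i]
        outer[i].twin = inner[i]
        # The twin loop runs in the opposite direction.
        outer[i].nxt = outer[(i - 1) % n]
    return inner[0]

def face_vertices(he):
    """Walk `nxt` pointers to recover the face's vertex cycle."""
    out, cur = [], he
    while True:
        out.append(cur.origin)
        cur = cur.nxt
        if cur is he:
            return out
```

Local modification, one of the advantages cited above, follows from this pointer structure: splitting or collapsing an edge touches only the half-edges incident to it.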
Integration of Technology into the Classroom: Case Studies.
ERIC Educational Resources Information Center
Johnson, D. LaMont, Ed.; Maddux, Cleborne D., Ed.; Liu, Leping, Ed.
This book contains the following case studies on the integration of technology in education: (1) "First Steps toward a Statistically Generated Information Technology Integration Model" (D. LaMont Johnson and Leping Liu); (2) "Case Studies: Are We Rejecting Rigor or Rediscovering Richness?" (Cleborne D. Maddux); (3)…
NASA Astrophysics Data System (ADS)
Xu, Qian
The Richtmyer-Meshkov Instability (RMI) (Commun. Pure Appl. Math 23, 297-319, 1960; Izv. Akad. Nauk. SSSR Maekh. Zhidk. Gaza. 4, 151-157, 1969) occurs when an impulsive acceleration acts on a perturbed interface between two fluids of different densities. In the experiments presented in this thesis, single-mode 3D RMI experiments are performed. An oscillating speaker generates a single-mode sinusoidal initial perturbation at an interface of two gases, air and SF6. A Mach 1.19 shock wave accelerates the interface and generates the Richtmyer-Meshkov Instability. Both gases are seeded with propylene glycol particles, which are illuminated by an Nd:YLF pulsed laser. Three high-speed video cameras record image sequences of the experiment. Particle Image Velocimetry (PIV) is applied to measure the velocity field. Measurements of the amplitude for both spike and bubble are obtained, from which the growth rate is measured. For both spike and bubble experiments, amplitude and growth rate match linear stability theory at early times but fall into a nonlinear regime at late times, with amplitude measurements lying between the modified 3D Sadot et al. model (Phys. Rev. Lett. 80, 1654-1657, 1998) and the Zhang & Sohn model (Phys. Fluids 9, 1106-1124, 1997; Z. Angew. Math Phys 50, 1-46, 1990). Amplitude and growth rate curves are found to lie above the modified 3D Sadot et al. model and below the Zhang & Sohn model for the spike experiments. Conversely, for the bubble experiments, both amplitude and growth rate curves lie above the Zhang & Sohn model and below the modified 3D Sadot et al. model. Circulation is also calculated using the vorticity and velocity fields from the PIV measurements. The calculated circulations are approximately equal and found to grow with time, a result that differs from the modified Jacobs and Sheeley circulation model (Phys. Fluids 8, 405-415, 1996).
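Circulation from a PIV velocity field can be computed as the area integral of vorticity, with gradients by central differences and the trapezoidal rule; a generic post-processing sketch, not the thesis' exact procedure:

```python
import numpy as np

def circulation(u, v, dx, dy):
    """Circulation Γ = ∬ ω dA over the PIV window, where the vorticity
    is ω = ∂v/∂x − ∂u/∂y. Arrays are indexed [row, col] = [y, x]."""
    omega = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
    # Trapezoidal-rule weights: half on edges, quarter on corners.
    w = np.ones_like(omega)
    w[0, :] *= 0.5; w[-1, :] *= 0.5
    w[:, 0] *= 0.5; w[:, -1] *= 0.5
    return (omega * w).sum() * dx * dy
```

For solid-body rotation (u = −Ωy, v = Ωx) the vorticity is uniformly 2Ω, so the circulation over a window of area A should be 2ΩA, which makes a convenient sanity check.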
Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data
Jung, Jaewook; Jwa, Yoonseok; Sohn, Gunho
2017-01-01
With rapid urbanization, highly accurate and semantically rich virtualization of building assets in 3D becomes more critical for supporting various applications, including urban planning, emergency response and location-based services. Many research efforts have been conducted to automatically reconstruct building models at city-scale from remotely sensed data. However, developing a fully automated photogrammetric computer vision system enabling the massive generation of highly accurate building models still remains a challenging task. One of the most challenging tasks in 3D building model reconstruction is regularizing the noise introduced in the boundary of a building object retrieved from raw data without knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city-scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from given noisy information of the building boundary in a progressive manner. This study covers a full chain of 3D building modeling, from low-level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on the segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method.
The performance of the proposed method is tested over the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark datasets. The results show that the proposed method can robustly produce accurate regularized 3D building rooftop models. PMID:28335486
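The core idea of MDL-based regularization combined with Hypothesize and Test can be sketched as follows. This is an illustrative toy, not the authors' implementation: candidate rooftop polygons are hypothesized, each is scored by a two-part description length (data misfit plus a complexity penalty per vertex), and the shortest description wins. The function names, the noise level, and the per-vertex bit cost are assumptions for the example.

```python
# Illustrative MDL-style model selection for boundary regularization
# (hypothetical simplification, not the paper's actual code): score each
# candidate polygon by residual misfit plus a complexity penalty, and keep
# the candidate with the minimum description length (Hypothesize and Test).
import math

def description_length(boundary_points, model_vertices, noise_sigma=0.3, bits_per_vertex=16.0):
    """Two-part code length: data misfit (fit) + model cost (complexity)."""
    def dist_to_segment(p, a, b):
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    # residual of each noisy boundary point to the closest polygon edge
    edges = list(zip(model_vertices, model_vertices[1:] + model_vertices[:1]))
    misfit = sum(min(dist_to_segment(p, a, b) for a, b in edges) ** 2
                 for p in boundary_points) / (2 * noise_sigma ** 2)
    return misfit + bits_per_vertex * len(model_vertices)

def select_model(boundary_points, candidates):
    """Hypothesize-and-test loop: return the candidate with minimum DL."""
    return min(candidates, key=lambda m: description_length(boundary_points, m))
```

For boundary points sampled on a rectangular footprint, the penalty term makes a clean 4-vertex rectangle beat a wigglier polygon that traces the noise, which is the regularity behavior the abstract describes.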
Update: Advancement of Contact Dynamics Modeling for Human Spaceflight Simulation Applications
NASA Technical Reports Server (NTRS)
Brain, Thomas A.; Kovel, Erik B.; MacLean, John R.; Quiocho, Leslie J.
2017-01-01
Pong is a new software tool developed at the NASA Johnson Space Center that advances interference-based geometric contact dynamics based on 3D graphics models. The Pong software consists of three parts: a set of scripts to extract geometric data from 3D graphics models, a contact dynamics engine that provides collision detection and force calculations based on the extracted geometric data, and a set of scripts for visualizing the dynamic response with the 3D graphics models. The contact dynamics engine can be linked with an external multibody dynamics engine to provide an integrated multibody contact dynamics simulation. This paper provides a detailed overview of Pong, including the overall approach and modeling capabilities, which range from force generation with contact primitives and friction to computational performance. Two specific Pong-based examples of International Space Station applications are discussed, and the related verification and validation using this new tool are also addressed.
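The interference-based force generation that such a contact engine performs can be illustrated with a generic penalty model. This is a minimal sketch under common assumptions (spring-damper penalty on penetration depth between two spheres), not Pong's actual formulation; the function name and gains are made up for the example.

```python
# Minimal interference-based contact sketch (not Pong's actual code):
# detect overlap between two spheres, then generate a penalty
# spring-damper force along the contact normal from the penetration depth.
import math

def sphere_contact_force(c1, r1, c2, r2, v_rel, k=1e4, c=50.0):
    """Force on body 1; (0, 0, 0) when there is no interference."""
    d = [a - b for a, b in zip(c1, c2)]           # center-to-center vector
    dist = math.sqrt(sum(x * x for x in d))
    depth = (r1 + r2) - dist                      # penetration depth
    if depth <= 0 or dist == 0:
        return (0.0, 0.0, 0.0)                    # no collision detected
    n = [x / dist for x in d]                     # contact normal (toward body 1)
    v_n = sum(v * nx for v, nx in zip(v_rel, n))  # normal relative velocity
    mag = max(k * depth - c * v_n, 0.0)           # stiffness + damping; push only
    return tuple(mag * nx for nx in n)
```

Linking such a force calculation to an external multibody integrator, as the abstract describes, amounts to evaluating it each step and applying the resulting force and its reaction to the two bodies.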
Chiang, Kun-Chun; Yeh, Chun-Nan; Hsu, Jun-Te; Yeh, Ta-sen; Jan, Yi-yin; Wu, Chun-Te; Chen, Huang-Yang; Jwo, Shyh-Chuan; Takano, Masashi; Kittaka, Atsushi; Juang, Horng-Heng; Chen, Tai C.
2013-01-01
Pancreatic cancer is a lethal disease for which no effective chemotherapy or radiotherapy is known, and most patients are diagnosed at a late stage, making them unsuitable for surgery. Therefore, new therapeutic strategies are urgently needed. 1α,25-dihydroxyvitamin D3 [1α,25(OH)2D3] is known to possess antitumor actions in many cancer cells in vitro and in vivo models. However, its clinical use is hampered by hypercalcemia. In this study, we investigated the effectiveness and safety of a new-generation, less calcemic analog of 1α,25(OH)2D3, 19-nor-2α-(3-hydroxypropyl)-1α,25-dihydroxyvitamin D3 (MART-10), in BxPC-3 human pancreatic carcinoma cells in vitro and in vivo. We demonstrate that MART-10 is at least 100-fold more potent than 1α,25(OH)2D3 in inhibiting BxPC-3 cell proliferation in a time- and dose-dependent manner, accompanied by a greater upregulation of cyclin-dependent kinase inhibitors p21 and p27 and a greater downregulation of cyclin D3 and cyclin-dependent kinases 4 and 5, leading to a greater increase in the fraction of cells in G0/G1 phase. No induction of apoptosis and no effect on Cdc25 phosphatases A and C were observed in the presence of either MART-10 or 1α,25(OH)2D3. In a xenograft mouse model, treatment with 0.3 µg/kg body weight of MART-10 twice/week for 3 weeks caused a greater suppression of BxPC-3 tumor growth than the same dose of 1α,25(OH)2D3 without inducing hypercalcemia and weight loss. In conclusion, MART-10 is a promising agent against pancreatic cancer growth. Further clinical trials are warranted. PMID:23549173
The Implications of 3D Thermal Structure on 1D Atmospheric Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blecic, Jasmina; Dobbs-Dixon, Ian; Greene, Thomas, E-mail: jasmina@nyu.edu
Using the atmospheric structure from a 3D global radiation-hydrodynamic simulation of HD 189733b and the open-source Bayesian Atmospheric Radiative Transfer (BART) code, we investigate the difference between the secondary-eclipse temperature structure produced with a 3D simulation and the best-fit 1D retrieved model. Synthetic data are generated by integrating the 3D models over the Spitzer, the Hubble Space Telescope (HST), and the James Webb Space Telescope (JWST) bandpasses, covering the wavelength range between 1 and 11 μm where most spectroscopically active species have pronounced features. Using the data from different observing instruments, we present detailed comparisons between the temperature–pressure profiles recovered by BART and those from the 3D simulations. We calculate several averages of the 3D thermal structure and explore which particular thermal profile matches the retrieved temperature structure. We implement two temperature parameterizations that are commonly used in retrieval to investigate different thermal profile shapes. To assess which part of the thermal structure is best constrained by the data, we generate contribution functions for our theoretical model and each of our retrieved models. Our conclusions are strongly affected by the spectral resolution of the instruments included, their wavelength coverage, and the number of data points combined. We also see some limitations in each of the temperature parameterizations, as they are not able to fully match the complex curvatures that are usually produced in hydrodynamic simulations. The results show that our 1D retrieval recovers a temperature and pressure profile that most closely matches the arithmetic average of the 3D thermal structure.
When we use a higher resolution, more data points, and a parameterized temperature profile that allows more flexibility in the middle part of the atmosphere, we find a better match between the retrieved temperature and pressure profile and the arithmetic average. The Spitzer and HST simulated observations sample deep parts of the planetary atmosphere and provide fewer constraints on the temperature and pressure profile, while the JWST observations sample the middle part of the atmosphere, providing a good match with the middle and most complex part of the arithmetic average of the 3D temperature structure.
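The "several averages of the 3D thermal structure" that such a comparison rests on can be sketched concretely. This is an illustrative example, not BART's code: given temperature profiles T(p) from many (latitude, longitude) columns of a 3D simulation, collapse them to 1D by a plain arithmetic mean and, for contrast, a cosine-latitude area-weighted mean; the function name and array layout are assumptions.

```python
# Hypothetical sketch of collapsing a 3D thermal structure to 1D profiles:
# a plain arithmetic average over columns, and an area-weighted
# (cos-latitude) average, evaluated at each pressure level.
import numpy as np

def average_profiles(temps, lats_deg):
    """temps: (n_columns, n_levels) temperatures; lats_deg: column latitudes.
    Returns the arithmetic and cos(latitude)-weighted averages over columns."""
    w = np.cos(np.radians(lats_deg))
    arith = temps.mean(axis=0)
    area = (temps * w[:, None]).sum(axis=0) / w.sum()
    return arith, area
```

Comparing a retrieved 1D profile against each of these averages, level by level, is one way to ask which collapse of the 3D structure the retrieval is actually recovering.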
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
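The decorator-based design described above can be illustrated with a minimal sketch. The class names and samplers here are made up for the example and do not reflect SKIRT's actual C++ API: a basic building block exposes a `sample()` method that draws a random position, and a decorator wraps any component to alter its samples, so decorators chain into more complex models.

```python
# Minimal sketch of a decorator-based random position generator design
# (illustrative names, not SKIRT's actual API): building blocks and
# decorators share one sample() interface, so they compose freely.
import random

class UniformSphere:
    """Toy building block: uniform random positions inside a sphere."""
    def __init__(self, radius=1.0):
        self.radius = radius
    def sample(self):
        # rejection sampling inside the unit ball, then scale
        while True:
            x, y, z = (random.uniform(-1.0, 1.0) for _ in range(3))
            if x * x + y * y + z * z <= 1.0:
                return (x * self.radius, y * self.radius, z * self.radius)

class OffsetDecorator:
    """Decorator: shifts every sample of the wrapped component."""
    def __init__(self, component, dx, dy, dz):
        self.component, self.offset = component, (dx, dy, dz)
    def sample(self):
        return tuple(a + b for a, b in zip(self.component.sample(), self.offset))

# Decorators chain without problems:
model = OffsetDecorator(OffsetDecorator(UniformSphere(2.0), 5.0, 0.0, 0.0), 0.0, 5.0, 0.0)
pos = model.sample()   # a point within radius 2 of (5, 5, 0)
```

Because every component and decorator honours the same `sample()` contract, a clumpiness or spiral-structure decorator could be slotted in at any level of the chain without touching the other components, which is the maintainability benefit the abstract claims.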
The 3D Reference Earth Model (REM-3D): Update and Outlook
NASA Astrophysics Data System (ADS)
Lekic, V.; Moulik, P.; Romanowicz, B. A.; Dziewonski, A. M.
2016-12-01
Elastic properties of the Earth's interior (e.g. density, rigidity, compressibility, anisotropy) vary spatially due to changes in temperature, pressure, composition, and flow. In the 20th century, seismologists constructed reference models of how these quantities vary with depth, notably the PREM model of Dziewonski and Anderson (1981). These 1D reference earth models have proven indispensable in earthquake location, imaging of interior structure, understanding material properties under extreme conditions, and as a reference in other fields, such as particle physics and astronomy. Over the past three decades, more sophisticated efforts by seismologists have yielded several generations of models of how properties vary not only with depth, but also laterally. Yet, though these three-dimensional (3D) models exhibit compelling similarities at large scales, differences in the methodology, representation of structure, and datasets upon which they are based have prevented the creation of 3D community reference models. We propose to overcome these challenges by compiling, reconciling, and distributing a long-period (>15 s) reference seismic dataset, from which we will construct a 3D seismic reference model (REM-3D) for the Earth's mantle, which will come in two flavors: a long-wavelength, smoothly parameterized model and a set of regional profiles. Here, we summarize progress made in the construction of the reference long-period dataset, and present preliminary versions of the REM-3D in order to illustrate the two flavors of REM-3D and their relative advantages and disadvantages. As a community reference model with fully quantified uncertainties and tradeoffs, REM-3D will facilitate Earth imaging studies, earthquake characterization, and inferences on temperature and composition in the deep interior, and will be of improved utility to emerging scientific endeavors, such as neutrino geoscience.
In this presentation, we outline the outlook for setting up advisory community working groups and the community workshop that would assess progress, evaluate model and dataset performance, identify avenues for improvement, and recommend strategies for maximizing model adoption in and utility for the deep Earth community.
Automatic Building Abstraction from Aerial Photogrammetry
NASA Astrophysics Data System (ADS)
Ley, A.; Hänsch, R.; Hellwich, O.
2017-09-01
Multi-view stereo has been shown to be a viable tool for the creation of realistic 3D city models. Nevertheless, it still poses significant challenges, since it results in dense but noisy and incomplete point clouds when applied to aerial images. 3D city modelling usually requires a different representation of the 3D scene than these point clouds. This paper applies a fully automatic pipeline to generate a simplified mesh from a given dense point cloud. The mesh provides a certain level of abstraction, as it only consists of relatively large planar and textured surfaces. Thus, it is possible to remove noise, outliers, and clutter while maintaining a high level of accuracy.
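One step underlying this kind of planar abstraction can be sketched concretely. This is an illustrative example under common assumptions, not the authors' pipeline: fit a plane to a noisy point cluster by principal component analysis (SVD of the centered points), then project the points onto that plane, which removes noise orthogonal to large planar surfaces; the function names are made up for the example.

```python
# Illustrative plane-fitting/abstraction step (not the authors' code):
# fit a least-squares plane to a noisy cluster via SVD, then snap the
# points onto it to suppress noise normal to the surface.
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the best-fit plane of an (n, 3) array."""
    c = points.mean(axis=0)
    # the normal is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]

def project_to_plane(points, centroid, normal):
    d = (points - centroid) @ normal          # signed orthogonal distances
    return points - np.outer(d, normal)       # snap points onto the plane
```

Applied cluster by cluster after a segmentation step, such projections yield the "relatively large planar surfaces" the abstract describes, while the residuals `d` give a handle for rejecting outliers.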