Sample records for facilitates 3d object

  1. Solid object visualization of 3D ultrasound data

    NASA Astrophysics Data System (ADS)

    Nelson, Thomas R.; Bailey, Michael J.

    2000-04-01

    Visualization of volumetric medical data is challenging. Rapid-prototyping (RP) equipment that produces solid prototype models of computer-generated structures is directly applicable to visualization of medical anatomic data. The purpose of this study was to develop methods for transferring 3D ultrasound (3DUS) data to RP equipment for visualization of patient anatomy. 3DUS data were acquired using research and clinical scanning systems. Scaling information was preserved, and the data were segmented using threshold and local operators to extract features of interest, converted from voxel raster coordinate format to a set of polygons representing an iso-surface, and transferred to the RP machine to create a solid 3D object. Fabrication required 30 to 60 minutes depending on object size and complexity. After creation, the model could be touched and viewed. A '3D visualization hardcopy device' has advantages for conveying spatial relations compared to visualization using computer display systems. The hardcopy model may be used for teaching or therapy planning. Objects may be produced at the exact dimensions of the original object or scaled up (or down) to better match the viewer's reference frame. RP models represent a useful means of communicating important information in a tangible fashion to patients and physicians.
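
    A minimal sketch of the voxel-to-polygon step described above (threshold the volume, extract an iso-surface with the voxel spacing preserved, optionally rescale, and export polygons), assuming a NumPy voxel array and scikit-image's marching cubes; the threshold, spacing, and OBJ output are illustrative, not the authors' actual pipeline or RP file format.

    ```python
    # Sketch: threshold a 3D ultrasound voxel volume, extract an iso-surface,
    # apply physical voxel spacing, and write a polygon mesh (Wavefront OBJ)
    # that a rapid-prototyping tool chain could consume. Illustrative only.
    import numpy as np
    from skimage import measure  # marching-cubes iso-surfacing

    def volume_to_obj(volume: np.ndarray, threshold: float,
                      spacing=(1.0, 1.0, 1.0), scale=1.0, path="anatomy.obj"):
        # Iso-surface at the segmentation threshold; `spacing` preserves the
        # scanner's voxel dimensions so the printed object keeps true size.
        verts, faces, _, _ = measure.marching_cubes(volume, level=threshold,
                                                    spacing=spacing)
        verts *= scale  # optional up/down scaling to match the viewer's frame
        with open(path, "w") as f:
            for v in verts:
                f.write(f"v {v[0]} {v[1]} {v[2]}\n")
            for tri in faces + 1:  # OBJ indices are 1-based
                f.write(f"f {tri[0]} {tri[1]} {tri[2]}\n")

    # Hypothetical usage:
    # volume_to_obj(us_volume, threshold=90, spacing=(0.4, 0.4, 0.4), scale=1.5)
    ```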

  2. Lossy to lossless object-based coding of 3-D MRI data.

    PubMed

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting-steps scheme allows integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region of interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This adds overhead to the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performances. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
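
    To illustrate why a lifting implementation enables lossless coding, here is a minimal integer-to-integer Haar (S-transform) lifting sketch: each step uses only integer arithmetic and is exactly invertible. This is a generic one-dimensional example, not the specific 3-D filter bank used in the paper.

    ```python
    # Integer-to-integer Haar lifting (S-transform): predict/update steps with
    # rounded integer arithmetic give perfect reconstruction, the property that
    # makes lifting-based wavelet transforms suitable for lossless coding.
    import numpy as np

    def haar_lift_forward(x: np.ndarray):
        even, odd = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
        detail = odd - even                 # predict step
        approx = even + detail // 2         # update step (floor division)
        return approx, detail

    def haar_lift_inverse(approx, detail):
        even = approx - detail // 2
        odd = detail + even
        x = np.empty(even.size + odd.size, dtype=np.int64)
        x[0::2], x[1::2] = even, odd
        return x

    x = np.array([10, 12, 9, 7, 15, 15, 8, 3])
    a, d = haar_lift_forward(x)
    assert np.array_equal(haar_lift_inverse(a, d), x)  # perfect reconstruction
    ```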

  3. 3-d interpolation in object perception: evidence from an objective performance paradigm.

    PubMed

    Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana

    2005-06-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates, for a given edge, which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units. (© 2005 APA, all rights reserved).

  4. Detailed 3D representations for object recognition and modeling.

    PubMed

    Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad

    2013-11-01

    Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.

  5. Comparison of 3D displays using objective metrics

    NASA Astrophysics Data System (ADS)

    Havig, Paul; McIntire, John; Dixon, Sharon; Moore, Jason; Reis, George

    2008-04-01

    Previously, we (Havig, Aleva, Reis, Moore, and McIntire, 2007) presented a taxonomy for the development of three-dimensional (3D) displays. We proposed three levels of metrics: objective (in which physical measurements are made of the display), subjective (Likert-type rating scales to show preferences for the display), and subjective-objective (performance metrics in which one shows how the 3D display may be more or less useful than a 2D display or a different 3D display). We concluded that for each level of metric, drawing practical comparisons among currently disparate 3D displays is difficult. In this paper we attempt to define the objective metrics for 3D displays more clearly. We set out to collect and measure physical attributes of several 3D displays and compare the results. We discuss our findings in terms of both the difficulties in making the measurements in the first place, owing to the physical set-up of each display, and the issues in comparing the results and judging how similar (or dissimilar) two 3D displays may be. We conclude by discussing the next steps in creating objective metrics for three-dimensional displays, as well as a proposed way ahead for the other two levels of metrics based on our findings.

  6. 3D Volume Rendering and 3D Printing (Additive Manufacturing).

    PubMed

    Katkar, Rujuta A; Taft, Robert M; Grant, Gerald T

    2018-07-01

    Three-dimensional (3D) volume-rendered images allow 3D insight into the anatomy, facilitating surgical treatment planning and teaching. 3D printing, additive manufacturing, and rapid prototyping techniques are being used with satisfactory accuracy, mostly for diagnosis and surgical planning, followed by direct manufacture of implantable devices. The major limitation is the time and money spent generating 3D objects. Printer type, material, and build thickness are known to influence the accuracy of printed models. In implant dentistry, the use of 3D-printed surgical guides is strongly recommended to facilitate planning and reduce risk of operative complications. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. 3D object retrieval using salient views

    PubMed Central

    Shapiro, Linda G.

    2013-01-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223–232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223–232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704

  8. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
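
    A minimal sketch of the first grouping step described above, assuming the DSM and DEM are available as NumPy grids: subtract the DEM from the DSM, threshold the resulting object heights, and group cells into regions with connected-component labelling. The height and size limits are illustrative user inputs; building/tree separation, boundary tracing, and roof construction are not shown.

    ```python
    # Group above-ground points (DSM minus DEM) into candidate object regions
    # using connected-component labelling; small regions are discarded using a
    # user-supplied size limit, mirroring the "limits on building size" input.
    import numpy as np
    from scipy import ndimage

    def label_object_regions(dsm: np.ndarray, dem: np.ndarray,
                             min_height=2.0, min_cells=25):
        ndsm = dsm - dem                       # height above ground
        mask = ndsm > min_height               # candidate object cells
        labels, n = ndimage.label(mask)        # group into connected regions
        sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        keep = {i + 1 for i, s in enumerate(sizes) if s >= min_cells}
        labels[~np.isin(labels, list(keep))] = 0   # drop undersized regions
        return labels
    ```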

  9. 3D printing facilitated scaffold-free tissue unit fabrication.

    PubMed

    Tan, Yu; Richards, Dylan J; Trusk, Thomas C; Visconti, Richard P; Yost, Michael J; Kindy, Mark S; Drake, Christopher J; Argraves, William Scott; Markwald, Roger R; Mei, Ying

    2014-06-01

    Tissue spheroids hold great potential in tissue engineering as building blocks to assemble into functional tissues. To date, agarose molds have been extensively used to facilitate the fusion process of tissue spheroids. As a molding material, agarose typically requires low-temperature plates for gelation and/or heated dispenser units. Here, we proposed and developed an alginate-based, direct 3D mold-printing technology: 3D printing microdroplets of alginate solution into biocompatible, bio-inert alginate hydrogel molds for the fabrication of scaffold-free tissue engineering constructs. Specifically, we developed a 3D printing technology to deposit microdroplets of alginate solution on calcium-containing substrates in a layer-by-layer fashion to prepare ring-shaped 3D hydrogel molds. Tissue spheroids composed of 50% endothelial cells and 50% smooth muscle cells were robotically placed into the 3D printed alginate molds using a 3D printer, and were found to rapidly fuse into toroid-shaped tissue units. Histological and immunofluorescence analysis indicated that the cells secreted collagen type I, which plays a critical role in promoting cell-cell adhesion, tissue formation, and maturation.

  10. Extracting 3D Parametric Curves from 2D Images of Helical Objects.

    PubMed

    Willcocks, Chris G; Jackson, Philip T G; Nelson, Carl J; Obara, Boguslaw

    2017-09-01

    Helical objects occur in medicine, biology, cosmetics, nanotechnology, and engineering. Extracting a 3D parametric curve from a 2D image of a helical object has many practical applications, in particular being able to extract metrics such as tortuosity, frequency, and pitch. We present a method that is able to straighten the image object and derive a robust 3D helical curve from peaks in the object boundary. The algorithm has a small number of stable parameters that require little tuning, and the curve is validated against both synthetic and real-world data. The results show that the extracted 3D curve comes within close Hausdorff distance to the ground truth, and has near identical tortuosity for helical objects with a circular profile. Parameter insensitivity and robustness against high levels of image noise are demonstrated thoroughly and quantitatively.
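
    A small sketch of two of the metrics mentioned above (tortuosity and the Hausdorff distance between an extracted 3D curve and ground truth), using a synthetic parametric helix as stand-in data; the curve-extraction algorithm itself is not reproduced here.

    ```python
    # Evaluate an extracted 3D helical curve against ground truth using
    # tortuosity and the symmetric Hausdorff distance. The helix generator,
    # noise level, and sample count are illustrative, not the paper's data.
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def helix(n=500, radius=1.0, pitch=0.5, turns=4.0):
        t = np.linspace(0.0, 2.0 * np.pi * turns, n)
        return np.c_[radius * np.cos(t), radius * np.sin(t), pitch * t / (2 * np.pi)]

    def tortuosity(curve: np.ndarray) -> float:
        # arc length divided by end-to-end (chord) distance
        arc = np.sum(np.linalg.norm(np.diff(curve, axis=0), axis=1))
        chord = np.linalg.norm(curve[-1] - curve[0])
        return arc / chord

    truth = helix()
    estimate = truth + np.random.normal(scale=0.01, size=truth.shape)
    h = max(directed_hausdorff(truth, estimate)[0],
            directed_hausdorff(estimate, truth)[0])   # symmetric Hausdorff
    print(f"tortuosity={tortuosity(estimate):.3f}, Hausdorff={h:.4f}")
    ```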

  11. 3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana

    2005-01-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…

  12. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.

  13. An Evaluative Review of Simulated Dynamic Smart 3d Objects

    NASA Astrophysics Data System (ADS)

    Romeijn, H.; Sheth, F.; Pettit, C. J.

    2012-07-01

    Three-dimensional (3D) modelling of plants can be an asset for creating agricultural visualisation products. The continuum of 3D plant models ranges from static to dynamic objects, also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages to applications in agricultural research, particularly in simulating plant behaviour and the influence of external environmental factors. Approaches to 3D plant object visualisation range from plants rendered with photographed billboard images to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model physical reactions of plants to external factors and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of plant-based object simulation programs currently available, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, the features and use of these programs, and the possible opportunities in deploying these for creating smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.

  14. Watermarking 3D Objects for Verification

    DTIC Science & Technology

    1999-01-01

    signal (audio/image/video) processing and steganography fields, and even newer to the computer graphics community. Inherently, digital watermarking of ... quality images, and digital video. The field of digital watermarking is relatively new, and many of its terms have not been well defined. Among the different media types, watermarking of 2D still images is comparatively better studied. Inherently, digital watermarking of 3D objects remains a

  15. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).

  16. Aging preserves the ability to perceive 3D object shape from static but not deforming boundary contours.

    PubMed

    Norman, J Farley; Bartholomew, Ashley N; Burton, Cory L

    2008-09-01

    A single experiment investigated how younger (aged 18-32 years) and older (aged 62-82 years) observers perceive 3D object shape from deforming and static boundary contours. On any given trial, observers were shown two smoothly-curved objects, similar to water-smoothed granite rocks, and were required to judge whether they possessed the "same" or "different" shape. The objects presented during the "different" trials produced differently-shaped boundary contours. The objects presented during the "same" trials also produced different boundary contours, because one of the objects was always rotated in depth relative to the other by 5, 25, or 45 degrees. Each observer participated in 12 experimental conditions formed by the combination of 2 motion types (deforming vs. static boundary contours), 2 surface types (objects depicted as silhouettes or with texture and Lambertian shading), and 3 angular offsets (5, 25, and 45 degrees). When there was no motion (static silhouettes or stationary objects presented with shading and texture), the older observers performed as well as the younger observers. In the moving object conditions with shading and texture, the older observers' performance was facilitated by the motion, but the amount of this facilitation was reduced relative to that exhibited by the younger observers. In contrast, the older observers obtained no benefit in performance at all from the deforming (i.e., moving) silhouettes. The reduced ability of older observers to perceive 3D shape from motion is probably due to a low-level deterioration in the ability to detect and discriminate motion itself.

  17. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval because of the highly discriminative property of 3-D objects and their multi-view representation. State-of-the-art methods depend heavily on their own camera array settings for capturing views of the 3-D object and use a complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes them inefficient for retrieval. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. In order to move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning that views can be captured from any direction without any camera array restriction. The views (including the query views) are clustered to generate view clusters, which are then used to build the query model with the HMM. In our proposed method, the HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval is performed by combining the query model with the HMM. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme shows better performance than existing methods. [Figure not available: see fulltext.]
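
    A minimal sketch of the train/score idea behind HMM-based view retrieval: fit an HMM to the query object's view descriptors ("HMM estimate"), then rank database objects by log-likelihood ("HMM decode"). The hmmlearn dependency, descriptor shapes, and state count are assumptions; view clustering and the exact EVBOR model structure are not reproduced here.

    ```python
    # Train an HMM on the query object's view descriptors and rank database
    # objects by the log-likelihood their views receive under that model.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # assumed third-party dependency

    def rank_by_hmm(query_views: np.ndarray, database: dict, n_states=4):
        # query_views: (n_views, n_features) descriptors of the query object
        # database: {object_name: (n_views, n_features) descriptor array}
        model = GaussianHMM(n_components=n_states, covariance_type="diag")
        model.fit(query_views)                        # "HMM estimate" step
        scores = {name: model.score(views)            # "HMM decode" step
                  for name, views in database.items()}
        return sorted(scores, key=scores.get, reverse=True)
    ```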

  18. 2D virtual texture on 3D real object with coded structured light

    NASA Astrophysics Data System (ADS)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality is used to improve color segmentation on the human body or on precious artifacts that must not be touched. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and by capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object surface. We propose a global and automatic method to virtually texture a 3D real object.

  19. Combining 3D structure of real video and synthetic objects

    NASA Astrophysics Data System (ADS)

    Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon

    1998-04-01

    This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in the fields of advanced animation, virtual reality, games, and so forth. Computer graphics has previously been used in these fields. Recently, some applications have added real video to graphic scenes for the purpose of augmenting the realism that computer graphics lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the use of computer graphics alone. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with a 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map. Graphic objects are then combined with the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive 3D structure from test image sequences; the extraction of the 3D structure requires the estimation of depth and the construction of a height map, and due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, step (3) is easily accomplished. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
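
    A small sketch of step (2) above, assuming SciPy is available: triangulate a height map with Delaunay triangulation so that each planar facet can later be texture-mapped and combined with synthetic objects. The grid size and heights are illustrative stand-ins.

    ```python
    # Delaunay-triangulate a height map: triangulate the 2D grid positions,
    # then lift the vertices to 3D using the per-cell heights.
    import numpy as np
    from scipy.spatial import Delaunay

    h, w = 64, 64
    height = np.random.rand(h, w)                    # stand-in height map
    ys, xs = np.mgrid[0:h, 0:w]
    points2d = np.c_[xs.ravel(), ys.ravel()]         # triangulate in the plane
    tri = Delaunay(points2d)
    vertices3d = np.c_[points2d, height.ravel()]     # lift triangles to 3D
    print(f"{len(tri.simplices)} triangles over {len(vertices3d)} lifted vertices")
    ```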

  20. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
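
    A toy sketch (in Python rather than Java) of the command convention described above: the first word of each text string names the command and the remainder are data arguments. The command names and handlers are hypothetical, not FastScript3D's own vocabulary.

    ```python
    # Parse one-line text-string commands: first token is the command name,
    # the remaining tokens are its data arguments. Handlers are placeholders.
    def parse_command(line: str):
        name, *args = line.strip().split()
        return name.lower(), args

    HANDLERS = {
        "sphere": lambda args: print(f"create sphere radius={args[0]}"),
        "rotate": lambda args: print(f"rotate {args[0]} by {args[1]} deg about {args[2]}"),
    }

    for line in ["SPHERE 2.5", "ROTATE ball 45 y"]:
        cmd, args = parse_command(line)
        HANDLERS[cmd](args)
    ```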

  1. Recognition of 3-D Scene with Partially Occluded Objects

    NASA Astrophysics Data System (ADS)

    Lu, Siwei; Wong, Andrew K. C.

    1987-03-01

    This paper presents a robot vision system which is capable of recognizing objects in a 3-D scene and interpreting their spatial relation even though some objects in the scene may be partially occluded by other objects. An algorithm is developed to transform the geometric information from the range data into an attributed hypergraph representation (AHR). A hypergraph monomorphism algorithm is then used to compare the AHR of objects in the scene with a set of complete AHR's of prototypes. The capability of identifying connected components and interpreting various types of edges in the 3-D scene enables us to distinguish objects which are partially blocking each other in the scene. Using structural information stored in the primitive area graph, a heuristic hypergraph monomorphism algorithm provides an effective way for recognizing, locating, and interpreting partially occluded objects in the range image.

  2. 3D-Web-GIS RFID Location Sensing System for Construction Objects

    PubMed Central

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency. PMID:23864821

  3. 3D-Web-GIS RFID location sensing system for construction objects.

    PubMed

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.
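
    A rough sketch of the SA-plus-gradient-descent idea described above for 3D tag positioning: simulated annealing explores candidate positions to stabilize the search, then gradient descent refines the best candidate against measured reader ranges. The cost function, annealing schedule, and step sizes are illustrative assumptions, not the paper's parameters.

    ```python
    # Estimate a 3D tag position from RFID reader ranges: SA for global search,
    # then gradient descent on the squared range-residual cost for refinement.
    import numpy as np

    def cost(p, readers, ranges):
        # readers: (n, 3) reader coordinates; ranges: (n,) measured distances
        return np.sum((np.linalg.norm(readers - p, axis=1) - ranges) ** 2)

    def grad(p, readers, ranges):
        d = np.linalg.norm(readers - p, axis=1)
        return np.sum((2 * (d - ranges) / d)[:, None] * (p - readers), axis=0)

    def locate(readers, ranges, iters=2000, t0=5.0, lr=0.01):
        rng = np.random.default_rng(0)
        best = p = readers.mean(axis=0)              # start at reader centroid
        for k in range(iters):                       # simulated annealing phase
            t = t0 * (1 - k / iters) + 1e-6
            cand = p + rng.normal(scale=0.5, size=3)
            c_p, c_cand = cost(p, readers, ranges), cost(cand, readers, ranges)
            if c_cand < c_p or rng.random() < np.exp(-(c_cand - c_p) / t):
                p = cand
                if c_cand < cost(best, readers, ranges):
                    best = p
        for _ in range(200):                         # gradient-descent refinement
            best = best - lr * grad(best, readers, ranges)
        return best
    ```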

  4. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Mostafavi, Mir Abolfazl; Wang, Chen

    2016-06-01

    Topological relations are fundamental for qualitative description, querying, and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by the Boundary Representation model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet, and intersect. The last element in the 3×3 matrix records the details of connection through the common parts of two regions and the intersecting line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point clouds automatically.

  5. An interactive framework for acquiring vision models of 3-D objects from 2-D images.

    PubMed

    Motai, Yuichi; Kak, Avinash

    2004-02-01

    This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human-marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. As another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of the confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We show results on both polygonal objects and objects containing curved features.
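
    A small sketch of the epipolar assistance mentioned above, assuming a known fundamental matrix F: the line l' = F x in the second image constrains where the corresponding point may lie, and the point-to-line distance can be used to check a human-marked correspondence.

    ```python
    # Compute the epipolar line of a point from image 1 in image 2 and measure
    # how far a human-marked candidate point lies from that line.
    import numpy as np

    def epipolar_line(F: np.ndarray, x):
        # x = (u, v) pixel in image 1 -> line coefficients (a, b, c) in image 2
        a, b, c = F @ np.array([x[0], x[1], 1.0])
        return a, b, c

    def distance_to_line(point, line):
        a, b, c = line
        u, v = point
        return abs(a * u + b * v + c) / np.hypot(a, b)

    # Hypothetical consistency check on a human-marked match:
    # ok = distance_to_line(marked_point_img2, epipolar_line(F, point_img1)) < 2.0
    ```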

  6. A New 3D Object Pose Detection Method Using LIDAR Shape Set

    PubMed Central

    Kim, Jung-Un

    2018-01-01

    In object detection systems for autonomous driving, LIDAR sensors provide very useful information. However, problems occur because the object representation is greatly distorted by changes in distance. To solve this problem, we propose a LIDAR shape set that reconstructs the shape surrounding the object more clearly by using the LIDAR point information projected on the object. The LIDAR shape set restores object shape edges from a bird’s eye view by filtering LIDAR points projected on a 2D pixel-based front view. In this study, we use this shape set for two purposes. The first is to supplement the shape set with a LIDAR feature map, and the second is to divide the entire shape set according to the gradient of the depth and density to create 2D and 3D bounding box proposals for each object. We present a multimodal fusion framework that classifies objects and restores the 3D pose of each object using enhanced feature maps and shape-based proposals. The network structure consists of a VGG-based object classifier that receives multiple inputs and a LIDAR-based Region Proposal Network (RPN) that identifies object poses. It works in a very intuitive and efficient manner and can be extended to classes other than vehicles. Our approach outperforms the latest studies conducted on the KITTI data sets in object classification accuracy (Average Precision, AP) and 3D pose restoration accuracy (3D bounding box recall rate). PMID:29547551

  7. A New 3D Object Pose Detection Method Using LIDAR Shape Set.

    PubMed

    Kim, Jung-Un; Kang, Hang-Bong

    2018-03-16

    In object detection systems for autonomous driving, LIDAR sensors provide very useful information. However, problems occur because the object representation is greatly distorted by changes in distance. To solve this problem, we propose a LIDAR shape set that reconstructs the shape surrounding the object more clearly by using the LIDAR point information projected on the object. The LIDAR shape set restores object shape edges from a bird's eye view by filtering LIDAR points projected on a 2D pixel-based front view. In this study, we use this shape set for two purposes. The first is to supplement the shape set with a LIDAR feature map, and the second is to divide the entire shape set according to the gradient of the depth and density to create 2D and 3D bounding box proposals for each object. We present a multimodal fusion framework that classifies objects and restores the 3D pose of each object using enhanced feature maps and shape-based proposals. The network structure consists of a VGG-based object classifier that receives multiple inputs and a LIDAR-based Region Proposal Network (RPN) that identifies object poses. It works in a very intuitive and efficient manner and can be extended to classes other than vehicles. Our approach outperforms the latest studies conducted on the KITTI data sets in object classification accuracy (Average Precision, AP) and 3D pose restoration accuracy (3D bounding box recall rate).

  8. Automatic 3D inspection metrology for high-temperature objects

    NASA Astrophysics Data System (ADS)

    Han, Liya; Li, Zhongwei; Zhong, Kai; Yi, Jie; Shi, Yusheng; Cheng, Xu; Zhan, Guomin; Chen, Ran

    2017-06-01

    3D visual inspection of high-temperature objects has attracted more and more attention in the industrial and manufacturing field. Until now it has remained difficult to measure the shape of high-temperature objects due to the following problems: 1) the radiation and heat transferred through the air seriously affect both humans and measurement equipment, so manual measurement is not feasible in this situation; 2) because of the difficulty of handling the surfaces of hot objects, it is hard to use artificial markers to align different pieces of data. In order to solve these problems, an automatic 3D shape measurement system for high-temperature objects is proposed by combining an industrial robot with a structured blue-light 3D scanner. In this system, the route for inspection is planned with the cooled object and then executed automatically with the same object in the hot state to avoid manual operations. The route is carefully planned to reduce the exposure time of the measurement equipment to the high-temperature situation. Different pieces of data are then pre-mapped during the planning procedure. In the executing procedure, they can be aligned accurately thanks to the good repeatability of the industrial robot. Finally, different pieces of data are merged without artificial markers and the results are better than methods with traditional hand-eye calibration. Experiments verify that the proposed system can conduct the inspection of forging parts at temperatures of 900°C and that the alignment precision is 0.0013 rad and 0.28 mm.

  9. Combining heterogenous features for 3D hand-held object recognition

    NASA Astrophysics Data System (ADS)

    Lv, Xiong; Wang, Shuang; Li, Xiangyang; Jiang, Shuqiang

    2014-10-01

    Object recognition has wide applications in the areas of human-machine interaction and multimedia retrieval. However, due to the problems of visual polysemy and concept polymorphism, it is still a great challenge to obtain reliable recognition results for 2D images. Recently, with the emergence and easy availability of RGB-D equipment such as the Kinect, this challenge can be relieved because the depth channel brings more information. A very special and important case of object recognition is hand-held object recognition, as the hand is a straightforward and natural medium for both human-human and human-machine interaction. In this paper, we study the problem of 3D object recognition by combining heterogeneous features with different modalities and extraction techniques. Hand-crafted features, although they preserve low-level information such as shape and color, are weaker at representing high-level semantic information than automatically learned features, especially deep features. Deep features have shown great advantages in large-scale dataset recognition but are not always robust to rotation or scale variance compared with hand-crafted features. We therefore propose a method to combine hand-crafted point cloud features and deep learned features in the RGB and depth channels. First, hand-held object segmentation is implemented using depth cues and human skeleton information. Second, we combine the extracted heterogeneous 3D features in different stages using linear concatenation and multiple kernel learning (MKL). A trained model is then used to recognize 3D hand-held objects. Experimental results validate the effectiveness and generalization ability of the proposed method.

  10. Embedding objects during 3D printing to add new functionalities.

    PubMed

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films with and without an embedded porous membrane, and optical devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed and optical transparency is highly desirable in any fluidic devices; integrated glass cover slips or polystyrene films would provide a perfect optical transparent window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  11. Embedding objects during 3D printing to add new functionalities

    PubMed Central

    2016-01-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films with and without an embedded porous membrane, and optical devices with embedded Corning® Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed and optical transparency is highly desirable in any fluidic devices; integrated glass cover slips or polystyrene films would provide a perfect optical transparent window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning® Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication. These

  12. Planning 3-D collision-free paths using spheres

    NASA Technical Reports Server (NTRS)

    Bonner, Susan; Kelley, Robert B.

    1989-01-01

    A scheme for the representation of objects, the Successive Spherical Approximation (SSA), facilitates the rapid planning of collision-free paths in a 3-D, dynamic environment. The hierarchical nature of the SSA allows collision-free paths to be determined efficiently while still providing for the exact representation of dynamic objects. The concept of a freespace cell is introduced to allow human 3-D conceptual knowledge to be used in facilitating satisfying choices for paths. Collisions can be detected at a rate better than 1 second per environment object per path. This speed enables the path planning process to apply a hierarchy of rules to create a heuristically satisfying collision-free path.
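
    A minimal sketch of the sphere-hierarchy collision test implied above: two objects can only collide if their bounding spheres overlap at every level of the hierarchy, so most object pairs are rejected cheaply near the root. The node layout is an assumption for illustration, not the SSA data structure itself.

    ```python
    # Hierarchical sphere-sphere collision test: prune whole branches whenever
    # the enclosing spheres do not overlap, descending only where they do.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SphereNode:
        center: tuple          # (x, y, z)
        radius: float
        children: List["SphereNode"] = field(default_factory=list)

    def spheres_overlap(a: SphereNode, b: SphereNode) -> bool:
        d2 = sum((p - q) ** 2 for p, q in zip(a.center, b.center))
        return d2 <= (a.radius + b.radius) ** 2

    def collide(a: SphereNode, b: SphereNode) -> bool:
        if not spheres_overlap(a, b):
            return False                      # prune this whole branch
        if not a.children and not b.children:
            return True                       # leaf spheres overlap -> collision
        if a.children:                        # descend into the finer hierarchy
            return any(collide(ca, b) for ca in a.children)
        return any(collide(a, cb) for cb in b.children)
    ```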

  13. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition of the reconstructed 3D human representation compared to animated computer avatars.

  14. A wavelet-based Bayesian framework for 3D object segmentation in microscopy

    NASA Astrophysics Data System (ADS)

    Pan, Kangyu; Corrigan, David; Hillebrand, Jens; Ramaswami, Mani; Kokaram, Anil

    2012-03-01

    In confocal microscopy, target objects are labeled with fluorescent markers in the living specimen, and usually appear with irregular brightness in the observed images. Also, due to the existence of out-of-focus objects in the image, the segmentation of 3-D objects in the stack of image slices captured at different depth levels of the specimen still relies heavily on manual analysis. In this paper, a novel Bayesian model is proposed for segmenting 3-D synaptic objects from a given image stack. In order to solve the irregular-brightness and out-of-focus problems, the segmentation model employs a likelihood using the luminance-invariant 'wavelet features' of image objects in the dual-tree complex wavelet domain as well as a likelihood based on the vertical intensity profile of the image stack in 3-D. Furthermore, a smoothness 'frame' prior based on the a priori knowledge of the connections of the synapses is introduced to the model for enhancing the connectivity of the synapses. As a result, our model can successfully segment the in-focus target synaptic objects from a 3D image stack with irregular brightness.

  15. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

    An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  16. Analysis of students’ spatial thinking in geometry: 3D object into 2D representation

    NASA Astrophysics Data System (ADS)

    Fiantika, F. R.; Maknun, C. L.; Budayasa, I. K.; Lukito, A.

    2018-05-01

    The aim of this study is to examine the spatial thinking process of students in transforming a 3-dimensional (3D) object into a 2-dimensional (2D) representation. Spatial thinking is helpful in using maps, planning routes, designing floor plans, and creating art. Students can engage with geometric ideas by using concrete models and drawing. Spatial thinking in this study is identified through geometrical problems of transforming a 3-dimensional object into a 2-dimensional object image. The problem was solved by the subjects and analyzed with reference to predetermined spatial thinking indicators. Two representative elementary school subjects were chosen based on mathematical ability and visual learning style. An explorative description through a qualitative approach was used in this study. The results of this study are: 1) the boy and the girl subjects showed different representations of spatial thinking, and 2) each subject had their own way of finding the fastest way to draw a cube net.

  17. 3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging

    NASA Astrophysics Data System (ADS)

    Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak

    2017-10-01

    Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security uses, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral imaging data in such a scenario, the proposed method performs the 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.

  18. High School Students' Forming 3D Objects Using Technological and Non-Technological Tools

    ERIC Educational Resources Information Center

    Okumus, Samet; Hollebrands, Karen

    2016-01-01

    We analyzed the ways in which two high school students formed 3D objects from the rotation of 2D figures. The students participated in a task-based interview using paper-and-pencil, manipulatives, and Cabri 3D. The results indicated that they had difficulty using paper-and-pencil to rotate 2D figures to form 3D objects. Their difficulty stemmed…

  19. Identification of geometric faces in hand-sketched 3D objects containing curved lines

    NASA Astrophysics Data System (ADS)

    El-Sayed, Ahmed M.; Wahdan, A. A.; Youssif, Aliaa A. A.

    2017-07-01

    The reconstruction of 3D objects from 2D line drawings is regarded as one of the key topics in the field of computer vision. The ongoing research is mainly focusing on the reconstruction of 3D objects that are mapped only from 2D straight lines, and that are symmetric in nature. Commonly, this approach only produces basic and simple shapes that are mostly flat or rather polygonized in nature, which is normally attributed to inability to handle curves. To overcome the above-mentioned limitations, a technique capable of handling non-symmetric drawings that encompass curves is considered. This paper discusses a novel technique that can be used to reconstruct 3D objects containing curved lines. In addition, it highlights an application that has been developed in accordance with the suggested technique that can convert a freehand sketch to a 3D shape using a mobile phone.

  20. An Effective 3D Shape Descriptor for Object Recognition with RGB-D Sensors

    PubMed Central

    Liu, Zhong; Zhao, Changchen; Wu, Xingming; Chen, Weihai

    2017-01-01

    RGB-D sensors have been widely used in various areas of computer vision and graphics. A good descriptor will effectively improve the performance of operation. This article further analyzes the recognition performance of shape features extracted from multi-modality source data using RGB-D sensors. A hybrid shape descriptor is proposed as a representation of objects for recognition. We first extracted five 2D shape features from contour-based images and five 3D shape features over point cloud data to capture the global and local shape characteristics of an object. The recognition performance was tested for category recognition and instance recognition. Experimental results show that the proposed shape descriptor outperforms several common global-to-global shape descriptors and is comparable to some partial-to-global shape descriptors that achieved the best accuracies in category and instance recognition. Contribution of partial features and computational complexity were also analyzed. The results indicate that the proposed shape features are strong cues for object recognition and can be combined with other features to boost accuracy. PMID:28245553

  1. OB3D, a new set of 3D objects available for research: a web-based study

    PubMed Central

    Buffat, Stéphane; Chastres, Véronique; Bichot, Alain; Rider, Delphine; Benmussa, Frédéric; Lorenceau, Jean

    2014-01-01

    Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available sets that cannot always be modified and adapted to meet the specific goals of each study. We here present a new set of 3D scans of real objects available on-line as ASCII files, OB3D. These files are lists of dots, each defined by a triplet of spatial coordinates and their normal that allow simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lower Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc. PMID:25339920

  2. Laminated Object Manufacturing of 3D-Printed Laser-Induced Graphene Foams.

    PubMed

    Luong, Duy Xuan; Subramanian, Ajay K; Silva, Gladys A Lopez; Yoon, Jongwon; Cofer, Savannah; Yang, Kaichun; Owuor, Peter Samora; Wang, Tuo; Wang, Zhe; Lou, Jun; Ajayan, Pulickel M; Tour, James M

    2018-05-29

    Laser-induced graphene (LIG), a graphene structure synthesized by a one-step process through laser treatment of commercial polyimide (PI) film in an ambient atmosphere, has been shown to be a versatile material in applications ranging from energy storage to water treatment. However, the process as developed produces only a 2D product on the PI substrate. Here, a 3D LIG foam printing process is developed on the basis of laminated object manufacturing, a widely used additive-manufacturing technique. A subtractive laser-milling process to yield further refinements to the 3D structures is also developed and shown here. By combining both techniques, various 3D graphene objects are printed. The LIG foams show good electrical conductivity and mechanical strength, as well as viability in various energy storage and flexible electronic sensor applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness, measured in terms of form and size distortions, it is found that the two local regularities L-MSDA and L-MSDSM produce better performance when combined. In addition, the best weightings for combining them are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined use of L-MSDA and L-MSDSM with these weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.

  4. Monoplane 3D-2D registration of cerebral angiograms based on multi-objective stratified optimization

    NASA Astrophysics Data System (ADS)

    Aksoy, T.; Špiclin, Ž.; Pernuš, F.; Unal, G.

    2017-12-01

    Registration of 3D pre-interventional to 2D intra-interventional medical images has an increasingly important role in surgical planning, navigation and treatment, because it enables the physician to co-locate the depth information given by pre-interventional 3D images with the live information in intra-interventional 2D images such as x-rays. Most tasks during image-guided interventions are carried out under monoplane x-ray, which poses a highly ill-posed problem for state-of-the-art 3D to 2D registration methods. To address the problem of rigid 3D-2D monoplane registration we propose a novel multi-objective stratified parameter optimization, wherein a small set of high-magnitude intensity gradients are matched between the 3D and 2D images. The stratified parameter optimization matches rotation templates to depth templates, the former sampled from the projected 3D gradients and the latter from the 2D image gradients, so as to recover the 3D rigid-body rotations and the out-of-plane translation. The matching objective is the gradient magnitude correlation coefficient, which is invariant to in-plane translation. The in-plane translations are then found by locating the maximum of the gradient phase correlation between the best matching pair of rotation and depth templates. On twenty pairs of 3D and 2D images of ten patients undergoing cerebral endovascular image-guided interventions, 3D to monoplane 2D registration experiments were set up with a rather high range of initial mean target registration errors, from 0 to 100 mm. The proposed method effectively reduced the registration error to below 2 mm, which was further refined by a fast iterative method and resulted in a high final registration accuracy (0.40 mm) and a high success rate (> 96%). Taking into account a fast execution time below 10 s, the observed performance of the proposed method shows high potential for integration into clinical image-guidance systems.
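
    The in-plane translation step relies on phase correlation. Below is a generic Python/NumPy sketch of phase correlation between two 2D arrays; it illustrates the principle only and operates on plain images rather than on the paper's matched gradient templates.

    ```python
    import numpy as np

    def phase_correlation_shift(reference, shifted):
        """Estimate the 2D translation of 'shifted' relative to 'reference'
        by locating the peak of their normalized cross-power spectrum.
        Illustrative sketch of phase correlation, integer precision only."""
        Fr = np.fft.fft2(reference)
        Fs = np.fft.fft2(shifted)
        cross = Fs * np.conj(Fr)
        cross /= np.abs(cross) + 1e-12              # keep phase information only
        corr = np.fft.ifft2(cross).real
        peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
        shape = np.array(corr.shape, dtype=float)
        peak[peak > shape / 2] -= shape[peak > shape / 2]   # wrap to signed shifts
        return peak                                  # (row_shift, col_shift)
    ```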

  5. Divided attention limits perception of 3-D object shapes

    PubMed Central

    Scharff, Alec; Palmer, John; Moore, Cathleen M.

    2013-01-01

    Can one perceive multiple object shapes at once? We tested two benchmark models of object shape perception under divided attention: an unlimited-capacity and a fixed-capacity model. Under unlimited-capacity models, shapes are analyzed independently and in parallel. Under fixed-capacity models, shapes are processed at a fixed rate (as in a serial model). To distinguish these models, we compared conditions in which observers were presented with simultaneous or sequential presentations of a fixed number of objects (The extended simultaneous-sequential method: Scharff, Palmer, & Moore, 2011a, 2011b). We used novel physical objects as stimuli, minimizing the role of semantic categorization in the task. Observers searched for a specific object among similar objects. We ensured that non-shape stimulus properties such as color and texture could not be used to complete the task. Unpredictable viewing angles were used to preclude image-matching strategies. The results rejected unlimited-capacity models for object shape perception and were consistent with the predictions of a fixed-capacity model. In contrast, a task that required observers to recognize 2-D shapes with predictable viewing angles yielded an unlimited capacity result. Further experiments ruled out alternative explanations for the capacity limit, leading us to conclude that there is a fixed-capacity limit on the ability to perceive 3-D object shapes. PMID:23404158

  6. A standardized set of 3-D objects for virtual reality research and applications.

    PubMed

    Peeters, David

    2018-06-01

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.

  7. Augmented Reality versus Virtual Reality for 3D Object Manipulation.

    PubMed

    Krichenbauer, Max; Yamamoto, Goshiro; Taketom, Takafumi; Sandor, Christian; Kato, Hirokazu

    2018-02-01

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance, measured as task completion time, on a 9-degrees-of-freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion times in AR than in VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5 percent on average compared to AR. Surprisingly, a similar effect occurred when using a mouse: users were about 17.3 percent slower in VR than in AR. Mouse and 3D input device produced similar task completion times in each condition (AR or VR). We further found no differences in reported comfort.

  8. A 3D Scan Model and Thermal Image Data Fusion Algorithms for 3D Thermography in Medicine

    PubMed Central

    Klima, Ondrej

    2017-01-01

    Objectives: At present, medical thermal imaging is still considered a merely qualitative tool, enabling us to distinguish between, but not to quantify, the physiological and nonphysiological states of the body. Such a capability would, however, help solve the problem of medical quantification that currently affects the entire healthcare system. Methods: A generally applicable method to enhance captured 3D spatial data with temperature-related information is presented; in this context, all equations required for other data fusions are derived. The method can be utilized for high-density point clouds or detailed meshes at high resolution, but it is also conveniently usable for large objects with sparse points. Results: The benefits of the approach are experimentally demonstrated on 3D thermal scans of injured subjects. We obtained diagnostic information inaccessible via traditional methods. Conclusion: Fusing a 3D model with thermal image data allows the quantification of inflammation, facilitating more precise diagnostics and monitoring of injury and illness. The technique offers wide application potential in medicine and in multiple technological domains, including electrical and mechanical engineering. PMID:29250306
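
    The core fusion step, assigning a temperature to each 3D point, can be illustrated with a standard pinhole projection. The Python sketch below assumes calibrated intrinsics K and extrinsics R, t relating the thermal camera to the 3D scan; it is a generic formulation, not the equations derived in the paper.

    ```python
    import numpy as np

    def fuse_thermal_onto_points(points_xyz, thermal_img, K, R, t):
        """Assign a temperature to each 3D point by projecting it into a thermal
        image with a pinhole camera model. K (3x3 intrinsics) and R, t
        (extrinsics) are assumed to come from a prior calibration."""
        cam = (R @ points_xyz.T + t.reshape(3, 1)).T        # camera coordinates
        in_front = cam[:, 2] > 1e-6
        uvw = (K @ cam.T).T
        u = uvw[:, 0] / uvw[:, 2]
        v = uvw[:, 1] / uvw[:, 2]
        h, w = thermal_img.shape
        valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        temps = np.full(len(points_xyz), np.nan)            # NaN = outside field of view
        temps[valid] = thermal_img[v[valid].astype(int), u[valid].astype(int)]
        return temps
    ```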

  9. The potential of 3D techniques for cultural heritage object documentation

    NASA Astrophysics Data System (ADS)

    Bitelli, Gabriele; Girelli, Valentina A.; Remondino, Fabio; Vittuari, Luca

    2007-01-01

    The generation of 3D models of objects has become an important research topic in many fields of application, such as industrial inspection, robotics, navigation and body scanning. Recently, techniques for generating photo-textured 3D digital models have also attracted interest in the field of Cultural Heritage, due to their capability to combine high-precision metric information with a qualitative, photographic description of the objects. This kind of product is in fact a fundamental support for the documentation, study and restoration of works of art, up to the production of replicas by rapid prototyping techniques. Close-range photogrammetric techniques are nowadays used more and more frequently for the generation of precise 3D models. With the advent of automated procedures and fully digital products in the 1990s, photogrammetry has become easier to use and cheaper, and a wide range of commercial software is now available to calibrate, orient and reconstruct objects from images. This paper presents the complete process for the derivation of a photorealistic 3D model of an important basalt stela (about 70 x 60 x 25 cm) discovered in the archaeological site of Tilmen Höyük, Turkey, dating back to the 2nd millennium BC. We report the modeling performed using passive and active sensors and a comparison of the achieved results.

  10. 3D geospatial visualizations: Animation and motion effects on spatial objects

    NASA Astrophysics Data System (ADS)

    Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos

    2018-02-01

    Digital Elevation Models (DEMs), in combination with high-quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (virtual globe) and an amazing navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (JavaScript) makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). One step further, by employing WebGL frameworks (e.g. Cesium.js, three.js), animation and motion effects can be attributed to 3D models. However, major GIS-based functionalities combined with all the above-mentioned visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) or motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this end, we developed and made available to the research community an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models on user-specified locations, paths and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.

  11. 3D Object Recognition: Symmetry and Virtual Views

    DTIC Science & Technology

    1992-12-01

    A.I. Memo No. 1409 / C.B.C.L. Paper No. 76, December 1992. Artificial Intelligence Laboratory and Center for Biological and Computational Learning, 545 Technology Square, Cambridge. The report describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory.

  12. The effect of background and illumination on color identification of real, 3D objects.

    PubMed

    Allred, Sarah R; Olkkonen, Maria

    2013-01-01

    For the surface reflectance of an object to be a useful cue to object identity, judgments of its color should remain stable across changes in the object's environment. In 2D scenes, there is general consensus that color judgments are much more stable across illumination changes than background changes. Here we investigate whether these findings generalize to real 3D objects. Observers made color matches to cubes as we independently varied both the illumination impinging on the cube and the 3D background of the cube. As in 2D scenes, we found relatively high but imperfect stability of color judgments under an illuminant shift. In contrast to 2D scenes, we found that background had little effect on average color judgments. In addition, variability of color judgments was increased by an illuminant shift and decreased by embedding the cube within a background. Taken together, these results suggest that in real 3D scenes with ample cues to object segregation, the addition of a background may improve stability of color identification.

  13. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  14. Automatic image database generation from CAD for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.

    1993-06-01

    The development and evaluation of multiple-view 3-D object recognition systems is based on a large set of model images. Due to the various advantages of using CAD, it is becoming more and more practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for the automatic generation of the various aspects (views) of the objects in a model-based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file sharing system (NFS), the images can be stored directly in a database located on a file server. This paper presents image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, advantages and limitations of using CAD data, and conclusions from using such a scheme are also presented.
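
    Generating views at desired orientations on the unit Gaussian sphere can be illustrated with a simple viewpoint sampler. The Python sketch below places camera positions roughly uniformly on the sphere using a Fibonacci lattice; the actual tessellation used by the authors is not specified in the abstract.

    ```python
    import numpy as np

    def gaussian_sphere_viewpoints(n_views, radius=1.0):
        """Roughly uniform camera positions on the unit (Gaussian) sphere via a
        Fibonacci lattice; each position could drive a CAD renderer to produce
        one aspect (view) of the model. Illustrative sampling only."""
        i = np.arange(n_views)
        golden = (1 + 5 ** 0.5) / 2
        z = 1 - 2 * (i + 0.5) / n_views        # evenly spaced heights in [-1, 1]
        theta = 2 * np.pi * i / golden         # golden-ratio azimuth spacing
        r = np.sqrt(1 - z ** 2)
        return radius * np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
    ```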

  15. The 3D scanner prototype utilize object profile imaging using line laser and octave software

    NASA Astrophysics Data System (ADS)

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

    A three-dimensional scanner, or 3D scanner, is a device that reconstructs a real object in digital form on a computer. 3D scanning is a technology still being developed, especially in developed countries, and current 3D scanner devices are advanced systems with very high prices. This study presents a simple prototype of a 3D scanner with a very low investment cost. The prototype consists of a webcam, a rotating desk system controlled by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects with the same radius from their center point (object pivot). Scanning is performed by imaging the object profile illuminated by the line laser, which is captured by the camera and processed on a computer (image processing) using Octave software. For each image acquisition, the scanned object on the rotating desk is rotated by a certain angle, so that for one full turn multiple images covering all sides of the object are obtained. The profiles of all images are then extracted in order to obtain the digital object dimensions. The digital dimensions are calibrated against a length standard (gage block). The overall dimensions are then digitally reconstructed into a three-dimensional object. Validation of the reconstruction against the original object dimensions is expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
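
    The per-frame processing reduces to two steps: locating the laser line in the image and mapping it to 3D coordinates using the table angle. A Python sketch of both steps is given below; the brightest-pixel line detector and the calibration constants (mm_per_px, axis_col) are illustrative assumptions, not the Octave implementation used in the study.

    ```python
    import numpy as np

    def laser_profile_columns(frame):
        """Locate the laser line in a grayscale frame as the brightest column in
        each image row; a crude stand-in for the profile-extraction step."""
        return np.argmax(frame, axis=1).astype(float)     # one column index per row

    def profile_to_points(cols, angle_deg, mm_per_px, axis_col):
        """Turn one profile into 3D points on the turntable: the offset of the
        laser line from the rotation-axis column gives the radius, the row gives
        the height, and the table angle places the profile around the axis.
        mm_per_px and axis_col are assumed calibration constants."""
        rows = np.arange(len(cols))
        radius = (cols - axis_col) * mm_per_px
        z = rows * mm_per_px
        a = np.radians(angle_deg)
        return np.stack([radius * np.cos(a), radius * np.sin(a), z], axis=1)
    ```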

  16. Software for Building Models of 3D Objects via the Internet

    NASA Technical Reports Server (NTRS)

    Schramer, Tim; Jensen, Jeff

    2003-01-01

    The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.

  17. Does scene context always facilitate retrieval of visual object representations?

    PubMed

    Nakashima, Ryoichi; Yokosawa, Kazuhiko

    2011-04-01

    An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many objects appear simultaneously, scene context facilitation of the retrieval of object representations (henceforth termed object-to-scene binding) occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).

  18. Visual shape perception as Bayesian inference of 3D object-centered shape representations.

    PubMed

    Erdogan, Goker; Jacobs, Robert A

    2017-11-01

    Despite decades of research, little is known about how people visually perceive object shape. We hypothesize that a promising approach to shape perception is provided by a "visual perception as Bayesian inference" framework which augments an emphasis on visual representation with an emphasis on the idea that shape perception is a form of statistical inference. Our hypothesis claims that shape perception of unfamiliar objects can be characterized as statistical inference of 3D shape in an object-centered coordinate system. We describe a computational model based on our theoretical framework, and provide evidence for the model along two lines. First, we show that, counterintuitively, the model accounts for viewpoint-dependency of object recognition, traditionally regarded as evidence against people's use of 3D object-centered shape representations. Second, we report the results of an experiment using a shape similarity task, and present an extensive evaluation of existing models' abilities to account for the experimental data. We find that our shape inference model captures subjects' behaviors better than competing models. Taken as a whole, our experimental and computational results illustrate the promise of our approach and suggest that people's shape representations of unfamiliar objects are probabilistic, 3D, and object-centered. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Object-Oriented Approach for 3d Archaeological Documentation

    NASA Astrophysics Data System (ADS)

    Valente, R.; Brumana, R.; Oreni, D.; Banfi, F.; Barazzetti, L.; Previtali, M.

    2017-08-01

    Documentation of archaeological fieldwork needs to be accurate and time-effective. Many features unveiled during excavations can be recorded only once, since the archaeological workflow physically removes most of the stratigraphic elements. Some of them have peculiar characteristics that make them hardly recognizable as objects and prevent a full 3D documentation. The paper presents a suitable feature-based method to carry out archaeological documentation with a three-dimensional approach, tested on the archaeological site of S. Calocero in Albenga (Italy). The method is based on the use of structure-from-motion techniques for on-site recording and on 3D modelling to represent the three-dimensional complexity of the stratigraphy. The entire documentation workflow is carried out with digital tools, assuring better accuracy and interoperability. Outputs can be used in GIS to perform spatial analysis; moreover, a more effective dissemination of fieldwork results can be assured by spreading the datasets and other information through web services.

  20. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  1. Identification and detection of simple 3D objects with severely blurred vision.

    PubMed

    Kallie, Christopher S; Legge, Gordon E; Yu, Deyue

    2012-12-05

    Detecting and recognizing three-dimensional (3D) objects is an important component of the visual accessibility of public spaces for people with impaired vision. The present study investigated the impact of environmental factors and object properties on the recognition of objects by subjects who viewed physical objects with severely reduced acuity. The experiment was conducted in an indoor testing space. We examined detection and identification of simple convex objects by normally sighted subjects wearing diffusing goggles that reduced effective acuity to 20/900. We used psychophysical methods to examine the effect on performance of important environmental variables: viewing distance (from 10-24 feet, or 3.05-7.32 m) and illumination (overhead fluorescent and artificial window), and object variables: shape (boxes and cylinders), size (heights from 2-6 feet, or 0.61-1.83 m), and color (gray and white). Object identification was significantly affected by distance, color, height, and shape, as well as interactions between illumination, color, and shape. A stepwise regression analysis showed that 64% of the variability in identification could be explained by object contrast values (58%) and object visual angle (6%). When acuity is severely limited, illumination, distance, color, height, and shape influence the identification and detection of simple 3D objects. These effects can be explained in large part by the impact of these variables on object contrast and visual angle. Basic design principles for improving object visibility are discussed.
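
    The two predictors in the reported regression, object contrast and visual angle, are simple quantities to compute. A small Python sketch is shown below; the Weber definition of contrast is an assumption, since the abstract does not state which contrast metric was used.

    ```python
    import math

    def visual_angle_deg(object_height_m, viewing_distance_m):
        """Visual angle subtended by an object of a given height at a given
        viewing distance, one of the two predictors in the regression."""
        return math.degrees(2 * math.atan(object_height_m / (2 * viewing_distance_m)))

    def weber_contrast(object_luminance, background_luminance):
        """Weber contrast of an object against its background; one common choice
        of contrast metric, assumed here for illustration."""
        return (object_luminance - background_luminance) / background_luminance
    ```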

  2. Producing a Linear Laser System for 3d Modeling of Small Objects

    NASA Astrophysics Data System (ADS)

    Amini, A. Sh.; Mozaffar, M. H.

    2012-07-01

    Today, three-dimensional modeling of objects is of interest in many applications, such as documentation of ancient heritage, quality control, reverse engineering and animation. There is accordingly a variety of methods for producing three-dimensional models. In this paper, a 3D modeling system is developed based on photogrammetry, using image processing and laser-line extraction from images. In this method a laser line is projected onto the body of the object and, through video image acquisition and extraction of the laser line from the frames, the three-dimensional coordinates of the object can be obtained. First, the hardware, including the camera and laser system, was designed and implemented. Afterwards, the system was calibrated. Finally, the software of the system was implemented for three-dimensional data extraction. The system was tested by modeling a number of objects. The results showed that the system can provide benefits such as low cost, appropriate speed and acceptable accuracy in the 3D modeling of objects.

  3. A Minimum Path Algorithm Among 3D-Polyhedral Objects

    NASA Astrophysics Data System (ADS)

    Yeltekin, Aysin

    1989-03-01

    In this work we introduce a minimum path theorem for the 3D case. We also develop an algorithm based on the theorem we prove. The algorithm is implemented in a software package we develop using the C language. The theorem states that, given an initial point I, a final point F and a set S of a finite number of static obstacles, an optimal path P from I to F such that P ∩ S = ∅ is composed of straight line segments which are perpendicular to the edge segments of the objects. We prove the theorem and develop the following algorithm, based on it, to find the minimum path among 3D polyhedral objects. The algorithm generates the point Qi on edge ei such that at Qi one can find the line which is perpendicular to both the edge and the IF line. The algorithm iteratively provides a new set of initial points from Qi and explores all possible paths. It then chooses the minimum path among the possible ones. The flowchart of the program as well as an examination of its numerical properties are included.

  4. Identification of superficial defects in reconstructed 3D objects using phase-shifting fringe projection

    NASA Astrophysics Data System (ADS)

    Madrigal, Carlos A.; Restrepo, Alejandro; Branch, John W.

    2016-09-01

    3D reconstruction of small objects is used in applications such as surface analysis, forensic analysis and tissue reconstruction in medicine. In this paper, we propose a strategy for the 3D reconstruction of small objects and the identification of some superficial defects. We applied a structured-light projection technique, specifically sinusoidal fringes, together with a phase-unwrapping algorithm. A CMOS camera was used to capture images and a DLP digital light projector for synchronous projection of the sinusoidal pattern onto the objects. We implemented a calibration technique based on a 2D flat pattern, from which the intrinsic and extrinsic parameters of the camera and the DLP were determined. Experimental tests were performed on samples of artificial teeth, coal particles, welding defects and surfaces with Vickers indentations. Areas smaller than 5 cm were studied. The objects were reconstructed in 3D with densities of about one million points per sample. In addition, the steps of 3D description, primitive identification, training and classification were implemented to recognize defects such as holes, cracks, rough textures and bumps. We found that pattern recognition strategies are useful when surface quality supervision provides enough points to evaluate the defective region, because the identification of defects in small objects is a demanding visual inspection activity.
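
    The fringe-projection pipeline hinges on recovering a wrapped phase map from the phase-shifted sinusoidal images and then unwrapping it. The Python sketch below implements the generic N-step phase estimator and a simple row-wise unwrap; sign conventions and the unwrapping strategy are illustrative and may differ from the authors' implementation.

    ```python
    import numpy as np

    def wrapped_phase(frames):
        """Wrapped phase from N equally phase-shifted sinusoidal fringe images
        (frames: array of shape (N, H, W)). Generic N-step estimator; sign
        conventions vary between references."""
        n = frames.shape[0]
        k = np.arange(n).reshape(-1, 1, 1)
        num = np.sum(frames * np.sin(2 * np.pi * k / n), axis=0)
        den = np.sum(frames * np.cos(2 * np.pi * k / n), axis=0)
        return np.arctan2(-num, den)       # values in [-pi, pi]

    def unwrap_rows(phase):
        # Simple row-wise unwrapping; real scans usually need a 2D or
        # quality-guided unwrapping method.
        return np.unwrap(phase, axis=1)
    ```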

  5. Lagrangian 3D tracking of fluorescent microscopic objects in motion

    NASA Astrophysics Data System (ADS)

    Darnige, T.; Figueroa-Morales, N.; Bohec, P.; Lindner, A.; Clément, E.

    2017-05-01

    We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtaining time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing that determines the displacement of an x, y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on refocusing of the fluorescent object, determining the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tenths of Hertz. This device is particularly well adapted to obtaining trajectories of motile micro-organisms in microfluidic devices with or without flow.

  6. Lagrangian 3D tracking of fluorescent microscopic objects in motion.

    PubMed

    Darnige, T; Figueroa-Morales, N; Bohec, P; Lindner, A; Clément, E

    2017-05-01

    We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtaining time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing that determines the displacement of an x, y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on refocusing of the fluorescent object, determining the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tenths of Hertz. This device is particularly well adapted to obtaining trajectories of motile micro-organisms in microfluidic devices with or without flow.

  7. Multiple-3D-object secure information system based on phase shifting method and single interference.

    PubMed

    Li, Wei-Na; Shi, Chen-Xiao; Piao, Mei-Lan; Kim, Nam

    2016-05-20

    We propose a multiple-3D-object secure information system for encrypting multiple three-dimensional (3D) objects based on the three-step phase shifting method. During the decryption procedure, the five phase functions (PFs) of our previous method are reduced to three PFs, which implies that a single cross beam splitter is utilized to implement the single decryption interference. Moreover, the advantages of the proposed scheme include the following: each 3D object can be decrypted independently, without first decrypting a series of other objects; the quality of the decrypted slice image of each object is high, with correlation coefficient values none of which is lower than 0.95; and no iterative algorithm is involved. The feasibility of the proposed scheme is demonstrated by computer simulation results.

  8. An object oriented fully 3D tomography visual toolkit.

    PubMed

    Agostinelli, S; Paoli, G

    2001-04-01

    In this paper we present a modern object-oriented Component Object Model (COM) C++ toolkit dedicated to fully 3D cone-beam tomography. The toolkit allows the display and visual manipulation of analytical phantoms, projection sets and volumetric data through a standard Windows graphical user interface. Data input/output is performed using proprietary file formats, but import/export of industry-standard file formats, including raw binary, Windows bitmap and AVI, ACR/NEMA DICOM 3 and NCSA HDF, is available. At the time of writing, built-in data manipulators include a basic phantom ray-tracer and a Matrox Genesis frame-grabbing facility. A COM plug-in interface is provided for user-defined custom backprojector algorithms: a simple Feldkamp ActiveX control, including source code, is provided as an example; our fast Feldkamp plug-in is also available.

  9. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  10. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    PubMed

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  11. High-purity 3D nano-objects grown by focused-electron-beam induced deposition.

    PubMed

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M; Koopmans, Bert

    2016-09-02

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ∼50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core-shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  12. High-purity 3D nano-objects grown by focused-electron-beam induced deposition

    NASA Astrophysics Data System (ADS)

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M.; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ˜50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core-shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  13. Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects.

    PubMed

    Matsushima, Kyoji; Sonobe, Noriaki

    2018-01-01

    Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.

  14. Automatic pole-like object modeling via 3D part-based analysis of point cloud

    NASA Astrophysics Data System (ADS)

    He, Liu; Yang, Haoxiang; Huang, Yuchun

    2016-10-01

    Pole-like objects, including trees, lampposts and traffic signs, are an indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point clouds of roadside urban areas are increasingly applied in 3D digital city modeling. Based on the property that different pole-like objects have various canopy parts but similar trunk parts, this paper proposes a 3D part-based shape analysis to robustly extract, identify and model pole-like objects. The proposed method includes 3D clustering and recognition of trunks, voxel growing and part-based 3D modeling. After preprocessing, the trunk center is identified as the point that has a local density peak and the largest minimum inter-cluster distance. Starting from the trunk centers, the remaining points are iteratively clustered to the same centers as their nearest point with higher density. To eliminate noisy points, cluster borders are refined by trimming boundary outliers. Candidate trunks are then extracted from the clustering results in three orthogonal planes by shape analysis. Voxel growing recovers the complete pole-like objects regardless of overlap. Finally, the entire trunk, branch and crown parts are analyzed to obtain seven feature parameters. These parameters are utilized to model the three parts respectively and obtain a single part-assembled 3D model. The proposed method is tested on a VLS-based point cloud of Wuhan University, China. The point cloud includes many kinds of trees, lampposts and other pole-like objects under different occlusions and overlaps. Experimental results show that the proposed method can extract the exact attributes and model the roadside pole-like objects efficiently.
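
    The trunk-center criterion (a local density peak combined with a large minimum distance to any denser point) can be prototyped with a brute-force density-peak computation. The Python sketch below works on points projected to the ground plane; the neighbourhood radius and the ranking rule are illustrative choices, not the paper's parameters.

    ```python
    import numpy as np

    def trunk_center_candidates(points_xy, radius=0.5, n_centers=10):
        """Density-peak style selection: for each point compute a local density
        and the distance to the nearest denser point, then rank by their product.
        Brute-force O(N^2) sketch of the clustering idea in the abstract."""
        d = np.linalg.norm(points_xy[:, None, :] - points_xy[None, :, :], axis=2)
        density = (d < radius).sum(axis=1) - 1            # neighbours within radius
        delta = np.empty(len(points_xy))
        for i in range(len(points_xy)):
            denser = density > density[i]
            delta[i] = d[i, denser].min() if denser.any() else d[i].max()
        score = density * delta                           # dense and well separated
        return np.argsort(score)[::-1][:n_centers]        # indices of candidate centers
    ```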

  15. 3D shape measurement of moving object with FFT-based spatial matching

    NASA Astrophysics Data System (ADS)

    Guo, Qinghua; Ruan, Yuxi; Xi, Jiangtao; Song, Limei; Zhu, Xinjun; Yu, Yanguang; Tong, Jun

    2018-03-01

    This work presents a new technique for 3D shape measurement of an object in translational motion, which finds applications in online inspection, quality control, etc. A low-complexity 1D fast Fourier transform (FFT)-based spatial matching approach is devised to obtain accurate object displacement estimates, and it is combined with single-shot fringe pattern profilometry (FPP) techniques to achieve high measurement performance from multiple captured images through coherent combining. The proposed technique overcomes some limitations of existing ones. Specifically, the placement of marks on the object surface and synchronization between projector and camera are not needed, the velocity of the moving object is not required to be constant, and there is no restriction on the movement trajectory. Both simulation and experimental results demonstrate the effectiveness of the proposed technique.
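
    The displacement-estimation step can be illustrated with 1D cross-correlation computed through FFTs. The Python sketch below recovers the integer shift between two intensity profiles; sub-pixel refinement and the coupling with the fringe-analysis stage are omitted.

    ```python
    import numpy as np

    def estimate_shift_1d(ref_line, cur_line):
        """Estimate the integer displacement between two 1D intensity profiles by
        cross-correlation computed with FFTs; a sketch of the low-complexity
        matching idea, without the refinements a real system would add."""
        n = len(ref_line)
        F_ref = np.fft.fft(ref_line - ref_line.mean())
        F_cur = np.fft.fft(cur_line - cur_line.mean())
        corr = np.fft.ifft(F_cur * np.conj(F_ref)).real   # circular cross-correlation
        shift = int(np.argmax(corr))
        return shift - n if shift > n // 2 else shift     # wrap to a signed shift
    ```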

  16. Detection and Purging of Specular Reflective and Transparent Object Influences in 3d Range Measurements

    NASA Astrophysics Data System (ADS)

    Koch, R.; May, S.; Nüchter, A.

    2017-02-01

    3D laser scanners are favoured sensors for mapping in mobile service robotics, in indoor and outdoor applications, since they deliver precise measurements over a wide scanning range. The resulting maps are detailed, since they have a high resolution. Based on these maps robots navigate through rough terrain and fulfil advanced manipulation and inspection tasks. In the case of specular reflective and transparent objects, e.g., mirrors, windows and shiny metals, the laser measurements are corrupted. Depending on the type of object and the incident angle of the incoming laser beam, three results are possible: a measurement point on the object plane, a measurement behind the object plane, or a measurement of a reflected object. It is important to detect such situations to be able to handle these corrupted points. This paper describes why it is difficult to distinguish between specular reflective and transparent surfaces. It presents a 3D-Reflection-Pre-Filter approach to identify specular reflective and transparent objects in point clouds of a multi-echo laser scanner. Furthermore, it filters point clouds from the influences of such objects and extracts the object properties for further investigation. Reflective objects are identified based on an Iterative-Closest-Point algorithm. Object surfaces and points behind surfaces are masked according to their location. Finally, the processed point cloud is forwarded to a mapping module, and the object surface corners and the type of surface are broadcast. Four experiments demonstrate the usability of the 3D-Reflection-Pre-Filter: the first in an empty room containing a mirror, the second in a stairway containing a glass door, the third in an empty room containing two mirrors, and the fourth in an office room containing a mirror. This paper demonstrates that for single scans the detection of specular reflective and transparent objects in 3D is possible. It

  17. A Quality Assurance Method that Utilizes 3D Dosimetry and Facilitates Clinical Interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oldham, Mark, E-mail: mark.oldham@duke.edu; Thomas, Andrew; O'Daniel, Jennifer

    2012-10-01

    Purpose: To demonstrate a new three-dimensional (3D) quality assurance (QA) method that provides comprehensive dosimetry verification and facilitates evaluation of the clinical significance of QA data acquired in a phantom, and to apply the method to investigate the dosimetric efficacy of base-of-skull (BOS) intensity-modulated radiotherapy (IMRT) treatment. Methods and Materials: Two types of IMRT QA verification plans were created for 6 patients who received BOS IMRT. The first plan enabled conventional 2D planar IMRT QA using the Varian portal dosimetry system. The second plan enabled 3D verification using an anthropomorphic head phantom. In the latter, the 3D dose distribution was measured using the DLOS/Presage dosimetry system (DLOS = Duke Large-field-of-view Optical-CT System, Presage Heuris Pharma, Skillman, NJ), which yielded isotropic 2-mm data throughout the treated volume. In a novel step, measured 3D dose distributions were transformed back to the patient's CT to enable calculation of dose-volume histograms (DVH) and dose overlays. Measured and planned patient DVHs were compared to investigate clinical significance. Results: Close agreement between measured and calculated dose distributions was observed for all 6 cases. For gamma criteria of 3%, 2 mm, the mean passing rate for portal dosimetry was 96.8% (range, 92.0%-98.9%), compared to 94.9% (range, 90.1%-98.9%) for 3D. There was no clear correlation between 2D and 3D passing rates. Planned and measured dose distributions were evaluated on the patient's anatomy, using DVH and dose overlays. Minor deviations were detected, and their clinical significance is presented and discussed. Conclusions: Two advantages accrue to the methods presented here. First, treatment accuracy is evaluated throughout the whole treated volume, yielding comprehensive verification. Second, the clinical significance of any deviations can be assessed through the generation of DVH curves and dose overlays on the
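
    The key evaluation product, a cumulative DVH computed from a measured dose grid mapped onto the patient CT, can be sketched in a few lines of Python. The construction below is generic and assumes co-registered dose and structure-mask arrays; it is not the clinical software used in the study.

    ```python
    import numpy as np

    def cumulative_dvh(dose, mask, n_bins=200):
        """Cumulative dose-volume histogram for one structure: the fraction of
        the masked volume receiving at least each dose level. Assumes 'dose'
        (Gy) and 'mask' (bool) are co-registered 3D arrays of equal shape."""
        d = dose[mask]
        bins = np.linspace(0.0, d.max(), n_bins)
        volume_fraction = np.array([(d >= b).mean() for b in bins])
        return bins, volume_fraction     # e.g. plot bins vs. 100 * volume_fraction
    ```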

  18. Fast and flexible 3D object recognition solutions for machine vision applications

    NASA Astrophysics Data System (ADS)

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications need reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, a variety of 2D image processing algorithms solve the recognition problem. However, these methods are often not well suited to the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within them. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach are shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin-picking demonstration system.
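
    The primitive-based localization rests on fitting geometric primitives to 3D data. As a stand-in for the authors' 3D best-fitting step, the Python sketch below fits a single plane with a basic RANSAC loop; cylinders and cones would need analogous model-specific fitters, and the thresholds are illustrative settings.

    ```python
    import numpy as np

    def ransac_plane(points, n_iter=500, tol=0.002, seed=0):
        """Fit one plane primitive to an (N, 3) point cloud with a basic RANSAC
        loop. Returns (normal, point_on_plane) and the inlier mask."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        best_model = None
        for _ in range(n_iter):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-12:
                continue                                   # degenerate (collinear) sample
            normal /= norm
            dist = np.abs((points - sample[0]) @ normal)   # point-to-plane distances
            inliers = dist < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (normal, sample[0])
        return best_model, best_inliers
    ```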

  19. 3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor

    PubMed Central

    Zhang, Haopeng; Wei, Quanmao; Jiang, Zhiguo

    2017-01-01

    In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method, then the patch-based multi-view stereo (PMVS) algorithm is utilized to generate a dense 3D point cloud. To resolve the wrong matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced, in which images are added to SFM in imaging order. Meanwhile, a refining process exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes is proposed and applied to the recovered point cloud. The proposed reconstruction framework is tested on both simulated and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and have a complete coverage of the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in a distinct improvement of the structure and visualization of the recovered points. PMID:28737675

  20. An approach to detecting deliberately introduced defects and micro-defects in 3D printed objects

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2017-05-01

    In prior work, Zeltmann et al. demonstrated the negative impact that can be created by defects of various sizes in 3D printed objects. These defects may make the object unsuitable for its application or even present a hazard if the object is being used for a safety-critical application. With the uses of 3D printing proliferating and consumer access to printers increasing, the desire of a nefarious individual or group to subvert the desired printing quality and safety attributes of a printer or printed object must be considered. Several different approaches to subversion may exist. Attackers may physically impair the functionality of the printer or launch a cyber-attack. Detecting introduced defects, from either attack, is critical to maintaining public trust in 3D printed objects and the technology. This paper presents an alternative approach: it applies a quality assurance technology based on visible light sensing to this challenge and assesses its capability for detecting introduced defects of multiple sizes.

  1. Learning the 3-D structure of objects from 2-D views depends on shape, not format

    PubMed Central

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-01-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  2. SUMO Modification Stabilizes Enterovirus 71 Polymerase 3D To Facilitate Viral Replication

    PubMed Central

    Liu, Yan; Shu, Bo; Meng, Jin; Zhang, Yuan; Zheng, Caishang; Ke, Xianliang; Gong, Peng; Hu, Qinxue; Wang, Hanzhong

    2016-01-01

    ABSTRACT Accumulating evidence suggests that viruses hijack cellular proteins to circumvent the host immune system. Ubiquitination and SUMOylation are extensively studied posttranslational modifications (PTMs) that play critical roles in diverse biological processes. Cross talk between ubiquitination and SUMOylation of both host and viral proteins has been reported to result in distinct functional consequences. Enterovirus 71 (EV71), an RNA virus belonging to the family Picornaviridae, is a common cause of hand, foot, and mouth disease. Little is known concerning how host PTM systems interact with enteroviruses. Here, we demonstrate that the 3D protein, an RNA-dependent RNA polymerase (RdRp) of EV71, is modified by small ubiquitin-like modifier 1 (SUMO-1) both during infection and in vitro. Residues K159 and L150/D151/L152 were responsible for 3D SUMOylation as determined by bioinformatics prediction combined with site-directed mutagenesis. Also, primer-dependent polymerase assays indicated that mutation of SUMOylation sites impaired 3D polymerase activity and virus replication. Moreover, 3D is ubiquitinated in a SUMO-dependent manner, and SUMOylation is crucial for 3D stability, which may be due to the interplay between the two PTMs. Importantly, increasing the level of SUMO-1 in EV71-infected cells augmented the SUMOylation and ubiquitination levels of 3D, leading to enhanced replication of EV71. These results together suggested that SUMO and ubiquitin cooperatively regulated EV71 infection, either by SUMO-ubiquitin hybrid chains or by ubiquitin conjugating to the exposed lysine residue through SUMOylation. Our study provides new insight into how a virus utilizes cellular pathways to facilitate its replication. IMPORTANCE Infection with enterovirus 71 (EV71) often causes neurological diseases in children, and EV71 is responsible for the majority of fatalities. Based on a better understanding of interplay between virus and host cell, antiviral drugs against

  3. a Low-Cost and Portable System for 3d Reconstruction of Texture-Less Objects

    NASA Astrophysics Data System (ADS)

    Hosseininaveh, A.; Yazdan, R.; Karami, A.; Moradi, M.; Ghorbani, F.

    2015-12-01

    Optical methods for 3D modelling of objects can be classified into two categories: image-based and range-based. Structure from Motion is one of the image-based methods implemented in commercial software. In this paper, a low-cost and portable system for 3D modelling of texture-less objects is proposed. The system includes a rotating table built around a stepper motor and a very light rotation plate, together with eight laser light sources whose dense, strong beams project a suitable pattern onto texture-less objects. Images are taken semi-automatically by a camera, synchronized with the steps of the stepper motor, and can be used in structure-from-motion procedures implemented in Agisoft software. To evaluate the performance of the system, two dark objects were used. Reference point clouds of these objects were obtained by spraying a light powder on them and scanning them with a GOM laser scanner. The objects were then placed on the proposed turntable, and several convergent images were taken of each object while the laser light sources projected the pattern onto them. Afterwards, the images were imported into VisualSFM, a fully automatic software package, to generate an accurate and complete point cloud. Finally, the obtained point clouds were compared to the point clouds generated by the GOM laser scanner. The results showed the ability of the proposed system to produce complete 3D models of texture-less objects.
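
    The semi-automatic, stepper-synchronized capture loop described in this record can be sketched as follows; the hardware hooks are hypothetical placeholders rather than the authors' control code.

      # Minimal sketch under assumptions: one image per stepper step while the eight
      # laser sources keep projecting their pattern onto the texture-less object.
      import time

      STEPS_PER_REVOLUTION = 36              # assumed: one image every 10 degrees

      def advance_stepper(step_index):
          """Hypothetical hook: send one step command to the stepper motor driver."""
          print(f"step {step_index}")

      def trigger_camera(step_index):
          """Hypothetical hook: fire the camera shutter and store the frame."""
          print(f"capture frame_{step_index:03d}.jpg")

      def capture_sequence():
          for i in range(STEPS_PER_REVOLUTION):
              advance_stepper(i)
              time.sleep(0.5)                # let vibrations settle before exposing
              trigger_camera(i)

      if __name__ == "__main__":
          capture_sequence()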

  4. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the acquired data, together with convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems offer a set of advantages such as high acquisition speed and potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four technical vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for high accuracy of the 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. Evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.

  5. Surgical planning and manual image fusion based on 3D model facilitate laparoscopic partial nephrectomy for intrarenal tumors.

    PubMed

    Chen, Yuanbo; Li, Hulin; Wu, Dingtao; Bi, Keming; Liu, Chunxiao

    2014-12-01

    Construction of a three-dimensional (3D) model of the renal tumor facilitated surgical planning and imaging guidance for manual image fusion in laparoscopic partial nephrectomy (LPN) of intrarenal tumors. Fifteen patients with intrarenal tumors underwent LPN between January and December 2012. Computed tomography-based reconstruction of the 3D models of the renal tumors was performed using Mimics 12.1 software. Surgical planning was carried out through morphometry and multi-angle visual views of the tumor model. Two-step manual image fusion superimposed 3D model images onto 2D laparoscopic images, and the fusion was verified by intraoperative ultrasound. Imaging-guided laparoscopic hilar clamping and tumor excision were performed. Manual fusion time, patient demographics, surgical details, and postoperative treatment parameters were analyzed. The reconstructed 3D tumor models accurately represented the patients' physiological anatomical landmarks, and the surgical planning markers were marked successfully. Manual image fusion was flexible and feasible, with a fusion time of 6 min (5-7 min). All surgeries were completed laparoscopically. The median tumor excision time was 5.4 min (3.5-10 min), whereas the median warm ischemia time was 25.5 min (16-32 min). Twelve patients (80 %) demonstrated renal cell carcinoma on final pathology, and all surgical margins were negative. No tumor recurrence was detected after a median follow-up of 1 year (3-15 months). Surgical planning and two-step manual image fusion based on the 3D renal tumor model facilitated image-guided tumor resection with negative margins in LPN for intrarenal tumors. It is promising and moves us one step closer to imaging-guided surgery.

  6. 3D Printing: Print the future of ophthalmology.

    PubMed

    Huang, Wenbin; Zhang, Xiulan

    2014-08-26

    The three-dimensional (3D) printer is a new technology that creates physical objects from digital files. Recent technological advances in 3D printing have resulted in increased use of this technology in the medical field, where it is beginning to revolutionize medical and surgical possibilities. It is already providing medicine with powerful tools that facilitate education, surgical planning, and organ transplantation research. A good understanding of this technology will be beneficial to ophthalmologists. The potential applications of 3D printing in ophthalmology, both current and future, are explored in this article. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  7. H3K9me3 demethylase Kdm4d facilitates the formation of pre-initiative complex and regulates DNA replication

    PubMed Central

    Wu, Rentian; Wang, Zhiquan; Zhang, Honglian; Gan, Haiyun; Zhang, Zhiguo

    2017-01-01

    DNA replication is tightly regulated to occur once and only once per cell cycle. How chromatin, the physiological substrate of DNA replication machinery, regulates DNA replication remains largely unknown. Here we show that histone H3 lysine 9 demethylase Kdm4d regulates DNA replication in eukaryotic cells. Depletion of Kdm4d results in defects in DNA replication, which can be rescued by the expression of H3K9M, a histone H3 mutant transgene that reverses the effect of Kdm4d on H3K9 methylation. Kdm4d interacts with replication proteins, and its recruitment to DNA replication origins depends on the two pre-replicative complex components (origin recognition complex [ORC] and minichromosome maintenance [MCM] complex). Depletion of Kdm4d impairs the recruitment of Cdc45, proliferating cell nuclear antigen (PCNA), and polymerase δ, but not ORC and MCM proteins. These results demonstrate a novel mechanism by which Kdm4d regulates DNA replication by reducing the H3K9me3 level to facilitate formation of pre-initiative complex. PMID:27679476

  8. Recognizing 3-D Objects from 2D Images Using Structural Knowledge Base of Genetic Views

    DTIC Science & Technology

    1988-08-31

    Indexed abstract text is fragmentary for this record and consists of citation excerpts, including: I. Biederman, "Human image understanding: Recent research and a theory", Computer Vision, Graphics, and Image Processing; Technical Report 87-85, COINS Dept., University of Massachusetts, Amherst, MA 01003, August 1987; and J. B. Burns and L. J. Kitchen, "Recognition in 2D images of 3D objects from large model bases using prediction hierarchies", Proc. IJCAI-10, 1987.

  9. Recognizing 3-D Objects Using 2-D Images

    DTIC Science & Technology

    1993-05-01

    Indexed abstract text is fragmentary for this record. The recoverable content discusses recognition models that contain significant numbers of viewpoint-invariant features, such as parallelograms, and Biederman's proposal, building on Lowe's work, that we recognize images of objects by dividing each image into a few parts, called geons, each described by a few view-invariant features that together provide a feature set for indexing.

  10. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interaction, and many studies have sought to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Involuntarily motivated selective attention driven by an affective mechanism can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency: more attentional resources are allocated to affective pictures with high valence and arousal levels than to ordinary visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it offers high information transfer rates, users can learn to control the BCI system within a few minutes, and only a few electrodes are required to obtain brainwave signals reliable enough to capture users' intention. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.

  11. 3D high- and super-resolution imaging using single-objective SPIM.

    PubMed

    Galland, Remi; Grenci, Gianluca; Aravind, Ajay; Viasnoff, Virgile; Studer, Vincent; Sibarita, Jean-Baptiste

    2015-07-01

    Single-objective selective-plane illumination microscopy (soSPIM) is achieved with micromirrored cavities combined with a laser beam-steering unit installed on a standard inverted microscope. The illumination and detection are done through the same objective. soSPIM can be used with standard sample preparations and features high background rejection and efficient photon collection, allowing for 3D single-molecule-based super-resolution imaging of whole cells or cell aggregates. Using larger mirrors enabled us to broaden the capabilities of our system to image Drosophila embryos.

  12. Effective 3-D shape discrimination survives retinal blur.

    PubMed

    Norman, J Farley; Beers, Amanda M; Holmin, Jessica S; Boswell, Alexandria M

    2010-08-01

    A single experiment evaluated observers' ability to visually discriminate 3-D object shape, where the 3-D structure was defined by motion, texture, Lambertian shading, and occluding contours. The observers' vision was degraded to varying degrees by blurring the experimental stimuli, using 2.0-, 2.5-, and 3.0-diopter convex lenses. The lenses reduced the observers' acuity from -0.091 LogMAR (in the no-blur conditions) to 0.924 LogMAR (in the conditions with the most blur; 3.0-diopter lenses). This visual degradation, although producing severe reductions in visual acuity, had only small (but significant) effects on the observers' ability to discriminate 3-D shape. The observers' shape discrimination performance was facilitated by the objects' rotation in depth, regardless of the presence or absence of blur. Our results indicate that accurate global shape discrimination survives a considerable amount of retinal blur.

  13. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu", a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, for which a dense, ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured using a total station for external validation and scaling. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  14. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    PubMed Central

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”, a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task, for which a dense, ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured using a total station for external validation and scaling. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  15. Surface sampling techniques for 3D object inspection

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong S.; Gerhardt, Lester A.

    1995-03-01

    While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies that emphasize 3D non-uniform inspection capability: (a) adaptive sampling, (b) local adjustment sampling, and (c) finite element centroid sampling. The adaptive sampling strategy is based on a recursive surface subdivision process, for which two approaches are described, one using triangle patches and the other using rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform sampling and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point, approximating the object by moving the points toward object edges and corners. In a hybrid approach, uniform and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that initial point sets preprocessed by adaptive sampling using triangle patches are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this approach. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches, and the adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
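
    The recursive-subdivision idea behind the adaptive strategy can be sketched as follows; the surface-variation measure is a hypothetical placeholder for whatever criterion the inspection system uses, so this is an illustrative sketch rather than the paper's algorithm.

      # Illustrative sketch: adaptive sampling by recursive subdivision of a triangle
      # patch. A patch is split at its edge midpoints whenever a surface-variation
      # measure exceeds a tolerance, so samples cluster where the measure is large.
      import numpy as np

      def surface_variation(v0, v1, v2):
          """Hypothetical criterion; here simply proportional to patch area."""
          return np.linalg.norm(np.cross(v1 - v0, v2 - v0))

      def adaptive_samples(v0, v1, v2, tol, depth=0, max_depth=5):
          samples = [(v0 + v1 + v2) / 3.0]       # sample the centroid of this patch
          if depth < max_depth and surface_variation(v0, v1, v2) > tol:
              m01, m12, m20 = (v0 + v1) / 2, (v1 + v2) / 2, (v2 + v0) / 2
              for tri in [(v0, m01, m20), (m01, v1, m12), (m20, m12, v2), (m01, m12, m20)]:
                  samples += adaptive_samples(*tri, tol=tol, depth=depth + 1,
                                              max_depth=max_depth)
          return samples

      pts = adaptive_samples(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                             np.array([0.0, 1.0, 0.0]), tol=0.1)
      print(len(pts), "sample points")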

  16. Prototyping a Sensor Enabled 3d Citymodel on Geospatial Managed Objects

    NASA Astrophysics Data System (ADS)

    Kjems, E.; Kolář, J.

    2013-09-01

    One of the major development efforts within the GI Science domain are pointing at sensor based information and the usage of real time information coming from geographic referenced features in general. At the same time 3D City models are mostly justified as being objects for visualization purposes rather than constituting the foundation of a geographic data representation of the world. The combination of 3D city models and real time information based systems though can provide a whole new setup for data fusion within an urban environment and provide time critical information preserving our limited resources in the most sustainable way. Using 3D models with consistent object definitions give us the possibility to avoid troublesome abstractions of reality, and design even complex urban systems fusing information from various sources of data. These systems are difficult to design with the traditional software development approach based on major software packages and traditional data exchange. The data stream is varying from urban domain to urban domain and from system to system why it is almost impossible to design a complete system taking care of all thinkable instances now and in the future within one constraint software design complex. On several occasions we have been advocating for a new end advanced formulation of real world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council where the concept of GMO's have been applied in various situations on various running platforms of an urban system. The paper will be focusing on user experiences and interfaces rather then core technical and developmental issues. The project was primarily focusing on prototyping rather than realistic implementations although the results concerning applicability are quite clear.

  17. Spatial and symbolic queries for 3D image data

    NASA Astrophysics Data System (ADS)

    Benson, Daniel C.; Zick, Gregory L.

    1992-04-01

    We present a query system for an object-oriented biomedical imaging database containing 3-D anatomical structures and their corresponding 2-D images. The graphical interface facilitates the formation of spatial queries, nonspatial or symbolic queries, and combined spatial/symbolic queries. A query editor is used for the creation and manipulation of 3-D query objects as volumes, surfaces, lines, and points, while symbolic predicates are formulated through a combination of text fields and multiple-choice selections. Query results, which may include images, image contents, composite objects, graphics, and alphanumeric data, are displayed in multiple views. Objects returned by a query may be selected directly within the views for further inspection or modification, or for use as query objects in subsequent queries. Our image database query system provides visual feedback and manipulation of spatial query objects, multiple views of volume data, and the ability to combine spatial and symbolic queries. The system allows for incremental enhancement of existing objects and the addition of new objects and spatial relationships. It is designed for databases containing symbolic and spatial data; this paper discusses its application to data acquired in biomedical 3-D image reconstruction, but it is applicable to other areas such as CAD/CAM, geographical information systems, and computer vision.
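
    A combined spatial/symbolic query of the kind described in this record can be illustrated with the following minimal sketch; the object attributes and the axis-aligned query volume are invented for illustration and do not reflect the system's actual query language.

      # Illustrative sketch: combine a symbolic predicate (attribute match) with a
      # spatial predicate (containment in a 3-D query volume).
      from dataclasses import dataclass

      @dataclass
      class AnatomicalObject:
          name: str
          tissue: str                      # symbolic attribute
          centroid: tuple                  # (x, y, z) in volume coordinates

      def inside_box(p, lo, hi):
          """Spatial predicate: point lies inside an axis-aligned query volume."""
          return all(lo[i] <= p[i] <= hi[i] for i in range(3))

      objects = [
          AnatomicalObject("lesion_1", "tumour", (12.0, 40.5, 7.2)),
          AnatomicalObject("vessel_3", "artery", (30.1, 22.0, 15.8)),
      ]

      query_lo, query_hi = (0, 0, 0), (20, 50, 10)
      hits = [o for o in objects
              if o.tissue == "tumour" and inside_box(o.centroid, query_lo, query_hi)]
      print([o.name for o in hits])        # -> ['lesion_1']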

  18. Printing of metallic 3D micro-objects by laser induced forward transfer.

    PubMed

    Zenou, Michael; Kotler, Zvi

    2016-01-25

    Digital printing of 3D metal micro-structures by laser induced forward transfer under ambient conditions is reviewed. Recent progress has allowed drop on demand transfer of molten, femto-liter, metal droplets with a high jetting directionality. Such small volume droplets solidify instantly, on a nanosecond time scale, as they touch the substrate. This fast solidification limits their lateral spreading and allows the fabrication of high aspect ratio and complex 3D metal structures. Several examples of micron-scale resolution metal objects printed using this method are presented and discussed.

  19. 3D printing and IoT for personalized everyday objects in nursing and healthcare

    NASA Astrophysics Data System (ADS)

    Asano, Yoshihiro; Tanaka, Hiroya; Miyagawa, Shoko; Yoshioka, Junki

    2017-04-01

    Today, the application of 3D printing technology for medical use is becoming popular. It helps to produce complicated body-part shapes with functional materials, so that injured, weakened or missing parts can be complemented and their original shape and function recovered. However, these cases mainly focus on the symptom itself, not on the everyday lives of patients. With life spans extending, many of us will live with chronic disease for a long time, so we should think about our living environment more carefully. For example, personalized everyday objects can be made to support patients' bodies and minds. We therefore use 3D printing to make everyday objects from a nursing/healthcare perspective. This project addresses two main research questions. The first is how to make objects that patients really require. We invited many kinds of people, such as engineers, nurses and patients, to our research activity: nurses identify patients' real demands first, and engineers support them with rapid prototyping. From this we derived collaboration methodologies among nurses, engineers and patients. The second question is how to trace and evaluate the usage of the created objects. It is difficult to monitor a user's activity over a long period, so we are developing an IoT sensing system that monitors activities remotely. A data logger that lasts about one month is enclosed within the 3D printed objects; after one month, we retrieve the data and learn how the object has been used.

  20. H3K9me3 demethylase Kdm4d facilitates the formation of pre-initiative complex and regulates DNA replication.

    PubMed

    Wu, Rentian; Wang, Zhiquan; Zhang, Honglian; Gan, Haiyun; Zhang, Zhiguo

    2017-01-09

    DNA replication is tightly regulated to occur once and only once per cell cycle. How chromatin, the physiological substrate of DNA replication machinery, regulates DNA replication remains largely unknown. Here we show that histone H3 lysine 9 demethylase Kdm4d regulates DNA replication in eukaryotic cells. Depletion of Kdm4d results in defects in DNA replication, which can be rescued by the expression of H3K9M, a histone H3 mutant transgene that reverses the effect of Kdm4d on H3K9 methylation. Kdm4d interacts with replication proteins, and its recruitment to DNA replication origins depends on the two pre-replicative complex components (origin recognition complex [ORC] and minichromosome maintenance [MCM] complex). Depletion of Kdm4d impairs the recruitment of Cdc45, proliferating cell nuclear antigen (PCNA), and polymerase δ, but not ORC and MCM proteins. These results demonstrate a novel mechanism by which Kdm4d regulates DNA replication by reducing the H3K9me3 level to facilitate formation of pre-initiative complex. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Towards a Decision Support Tool for 3d Visualisation: Application to Selectivity Purpose of Single Object in a 3d City Scene

    NASA Astrophysics Data System (ADS)

    Neuville, R.; Pouliot, J.; Poux, F.; Hallot, P.; De Rudder, L.; Billen, R.

    2017-10-01

    This paper deals with the establishment of a comprehensive methodological framework that defines 3D visualisation rules and its application in a decision support tool. Whilst the use of 3D models grows in many application fields, their visualisation remains challenging in terms of the mapping and rendering choices needed to suitably support the decision-making process. Indeed, a great number of 3D visualisation techniques exist, but as far as we know a decision support tool that facilitates the production of an efficient 3D visualisation is still missing. This is why a comprehensive methodological framework is proposed in order to build decision tables for specific data, tasks and contexts. Based on the second-order logic formalism, we define a set of functions and propositions among and between two collections of entities: on one hand, static retinal variables (hue, size, shape…) and 3D environment parameters (directional lighting, shadow, haze…), and on the other hand their effect(s) with regard to specific visual tasks. This enables 3D visualisation rules to be defined according to four categories: consequence, compatibility, potential incompatibility and incompatibility. In this paper, the application of the methodological framework is demonstrated for an urban visualisation at high density, considering a specific set of entities. On the basis of our analysis and the results of many studies conducted in 3D semiotics, which refers to the study of symbols and how they relay information, the truth values of the propositions are determined. 3D visualisation rules are then extracted for the considered context and set of entities and are presented in a decision table with a colour coding. Finally, the decision table is implemented in a plugin developed with three.js, a cross-browser JavaScript library. The plugin consists of a sidebar and warning windows that help the designer in the use of a set of static retinal variables and 3D environment
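
    One possible way to encode such a decision table, shown purely as an illustrative sketch with invented pairings rather than the rules derived in the paper, is a lookup keyed by (static retinal variable, 3D environment parameter):

      # Illustrative sketch only: the pairings and categories below are invented.
      RULES = {
          ("hue", "directional_lighting"): "potential_incompatibility",
          ("size", "haze"):                "incompatibility",
          ("shape", "shadow"):             "compatibility",
          ("hue", "shadow"):               "consequence",
      }

      def check(variable: str, parameter: str) -> str:
          """Look up the visualisation rule for a proposed combination."""
          return RULES.get((variable, parameter), "no rule defined")

      print(check("size", "haze"))         # -> incompatibility: warn the designer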

  2. Full-field 3D shape measurement of specular object having discontinuous surfaces

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Huang, Shujun; Gao, Nan; Gao, Feng; Jiang, Xiangqian

    2017-06-01

    This paper presents a novel Phase Measuring Deflectometry (PMD) method to measure specular objects having discontinuous surfaces. A mathematical model is established to directly relate the absolute phase and depth, instead of the phase and gradient. Based on the model, a hardware measuring system has been set up, which consists of a precise translating stage, a projector, a diffuser and a camera. The stage locates the projector and the diffuser together at a known position during measurement. By using model-based and machine vision methods, system calibration is accomplished to provide the required parameters and conditions. Verification tests are given to evaluate the effectiveness of the developed system. 3D (three-dimensional) shapes of a concave mirror and a monolithic multi-mirror array having multiple specular surfaces have been measured. Experimental results show that the proposed method can obtain the 3D shape of specular objects having discontinuous surfaces effectively.

  3. True-3D Accentuating of Grids and Streets in Urban Topographic Maps Enhances Human Object Location Memory

    PubMed Central

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2015-01-01

    Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies of vision research provided first evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored concerning spatial memory tasks and cartography. This paper reports the results of two empirical studies that focus on investigations whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. The memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for an improvement of the cognitive representation of learned cartographic information. PMID:25679208

  4. Perception of 3D spatial relations for 3D displays

    NASA Astrophysics Data System (ADS)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.
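
    For readers unfamiliar with the dependent variable mentioned above, the discriminability index used in signal detection experiments is conventionally computed from the hit rate H and the false-alarm rate F (a standard textbook definition, not specific to this record):

      d' = \Phi^{-1}(H) - \Phi^{-1}(F)

    where \Phi^{-1} denotes the inverse of the standard normal cumulative distribution function.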

  5. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences.

  6. Active learning in the lecture theatre using 3D printed objects.

    PubMed

    Smith, David P

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme's active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student.

  7. Active learning in the lecture theatre using 3D printed objects

    PubMed Central

    Smith, David P.

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme’s active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student. PMID:27366318

  8. 3D modeling of architectural objects from video data obtained with the fixed focal length lens geometry

    NASA Astrophysics Data System (ADS)

    Deliś, Paulina; Kędzierski, Michał; Fryśkowska, Anna; Wilińska, Michalina

    2013-12-01

    The article describes the process of creating 3D models of architectural objects on the basis of video images acquired with a Sony NEX-VG10E fixed focal length video camera. It was assumed that, based on video and Terrestrial Laser Scanning (TLS) data, it is possible to develop 3D models of architectural objects. The acquisition of video data was preceded by calibration of the video camera. The process of creating 3D models from video data involves the following steps: selection of video frames for the orientation process, orientation of the video frames using points with known coordinates from TLS, and generation of a TIN model using automatic matching methods. The objects were also measured with a pulsed laser scanner, a Leica ScanStation 2. The created 3D models of architectural objects were compared with 3D models of the same objects for which a self-calibrating bundle adjustment was performed in PhotoModeler software. To assess the accuracy of the developed 3D models, points with known coordinates from TLS were used together with a shortest-distance method. The accuracy analysis showed that 3D models generated from video images differ by about 0.06-0.13 m from the TLS data.
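
    A shortest-distance comparison of this kind can be sketched as follows (an illustrative example with synthetic data, not the authors' workflow): for every model vertex, the nearest TLS point is found with a k-d tree and the deviations are summarized.

      # Illustrative sketch: shortest-distance accuracy check of a model against a
      # TLS reference cloud using a k-d tree nearest-neighbour query.
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      tls_points = rng.uniform(0, 10, size=(5000, 3))       # stand-in TLS cloud
      model_vertices = tls_points[:2000] + rng.normal(0, 0.08, size=(2000, 3))

      tree = cKDTree(tls_points)
      dist, _ = tree.query(model_vertices)                  # shortest distance per vertex
      print(f"mean deviation {dist.mean():.3f} m, RMS {np.sqrt((dist**2).mean()):.3f} m")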

  9. Action relations facilitate the identification of briefly-presented objects.

    PubMed

    Roberts, Katherine L; Humphreys, Glyn W

    2011-02-01

    The link between perception and action allows us to interact fluently with the world. Objects which 'afford' an action elicit a visuomotor response, facilitating compatible responses. In addition, positioning objects to interact with one another appears to facilitate grouping, indicated by patients with extinction being better able to identify interacting objects (e.g. a corkscrew going towards the top of a wine bottle) than the same objects when positioned incorrectly for action (Riddoch, Humphreys, Edwards, Baker, & Willson, Nature Neuroscience, 6, 82-89, 2003). Here, we investigate the effect of action relations on the perception of normal participants. We found improved identification of briefly-presented objects when in correct versus incorrect co-locations for action. For the object that would be 'active' in the interaction (the corkscrew), this improvement was enhanced when it was oriented for use by the viewer's dominant hand. In contrast, the position-related benefit for the 'passive' object was stronger when the objects formed an action-related pair (corkscrew and bottle) compared with an unrelated pair (corkscrew and candle), and it was reduced when spatial cues disrupted grouping between the objects. We propose that these results indicate two separate effects of action relations on normal perception: a visuomotor response to objects which strongly afford an action; and a grouping effect between objects which form action-related pairs.

  10. Approximation of a foreign object using x-rays, reference photographs and 3D reconstruction techniques.

    PubMed

    Briggs, Matt; Shanmugam, Mohan

    2013-12-01

    This case study describes how a 3D animation was created to approximate the depth and angle of a foreign object (a metal bar) that had become embedded in a patient's head. A pre-operative CT scan was not available, as the patient could not fit through the CT scanner, so a post-surgical CT scan, x-ray and photographic images were used. A surface render was made of the skull and imported into Blender (a 3D animation application). The metal bar itself was not available; however, images of a similar object retrieved from the scene by the ambulance crew were used to recreate a 3D model. The x-ray images were then imported into Blender and used as background images in order to align the skull reconstruction and metal bar at the correct depth and angle. A 3D animation was then created to fully illustrate the angle and depth of the bar in the skull.

  11. 3D change detection - Approaches and applications

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Tian, Jiaojiao; Reinartz, Peter

    2016-12-01

    Due to the unprecedented development of sensors, platforms and algorithms for 3D data acquisition and generation, 3D spaceborne, airborne and close-range data, in the form of image-based or Light Detection and Ranging (LiDAR)-based point clouds, Digital Elevation Models (DEM) and 3D city models, have become more accessible than ever before. Change detection (CD) or time-series data analysis in 3D has gained great attention due to its capability of providing volumetric dynamics that facilitate more applications and more accurate results. State-of-the-art CD reviews aim to provide a comprehensive synthesis and to simplify the taxonomy of traditional remote sensing CD techniques, which mainly sit within the boundary of 2D image/spectrum analysis and largely ignore the particularities of the 3D aspects of the data. The inclusion of 3D data for change detection (termed 3D CD) not only provides a source with a different modality for analysis, but also transcends the border of traditional top-view 2D pixel/object-based analysis towards highly detailed, oblique-view or voxel-based geometric analysis. This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academic and industry researchers who seek solutions for detecting and analyzing the 3D dynamics of various objects of interest. We first describe the general considerations of 3D CD problems at different processing stages and identify CD types based on the information used, namely geometric comparison and geometric-spectral analysis. We then summarize relevant works and practices in urban, environmental, ecological and civil applications, etc. Given the broad spectrum of applications and different types of 3D data, we discuss important issues in 3D CD methods. Finally, we present concluding remarks on the algorithmic aspects of 3D CD.

  12. OSIRIS - an object-oriented parallel 3D PIC code for modeling laser and particle beam-plasma interaction

    NASA Astrophysics Data System (ADS)

    Hemker, Roy

    1999-11-01

    Advances in computational speed now make it possible to perform full 3D PIC simulations of laser-plasma and beam-plasma interactions, but at the same time the increased complexity of these problems makes it necessary to apply modern approaches such as object-oriented programming to the development of simulation codes. We report here on our progress in developing an object-oriented parallel 3D PIC code using Fortran 90. In its current state the code contains algorithms for 1D, 2D, and 3D simulations in Cartesian coordinates and for 2D cylindrically symmetric geometry. For all of these algorithms the code allows for a moving simulation window and arbitrary domain decomposition for any number of dimensions. Recent 3D simulation results on the propagation of intense laser and electron beams through plasmas will be presented.

  13. A Study on AR 3D Objects Shading Method Using Electronic Compass Sensor

    NASA Astrophysics Data System (ADS)

    Jung, Sungmo; Kim, Seoksoo

    More effective communication can be offered to users by applying NPR (Non-Photorealistic Rendering) methods to 3D graphics, and there has been much research on how to apply NPR to mobile contents. However, previous studies only propose cartoon rendering as a pre-treatment, with no consideration of the directions of light in the surrounding environment. In this study, therefore, an ECS (Electronic Compass Sensor) is applied to the shading of AR 3D objects in order to define the directions of light for different time slots, so that the objects are assimilated with the surrounding environment.

  14. Reconstruction of 3d Objects of Assets and Facilities by Using Benchmark Points

    NASA Astrophysics Data System (ADS)

    Baig, S. U.; Rahman, A. A.

    2013-08-01

    Acquiring and modeling 3D geo-data of building assets and facility objects remains a challenge, and a number of methods and technologies are being utilized for this purpose; total station, GPS, photogrammetry and terrestrial laser scanning are a few of them. In this paper, points commonly shared by potential facades of assets and facilities modeled from point clouds are identified. These points are useful in the modeling process for reconstructing 3D models of assets and facilities that are stored for management purposes. The models are segmented along different planes to produce accurate 2D plans. This method improves the efficiency and quality of constructing models of assets and facilities, with the aim of using them in 3D management projects such as the maintenance of buildings or of groups of items that need to be replaced or renovated for new services.

  15. Object recognition and localization from 3D point clouds by maximum-likelihood estimation

    NASA Astrophysics Data System (ADS)

    Dantanarayana, Harshana G.; Huntley, Jonathan M.

    2017-08-01

    We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike `interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
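
    As an illustration of the general idea, and not the paper's actual formulation, the following sketch scores candidate poses in a simple 2 degrees-of-freedom (planar translation) setting by the log-likelihood of point-to-scene residuals under an assumed isotropic Gaussian dispersion, then picks the maximum-likelihood pose by grid search.

      # Illustrative sketch: maximum-likelihood pose estimation over 2 d.f. (x, y shift).
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(1)
      model = rng.uniform(-1, 1, size=(500, 2))             # stand-in model surface points
      true_shift = np.array([0.30, -0.20])
      scene = model + true_shift + rng.normal(0, 0.01, size=model.shape)

      sigma = 0.01                                          # assumed residual dispersion
      tree = cKDTree(scene)

      def log_likelihood(shift):
          d, _ = tree.query(model + shift)                  # residuals for this pose
          return -0.5 * np.sum((d / sigma) ** 2)

      grid = np.linspace(-0.5, 0.5, 51)
      best = max(((x, y) for x in grid for y in grid),
                 key=lambda s: log_likelihood(np.array(s)))
      print("estimated shift:", best)                       # close to (0.30, -0.20)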

  16. Objective and subjective comparison of standard 2-D and fully 3-D reconstructed data on a PET/CT system.

    PubMed

    Strobel, Klaus; Rüdy, Matthias; Treyer, Valerie; Veit-Haibach, Patrick; Burger, Cyrill; Hany, Thomas F

    2007-07-01

    The relative advantage of fully 3-D versus 2-D mode for whole-body imaging is currently the focus of considerable expert debate. The nature of 3-D PET acquisition for FDG PET/CT theoretically allows a shorter scan time and more efficient use of FDG than standard 2-D acquisition. We therefore objectively and subjectively compared standard 2-D and fully 3-D reconstructed FDG PET/CT data on a research PET/CT system. In a total of 36 patients (mean 58.9 years, range 17.3-78.9 years; 21 male, 15 female) referred for known or suspected malignancy, FDG PET/CT was performed using a research PET/CT system with advanced detector technology offering improved sensitivity and spatial resolution. After a 45 min uptake period, a low-dose CT (40 mAs) from head to thigh was performed, followed by 2-D PET (emission 3 min per field) and 3-D PET (emission 1.5 min per field), both with a seven-slice overlap to cover the identical anatomical region. Acquisition time was therefore 50% shorter (seven fields; 21 min vs. 10.5 min). PET data were acquired in a randomized fashion, so in 50% of the cases the 2-D data were acquired first. CT data were used for attenuation correction, and 2-D (OSEM) and 3-D PET images were iteratively reconstructed. Subjective analysis of the 2-D and 3-D images was performed by two readers in a blinded, randomized fashion evaluating the following criteria: sharpness of organs (liver, chest wall/lung), overall image quality, and detectability and dignity of each identified lesion. Objective analysis of the PET data was performed by measuring the maximum standardized uptake value with lean body mass (SUV(max,LBM)) of identified lesions. On average, per patient, the SUV(max) was 7.86 (SD 7.79) for 2-D and 6.96 (SD 5.19) for 3-D. On a lesion basis, the average SUV(max) was 7.65 (SD 7.79) for 2-D and 6.75 (SD 5.89) for 3-D. The absolute difference on a paired t-test of SUV 3-D-2-D based on each measured lesion was significant with an average of -0.956 (P=0.002) and an average of -0.884 on a
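
    For context, a paired comparison of lesion SUVmax between the two reconstruction modes can be run as in the following sketch; the numbers are invented for illustration and are not the study's data.

      # Illustrative sketch with invented values: paired t-test of 3-D vs. 2-D SUVmax.
      import numpy as np
      from scipy import stats

      suv_2d = np.array([7.1, 9.4, 3.2, 12.8, 5.5, 6.0, 8.3])   # hypothetical lesions
      suv_3d = suv_2d - np.array([0.9, 1.2, 0.3, 1.5, 0.6, 0.7, 1.0])

      t, p = stats.ttest_rel(suv_3d, suv_2d)
      print(f"mean difference {np.mean(suv_3d - suv_2d):.2f}, t = {t:.2f}, p = {p:.4f}")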

  17. Three-Dimensional Interpretation of Sculptural Heritage with Digital and Tangible 3D Printed Replicas

    ERIC Educational Resources Information Center

    Saorin, José Luis; Carbonell-Carrera, Carlos; Cantero, Jorge de la Torre; Meier, Cecile; Aleman, Drago Diaz

    2017-01-01

    Spatial interpretation features as a skill to be acquired in educational curricula. The visualization and interpretation of three-dimensional objects on tactile devices, and the possibility of digital manufacturing with 3D printers, offer an opportunity to include replicas of sculptures in teaching and thus facilitate the 3D interpretation of…

  18. Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods.

    PubMed

    Kim, Seung-Cheol; Kim, Eun-Soo

    2009-02-20

    In this paper we propose a new approach for fast generation of computer-generated holograms (CGHs) of a 3D object by using run-length encoding (RLE) and the novel look-up table (N-LUT) method. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into an N-point redundancy map according to the number of adjacent object points having the same 3D value. Based on this redundancy map, N-point principal fringe patterns (PFPs) are newly calculated from the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, the number of object points involved in the calculation of the CGH pattern can be dramatically reduced and, as a result, an increase in computational speed is obtained. Experiments with a test 3D object are carried out and the results are compared to those of conventional methods.
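
    The run-length grouping step can be illustrated with the following minimal sketch (an illustrative reading of the abstract, not the authors' code): adjacent object points with the same value along a slice are grouped into runs, and the run lengths populate an N-point redundancy map.

      # Illustrative sketch: run-length encoding of one row of object points into an
      # N-point redundancy map (run length -> list of (start position, value)).
      from itertools import groupby
      from collections import defaultdict

      row = [0, 3, 3, 3, 3, 0, 0, 7, 7, 5]      # hypothetical values, 0 = empty

      redundancy_map = defaultdict(list)
      pos = 0
      for value, run in groupby(row):
          length = len(list(run))
          if value != 0:                        # only actual object points are encoded
              redundancy_map[length].append((pos, value))
          pos += length

      print(dict(redundancy_map))               # {4: [(1, 3)], 2: [(7, 7)], 1: [(9, 5)]}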

  19. Geo-Referenced Dynamic Pushbroom Stereo Mosaics for 3D and Moving Target Extraction - A New Geometric Approach

    DTIC Science & Technology

    2009-12-01

    Indexed abstract text is fragmentary for this record. The recoverable content states that geo-referenced dynamic pushbroom stereo mosaics are proposed for facilitating reliable stereo matching, occlusion handling, accurate 3D reconstruction and robust moving target detection, and that imaging from a moving platform requires naturally and effectively handling obvious motion parallax and object occlusions in order to detect moving targets.

  20. Tailoring bulk mechanical properties of 3D printed objects of polylactic acid varying internal micro-architecture

    NASA Astrophysics Data System (ADS)

    Malinauskas, Mangirdas; Skliutas, Edvinas; Jonušauskas, Linas; Mizeras, Deividas; Šešok, Andžela; Piskarskas, Algis

    2015-05-01

    Herein we present 3D printing (3DP) fabrication of structures having an internal microarchitecture and characterization of their mechanical properties. Depending on the material, geometry and fill factor, the mechanical performance of the manufactured objects can be tailored from "hard" to "soft." In this work we employ a low-cost fused filament fabrication 3D printer enabling point-by-point structuring of poly(lactic acid) (PLA) with ~400 µm feature spatial resolution. The chosen architectures are defined as woodpiles (BCC, FCC and 60 deg rotating). The period is chosen to be 1200 µm, corresponding to 800 µm pores. The structural quality of the produced objects is characterized using a scanning electron microscope, and their mechanical properties, such as flexural modulus, elastic modulus and stiffness, are measured experimentally using a universal TIRAtest2300 machine. Within the limitations of the study we show that the mechanical properties of 3D printed objects can be tuned at least 3 times by changing only the woodpile geometry arrangement, while keeping the same filling factor and periodicity of the logs. Additionally, we demonstrate custom 3D printed µ-fluidic elements which can serve as cheap, biocompatible and environmentally biodegradable platforms for integrated Lab-On-Chip (LOC) devices.
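
    For reference, the flexural modulus quoted above is conventionally obtained from a three-point bending test via the standard relation below (a textbook formula, stated under the assumption of that test configuration; L is the support span, b the specimen width, d the specimen thickness, and m the slope of the initial, linear portion of the load-deflection curve):

      E_f = \frac{L^{3} m}{4 b d^{3}}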

  1. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply than still image shooting in IBM techniques because the latter needs thorough planning and proficiency. However, one is faced with three main problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images helps to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments, modelling a building and a monument, are carried out using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of detail. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
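
    As a loose illustration of thinning a video into a reduced set of usable frames (the paper's actual selection criteria, object coverage and blur, are evaluated more rigorously), one could enforce a minimum frame spacing and reject blurred frames via the variance of the Laplacian; the names `min_gap` and `blur_threshold` below are hypothetical parameters introduced only for this sketch.

      import cv2  # OpenCV, assumed available

      def select_keyframes(video_path, min_gap=15, blur_threshold=100.0):
          # Keep at most one frame per `min_gap` frames and drop frames whose
          # Laplacian variance indicates motion blur (illustrative only).
          cap = cv2.VideoCapture(video_path)
          keep, index, last_kept = [], 0, -min_gap
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              if index - last_kept >= min_gap:
                  gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                  sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
                  if sharpness >= blur_threshold:     # reject blurred frames
                      keep.append(index)
                      last_kept = index
              index += 1
          cap.release()
          return keep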

  2. Refocusing-range and image-quality enhanced optical reconstruction of 3-D objects from integral images using a principal periodic δ-function array

    NASA Astrophysics Data System (ADS)

    Ai, Lingyu; Kim, Eun-Soo

    2018-03-01

    We propose a method for refocusing-range and image-quality enhanced optical reconstruction of three-dimensional (3-D) objects from integral images only by using a 3 × 3 periodic δ-function array (PDFA), which is called a principal PDFA (P-PDFA). By directly convolving the elemental image array (EIA) captured from 3-D objects with the P-PDFAs whose spatial periods correspond to each object's depth, a set of spatially-filtered EIAs (SF-EIAs) is extracted, from which 3-D objects can be reconstructed refocused at their real depths. Since the convolution operations are performed directly on each of the minimum 3 × 3 EIs of the picked-up EIA, the capturing and refocused-depth ranges of 3-D objects can be greatly enhanced, and 3-D objects with much improved image quality can be reconstructed without any preprocessing operations. Through ray-optical analysis and optical experiments with actual 3-D objects, the feasibility of the proposed method has been confirmed.
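
    A minimal sketch of the filtering step, under the assumption that the P-PDFA can be modelled as a 3 × 3 grid of unit impulses whose spacing matches one object depth; the EIA here is random stand-in data and the convolution uses SciPy's FFT-based routine rather than the authors' optical formulation.

      import numpy as np
      from scipy.signal import fftconvolve

      def principal_pdfa(period):
          # 3 x 3 array of delta functions separated by `period` pixels.
          size = 2 * period + 1
          kernel = np.zeros((size, size))
          kernel[::period, ::period] = 1.0
          return kernel

      def spatially_filter_eia(eia, period):
          # Convolve the captured elemental image array with the P-PDFA whose
          # spatial period corresponds to the depth of the object of interest.
          return fftconvolve(eia, principal_pdfa(period), mode="same")

      eia = np.random.rand(300, 300)            # stand-in for a picked-up EIA
      sf_eia = spatially_filter_eia(eia, period=30)
      print(sf_eia.shape)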

  3. 3D imaging, 3D printing and 3D virtual planning in endodontics.

    PubMed

    Shah, Pratik; Chong, B S

    2018-03-01

    The adoption and adaptation of recent advances in digital technology, such as three-dimensional (3D) printed objects and haptic simulators, in dentistry have influenced the teaching and/or management of cases involving implant, craniofacial, maxillofacial, orthognathic and periodontal treatments. 3D printed models and guides may help operators plan and tackle complicated non-surgical and surgical endodontic treatment and may aid skill acquisition. Haptic simulators may assist in the development of competency in endodontic procedures through the acquisition of psycho-motor skills. This review explores and discusses the potential applications of 3D printed models and guides, and haptic simulators in the teaching and management of endodontic procedures. An understanding of the pertinent technology related to the production of 3D printed objects and the operation of haptic simulators is also presented.

  4. 3D imaging with a single-aperture 3-mm objective lens: concept, fabrication, and test

    NASA Astrophysics Data System (ADS)

    Korniski, Ronald; Bae, Sam Y.; Shearn, Michael; Manohara, Harish; Shahinian, Hrayr

    2011-10-01

    There are many advantages to minimally invasive surgery (MIS). An endoscope is the optical system of choice by the surgeon for MIS. The smaller the incision or opening made to perform the surgery, the smaller the optical system needed. For minimally invasive neurological and skull base surgeries the openings are typically 10 mm in diameter (dime sized) or less. The largest outside diameter (OD) endoscope used is 4 mm. A significant drawback of endoscopic MIS is that it only provides a monocular view of the surgical site, thereby lacking depth information for the surgeon. A stereo view would provide the surgeon instantaneous depth information of the surroundings within the field of view, a significant advantage especially during brain surgery. Providing 3D imaging in an endoscopic objective lens system presents significant challenges because of the tight packaging constraints. This paper presents a promising new technique for endoscopic 3D imaging that uses a single lens system with complementary multi-bandpass filters (CMBFs), and describes the proof-of-concept demonstrations performed to date validating the technique. These demonstrations of the technique have utilized many commercial off-the-shelf (COTS) components including the ones used in the endoscope objective.

  5. 3D Imaging with a Single-Aperture 3-mm Objective Lens: Concept, Fabrication and Test

    NASA Technical Reports Server (NTRS)

    Korniski, Ron; Bae, Sam Y.; Shearn, Mike; Manohara, Harish; Shahinian, Hrayr

    2011-01-01

    There are many advantages to minimally invasive surgery (MIS). An endoscope is the optical system of choice by the surgeon for MIS. The smaller the incision or opening made to perform the surgery, the smaller the optical system needed. For minimally invasive neurological and skull base surgeries the openings are typically 10 mm in diameter (dime sized) or less. The largest outside diameter (OD) endoscope used is 4 mm. A significant drawback of endoscopic MIS is that it only provides a monocular view of the surgical site, thereby lacking depth information for the surgeon. A stereo view would provide the surgeon instantaneous depth information of the surroundings within the field of view, a significant advantage especially during brain surgery. Providing 3D imaging in an endoscopic objective lens system presents significant challenges because of the tight packaging constraints. This paper presents a promising new technique for endoscopic 3D imaging that uses a single lens system with complementary multi-bandpass filters (CMBFs), and describes the proof-of-concept demonstrations performed to date validating the technique. These demonstrations of the technique have utilized many commercial off-the-shelf (COTS) components including the ones used in the endoscope objective.

  6. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  7. Defining Simple nD Operations Based on Prismatic nD Objects

    NASA Astrophysics Data System (ADS)

    Arroyo Ohori, K.; Ledoux, H.; Stoter, J.

    2016-10-01

    An alternative to the traditional approaches of separately modelling 2D/3D space, time, scale and other parametrisable characteristics in GIS lies in the higher-dimensional modelling of geographic information, in which a chosen set of non-spatial characteristics, e.g. time and scale, are modelled as extra geometric dimensions perpendicular to the spatial ones, thus creating a higher-dimensional model. While higher-dimensional models are undoubtedly powerful, they are also hard to create and manipulate due to our lack of an intuitive understanding of dimensions higher than three. As a solution to this problem, this paper proposes a methodology that makes nD object generation easier by splitting the creation and manipulation process into three steps: (i) constructing simple nD objects based on nD prismatic polytopes (analogous to prisms in 3D), (ii) defining simple modification operations at the vertex level, and (iii) simple postprocessing to fix errors introduced in the model. As a use case, we show how two sets of operations can be defined and implemented in a dimension-independent manner using this methodology: the most common transformations (i.e. translation, scaling and rotation) and the collapse of objects. The nD objects generated in this manner can then be used as a basis for an nD GIS.
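
    A dimension-independent sketch of the simplest vertex-level operations (translation and scaling) on an nD prismatic polytope represented as a plain vertex array; the 4D example below, a square extruded along hypothetical time and scale axes, is an assumption for illustration and not the paper's data structure.

      import numpy as np

      def translate(vertices, offset):
          # Translate a (k x n) array of n-D vertices by an n-D offset vector.
          return np.asarray(vertices, dtype=float) + np.asarray(offset)

      def scale(vertices, factors, centre=None):
          # Scale each dimension independently about an optional centre point.
          vertices = np.asarray(vertices, dtype=float)
          centre = vertices.mean(axis=0) if centre is None else np.asarray(centre)
          return (vertices - centre) * np.asarray(factors) + centre

      # A toy 4D "prism": a unit square extruded along two extra dimensions.
      square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
      prism_4d = np.array([np.append(v, [t, s]) for v in square
                           for t in (0, 1) for s in (0, 1)])
      print(translate(prism_4d, [2, 0, 0.5, 0]).shape)   # (16, 4)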

  8. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454
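
    A minimal sketch of the height-histogram idea only (the paper additionally refines the labels with a Gibbs-Markov random field and estimates 3D object boundaries): the densest height bin is taken as the ground level and points near it are labelled as ground. The bin size, tolerance and synthetic point cloud are assumptions.

      import numpy as np

      def segment_ground(points, bin_size=0.1, tolerance=0.3):
          # Label points as ground/non-ground from a height histogram: the
          # most populated height bin is taken as the ground level.
          heights = points[:, 2]
          bins = np.arange(heights.min(), heights.max() + bin_size, bin_size)
          counts, edges = np.histogram(heights, bins=bins)
          ground_level = edges[np.argmax(counts)]          # densest height bin
          return np.abs(heights - ground_level) <= tolerance

      points = np.random.rand(1000, 3) * [10, 10, 3]       # x, y, z in metres
      points[:700, 2] = 0.05 * np.random.randn(700)        # flat ground cluster
      print(segment_ground(points).sum(), "points labelled as ground")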

  9. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.

  10. Differential and relaxed image foresting transform for graph-cut segmentation of multiple 3D objects.

    PubMed

    Moya, Nikolas; Falcão, Alexandre X; Ciesielski, Krzysztof C; Udupa, Jayaram K

    2014-01-01

    Graph-cut algorithms have been extensively investigated for interactive binary segmentation, while the simultaneous delineation of multiple objects can save considerable user time. We present an algorithm (named DRIFT) for 3D multiple object segmentation based on seed voxels and Differential Image Foresting Transforms (DIFTs) with relaxation. DRIFT builds on the image foresting transform, which stands behind efficient implementations of several state-of-the-art methods. The user can add/remove markers (seed voxels) along a sequence of executions of the DRIFT algorithm to improve segmentation. Its first execution takes time linear in the image's size, while the subsequent executions for corrections take sublinear time in practice. At each execution, DRIFT first runs the DIFT algorithm, then it applies diffusion filtering to smooth boundaries between objects (and background) and, finally, it corrects possible disconnections of objects with respect to their seeds. We evaluate DRIFT on 3D CT images of the thorax for segmenting the arterial system, esophagus, left pleural cavity, right pleural cavity, trachea and bronchi, and the venous system.
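
    For intuition about the underlying image foresting transform, a seeded shortest-path forest, here is a minimal 2D sketch using an fmax path cost; it is not the DRIFT algorithm itself, which adds differential recomputation, relaxation (boundary smoothing) and correction of disconnected objects on 3D volumes.

      import heapq
      import numpy as np

      def ift_segment(image, seeds):
          # Minimal seeded image foresting transform on a 2D image: each pixel
          # is conquered by the seed reaching it along the path whose maximum
          # arc weight (intensity difference) is smallest.
          h, w = image.shape
          cost = np.full((h, w), np.inf)
          label = np.full((h, w), -1, dtype=int)
          heap = []
          for lab, (r, c) in seeds.items():
              cost[r, c], label[r, c] = 0.0, lab
              heapq.heappush(heap, (0.0, r, c))
          while heap:
              d, r, c = heapq.heappop(heap)
              if d > cost[r, c]:
                  continue                       # stale heap entry
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < h and 0 <= nc < w:
                      arc = abs(float(image[nr, nc]) - float(image[r, c]))
                      new_cost = max(d, arc)     # fmax path-cost function
                      if new_cost < cost[nr, nc]:
                          cost[nr, nc] = new_cost
                          label[nr, nc] = label[r, c]
                          heapq.heappush(heap, (new_cost, nr, nc))
          return label

      img = np.zeros((20, 20))
      img[:, 10:] = 1.0                          # two flat regions, one step edge
      print(ift_segment(img, {1: (5, 2), 2: (5, 17)})[5, :])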

  11. Modeling and modification of medical 3D objects. The benefit of using a haptic modeling tool.

    PubMed

    Kling-Petersen, T; Rydmark, M

    2000-01-01

    …any given amount of smoothing to the object. While the final objects need to be exported for further 3D graphic manipulation, FreeForm addresses one of the most time-consuming problems of 3D modeling: modification and creation of non-geometric 3D objects.

  12. Extending 3D city models with legal information

    NASA Astrophysics Data System (ADS)

    Frank, A. U.; Fuhrmann, T.; Navratil, G.

    2012-10-01

    3D city models represent existing physical objects and their topological and functional relations. In everyday life the rights and responsibilities connected to these objects, primarily legally defined rights and obligations but also other socially and culturally established rights, are of importance. The rights and obligations are defined in various laws and it is often difficult to identify the rules applicable to a certain case. The existing 2D cadastres show civil-law rights and obligations, and plans to extend them to provide information about public-law restrictions on land use are under way in several countries. It is tempting to design extensions to 3D city models to provide information about legal rights in 3D. The paper analyses the different types of information that are needed to reduce conflicts and to facilitate decisions about land use. We identify the role that 3D city models augmented with planning information in 3D can play, but do not advocate a general conversion from 2D to 3D for the legal cadastre. Space is not isotropic: the up/down dimension is practically very different from the two horizontal dimensions, and this difference must be respected when designing spatial information systems. The conclusions are: (1) continue the current regime for ownership of apartments, which is not ownership of a 3D volume but co-ownership of a building with exclusive use of some rooms; such exclusive use rights could be shown in a 3D city model; (2) ownership of 3D volumes for complex and unusual building situations can be reported in a 3D city model, but is not required everywhere; (3) indicate restrictions on land use and building in 3D city models, with links to the legal sources.

  13. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays.

    PubMed

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-11-30

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor since clearly the definition of the 3D object profile depends on the correct and accurate position response of each detector as well as on the size of the PSD array.

  14. Influence of georeference for saturated excess overland flow modelling using 3D volumetric soft geo-objects

    NASA Astrophysics Data System (ADS)

    Izham, Mohamad Yusoff; Muhamad Uznir, Ujang; Alias, Abdul Rahman; Ayob, Katimon; Wan Ruslan, Ismail

    2011-04-01

    Existing 2D data structures are often insufficient for analysing the dynamism of saturation excess overland flow (SEOF) within a basin. Moreover, all stream networks and soil surface structures in GIS must be preserved through appropriate projection-plane fitting techniques, known as georeferencing. Including the 3D volumetric structure of current soft geo-object simulation models would be a substantial step towards representing 3D soft geo-objects of SEOF dynamically within a basin, by visualising saturated flow and overland flow volume. This research attempts to visualise the influence of the georeference system on the dynamism of overland flow coverage and the total overland flow volume generated from the SEOF process using the VSG data structure. The data structure is driven by the Green-Ampt method and the Topographic Wetness Index (TWI). VSGs are analysed by focusing on the spatial object preservation techniques of the conformal-based Malaysian Rectified Skew Orthomorphic (MRSO) and the equidistant-based Cassini-Soldner projection planes under the existing geodetic Malaysian Revised Triangulation 1948 (MRT48) datum and the newly implemented Geocentric Datum for Malaysia (GDM2000). The simulated result visualises the deformation of SEOF coverage under different georeference systems via their projection planes, which delineate dissimilar computations of SEOF areas and overland flow volumes. The integration of georeferencing, 3D GIS and the saturation excess mechanism provides unifying evidence towards successful landslide and flood disaster management through envisioning the streamflow generating process (mainly SEOF) in a 3D environment.
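
    For reference, the Topographic Wetness Index used to drive the saturation-excess mechanism is commonly computed as TWI = ln(a / tan β); a minimal sketch with assumed toy flow-accumulation and slope grids is given below (the actual values would come from the DEM in the chosen georeference system).

      import numpy as np

      def topographic_wetness_index(upslope_cells, slope_rad, cell_size=30.0):
          # TWI = ln(a / tan(beta)), where a is the specific catchment area
          # (upslope contributing area per unit contour length) and beta the slope.
          a = upslope_cells * cell_size                  # cell_area / cell_width
          tan_beta = np.maximum(np.tan(slope_rad), 1e-6) # guard against flat cells
          return np.log(a / tan_beta)

      upslope = np.array([[1, 2, 1], [3, 9, 2], [1, 4, 1]], dtype=float)
      slope = np.radians([[5, 4, 6], [3, 1, 5], [6, 2, 7]])  # toy slope angles
      print(np.round(topographic_wetness_index(upslope, slope), 2))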

  15. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    PubMed Central

    Cengiz, Kubra

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performances of the methods using mean structural similarity (MSSIM) values. PMID:24371468
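
    A minimal Kaczmarz-style ART sketch on a toy linear system, to show the kind of iterative update such a simulator implements; the system matrix, relaxation factor and phantom below are assumptions, and the sketch ignores the DBT geometry, TV regularisation and compressed-sensing variants.

      import numpy as np

      def art_reconstruct(A, b, n_iter=10, relax=0.5):
          # Kaczmarz-style ART: cycle over projection rows and nudge the image
          # estimate so that each measurement is better matched.
          x = np.zeros(A.shape[1])
          row_norms = (A ** 2).sum(axis=1)
          for _ in range(n_iter):
              for i in range(A.shape[0]):
                  if row_norms[i] == 0:
                      continue
                  residual = b[i] - A[i] @ x
                  x += relax * residual / row_norms[i] * A[i]
          return x

      # Toy system: three "projections" of a four-pixel phantom.
      A = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0]], dtype=float)
      phantom = np.array([1.0, 2.0, 3.0, 4.0])
      print(np.round(art_reconstruct(A, A @ phantom, n_iter=200), 2))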

  16. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performances of the methods using mean structural similarity (MSSIM) values.

  17. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10⁷ or 10⁸ 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  18. Subjective and objective evaluation of visual fatigue on viewing 3D display continuously

    NASA Astrophysics Data System (ADS)

    Wang, Danli; Xie, Yaohua; Yang, Xinpan; Lu, Yang; Guo, Anxiang

    2015-03-01

    In recent years, three-dimensional (3D) displays have become more and more popular in many fields. Although they can provide a better viewing experience, they cause extra problems, e.g., visual fatigue. Subjective or objective methods are usually used in discrete viewing processes to evaluate visual fatigue. However, little research combines subjective indicators and objective ones in an entirely continuous viewing process. In this paper, we propose a method to evaluate real-time visual fatigue both subjectively and objectively. Subjects watch stereo content on a polarized 3D display continuously. Visual Reaction Time (VRT), Critical Flicker Frequency (CFF), Punctum Maximum Accommodation (PMA) and subjective scores of visual fatigue are collected before and after viewing. During the viewing process, the subjects rate the visual fatigue whenever it changes, without breaking the viewing process. At the same time, the blink frequency (BF) and percentage of eye closure (PERCLOS) of each subject are recorded for comparison with previous research. The results show that the subjective visual fatigue and PERCLOS increase with time and that they are greater in a continuous viewing process than in a discrete one. The BF also increases with time during the continuous viewing process. Besides, the visual fatigue also induces significant changes in VRT, CFF and PMA.

  19. 3D Bioprinting of Tissue/Organ Models.

    PubMed

    Pati, Falguni; Gantelius, Jesper; Svahn, Helene Andersson

    2016-04-04

    In vitro tissue/organ models are useful platforms that can facilitate systematic, repetitive, and quantitative investigations of drugs/chemicals. The primary objective when developing tissue/organ models is to reproduce physiologically relevant functions that typically require complex culture systems. Bioprinting offers exciting prospects for constructing 3D tissue/organ models, as it enables the reproducible, automated production of complex living tissues. Bioprinted tissues/organs may prove useful for screening novel compounds or predicting toxicity, as the spatial and chemical complexity inherent to native tissues/organs can be recreated. In this Review, we highlight the importance of developing 3D in vitro tissue/organ models by 3D bioprinting techniques, characterization of these models for evaluating their resemblance to native tissue, and their application in the prioritization of lead candidates, toxicity testing, and as disease/tumor models. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. 3D printing cybersecurity: detecting and preventing attacks that seek to weaken a printed object by changing fill level

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2017-06-01

    Prior work by Zeltmann et al. has demonstrated the impact of small defects and other irregularities on the structural integrity of 3D printed objects and posited that such defects could be introduced intentionally. The current work looks at the impact of changing the fill level on object structural integrity. It considers whether the existence of an appropriate level of fill can be determined through visible-light imagery-based assessment of a 3D printed object. A technique for assessing the quality and sufficiency of the quantity of 3D printed fill material is presented. It is assessed experimentally and the results are presented and analyzed.

  1. Registration of untypical 3D objects in Polish cadastre - do we need 3D cadastre? / Rejestracja nietypowych obiektów 3D w polskim katastrze - czy istnieje potrzeba wdrożenia katastru 3D?

    NASA Astrophysics Data System (ADS)

    Marcin, Karabin

    2012-11-01

    The Polish cadastral system consists of two registers: the cadastre and the land register. The cadastre registers data on the location of cadastral objects (land parcels, buildings and premises) in a two-dimensional coordinate system, their attributes, as well as data about the owners. The land register contains data concerning ownership and other rights to the property. Registration of a land parcel without spatial objects located on its surface is not problematic. Registration of buildings and premises in typical cases is not a problem either. The situation becomes more complicated in cases of multiple use of the space above or below the parcel and with more complex construction of the buildings. The paper presents rules concerning the registration of various untypical 3D objects located within the city of Warsaw, together with an analysis of the data concerning those objects as registered in the cadastre and the land register; this constitutes the next part of the author's detailed research. The aim of this paper is to answer the question of whether we really need a 3D cadastre in Poland. The Polish cadastral system consists of two registers: the register of land and buildings (the real estate cadastre) and the land and mortgage registers. The register of land and buildings (real estate cadastre) records the location (in a two-dimensional coordinate system), attributes and ownership data of cadastral objects (parcels, buildings and premises); the land and mortgage registers record, in addition to ownership data, other rights to the property. Registration of a parcel without spatial objects located on its surface is not a problem, and neither, in typical cases, is the registration of buildings and premises. The situation becomes more complicated in the case of multiple use of the space above or below the parcel surface and in the case of buildings of complex construction. The paper presents the rules related to the registration of untypical 3D…

  2. Innovations in 3D printing: a 3D overview from optics to organs.

    PubMed

    Schubert, Carl; van Langeveld, Mark C; Donoso, Larry A

    2014-02-01

    3D printing is a method of manufacturing in which materials, such as plastic or metal, are deposited onto one another in layers to produce a three dimensional object, such as a pair of eye glasses or other 3D objects. This process contrasts with traditional ink-based printers which produce a two dimensional object (ink on paper). To date, 3D printing has primarily been used in engineering to create engineering prototypes. However, recent advances in printing materials have now enabled 3D printers to make objects that are comparable with traditionally manufactured items. In contrast with conventional printers, 3D printing has the potential to enable mass customisation of goods on a large scale and has relevance in medicine including ophthalmology. 3D printing has already been proved viable in several medical applications including the manufacture of eyeglasses, custom prosthetic devices and dental implants. In this review, we discuss the potential for 3D printing to revolutionise manufacturing in the same way as the printing press revolutionised conventional printing. The applications and limitations of 3D printing are discussed; the production process is demonstrated by producing a set of eyeglass frames from 3D blueprints.

  3. Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes.

    PubMed

    Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning

    2015-08-27

    This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. The visual saliency scheme utilizing both color and depth cues is proposed to arouse the interests of the machine system for detecting unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from MRF are further refined by merging the labeled objects, which are spatially connected and have high correlation between color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. The experiments of object detection and manipulation performed on a mobile manipulator validate its effectiveness and practicability in robotic applications.
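
    A toy centre-surround sketch of combining colour/intensity and depth cues into a single saliency map (not the paper's scheme, which also generates object hypotheses and labels an MRF over voxels); the equal weighting and window size are assumptions introduced for this sketch.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def rgbd_saliency(intensity, depth, surround=31, w_colour=0.5):
          # Pixels that differ strongly from their local neighbourhood in either
          # the intensity or the depth channel are marked as salient.
          def contrast(channel):
              channel = np.asarray(channel, dtype=float)
              local_mean = uniform_filter(channel, size=surround)
              c = np.abs(channel - local_mean)
              return c / (c.max() + 1e-9)
          return w_colour * contrast(intensity) + (1 - w_colour) * contrast(depth)

      intensity = np.random.rand(120, 160)       # stand-in intensity image
      depth = np.random.rand(120, 160)           # stand-in depth map
      seeds = np.argwhere(rgbd_saliency(intensity, depth) > 0.8)  # salient seeds
      print(len(seeds))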

  4. Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes

    PubMed Central

    Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning

    2015-01-01

    This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. The visual saliency scheme utilizing both color and depth cues is proposed to arouse the interests of the machine system for detecting unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from MRF are further refined by merging the labeled objects, which are spatially connected and have high correlation between color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. The experiments of object detection and manipulation performed on a mobile manipulator validate its effectiveness and practicability in robotic applications. PMID:26343656

  5. Implicit Shape Models for Object Detection in 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Velizhev, A.; Shapovalov, R.; Schindler, K.

    2012-07-01

    We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor, more robust to point density variation and uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against a state-of-the-art method and obtain significant improvement in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans covering 150,000 m² of urban area in total.
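
    The centre-voting at the core of the ISM framework can be sketched in a few lines: every detected feature casts votes for candidate object centres using offsets remembered from training, and peaks in the accumulator indicate detections. The 2D grid, cell size and toy codebook below are assumptions (the paper works on 3D point clouds with spin-image descriptors).

      import numpy as np

      def vote_object_centres(feature_xy, learned_offsets, grid_shape, cell=0.5):
          # Hough-style voting: each feature votes for possible object centres
          # using offsets learned from training examples (2D toy version).
          accumulator = np.zeros(grid_shape)
          for fx, fy in feature_xy:
              for ox, oy in learned_offsets:
                  ix, iy = int((fx + ox) / cell), int((fy + oy) / cell)
                  if 0 <= ix < grid_shape[0] and 0 <= iy < grid_shape[1]:
                      accumulator[ix, iy] += 1
          return accumulator

      features = np.array([[4.0, 4.0], [6.0, 4.0], [5.0, 6.0]])
      offsets = np.array([[1.0, 1.0], [-1.0, 1.0], [0.0, -1.0]])   # toy codebook
      acc = vote_object_centres(features, offsets, grid_shape=(20, 20))
      print(np.unravel_index(acc.argmax(), acc.shape))             # strongest centre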

  6. 220GHz wideband 3D imaging radar for concealed object detection technology development and phenomenology studies

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Macfarlane, David G.; Bryllert, Tomas

    2016-05-01

    We present a 220 GHz 3D imaging `Pathfinder' radar developed within the EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) which has been built to address two objectives: (i) to de-risk the radar hardware development and (ii) to enable the collection of phenomenology data with ~1 cm3 volumetric resolution. The radar combines a DDS-based chirp generator and self-mixing multiplier technology to achieve a 30 GHz bandwidth chirp with such high linearity that the raw point response is close to ideal and only requires minor nonlinearity compensation. The single transceiver is focused with a 30 cm lens mounted on a gimbal to acquire 3D volumetric images of static test targets and materials.

  7. 3D imaging and wavefront sensing with a plenoptic objective

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.

    2011-06-01

    Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed in order to compensate for the resolution decrease associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied in order to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations related to the aforementioned aspects, but also two new developments: a portable plenoptic objective to transform any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images due to the refraction index changes associated with turbulence. These changes require high-speed processing that justifies the use of GPUs and FPGAs. Artificial sodium Laser Guide Stars (Na-LGS, 90 km high) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new contribution to relating the wave optics and computer vision fields, as many authors claim.

  8. Non-destructive 3D shape measurement of transparent and black objects with thermal fringes

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Rößler, Conrad; Dietrich, Patrick; Heist, Stefan; Kühmstedt, Peter; Notni, Gunther

    2016-05-01

    Fringe projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. Typically, fringe sequences in the visible wavelength range (VIS) are projected onto the surfaces of objects to be measured and are observed by two cameras in a stereo vision setup. The reconstruction is done by finding corresponding pixels in both cameras followed by triangulation. Problems can occur if the properties of some materials disturb the measurements. If the objects are transparent, translucent, reflective, or strongly absorbing in the VIS range, the projected patterns cannot be recorded properly. To overcome these challenges, we present a new alternative approach in the infrared (IR) region of the electromagnetic spectrum. For this purpose, two long-wavelength infrared (LWIR) cameras (7.5 - 13 μm) are used to detect the emitted heat radiation from surfaces which is induced by a pattern projection unit driven by a CO2 laser (10.6 μm). Thus, materials like glass or black objects, e.g. carbon fiber materials, can be measured non-destructively without the need of any additional paintings. We will demonstrate the basic principles of this heat pattern approach and show two types of 3D systems based on a freeform mirror and a GOBO wheel (GOes Before Optics) projector unit.

  9. Spatially rearranged object parts can facilitate perception of intact whole objects.

    PubMed

    Cacciamani, Laura; Ayars, Alisabeth A; Peterson, Mary A

    2014-01-01

    The familiarity of an object depends on the spatial arrangement of its parts; when the parts are spatially rearranged, they form a novel, unrecognizable configuration. Yet the same collection of parts comprises both the familiar and novel configuration. Is it possible that the collection of familiar parts activates a representation of the intact familiar configuration even when they are spatially rearranged? We presented novel configurations as primes before test displays that assayed effects on figure-ground perception from memories of intact familiar objects. In our test displays, two equal-area regions shared a central border; one region depicted a portion of a familiar object. Previous research with such displays has shown that participants are more likely to perceive the region depicting a familiar object as the figure and the abutting region as its ground when the familiar object is depicted in its upright orientation rather than upside down. The novel primes comprised either the same or a different collection of parts as the familiar object in the test display (part-rearranged and control primes, respectively). We found that participants were more likely to perceive the familiar region as figure in upright vs. inverted displays following part-rearranged primes but not control primes. Thus, priming with a novel configuration comprising the same familiar parts as the upcoming figure-ground display facilitated orientation-dependent effects of object memories on figure assignment. Similar results were obtained when the spatially rearranged collection of parts was suggested on the groundside of the prime's border, suggesting that familiar parts in novel configurations access the representation of their corresponding intact whole object before figure assignment. These data demonstrate that familiar parts access memories of familiar objects even when they are arranged in a novel configuration.

  10. Spatially rearranged object parts can facilitate perception of intact whole objects

    PubMed Central

    Cacciamani, Laura; Ayars, Alisabeth A.; Peterson, Mary A.

    2014-01-01

    The familiarity of an object depends on the spatial arrangement of its parts; when the parts are spatially rearranged, they form a novel, unrecognizable configuration. Yet the same collection of parts comprises both the familiar and novel configuration. Is it possible that the collection of familiar parts activates a representation of the intact familiar configuration even when they are spatially rearranged? We presented novel configurations as primes before test displays that assayed effects on figure-ground perception from memories of intact familiar objects. In our test displays, two equal-area regions shared a central border; one region depicted a portion of a familiar object. Previous research with such displays has shown that participants are more likely to perceive the region depicting a familiar object as the figure and the abutting region as its ground when the familiar object is depicted in its upright orientation rather than upside down. The novel primes comprised either the same or a different collection of parts as the familiar object in the test display (part-rearranged and control primes, respectively). We found that participants were more likely to perceive the familiar region as figure in upright vs. inverted displays following part-rearranged primes but not control primes. Thus, priming with a novel configuration comprising the same familiar parts as the upcoming figure-ground display facilitated orientation-dependent effects of object memories on figure assignment. Similar results were obtained when the spatially rearranged collection of parts was suggested on the groundside of the prime's border, suggesting that familiar parts in novel configurations access the representation of their corresponding intact whole object before figure assignment. These data demonstrate that familiar parts access memories of familiar objects even when they are arranged in a novel configuration. PMID:24904495

  11. Improvement of quality of 3D printed objects by elimination of microscopic structural defects in fused deposition modeling.

    PubMed

    Gordeev, Evgeniy G; Galushko, Alexey S; Ananikov, Valentine P

    2018-01-01

    Additive manufacturing with fused deposition modeling (FDM) is currently being optimized for a wide range of research and commercial applications. The major disadvantage of FDM-created products is their low quality and structural defects (porosity), which pose an obstacle to utilizing them in functional prototyping and in direct digital manufacturing of objects intended to be in contact with gases and liquids. This article describes a simple and efficient approach for assessing the quality of 3D printed objects. Using this approach it was shown that the wall permeability of a printed object depends on its geometric shape and is gradually reduced in the following series: cylinder > cube > pyramid > sphere > cone. Filament feed rate, wall geometry and G-code-defined wall structure were found to be the primary parameters that influence the quality of 3D-printed products. Optimization of these parameters led to an overall increase in quality and an improvement of sealing properties. It was demonstrated that high quality of 3D printed objects can be achieved using routinely available printers and standard filaments.

  12. Off-axis phase-only holograms of 3D objects using accelerated point-based Fresnel diffraction algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Zhenxiang; Zheng, Huadong; Yu, Yingjie; Asundi, Anand K.

    2017-06-01

    A method for calculating off-axis phase-only holograms of three-dimensional (3D) objects using an accelerated point-based Fresnel diffraction algorithm (PB-FDA) is proposed. The complex amplitudes of the object points on the z-axis in the hologram plane are calculated using the Fresnel diffraction formula and are called principal complex amplitudes (PCAs). The complex amplitudes of off-axis object points of the same depth can be obtained by 2D shifting of the PCAs. In order to improve the calculation speed of the PB-FDA, a convolution operation based on the fast Fourier transform (FFT) is used to calculate the holograms rather than point-by-point spatial 2D shifting of the PCAs. The shortest recording distance of the PB-FDA is analyzed in order to remove the influence of multiple-order images in reconstructed images. The optimal recording distance of the PB-FDA is also analyzed to improve the quality of reconstructed images. Numerical reconstructions and optical reconstructions with a phase-only spatial light modulator (SLM) show that holographic 3D display is feasible with the proposed algorithm. The proposed PB-FDA can also avoid the influence of the zero-order image introduced by the SLM in optically reconstructed images.
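
    A minimal sketch of the FFT-convolution acceleration for a single depth layer: the principal complex amplitude (a Fresnel zone pattern of an on-axis point) is convolved with the layer's point distribution instead of being shifted point by point. The pixel pitch, wavelength, depth and layer below are toy assumptions, and the off-axis carrier and phase-only encoding steps are omitted.

      import numpy as np
      from scipy.signal import fftconvolve

      def principal_fringe_pattern(n, pitch, wavelength, depth):
          # Fresnel zone pattern of a single on-axis point at distance `depth`
          # (a stand-in for the principal complex amplitude of that depth).
          x = (np.arange(n) - n // 2) * pitch
          xx, yy = np.meshgrid(x, x)
          return np.exp(1j * np.pi * (xx ** 2 + yy ** 2) / (wavelength * depth))

      def hologram_for_depth_layer(point_mask, pitch, wavelength, depth):
          # Hologram of one depth layer: convolve the layer's point distribution
          # with the 1-point pattern via FFT instead of shifting it per point.
          pfp = principal_fringe_pattern(point_mask.shape[0], pitch, wavelength, depth)
          return fftconvolve(point_mask, pfp, mode="same")

      layer = np.zeros((256, 256))
      layer[100, 80] = 1.0
      layer[150, 170] = 1.0
      holo = hologram_for_depth_layer(layer, pitch=8e-6, wavelength=532e-9, depth=0.2)
      print(holo.shape)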

  13. Developmental trends in the facilitation of multisensory objects with distractors

    PubMed Central

    Downing, Harriet C.; Barutchu, Ayla; Crewther, Sheila G.

    2015-01-01

    Sensory integration and the ability to discriminate target objects from distractors are critical to survival, yet the developmental trajectories of these abilities are unknown. This study investigated developmental changes in 9- (n = 18) and 11-year-old (n = 20) children, adolescents (n = 19) and adults (n = 22) using an audiovisual object discrimination task with uni- and multisensory distractors. Reaction times (RTs) were slower with visual/audiovisual distractors, and although all groups demonstrated facilitation of multisensory RTs in these conditions, children's and adolescents' responses corresponded to fewer race model violations than adults', suggesting protracted maturation of multisensory processes. Multisensory facilitation could not be explained by changes in RT variability, suggesting that tests of race model violations may still have theoretical value at least for familiar multisensory stimuli. PMID:25653630

  14. The interactive presentation of 3D information obtained from reconstructed datasets and 3D placement of single histological sections with the 3D portable document format.

    PubMed

    de Boer, Bouke A; Soufan, Alexandre T; Hagoort, Jaco; Mohun, Timothy J; van den Hoff, Maurice J B; Hasman, Arie; Voorbraak, Frans P J M; Moorman, Antoon F M; Ruijter, Jan M

    2011-01-01

    Interpretation of the results of anatomical and embryological studies relies heavily on proper visualization of complex morphogenetic processes and patterns of gene expression in a three-dimensional (3D) context. However, reconstruction of complete 3D datasets is time consuming and often researchers study only a few sections. To help in understanding the resulting 2D data we developed a program (TRACTS) that places such arbitrary histological sections into a high-resolution 3D model of the developing heart. The program places sections correctly, robustly and as precisely as the best of the fits achieved by five morphology experts. Dissemination of 3D data is severely hampered by the 2D medium of print publication. Many insights gained from studying the 3D object are very hard to convey using 2D images and are consequently lost or cannot be verified independently. It is possible to embed 3D objects into a pdf document, which is a format widely used for the distribution of scientific papers. Using the freeware program Adobe Reader to interact with these 3D objects is reasonably straightforward; creating such objects is not. We have developed a protocol that describes, step by step, how 3D objects can be embedded into a pdf document. Both the use of TRACTS and the inclusion of 3D objects in pdf documents can help in the interpretation of 2D and 3D data, and will thus optimize communication on morphological issues in developmental biology.

  15. Encountered-Type Haptic Interface for Representation of Shape and Rigidity of 3D Virtual Objects.

    PubMed

    Takizawa, Naoki; Yano, Hiroaki; Iwata, Hiroo; Oshiro, Yukio; Ohkohchi, Nobuhiro

    2017-01-01

    This paper describes the development of an encountered-type haptic interface that can generate the physical characteristics, such as shape and rigidity, of three-dimensional (3D) virtual objects using an array of newly developed non-expandable balloons. To alter the rigidity of each non-expandable balloon, the volume of air in it is controlled through a linear actuator and a pressure sensor based on Hooke's law. Furthermore, to change the volume of each balloon, its exposed surface area is controlled by using another linear actuator with a trumpet-shaped tube. A position control mechanism is constructed to display virtual objects using the balloons. The 3D position of each balloon is controlled using a flexible tube and a string. The performance of the system is tested and the results confirm the effectiveness of the proposed principle and interface.

  16. Shaped-Based Recognition of 3D Objects From 2D Projections

    DTIC Science & Technology

    2006-12-01

    …functions for a typical minimization by the graduated assignment algorithm. (The solid line is E, which uses the Euclidean distances to the nearest…) The values of E and E0 generally decrease during the optimization process, but they can also rise because of changes in the assignment variables M_{jk}… an (m+1) × (n+1) match matrix M that minimizes the objective function E = \sum_{j=1}^{m} \sum_{k=1}^{n} M_{jk} \left( d(T(l_j), l'_k)^2 - \delta^2 \right) (Eq. 7). M defines the…

  17. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects

    NASA Astrophysics Data System (ADS)

    Ye, Zhou; Nain, Amrinder S.; Behkam, Bahareh

    2016-06-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by limited adaptability of the state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high throughput (>10⁻⁷ m² s⁻¹) and large-area fabrication of sub-50 nm to several micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible, to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ~ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features.

  18. Extraction and classification of 3D objects from volumetric CT data

    NASA Astrophysics Data System (ADS)

    Song, Samuel M.; Kwon, Junghyun; Ely, Austin; Enyeart, John; Johnson, Chad; Lee, Jongkyu; Kim, Namho; Boyd, Douglas P.

    2016-05-01

    We propose an Automatic Threat Detection (ATD) algorithm for Explosive Detection Systems (EDS) using our multistage Segmentation and Carving (SC) step followed by a Support Vector Machine (SVM) classifier. The multi-stage Segmentation and Carving (SC) step extracts all suspect 3-D objects. A feature vector is then constructed for each extracted object, and the feature vectors are classified by a Support Vector Machine (SVM) previously learned using a set of ground-truth threat and benign objects. The learned SVM classifier has been shown to be effective in classification of different types of threat materials. The proposed ATD algorithm robustly deals with CT data that are prone to artifacts due to scatter and beam hardening, as well as other systematic idiosyncrasies of the CT data. Furthermore, the proposed ATD algorithm is amenable to including newly emerging threat materials as well as to accommodating data from newly developing sensor technologies. The efficacy of the proposed ATD algorithm with the SVM classifier is demonstrated by the Receiver Operating Characteristic (ROC) curve, which relates the Probability of Detection (PD) to the Probability of False Alarm (PFA). Tests performed using CT data of passenger bags show excellent performance characteristics.
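
    The final classification stage can be illustrated with scikit-learn's SVC on made-up feature vectors; the features, kernel choice and training data below are assumptions for the sketch, not the ATD system's actual configuration.

      import numpy as np
      from sklearn.svm import SVC

      # Toy stand-in for the classification stage: feature vectors of segmented
      # objects (e.g. mean density, volume, texture statistics) are classified
      # as threat (1) or benign (0) by a previously trained SVM.
      rng = np.random.default_rng(0)
      benign = rng.normal([1.0, 0.5, 0.2], 0.1, size=(50, 3))
      threat = rng.normal([1.6, 0.9, 0.4], 0.1, size=(50, 3))
      X = np.vstack([benign, threat])
      y = np.array([0] * 50 + [1] * 50)

      clf = SVC(kernel="rbf", probability=True).fit(X, y)   # learned classifier
      new_object = [[1.5, 0.85, 0.35]]                      # features of a new object
      print(clf.predict(new_object), clf.predict_proba(new_object))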

  19. Hip2Norm: an object-oriented cross-platform program for 3D analysis of hip joint morphology using 2D pelvic radiographs.

    PubMed

    Zheng, G; Tannast, M; Anderegg, C; Siebenrock, K A; Langlotz, F

    2007-07-01

    We developed an object-oriented cross-platform program to perform three-dimensional (3D) analysis of hip joint morphology using two-dimensional (2D) anteroposterior (AP) pelvic radiographs. Landmarks extracted from 2D AP pelvic radiographs and optionally an additional lateral pelvic X-ray were combined with a cone beam projection model to reconstruct 3D hip joints. Since individual pelvic orientation can vary considerably, a method for standardizing pelvic orientation was implemented to determine the absolute tilt/rotation. The evaluation of anatomically morphologic differences was achieved by reconstructing the projected acetabular rim and the measured hip parameters as if obtained in a standardized neutral orientation. The program had been successfully used to interactively objectify acetabular version in hips with femoro-acetabular impingement or developmental dysplasia. Hip2Norm is written in object-oriented programming language C++ using cross-platform software Qt (TrollTech, Oslo, Norway) for graphical user interface (GUI) and is transportable to any platform.

  20. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  1. Ball-scale based hierarchical multi-object recognition in 3D medical images

    NASA Astrophysics Data System (ADS)

    Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-03-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought of as a hierarchical framework in which quick placement of the model assembly is defined as coarse recognition and delineation itself is the finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.

  2. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction

    NASA Astrophysics Data System (ADS)

    Sierra, Heidy; Brooks, Dana; Dimarzio, Charles

    2010-07-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.
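
    A minimal sketch of a local entropy-based texture map of the kind referred to above is given below using scikit-image; it assumes a hypothetical 3D intensity stack (dic_volume) and simply applies the entropy rank filter slice by slice, which is only a simplified stand-in for the 3-D analysis in the paper.

        # Minimal sketch of a local entropy-based texture map, assuming a 3D DIC
        # intensity stack is available as a NumPy array (z, y, x). The filter is
        # applied slice-by-slice with a small circular neighborhood.
        import numpy as np
        from skimage.filters.rank import entropy
        from skimage.morphology import disk
        from skimage.util import img_as_ubyte

        def entropy_texture_stack(stack: np.ndarray, radius: int = 5) -> np.ndarray:
            """Return a per-voxel local-entropy texture map for a 3D intensity stack."""
            # rank filters expect integer images; rescale to [0, 1] and convert to 8-bit
            stack = (stack - stack.min()) / (stack.max() - stack.min() + 1e-12)
            out = np.empty_like(stack, dtype=np.float64)
            footprint = disk(radius)
            for z in range(stack.shape[0]):
                out[z] = entropy(img_as_ubyte(stack[z]), footprint)
            return out

        # texture = entropy_texture_stack(dic_volume)   # dic_volume: hypothetical input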

  3. A method of 3D object recognition and localization in a cloud of points

    NASA Astrophysics Data System (ADS)

    Bielicki, Jerzy; Sitnik, Robert

    2013-12-01

    The method proposed in this article is designed for the analysis of data in the form of point clouds obtained directly from 3D measurements. It is intended for use in end-user applications that can be integrated directly with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. The features utilized in the algorithm are based on parameters that qualitatively estimate mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. Utilization of the FV subsets allows detection of partially occluded and cluttered objects in the scene, while additional spatial information keeps the false positive rate at a reasonably low level.
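
    The snippet below sketches how per-point local feature vectors can be computed from a point cloud with NumPy/SciPy. It uses generic covariance-eigenvalue features as a stand-in for the paper's averaged curvature-like parameters, and the input array scene_points is hypothetical.

        # A minimal sketch of building per-point local feature vectors from a point
        # cloud. The eigenvalue-based measures below are only simple stand-ins for
        # curvature-like descriptors, not the authors' exact formulation.
        import numpy as np
        from scipy.spatial import cKDTree

        def local_features(points: np.ndarray, k: int = 20) -> np.ndarray:
            """Return an (N, 3) array of covariance-eigenvalue features per point."""
            tree = cKDTree(points)
            _, idx = tree.query(points, k=k)          # k nearest neighbors per point
            feats = np.empty((points.shape[0], 3))
            for i, nbrs in enumerate(idx):
                nb = points[nbrs] - points[nbrs].mean(axis=0)
                # eigenvalues of the local covariance, sorted ascending
                w = np.linalg.eigvalsh(nb.T @ nb / k)
                total = w.sum() + 1e-12
                feats[i] = (w[0] / total,              # surface-variation (flatness) measure
                            (w[1] - w[0]) / total,     # planarity-like measure
                            (w[2] - w[1]) / total)     # linearity-like measure
            return feats

        # fv = local_features(scene_points)   # scene_points: hypothetical (N, 3) array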

  4. Optical analysis of nanoparticles via enhanced backscattering facilitated by 3-D photonic nanojets

    NASA Astrophysics Data System (ADS)

    Li, Xu; Chen, Zhigang; Taflove, Allen; Backman, Vadim

    2005-01-01

    We report the phenomenon of ultra-enhanced backscattering of visible light by nanoparticles, facilitated by the 3-D photonic nanojet: a sub-diffraction light beam appearing on the shadow side of a plane-wave-illuminated dielectric microsphere. Our rigorous numerical simulations show that the backscattering intensity of nanoparticles can be enhanced by up to eight orders of magnitude when they are located in the nanojet. As a result, the enhanced backscattering from a nanoparticle with a diameter on the order of 10 nm is well above the background signal generated by the dielectric microsphere itself. We also report that nanojet-enhanced backscattering is extremely sensitive to the size of the nanoparticle, permitting in principle the resolution of sub-nanometer size differences using visible light. Finally, we show how the position of a nanoparticle could be determined with subdiffractional accuracy by recording the angular distribution of the backscattered light. These properties of photonic nanojets promise to make this phenomenon a useful tool for optically detecting, differentiating, and sorting nanoparticles.

  5. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer by building an object detector which leverages the expressive power of 3D object representations while at the same time can be robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]).

  6. [Three-dimensional 3D modeling: First applications in radioanatomy and interventional radiology under CT guidance].

    PubMed

    Aubry, S; Pousse, A; Sarliève, P; Laborie, L; Delabrousse, E; Kastler, B

    2006-11-01

    To model vertebrae in 3D to improve radioanatomic knowledge of the spine with its vascular and nerve environment, and to simulate CT-guided interventions. Vertebra acquisitions were made with multidetector CT. We developed segmentation software and a specific viewer using the Delphi programming environment. The segmentation software makes it possible to model high-resolution 3D segments of vertebrae and their environment from multidetector CT acquisitions. The specific viewer software then provides multiplanar reconstructions of the CT volume and the possibility to select different 3D objects of interest. This software package improves radiologists' radioanatomic knowledge through a new presentation of 3D anatomy. Furthermore, the possibility of inserting virtual 3D objects into the volume makes it possible to simulate CT-guided interventions. This is the first volumetric radioanatomic software package; it also simulates CT-guided interventions and consequently has the potential to facilitate the learning of CT-guided procedures.

  7. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Increasing resource efficiency through automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  8. 3D printing of MRI compatible components: why every MRI research group should have a low-budget 3D printer.

    PubMed

    Herrmann, Karl-Heinz; Gärtner, Clemens; Güllmar, Daniel; Krämer, Martin; Reichenbach, Jürgen R

    2014-10-01

    To evaluate low budget 3D printing technology to create MRI compatible components. A 3D printer is used to create customized MRI compatible components, a loop-coil platform and a multipart mouse fixation. The mouse fixation is custom fit for a dedicated coil and facilitates head fixation with bite bar, anesthetic gas supply and biomonitoring sensors. The mouse fixation was tested in a clinical 3T scanner. All parts were successfully printed and proved MR compatible. Both design and printing were accomplished within a few days and the final print results were functional with well defined details and accurate dimensions (Δ<0.4mm). MR images of the mouse head clearly showed reduced motion artifacts, ghosting and signal loss when using the fixation. We have demonstrated that a low budget 3D printer can be used to quickly progress from a concept to a functional device at very low production cost. While 3D printing technology does impose some restrictions on model geometry, additive printing technology can create objects with complex internal structures that can otherwise not be created by using lathe technology. Thus, we consider a 3D printer a valuable asset for MRI research groups. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. Single Quantum Dot with Microlens and 3D-Printed Micro-objective as Integrated Bright Single-Photon Source

    PubMed Central

    2017-01-01

    Integrated single-photon sources with high photon-extraction efficiency are key building blocks for applications in the field of quantum communications. We report on a bright single-photon source realized by on-chip integration of a deterministic quantum dot microlens with a 3D-printed multilens micro-objective. The device concept benefits from a sophisticated combination of in situ 3D electron-beam lithography to realize the quantum dot microlens and 3D femtosecond direct laser writing for creation of the micro-objective. In this way, we obtain a high-quality quantum device with broadband photon-extraction efficiency of (40 ± 4)% and high suppression of multiphoton emission events with g(2)(τ = 0) < 0.02. Our results highlight the opportunities that arise from tailoring the optical properties of quantum emitters using integrated optics with high potential for the further development of plug-and-play fiber-coupled single-photon sources. PMID:28670600

  10. Fourier Domain Iterative Approach to Optical Sectioning of 3D Translucent Objects for Ophthalmology Purposes

    NASA Astrophysics Data System (ADS)

    Razguli, A. V.; Iroshnikov, N. G.; Larichev, A. V.; Romanenko, T. E.; Goncharov, A. S.

    2017-05-01

    In this paper we deal with the problem of optical sectioning, a post-processing step in the investigation of 3D translucent medical objects based on rapid refocusing of the imaging system by adaptive optics techniques. Each image captured in the focal plane can be represented as the sum of the in-focus true section and out-of-focus contributions from neighboring sections at other depths, which are undesirable in the subsequent reconstruction of the 3D object. The optical sectioning problem under consideration is to elaborate a robust approach capable of obtaining a stack of cross-section images purified of such distortions. For a typical sectioning problem arising in ophthalmology we propose a local iterative method in the Fourier spectral plane. Compared to non-local selection of a constant parameter for the whole spectral domain, the method demonstrates both improved sectioning results and a good level of scalability when implemented on multi-core CPUs.

  11. Acoustic facilitation of object movement detection during self-motion

    PubMed Central

    Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.

    2011-01-01

    In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050

  12. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed Central

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees’ flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006

  13. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots.

  14. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera

  15. SigVox - A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo

    2017-06-01

    Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Monitoring of these objects is traditionally conducted by visual inspection, which is time consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced, embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and by identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant-eigenvector-based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of a sphere-approximating icosahedron. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions along a 4 km stretch of road. Six types of lamp pole and four types of road sign were selected as objects of interest. Ground truth validation showed that the overall accuracy of the ∼170 automatically recognized objects is approximately 95%. The results demonstrate
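
    One ingredient of the descriptor, the per-voxel significant eigenvector obtained by PCA, can be sketched as below; the octree multi-scale subdivision and the mapping of eigenvectors onto the triangles of the sphere-approximating icosahedron are omitted, and the voxel size is an arbitrary assumption.

        # Minimal sketch of computing the dominant eigenvector of the points inside
        # each voxel by PCA. The multi-scale octree and icosahedron mapping of the
        # full SigVox descriptor are not reproduced here.
        import numpy as np

        def voxel_significant_eigenvectors(points, voxel_size=0.5, min_points=5):
            """Return {voxel index: dominant eigenvector} for an (N, 3) point cloud."""
            keys = np.floor(points / voxel_size).astype(int)
            out = {}
            for key in np.unique(keys, axis=0):
                mask = np.all(keys == key, axis=1)
                pts = points[mask]
                if pts.shape[0] < min_points:
                    continue
                centered = pts - pts.mean(axis=0)
                w, v = np.linalg.eigh(centered.T @ centered)
                out[tuple(key)] = v[:, np.argmax(w)]   # eigenvector of the largest eigenvalue
            return out

        # evs = voxel_significant_eigenvectors(cloud)  # cloud: hypothetical (N, 3) array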

  16. Novel Three-Dimensional Image Fusion Software to Facilitate Guidance of Complex Cardiac Catheterization: 3D image fusion for interventions in CHD.

    PubMed

    Goreczny, Sebastian; Dryzek, Pawel; Morgan, Gareth J; Lukaszewski, Maciej; Moll, Jadwiga A; Moszura, Tomasz

    2017-08-01

    We report initial experience with novel three-dimensional (3D) image fusion software for guidance of transcatheter interventions in congenital heart disease. Developments in fusion imaging have facilitated the integration of 3D roadmaps from computed tomography or magnetic resonance imaging datasets. The latest software allows live fusion of two-dimensional (2D) fluoroscopy with pre-registered 3D roadmaps. We reviewed all cardiac catheterizations guided with this software (Philips VesselNavigator). Pre-catheterization imaging and catheterization data were collected focusing on fusion of the 3D roadmap, intervention guidance, and contrast and radiation exposure. From 09/2015 until 06/2016, VesselNavigator was applied in 34 patients for guidance (n = 28) or planning (n = 6) of cardiac catheterization. In all 28 patients successful 2D-3D registration was performed. Bony structures combined with the cardiovascular silhouette were used for fusion in 26 patients (93%), calcifications in 9 (32%), previously implanted devices in 8 (29%) and low-volume contrast injection in 7 patients (25%). Accurate initial 3D roadmap alignment was achieved in 25 patients (89%). Six patients (22%) required realignment during the procedure due to distortion of the anatomy after introduction of stiff equipment. Overall, VesselNavigator was applied successfully in 27 patients (96%) without any complications related to 3D image overlay. VesselNavigator was useful in the guidance of nearly all cardiac catheterizations. The combination of anatomical markers and low-volume contrast injections allowed reliable 2D-3D registration in the vast majority of patients.

  17. Sockeye: A 3D Environment for Comparative Genomics

    PubMed Central

    Montgomery, Stephen B.; Astakhova, Tamara; Bilenky, Mikhail; Birney, Ewan; Fu, Tony; Hassel, Maik; Melsopp, Craig; Rak, Marcin; Robertson, A. Gordon; Sleumer, Monica; Siddiqui, Asim S.; Jones, Steven J.M.

    2004-01-01

    Comparative genomics techniques are used in bioinformatics analyses to identify the structural and functional properties of DNA sequences. As the amount of available sequence data steadily increases, the ability to perform large-scale comparative analyses has become increasingly relevant. In addition, the growing complexity of genomic feature annotation means that new approaches to genomic visualization need to be explored. We have developed a Java-based application called Sockeye that uses three-dimensional (3D) graphics technology to facilitate the visualization of annotation and conservation across multiple sequences. This software uses the Ensembl database project to import sequence and annotation information from several eukaryotic species. A user can additionally import their own custom sequence and annotation data. Individual annotation objects are displayed in Sockeye by using custom 3D models. Ensembl-derived and imported sequences can be analyzed by using a suite of multiple and pair-wise alignment algorithms. The results of these comparative analyses are also displayed in the 3D environment of Sockeye. By using the Java3D API to visualize genomic data in a 3D environment, we are able to compactly display cross-sequence comparisons. This provides the user with a novel platform for visualizing and comparing genomic feature organization. PMID:15123592

  18. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  19. Human lung fibroblast-derived matrix facilitates vascular morphogenesis in 3D environment and enhances skin wound healing.

    PubMed

    Du, Ping; Suhaeri, Muhammad; Ha, Sang Su; Oh, Seung Ja; Kim, Sang-Heon; Park, Kwideok

    2017-05-01

    challenging due to the difficulty of recapitulating the complex angiogenic extracellular matrix (ECM) environment. Herein, we present a simple and practical method to create an angiogenic 3D environment via incorporation of human lung fibroblast-derived matrix (hFDM) into collagen hydrogel. We found that hFDM offers a significantly improved angiogenic microenvironment for HUVECs on 2D substrates and in 3D constructs. A synergistic effect of hFDM and angiogenic growth factors has been well confirmed in the 3D condition. The prevascularized 3D collagen constructs also facilitate skin wound healing. We believe that the current system provides a convenient and powerful platform for engineering 3D vasculature in vitro and for delivering cells for therapeutic purposes in vivo. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  20. New neural-networks-based 3D object recognition system

    NASA Astrophysics Data System (ADS)

    Abolmaesumi, Purang; Jahed, M.

    1997-09-01

    Three-dimensional object recognition has always been one of the challenging fields in computer vision. Ullman and Basri (1991) proposed that this task can be accomplished using a database of 2-D views of the objects. The main problem with their proposed system is that corresponding points must be known to interpolate the views. In addition, their system requires a supervisor to decide which class the presented view belongs to. In this paper, we propose a new momentum-Fourier descriptor that is invariant to scale, translation, and rotation. This descriptor provides the input feature vectors to our proposed system. Using the Dystal network, we show that objects can be classified with over 95% precision. We have used this system to classify objects such as cubes, cones, spheres, tori, and cylinders. Because of the nature of the Dystal network, the system reaches its stable point after a single presentation of the view. The system can also group similar views into a single class (e.g., for the cube, the system generated 9 different classes for 50 different input views), which can be used to select an optimum database of training views. The system is also robust to noise and deformed views.
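
    For orientation, the snippet below shows a classical Fourier descriptor of a closed 2-D contour that is invariant to translation, scale and rotation; the paper's momentum-Fourier descriptor is a different construction, so this is only an illustration of how such invariances are typically obtained.

        # Classical Fourier descriptor of a closed contour: drop the DC term
        # (translation invariance), normalize by the first coefficient magnitude
        # (scale invariance), and keep magnitudes (rotation/start-point invariance).
        import numpy as np

        def fourier_descriptor(contour: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
            """contour: (N, 2) array of boundary points, ordered along the contour."""
            z = contour[:, 0] + 1j * contour[:, 1]     # complex representation
            Z = np.fft.fft(z)
            Z[0] = 0.0                                 # translation invariance
            Z = Z / (np.abs(Z[1]) + 1e-12)             # scale invariance
            return np.abs(Z)[1:n_coeffs + 1]           # rotation/start-point invariance

        # Example: a circle and a scaled, rotated, shifted copy give nearly equal descriptors
        t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
        circle = np.c_[np.cos(t), np.sin(t)]
        moved = 3.0 * circle @ np.array([[0.0, -1.0], [1.0, 0.0]]) + np.array([5.0, -2.0])
        print(np.allclose(fourier_descriptor(circle), fourier_descriptor(moved), atol=1e-6))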

  1. Template-Based 3D Reconstruction of Non-rigid Deformable Object from Monocular Video

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Peng, Xiaodong; Zhou, Wugen; Liu, Bo; Gerndt, Andreas

    2018-06-01

    In this paper, we propose a template-based 3D surface reconstruction system for non-rigid deformable objects from a monocular video sequence. First, we generate a semi-dense template of the target object with a structure-from-motion method using a video subsequence. This video can be captured by a rigidly moving camera observing the static target object, or by a static camera observing the rigidly moving target object. Then, with the reference template mesh as input and based on the framework of classical template-based methods, we solve an energy minimization problem to obtain the correspondence between the template and every frame, yielding a time-varying mesh that represents the deformation of the object. The energy combines a photometric cost, temporal and spatial smoothness costs, and an as-rigid-as-possible cost that enables elastic deformation. An easy and controllable solution to generate the semi-dense template for complex objects is presented. In addition, we use an effective iterative Schur-based linear solver for the energy minimization problem. The experimental evaluation presents qualitative reconstruction results for deforming objects on real sequences. Compared against results obtained with other templates as input, reconstructions based on our template are more accurate and detailed in certain regions. The experimental results also show that the linear solver we use is more efficient than a traditional conjugate-gradient-based solver.
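
    The kind of energy minimized per frame can be written schematically as a weighted sum of the four costs named above. In the sketch below the individual terms are simple placeholders (the rigidity term, in particular, is only an isometry-style stand-in for a true as-rigid-as-possible cost) and the weights are hypothetical.

        # Schematic composition of a per-frame deformation energy: photometric +
        # temporal smoothness + spatial smoothness + rigidity, with made-up weights.
        import numpy as np

        def photometric(frame_I, template_I):
            # placeholder: squared difference of per-vertex intensities
            return np.sum((frame_I - template_I) ** 2)

        def temporal_smoothness(verts, verts_prev):
            return np.sum((verts - verts_prev) ** 2)

        def spatial_smoothness(verts, edges):
            i, j = edges[:, 0], edges[:, 1]
            return np.sum((verts[i] - verts[j]) ** 2)

        def rigidity(verts, verts_template, edges):
            # simplified stand-in for ARAP: penalize change of edge lengths vs. the template
            i, j = edges[:, 0], edges[:, 1]
            d_now = np.linalg.norm(verts[i] - verts[j], axis=1)
            d_ref = np.linalg.norm(verts_template[i] - verts_template[j], axis=1)
            return np.sum((d_now - d_ref) ** 2)

        def total_energy(verts, verts_prev, verts_template, edges, frame_I, template_I,
                         weights=(1.0, 0.1, 0.1, 1.0)):
            w_p, w_t, w_s, w_r = weights
            return (w_p * photometric(frame_I, template_I)
                    + w_t * temporal_smoothness(verts, verts_prev)
                    + w_s * spatial_smoothness(verts, edges)
                    + w_r * rigidity(verts, verts_template, edges))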

  2. Reference Frames and 3-D Shape Perception of Pictured Objects: On Verticality and Viewpoint-From-Above

    PubMed Central

    van Doorn, Andrea J.; Wagemans, Johan

    2016-01-01

    Research on the influence of reference frames has generally focused on visual phenomena such as the oblique effect, the subjective visual vertical, the perceptual upright, and ambiguous figures. Another line of research concerns mental rotation studies in which participants had to discriminate between familiar or previously seen 2-D figures or pictures of 3-D objects and their rotated versions. In the present study, we disentangled the influence of the environmental and the viewer-centered reference frame, as classically done, by comparing the performances obtained in various picture and participant orientations. However, this time, the performance is the pictorial relief: the probed 3-D shape percept of the depicted object reconstructed from the local attitude settings of the participant. Comparisons between the pictorial reliefs based on different picture and participant orientations led to two major findings. First, in general, the pictorial reliefs were highly similar if the orientation of the depicted object was vertical with regard to the environmental or the viewer-centered reference frame. Second, a viewpoint-from-above interpretation could almost completely account for the shears occurring between the pictorial reliefs. More specifically, the shears could largely be considered as combinations of slants generated from the viewpoint-from-above, which was determined by the environmental as well as by the viewer-centered reference frame. PMID:27433329

  3. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria - a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; (3) and 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This comparative study

  4. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.

  5. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  6. GeoGebra 3D from the Perspectives of Elementary Pre-Service Mathematics Teachers Who Are Familiar with a Number of Software Programs

    ERIC Educational Resources Information Center

    Baltaci, Serdal; Yildiz, Avni

    2015-01-01

    Each new version of the GeoGebra dynamic mathematics software brings updates and innovations. One of these innovations is the GeoGebra 5.0 release. This version aims to facilitate 3D instruction by offering opportunities for students to analyze 3D objects. A review of previous studies of GeoGebra 3D shows that they mainly focus…

  7. 3D Imaging for Museum Artefacts: a Portable Test Object for Heritage and Museum Documentation of Small Objects

    NASA Astrophysics Data System (ADS)

    Hess, M.; Robson, S.

    2012-07-01

    3D colour image data generated for the recording of small museum objects and archaeological finds are highly variable in quality and fitness for purpose. Whilst current technology is capable of extremely high quality outputs, there are currently no common standards or applicable guidelines in either the museum or engineering domain suited to scientific evaluation, understanding and tendering for 3D colour digital data. This paper firstly explains the rationale for and requirements of 3D digital documentation in museums. Secondly it describes the design process, development and use of a new portable test object suited to sensor evaluation and the provision of user acceptance metrics. The test object is specifically designed for museums and heritage institutions and includes known surface and geometric properties which support quantitative and comparative imaging on different systems. The development of a supporting protocol will allow object reference data to be included in the data processing workflow with specific reference to conservation and curation.

  8. Real time 3D scanner: investigations and results

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik; Pflug, Leopold

    1993-12-01

    This article presents a concept for the reconstruction of 3-D objects using non-invasive, touchless techniques. The principle of the method is to project parallel interference fringes onto an object and then to record the object from two viewing angles. With appropriate processing, the 3-D object is reconstructed even when it has no plane of symmetry. The 3-D surface data are available immediately in digital form for computer visualization and for analysis software tools. The optical set-up for recording the 3-D object, the 3-D data extraction and processing, as well as the reconstruction of the 3-D object, are reported and commented on. This application is intended for reconstructive/cosmetic surgery, CAD, animation and research purposes.

  9. Teaching or Facilitating Learning? Selecting the Optimal Approach for Your Educational Objectives and Audience

    ERIC Educational Resources Information Center

    Wise, Dena

    2017-01-01

    Both teaching and facilitation are effective instructional techniques, but each is appropriate for unique educational objectives and scenarios. This article briefly distinguishes between teaching and facilitative techniques and provides guidelines for choosing the better method for a particular educational scenario.

  10. Full-parallax 3D display from stereo-hybrid 3D camera system

    NASA Astrophysics Data System (ADS)

    Hong, Seokmin; Ansari, Amir; Saavedra, Genaro; Martinez-Corral, Manuel

    2018-04-01

    In this paper, we propose an innovative approach for the production of microimages ready for display on an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system for picking up a 3D data pair and composing a denser point cloud. There is, however, an intrinsic difficulty in the fact that the hybrid sensors have dissimilarities and therefore should be equalized. The processed data facilitate generating an integral image by computationally projecting the information through a virtual pinhole array. We illustrate this procedure with imaging experiments that provide microimages with enhanced quality. After projection of such microimages onto the integral-imaging monitor, 3D images are produced with large parallax and viewing angle.

  11. Objective Assessment and Design Improvement of a Staring, Sparse Transducer Array by the Spatial Crosstalk Matrix for 3D Photoacoustic Tomography

    PubMed Central

    Kosik, Ivan; Raess, Avery

    2015-01-01

    Accurate reconstruction of 3D photoacoustic (PA) images requires detection of photoacoustic signals from many angles. Several groups have adopted staring ultrasound arrays, but assessment of array performance has been limited. We previously reported on a method to calibrate a 3D PA tomography (PAT) staring array system and analyze system performance using singular value decomposition (SVD). The developed SVD metric, however, was impractical for large system matrices, which are typical of 3D PAT problems. The present study consisted of two main objectives. The first objective aimed to introduce the crosstalk matrix concept to the field of PAT for system design. Figures-of-merit utilized in this study were root mean square error, peak signal-to-noise ratio, mean absolute error, and a three dimensional structural similarity index, which were derived between the normalized spatial crosstalk matrix and the identity matrix. The applicability of this approach for 3D PAT was validated by observing the response of the figures-of-merit in relation to well-understood PAT sampling characteristics (i.e. spatial and temporal sampling rate). The second objective aimed to utilize the figures-of-merit to characterize and improve the performance of a near-spherical staring array design. Transducer arrangement, array radius, and array angular coverage were the design parameters examined. We observed that the performance of a 129-element staring transducer array for 3D PAT could be improved by selection of optimal values of the design parameters. The results suggested that this formulation could be used to objectively characterize 3D PAT system performance and would enable the development of efficient strategies for system design optimization. PMID:25875177
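
    The scalar figures-of-merit mentioned above can be sketched as below, computed between a normalized crosstalk matrix and the identity. The random matrix is only a stand-in for an actual system crosstalk matrix, and the 3D structural similarity index is omitted.

        # Minimal sketch of RMSE, PSNR and MAE between a normalized crosstalk matrix
        # and the identity; the crosstalk matrix here is random, for illustration only.
        import numpy as np

        def crosstalk_figures_of_merit(C: np.ndarray):
            C = C / (np.abs(C).max() + 1e-12)                    # normalize to unit peak
            err = C - np.eye(C.shape[0])
            mae = np.mean(np.abs(err))
            rmse = np.sqrt(np.mean(err ** 2))
            psnr = 10.0 * np.log10(1.0 / (rmse ** 2 + 1e-12))    # peak value is 1 after normalization
            return {"MAE": mae, "RMSE": rmse, "PSNR_dB": psnr}

        rng = np.random.default_rng(1)
        C = np.eye(64) + 0.05 * rng.normal(size=(64, 64))        # stand-in crosstalk matrix
        print(crosstalk_figures_of_merit(C))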

  12. 2D and 3D X-ray phase retrieval of multi-material objects using a single defocus distance.

    PubMed

    Beltran, M A; Paganin, D M; Uesugi, K; Kitchen, M J

    2010-03-29

    A method of tomographic phase retrieval is developed for multi-material objects whose components each has a distinct complex refractive index. The phase-retrieval algorithm, based on the Transport-of-Intensity equation, utilizes propagation-based X-ray phase contrast images acquired at a single defocus distance for each tomographic projection. The method requires a priori knowledge of the complex refractive index for each material present in the sample, together with the total projected thickness of the object at each orientation. The requirement of only a single defocus distance per projection simplifies the experimental setup and imposes no additional dose compared to conventional tomography. The algorithm was implemented using phase contrast data acquired at the SPring-8 Synchrotron facility in Japan. The three-dimensional (3D) complex refractive index distribution of a multi-material test object was quantitatively reconstructed using a single X-ray phase-contrast image per projection. The technique is robust in the presence of noise, compared to conventional absorption based tomography.
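
    For context, the snippet below sketches the well-known single-distance, single-material TIE (Paganin-type) phase retrieval that the multi-material method above generalizes. It is not the authors' algorithm, and the values of delta, mu, the propagation distance and the pixel size are illustrative assumptions.

        # Single-distance, single-material TIE (Paganin-type) phase retrieval:
        # low-pass filter the normalized image in Fourier space, then take a log
        # to obtain the projected thickness of a single homogeneous material.
        import numpy as np

        def paganin_thickness(I, I0, pixel, dist, delta, mu):
            """Recover projected thickness from one propagation-based image I (flat field I0)."""
            ny, nx = I.shape
            kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pixel)
            ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=pixel)
            k2 = kx[None, :] ** 2 + ky[:, None] ** 2
            filt = 1.0 / (1.0 + dist * delta * k2 / mu)
            smoothed = np.real(np.fft.ifft2(np.fft.fft2(I / I0) * filt))
            return -np.log(np.clip(smoothed, 1e-12, None)) / mu   # projected thickness

        # Example call with hypothetical parameters (SI units):
        # thickness = paganin_thickness(img, flat, pixel=1e-6, dist=1.0, delta=1e-7, mu=50.0)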

  13. Automatic segmentation of low-visibility moving objects through energy analysis of the local 3D spectrum

    NASA Astrophysics Data System (ADS)

    Nestares, Oscar; Miravet, Carlos; Santamaria, Javier; Fonolla Navarro, Rafael

    1999-05-01

    Automatic object segmentation in highly noisy image sequences, composed of a translating object over a background having a different motion, is achieved through joint motion-texture analysis. Local motion and/or texture is characterized by the energy of the local spatio-temporal spectrum, as different textures undergoing different translational motions display distinctive features in their 3D (x,y,t) spectra. Measurements of local spectrum energy are obtained using a bank of directional 3rd-order Gaussian derivative filters in a multiresolution pyramid in space-time (10 directions, 3 resolution levels). These 30 energy measurements form a feature vector describing texture-motion for every pixel in the sequence. To improve discrimination capability and reduce computational cost, we automatically select the 4 features (channels) that best discriminate object from background, under the assumptions that the object is smaller than the background and has a different velocity or texture. In this way we reject features that are irrelevant or dominated by noise and could yield wrong segmentation results. This method has been successfully applied to sequences with extremely low visibility and to objects that are invisible to the eye in the absence of motion.
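
    A much-simplified sketch of measuring local spatio-temporal energy with Gaussian derivative filters is given below; it computes only axis-aligned third-order derivatives at a few scales with SciPy, standing in for the 10-direction, 3-level directional filter bank of the paper, and the input sequence noisy_sequence is hypothetical.

        # Simplified local spatio-temporal energy features for a (t, y, x) sequence:
        # third-order Gaussian derivatives along each axis at several scales,
        # squared and locally smoothed to give a per-pixel energy measurement.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def spatiotemporal_energy(seq: np.ndarray, sigmas=(1.0, 2.0, 4.0)) -> np.ndarray:
            """Return per-pixel feature vectors of local derivative energies, shape (t, y, x, F)."""
            feats = []
            for s in sigmas:
                for order in ((3, 0, 0), (0, 3, 0), (0, 0, 3)):    # d^3/dt^3, d^3/dy^3, d^3/dx^3
                    resp = gaussian_filter(seq, sigma=s, order=order)
                    feats.append(gaussian_filter(resp ** 2, sigma=s))  # local energy
            return np.stack(feats, axis=-1)

        # energies = spatiotemporal_energy(noisy_sequence)  # noisy_sequence: hypothetical array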

  14. Intracardiac echo-facilitated 3D electroanatomical mapping of ventricular arrhythmias from the papillary muscles: assessing the 'fourth dimension' during ablation.

    PubMed

    Proietti, Riccardo; Rivera, Santiago; Dussault, Charles; Essebag, Vidal; Bernier, Martin L; Ayala-Paredes, Felix; Badra-Verdu, Mariano; Roux, Jean-François

    2017-01-01

    Ventricular arrhythmias (VA) originating from a papillary muscle (PM) have recently been described as a distinct clinical entity with peculiar features that make its treatment with catheter ablation challenging. Here, we report our experience using an intracardiac echo-facilitated 3D electroanatomical mapping approach in a case series of patients undergoing ablation for PM VA. Sixteen patients who underwent catheter ablation for ventricular tachycardia (VT) or symptomatic premature ventricular contractions originating from left ventricular PMs were included in the study. A total of 24 procedures (mean 1.5 per patient) were performed: 15 using a retrograde aortic approach and 9 using a transseptal approach. Integrated intracardiac ultrasound for 3D electroanatomical mapping was used in 15 of the 24 procedures. The posteromedial PM was the most frequent culprit for the clinical arrhythmia, and the body was the part of the PM most likely to be the successful site for ablation. The site of ablation was identified based on the best pace map matching the clinical arrhythmia and the site of earliest activation. At a mean follow-up of 10.5 ± 7 months, only two patients had recurrent arrhythmias following a repeat ablation procedure. An echo-facilitated 3D electroanatomical mapping allows for real-time creation of precise geometries of cardiac chambers and endocavitary structures. This is useful during procedures such as catheter ablation of VAs originating from PMs, which require detailed representation of anatomical landmarks. Routine adoption of this technique should be considered to improve outcomes of PM VA ablation. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.

  15. Object Tracking and Target Reacquisition Based on 3-D Range Data for Moving Vehicles

    PubMed Central

    Lee, Jehoon; Lankton, Shawn; Tannenbaum, Allen

    2013-01-01

    In this paper, we propose an approach for tracking an object of interest based on 3-D range data. We employ particle filtering and active contours to simultaneously estimate the global motion of the object and its local deformations. The proposed algorithm takes advantage of range information to deal with the challenging (but common) situation in which the tracked object disappears from the image domain entirely and reappears later. To cope with this problem, a method based on principal component analysis (PCA) of shape information is proposed. In the proposed method, if the target disappears out of frame, a shape similarity energy is used to detect target candidates that match a template shape learned online from previously observed frames. Thus, we require no a priori knowledge of the target's shape. Experimental results show the practical applicability and robustness of the proposed algorithm in realistic tracking scenarios. PMID:21486717
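
    The particle-filtering component alone can be sketched as a bootstrap (SIR) filter for a 2-D position state, as below. The coupling with active contours, range data and the PCA shape prior described in the abstract is not modeled, and the motion and measurement noise levels are made up.

        # Minimal bootstrap (SIR) particle filter tracking a 2-D position:
        # predict with a constant-velocity motion model plus noise, weight by a
        # Gaussian measurement likelihood, then resample systematically.
        import numpy as np

        rng = np.random.default_rng(0)
        n_particles, n_steps = 500, 40
        motion_std, meas_std = 0.3, 0.5
        velocity = np.array([0.4, 0.2])

        truth = np.zeros(2)
        particles = rng.normal(truth, 1.0, size=(n_particles, 2))

        for t in range(n_steps):
            truth = truth + velocity                                   # object moves with constant velocity
            z = truth + rng.normal(0.0, meas_std, size=2)              # noisy position measurement

            particles += velocity + rng.normal(0.0, motion_std, size=particles.shape)
            d2 = np.sum((particles - z) ** 2, axis=1)
            weights = np.exp(-0.5 * d2 / meas_std ** 2)
            weights /= weights.sum()
            positions = (np.arange(n_particles) + rng.random()) / n_particles
            idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n_particles - 1)
            particles = particles[idx]

        estimate = particles.mean(axis=0)
        print("final truth:", truth, "estimate:", estimate)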

  16. 3D Printing of Biomolecular Models for Research and Pedagogy

    PubMed Central

    Da Veiga Beltrame, Eduardo; Tyrwhitt-Drake, James; Roy, Ian; Shalaby, Raed; Suckale, Jakob; Pomeranz Krummel, Daniel

    2017-01-01

    The construction of physical three-dimensional (3D) models of biomolecules can uniquely contribute to the study of the structure-function relationship. 3D structures are most often perceived using the two-dimensional and exclusively visual medium of the computer screen. Converting digital 3D molecular data into real objects enables information to be perceived through an expanded range of human senses, including direct stereoscopic vision, touch, and interaction. Such tangible models facilitate new insights, enable hypothesis testing, and serve as psychological or sensory anchors for conceptual information about the functions of biomolecules. Recent advances in consumer 3D printing technology enable, for the first time, the cost-effective fabrication of high-quality and scientifically accurate models of biomolecules in a variety of molecular representations. However, the optimization of the virtual model and its printing parameters is difficult and time consuming without detailed guidance. Here, we provide a guide on the digital design and physical fabrication of biomolecule models for research and pedagogy using open source or low-cost software and low-cost 3D printers that use fused filament fabrication technology. PMID:28362403

  17. From Vesalius to Virtual Reality: How Embodied Cognition Facilitates the Visualization of Anatomy

    ERIC Educational Resources Information Center

    Jang, Susan

    2010-01-01

    This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and…

  18. Memory color of natural familiar objects: effects of surface texture and 3-D shape.

    PubMed

    Vurro, Milena; Ling, Yazhu; Hurlbert, Anya C

    2013-06-28

    Natural objects typically possess characteristic contours, chromatic surface textures, and three-dimensional shapes. These diagnostic features aid object recognition, as does memory color, the color most associated in memory with a particular object. Here we aim to determine whether polychromatic surface texture, 3-D shape, and contour diagnosticity improve memory color for familiar objects, separately and in combination. We use solid three-dimensional familiar objects rendered with their natural texture, which participants adjust in real time to match their memory color for the object. We analyze mean, accuracy, and precision of the memory color settings relative to the natural color of the objects under the same conditions. We find that in all conditions, memory colors deviate slightly but significantly in the same direction from the natural color. Surface polychromaticity, shape diagnosticity, and three dimensionality each improve memory color accuracy, relative to uniformly colored, generic, or two-dimensional shapes, respectively. Shape diagnosticity improves the precision of memory color also, and there is a trend for polychromaticity to do so as well. Differently from other studies, we find that the object contour alone also improves memory color. Thus, enhancing the naturalness of the stimulus, in terms of either surface or shape properties, enhances the accuracy and precision of memory color. The results support the hypothesis that memory color representations are polychromatic and are synergistically linked with diagnostic shape representations.

  19. Technical Note: Guidelines for the digital computation of 2D and 3D enamel thickness in hominoid teeth.

    PubMed

    Benazzi, Stefano; Panetta, Daniele; Fornai, Cinzia; Toussaint, Michel; Gruppioni, Giorgio; Hublin, Jean-Jacques

    2014-02-01

    The study of enamel thickness has received considerable attention in regard to the taxonomic, phylogenetic and dietary assessment of human and non-human primates. Recent developments based on two-dimensional (2D) and three-dimensional (3D) digital techniques have facilitated accurate analyses, preserving the original object from invasive procedures. Various digital protocols have been proposed. These include several procedures based on manual handling of the virtual models and technical shortcomings, which prevent other scholars from confidently reproducing the entire digital protocol. There is a compelling need for standard, reproducible, and well-tailored protocols for the digital analysis of 2D and 3D dental enamel thickness. In this contribution we provide essential guidelines for the digital computation of 2D and 3D enamel thickness in hominoid molars, premolars, canines and incisors. We modify previous techniques suggested for 2D analysis and we develop a new approach for 3D analysis that can also be applied to premolars and anterior teeth. For each tooth class, the cervical line should be considered as the fundamental morphological feature both to isolate the crown from the root (for 3D analysis) and to define the direction of the cross-sections (for 2D analysis). Copyright © 2013 Wiley Periodicals, Inc.
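
    For reference, the average and relative enamel-thickness indices commonly reported in this literature take the following general form (a summary of standard definitions for orientation, not a restatement of the authors' exact protocol):

```latex
% 2D indices, computed on a crown cross-section
\mathrm{AET}_{2D} = \frac{c}{e}, \qquad
\mathrm{RET}_{2D} = 100 \cdot \frac{\mathrm{AET}_{2D}}{\sqrt{b}}

% 3D indices, computed on the whole crown isolated at the cervical line
\mathrm{AET}_{3D} = \frac{V_e}{S_{EDJ}}, \qquad
\mathrm{RET}_{3D} = 100 \cdot \frac{\mathrm{AET}_{3D}}{\sqrt[3]{V_d}}
```

    Here c is the enamel cap area, e the length of the enamel-dentine junction (EDJ) in the section, b the coronal dentine (and pulp) area, V_e the enamel volume, S_EDJ the EDJ surface area, and V_d the coronal dentine (and pulp) volume; the square and cube roots make the relative indices scale-free.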

  20. [Possibility of 3D Printing in Ophthalmology - First Experiences by Stereotactic Radiosurgery Planning Scheme of Intraocular Tumor].

    PubMed

    Furdová, A; Furdová, Ad; Thurzo, A; Šramka, M; Chorvát, M; Králik, G

    Nowadays 3D printing allows us to create physical objects on the basis of digital data. Thanks to its rapid development, its use in medicine has increased enormously. Its creations facilitate surgical planning, education and research in the context of organ transplantation, individualized prostheses, breast forms, and other applications. Our article describes the wide range of possibilities of applied 3D printing technology in ophthalmology, focusing on its innovative implementation in the planning of stereotactic radiosurgery irradiation of eye tumors. We analyze our first experience with a 3D printed model of the eye in planning stereotactic radiosurgery of an intraocular tumor. Key words: 3D printing, model, Fused Deposition Modelling, stereotactic radiosurgery, prostheses, intraocular tumor.

  1. Objects of attention, objects of perception.

    PubMed

    Avrahami, J

    1999-11-01

    Four experiments were conducted, to explore the notion of objects in perception. Taking as a starting point the effects of display content on rapid attention transfer and manipulating curvature, closure, and processing time, a link between objects of attention and objects of perception is proposed. In Experiment 1, a number of parallel, equally spaced, straight lines facilitated attention transfer along the lines, relative to transfer across the lines. In Experiment 2, with curved, closed-contour shapes, no "same-object" facilitation was observed. However, when a longer time interval was provided, in Experiment 3, a same-object advantage started to emerge. In Experiment 4, using the same curved shapes but in a non-speeded distance estimation task, a strong effect of objects was observed. It is argued that attention transfer is facilitated by line tracing but that line tracing is encouraged by objects.

  2. 3D Reasoning from Blocks to Stability.

    PubMed

    Zhaoyin Jia; Gallagher, Andrew C; Saxena, Ashutosh; Chen, Tsuhan

    2015-05-01

    Objects occupy physical space and obey physical laws. To truly understand a scene, we must reason about the space that objects in it occupy, and how each object is stably supported by the others. In other words, we seek to understand which objects would, if moved, cause other objects to fall. This 3D volumetric reasoning is important for many scene understanding tasks, ranging from segmentation of objects to perception of a rich, physically well-founded 3D interpretation of the scene. In this paper, we propose a new algorithm to parse a single RGB-D image with 3D block units while jointly reasoning about the segments, volumes, supporting relationships, and object stability. Our algorithm is based on the intuition that a good 3D representation of the scene is one that fits the depth data well, and is a stable, self-supporting arrangement of objects (i.e., one that does not topple). We design an energy function for representing the quality of the block representation based on these properties. Our algorithm fits 3D blocks to the depth values corresponding to image segments, and iteratively optimizes the energy function. Our proposed algorithm is the first to consider stability of objects in complex arrangements for reasoning about the underlying structure of the scene. Experimental results show that our stability-reasoning framework improves RGB-D segmentation and scene volumetric representation.
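
    Schematically, the objective described above combines fidelity to the depth data with physical stability of the block arrangement; an illustrative (not verbatim) form is:

```latex
E(\mathcal{B}) \;=\; E_{\mathrm{depth}}(\mathcal{B}) \;+\; \lambda\, E_{\mathrm{stability}}(\mathcal{B})
```

    where \mathcal{B} is the set of fitted 3D blocks, E_depth penalizes blocks that fit the observed depth values poorly, E_stability penalizes arrangements that would topple, and the weight \lambda balances the two terms during the iterative optimization.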

  3. 3D printed pathological sectioning boxes to facilitate radiological-pathological correlation in hepatectomy cases.

    PubMed

    Trout, Andrew T; Batie, Matthew R; Gupta, Anita; Sheridan, Rachel M; Tiao, Gregory M; Towbin, Alexander J

    2017-11-01

    Radiogenomics promises to identify tumour imaging features indicative of genomic or proteomic aberrations that can be therapeutically targeted, allowing precision personalised therapy. An accurate radiological-pathological correlation is critical to the process of radiogenomic characterisation of tumours. An accurate correlation, however, is difficult to achieve with current pathological sectioning techniques, which result in sectioning in non-standard planes. The purpose of this work is to present a technique to standardise hepatic sectioning to facilitate radiological-pathological correlation. We describe a process in which three-dimensional (3D)-printed specimen boxes based on preoperative cross-sectional imaging (CT and MRI) can be used to facilitate pathological sectioning in standard planes immediately upon hepatic resection, enabling improved tumour mapping. We have applied this process in 13 patients undergoing hepatectomy and have observed close correlation between imaging and gross pathology in patients with both unifocal and multifocal tumours. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  4. Tracking initially unresolved thrusting objects in 3D using a single stationary optical sensor

    NASA Astrophysics Data System (ADS)

    Lu, Qin; Bar-Shalom, Yaakov; Willett, Peter; Granström, Karl; Ben-Dov, R.; Milgrom, B.

    2017-05-01

    This paper considers the problem of estimating the 3D states of a salvo of thrusting/ballistic endo-atmospheric objects using 2D Cartesian measurements from the focal plane array (FPA) of a single fixed optical sensor. Since the initial separations in the FPA are smaller than the resolution of the sensor, this results in merged measurements in the FPA, compounding the usual false-alarm and missed-detection uncertainty. We present a two-step methodology. First, we assume a Wiener process acceleration (WPA) model for the motion of the images of the projectiles in the optical sensor's FPA. We model the merged measurements with increased variance, and thence employ a multi-Bernoulli (MB) filter using the 2D measurements in the FPA. Second, using the set of associated measurements for each confirmed MB track, we formulate a parameter estimation problem, whose maximum likelihood estimate can be obtained via numerical search and can be used for impact point prediction. Simulation results illustrate the performance of the proposed method.
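
    The Wiener process acceleration (WPA) motion model named above has a standard discrete-time form; the sketch below writes it out for a single focal-plane coordinate purely as an illustration (the sampling period, noise level, and variable names are assumptions).

```python
import numpy as np

def wpa_model(T, q):
    """Discrete Wiener process acceleration model for one image coordinate.

    State x = [position, velocity, acceleration]^T in the focal plane array.
    T : sampling period, q : power spectral density of the white-noise jerk.
    (Standard discretization, shown here only to illustrate the motion model.)
    """
    F = np.array([[1.0, T,   0.5 * T**2],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    Q = q * np.array([[T**5 / 20, T**4 / 8, T**3 / 6],
                      [T**4 / 8,  T**3 / 3, T**2 / 2],
                      [T**3 / 6,  T**2 / 2, T]])
    H = np.array([[1.0, 0.0, 0.0]])   # only position is observed in the FPA
    return F, Q, H
```

    Per the abstract, merged measurements would then be handled by inflating the measurement noise covariance paired with H before the multi-Bernoulli filter processes the 2D detections.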

  5. 3D-shape of objects with straight line-motion by simultaneous projection of color coded patterns

    NASA Astrophysics Data System (ADS)

    Flores, Jorge L.; Ayubi, Gaston A.; Di Martino, J. Matías; Castillo, Oscar E.; Ferrari, Jose A.

    2018-05-01

    In this work, we propose a novel technique to retrieve the 3D shape of dynamic objects by the simultaneous projection of a fringe pattern and a homogeneous light pattern, which are both coded in two of the color channels of an RGB image. The fringe pattern (red channel) is used to retrieve the phase by phase-shift algorithms with an arbitrary phase step, while the homogeneous pattern (blue channel) is used to match pixels from the test object in consecutive images, which are acquired at different positions, and thus to determine the speed of the object. The proposed method successfully overcomes the standard requirement of projecting fringes of two different frequencies: one frequency to extract object information and the other to retrieve the phase. Validation experiments are presented.
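
    One common way to recover the wrapped phase from the red-channel fringe images when the phase steps are known but non-uniform is a generalized least-squares phase-shifting solution, sketched below. This is a generic illustration under assumed variable names, not the authors' specific algorithm.

```python
import numpy as np

def wrapped_phase(frames, deltas):
    """Generalized least-squares phase-shifting.

    frames : (K, H, W) red-channel fringe images I_k = A + B*cos(phi + delta_k)
    deltas : (K,) known (possibly non-uniform) phase steps in radians
    Returns the wrapped phase phi per pixel.
    """
    K, H, W = frames.shape
    # Linear model I_k = a0 + a1*cos(dk) + a2*sin(dk),
    # with a1 = B*cos(phi) and a2 = -B*sin(phi).
    M = np.column_stack([np.ones(K), np.cos(deltas), np.sin(deltas)])  # (K, 3)
    I = frames.reshape(K, -1)                                          # (K, H*W)
    coeffs, *_ = np.linalg.lstsq(M, I, rcond=None)                     # (3, H*W)
    phi = np.arctan2(-coeffs[2], coeffs[1])
    return phi.reshape(H, W)
```

    The blue-channel frames, by contrast, would only be used to match pixels across consecutive acquisitions and estimate the object's speed.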

  6. p3d--Python module for structural bioinformatics.

    PubMed

    Fufezan, Christian; Specht, Michael

    2009-08-21

    High-throughput bioinformatic analysis tools are needed to mine the large amount of structural data via knowledge-based approaches. The development of such tools requires a robust interface to access the structural data in an easy way. For this, the Python scripting language is an optimal choice, since its philosophy emphasizes understandable source code. p3d is an object-oriented Python module that adds a simple yet powerful interface to the Python interpreter to process and analyse three-dimensional protein structure files (PDB files). p3d's strength arises from the combination of a) very fast spatial access to the structural data due to the implementation of a binary space partitioning (BSP) tree, b) set theory and c) functions that combine a) and b) and use human-readable language in the search queries rather than complex computer language. All these factors combined facilitate the rapid development of bioinformatic tools that can perform quick and complex analyses of protein structures. p3d is the perfect tool to quickly develop tools for structural bioinformatics using the Python scripting language.
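
    The kind of spatial-plus-set query the abstract refers to can be illustrated with a generic k-d tree; the snippet below is an analogy using SciPy, not p3d's actual API or class names, and the atom table is invented.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical atom table: coordinates plus a label per atom.
coords = np.array([[12.1, 3.4, 7.8],
                   [11.9, 4.0, 8.1],
                   [25.0, 30.2, 5.5]])
labels = ["CA LYS 10", "CB LYS 10", "OH2 HOH 501"]

tree = cKDTree(coords)

# "All atoms within 2.0 A of the first alpha carbon" -- the kind of
# spatial query that is then combined with set operations on selections.
idx = tree.query_ball_point(coords[0], r=2.0)
neighbours = {labels[i] for i in idx} - {labels[0]}
print(neighbours)
```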

  7. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how a user gazes in 3D directions in virtual space is currently an important research topic. In this paper, we report a study on developing a novel 3D gaze tracking system for Nvidia 3D Vision(®) for use with a desktop stereoscopic display. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
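
    A typical geometric estimate of the 3D gaze point intersects the two eye rays in a least-squares sense, taking the midpoint of their closest approach. The sketch below shows that construction; it illustrates the general idea only and is not the optimized method proposed in the paper.

```python
import numpy as np

def gaze_point_3d(p_left, d_left, p_right, d_right):
    """Midpoint of closest approach between the left and right gaze rays.

    p_* : 3D eye positions, d_* : gaze directions (need not be unit length).
    """
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w0, d_right @ w0
    denom = a * c - b * b                 # approaches 0 for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    closest_left = p_left + s * d_left
    closest_right = p_right + t * d_right
    return 0.5 * (closest_left + closest_right)
```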

  8. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  9. 32 CFR 237a.3 - Objective and policy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 2 2011-07-01 2011-07-01 false Objective and policy. 237a.3 Section 237a.3...) MISCELLANEOUS PUBLIC AFFAIRS LIAISON WITH INDUSTRY § 237a.3 Objective and policy. (a) It is important that... subchapter, DoD components shall cooperate with industry at local and regional levels. However, they will...

  10. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    PubMed

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently, modern imaging techniques such as 3D surface scanning and radiological methods (computed tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expert reports are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya') and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image.

  11. The NIH 3D Print Exchange: A Public Resource for Bioscientific and Biomedical 3D Prints.

    PubMed

    Coakley, Meghan F; Hurt, Darrell E; Weber, Nick; Mtingwa, Makazi; Fincher, Erin C; Alekseyev, Vsevelod; Chen, David T; Yun, Alvin; Gizaw, Metasebia; Swan, Jeremy; Yoo, Terry S; Huyen, Yentram

    2014-09-01

    The National Institutes of Health (NIH) has launched the NIH 3D Print Exchange, an online portal for discovering and creating bioscientifically relevant 3D models suitable for 3D printing, to provide both researchers and educators with a trusted source to discover accurate and informative models. There are a number of online resources for 3D prints, but there is a paucity of scientific models, and the expertise required to generate and validate such models remains a barrier. The NIH 3D Print Exchange fills this gap by providing novel, web-based tools that empower users with the ability to create ready-to-print 3D files from molecular structure data, microscopy image stacks, and computed tomography scan data. The NIH 3D Print Exchange facilitates open data sharing in a community-driven environment, and also includes various interactive features, as well as information and tutorials on 3D modeling software. As the first government-sponsored website dedicated to 3D printing, the NIH 3D Print Exchange is an important step forward to bringing 3D printing to the mainstream for scientific research and education.

  12. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    PubMed

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.

  13. What is 3D good for? A review of human performance on stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems, including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only where necessary to ensure good performance.

  14. A low-cost microwell device for high-resolution imaging of neurite outgrowth in 3D

    NASA Astrophysics Data System (ADS)

    Ren, Yuan; Mlodzianoski, Michael J.; Cheun Lee, Aih; Huang, Fang; Suter, Daniel M.

    2018-06-01

    Objective. Current neuronal cell culture is mostly performed on two-dimensional (2D) surfaces, which lack many of the important features of the native environment of neurons, including topographical cues, deformable extracellular matrix, and spatial isotropy or anisotropy in three dimensions. Although three-dimensional (3D) cell culture systems provide a more physiologically relevant environment than 2D systems, their popularity is greatly hampered by the lack of easy-to-make-and-use devices. We aim to develop a widely applicable 3D culture procedure to facilitate the transition of neuronal cultures from 2D to 3D. Approach. We made a simple microwell device for 3D neuronal cell culture that is inexpensive, easy to assemble, and fully compatible with commonly used imaging techniques, including super-resolution microscopy. Main results. We developed a novel gel mixture to support 3D neurite regeneration of Aplysia bag cell neurons, a system that has been extensively used for quantitative analysis of growth cone dynamics in 2D. We found that the morphology and growth pattern of bag cell growth cones in 3D culture closely resemble the ones of growth cones observed in vivo. We demonstrated the capability of our device for high-resolution imaging of cytoskeletal and signaling proteins as well as organelles. Significance. Neuronal cell culture has been a valuable tool for neuroscientists to study the behavior of neurons in a controlled environment. Compared to 2D, neurons cultured in 3D retain the majority of their native characteristics, while offering higher accessibility, control, and repeatability. We expect that our microwell device will facilitate a wider adoption of 3D neuronal cultures to study the mechanisms of neurite regeneration.

  15. Facilitating Identification of Poorly Preserved Marine Microfossils through 3D Printing

    NASA Astrophysics Data System (ADS)

    Christensen, R. V.; Robinson, M. M.; Sessa, J.

    2016-12-01

    The Paleocene-Eocene Thermal Maximum (PETM) was a period of sudden and intense global warming that occurred approximately 56 million years ago and is widely considered a possible analogue for future climatic changes. Marine microfossils are important proxies used in the reconstruction of PETM paleoenvironments and paleoclimate. The correct species-level identification of foraminifera and pteropod specimens is necessary to understand ocean temperature, chemistry, nutrient availability, and ecosystem structure during this hyperthermal event. During periods of extreme or rapid environmental perturbations, foraminifera can be poorly preserved. Pteropod identification is equally challenging, as aragonitic shells are vulnerable to changing ocean acidity and often only internal molds are left to be identified. The macroscopic rendering of the internal and external test morphology of marine microfossils via 3D printing allows for a more experiential species-recognition education, especially of difficult-to-identify specimens. A selected microfossil specimen is scanned using computerized tomography (CT), creating x-ray slices of the specimen that are then processed into a digital model. The digitized fossil can then be analyzed using 3D software and subsequently printed using a wide variety of materials. The magnified model can be easily manipulated in a student's hand, and thus can be studied in a more visible and tactile way than traditional methods allow. This invaluable teaching tool physically manifests what was previously limited to textbook images and illustrations or the view field of a microscope. We show the step-by-step 3-D printing process of several PETM marine microfossil specimens from CT scans and demonstrate their advantage over 2-D SEM images for learning to identify microfossils to the species level. In addition, we provide samples to demonstrate the utility of 3-D models in identifying poorly preserved foraminifer specimens and species of pteropods from internal molds.

  16. Acquisition and Neural Network Prediction of 3D Deformable Object Shape Using a Kinect and a Force-Torque Sensor.

    PubMed

    Tawbe, Bilal; Cretu, Ana-Maria

    2017-05-11

    The realistic representation of deformations is still an active area of research, especially for deformable objects whose behavior cannot be simply described in terms of elasticity parameters. This paper proposes a data-driven neural-network-based approach for capturing implicitly and predicting the deformations of an object subject to external forces. Visual data, in the form of 3D point clouds gathered by a Kinect sensor, is collected over an object while forces are exerted by means of the probing tip of a force-torque sensor. A novel approach based on neural gas fitting is proposed to describe the particularities of a deformation over the selectively simplified 3D surface of the object, without requiring knowledge of the object material. An alignment procedure, a distance-based clustering, and inspiration from stratified sampling support this process. The resulting representation is denser in the region of the deformation (an average of 96.6% perceptual similarity with the collected data in the deformed area), while still preserving the object's overall shape (86% similarity over the entire surface) and using, on average, only 40% of the vertices in the mesh. A series of feedforward neural networks is then trained to predict the mapping between the force parameters characterizing the interaction with the object and the change in the object shape, as captured by the fitted neural gas nodes. This series of networks allows for the prediction of the deformation of an object when subject to unknown interactions.
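
    The last stage described above is a regression from force parameters to the displacements of the fitted neural gas nodes. A minimal NumPy sketch of such a feedforward regressor follows; the layer sizes, input encoding, and training loop are assumptions for illustration, not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden, n_out):
    """One-hidden-layer regressor: force parameters -> node displacements."""
    return {
        "W1": rng.normal(0, 0.1, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.1, (n_hidden, n_out)), "b2": np.zeros(n_out),
    }

def forward(params, X):
    H = np.tanh(X @ params["W1"] + params["b1"])
    return H, H @ params["W2"] + params["b2"]

def train(params, X, Y, lr=1e-2, epochs=2000):
    """Plain batch gradient descent on the mean squared error."""
    for _ in range(epochs):
        H, pred = forward(params, X)
        err = pred - Y                              # (N, n_out)
        grad_W2 = H.T @ err / len(X)
        grad_b2 = err.mean(axis=0)
        dH = (err @ params["W2"].T) * (1 - H**2)    # tanh derivative
        grad_W1 = X.T @ dH / len(X)
        grad_b1 = dH.mean(axis=0)
        params["W1"] -= lr * grad_W1; params["b1"] -= lr * grad_b1
        params["W2"] -= lr * grad_W2; params["b2"] -= lr * grad_b2
    return params

# X: (N, 4), e.g. [force magnitude, fx, fy, fz]; Y: (N, 3*n_nodes) node offsets.
```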

  17. Physical security and cyber security issues and human error prevention for 3D printed objects: detecting the use of an incorrect printing material

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2017-06-01

    A wide variety of characteristics of 3D printed objects have been linked to impaired structural integrity and use-efficacy. The printing material can also have a significant impact on the quality, utility and safety characteristics of a 3D printed object. Material issues can be created by vendor issues, physical security issues and human error. This paper presents and evaluates a system that can be used to detect incorrect material use in a 3D printer, using visible light imaging. Specifically, it assesses the ability to ascertain the difference between materials of different color and different types of material with similar coloration.
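
    The core check, whether the loaded material matches what the job expects, can be approximated by comparing the mean colour of an imaged filament region against calibrated reference swatches. The sketch below is hypothetical: the swatch values, region coordinates, and tolerance are invented for illustration and do not describe the evaluated system.

```python
import numpy as np

# Hypothetical reference swatches measured under the printer's lighting (RGB).
REFERENCE = {"white_abs": np.array([232, 230, 225]),
             "red_pla":   np.array([180, 40, 35]),
             "black_abs": np.array([25, 24, 26])}

def check_material(frame, roi, expected, tol=40.0):
    """Compare the mean colour in a region of interest to the expected swatch.

    frame : (H, W, 3) RGB image of the extruded material, roi : (y0, y1, x0, x1).
    Returns (ok, best_match) so a controller could pause the job on a mismatch.
    """
    y0, y1, x0, x1 = roi
    mean_rgb = frame[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    distances = {name: float(np.linalg.norm(mean_rgb - ref))
                 for name, ref in REFERENCE.items()}
    best = min(distances, key=distances.get)
    return (best == expected and distances[best] < tol), best
```

    As the abstract notes, similarly coloured materials of different type would defeat a purely colour-based check, which is why the evaluated system also assesses that harder case.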

  18. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution

    DOE PAGES

    Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; ...

    2016-01-01

    Here, we have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.

  19. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution

    PubMed Central

    Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.

    2016-01-01

    We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet. PMID:27375939

  20. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGES

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; ...

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  1. RAG-3D: a search tool for RNA 3D substructures

    PubMed Central

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-01-01

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547

  2. RAG-3D: A search tool for RNA 3D substructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  3. 3D Filament Network Segmentation with Multiple Active Contours

    NASA Astrophysics Data System (ADS)

    Xu, Ting; Vavylonis, Dimitrios; Huang, Xiaolei

    2014-03-01

    Fluorescence microscopy is frequently used to study two and three dimensional network structures formed by cytoskeletal polymer fibers such as actin filaments and microtubules. While these cytoskeletal structures are often dilute enough to allow imaging of individual filaments or bundles of them, quantitative analysis of these images is challenging. To facilitate quantitative, reproducible and objective analysis of the image data, we developed a semi-automated method to extract actin networks and retrieve their topology in 3D. Our method uses multiple Stretching Open Active Contours (SOACs) that are automatically initialized at image intensity ridges and then evolve along the centerlines of filaments in the network. SOACs can merge, stop at junctions, and reconfigure with others to allow smooth crossing at junctions of filaments. The proposed approach is generally applicable to images of curvilinear networks with low SNR. We demonstrate its potential by extracting the centerlines of synthetic meshwork images, actin networks in 2D TIRF Microscopy images, and 3D actin cable meshworks of live fission yeast cells imaged by spinning disk confocal microscopy.

  4. A real-time 3D end-to-end augmented reality system (and its representation transformations)

    NASA Astrophysics Data System (ADS)

    Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois

    2016-09-01

    The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.

  5. Conscious intention to speak proactively facilitates lexical access during overt object naming

    PubMed Central

    Strijkers, Kristof; Holcomb, Phillip J.; Costa, Albert

    2013-01-01

    The present study explored when and how the top-down intention to speak influences the language production process. We did so by comparing the brain’s electrical response for a variable known to affect lexical access, namely word frequency, during overt object naming and non-verbal object categorization. We found that during naming, the event-related brain potentials elicited for objects with low frequency names started to diverge from those with high frequency names as early as 152 ms after stimulus onset, while during non-verbal categorization the same frequency comparison appeared 200 ms later, eliciting a qualitatively different brain response. Thus, only when participants had the conscious intention to name an object did the brain rapidly engage in lexical access. The data offer evidence that top-down intention to speak proactively facilitates the activation of words related to perceived objects. PMID:24039339

  6. Tissue and Organ 3D Bioprinting.

    PubMed

    Xia, Zengmin; Jin, Sha; Ye, Kaiming

    2018-02-01

    Three-dimensional (3D) bioprinting enables the creation of tissue constructs with heterogeneous compositions and complex architectures. It was initially used for preparing scaffolds for bone tissue engineering. It has recently been adopted to create living tissues, such as cartilage, skin, and heart valve. To facilitate vascularization, hollow channels have been created in the hydrogels by 3D bioprinting. This review discusses the state of the art of the technology, along with a broad range of biomaterials used for 3D bioprinting. It provides an update on recent developments in bioprinting and its applications. 3D bioprinting has profound impacts on biomedical research and industry. It offers a new way to industrialize tissue biofabrication. It has great potential for regenerating tissues and organs to overcome the shortage of organ transplantation.

  7. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  8. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  9. Photochemical Copper Coating on 3D Printed Thermoplastics

    NASA Astrophysics Data System (ADS)

    Yung, Winco K. C.; Sun, Bo; Huang, Junfeng; Jin, Yingdi; Meng, Zhengong; Choy, Hang Shan; Cai, Zhixiang; Li, Guijun; Ho, Cheuk Lam; Yang, Jinlong; Wong, Wai Yeung

    2016-08-01

    3D printing using thermoplastics has become very popular in recent years; however, it is challenging to provide a metal coating on 3D objects without using specialized and expensive tools. Herein, a novel acrylic paint containing malachite for coating on 3D printed objects is introduced, which can be transformed to copper via one-step laser treatment. The malachite-containing pigment can be used as a commercial acrylic paint, which can be brushed onto 3D printed objects. The material properties and photochemical transformation processes have been comprehensively studied. The underlying physics of the photochemical synthesis of copper was characterized using density functional theory calculations. After laser treatment, the surface coating of the 3D printed objects was transformed to copper, which was experimentally characterized by XRD. 3D printed prototypes, including a model of the Statue of Liberty covered with a copper surface coating and a robotic hand with copper interconnections, are demonstrated using this painting method. This composite material can provide a novel solution for coating metals on 3D printed objects. The photochemical reduction analysis indicates that the copper rust in malachite form can be remotely and photo-chemically reduced to pure copper with sufficient photon energy.

  10. Photochemical Copper Coating on 3D Printed Thermoplastics

    PubMed Central

    Yung, Winco K. C.; Sun, Bo; Huang, Junfeng; Jin, Yingdi; Meng, Zhengong; Choy, Hang Shan; Cai, Zhixiang; Li, Guijun; Ho, Cheuk Lam; Yang, Jinlong; Wong, Wai Yeung

    2016-01-01

    3D printing using thermoplastics has become very popular in recent years; however, it is challenging to provide a metal coating on 3D objects without using specialized and expensive tools. Herein, a novel acrylic paint containing malachite for coating on 3D printed objects is introduced, which can be transformed to copper via one-step laser treatment. The malachite-containing pigment can be used as a commercial acrylic paint, which can be brushed onto 3D printed objects. The material properties and photochemical transformation processes have been comprehensively studied. The underlying physics of the photochemical synthesis of copper was characterized using density functional theory calculations. After laser treatment, the surface coating of the 3D printed objects was transformed to copper, which was experimentally characterized by XRD. 3D printed prototypes, including a model of the Statue of Liberty covered with a copper surface coating and a robotic hand with copper interconnections, are demonstrated using this painting method. This composite material can provide a novel solution for coating metals on 3D printed objects. The photochemical reduction analysis indicates that the copper rust in malachite form can be remotely and photo-chemically reduced to pure copper with sufficient photon energy. PMID:27501761

  11. Photochemical Copper Coating on 3D Printed Thermoplastics.

    PubMed

    Yung, Winco K C; Sun, Bo; Huang, Junfeng; Jin, Yingdi; Meng, Zhengong; Choy, Hang Shan; Cai, Zhixiang; Li, Guijun; Ho, Cheuk Lam; Yang, Jinlong; Wong, Wai Yeung

    2016-08-09

    3D printing using thermoplastics has become very popular in recent years; however, it is challenging to provide a metal coating on 3D objects without using specialized and expensive tools. Herein, a novel acrylic paint containing malachite for coating on 3D printed objects is introduced, which can be transformed to copper via one-step laser treatment. The malachite-containing pigment can be used as a commercial acrylic paint, which can be brushed onto 3D printed objects. The material properties and photochemical transformation processes have been comprehensively studied. The underlying physics of the photochemical synthesis of copper was characterized using density functional theory calculations. After laser treatment, the surface coating of the 3D printed objects was transformed to copper, which was experimentally characterized by XRD. 3D printed prototypes, including a model of the Statue of Liberty covered with a copper surface coating and a robotic hand with copper interconnections, are demonstrated using this painting method. This composite material can provide a novel solution for coating metals on 3D printed objects. The photochemical reduction analysis indicates that the copper rust in malachite form can be remotely and photo-chemically reduced to pure copper with sufficient photon energy.

  12. Object-based connectedness facilitates matching.

    PubMed

    Koning, Arno; van Lier, Rob

    2003-10-01

    In two matching tasks, participants had to match two images of object pairs. Image-based (IB) connectedness refers to connectedness between the objects in an image. Object-based (OB) connectedness refers to connectedness between the interpreted objects. In Experiment 1, a monocular depth cue (shadow) was used to distinguish different relation types between object pairs. Three relation types were created: IB/OB-connected objects, IB/OB-disconnected objects, and IB-connected/OB-disconnected objects. It was found that IB/OB-connected objects were matched faster than IB/OB-disconnected objects. Objects that were IB-connected/OB-disconnected were matched equally to IB/OB-disconnected objects. In Experiment 2, stereoscopic presentation was used. With relation types comparable to those in Experiment 1, it was again found that OB connectedness determined speed of matching, rather than IB connectedness. We conclude that matching of projections of three-dimensional objects depends more on OB connectedness than on IB connectedness.

  13. Improving Semantic Updating Method on 3d City Models Using Hybrid Semantic-Geometric 3d Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

    Cities and urban area entities such as building structures are becoming more complex as modern human civilizations continue to evolve. The ability to plan and manage every territory, especially urban areas, is very important to every government in the world. Planning and managing cities and urban areas based on printed maps and 2D data is becoming insufficient and inefficient for coping with the complexity of new developments in big cities. The emergence of 3D city models has boosted the efficiency of analysing and managing urban areas, as 3D data are proven to represent real-world objects more accurately. They have since been adopted as the new trend in building and urban management and planning applications. Nowadays, many countries around the world have been generating virtual 3D representations of their major cities. The growing interest in improving the usability of 3D city models has resulted in the development of various analysis tools based on them. Today, 3D city models are generated for various purposes, such as tourism, location-based services, disaster management and urban planning. Meanwhile, modelling 3D objects is getting easier with the emergence of user-friendly 3D modelling tools on the market. Generating 3D buildings with high accuracy has also become easier with the availability of airborne Lidar and terrestrial laser scanning equipment. The availability of and accessibility to this technology make it more sensible to analyse buildings in urban areas using 3D data, as they accurately represent real-world objects. The Open Geospatial Consortium (OGC) has accepted the CityGML specification as one of the international standards for representing and exchanging spatial data, making it easier to visualize, store and manage 3D city model data efficiently. CityGML is able to represent the semantics, geometry, topology and appearance of 3D city models in five well-defined Levels of Detail (LoD), namely LoD0

  14. 3D Data Acquisition Platform for Human Activity Understanding

    DTIC Science & Technology

    2016-03-02

    In this project, we incorporated motion capture devices, 3D vision sensors, and EMG sensors to cross validate ... multimodality data acquisition, and address fundamental research problems of representation and invariant description of 3D data, human motion modeling and ... The support for the acquisition of such research instrumentation has significantly facilitated our current and future research and education ...

  15. Alignment of multimodality, 2D and 3D breast images

    NASA Astrophysics Data System (ADS)

    Grevera, George J.; Udupa, Jayaram K.

    2003-05-01

    In a larger effort, we are studying methods to improve the specificity of the diagnosis of breast cancer by combining the complementary information available from multiple imaging modalities. Merging information is important for a number of reasons. For example, contrast uptake curves are an indication of malignancy. The determination of anatomical locations in corresponding images from various modalities is necessary to ascertain the extent of regions of tissue. To facilitate this fusion, registration becomes necessary. We describe in this paper a framework in which 2D and 3D breast images from MRI, PET, Ultrasound, and Digital Mammography can be registered to facilitate this goal. Briefly, prior to image acquisition, an alignment grid is drawn on the breast skin. Modality-specific markers are then placed at the indicated grid points. Images are then acquired by a specific modality with the modality-specific external markers in place, causing the markers to appear in the images. This is the first study that we are aware of that has undertaken the difficult task of registering 2D and 3D images of such a highly deformable organ (the breast) across such a wide variety of modalities. This paper reports some very preliminary results from this project.
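
    Given corresponding external markers located in two modalities, a rigid initial alignment can be computed in closed form with the Kabsch/Procrustes solution sketched below. Because the breast is highly deformable, such a rigid transform could only serve as a starting point for non-rigid refinement; nothing in this sketch is specific to the authors' framework.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform mapping marker set src onto dst.

    src, dst : (N, 3) corresponding marker coordinates in the two modalities.
    Returns rotation R and translation t with dst ~= src @ R.T + t.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # guard against reflections
    t = dst_c - R @ src_c
    return R, t
```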

  16. SERT D spacecraft study. [project planning and objectives

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The SERT D (Space Electric Rocket Test - D) study defines a possible spacecraft project that would demonstrate the use of electric ion thrusters for long-term (5 yr) station keeping and attitude control of a synchronous orbit satellite. Other mission objectives included in the study were: station walking to satellite rendezvous and inspection, use of a low-cost attitude sensing system, use of an advanced solar array orientation and slip ring system, and an ion thruster integrated directly with a solar array power source. The SERT D spacecraft, if launched, would become SERT 3, the third space electric thruster test.

  17. Workflows and the Role of Images for Virtual 3d Reconstruction of no Longer Extant Historic Objects

    NASA Astrophysics Data System (ADS)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance as tools for the research and visualization of no longer extant historic objects during the last decade. Within such reconstruction processes, visual media assumes several important roles: as the most important sources especially for a reconstruction of no longer extant objects, as a tool for communication and cooperation within the production process, as well as for a communication and visualization of results. While there are many discourses about theoretical issues of depiction as sources and as visualization outcomes of such projects, there is no systematic research about the importance of depiction during a 3D reconstruction process and based on empirical findings. Moreover, from a methodological perspective, it would be necessary to understand which role visual media plays during the production process and how it is affected by disciplinary boundaries and challenges specific to historic topics. Research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from social sciences to gain a grounded view of how production processes would take place in practice and which functions and roles images would play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the field of humanities has been completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects. Only a small number of projects focused on structures that no longer or never existed physically. Especially that type of project seems to be interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination the authors of this paper applied a qualitative content analysis for a sample of 26 previously

  18. Wearable 3D measurement

    NASA Astrophysics Data System (ADS)

    Manabe, Yoshitsugu; Imura, Masataka; Tsuchiya, Masanobu; Yasumuro, Yoshihiro; Chihara, Kunihiro

    2003-01-01

    Wearable 3D measurement makes it possible to acquire 3D information about an object or an environment using a wearable computer. Recently, in Japan, voice and sound as well as pictures can be sent by mobile phone, and it is becoming easy to capture and send short movie clips as well. At the same time, computers have become compact and high performance, and they can easily connect to the Internet over wireless LAN. In the near future, we will be able to use wearable computers always and everywhere, so three-dimensional data measured by a wearable computer could be sent as a new kind of data. This paper proposes a method and system for measuring the three-dimensional shape of an object with a wearable computer. The method uses slit light projection for 3D measurement and the user's motion instead of a scanning system.
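
    The slit-light principle mentioned above can be illustrated with a minimal ray-plane triangulation, assuming a calibrated pinhole camera and a known laser-plane equation in the camera frame; the paper's actual calibration and motion handling are not reproduced here, and all numeric values are hypothetical.

    ```python
    import numpy as np

    def triangulate_slit_point(pixel, K, plane_n, plane_d):
        """Intersect the camera ray through `pixel` with the calibrated laser plane
        n.X + d = 0 (both expressed in camera coordinates). K is the 3x3 intrinsic matrix."""
        u, v = pixel
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction through the pixel
        s = -plane_d / (plane_n @ ray)                   # scale so the point lies on the plane
        return s * ray                                   # 3D point in camera coordinates

    # Hypothetical calibration values (focal lengths in pixels, offsets in mm).
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    plane_n = np.array([0.0, -0.7071, 0.7071])           # unit normal of the light plane
    plane_d = -50.0                                      # plane offset
    point_3d = triangulate_slit_point((350, 260), K, plane_n, plane_d)
    ```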

  19. Intracellular ROS mediates gas plasma-facilitated cellular transfection in 2D and 3D cultures

    PubMed Central

    Xu, Dehui; Wang, Biqing; Xu, Yujing; Chen, Zeyu; Cui, Qinjie; Yang, Yanjie; Chen, Hailan; Kong, Michael G.

    2016-01-01

    This study reports the potential of cold atmospheric plasma (CAP) as a versatile tool for delivering oligonucleotides into mammalian cells. Compared to lipofection and electroporation methods, plasma transfection showed a better uptake efficiency and less cell death in the transfection of oligonucleotides. We demonstrated that the level of extracellular aqueous reactive oxygen species (ROS) produced by gas plasma is correlated with the uptake efficiency and that this is achieved through an increase of intracellular ROS levels and the resulting increase in cell membrane permeability. This finding was supported by the use of ROS scavengers, which reduced CAP-based uptake efficiency. In addition, we found that cold atmospheric plasma could transfer oligonucleotides such as siRNA and miRNA into cells even in 3D cultures, thus suggesting the potential for unique applications of CAP beyond those provided by standard transfection techniques. Together, our results suggest that cold plasma might provide an efficient technique for the delivery of siRNA and miRNA in 2D and 3D culture models. PMID:27296089

  20. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to convey spatial relationships or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  1. 3D Pit Stop Printing

    ERIC Educational Resources Information Center

    Wright, Lael; Shaw, Daniel; Gaidds, Kimberly; Lyman, Gregory; Sorey, Timothy

    2018-01-01

    Although solving an engineering design project problem with limited resources or structural capabilities of materials can be part of the challenge, students making their own parts can support creativity. The authors of this article found an exciting solution: 3D printers are not only one of several tools for making but also facilitate a creative…

  2. Development of 3D interactive visual objects using the Scripps Institution of Oceanography's Visualization Center

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.

    2003-12-01

    Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be

  3. Compression of self-assembled nano-objects: 2D/3D transitions in films of (perfluoroalkyl)alkanes--persistence of an organized array of surface micelles.

    PubMed

    de Gracia Lux, Caroline; Gallani, Jean-Louis; Waton, Gilles; Krafft, Marie Pierre

    2010-06-25

    Understanding and controlling the molecular organization of amphiphilic molecules at interfaces is essential for materials and biological sciences. When spread on water, the model amphiphiles constituted by CnF2n+1CmH2m+1 (FnHm) diblocks spontaneously self-assemble into surface hemimicelles. Therefore, compression of monolayers of FnHm diblocks is actually a compression of nanometric objects. Langmuir films of F8H16, F8H18, F8H20, and F10H16 can actually be compressed far beyond the "collapse" of their monolayers at approximately 30 Å². For molecular areas A between 30 and 10 Å², a partially reversible, 2D/3D transition occurs between a monolayer of surface micelles and a multilayer that coexist on a large plateau. For A < 10 Å², surface pressure increases again, reaching up to approximately 48 mN m⁻¹ before the film eventually collapses. Brewster angle microscopy and AFM indicate a several-fold increase in film thickness when scanning through the 2D/3D coexistence plateau. Compression beyond the plateau leads to a further increase in film thickness and, eventually, to film disruption. Reversibility was assessed by using compression-expansion cycles. AFM of F8H20 films shows that the initial monolayer of micelles is progressively covered by one (and eventually two) bilayers, which leads to a hitherto unknown organized composite arrangement. Compression of films of the more rigid F10H16 results in crystalline-like inflorescences. For both diblocks, a hexagonal array of surface micelles is consistently seen, even when the 3D structures eventually disrupt, which means that this monolayer persists throughout the compression experiments. Two examples of pressure-driven transformations of films of self-assembled objects are thus provided. These observations further illustrate the powerful self-assembling capacity of perfluoroalkyl chains.

  4. Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations

    NASA Astrophysics Data System (ADS)

    Mirloo, Mahsa; Ebrahimnezhad, Hosein

    2018-03-01

    In this paper, a novel method is proposed to detect salient points of a 3D object that is robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points of the object's protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, according to the previously selected salient points, a new point is added to the set in each iteration. With every added salient point, the decision function is updated; this creates a condition for selecting the next point such that it is not extracted from the same protrusion part, which guarantees that a representative point is drawn from every protrusion part. The method is stable against model variations due to isometric transformations, scaling, and noise at different levels of strength, because it uses a feature robust to isometric variations and considers the relation between the salient points. In addition, the number of points used in the averaging process is decreased, which leads to lower computational complexity in comparison with other salient point detection algorithms.
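
    A minimal sketch of the iterative selection idea is given below. It is not the authors' exact decision function, but a farthest-point-style variant that assumes a precomputed pairwise geodesic-distance matrix for the mesh vertices.

    ```python
    import numpy as np

    def select_salient_points(geo_dist, n_points):
        """Iteratively pick salient points given a (V x V) pairwise geodesic-distance
        matrix. The first point maximizes the average geodesic distance (typically a
        protrusion tip); each later point maximizes its distance to the points already
        chosen, which discourages picking twice from the same protrusion part."""
        first = int(np.argmax(geo_dist.mean(axis=1)))
        salient = [first]
        for _ in range(n_points - 1):
            # decision value: distance to the nearest already-selected salient point
            decision = geo_dist[:, salient].min(axis=1)
            salient.append(int(np.argmax(decision)))
        return salient
    ```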

  5. Simulating 3D deformation using connected polygons

    NASA Astrophysics Data System (ADS)

    Tarigan, J. T.; Jaya, I.; Hardi, S. M.; Zamzami, E. M.

    2018-03-01

    In modern 3D applications, interaction between the user and the virtual world is one of the important factors for increasing realism. This interaction can be visualized in many forms; one of them is object deformation. There are many ways to simulate object deformation in a virtual 3D world, each with a different level of realism and performance. Our objective is to present a new method to simulate object deformation by using a graph of connected polygons. In this solution, each object contains multiple levels of polygons at different levels of volume. The proposed solution focuses on performance while maintaining an acceptable level of realism. In this paper, we present the design and implementation of our solution and show that it is usable in performance-sensitive 3D applications such as games and virtual reality.

  6. Progressive 3D shape abstraction via hierarchical CSG tree

    NASA Astrophysics Data System (ADS)

    Chen, Xingyou; Tang, Jin; Li, Chenglong

    2017-06-01

    A constructive solid geometry (CSG) tree model is proposed to progressively abstract the 3D geometric shape of a general object from a 2D image. Unlike conventional approaches, our method applies to general objects without the need for massive CAD model collections, and it represents object shapes in a coarse-to-fine manner that allows users to view intermediate shape representations at any time. It stands in a transitional position between 2D image features and CAD models: it benefits from state-of-the-art object detection approaches, provides a better initialization of the CAD model for finer fitting, and estimates the 3D shape and pose parameters of the object at different levels, according to a visual perception objective, in a coarse-to-fine manner. The two main contributions are the application of the CSG building-up procedure to visual perception, and the ability to extend the object estimation result into a model that is more flexible and expressive than 2D/3D primitive shapes. Experimental results demonstrate the feasibility and effectiveness of the proposed approach.
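
    To make the CSG tree concrete, the following toy sketch (not the paper's implementation) represents a shape as primitives combined by boolean set operations, evaluated through point-membership tests. Coarse-to-fine abstraction would correspond to replacing a leaf primitive with a more detailed subtree while the rest of the tree remains valid.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Sphere:
        center: tuple
        radius: float
        def contains(self, p):
            return sum((a - b) ** 2 for a, b in zip(p, self.center)) <= self.radius ** 2

    @dataclass
    class Box:
        lo: tuple
        hi: tuple
        def contains(self, p):
            return all(l <= x <= h for x, l, h in zip(p, self.lo, self.hi))

    @dataclass
    class CSGNode:
        op: str          # 'union', 'intersection' or 'difference'
        left: object
        right: object
        def contains(self, p):
            a, b = self.left.contains(p), self.right.contains(p)
            return {'union': a or b,
                    'intersection': a and b,
                    'difference': a and not b}[self.op]

    # Coarse shape: a box with a spherical notch carved out of one face.
    shape = CSGNode('difference', Box((0, 0, 0), (2, 1, 1)), Sphere((2, 0.5, 0.5), 0.4))
    print(shape.contains((1.0, 0.5, 0.5)))   # True: inside the box, outside the notch
    ```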

  7. 3D printing of an aortic aneurysm to facilitate decision making and device selection for endovascular aneurysm repair in complex neck anatomy.

    PubMed

    Tam, Matthew D B S; Laycock, Stephen D; Brown, James R I; Jakeways, Matthew

    2013-12-01

    To describe rapid prototyping or 3-dimensional (3D) printing of aneurysms with complex neck anatomy to facilitate endovascular aneurysm repair (EVAR). A 75-year-old man had a 6.6-cm infrarenal aortic aneurysm that appeared on computed tomographic angiography to have a sharp neck angulation of ~90°. However, although the computed tomography (CT) data were analyzed using centerline of flow, the true neck length and relations of the ostial origins were difficult to determine. No multidisciplinary consensus could be reached as to which stent-graft to use owing to these borderline features of the neck anatomy. Based on past experience with rapid prototyping technology, a decision was taken to print a model of the aneurysm to aid in visualization of the neck anatomy. The CT data were segmented, processed, and converted into a stereolithographic format representing the lumen as a 3D volume, from which a full-sized replica was printed within 24 hours. The model demonstrated that the neck was adequate for stent-graft repair using the Aorfix device. Rapid prototyping of aortic aneurysms is feasible and can aid decision making and device delivery. Further work is required to test the value of 3D replicas in planning procedures and their impact on procedure time, radiation dose, and procedure cost.
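
    The segmentation-to-stereolithography step described above can be approximated with standard open-source tools. The sketch below is not the clinical workflow used in the paper: it simply thresholds a CT volume, extracts an iso-surface with marching cubes, and exports an STL; the threshold, spacing, and file name are hypothetical.

    ```python
    import numpy as np
    from skimage import measure
    import trimesh

    def volume_to_stl(ct_volume, hu_threshold, voxel_spacing, out_path):
        """Extract an iso-surface from a thresholded CT volume and write it as an STL
        suitable for sending to a 3D printer."""
        mask = ct_volume > hu_threshold                     # crude contrast-lumen segmentation
        verts, faces, _, _ = measure.marching_cubes(mask.astype(np.float32),
                                                    level=0.5,
                                                    spacing=voxel_spacing)
        trimesh.Trimesh(vertices=verts, faces=faces).export(out_path)

    # Hypothetical use: contrast-filled lumen above ~200 HU, 1.0 x 0.7 x 0.7 mm voxels.
    # volume_to_stl(ct_array, 200, (1.0, 0.7, 0.7), "aneurysm_lumen.stl")
    ```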

  8. Geological mapping goes 3-D in response to societal needs

    USGS Publications Warehouse

    Thorleifson, H.; Berg, R.C.; Russell, H.A.J.

    2010-01-01

    The transition to 3-D mapping has been made possible by technological advances in digital cartography, GIS, data storage, analysis, and visualization. Despite various challenges, technological advancements facilitated a gradual transition from 2-D maps to 2.5-D draped maps to 3-D geological mapping, supported by digital spatial and relational databases that can be interrogated horizontally or vertically and viewed interactively. Challenges associated with data collection, human resources, and information management are daunting due to their resource and training requirements. The exchange of strategies at the workshops has highlighted the use of basin analysis to develop a process-based predictive knowledge framework that facilitates data integration. Three-dimensional geological information meets a public demand that fills in the blanks left by conventional 2-D mapping. Two-dimensional mapping will, however, remain the standard method for extensive areas of complex geology, particularly where deformed igneous and metamorphic rocks defy attempts at 3-D depiction.

  9. Quasi-Facial Communication for Online Learning Using 3D Modeling Techniques

    ERIC Educational Resources Information Center

    Wang, Yushun; Zhuang, Yueting

    2008-01-01

    Online interaction with 3D facial animation is an alternative way of face-to-face communication for distance education. 3D facial modeling is essential for virtual educational environments establishment. This article presents a novel 3D facial modeling solution that facilitates quasi-facial communication for online learning. Our algorithm builds…

  10. A 3D interactive multi-object segmentation tool using local robust statistics driven active contours.

    PubMed

    Gao, Yi; Kikinis, Ron; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen

    2012-08-01

    Extracting anatomically and functionally significant structures is one of the important tasks both for the theoretical study of medical image analysis and for the clinical and practical community. In the past, much work has been dedicated only to algorithmic development. Nevertheless, for clinical end users, a well designed algorithm with interactive software is necessary for the algorithm to be utilized in their daily work. Furthermore, the software should be open source in order to be used and validated not only by the authors but also by the entire community. Therefore, the contribution of the present work is twofold: first, we propose a new robust-statistics-based conformal metric and the conformal area driven multiple active contour framework, to simultaneously extract multiple targets from MR and CT medical imagery in 3D. Second, an open source graphically interactive 3D segmentation tool based on the aforementioned contour evolution is implemented and is publicly available for end users on multiple platforms. In using this software for the segmentation task, the process is initiated by user-drawn strokes (seeds) in the target region in the image. Then, local robust statistics are used to describe the object features, and such features are learned adaptively from the seeds under a non-parametric estimation scheme. Subsequently, several active contours evolve simultaneously with their interactions being motivated by the principles of action and reaction; this not only guarantees mutual exclusiveness among the contours, but also no longer relies upon the assumption that the multiple objects fill the entire image domain, which was tacitly or explicitly assumed in many previous works. In doing so, the contours interact and converge to equilibrium at the desired positions of the desired multiple objects. Furthermore, with the aim of not only validating the algorithm and the software, but also demonstrating how the tool is to be used, we provide
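
    A minimal sketch of the "local robust statistics from seeds" idea follows. It is not the paper's conformal metric, only an illustration using the median and median absolute deviation (MAD) of intensities around hypothetical seed voxels, assuming the seeds are not at the volume border.

    ```python
    import numpy as np

    def seed_statistics(image, seed_coords, radius=2):
        """Collect robust statistics (median, MAD) of the intensities in small
        neighborhoods around user-drawn seed voxels of a 3D image."""
        samples = []
        for z, y, x in seed_coords:
            patch = image[z - radius:z + radius + 1,
                          y - radius:y + radius + 1,
                          x - radius:x + radius + 1]
            samples.append(patch.ravel())
        samples = np.concatenate(samples)
        med = np.median(samples)
        mad = np.median(np.abs(samples - med))
        return med, mad

    def membership(intensity, med, mad, k=3.0):
        """Simple robust membership test: within k scaled-MADs of the seed median."""
        return np.abs(intensity - med) <= k * 1.4826 * mad
    ```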

  11. A 3D Interactive Multi-object Segmentation Tool using Local Robust Statistics Driven Active Contours

    PubMed Central

    Gao, Yi; Kikinis, Ron; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen

    2012-01-01

    Extracting anatomically and functionally significant structures is one of the important tasks both for the theoretical study of medical image analysis and for the clinical and practical community. In the past, much work has been dedicated only to algorithmic development. Nevertheless, for clinical end users, a well designed algorithm with interactive software is necessary for the algorithm to be utilized in their daily work. Furthermore, the software should be open source in order to be used and validated not only by the authors but also by the entire community. Therefore, the contribution of the present work is twofold: first, we propose a new robust-statistics-based conformal metric and the conformal area driven multiple active contour framework, to simultaneously extract multiple targets from MR and CT medical imagery in 3D. Second, an open source graphically interactive 3D segmentation tool based on the aforementioned contour evolution is implemented and is publicly available for end users on multiple platforms. In using this software for the segmentation task, the process is initiated by user-drawn strokes (seeds) in the target region in the image. Then, local robust statistics are used to describe the object features, and such features are learned adaptively from the seeds under a non-parametric estimation scheme. Subsequently, several active contours evolve simultaneously with their interactions being motivated by the principles of action and reaction; this not only guarantees mutual exclusiveness among the contours, but also no longer relies upon the assumption that the multiple objects fill the entire image domain, which was tacitly or explicitly assumed in many previous works. In doing so, the contours interact and converge to equilibrium at the desired positions of the desired multiple objects. Furthermore, with the aim of not only validating the algorithm and the software, but also demonstrating how the tool is to be used, we

  12. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, under both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses compared to the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  13. Real and virtual explorations of the environment and interactive tracking of movable objects for the blind on the basis of tactile-acoustical maps and 3D environment models.

    PubMed

    Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas

    2008-01-01

    This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments, and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS); the user then establishes the exact position at a specific landmark, like a door. This location initialises indoor navigation, based on an inertial sensor, a step recognition algorithm and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on the touch screen of a small portable tablet PC. The names of rooms, building features and hazards, modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.

  14. Powering an in-space 3D printer using solar light energy

    NASA Astrophysics Data System (ADS)

    Leake, Skye; McGuire, Thomas; Parsons, Michael; Hirsch, Michael P.; Straub, Jeremy

    2016-05-01

    This paper describes how a solar power source can enable in-space 3D printing without requiring conversion to electric power and back. A design for an in-space 3D printer is presented, with a particular focus on the power generation system. Then, key benefits are presented and evaluated. Specifically, the approach facilitates the design of a spacecraft that can be built, launched, and operated at very low cost levels. The proposed approach also facilitates easy configuration of the amount of energy that is supplied. Finally, it facilitates easier disposal by removing the heavy metals and radioactive materials required for a nuclear-power solution.

  15. Proof-of-concept: 3D bioprinting of pigmented human skin constructs.

    PubMed

    Ng, Wei Long; Qi, Jovina Tan Zhi; Yeong, Wai Yee; Naing, May Win

    2018-01-23

    Three-dimensional (3D) pigmented human skin constructs have been fabricated using a 3D bioprinting approach. The 3D pigmented human skin constructs are obtained using three different types of skin cells (keratinocytes, melanocytes and fibroblasts from three different skin donors) and they exhibit constitutive pigmentation (pale pigmentation) similar to that of the skin donors. A two-step drop-on-demand bioprinting strategy facilitates the deposition of cell droplets to emulate the epidermal melanin units (pre-defined patterning of keratinocytes and melanocytes at the desired positions) and manipulation of the microenvironment to fabricate the 3D biomimetic hierarchical porous structures found in native skin tissue. The 3D bioprinted pigmented skin constructs are compared to pigmented skin constructs fabricated by a conventional manual-casting approach; in-depth characterization of both types of 3D pigmented skin constructs indicates that the 3D bioprinted skin constructs have a higher degree of resemblance to native skin tissue in terms of the presence of well-developed stratified epidermal layers and a continuous layer of basement membrane proteins, as compared to the manually-cast samples. The 3D bioprinting approach facilitates the development of 3D in vitro pigmented human skin constructs for potential toxicology testing and fundamental cell biology research.

  16. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    NASA Astrophysics Data System (ADS)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

    The Castle of Engelbourg was built at the beginning of the 13th century, at the top of the Schlossberg. It is situated on the territory of the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the valley of the Thur. Its strategic position was one of the causes of its systematic destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among them a section of the main tower, about 7 m in diameter and 4 m wide, lying on its side, a unique feature in the regional castral landscape. Visible from the valley, it was named "the Eye of the Witch" and has become a key attraction of the region. The site, which extends over approximately one hectare, has for several years been the object of numerous archaeological studies and is today at the heart of a project to enhance the value of the vestiges. Among the numerous planned works, a key objective was to realize a 3D model of the site in its current state, in other words an "as-built" virtual model, exploitable from a cultural and tourist point of view as well as by scientists in archaeological research. The team of the ICube/INSA lab was responsible for the realization of this model, from data acquisition to delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate into this 3D model 2D archive data stemming from series of former excavations. The objectives of this project were the following ones: • Acquisition of 3D digital data of the site and 3D modelling • Digitization of the 2D archaeological data and integration into the 3D model • Implementation of a database connected to the 3D model • Virtual visit of the site The obtained results allowed us to visualize every 3D object individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail

  17. 3D reconstruction of microminiature objects based on contour line

    NASA Astrophysics Data System (ADS)

    Li, Cailin; Wang, Qiang; Guo, Baoyun

    2009-10-01

    A new method for the automatic 3D reconstruction of micro solids of revolution is presented in this paper. In the implementation of this method, an image sequence covering 360° of the solid of revolution is obtained under back lighting, with the rotation speed precisely controlled by a motor. Firstly, we need to calibrate the height of the turntable, the pixel size and the rotation axis of the turntable. Then, according to the calibration result for the rotation axis, the turntable height, the rotation angle and the pixel size, the contour points of each image can be transformed into 3D points in the reference coordinate system to generate the point cloud model. Finally, the surface geometrical model of the solid of revolution is obtained by using the relationship between two adjacent contours. Experimental results on real images are presented, which demonstrate the effectiveness of the approach.
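
    The contour-to-point-cloud mapping can be illustrated with the following sketch. It assumes an orthographic approximation, a calibrated pixel size, the image column of the rotation axis, and the image row of the turntable surface; all parameters are hypothetical and the paper's full calibration is not reproduced.

    ```python
    import numpy as np

    def contour_to_points(contour_px, theta_deg, u_axis, v_base, pixel_size):
        """Map silhouette contour pixels (u, v) from one back-lit image, taken at
        turntable angle theta, to 3D points in the turntable reference frame.
        u_axis: image column of the rotation axis; v_base: image row of the turntable
        surface; pixel_size: mm per pixel (orthographic approximation)."""
        theta = np.radians(theta_deg)
        pts = []
        for u, v in contour_px:
            r = (u - u_axis) * pixel_size          # signed radius from the rotation axis
            z = (v_base - v) * pixel_size          # height above the turntable
            pts.append((r * np.cos(theta), r * np.sin(theta), z))
        return np.array(pts)

    # Hypothetical usage for one image of the sequence taken at 15 degrees.
    # cloud_part = contour_to_points(contour_pixels, 15.0, u_axis=640, v_base=950, pixel_size=0.01)
    ```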

  18. Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation.

    PubMed

    Norman, J Farley; Phillips, Flip; Cheeseman, Jacob R; Thomason, Kelsey E; Ronning, Cecilia; Behari, Kriti; Kleinman, Kayla; Calloway, Autum B; Lamirande, Davora

    2016-01-01

    It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped "glaven") for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object's shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions-e.g., the participants' performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.

  19. The 3D Reference Earth Model (REM-3D): Update and Outlook

    NASA Astrophysics Data System (ADS)

    Lekic, V.; Moulik, P.; Romanowicz, B. A.; Dziewonski, A. M.

    2016-12-01

    Elastic properties of the Earth's interior (e.g. density, rigidity, compressibility, anisotropy) vary spatially due to changes in temperature, pressure, composition, and flow. In the 20th century, seismologists have constructed reference models of how these quantities vary with depth, notably the PREM model of Dziewonski and Anderson (1981). These 1D reference earth models have proven indispensable in earthquake location, imaging of interior structure, understanding material properties under extreme conditions, and as a reference in other fields, such as particle physics and astronomy. Over the past three decades, more sophisticated efforts by seismologists have yielded several generations of models of how properties vary not only with depth, but also laterally. Yet, though these three-dimensional (3D) models exhibit compelling similarities at large scales, differences in the methodology, representation of structure, and dataset upon which they are based, have prevented the creation of 3D community reference models. We propose to overcome these challenges by compiling, reconciling, and distributing a long period (>15 s) reference seismic dataset, from which we will construct a 3D seismic reference model (REM-3D) for the Earth's mantle, which will come in two flavors: a long wavelength smoothly parameterized model and a set of regional profiles. Here, we summarize progress made in the construction of the reference long period dataset, and present preliminary versions of the REM-3D in order to illustrate the two flavors of REM-3D and their relative advantages and disadvantages. As a community reference model and with fully quantified uncertainties and tradeoffs, REM-3D will facilitate Earth imaging studies, earthquake characterization, inferences on temperature and composition in the deep interior, and be of improved utility to emerging scientific endeavors, such as neutrino geoscience. In this presentation, we outline the outlook for setting up advisory community

  20. Learning object correspondences with the observed transport shape measure.

    PubMed

    Pitiot, Alain; Delingette, Hervé; Toga, Arthur W; Thompson, Paul M

    2003-07-01

    We propose a learning method which introduces explicit knowledge to the object correspondence problem. Our approach uses an a priori learning set to compute a dense correspondence field between two objects, where the characteristics of the field bear close resemblance to those in the learning set. We introduce a new local shape measure we call the "observed transport measure", whose properties make it particularly amenable to the matching problem. From the values of our measure obtained at every point of the objects to be matched, we compute a distance matrix which embeds the correspondence problem in a highly expressive and redundant construct and facilitates its manipulation. We present two learning strategies that rely on the distance matrix and discuss their applications to the matching of a variety of 1-D, 2-D and 3-D objects, including the corpus callosum and ventricular surfaces.
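
    A toy illustration of embedding correspondence in a distance matrix is given below. The per-point "measure" values are hypothetical scalars, not the observed transport measure itself, and a simple assignment is used only to show how correspondences can be read off the matrix.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist
    from scipy.optimize import linear_sum_assignment

    # Hypothetical per-point shape-measure values on two objects (one scalar per point).
    measure_a = np.array([[0.12], [0.80], [0.33], [0.55]])
    measure_b = np.array([[0.79], [0.10], [0.52], [0.36]])

    D = cdist(measure_a, measure_b)            # distance matrix between the measures
    rows, cols = linear_sum_assignment(D)      # one simple way to extract correspondences
    correspondences = list(zip(rows, cols))    # point i on object A matches point cols[i] on B
    ```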

  1. Drivers of Dashboard Development (3-D): A Curricular Continuous Quality Improvement Approach.

    PubMed

    Shroyer, A Laurie; Lu, Wei-Hsin; Chandran, Latha

    2016-04-01

    Undergraduate medical education (UME) programs are seeking systematic ways to monitor and manage their educational performance metrics and document their achievement of external goals (e.g., Liaison Committee on Medical Education [LCME] accreditation requirements) and internal objectives (institution-specific metrics). In other continuous quality improvement (CQI) settings, summary dashboard reports have been used to evaluate and improve performance. The Stony Brook University School of Medicine UME leadership team developed and implemented summary dashboard performance reports in 2009 to document LCME standards/criteria compliance, evaluate medical student performance, and identify progress in attaining institutional curricular goals and objectives. Key performance indicators (KPIs) and benchmarks were established and have been routinely monitored as part of the novel Drivers of Dashboard Development (3-D) approach to curricular CQI. The systematic 3-D approach has had positive CQI impacts. Substantial improvements over time have been documented in KPIs including timeliness of clerkship grades, midclerkship feedback, student mistreatment policy awareness, and student satisfaction. Stakeholder feedback indicates that the dashboards have provided useful information guiding data-driven curricular changes, such as integrating clinician-scientists as lecturers in basic science courses to clarify the clinical relevance of specific topics. Gaining stakeholder acceptance of the 3-D approach required clear communication of preestablished targets and annual meetings with department leaders and course/clerkship directors. The 3-D approach may be considered by UME programs as a template for providing faculty and leadership with a CQI framework to establish shared goals, document compliance, report accomplishments, enrich communications, facilitate decisions, and improve performance.

  2. 3D documentation of the Petalaindera: digital heritage preservation methods using 3D laser scanner and photogrammetry

    NASA Astrophysics Data System (ADS)

    Sharif, Harlina Md; Hazumi, Hazman; Hafizuddin Meli, Rafiq

    2018-01-01

    3D imaging technologies have undergone massive revolution in recent years. Despite this rapid development, documentation of 3D cultural assets in Malaysia is still very much reliant upon conventional techniques such as measured drawings and manual photogrammetry. There is very little progress towards exploring new methods or advanced technologies to convert 3D cultural assets into 3D visual representation and visualization models that are easily accessible for information sharing. In recent years, however, the advent of computer vision (CV) algorithms make it possible to reconstruct 3D geometry of objects by using image sequences from digital cameras, which are then processed by web services and freeware applications. This paper presents a completed stage of an exploratory study that investigates the potentials of using CV automated image-based open-source software and web services to reconstruct and replicate cultural assets. By selecting an intricate wooden boat, Petalaindera, this study attempts to evaluate the efficiency of CV systems and compare it with the application of 3D laser scanning, which is known for its accuracy, efficiency and high cost. The final aim of this study is to compare the visual accuracy of 3D models generated by CV system, and 3D models produced by 3D scanning and manual photogrammetry for an intricate subject such as the Petalaindera. The final objective is to explore cost-effective methods that could provide fundamental guidelines on the best practice approach for digital heritage in Malaysia.

  3. Physical modeling of 3D and 4D laser imaging

    NASA Astrophysics Data System (ADS)

    Anna, Guillaume; Hamoir, Dominique; Hespel, Laurent; Lafay, Fabien; Rivière, Nicolas; Tanguy, Bernard

    2010-04-01

    Laser imaging offers potential for observation, for 3D terrain-mapping and classification, as well as for target identification, including behind vegetation, camouflage or glass windows, at day and night, and under all-weather conditions. First-generation systems deliver 3D point clouds. Their threshold detection is largely affected by the local opto-geometric characteristics of the objects, leading to inaccuracies in the distances measured, and by partial occultation, leading to multiple echoes. Second-generation systems circumvent these limitations by recording the temporal waveforms received by the system, so that data processing can improve the telemetry and the point cloud better matches reality. Future algorithms may exploit the full potential of the 4D full-waveform data. Hence, being able to simulate point-cloud (3D) and full-waveform (4D) laser imaging is key. We have developed a numerical model for predicting the output data of 3D or 4D laser imagers. The model accounts for the temporal and transverse characteristics of the laser pulse (i.e. of the "laser bullet") emitted by the system, its propagation through a turbulent and scattering atmosphere, its interaction with the objects present in the field of view, and the characteristics of the optoelectronic reception path of the system.

  4. Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation

    PubMed Central

    Cheeseman, Jacob R.; Thomason, Kelsey E.; Ronning, Cecilia; Behari, Kriti; Kleinman, Kayla; Calloway, Autum B.; Lamirande, Davora

    2016-01-01

    It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped “glaven”) for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object’s shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions–e.g., the participants’ performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision. PMID:26863531

  5. Impact of 3D vision on mental workload and laparoscopic performance in inexperienced subjects.

    PubMed

    Gómez-Gómez, E; Carrasco-Valiente, J; Valero-Rosa, J; Campos-Hernández, J P; Anglada-Curado, F J; Carazo-Carazo, J L; Font-Ugalde, P; Requena-Tapia, M J

    2015-05-01

    To assess the effect of vision in three dimensions (3D) versus two dimensions (2D) on mental workload and laparoscopic performance during simulation-based training. A prospective, randomized crossover study of students inexperienced in operative laparoscopy was conducted. Forty-six candidates executed five standardized exercises on a pelvitrainer with both vision systems (3D and 2D). Laparoscopic performance was assessed using the total time (in seconds) and the number of failed attempts. For workload assessment, the validated NASA-TLX questionnaire was administered. 3D vision improves performance, reducing the time (3D = 1006.08 ± 315.94 vs. 2D = 1309.17 ± 300.28; P < .001) and the total number of failed attempts (3D = .84 ± 1.26 vs. 2D = 1.86 ± 1.60; P < .001). For each exercise, 3D vision also shows better performance times: "transfer objects" (P = .001), "single knot" (P < .001), "clip and cut" (P < .05), and "needle guidance" (P < .001). Moreover, according to the NASA-TLX results, less mental workload is experienced with the use of 3D (P < .001). However, 3D vision was associated with greater visual impairment (P < .01) and headaches (P < .05). The incorporation of 3D systems into laparoscopic training programs would facilitate the acquisition of laparoscopic skills, because they reduce mental workload and improve the performance of inexperienced surgeons. However, some undesirable effects such as visual discomfort or headache are identified initially. Copyright © 2014 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  6. Word comprehension facilitates object individuation in 10- and 11-month-old infants.

    PubMed

    Rivera, Susan M; Zawaydeh, Aseen Nancie

    2007-05-18

    The present study investigated the role that comprehending words for objects plays in 10- and 11-month-old infants' ability to individuate those objects in a spatiotemporally ambiguous event. To do this, we employed an object individuation task in which infants were familiarized to two objects coming in and out from behind a screen in alternation, and then the screen was removed to reveal either both or only one of the objects. Results show that only when 10- and 11-month-olds comprehend words for both objects seen do they exhibit looking behavior that is consistent with object individuation (i.e., looking longer when one of the objects is surreptitiously removed). Neither level of object permanence reasoning nor overall receptive vocabulary had an effect on performance in the object individuation task, indicating that the effect was specific to the immediate parameters of the situation, and not a function of overall precocity on the part of the succeeding infants. These results suggest that comprehending the words for occluded/disoccluded objects provides a kind of "glue" which allows infants to bind the mental index of an object with its perceptual features (thus precipitating the formation of two mental indexes, rather than one). They further suggest that a shift from object indexing driven by the where (dorsal) system to one which is driven by integration of the ventral and dorsal neural systems, usually not observed until 12 months of age, can be facilitated by word comprehension in 10- and 11-month-old infants.

  7. 3D Laser Scanner for Underwater Manipulation.

    PubMed

    Palomer, Albert; Ridao, Pere; Youakim, Dina; Ribas, David; Forest, Josep; Petillot, Yvan

    2018-04-04

    Nowadays, research in autonomous underwater manipulation has demonstrated simple applications like picking an object from the sea floor, turning a valve or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles or the recognition and location of objects based on their 3D model to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation providing 3D sensing capabilities in real-time at low cost. Unfortunately, the underwater robotics community is lacking a 3D sensor with similar capabilities to provide rich 3D information of the work space. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF) fixed-base manipulator system is used to demonstrate arm motion within a work space populated with a priori unknown fixed obstacles. Next, an eight DoF free floating Underwater Vehicle-Manipulator System (UVMS) is used to autonomously grasp an object from the bottom of a water tank.

  8. Responsive 3D microstructures from virus building blocks.

    PubMed

    Oh, Seungwhan; Kwak, Eun-A; Jeon, Seongho; Ahn, Suji; Kim, Jong-Man; Jaworski, Justyn

    2014-08-13

    Fabrication of 3D biological structures reveals dynamic response to external stimuli. A liquid-crystalline bridge extrusion technique is used to generate 3D structures allowing the capture of Rayleigh-like instabilities, facilitating customization of smooth, helical, or undulating periodic surface textures. By integrating intrinsic biochemical functionality and synthetic components into controlled structures, this strategy offers a new form of adaptable materials. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    PubMed

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-05-05

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not use deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.

  10. Surfactant Protein D Facilitates Cryptococcus neoformans Infection

    PubMed Central

    Geunes-Boyer, Scarlett; Beers, Michael F.; Heitman, Joseph; Wright, Jo Rae

    2012-01-01

    Concurrent with the global escalation of the AIDS pandemic, cryptococcal infections are increasing and are of significant medical importance. Furthermore, Cryptococcus neoformans has become a primary human pathogen, causing infection in seemingly healthy individuals. Although numerous studies have elucidated the virulence properties of C. neoformans, less is understood regarding lung host immune factors during early stages of fungal infection. Based on our previous studies documenting that pulmonary surfactant protein D (SP-D) protects C. neoformans cells against macrophage-mediated defense mechanisms in vitro (S. Geunes-Boyer et al., Infect. Immun. 77:2783–2794, 2009), we postulated that SP-D would facilitate fungal infection in vivo. To test this hypothesis, we examined the role of SP-D in response to C. neoformans using SP-D−/− mice. Here, we demonstrate that mice lacking SP-D were partially protected during C. neoformans infection; they displayed a longer mean time to death and decreased fungal burden at several time points postinfection than wild-type mice. This effect was reversed by the administration of exogenous SP-D. Furthermore, we show that SP-D bound to the surface of the yeast cells and protected the pathogenic microbes against macrophage-mediated defense mechanisms and hydrogen peroxide (H2O2)-induced oxidative stress in vitro and in vivo. These findings indicate that C. neoformans is capable of coopting host SP-D to increase host susceptibility to the yeast. This study establishes a new paradigm for the role played by SP-D during host responses to C. neoformans and consequently imparts insight into potential future preventive and/or treatment strategies for cryptococcosis. PMID:22547543

  11. 2D/3D Synthetic Vision Navigation Display

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, jason L.

    2008-01-01

    Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.

  12. 3D-printed Bioanalytical Devices

    PubMed Central

    Bishop, Gregory W; Satterwhite-Warden, Jennifer E; Kadimisetty, Karteek; Rusling, James F

    2016-01-01

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices. PMID:27250897

  13. A combined system for 3D printing cybersecurity

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2017-06-01

    Previous work has discussed the impact of cybersecurity breaches on 3D printed objects. Multiple attack types that could weaken objects, make them unsuitable for certain applications and even create safety hazards have been presented. This paper considers a visible light sensing-based verification system's efficacy as a means of thwarting cybersecurity threats to 3D printing. This system detects discrepancies between expected and actual printed objects (based on an independent pristine CAD model). Whether reliance on an independent CAD model is appropriate is also considered. The future of 3D printing is projected and the importance of cybersecurity in this future is discussed.
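
    One simple way to realize the "expected versus actual" comparison is sketched below, under the assumption that both the pristine CAD model and the sensed print are available as boolean occupancy grids on the same voxel lattice; the tolerance thresholds are hypothetical.

    ```python
    import numpy as np

    def verify_print(cad_voxels, scan_voxels,
                     max_missing_fraction=0.01, max_extra_fraction=0.01):
        """Flag a printed object whose scanned occupancy grid deviates from the
        occupancy grid derived from the pristine CAD model.
        Both inputs are boolean arrays defined on the same grid."""
        missing = np.logical_and(cad_voxels, ~scan_voxels)   # material absent where expected
        extra = np.logical_and(~cad_voxels, scan_voxels)     # material present where not expected
        n_expected = max(int(cad_voxels.sum()), 1)
        missing_frac = missing.sum() / n_expected
        extra_frac = extra.sum() / n_expected
        ok = missing_frac <= max_missing_fraction and extra_frac <= max_extra_fraction
        return ok, missing_frac, extra_frac
    ```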

  14. Neuromorphic Event-Based 3D Pose Estimation

    PubMed Central

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.

    2016-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
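
    As a heavily simplified illustration of the per-event incremental idea (not the paper's 6-DoF update rule), the toy sketch below nudges only the translation so that the projection of the model point nearest to an incoming event moves toward that event; the intrinsics and gain are hypothetical.

    ```python
    import numpy as np

    def project(points_3d, K):
        """Pinhole projection of Nx3 camera-frame points with intrinsic matrix K."""
        p = points_3d @ K.T
        return p[:, :2] / p[:, 2:3]

    def update_translation_from_event(event_uv, model_pts_cam, t, K, gain=0.05):
        """Toy incremental update: the model points are assumed already rotated into the
        camera frame; only the translation t is adjusted here, to keep the sketch short."""
        cam_pts = model_pts_cam + t
        proj = project(cam_pts, K)
        i = int(np.argmin(np.sum((proj - event_uv) ** 2, axis=1)))   # closest model point
        err = np.array(event_uv) - proj[i]                           # 2D error in pixels
        z = cam_pts[i, 2]
        # back-project the pixel error to a lateral 3D correction at depth z
        dt = gain * np.array([err[0] * z / K[0, 0], err[1] * z / K[1, 1], 0.0])
        return t + dt
    ```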

  15. An Approach to Develop 3d Geo-Dbms Topological Operators by Re-Using Existing 2d Operators

    NASA Astrophysics Data System (ADS)

    Xu, D.; Zlatanova, S.

    2013-09-01

    Database systems are continuously extending their capabilities to store, process and analyse 3D data. Topological relationships, which describe the interaction of objects in space, are one of the important spatial issues. However, spatial operators for 3D objects are still insufficient. In this paper we present the development of a new 3D topological function to distinguish intersections of 3D planar polygons. The development uses existing 2D functions in the DBMS and two geometric transformations (rotation and projection). This function is tested on a real dataset to detect overlapping 3D city objects. The paper presents the algorithms and analyses the challenges. Suggestions for improvements of the current algorithm as well as possible extensions to handle more 3D topological cases are discussed at the end.
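
    The core trick, reusing a 2D predicate after rotating the polygons' plane so its normal points along the z-axis, can be sketched as follows. This is an illustrative reconstruction rather than the DBMS function from the paper, and the use of Shapely for the final 2D test is an assumption.

    ```python
    # Hedged sketch: test whether two coplanar 3D polygons overlap by rotating their
    # common plane so its normal aligns with the z-axis, dropping z, and reusing a
    # 2D intersection test. Illustrative only; the paper implements this in a DBMS.
    import numpy as np
    from shapely.geometry import Polygon   # assumed stand-in for the DBMS 2D operator

    def rotation_to_z(normal):
        """Rotation matrix sending the unit vector `normal` onto (0, 0, 1) (Rodrigues)."""
        n = normal / np.linalg.norm(normal)
        z = np.array([0.0, 0.0, 1.0])
        v = np.cross(n, z)
        c = float(np.dot(n, z))
        if np.allclose(v, 0.0):                    # already aligned, or exactly opposite
            return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
        vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        return np.eye(3) + vx + vx @ vx / (1.0 + c)

    def planar_polygons_intersect(poly_a, poly_b):
        """poly_a, poly_b: (N, 3) vertex arrays of two coplanar planar polygons."""
        a = np.asarray(poly_a, dtype=float)
        normal = np.cross(a[1] - a[0], a[2] - a[0])
        r = rotation_to_z(normal)
        flat_a = (a @ r.T)[:, :2]                            # rotate, then project to 2D
        flat_b = (np.asarray(poly_b, dtype=float) @ r.T)[:, :2]
        return Polygon(flat_a).intersects(Polygon(flat_b))
    ```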

  16. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    PubMed Central

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object’s surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well for both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID

  17. 3D Printing of Carbon Nanotubes-Based Microsupercapacitors.

    PubMed

    Yu, Wei; Zhou, Han; Li, Ben Q; Ding, Shujiang

    2017-02-08

    A novel 3D printing procedure is presented for fabricating carbon nanotube (CNT)-based microsupercapacitors. The 3D printer uses a CNT ink slurry with a moderate solid content and prints a stream of continuous droplets. Appropriate control of a heated base is applied to facilitate solvent removal and adhesion between printed layers and to improve structural integrity without delamination or distortion upon drying. The 3D-printed electrodes for microsupercapacitors are characterized by SEM, a laser scanning confocal microscope, and a step profiler. The effect of process parameters on 3D printing is also studied. The final solid-state microsupercapacitors are assembled with the printed multilayer CNT structures and a poly(vinyl alcohol)-H3PO4 gel as the interdigitated microelectrodes and electrolyte. The electrochemical performance of the 3D printed microsupercapacitors is also tested, showing a significant areal capacitance and excellent cycle stability.

  18. A Taxonomy of 3D Occluded Objects Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh

    2016-03-01

    The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and growing Internet usage requires object recognition for certain applications, particularly for occluded objects. However, occlusion remains an unhandled issue that disturbs the relations between feature points extracted from an image, and research is ongoing to develop efficient techniques and easy-to-use algorithms that help users source images and overcome the problems and issues caused by occlusion. The aim of this research is to review algorithms for recognizing occluded objects and to analyse their pros and cons for solving the occlusion problem, considering the features extracted from an occluded object to distinguish it from other co-existing objects, and to identify new techniques that can differentiate the occluded fragments and sections inside an image.

  19. A 3D Geometry Model Search Engine to Support Learning

    ERIC Educational Resources Information Center

    Tam, Gary K. L.; Lau, Rynson W. H.; Zhao, Jianmin

    2009-01-01

    Due to the popularity of 3D graphics in animation and games, usage of 3D geometry deformable models increases dramatically. Despite their growing importance, these models are difficult and time consuming to build. A distance learning system for the construction of these models could greatly facilitate students to learn and practice at different…

  20. New 3D thermal evolution model for icy bodies application to trans-Neptunian objects

    NASA Astrophysics Data System (ADS)

    Guilbert-Lepoutre, A.; Lasue, J.; Federico, C.; Coradini, A.; Orosei, R.; Rosenberg, E. D.

    2011-05-01

    Context. Thermal evolution models have been developed over the years to investigate the evolution of thermal properties based on the transfer of heat fluxes or transport of gas through a porous matrix, among others. Applications of such models to trans-Neptunian objects (TNOs) and Centaurs have shown that these bodies could be strongly differentiated from the point of view of chemistry (i.e. loss of most volatile ices), as well as from physics (e.g. melting of water ice), resulting in stratified internal structures with differentiated cores and potential pristine material close to the surface. In this context, some observational results, such as the detection of crystalline water ice or volatiles, remain puzzling. Aims: In this paper, we would like to present a new fully three-dimensional thermal evolution model. With this model, we aim to improve determination of the temperature distribution inside icy bodies such as TNOs by accounting for lateral heat fluxes, which have been proven to be important for accurate simulations. We would also like to be able to account for heterogeneous boundary conditions at the surface through various albedo properties, for example, that might induce different local temperature distributions. Methods: In a departure from published modeling approaches, the heat diffusion problem and its boundary conditions are represented in terms of real spherical harmonics, increasing the numerical efficiency by roughly an order of magnitude. We then compare this new model and another 3D model recently published to illustrate the advantages and limits of the new model. We try to put some constraints on the presence of crystalline water ice at the surface of TNOs. Results: The results obtained with this new model are in excellent agreement with results obtained by different groups with various models. Small TNOs could remain primitive unless they are formed quickly (less than 2 Myr) or are debris from the disruption of larger bodies. We find that, for
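
    For readers unfamiliar with the representation, one standard way to expand the conductive heat equation in real spherical harmonics, written here purely for illustration and under the simplifying assumption of constant conductivity, density and heat capacity (which the paper's model does not require), is:

    ```latex
    % Illustrative decomposition only; the paper's model allows heterogeneous properties.
    T(r,\theta,\varphi,t) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l} \tau_{lm}(r,t)\, Y_{lm}(\theta,\varphi),
    \qquad
    \rho c \,\frac{\partial \tau_{lm}}{\partial t}
      = \frac{\kappa}{r^{2}}\frac{\partial}{\partial r}\!\left(r^{2}\frac{\partial \tau_{lm}}{\partial r}\right)
      - \kappa\,\frac{l(l+1)}{r^{2}}\,\tau_{lm} + Q_{lm}(r,t).
    ```

    Here Y_lm are the real spherical harmonics, tau_lm the harmonic coefficients of the temperature field and Q_lm those of the internal heat sources; each harmonic degree reduces to a one-dimensional radial equation, which is what makes the representation numerically efficient.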

  1. EV71 3D Protein Binds with NLRP3 and Enhances the Assembly of Inflammasome Complex

    PubMed Central

    Wan, Pin; Pan, Pan; Zhang, Yecheng; Wu, Kailang; Liu, Yingle; Wu, Jianguo

    2017-01-01

    Activation of the NLRP3 inflammasome is important for effective host defense against invading pathogens. Together with apoptosis-associated speck-like protein containing CARD domain (ASC), NLRP3 induces the cleavage of caspase-1 to facilitate the maturation of interleukin-1beta (IL-1β), an important pro-inflammatory cytokine. IL-1β subsequently plays critical roles in inflammatory responses by activating immune cells and inducing many secondary pro-inflammatory cytokines. Although the role of the NLRP3 inflammasome in the immune response is well defined, the mechanism underlying its assembly modulated by pathogen infection remains largely unknown. Here, we identified a novel mechanism by which enterovirus 71 (EV71) facilitates the assembly of the NLRP3 inflammasome. Our results show that EV71 induces production and secretion of IL-1β in macrophages and peripheral blood mononuclear cells (PBMCs) through activation of the NLRP3 inflammasome. EV71 replication and protein synthesis are required for NLRP3-mediated activation of IL-1β. Interestingly, the EV71 3D protein, an RNA-dependent RNA polymerase (RdRp), was found to stimulate the activation of the NLRP3 inflammasome, the cleavage of pro-caspase-1, and the release of IL-1β through direct binding to NLRP3. More importantly, 3D interacts with NLRP3 to facilitate the assembly of the inflammasome complex by forming a 3D-NLRP3-ASC ring-like structure, resulting in the activation of IL-1β. These findings demonstrate a new role of 3D as an important player in the activation of the inflammatory response, and identify a novel mechanism underlying the modulation of inflammasome assembly and function induced by pathogen invasion. PMID:28060938

  2. A colour image reproduction framework for 3D colour printing

    NASA Astrophysics Data System (ADS)

    Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie

    2016-10-01

    In this paper, current technologies in full colour 3D printing are introduced. A framework for the colour image reproduction process in 3D colour printing is proposed. A special focus is put on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral-based colour reproduction, are proposed in order to faithfully reproduce colours in 3D objects. Two key studies, colour reproduction for soft tissue prostheses and colour uniformity correction across different orientations, are described subsequently. Results clearly show that, by applying the proposed colour image reproduction framework, the performance of colour reproduction can be significantly enhanced. With post colour corrections, a further improvement in the colour reproduction process is achieved for 3D printed objects.

  3. Chemomechanically engineered 3D organotypic platforms of bladder cancer dormancy and reactivation.

    PubMed

    Pavan Grandhi, Taraka Sai; Potta, Thrimoorthy; Nitiyanandan, Rajeshwar; Deshpande, Indrani; Rege, Kaushal

    2017-10-01

    Tumors undergo periods of dormancy followed by reactivation leading to metastatic disease. Arrest in the G0/G1 phase of the cell cycle and resistance to chemotherapeutic drugs are key hallmarks of dormant tumor cells. Here, we describe a 3D platform of bladder cancer cell dormancy and reactivation facilitated by a novel aminoglycoside-derived hydrogel, Amikagel. These 3D dormant tumor microenvironments (3D-DTMs) were arrested in the G0/G1 phase and were highly resistant to anti-proliferative drugs. Inhibition of targets in the cellular protein production machinery led to induction of endoplasmic reticulum (ER) stress and complete ablation of 3D-DTMs. Nanoparticle-mediated calcium delivery significantly accelerated ER stress-mediated 3D-DTM death. Transfer of 3D-DTMs onto weaker and adhesive Amikagels resulted in selective reactivation of a sub-population of N-cadherin deficient cells from dormancy. Whole-transcriptome analyses further indicated key biochemical differences between dormant and proliferative cancer cells. Taken together, our results indicate that 3D bladder cancer microenvironments of dormancy and reactivation can facilitate fundamental advances and novel drug discovery in cancer. Copyright © 2017. Published by Elsevier Ltd.

  4. 3-D World Modeling For An Autonomous Robot

    NASA Astrophysics Data System (ADS)

    Goldstein, M.; Pin, F. G.; Weisbin, C. R.

    1987-01-01

    This paper presents a methodology for a concise representation of the 3-D world model for a mobile robot, using range data. The process starts with the segmentation of the scene into "objects" that are given a unique label, based on principles of range continuity. Then the external surface of each object is partitioned into homogeneous surface patches. Contours of surface patches in 3-D space are identified by estimating the normal and curvature associated with each pixel. The resulting surface patches are then classified as planar, convex or concave. Since the world model uses a volumetric representation for the 3-D environment, planar surfaces are represented by thin volumetric polyhedra. Spherical and cylindrical surfaces are extracted and represented by appropriate volumetric primitives. All other surfaces are represented using the boolean union of spherical volumes (as described in a separate paper by the same authors). The result is a general, concise representation of the external 3-D world, which allows for efficient and robust 3-D object recognition.
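
    As a toy illustration, not the authors' procedure, of labelling a surface patch as planar, convex or concave from range data, one can fit a local quadric in a frame aligned with the patch normal and inspect the signs of its curvature; the thresholds and the sensor-at-the-origin convention below are assumptions.

    ```python
    # Toy sketch: label a range-image patch as planar / convex / concave.
    # Not the paper's method; thresholds, the quadric fit and the assumption that
    # the sensor sits at the coordinate origin are illustrative choices.
    import numpy as np

    def classify_patch(points, flat_tol=1e-3):
        """points: (N, 3) samples of one surface patch in sensor coordinates, N >= 6."""
        centroid = points.mean(axis=0)
        centered = points - centroid
        # Patch normal = right-singular vector with the smallest singular value.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        u, v, n = vt[0], vt[1], vt[2]
        if np.dot(n, centroid) > 0:         # orient the normal toward the sensor at the origin
            n = -n
        x, y, h = centered @ u, centered @ v, centered @ n
        # Fit h ~ a*x^2 + b*x*y + c*y^2 + d*x + e*y + f by least squares.
        A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        coeffs, *_ = np.linalg.lstsq(A, h, rcond=None)
        a, b, c = coeffs[:3]
        eigs = np.linalg.eigvalsh(np.array([[2 * a, b], [b, 2 * c]]))
        if np.all(np.abs(eigs) < flat_tol):
            return "planar"
        if np.all(eigs < 0):                # bulging toward the sensor
            return "convex"
        if np.all(eigs > 0):                # curving away from the sensor
            return "concave"
        return "saddle"                     # mixed curvature (not a class used in the paper)
    ```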

  5. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated
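
    The opacity-filter idea described above amounts to assigning each voxel an alpha value from its datum and compositing along the view axis. A minimal numpy sketch with an assumed threshold-plus-linear-ramp transfer function, not the one used for these MCS volumes, is:

    ```python
    # Minimal sketch of voxel opacity filtering with front-to-back alpha compositing
    # along one axis. The transfer function (threshold + linear ramp) is an assumed
    # example, not the filter used for the reflectivity volumes described above.
    import numpy as np

    def composite(volume, threshold=0.5, max_alpha=0.1, axis=0):
        """volume: 3D array of reflectivity/amplitude values -> 2D composited image."""
        lo, hi = float(volume.min()), float(volume.max())
        normalized = (volume - lo) / (hi - lo + 1e-12)
        # Opacity transfer function: fully transparent below the threshold, then a
        # linear ramp up to max_alpha, so weak reflections stay see-through.
        alpha = np.where(normalized < threshold, 0.0,
                         max_alpha * (normalized - threshold) / (1.0 - threshold + 1e-12))
        vol = np.moveaxis(normalized, axis, 0)
        a = np.moveaxis(alpha, axis, 0)
        image = np.zeros(vol.shape[1:])
        transmittance = np.ones(vol.shape[1:])
        for slab, slab_alpha in zip(vol, a):          # front-to-back compositing
            image += transmittance * slab_alpha * slab
            transmittance *= 1.0 - slab_alpha
        return image
    ```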

  6. 3D Printing in Instructional Settings: Identifying a Curricular Hierarchy of Activities

    ERIC Educational Resources Information Center

    Brown, Abbie

    2015-01-01

    A report of a year-long study in which the author engaged in 3D printing activity in order to determine how to facilitate and support skill building, concept attainment, and increased confidence with its use among teachers. Use of 3D printing tools and their applications in instructional settings are discussed. A hierarchy of 3D printing…

  7. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  8. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  9. ABS 3D printed solutions for cryogenic applications

    NASA Astrophysics Data System (ADS)

    Bartolomé, E.; Bozzo, B.; Sevilla, P.; Martínez-Pasarell, O.; Puig, T.; Granados, X.

    2017-03-01

    3D printing has become a common, inexpensive and rapid prototyping technique, enabling the ad hoc fabrication of complex shapes. In this paper, we demonstrate that 3D printed objects in ABS can be used at cryogenic temperatures, offering flexible solutions in different fields. Firstly, a thermo-mechanical characterization of ABS 3D printed specimens at 77 K is reported, which allowed us to delimit the type of cryogenic uses where 3D printed pieces may be implemented. Secondly, we present three different examples where ABS 3D printed objects working at low temperatures have provided specific solutions: (i) SQUID inserts for angular magnetometry (low temperature material characterization field); (ii) a cage support for a metamaterial "magnetic concentrator" (superconductivity application), and (iii) dedicated tools for cryopreservation in assisted reproductive techniques (medicine field).

  10. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat, tabletop surface and enables multiple viewers to observe raised 3D images from any angle at 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device shapes a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray passes through a corresponding point on a virtual object's surface and is oriented toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their perspectives because the images include binocular disparity. The entire principle is installed beneath the table, so the tabletop area remains clear. No ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  11. Optical 3D surface digitizing in forensic medicine: 3D documentation of skin and bone injuries.

    PubMed

    Thali, Michael J; Braun, Marcel; Dirnhofer, Richard

    2003-11-26

    The photography process reduces a three-dimensional (3D) wound to a two-dimensional level. If there is a need for a high-resolution 3D dataset of an object, it needs to be scanned three-dimensionally. Non-contact optical 3D digitizing surface scanners can be used as a powerful tool for analysing wounds and injury-causing instruments in trauma cases. 3D documentation of a skin wound and a bone injury using the optical scanner Advanced TOpometric Sensor (ATOS II, GOM International, Switzerland) is demonstrated using two illustrative cases. Using this 3D optical digitizing method, the wounds (the virtual 3D computer models of the skin and bone injuries) and the virtual 3D model of the injury-causing tool are graphically documented in 3D in real-life size and shape and can be rotated in a CAD program on the computer screen. In addition, the virtual 3D models of the bone injuries and the tool can be compared against one another in a 3D CAD program in virtual space, to see whether there are matching areas. Further steps in forensic medicine will be full 3D surface documentation of the human body and of all forensically relevant injuries using optical 3D scanners.

  12. Dissection of C. elegans behavioral genetics in 3-D environments

    PubMed Central

    Kwon, Namseop; Hwang, Ara B.; You, Young-Jai; V. Lee, Seung-Jae; Ho Je, Jung

    2015-01-01

    The nematode Caenorhabditis elegans is a widely used model for genetic dissection of animal behaviors. Despite extensive technical advances in imaging methods, it remains challenging to visualize and quantify C. elegans behaviors in three-dimensional (3-D) natural environments. Here we developed an innovative 3-D imaging method that enables quantification of C. elegans behavior in 3-D environments. Furthermore, for the first time, we characterized 3-D-specific behavioral phenotypes of mutant worms that have defects in head movement or mechanosensation. This approach allowed us to reveal previously unknown functions of genes in behavioral regulation. We expect that our 3-D imaging method will facilitate new investigations into genetic basis of animal behaviors in natural 3-D environments. PMID:25955271

  13. MAP3D: a media processor approach for high-end 3D graphics

    NASA Astrophysics Data System (ADS)

    Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris

    1999-12-01

    Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with high performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline--allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to successfully find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications in the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high performance 3D graphics.

  14. SU-F-T-571: Objective Assessment of 3D Dosimetry for Flattened and Flattened Filter Free Stereotactic Rotational Delivery Using 729-Array Detector with Octavius 4D Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vikraman, S; Arun, C; Jain, K Sandeep

    2016-06-15

    Purpose: The purpose of this study was to assess the potential of 3D dosimetry for flattened and flattened filter free stereotactic rotational delivery in high definition MLC using a 729-detector array with the Octavius 4D phantom. Methods: Twenty rapid arc plans were assessed for this study. For each patient, two plans for 6X and 6FFF photon beams were generated with the same prescription and critical organ constraints in Eclipse TPS version 13.0 using high definition MLC. Verification plans were generated in the scanned Octavius 4D phantom in the TPS. 3D dose measurements were collected from the 729-ion chamber detector array in the Octavius 4D phantom using VeriSoft software v 6.0. The TPS-calculated dose was compared with the measured 3D dose in VeriSoft using the following gamma analysis parameters: 3D volumetric, 3D planar and 2D global gamma in transverse, sagittal and coronal planes for 3mm/3% and 2mm/2% distance-to-agreement criteria. Passing rate and arithmetic mean of global gamma were analysed for 2D and 3D global gamma in all planes. Results: The average dose-point passing rate for 2D global gamma with the 3mm/3% criteria in transverse, sagittal and coronal planes was 99.06%±2.89%, 98.8%±0.88% and 99.06%±91%, respectively. For the 2mm/2% criteria, 97.86%±2.26%, 94.49%±2.64% and 94.34%±2.9% were observed. For 3D planar global gamma, the passing rate with 3mm/3% was 99.53%±0.49%, 98.93%±1.03% and 99.29%±1.29%, and with the 2mm/2% criteria it was 97.50%±2.24%, 94.5%±2.5% and 95.38%±4.5%. The maximum arithmetic mean gamma deviation of 0.505%±0.13% was observed in the coronal plane for 2D global gamma with the 2mm/2% criteria. The 3D volumetric gamma passing rate was 99.61%±0.433% for 3mm/3% and 95.91%±2.51% for 2mm/2%. Conclusion: The objective assessment of 3D dosimetry has demonstrated that the rotational delivery accuracy for flattened and flattened filter free stereotactic plans can be verified using the Octavius system comprising the 729 ion chamber array and the Octavius 4D phantom.
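
    For reference, the global gamma index behind these pass rates combines a dose-difference tolerance (taken as a percentage of the reference maximum) with a distance-to-agreement tolerance. The simplified 2D brute-force sketch below, with assumed grid spacing, low-dose cutoff and no sub-pixel interpolation, illustrates the computation; it is not the VeriSoft implementation.

    ```python
    # Simplified 2D global gamma-index sketch (brute force, no sub-pixel interpolation).
    # Grid spacing, tolerances and the low-dose cutoff are illustrative assumptions.
    import numpy as np

    def gamma_pass_rate(reference, measured, spacing_mm=2.5, dose_pct=3.0, dta_mm=3.0,
                        low_dose_cutoff=0.1):
        """reference, measured: 2D dose arrays on the same grid; returns pass rate in %."""
        dd = dose_pct / 100.0 * reference.max()        # global dose-difference tolerance
        search = int(np.ceil(dta_mm / spacing_mm)) + 1
        ny, nx = measured.shape
        gamma = np.full(measured.shape, np.inf)
        for j in range(ny):
            for i in range(nx):
                if measured[j, i] < low_dose_cutoff * reference.max():
                    gamma[j, i] = np.nan               # ignore very low-dose points
                    continue
                best = np.inf
                for dj in range(-search, search + 1):
                    for di in range(-search, search + 1):
                        rj, ri = j + dj, i + di
                        if not (0 <= rj < ny and 0 <= ri < nx):
                            continue
                        dist2 = (dj * spacing_mm) ** 2 + (di * spacing_mm) ** 2
                        dose2 = (measured[j, i] - reference[rj, ri]) ** 2
                        best = min(best, dist2 / dta_mm ** 2 + dose2 / dd ** 2)
                gamma[j, i] = np.sqrt(best)
        valid = ~np.isnan(gamma)
        return 100.0 * np.mean(gamma[valid] <= 1.0)
    ```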

  15. Motion-Capture-Enabled Software for Gestural Control of 3D Models

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony

    2012-01-01

    Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to control intuitively the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.
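
    As a toy illustration of how tracked body markers can map onto simultaneous object transforms, two hand positions can drive translation, uniform scale and yaw at once. This mapping and the marker format are assumptions for illustration, not the NASA software described above.

    ```python
    # Toy sketch: derive translation, uniform scale and yaw of a virtual object from
    # two tracked hand positions. The mapping is an illustrative assumption, not the
    # gestural-control software described above.
    import numpy as np

    def two_hand_transform(left, right, ref_left, ref_right):
        """left/right: current 3D hand positions; ref_*: positions when the gesture began."""
        center, ref_center = (left + right) / 2.0, (ref_left + ref_right) / 2.0
        translation = center - ref_center
        span, ref_span = right - left, ref_right - ref_left
        scale = np.linalg.norm(span) / (np.linalg.norm(ref_span) + 1e-9)
        # Yaw: rotation of the hand-to-hand vector about the vertical (z) axis.
        yaw = np.arctan2(span[1], span[0]) - np.arctan2(ref_span[1], ref_span[0])
        return translation, scale, yaw
    ```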

  16. How useful is 3D printing in maxillofacial surgery?

    PubMed

    Louvrier, A; Marty, P; Barrabé, A; Euvrard, E; Chatelain, B; Weber, E; Meyer, C

    2017-09-01

    3D printing seems to have more and more applications in maxillofacial surgery (MFS), particularly since the release on the market of general-use 3D printers several years ago. The aim of our study was to answer 4 questions: 1. Who uses 3D printing in MFS and is it routine or not? 2. What are the main clinical indications for 3D printing in MFS and what kinds of objects are used? 3. Are these objects printed by an official medical device (MD) manufacturer or made directly within the department or the lab? 4. What are the advantages and drawbacks? Two bibliographic searches were conducted on January 1st, 2017 in PubMed, without time limitation, using "maxillofacial surgery" AND "3D printing" as keywords for the first, and "maxillofacial surgery" AND "computer-aided design" AND "computer-aided manufacturing" for the second. Articles in English or French dealing with human clinical use of 3D printing were selected. Publication date, nationality of the authors, number of patients treated, clinical indication(s), type of printed object(s), type of printing (lab/hospital-made or professional/industry) and advantages/drawbacks were recorded. Two hundred and ninety-seven articles from 35 countries met the criteria. The most represented country was the People's Republic of China (16% of the articles). A total of 2889 patients (10 per article on average) benefited from 3D printed objects. The most frequent clinical indications were dental implant surgery and mandibular reconstruction. The most frequently printed objects were surgical guides and anatomic models. Forty-five percent of the prints were professional. The main advantages were improvement in precision and reduction of surgical time. The main disadvantages were the cost of the objects and the manufacturing period when printed by the industry. The arrival on the market of low-cost printers has increased the use of 3D printing in MFS. Anatomic models are not considered to be MDs and do not have

  17. HipMatch: an object-oriented cross-platform program for accurate determination of cup orientation using 2D-3D registration of single standard X-ray radiograph and a CT volume.

    PubMed

    Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz

    2009-09-01

    The widely used procedure of evaluating cup orientation following total hip arthroplasty using a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from the CT data. Although encouraging results have been reported, their extensive usage in clinical routine is still limited. This may be explained by their requirement of a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer due to proprietary issues, and by their requirement of either multiple radiographs or a radiograph-specific calibration, both of which are not available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch", in which a hybrid 2D-3D registration scheme combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration was implemented to estimate a rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the robustness and the accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform software Qt (TrollTech, Oslo, Norway), VTK, and Coin3D and is portable to any platform.
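
    The landmark-to-ray component of the hybrid scheme can be sketched as follows: each 2D landmark on the calibrated radiograph defines a ray from the X-ray focal point, and a rigid transform of the CT landmarks is sought that minimizes the point-to-ray distances. The parameterization, optimizer and data layout below are illustrative assumptions, not the HipMatch implementation.

    ```python
    # Hedged sketch of iterative landmark-to-ray rigid registration: find (R, t) that
    # brings 3D CT landmarks as close as possible to the rays cast from the X-ray
    # focal point through their 2D counterparts. Axis-angle parameterization and the
    # Powell optimizer are illustrative choices, not the HipMatch implementation.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.transform import Rotation

    def point_to_ray_distances(points, source, directions):
        """Distance of each point to the ray source + s * direction, s >= 0 (unit directions)."""
        rel = points - source
        s = np.sum(rel * directions, axis=1)
        closest = source + np.clip(s, 0.0, None)[:, None] * directions
        return np.linalg.norm(points - closest, axis=1)

    def register(ct_landmarks, ray_dirs, source):
        """ct_landmarks: (N, 3); ray_dirs: (N, 3) unit vectors through the 2D landmarks."""
        def cost(params):
            rot = Rotation.from_rotvec(params[:3])
            moved = rot.apply(ct_landmarks) + params[3:]
            return np.sum(point_to_ray_distances(moved, source, ray_dirs) ** 2)
        result = minimize(cost, x0=np.zeros(6), method="Powell")
        return result.x        # 3 rotation-vector components + 3 translation components
    ```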

  18. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. The traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices that could then be added include PDAs, smartphones, TabletPCs, portable gaming consoles, and PocketPCs.

  19. Photogrammetry for rapid prototyping: development of noncontact 3D reconstruction technologies

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.

    2002-04-01

    An important stage of rapid prototyping technology is generating a computer 3D model of the object to be reproduced. A wide variety of techniques for 3D model generation exists, ranging from manual 3D model generation to fully automated reverse engineering systems. The progress in CCD sensors and computers provides the background for integrating photogrammetry, as an accurate 3D data source, with CAD/CAM. The paper presents the results of developing photogrammetric methods for non-contact spatial coordinate measurements and generation of computer 3D models of real objects. The technology is based on processing convergent images of the object for calculating its 3D coordinates and reconstructing its surface. The hardware used for spatial coordinate measurements is based on a PC as the central processing unit and a video camera as the image acquisition device. The original software for Windows 9X realizes the complete technology of 3D reconstruction for rapid input of geometry data into CAD/CAM systems. Technical characteristics of the developed systems are given, along with the results of applying them to various 3D reconstruction tasks. The paper describes the techniques used for non-contact measurements and the methods providing metric characteristics of the reconstructed 3D model. The results of applying the system to the 3D reconstruction of complex industrial objects are also presented.
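
    A minimal example of the spatial-coordinate computation at the core of such convergent-image processing is linear (DLT) triangulation of one matched point from two calibrated views. The projection matrices are assumed to be known from calibration; this textbook step stands in for, rather than reproduces, the developed system.

    ```python
    # Minimal DLT triangulation of one 3D point from two convergent calibrated views.
    # The 3x4 projection matrices are assumed known from camera calibration.
    import numpy as np

    def triangulate(p1, p2, uv1, uv2):
        """p1, p2: 3x4 projection matrices; uv1, uv2: matched pixel coordinates (u, v)."""
        a = np.array([
            uv1[0] * p1[2] - p1[0],
            uv1[1] * p1[2] - p1[1],
            uv2[0] * p2[2] - p2[0],
            uv2[1] * p2[2] - p2[1],
        ])
        _, _, vt = np.linalg.svd(a)
        x = vt[-1]                      # homogeneous solution of the linear system
        return x[:3] / x[3]             # Euclidean 3D coordinates
    ```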

  20. 3-D Object Pose Determination Using Complex EGI

    DTIC Science & Technology

    1990-10-01

    the length of edges of the polyhedron from the EGI. Dane and Bajcsy [4] make use of the Gaussian Image to spatially segment a group of range points...involving real range data of two smooth objects were conducted. The two smooth objects are the torus and ellipsoid, whose databases have been created...in the simulations earlier. 5.0.1 Implementational Issues The torus and ellipsoid were crafted out of clay to resemble the models whose databases were

  1. Human factors guidelines for applications of 3D perspectives: a literature review

    NASA Astrophysics Data System (ADS)

    Dixon, Sharon; Fitzhugh, Elisabeth; Aleva, Denise

    2009-05-01

    Once considered too processing-intense for general utility, application of the third dimension to convey complex information is facilitated by the recent proliferation of technological advancements in computer processing, 3D displays, and 3D perspective (2.5D) renderings within a 2D medium. The profusion of complex and rapidly-changing dynamic information being conveyed in operational environments has elevated interest in possible military applications of 3D technologies. 3D can be a powerful mechanism for clearer information portrayal, facilitating rapid and accurate identification of key elements essential to mission performance and operator safety. However, implementation of 3D within legacy systems can be costly, making integration prohibitive. Therefore, identifying which tasks may benefit from 3D or 2.5D versus simple 2D visualizations is critical. Unfortunately, there is no "bible" of human factors guidelines for usability optimization of 2D, 2.5D, or 3D visualizations nor for determining which display best serves a particular application. Establishing such guidelines would provide an invaluable tool for designers and operators. Defining issues common to each will enhance design effectiveness. This paper presents the results of an extensive review of open source literature addressing 3D information displays, with particular emphasis on comparison of true 3D with 2D and 2.5D representations and their utility for military tasks. Seventy-five papers are summarized, highlighting militarily relevant applications of 3D visualizations and 2.5D perspective renderings. Based on these findings, human factors guidelines for when and how to use these visualizations, along with recommendations for further research are discussed.

  2. 3D Printer Coupon removal and stowage

    NASA Image and Video Library

    2014-12-09

    iss042e031282 (12/09/2014) ---US Astronaut Barry (Butch) Wilmore, holding a 3D coupon, works with the new 3D printer aboard the International Space Station. The 3D Printing experiment in zero gravity demonstrates that a 3D printer works normally in space. In general, a 3D printer extrudes streams of heated plastic, metal or other material, building layer on top of layer to create 3-dimensional objects. Testing a 3D printer using relatively low-temperature plastic feedstock on the International Space Station is the first step towards establishing an on-demand machine shop in space, a critical enabling component for deep-space crewed missions and in-space manufacturing.

  3. The effects of surface gloss and roughness on color constancy for real 3-D objects.

    PubMed

    Granzier, Jeroen J M; Vergne, Romain; Gegenfurtner, Karl R

    2014-02-21

    Color constancy denotes the phenomenon that the appearance of an object remains fairly stable under changes in illumination and background color. Most of what we know about color constancy comes from experiments using flat, matte surfaces placed on a single plane under diffuse illumination simulated on a computer monitor. Here we investigate whether material properties (glossiness and roughness) have an effect on color constancy for real objects. Subjects matched the color and brightness of cylinders (painted red, green, or blue) illuminated by simulated daylight (D65) or by a reddish light with a Munsell color book illuminated by a tungsten lamp. The cylinders were either glossy or matte and either smooth or rough. The object was placed in front of a black background or a colored checkerboard. We found that color constancy was significantly higher for the glossy objects compared to the matte objects, and higher for the smooth objects compared to the rough objects. This was independent of the background. We conclude that material properties like glossiness and roughness can have significant effects on color constancy.

  4. Electrically tunable lens speeds up 3D orbital tracking

    PubMed Central

    Annibale, Paolo; Dvornikov, Alexander; Gratton, Enrico

    2015-01-01

    3D orbital particle tracking is a versatile and effective microscopy technique that allows following fast moving fluorescent objects within living cells and reconstructing complex 3D shapes using laser scanning microscopes. We demonstrated notable improvements in the range, speed and accuracy of 3D orbital particle tracking by replacing commonly used piezoelectric stages with Electrically Tunable Lens (ETL) that eliminates mechanical movement of objective lenses. This allowed tracking and reconstructing shape of structures extending 500 microns in the axial direction. Using the ETL, we tracked at high speed fluorescently labeled genomic loci within the nucleus of living cells with unprecedented temporal resolution of 8ms using a 1.42NA oil-immersion objective. The presented technology is cost effective and allows easy upgrade of scanning microscopes for fast 3D orbital tracking. PMID:26114037
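
    For context, the x-y localization step of orbital tracking can be sketched as follows: the fluorescence sampled along one circular orbit is modulated when the particle sits off-centre, the phase of the first Fourier harmonic gives the offset direction, and for small offsets the relative modulation depth scales with the offset magnitude. The calibration constant below is an assumption that depends on the PSF waist and the orbit radius; this is not the authors' implementation.

    ```python
    # Hedged sketch of the x-y estimate in orbital tracking from one orbit of samples.
    # The linear calibration constant k is an assumption (depends on PSF and orbit radius).
    import numpy as np

    def estimate_offset(intensities, orbit_phase_offset=0.0, k=1.0):
        """intensities: fluorescence samples at equally spaced angles along one orbit."""
        n = len(intensities)
        spectrum = np.fft.rfft(intensities)
        dc = spectrum[0].real / n                      # mean intensity
        first = spectrum[1] * 2.0 / n                  # first-harmonic complex amplitude
        modulation = np.abs(first) / dc                # relative modulation depth
        angle = -np.angle(first) + orbit_phase_offset  # direction of the particle offset
        radius = k * modulation                        # small-offset linear calibration
        return radius * np.cos(angle), radius * np.sin(angle)
    ```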

  5. Investigation of the C-3-epi-25(OH)D3 of 25-hydroxyvitamin D3 in urban schoolchildren.

    PubMed

    Berger, Samantha E; Van Rompay, Maria I; Gordon, Catherine M; Goodman, Elizabeth; Eliasziw, Misha; Holick, Michael F; Sacheck, Jennifer M

    2018-03-01

    The physiological relevance of the C-3 epimer of 25-hydroxyvitamin D (3-epi-25(OH)D) is not well understood among youth. The objective of this study was to assess whether demographic/physiologic characteristics were associated with 3-epi-25(OH)D3 concentrations in youth. Associations between 3-epi-25(OH)D3 and demographics, and between 3-epi-25(OH)D3, total 25-hydroxyvitamin D (25(OH)D) (25(OH)D2 + 25(OH)D3), total cholesterol, high-density lipoprotein, low-density lipoprotein, and triglycerides, were examined in racially/ethnically diverse schoolchildren (n = 682; age, 8-15 years) at Boston-area urban schools. Approximately 50% of participants had detectable 3-epi-25(OH)D3 (range 0.95-3.95 ng/mL). The percentage of 3-epi-25(OH)D3 of total 25(OH)D ranged from 2.5% to 17.0% (median 5.5%). Males were 38% more likely than females to have detectable 3-epi-25(OH)D3 concentrations. Both Asian and black race/ethnicity were associated with lower odds of having detectable 3-epi-25(OH)D3 compared with non-Hispanic white children (Asian vs. white, odds ratio (OR) 0.28, 95% confidence interval (CI) 0.14-0.53; black vs. white, OR 0.38, 95%CI 0.23-0.63, p < 0.001). Having an adequate (20-29 ng/mL) or optimal (>30 ng/mL) 25(OH)D concentration was associated with higher odds of having detectable 3-epi-25(OH)D3 than having an inadequate (<20 ng/mL) concentration (OR 4.78, 95%CI 3.23-6.94 or OR 14.10, 95%CI 7.10-28.0, respectively). There was no association between 3-epi-25(OH)D3 and blood lipids. However, when considering 3-epi-25(OH)D3 as a percentage of total 25(OH)D, total cholesterol was lower in children with percent 3-epi-25(OH)D3 above the median (mean difference -7.1 mg/dL, p = 0.01). In conclusion, among schoolchildren, sex, race/ethnicity, and total serum 25(OH)D concentration are differentially associated with 3-epi-25(OH)D. The physiological relevance of 3-epi-25(OH)D3 may be related to 3-epi-25(OH)D3 as a percentage of total 25(OH)D and should be considered in

  6. Facilitators for practice change in Spanish community pharmacy.

    PubMed

    Gastelurrutia, Miguel A; Benrimoj, S I Charlie; Castrillon, Carla C; de Amezua, María J Casado; Fernandez-Llimos, Fernando; Faus, Maria J

    2009-02-01

    To identify and prioritise facilitators for practice change in Spanish community pharmacy. Spanish community pharmacies. Qualitative study. Thirty-three semi-structured interviews were conducted with community pharmacists (n = 15) and pharmacy strategists (n = 18), and the results were examined using the content analysis method. In addition, two nominal groups (seven community pharmacists and seven strategists) were formed to identify and prioritise facilitators. Results of both techniques were then triangulated. Facilitators for practice change. Twelve facilitators were identified and grouped into four domains (D1: Pharmacist; D2: Pharmacy as an organisation; D3: Pharmaceutical profession; D4: Miscellaneous). Facilitators identified in D1 include: the need for more clinical education at both pre- and post-graduate levels; the need for clearer and unequivocal messages from professional leaders about the future of the professional practice; and the need for a change in pharmacists' attitudes. Facilitators in D2 are: the need to change the reimbursement system to accommodate cognitive service delivery as well as dispensing; and the need to change the front office of pharmacies. Facilitators identified in D3 are: the need for the Spanish National Professional Association to take a leadership role in the implementation of cognitive services; the need to reduce administrative workload; and the need for universities to reduce the gap between education and research. Other facilitators identified in this study include: the need to increase patients' demand for cognitive services at pharmacies; the need to improve pharmacist-physician relationships; the need for support from health care authorities; and the need for improved marketing of cognitive services and their benefits to society, including physicians and health care authorities. Twelve facilitators were identified. Strategists considered clinical education and pharmacists' attitude as the most important, and

  7. Optimization Techniques for 3D Graphics Deployment on Mobile Devices

    NASA Astrophysics Data System (ADS)

    Koskela, Timo; Vatjus-Anttila, Jarkko

    2015-03-01

    3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.

  8. Effects of Objective 3-Dimensional Measures of Facial Shape and Symmetry on Perceptions of Facial Attractiveness.

    PubMed

    Hatch, Cory D; Wehby, George L; Nidey, Nichole L; Moreno Uribe, Lina M

    2017-09-01

    Meeting patient desires for enhanced facial esthetics requires that providers have standardized and objective methods to measure esthetics. The authors evaluated the effects of objective 3-dimensional (3D) facial shape and asymmetry measurements derived from 3D facial images on perceptions of facial attractiveness. The 3D facial images of 313 adults in Iowa were digitized with 32 landmarks, and objective 3D facial measurements capturing symmetric and asymmetric components of shape variation, centroid size, and fluctuating asymmetry were obtained from the 3D coordinate data using geo-morphometric analyses. Frontal and profile images of study participants were rated for facial attractiveness by 10 volunteers (5 women and 5 men) on a 5-point Likert scale and a visual analog scale. Multivariate regression was used to identify the effects of the objective 3D facial measurements on attractiveness ratings. Several objective 3D facial measurements had marked effects on attractiveness ratings. Shorter facial heights with protrusive chins, midface retrusion, faces with protrusive noses and thin lips, flat mandibular planes with deep labiomental folds, any cants of the lip commissures and floor of the nose, larger faces overall, and increased fluctuating asymmetry were rated as significantly (P < .001) less attractive. Perceptions of facial attractiveness can be explained by specific 3D measurements of facial shapes and fluctuating asymmetry, which have important implications for clinical practice and research. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  9. New generation of 3D desktop computer interfaces

    NASA Astrophysics Data System (ADS)

    Skerjanc, Robert; Pastoor, Siegmund

    1997-05-01

    Today's computer interfaces use 2-D displays showing windows, icons and menus and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology give the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can graphically connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual and object oriented programming for tasks ranging from, e.g., low-level programming up to high-level application configuration. In order to enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since typical 3-D equipment used, e.g., in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest using off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and connected video-based interaction techniques which allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).

  10. Object Manipulation Facilitates Kind-Based Object Individuation of Shape-Similar Objects

    ERIC Educational Resources Information Center

    Kingo, Osman S.; Krojgaard, Peter

    2011-01-01

    Five experiments investigated the importance of shape and object manipulation when 12-month-olds were given the task of individuating objects representing exemplars of kinds in an event-mapping design. In Experiments 1 and 2, results of the study from Xu, Carey, and Quint (2004, Experiment 4) were partially replicated, showing that infants were…

  11. Smooth 2D manifold extraction from 3D image stack

    PubMed Central

    Shihavuddin, Asm; Basu, Sreetama; Rexhepaj, Elton; Delestro, Felipe; Menezes, Nikita; Sigoillot, Séverine M; Del Nery, Elaine; Selimi, Fekrije; Spassky, Nathalie; Genovesio, Auguste

    2017-01-01

    Three-dimensional fluorescence microscopy followed by image processing is routinely used to study biological objects at various scales such as cells and tissue. However, maximum intensity projection, the most broadly used rendering tool, extracts a discontinuous layer of voxels, thereby creating important artifacts and possibly misleading interpretation. Here we propose smooth manifold extraction, an algorithm that produces a continuous focused 2D extraction from a 3D volume, hence preserving local spatial relationships. We demonstrate the usefulness of our approach by applying it to various biological applications using confocal and wide-field microscopy 3D image stacks. We provide a parameter-free ImageJ/Fiji plugin that allows 2D visualization and interpretation of 3D image stacks with maximum accuracy. PMID:28561033
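
    The contrast between maximum intensity projection and a continuous extraction can be illustrated with a naive sketch: MIP picks the brightest voxel of each pixel independently, whereas a manifold-like extraction first smooths the depth map of the brightest voxels so that neighbouring pixels are read from nearby z-planes. This is only an analogy, not the smooth manifold extraction algorithm of the paper.

    ```python
    # Toy contrast between maximum intensity projection (MIP) and a smoothed-depth
    # extraction. Only an analogy for a continuous 2D manifold; NOT the SME algorithm.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def mip(stack):
        """stack: (Z, Y, X) image stack -> per-pixel maximum (possibly discontinuous)."""
        return stack.max(axis=0)

    def smoothed_extraction(stack, window=7):
        """Read, for each pixel, the voxel at a locally smoothed best-focus depth."""
        depth = stack.argmax(axis=0).astype(float)          # depth of the brightest voxel
        smooth_depth = uniform_filter(depth, size=window)   # enforce local continuity
        z = np.clip(np.round(smooth_depth).astype(int), 0, stack.shape[0] - 1)
        y, x = np.indices(z.shape)
        return stack[z, y, x]
    ```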

  12. Examination about Influence for Precision of 3d Image Measurement from the Ground Control Point Measurement and Surface Matching

    NASA Astrophysics Data System (ADS)

    Anai, T.; Kochi, N.; Yamada, M.; Sasaki, T.; Otani, H.; Sasaki, D.; Nishimura, S.; Kimoto, K.; Yasui, N.

    2015-05-01

    As 3D image measurement software has come into wide use with the recent development of computer-vision technology, 3D measurement from images has expanded its field of application from desktop objects to topographic surveys over large geographical areas. In particular, the orientation, which used to be a complicated process in image measurement, can now be performed automatically simply by taking many pictures around the object. In the case of a fully textured object, the 3D measurement of surface features is now done fully automatically from the oriented images, which has greatly facilitated the acquisition of dense 3D point clouds from images with high precision. Against this background, for small and middle-sized objects we now provide all-around 3D measurement with a single commercially available digital camera, and we have also developed technology for topographic measurement with airborne images taken by a small UAV [1~5]. In the present study, for small objects, we examine the accuracy of surface measurement (matching) using experimental data. For topographic measurement, we examine the influence of GCP distribution on accuracy using experimental data. In addition, we examined the differences in the analytical results between 3D image measurement software packages. This document reviews the processing flow of orientation and 3D measurement in each software package and explains the features of each. For the verification of the precision of stereo matching, we measured a test plane and a test sphere of known form and assessed the results. For the topography measurement, we used airborne image data photographed at the test field at Yadorigi in Matsuda City, Kanagawa Prefecture, Japan. We constructed Ground Control Points measured by RTK-GPS and Total Station, and we show the results of the analysis made

  13. Endodontic applications of 3D printing.

    PubMed

    Anderson, J; Wealleans, J; Ray, J

    2018-02-27

    Computer-aided design (CAD) and computer-aided manufacturing (CAM) technologies can leverage cone beam computed tomography data for production of objects used in surgical and nonsurgical endodontics and in educational settings. The aim of this article was to review all current applications of 3D printing in endodontics and to speculate upon future directions for research and clinical use within the specialty. A literature search of PubMed, Ovid and Scopus was conducted using the following terms: stereolithography, 3D printing, computer aided rapid prototyping, surgical guide, guided endodontic surgery, guided endodontic access, additive manufacturing, rapid prototyping, autotransplantation rapid prototyping, CAD, CAM. Inclusion criteria were articles in the English language documenting endodontic applications of 3D printing. Fifty-one articles met inclusion criteria and were utilized. The endodontic literature on 3D printing is generally limited to case reports and pre-clinical studies. Documented solutions to endodontic challenges include: guided access with pulp canal obliteration, applications in autotransplantation, pre-surgical planning and educational modelling and accurate location of osteotomy perforation sites. Acquisition of technical expertise and equipment within endodontic practices presents formidable obstacles to widespread deployment within the endodontic specialty. As knowledge advances, endodontic postgraduate programmes should consider implementing 3D printing into their curricula. Future research directions should include clinical outcomes assessments of treatments employing 3D printed objects. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.

  14. 3D shape representation with spatial probabilistic distribution of intrinsic shape keypoints

    NASA Astrophysics Data System (ADS)

    Ghorpade, Vijaya K.; Checchin, Paul; Malaterre, Laurent; Trassoudaine, Laurent

    2017-12-01

    The accelerated advancement in modeling, digitizing, and visualizing techniques for 3D shapes has led to an increasing amount of 3D model creation and usage, thanks to 3D sensors that are readily available and easy to use. As a result, determining the similarity between 3D shapes has become consequential and is a fundamental task in shape-based recognition, retrieval, clustering, and classification. Several decades of research in Content-Based Information Retrieval (CBIR) have resulted in diverse techniques for 2D and 3D shape or object classification/retrieval and many benchmark data sets. In this article, a novel technique for 3D shape representation and object classification is proposed based on analysis of the spatial, geometric distribution of 3D keypoints. These distributions capture the intrinsic geometric structure of 3D objects. The result of the approach is a probability distribution function (PDF) produced from the spatial disposition of 3D keypoints, which are stable on the object surface and invariant to pose changes. Each class or instance of an object can be uniquely represented by a PDF. The representation rests on a simple idea, yet it is robust, easy to implement and fast to compute. Both Euclidean and geodesic distances on the object's surface are considered to build the PDFs; topology-based geodesic distances between keypoints exploit the non-planar surface properties of the object. The performance of the novel shape signature is tested with object classification accuracy. The classification efficacy of the new shape analysis method is evaluated on a new dataset acquired with a Time-of-Flight camera, and a comparative evaluation against state-of-the-art methods is performed on a standard benchmark dataset. Experimental results demonstrate superior classification performance of the new approach on RGB-D and depth data.
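
    A small sketch, assuming keypoints are already given as an (N, 3) array, of how a normalized distribution of pairwise Euclidean distances could serve as such a shape signature (the geodesic variant and the keypoint detector are not shown, and the example point sets are made up):

        import numpy as np
        from scipy.spatial.distance import pdist

        def distance_pdf(keypoints, bins=32):
            """Histogram of pairwise Euclidean distances between 3D keypoints,
            normalized to a discrete PDF usable as a simple shape signature."""
            d = pdist(np.asarray(keypoints, dtype=float))   # all pairwise distances
            hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max()))
            return hist / hist.sum()

        # Two hypothetical keypoint sets; similar shapes yield similar signatures.
        a = np.random.default_rng(0).random((100, 3))
        b = a + 0.01 * np.random.default_rng(1).normal(size=(100, 3))
        print(np.abs(distance_pdf(a) - distance_pdf(b)).sum())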

  15. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become increasingly popular and is used in many fields, from manufacturing and industrial design to architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing that produces a solid object from a 3D model created with 3D modelling software. The final product is obtained with an additive process in which successive layers of material are laid down one over the other. A 3D printer makes it simple to realize very complex shapes that would be difficult to produce with dedicated conventional facilities. Because the object is built by superposing one layer on another, no particular workflow is needed: it is sufficient to draw the model and send it to the printer. Many different kinds of 3D printers exist, depending on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope that is the hardware core of the ESA small space mission CHEOPS (CHaracterising ExOPlanets Satellite), which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture, and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  16. Gamma/x-ray linear pushbroom stereo for 3D cargo inspection

    NASA Astrophysics Data System (ADS)

    Zhu, Zhigang; Hu, Yu-Chi

    2006-05-01

    For evaluating the contents of trucks, containers, cargo, and passenger vehicles with a non-intrusive gamma-ray or X-ray imaging system to determine the possible presence of contraband, three-dimensional (3D) measurements could provide more information than 2D measurements. In this paper, a linear pushbroom scanning model is built for such a commonly used gamma-ray or x-ray cargo inspection system. Accurate 3D measurements of the objects inside a cargo can be obtained by using two such scanning systems with different scanning angles to construct a pushbroom stereo system. A simple but robust calibration method is proposed to find the important parameters of the linear pushbroom sensors. Then, a fast and automated stereo matching algorithm based on free-form deformable registration is developed to obtain 3D measurements of the objects under inspection. A user interface is designed for 3D visualization of the objects of interest. Experimental results of sensor calibration, stereo matching, 3D measurement and visualization of a 3D cargo container and the objects inside are presented.

  17. True 3D digital holographic tomography for virtual reality applications

    NASA Astrophysics Data System (ADS)

    Downham, A.; Abeywickrema, U.; Banerjee, P. P.

    2017-09-01

    Previously, a single CCD camera has been used to record holograms of an object while the object is rotated about a single axis to reconstruct a pseudo-3D image, which does not show detailed depth information from all perspectives. To generate a true 3D image, the object has to be rotated through multiple angles and along multiple axes. In this work, to reconstruct a true 3D image including depth information, a die is rotated along two orthogonal axes, and holograms are recorded using a Mach-Zehnder setup, which are subsequently numerically reconstructed. This allows for the generation of multiple images containing phase (i.e., depth) information. These images, when combined, create a true 3D image with depth information which can be exported to a Microsoft® HoloLens for true 3D virtual reality.
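
    As a rough illustration of the numerical reconstruction step, the following is a generic angular-spectrum propagation of a recorded complex hologram field; the wavelength, pixel pitch and refocus distance are placeholder values, and the function is a textbook sketch rather than the authors' code.

        import numpy as np

        def angular_spectrum_propagate(field, wavelength, dx, z):
            """Propagate a complex hologram field by distance z with the
            angular-spectrum method (all quantities in consistent units)."""
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies, cycles/unit
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            k = 2 * np.pi / wavelength
            kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
            # Zero out evanescent components, keep propagating ones.
            H = np.where(kz_sq > 0, np.exp(1j * np.sqrt(np.maximum(kz_sq, 0.0)) * z), 0)
            return np.fft.ifft2(np.fft.fft2(field) * H)

        # Illustrative call: 633 nm light, 3.45 micron pixels, refocus by 5 cm.
        holo = np.ones((512, 512), dtype=complex)   # placeholder recorded field
        refocused = angular_spectrum_propagate(holo, 633e-9, 3.45e-6, 0.05)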

  18. Measurable realistic image-based 3D mapping

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data are obtained for the construction of 3D visualized models. A 3D map not only provides 3D measurement and knowledge-mining capabilities, but also offers a virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming and requires robust hardware and powerful software to handle the enormous amount of data; this is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is their limited coverage of detail, since a user can only view and measure objects that have already been modelled in the virtual environment. This paper proposes and demonstrates a realistic, image-based 3D map concept that enables geometric measurements and geo-location services. Image-based 3D maps provide more detailed information about the real world than 3D model-based maps. They use geo-referenced stereo images or panoramic images, and the geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive for users and creates an immersive experience. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in terms of photos; topographic and terrain attributes such as shapes and heights are omitted. This paper also discusses the potential of a low cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measureable
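
    A minimal sketch of the stereo geometry such image-based measurement relies on: for a rectified stereo pair, depth follows from disparity as Z = f·B/d. The focal length, baseline and disparities below are made-up values, not parameters from the paper.

        import numpy as np

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Depth of rectified stereo pixels: Z = f * B / d (same units as B)."""
            d = np.asarray(disparity_px, dtype=float)
            z = focal_px * baseline_m / np.maximum(d, 1e-9)
            return np.where(d > 0, z, np.inf)

        # Illustrative numbers only: 1400 px focal length, 0.5 m baseline.
        print(depth_from_disparity([70.0, 35.0, 7.0], focal_px=1400.0, baseline_m=0.5))
        # -> [10. 20. 100.] metres; smaller disparity means farther away.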

  19. SWI/SNF interacts with cleavage and polyadenylation factors and facilitates pre-mRNA 3' end processing.

    PubMed

    Yu, Simei; Jordán-Pla, Antonio; Gañez-Zapater, Antoni; Jain, Shruti; Rolicka, Anna; Östlund Farrants, Ann-Kristin; Visa, Neus

    2018-05-31

    SWI/SNF complexes associate with genes and regulate transcription by altering the chromatin at the promoter. It has recently been shown that these complexes play a role in pre-mRNA processing by associating at alternative splice sites. Here, we show that SWI/SNF complexes are involved also in pre-mRNA 3' end maturation by facilitating 3' end cleavage of specific pre-mRNAs. Comparative proteomics show that SWI/SNF ATPases interact physically with subunits of the cleavage and polyadenylation complexes in fly and human cells. In Drosophila melanogaster, the SWI/SNF ATPase Brahma (dBRM) interacts with the CPSF6 subunit of cleavage factor I. We have investigated the function of dBRM in 3' end formation in S2 cells by RNA interference, single-gene analysis and RNA sequencing. Our data show that dBRM facilitates pre-mRNA cleavage in two different ways: by promoting the association of CPSF6 to the cleavage region and by stabilizing positioned nucleosomes downstream of the cleavage site. These findings show that SWI/SNF complexes play a role also in the cleavage of specific pre-mRNAs in animal cells.

  20. Sculplexity: Sculptures of Complexity using 3D printing

    NASA Astrophysics Data System (ADS)

    Reiss, D. S.; Price, J. J.; Evans, T. S.

    2013-11-01

    We show how to convert models of complex systems such as 2D cellular automata into a 3D printed object. Our method takes into account the limitations inherent to 3D printing processes and materials. Our approach automates the greater part of this task, bypassing the use of CAD software and the need for manual design. As a proof of concept, a physical object representing a modified forest fire model was successfully printed. Automated conversion methods similar to the ones developed here can be used to create objects for research, for demonstration and teaching, for outreach, or simply for aesthetic pleasure. As our outputs can be touched, they may be particularly useful for those with visual disabilities.
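
    A toy sketch of the conversion idea, assuming a simple forest-fire automaton and treating time as the vertical print axis; the grid size, probabilities and the final meshing step are placeholders and this is not the authors' pipeline.

        import numpy as np

        # States: 0 empty, 1 tree, 2 burning. Cells containing a tree become
        # solid material in the corresponding printed layer.
        rng = np.random.default_rng(1)
        N, STEPS, P_GROW, P_IGNITE = 64, 40, 0.02, 0.0005
        grid = (rng.random((N, N)) < 0.4).astype(np.uint8)
        layers = []

        for _ in range(STEPS):
            burning = grid == 2
            # A tree catches fire if any 4-neighbour burns, or spontaneously ignites.
            neighbour_fire = (np.roll(burning, 1, 0) | np.roll(burning, -1, 0) |
                              np.roll(burning, 1, 1) | np.roll(burning, -1, 1))
            new = grid.copy()
            new[(grid == 1) & (neighbour_fire | (rng.random((N, N)) < P_IGNITE))] = 2
            new[burning] = 0                                   # burnt cells become empty
            new[(grid == 0) & (rng.random((N, N)) < P_GROW)] = 1
            grid = new
            layers.append(grid == 1)        # solid material where a tree stands

        voxels = np.stack(layers)           # (time, y, x): time becomes the print axis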

  1. 3D Printing of Organs-On-Chips.

    PubMed

    Yi, Hee-Gyeong; Lee, Hyungseok; Cho, Dong-Woo

    2017-01-25

    Organ-on-a-chip engineering aims to create artificial living organs that mimic the complex and physiological responses of real organs, in order to test drugs by precisely manipulating the cells and their microenvironments. To achieve this, the artificial organs should be microfabricated with an extracellular matrix (ECM) and various types of cells, and should recapitulate morphogenesis, cell differentiation, and function according to the native organ. A promising strategy is 3D printing, which precisely controls the spatial distribution and layer-by-layer assembly of cells, ECMs, and other biomaterials. Owing to this unique advantage, integration of 3D printing into organ-on-a-chip engineering can facilitate the creation of micro-organs with heterogeneity, a desired 3D cellular arrangement, tissue-specific functions, or even cyclic movement within a microfluidic device. Moreover, fully 3D-printed organs-on-chips more easily incorporate other mechanical and electrical components within the chips, and can be commercialized via automated mass production. Herein, we discuss the recent advances and the potential of 3D cell-printing technology in engineering organs-on-chips, and provide future perspectives for establishing highly reliable and useful drug-screening platforms.

  2. Reproducing 2D breast mammography images with 3D printed phantoms

    NASA Astrophysics Data System (ADS)

    Clark, Matthew; Ghammraoui, Bahaa; Badal, Andreu

    2016-03-01

    Mammography is currently the standard imaging modality used to screen women for breast abnormalities and, as a result, it is a tool of great importance for the early detection of breast cancer. Physical phantoms are commonly used as surrogates of breast tissue to evaluate some aspects of the performance of mammography systems. However, most phantoms do not reproduce the anatomic heterogeneity of real breasts. New fabrication technologies, such as 3D printing, have created the opportunity to build more complex, anatomically realistic breast phantoms that could potentially assist in the evaluation of mammography systems. The primary objective of this work is to present a simple, easily reproducible methodology to design and print 3D objects that replicate the attenuation profile observed in real 2D mammograms. The secondary objective is to evaluate the capabilities and limitations of the competing 3D printing technologies, and characterize the x-ray properties of the different materials they use. Printable phantoms can be created using the open-source code introduced in this work, which processes a raw mammography image to estimate the amount of x-ray attenuation at each pixel, and outputs a triangle mesh object that encodes the observed attenuation map. The conversion from the observed pixel gray value to a column of printed material with equivalent attenuation requires certain assumptions and knowledge of multiple imaging system parameters, such as x-ray energy spectrum, source-to-object distance, compressed breast thickness, and average breast material attenuation. A detailed description of the new software, a characterization of the printed materials using x-ray spectroscopy, and an evaluation of the realism of the sample printed phantoms are presented.
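
    A hedged sketch of the gray-value-to-column-thickness idea under a monoenergetic Beer-Lambert approximation; the attenuation coefficient, air-calibration value and clipping are illustrative assumptions, not parameters taken from the paper or its open-source code.

        import numpy as np

        MU_PRINT = 0.55     # assumed linear attenuation of the print material, 1/cm
        I0 = 4000.0         # assumed unattenuated detector signal (air calibration)

        def column_thickness_cm(pixel_values):
            """Convert detector pixel values to an equivalent print-material
            thickness using Beer-Lambert: I = I0 * exp(-mu * t) => t = ln(I0/I) / mu."""
            I = np.clip(np.asarray(pixel_values, dtype=float), 1.0, I0)
            return np.log(I0 / I) / MU_PRINT

        # Example: darker pixels (more attenuation) map to taller printed columns.
        print(column_thickness_cm([4000, 2000, 500]))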

  3. 3D-HST WFC3-selected Photometric Catalogs in the Five CANDELS/3D-HST Fields: Photometry, Photometric Redshifts, and Stellar Masses

    NASA Astrophysics Data System (ADS)

    Skelton, Rosalind E.; Whitaker, Katherine E.; Momcheva, Ivelina G.; Brammer, Gabriel B.; van Dokkum, Pieter G.; Labbé, Ivo; Franx, Marijn; van der Wel, Arjen; Bezanson, Rachel; Da Cunha, Elisabete; Fumagalli, Mattia; Förster Schreiber, Natascha; Kriek, Mariska; Leja, Joel; Lundgren, Britt F.; Magee, Daniel; Marchesini, Danilo; Maseda, Michael V.; Nelson, Erica J.; Oesch, Pascal; Pacifici, Camilla; Patel, Shannon G.; Price, Sedona; Rix, Hans-Walter; Tal, Tomer; Wake, David A.; Wuyts, Stijn

    2014-10-01

    The 3D-HST and CANDELS programs have provided WFC3 and ACS spectroscopy and photometry over ≈900 arcmin² in five fields: AEGIS, COSMOS, GOODS-North, GOODS-South, and the UKIDSS UDS field. All these fields have a wealth of publicly available imaging data sets in addition to the Hubble Space Telescope (HST) data, which makes it possible to construct the spectral energy distributions (SEDs) of objects over a wide wavelength range. In this paper we describe a photometric analysis of the CANDELS and 3D-HST HST imaging and the ancillary imaging data at wavelengths 0.3-8 μm. Objects were selected in the WFC3 near-IR bands, and their SEDs were determined by carefully taking the effects of the point-spread function in each observation into account. A total of 147 distinct imaging data sets were used in the analysis. The photometry is made available in the form of six catalogs: one for each field, as well as a master catalog containing all objects in the entire survey. We also provide derived data products: photometric redshifts, determined with the EAZY code, and stellar population parameters determined with the FAST code. We make all the imaging data that were used in the analysis available, including our reductions of the WFC3 imaging in all five fields. 3D-HST is a spectroscopic survey with the WFC3 and ACS grisms, and the photometric catalogs presented here constitute a necessary first step in the analysis of these grism data. All the data presented in this paper are available through the 3D-HST Web site (http://3dhst.research.yale.edu).

  4. Conditioning 3D object-based models to dense well data

    NASA Astrophysics Data System (ADS)

    Wang, Yimin C.; Pyrcz, Michael J.; Catuneanu, Octavian; Boisvert, Jeff B.

    2018-06-01

    Object-based stochastic simulation models are used to generate categorical variable models with a realistic representation of complicated reservoir heterogeneity. A limitation of object-based modeling is the difficulty of conditioning to dense data. One method to achieve data conditioning is to apply optimization techniques. Optimization algorithms can utilize an objective function measuring the conditioning level of each object while also considering the geological realism of the object. Here, an objective function is optimized with implicit filtering which considers constraints on object parameters. Thousands of objects conditioned to data are generated and stored in a database. A set of objects are selected with linear integer programming to generate the final realization and honor all well data, proportions and other desirable geological features. Although any parameterizable object can be considered, objects from fluvial reservoirs are used to illustrate the ability to simultaneously condition multiple types of geologic features. Channels, levees, crevasse splays and oxbow lakes are parameterized based on location, path, orientation and profile shapes. Functions mimicking natural river sinuosity are used for the centerline model. Channel stacking pattern constraints are also included to enhance the geological realism of object interactions. Spatial layout correlations between different types of objects are modeled. Three case studies demonstrate the flexibility of the proposed optimization-simulation method. These examples include multiple channels with high sinuosity, as well as fragmented channels affected by limited preservation. In all cases the proposed method reproduces input parameters for the object geometries and matches the dense well constraints. The proposed methodology expands the applicability of object-based simulation to complex and heterogeneous geological environments with dense sampling.
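
    A toy illustration of the final selection step only, assuming the PuLP library and a hypothetical candidate table with per-object mismatch scores; the real formulation in the paper also handles proportions and geological constraints.

        import pulp

        # Choose a subset of pre-generated candidate objects so that every well
        # interval flagged as "channel" is honored by at least one selected object,
        # while minimizing the total mismatch score stored with each candidate.
        candidates = {            # candidate id -> (mismatch score, wells it honors)
            "obj1": (3.0, {"w1", "w2"}),
            "obj2": (1.5, {"w2"}),
            "obj3": (2.0, {"w3"}),
            "obj4": (4.0, {"w1", "w3"}),
        }
        wells = {"w1", "w2", "w3"}

        prob = pulp.LpProblem("object_selection", pulp.LpMinimize)
        x = {cid: pulp.LpVariable(cid, cat="Binary") for cid in candidates}
        prob += pulp.lpSum(score * x[cid] for cid, (score, _) in candidates.items())
        for w in wells:   # each well must be honored by at least one selected object
            prob += pulp.lpSum(x[cid] for cid, (_, ws) in candidates.items() if w in ws) >= 1

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print([cid for cid in candidates if x[cid].value() == 1])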

  5. Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features

    NASA Astrophysics Data System (ADS)

    Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique

    2011-12-01

    We propose a new multi-target tracking approach that reliably tracks multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates an object class label (e.g. person, vehicle), a parallelepiped model and visual reliability measures of its attributes with each moving region. These reliability measures allow the contribution of noisy, erroneous or false data to be properly weighted in order to better maintain the integrity of the object dynamics model. A new multi-target tracking algorithm then uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach can manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using publicly accessible video surveillance benchmarks. It runs in real time and its results are competitive with other tracking algorithms, with minimal (or no) reconfiguration effort between different videos.

  6. Multiview 3D sensing and analysis for high quality point cloud reconstruction

    NASA Astrophysics Data System (ADS)

    Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard

    2018-04-01

    Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
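
    As an example of the filtering stage, here is a generic Radius Outlier Removal pass over a fused point cloud, written with scipy's KD-tree; the radius and neighbour threshold are illustrative, not the values used in the paper, and the other filters (weighted median, inter-frame averaging) are not shown.

        import numpy as np
        from scipy.spatial import cKDTree

        def radius_outlier_removal(points, radius=0.05, min_neighbors=8):
            """Drop points that have fewer than `min_neighbors` other points
            within `radius` (a generic ROR filter)."""
            tree = cKDTree(points)
            # Count neighbours within the radius; each point counts itself once.
            counts = tree.query_ball_point(points, r=radius, return_length=True)
            return points[counts - 1 >= min_neighbors]

        # Random points standing in for a fused multi-Kinect point cloud.
        cloud = np.random.default_rng(0).random((5000, 3))
        print(radius_outlier_removal(cloud).shape)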

  7. Review of 3d GIS Data Fusion Methods and Progress

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Hou, Miaole; Hu, Yungang

    2018-04-01

    3D data fusion is a research hotspot in the fields of computer vision and fine mapping, and plays an important role in fine measurement, risk monitoring, data display and other processes. At present, research on 3D data fusion in the surveying and mapping field focuses on 3D model fusion of terrain and ground objects. This paper summarizes the basic methods of 3D data fusion for terrain and ground objects in recent years, classifies the data structures and the methods used to establish 3D models, and analyses and comments on some of the most widely used fusion methods.

  8. Dense and dynamic 3D selection for game-based virtual environments.

    PubMed

    Cashion, Jeffrey; Wingrave, Chadwick; LaViola, Joseph J

    2012-04-01

    3D object selection is more demanding when (1) objects densely surround the target object, (2) the target object is significantly occluded, and (3) the target object is dynamically changing location. Most 3D selection techniques and guidelines were developed and tested on static or mostly sparse environments. In contrast, games tend to incorporate densely packed and dynamic objects as part of their typical interaction. With the increasing popularity of 3D selection in games using hand gestures or motion controllers, our current understanding of 3D selection needs revision. We present a study that compared four different selection techniques under five different scenarios based on varying object density and motion dynamics. We utilized two existing techniques, Raycasting and SQUAD, and developed two variations of them, Zoom and Expand, using iterative design. Our results indicate that while Raycasting and SQUAD both have weaknesses in terms of speed and accuracy in dense and dynamic environments, by making small modifications to them (i.e., flavoring), we can achieve significant performance increases.
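
    For context, a generic ray-sphere picking routine of the kind Raycasting selection relies on; the bounding spheres and the example scene are hypothetical, and the study's own implementation is not described in the abstract.

        import numpy as np

        def raycast_select(origin, direction, centers, radii):
            """Return the index of the nearest object hit by the ray, or None."""
            d = direction / np.linalg.norm(direction)
            oc = centers - origin                  # vectors from ray origin to centers
            t_closest = oc @ d                     # distance along ray to closest approach
            miss_sq = np.einsum('ij,ij->i', oc, oc) - t_closest**2
            hit = (miss_sq <= radii**2) & (t_closest > 0)
            if not hit.any():
                return None
            return int(np.argmin(np.where(hit, t_closest, np.inf)))

        centers = np.array([[0., 0., 5.], [0.2, 0., 3.], [2., 2., 4.]])
        radii = np.array([0.5, 0.15, 0.5])
        print(raycast_select(np.zeros(3), np.array([0., 0., 1.]), centers, radii))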

  9. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that captures spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
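
    To illustrate the kind of 3D wavelet decomposition involved, a generic transform via the PyWavelets package (assumed available); ICER-3D's actual filter bank, subband structure and entropy coder are not reproduced here.

        import numpy as np
        import pywt

        # Generic 3D wavelet decomposition of a hyperspectral cube (bands, rows, cols).
        cube = np.random.default_rng(0).random((32, 64, 64))
        coeffs = pywt.wavedecn(cube, wavelet="db2", level=2)

        # coeffs[0] is the coarsest approximation; the following dicts hold detail
        # subbands keyed by direction ('aad', 'ada', ..., 'ddd') for each level.
        approx = coeffs[0]
        print(approx.shape, sorted(coeffs[1].keys()))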

  10. Highway 3D model from image and lidar data

    NASA Astrophysics Data System (ADS)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3D model construction based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects for the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  11. Surface Finish Effects Using Coating Method on 3D Printing (FDM) Parts

    NASA Astrophysics Data System (ADS)

    Haidiezul, AHM; Aiman, AF; Bakar, B.

    2018-03-01

    Fused Deposition Modelling (FDM) is one of the more economical three-dimensional (3-D) printing processes. A 3-D printed object is built with a layer-by-layer approach, which causes "stair-stepping" effects. This leads to an uneven surface finish, which mostly affects the appearance of objects when product designers present their models or prototypes. The objective of this paper is to examine the surface finish effects of applying the XTC-3D coating developed by Smooth-On, USA, to 3D printed parts. The experimental work shows that applying the XTC-3D coating to 3-D printed parts improves the surface finish by reducing the gaps between the layers

  12. Identification and restoration in 3D fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dieterlen, Alain; Xu, Chengqi; Haeberle, Olivier; Hueber, Nicolas; Malfara, R.; Colicchio, B.; Jacquey, Serge

    2004-06-01

    3-D optical fluorescence microscopy has become an efficient tool for volumetric investigation of living biological samples. The 3-D data can be acquired by optical sectioning microscopy, which is performed by axial stepping of the object relative to the objective. For any instrument, each recorded image can be described by a convolution equation between the original object and the point spread function (PSF) of the acquisition system. To assess performance and ensure data reproducibility, as for any 3-D quantitative analysis, system identification is mandatory. The PSF characterizes the properties of the image acquisition system; it can be computed or acquired experimentally. Statistical tools and Zernike moments are shown to be appropriate and complementary for describing a 3-D system PSF and for quantifying the variation of the PSF as a function of the optical parameters. Some critical experimental parameters can be identified with these tools, which helps biologists define an acquisition protocol that optimizes the use of the system. Reduction of out-of-focus light is the main task of 3-D microscopy; it is carried out computationally by a deconvolution process. Pre-filtering the images improves the stability of the deconvolution results, making them less dependent on the regularization parameter; this helps biologists use the restoration process.
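
    A compact sketch of the deconvolution step described above, using a plain Richardson-Lucy iteration built on scipy's FFT convolution; the stack and PSF are assumed to be float 3D arrays, and this is a generic textbook method, not the authors' software.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy_3d(image, psf, iterations=20):
            """Richardson-Lucy deconvolution of a 3D stack given its PSF."""
            estimate = np.full_like(image, image.mean())
            psf_flipped = psf[::-1, ::-1, ::-1]
            for _ in range(iterations):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / np.maximum(blurred, 1e-12)
                estimate *= fftconvolve(ratio, psf_flipped, mode="same")
            return estimate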

  13. Hyper-Fractal Analysis: A visual tool for estimating the fractal dimension of 4D objects

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Grossu, I.; Felea, D.; Besliu, C.; Jipa, Al.; Esanu, T.; Bordeianu, C. C.; Stan, E.

    2013-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images and 3D objects (Grossu et al. (2010) [1]). The program was extended for working with four-dimensional objects stored in comma separated values files. This might be of interest in biomedicine, for analyzing the evolution in time of three-dimensional images.
    New version program summary
    Program title: Hyper-Fractal Analysis (Fractal Analysis v03)
    Catalogue identifier: AEEG_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 745761
    No. of bytes in distributed program, including test data, etc.: 12544491
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 100M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v2_0
    Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 831-832
    Does the new version supersede the previous version? Yes
    Nature of problem: Estimating the fractal dimension of 4D images.
    Solution method: Optimized implementation of the 4D box-counting algorithm.
    Reasons for new version: Inspired by existing applications of 3D fractals in biomedicine [3], we extended the optimized version of the box-counting algorithm [1, 2] to the four-dimensional case. This might be of interest in analyzing the evolution in time of 3D images. The box-counting algorithm was extended in order to support 4D objects, stored in comma separated values files. A new form was added for generating 2D, 3D, and 4D test data. The application was tested on 4D objects with known dimension, e.g. the Sierpinski hypertetrahedron gasket, Df=ln(5)/ln(2) (Fig. 1). The algorithm could be extended, with minimum effort, to
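
    A compact numpy sketch of the box-counting estimate itself, applicable to point sets of any dimensionality (including 4D points loaded from CSV); the box sizes and the sanity check are illustrative, and this is not the Visual Basic implementation described above.

        import numpy as np

        def box_counting_dimension(points, sizes=(2, 4, 8)):
            """Estimate the box-counting dimension of an n-dimensional point set."""
            points = np.asarray(points, dtype=float)
            mins = points.min(axis=0)
            span = np.maximum(points.max(axis=0) - mins, 1e-12)
            counts = []
            for n in sizes:                       # n boxes per axis
                idx = np.floor((points - mins) / span * (n - 1e-9)).astype(int)
                counts.append(len(np.unique(idx, axis=0)))
            # Slope of log(count) vs log(boxes per axis) gives the dimension estimate.
            slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
            return slope

        # Sanity check: a dense random sample of a 4D hypercube should give ~4.
        print(box_counting_dimension(np.random.default_rng(0).random((200000, 4))))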

  14. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) have progressed from traditional map-making to modern technology in which information can be created, edited, managed and analyzed. Like any other model, maps are simplified representations of the real world, so visualization plays an essential role in GIS applications. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available on the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS software and its extensions for 3D modeling and visualization and to use them to depict a real-world scenario. Furthermore, with the advent of the web as a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate ways of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is demonstrated.

  15. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  16. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  17. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  18. TLS for generating multi-LOD of 3D building model

    NASA Astrophysics Data System (ADS)

    Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.

    2014-02-01

    The popularity of Terrestrial Laser Scanners (TLS) for capturing three dimensional (3D) objects has led to their wide use in various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualizing objects of a city environment in 3D can be useful for many applications; however, different applications require different kinds of 3D models. Since buildings are important objects, CityGML defines a standard for 3D building models at four levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and the modelling process for the resulting point cloud are explored. TLS is used to capture all the building details in order to generate multiple LODs. In previous works this task usually involves the integration of several sensors; in this research, however, the point cloud from TLS is processed to generate the LOD3 model, and LOD2 and LOD1 are then generalized from the resulting LOD3 model. The result of this research is a guiding process for generating multi-LOD 3D building models starting from LOD3 using TLS. Lastly, the visualization of the multi-LOD model is also shown.

  19. A modern approach to storing of 3D geometry of objects in machine engineering industry

    NASA Astrophysics Data System (ADS)

    Sokolova, E. A.; Aslanov, G. A.; Sokolov, A. A.

    2017-02-01

    3D graphics is a branch of computer graphics that has absorbed much from vector and raster computer graphics. It is used in interior design projects, architectural projects, advertising, educational computer programs, movies, visual images of parts and products in engineering, and more. 3D computer graphics allows one to create 3D scenes along with simulation of lighting conditions and the setting of viewpoints.

  20. 3D Viewing: Odd Perception - Illusion? reality? or both?

    NASA Astrophysics Data System (ADS)

    Kisimoto, K.; Iizasa, K.

    2008-12-01

    We live in three-dimensional space, don't we? It could be at least four dimensions, but that is another story. Either way, our capability for 3D viewing is constrained by our 2D perception (our intrinsic tools of perception). I carried out a few visual experiments using topographic data to show our intrinsic (or biological) shortcomings in 3D recognition of our world. The results of the experiments suggest: (1) a 3D surface model displayed on a 2D computer screen (or on paper) always has two interpretations of the 3D surface geometry; if we choose one of the interpretations (in other words, if we are hooked by one of the two percepts), we maintain that perception even as the 3D model changes its viewing perspective over time on the screen. (2) More interesting is that a real 3D solid object (e.g., made of clay) also gives the two above-mentioned interpretations of its geometry if we observe the object with one eye. The most famous example of this viewing illusion comes from the magician Jerry Andrus (who died in 2007), who made a paper-crafted dragon that causes a visual illusion for a one-eyed viewer. Through these experiments, I confirmed this phenomenon in another perceptually persuasive (deceptive?) way. My conclusion is that this illusion is intrinsic, i.e. a reality for humans, because, even though we live in 3D space, our perceptual tool (the eyes) is composed of 2D sensors whose information is reconstructed into 3D by our experience-based brain. So, (3) when we observe a 3D surface model on a computer screen, we are always one eye short even if we use both eyes. One last suggestion from my experiments is that recent, highly sophisticated 3D models might include more information than human perception can handle properly; i.e., we might not be understanding the 3D world (geospace) at all, just experiencing an illusion.

  1. Case study of 3D fingerprints applications

    PubMed Central

    Liu, Feng; Liang, Jinrong; Shen, Linlin; Yang, Meng; Zhang, David; Lai, Zhihui

    2017-01-01

    Human fingers are 3D objects, so more information is available if three-dimensional (3D) fingerprints can be captured rather than two-dimensional (2D) fingerprints. This paper therefore first collects 3D finger point cloud data by a structured-light illumination method. Additional features from 3D fingerprint images are then studied and extracted, and the applications of these features are discussed. A series of experiments is conducted to demonstrate the helpfulness of 3D information for fingerprint recognition. Results show that a quick alignment can be easily implemented under the guidance of the 3D finger shape feature, even though this feature does not work for fingerprint recognition directly. The newly defined, distinctive 3D shape ridge feature can be used for personal authentication with an Equal Error Rate (EER) of ~8.3%, and it is also helpful for removing false core points. Furthermore, a promising EER of ~1.3% is achieved by combining this feature with 2D features for fingerprint recognition, which indicates the prospects of 3D fingerprint recognition. PMID:28399141

  2. Case study of 3D fingerprints applications.

    PubMed

    Liu, Feng; Liang, Jinrong; Shen, Linlin; Yang, Meng; Zhang, David; Lai, Zhihui

    2017-01-01

    Human fingers are 3D objects, so more information is available if three-dimensional (3D) fingerprints can be captured rather than two-dimensional (2D) fingerprints. This paper therefore first collects 3D finger point cloud data by a structured-light illumination method. Additional features from 3D fingerprint images are then studied and extracted, and the applications of these features are discussed. A series of experiments is conducted to demonstrate the helpfulness of 3D information for fingerprint recognition. Results show that a quick alignment can be easily implemented under the guidance of the 3D finger shape feature, even though this feature does not work for fingerprint recognition directly. The newly defined, distinctive 3D shape ridge feature can be used for personal authentication with an Equal Error Rate (EER) of ~8.3%, and it is also helpful for removing false core points. Furthermore, a promising EER of ~1.3% is achieved by combining this feature with 2D features for fingerprint recognition, which indicates the prospects of 3D fingerprint recognition.

  3. Spatially encoded phase-contrast MRI-3D MRI movies of 1D and 2D structures at millisecond resolution.

    PubMed

    Merboldt, Klaus-Dietmar; Uecker, Martin; Voit, Dirk; Frahm, Jens

    2011-10-01

    This work demonstrates that the principles underlying phase-contrast MRI may be used to encode spatial rather than flow information along a perpendicular dimension, if this dimension contains an MRI-visible object at only one spatial location. In particular, the situation applies to 3D mapping of curved 2D structures which requires only two projection images with different spatial phase-encoding gradients. These phase-contrast gradients define the field of view and mean spin-density positions of the object in the perpendicular dimension by respective phase differences. When combined with highly undersampled radial fast low angle shot (FLASH) and image reconstruction by regularized nonlinear inversion, spatial phase-contrast MRI allows for dynamic 3D mapping of 2D structures in real time. First examples include 3D MRI movies of the acting human hand at a temporal resolution of 50 ms. With an even simpler technique, 3D maps of curved 1D structures may be obtained from only three acquisitions of a frequency-encoded MRI signal with two perpendicular phase encodings. Here, 3D MRI movies of a rapidly rotating banana were obtained at 5 ms resolution or 200 frames per second. In conclusion, spatial phase-contrast 3D MRI of 2D or 1D structures is respectively two or four orders of magnitude faster than conventional 3D MRI. Copyright © 2011 Wiley-Liss, Inc.

  4. 3D Imaging with Holographic Tomography

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Kou, Shan Shan

    2010-04-01

    There are two main types of tomography that enable the 3D internal structure of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range, where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, where rays are considered to propagate straight through the object. Another type of tomography, called 'diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has recently been active experimental research on reconstructing complex refractive index data using this approach. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier optics and information-transformation point of view, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography by scanning the illumination in one direction only takes on a form that we might call a 'peanut', compared to the case of object rotation, where a diablo is formed; the peanut exhibits significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions, the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case. This time, we

  5. Current Applications and Future Perspectives of the Use of 3D Printing in Anatomical Training and Neurosurgery

    PubMed Central

    Baskaran, Vivek; Štrkalj, Goran; Štrkalj, Mirjana; Di Ieva, Antonio

    2016-01-01

    3D printing is a form of rapid prototyping technology, which has led to innovative new applications in biomedicine. It facilitates the production of highly accurate three dimensional objects from substrate materials. The inherent accuracy and other properties of 3D printing have allowed it to have exciting applications in anatomy education and surgery, with the specialty of neurosurgery having benefited particularly well. This article presents the findings of a literature review of the Pubmed and Web of Science databases investigating the applications of 3D printing in anatomy and surgical education, and neurosurgery. A number of applications within these fields were found, with many significantly improving the quality of anatomy and surgical education, and the practice of neurosurgery. They also offered advantages over existing approaches and practices. It is envisaged that the number of useful applications will rise in the coming years, particularly as the costs of this technology decrease and its uptake rises. PMID:27445707

  6. Current Applications and Future Perspectives of the Use of 3D Printing in Anatomical Training and Neurosurgery.

    PubMed

    Baskaran, Vivek; Štrkalj, Goran; Štrkalj, Mirjana; Di Ieva, Antonio

    2016-01-01

    3D printing is a form of rapid prototyping technology, which has led to innovative new applications in biomedicine. It facilitates the production of highly accurate three dimensional objects from substrate materials. The inherent accuracy and other properties of 3D printing have allowed it to have exciting applications in anatomy education and surgery, with the specialty of neurosurgery having benefited particularly well. This article presents the findings of a literature review of the Pubmed and Web of Science databases investigating the applications of 3D printing in anatomy and surgical education, and neurosurgery. A number of applications within these fields were found, with many significantly improving the quality of anatomy and surgical education, and the practice of neurosurgery. They also offered advantages over existing approaches and practices. It is envisaged that the number of useful applications will rise in the coming years, particularly as the costs of this technology decrease and its uptake rises.

  7. 3D-HST WFC3-SELECTED PHOTOMETRIC CATALOGS IN THE FIVE CANDELS/3D-HST FIELDS: PHOTOMETRY, PHOTOMETRIC REDSHIFTS, AND STELLAR MASSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skelton, Rosalind E.; Whitaker, Katherine E.; Momcheva, Ivelina G.

    The 3D-HST and CANDELS programs have provided WFC3 and ACS spectroscopy and photometry over ≈900 arcmin² in five fields: AEGIS, COSMOS, GOODS-North, GOODS-South, and the UKIDSS UDS field. All these fields have a wealth of publicly available imaging data sets in addition to the Hubble Space Telescope (HST) data, which makes it possible to construct the spectral energy distributions (SEDs) of objects over a wide wavelength range. In this paper we describe a photometric analysis of the CANDELS and 3D-HST HST imaging and the ancillary imaging data at wavelengths 0.3-8 μm. Objects were selected in the WFC3 near-IR bands, and their SEDs were determined by carefully taking the effects of the point-spread function in each observation into account. A total of 147 distinct imaging data sets were used in the analysis. The photometry is made available in the form of six catalogs: one for each field, as well as a master catalog containing all objects in the entire survey. We also provide derived data products: photometric redshifts, determined with the EAZY code, and stellar population parameters determined with the FAST code. We make all the imaging data that were used in the analysis available, including our reductions of the WFC3 imaging in all five fields. 3D-HST is a spectroscopic survey with the WFC3 and ACS grisms, and the photometric catalogs presented here constitute a necessary first step in the analysis of these grism data. All the data presented in this paper are available through the 3D-HST Web site (http://3dhst.research.yale.edu).

  8. Tomographic active optical trapping of arbitrarily shaped objects by exploiting 3D refractive index maps

    NASA Astrophysics Data System (ADS)

    Kim, Kyoohyun; Park, Yongkeun

    2017-05-01

    Optical trapping can manipulate the three-dimensional (3D) motion of spherical particles based on the simple prediction of optical forces and the responding motion of samples. However, controlling the 3D behaviour of non-spherical particles with arbitrary orientations is extremely challenging, due to experimental difficulties and extensive computations. Here, we achieve the real-time optical control of arbitrarily shaped particles by combining the wavefront shaping of a trapping beam and measurements of the 3D refractive index distribution of samples. Engineering the 3D light field distribution of a trapping beam based on the measured 3D refractive index map of samples generates a light mould, which can manipulate colloidal and biological samples with arbitrary orientations and/or shapes. The present method provides stable control of the orientation and assembly of arbitrarily shaped particles without knowing a priori information about the sample geometry. The proposed method can be directly applied in biophotonics and soft matter physics.

  9. 3D printing of novel osteochondral scaffolds with graded microstructure

    NASA Astrophysics Data System (ADS)

    Nowicki, Margaret A.; Castro, Nathan J.; Plesniak, Michael W.; Zhang, Lijie Grace

    2016-10-01

    Osteochondral tissue has a complex graded structure where biological, physiological, and mechanical properties vary significantly over the full thickness spanning from the subchondral bone region beneath the joint surface to the hyaline cartilage region at the joint surface. This presents a significant challenge for tissue-engineered structures addressing osteochondral defects. Fused deposition modeling (FDM) 3D bioprinters present a unique solution to this problem. The objective of this study is to use FDM-based 3D bioprinting and nanocrystalline hydroxyapatite for improved bone marrow human mesenchymal stem cell (hMSC) adhesion, growth, and osteochondral differentiation. FDM printing parameters can be tuned through computer aided design and computer numerical control software to manipulate scaffold geometries in ways that are beneficial to mechanical performance without hindering cellular behavior. Additionally, the ability to fine-tune 3D printed scaffolds increases further through our investment casting procedure which facilitates the inclusion of nanoparticles with biochemical factors to further elicit desired hMSC differentiation. For this study, FDM was used to print investment-casting molds innovatively designed with varied pore distribution over the full thickness of the scaffold. The mechanical and biological impacts of the varied pore distributions were compared and evaluated to determine the benefits of this physical manipulation. The results indicate that both mechanical properties and cell performance improve in the graded pore structures when compared to homogeneously distributed porous and non-porous structures. Differentiation results indicated successful osteogenic and chondrogenic manipulation in engineered scaffolds.

  10. Efficiency of extracting stereo-driven object motions

    PubMed Central

    Jain, Anshul; Zaidi, Qasim

    2013-01-01

    Most living things and many nonliving things deform as they move, requiring observers to separate object motions from object deformations. When the object is partially occluded, the task becomes more difficult because it is not possible to use two-dimensional (2-D) contour correlations (Cohen, Jain, & Zaidi, 2010). That leaves dynamic depth matching across the unoccluded views as the main possibility. We examined the role of stereo cues in extracting motion of partially occluded and deforming three-dimensional (3-D) objects, simulated by disk-shaped random-dot stereograms set at randomly assigned depths and placed uniformly around a circle. The stereo-disparities of the disks were temporally oscillated to simulate clockwise or counterclockwise rotation of the global shape. To dynamically deform the global shape, random disparity perturbation was added to each disk's depth on each stimulus frame. At low perturbation, observers reported rotation directions consistent with the global shape, even against local motion cues, but performance deteriorated at high perturbation. Using 3-D global shape correlations, we formulated an optimal Bayesian discriminator for rotation direction. Based on rotation discrimination thresholds, human observers were 75% as efficient as the optimal model, demonstrating that global shapes derived from stereo cues facilitate inferences of object motions. To complement reports of stereo and motion integration in extrastriate cortex, our results suggest the possibilities that disparity selectivity and feature tracking are linked, or that global motion selective neurons can be driven purely from disparity cues. PMID:23325345
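
    For readers unfamiliar with the efficiency measure quoted above, a common ideal-observer convention (an assumption stated here for context, not a formula taken from the paper) expresses efficiency as the squared ratio of sensitivities, or equivalently of discrimination thresholds, between the human observer and the optimal Bayesian model:

      \[ F \;=\; \left( \frac{d'_{\mathrm{human}}}{d'_{\mathrm{ideal}}} \right)^{2} \;=\; \left( \frac{T_{\mathrm{ideal}}}{T_{\mathrm{human}}} \right)^{2} . \]

    Under this convention, 75% efficiency would correspond to a human rotation-discrimination threshold only about 1.15 times that of the ideal model.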

  11. 3D Actin Network Centerline Extraction with Multiple Active Contours

    PubMed Central

    Xu, Ting; Vavylonis, Dimitrios; Huang, Xiaolei

    2013-01-01

    Fluorescence microscopy is frequently used to study two and three dimensional network structures formed by cytoskeletal polymer fibers such as actin filaments and actin cables. While these cytoskeletal structures are often dilute enough to allow imaging of individual filaments or bundles of them, quantitative analysis of these images is challenging. To facilitate quantitative, reproducible and objective analysis of the image data, we propose a semi-automated method to extract actin networks and retrieve their topology in 3D. Our method uses multiple Stretching Open Active Contours (SOACs) that are automatically initialized at image intensity ridges and then evolve along the centerlines of filaments in the network. SOACs can merge, stop at junctions, and reconfigure with others to allow smooth crossing at junctions of filaments. The proposed approach is generally applicable to images of curvilinear networks with low SNR. We demonstrate its potential by extracting the centerlines of synthetic meshwork images, actin networks in 2D Total Internal Reflection Fluorescence Microscopy images, and 3D actin cable meshworks of live fission yeast cells imaged by spinning disk confocal microscopy. Quantitative evaluation of the method using synthetic images shows that for images with SNR above 5.0, the average vertex error measured by the distance between our result and ground truth is 1 voxel, and the average Hausdorff distance is below 10 voxels. PMID:24316442
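
    As a rough illustration of the ridge-based initialization step described above (only that step; the SOAC evolution, stretching, merging, and reconfiguration are not reproduced here), the sketch below marks candidate filament points of a 2D image as local maxima of a Gaussian-smoothed intensity field. The function name, smoothing scale, and threshold are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from scipy.ndimage import gaussian_filter, maximum_filter

      def ridge_candidates(image, sigma=1.5, intensity_frac=0.5):
          """Boolean mask of candidate centerline (ridge) points.

          A pixel is kept if it is a local maximum of the smoothed image
          along at least one axis and its smoothed intensity exceeds a
          fraction of the global maximum.
          """
          smooth = gaussian_filter(image.astype(float), sigma)
          max_x = maximum_filter(smooth, size=(1, 3)) == smooth   # maxima across columns
          max_y = maximum_filter(smooth, size=(3, 1)) == smooth   # maxima across rows
          return (max_x | max_y) & (smooth > intensity_frac * smooth.max())

      # Tiny synthetic example: one bright horizontal "filament".
      img = np.zeros((32, 32))
      img[16, 4:28] = 1.0
      print("candidate ridge points:", int(ridge_candidates(img).sum()))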

  12. 3D-Lab: a collaborative web-based platform for molecular modeling.

    PubMed

    Grebner, Christoph; Norrby, Magnus; Enström, Jonatan; Nilsson, Ingemar; Hogner, Anders; Henriksson, Jonas; Westin, Johan; Faramarzi, Farzad; Werner, Philip; Boström, Jonas

    2016-09-01

    The use of 3D information has shown impact in numerous applications in drug design. However, it is often under-utilized and traditionally limited to specialists. We want to change that, and present an approach making 3D information and molecular modeling accessible and easy-to-use 'for the people'. A user-friendly and collaborative web-based platform (3D-Lab) for 3D modeling, including a blazingly fast virtual screening capability, was developed. 3D-Lab provides an interface to automatic molecular modeling, like conformer generation, ligand alignments, molecular dockings and simple quantum chemistry protocols. 3D-Lab is designed to be modular, and to facilitate sharing of 3D-information to promote interactions between drug designers. Recent enhancements to our open-source virtual reality tool Molecular Rift are described. The integrated drug-design platform allows drug designers to instantaneously access 3D information and readily apply advanced and automated 3D molecular modeling tasks, with the aim to improve decision-making in drug design projects.

  13. Neuropeptide S interacts with the basolateral amygdala noradrenergic system in facilitating object recognition memory consolidation.

    PubMed

    Han, Ren-Wen; Xu, Hong-Jiao; Zhang, Rui-San; Wang, Pei; Chang, Min; Peng, Ya-Li; Deng, Ke-Yu; Wang, Rui

    2014-01-01

    The noradrenergic activity in the basolateral amygdala (BLA) was reported to be involved in the regulation of object recognition memory. As the BLA expresses high density of receptors for Neuropeptide S (NPS), we investigated whether the BLA is involved in mediating NPS's effects on object recognition memory consolidation and whether such effects require noradrenergic activity. Intracerebroventricular infusion of NPS (1nmol) post training facilitated 24-h memory in a mouse novel object recognition task. The memory-enhancing effect of NPS could be blocked by the β-adrenoceptor antagonist propranolol. Furthermore, post-training intra-BLA infusions of NPS (0.5nmol/side) improved 24-h memory for objects, which was impaired by co-administration of propranolol (0.5μg/side). Taken together, these results indicate that NPS interacts with the BLA noradrenergic system in improving object recognition memory during consolidation. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Viscoplastic Matrix Materials for Embedded 3D Printing.

    PubMed

    Grosskopf, Abigail K; Truby, Ryan L; Kim, Hyoungsoo; Perazzo, Antonio; Lewis, Jennifer A; Stone, Howard A

    2018-03-16

    Embedded three-dimensional (EMB3D) printing is an emerging technique that enables free-form fabrication of complex architectures. In this approach, a nozzle is translated omnidirectionally within a soft matrix that surrounds and supports the patterned material. To optimize print fidelity, we have investigated the effects of matrix viscoplasticity on the EMB3D printing process. Specifically, we determine how matrix composition, print path and speed, and nozzle diameter affect the yielded region within the matrix. By characterizing the velocity and strain fields and analyzing the dimensions of the yielded regions, we determine that scaling relationships based on the Oldroyd number, Od, exist between these dimensions and the rheological properties of the matrix materials and printing parameters. Finally, we use EMB3D printing to create complex architectures within an elastomeric silicone matrix. Our methods and findings will both facilitate future characterization of viscoplastic matrices and motivate the development of new materials for EMB3D printing.
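
    For context, the Oldroyd number mentioned above compares the yield stress of the matrix to the characteristic viscous stress imposed by the translating nozzle. One commonly used form for a Herschel-Bulkley matrix (written here as general background; the paper's exact definition and characteristic scales may differ) is

      \[ \mathrm{Od} \;=\; \frac{\tau_y}{K\,\dot{\gamma}^{\,n}}, \qquad \dot{\gamma} \;\sim\; \frac{V}{d}, \]

    where τ_y is the matrix yield stress, K and n are the Herschel-Bulkley consistency and flow indices, V is the nozzle translation (print) speed, and d is the nozzle diameter. Small Od means viscous stresses dominate and a comparatively large region of the matrix yields around the nozzle, while large Od confines yielding to a thin zone.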

  15. Time Lapse of World’s Largest 3-D Printed Object

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016-08-29

    Researchers at the MDF have 3D-printed a large-scale trim tool for the Boeing 777X, the world’s largest twin-engine jet airliner. The additively manufactured tool was printed on the Big Area Additive Manufacturing (BAAM) machine over a 30-hour period. The team used a thermoplastic pellet composed of 80% ABS plastic and 20% carbon fiber from a local material supplier. In preliminary testing the tool has been shown to decrease the time, labor, cost, and errors associated with traditional manufacturing techniques and to increase energy savings; it will undergo further long-term testing.

  16. Use of LIDAR Data in the 3D/4D Analyses of the Krakow Fortress Objects

    NASA Astrophysics Data System (ADS)

    Glowienka, Ewa; Michalowska, Krystyna; Opalinski, Piotr; Hejmanowska, Beata; Mikrut, Slawomir; Kramarczyk, Piotr

    2017-10-01

    The article presents partial results of studies within the framework of the international project "Cultural Heritage Through Time" (CHT2). The subject of the study were forts of the Krakow Fortress, which had been built by the Austrians between 1849-1914 in order to provide defence against the Russians. Research works were aimed at identifying architectural changes occurring in different time periods in relation to selected objects of the Krakow Fortress. For the analysis, the following LIDAR (Light Detection and Ranging) data was applied: Digital Terrain Models (DTM), Digital Surface Model (DSM), as well as the cartographic data: maps and orthophotomaps. All spatial data was obtained from the Polish Main Office of Geodesy and Cartography (Główny Urząd Geodezji i Kartografii - GUGIK). The majority of the cartographic data is available in the form of Web Map Services (WMS) on Geoportal (www.geoportal.gov.pl). The archival data was made available by the Historical Museum of the City of Krakow, or obtained from private collections. In order to conduct a thorough analysis of objects of the Krakow fortress, DTM and DSM data was obtained, either in ASCII format, or in the source *.las (LIDAR) format. On the basis of DTM and DSM, the degree of destruction of selected fortress objects was determined, occurring as a result of the action of demolishing those objects in the interwar period (1920-1939) and in the 1950s. The research has been made on the basis of all available cartographic materials, both archival (plans, maps, photos) and current (topographic map, orthophotomap, etc.) ones. Verification of archival maps and plans was carried out by comparing current digital images of the existing forms of fortifications with designs developed by the Austrians. As a result, it was possible to identify the differences between the original design, and the current state of the objects concerned. The analyses, which have been conducted, also allowed checking the legitimacy of
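
    A minimal sketch of the raster differencing that underlies this kind of destruction analysis is given below: given two co-registered elevation grids of a fort from different epochs, the demolished volume can be estimated from the cell-wise height difference. The array names, cell size, and toy data are illustrative assumptions, not the project's actual workflow.

      import numpy as np

      def volume_change(dsm_before, dsm_after, cell_size_m):
          """Estimate removed and added volume (m^3) between two co-registered DSMs."""
          diff = dsm_after - dsm_before                # per-cell height change (m)
          cell_area = cell_size_m ** 2                 # footprint of one raster cell (m^2)
          removed = float(-diff[diff < 0].sum() * cell_area)
          added = float(diff[diff > 0].sum() * cell_area)
          return removed, added

      # Toy 1 m-resolution grids: a 2 m-high rampart partly levelled.
      before = np.zeros((10, 10)); before[4:6, :] = 2.0
      after = before.copy();       after[4:6, 5:] = 0.0
      print(volume_change(before, after, cell_size_m=1.0))   # (20.0, 0.0)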

  17. Polymers for 3D Printing and Customized Additive Manufacturing.

    PubMed

    Ligon, Samuel Clark; Liska, Robert; Stampfl, Jürgen; Gurr, Matthias; Mülhaupt, Rolf

    2017-08-09

    Additive manufacturing (AM) alias 3D printing translates computer-aided design (CAD) virtual 3D models into physical objects. By digital slicing of CAD, 3D scan, or tomography data, AM builds objects layer by layer without the need for molds or machining. AM enables decentralized fabrication of customized objects on demand by exploiting digital information storage and retrieval via the Internet. The ongoing transition from rapid prototyping to rapid manufacturing prompts new challenges for mechanical engineers and materials scientists alike. Because polymers are by far the most utilized class of materials for AM, this Review focuses on polymer processing and the development of polymers and advanced polymer systems specifically for AM. AM techniques covered include vat photopolymerization (stereolithography), powder bed fusion (SLS), material and binder jetting (inkjet and aerosol 3D printing), sheet lamination (LOM), extrusion (FDM, 3D dispensing, 3D fiber deposition, and 3D plotting), and 3D bioprinting. The range of polymers used in AM encompasses thermoplastics, thermosets, elastomers, hydrogels, functional polymers, polymer blends, composites, and biological systems. Aspects of polymer design, additives, and processing parameters as they relate to enhancing build speed and improving accuracy, functionality, surface finish, stability, mechanical properties, and porosity are addressed. Selected applications demonstrate how polymer-based AM is being exploited in lightweight engineering, architecture, food processing, optics, energy technology, dentistry, drug delivery, and personalized medicine. Unparalleled by metals and ceramics, polymer-based AM plays a key role in the emerging AM of advanced multifunctional and multimaterial systems including living biological systems as well as life-like synthetic systems.

  18. Polymers for 3D Printing and Customized Additive Manufacturing

    PubMed Central

    2017-01-01

    Additive manufacturing (AM) alias 3D printing translates computer-aided design (CAD) virtual 3D models into physical objects. By digital slicing of CAD, 3D scan, or tomography data, AM builds objects layer by layer without the need for molds or machining. AM enables decentralized fabrication of customized objects on demand by exploiting digital information storage and retrieval via the Internet. The ongoing transition from rapid prototyping to rapid manufacturing prompts new challenges for mechanical engineers and materials scientists alike. Because polymers are by far the most utilized class of materials for AM, this Review focuses on polymer processing and the development of polymers and advanced polymer systems specifically for AM. AM techniques covered include vat photopolymerization (stereolithography), powder bed fusion (SLS), material and binder jetting (inkjet and aerosol 3D printing), sheet lamination (LOM), extrusion (FDM, 3D dispensing, 3D fiber deposition, and 3D plotting), and 3D bioprinting. The range of polymers used in AM encompasses thermoplastics, thermosets, elastomers, hydrogels, functional polymers, polymer blends, composites, and biological systems. Aspects of polymer design, additives, and processing parameters as they relate to enhancing build speed and improving accuracy, functionality, surface finish, stability, mechanical properties, and porosity are addressed. Selected applications demonstrate how polymer-based AM is being exploited in lightweight engineering, architecture, food processing, optics, energy technology, dentistry, drug delivery, and personalized medicine. Unparalleled by metals and ceramics, polymer-based AM plays a key role in the emerging AM of advanced multifunctional and multimaterial systems including living biological systems as well as life-like synthetic systems. PMID:28756658

  19. CYP24A1 inhibition facilitates the anti-tumor effect of vitamin D3 on colorectal cancer cells

    PubMed Central

    Kósa, János P; Horváth, Péter; Wölfling, János; Kovács, Dóra; Balla, Bernadett; Mátyus, Péter; Horváth, Evelin; Speer, Gábor; Takács, István; Nagy, Zsolt; Horváth, Henrik; Lakatos, Péter

    2013-01-01

    AIM: The effects of vitamin D3 have been investigated on various tumors, including colorectal cancer (CRC). 25-hydroxyvitamin-D3-24-hydroxylase (CYP24A1), the enzyme that inactivates the active vitamin D3 metabolite 1,25-dihydroxyvitamin D3 (1,25-D3), is considered to be the main enzyme determining the biological half-life of 1,25-D3. During colorectal carcinogenesis, the expression and concentration of CYP24A1 increase significantly, suggesting that this phenomenon could be responsible for the proposed efficacy of 1,25-D3 in the treatment of CRC. The aim of this study was to investigate the anti-tumor effects of vitamin D3 on the human CRC cell line Caco-2 after inhibition of the cytochrome P450 component of CYP24A1 activity. METHODS: We examined the expression of CYP24A1 mRNA and the effects of 1,25-D3 on the cell line Caco-2 after inhibition of CYP24A1. Cell viability and proliferation were determined by means of sulforhodamine-B staining and bromodeoxyuridine incorporation, respectively, while cytotoxicity was estimated via the lactate dehydrogenase content of the cell culture supernatant. CYP24A1 expression was measured by real-time reverse transcription polymerase chain reaction. A number of tetralone compounds were synthesized to investigate their CYP24A1 inhibitory activity. RESULTS: In response to 1,25-D3, CYP24A1 mRNA expression was enhanced significantly, in a time- and dose-dependent manner. Caco-2 cell viability and proliferation were not influenced by the administration of 1,25-D3 alone, but were markedly reduced by co-administration of 1,25-D3 and KD-35, a CYP24A1-inhibiting tetralone. Our data suggest that the mechanism of action of co-administered KD-35 and 1,25-D3 does not involve a direct cytotoxic effect, but rather the inhibition of cell proliferation. CONCLUSION: These findings demonstrate that the selective inhibition of CYP24A1 by compounds such as KD-35 may be a new approach for enhancement of the anti-tumor effect of 1,25-D3 on CRC. PMID

  20. Facilitating 3D Virtual World Learning Environments Creation by Non-Technical End Users through Template-Based Virtual World Instantiation

    ERIC Educational Resources Information Center

    Liu, Chang; Zhong, Ying; Ozercan, Sertac; Zhu, Qing

    2013-01-01

    This paper presents a template-based solution to overcome technical barriers non-technical computer end users face when developing functional learning environments in three-dimensional virtual worlds (3DVW). "iVirtualWorld," a prototype of a platform-independent 3DVW creation tool that implements the proposed solution, facilitates 3DVW…

  1. 3D Printing of Organs-On-Chips

    PubMed Central

    Yi, Hee-Gyeong; Lee, Hyungseok; Cho, Dong-Woo

    2017-01-01

    Organ-on-a-chip engineering aims to create artificial living organs that mimic the complex physiological responses of real organs, in order to test drugs by precisely manipulating the cells and their microenvironments. To achieve this, the artificial organs need to be microfabricated with an extracellular matrix (ECM) and various types of cells, and should recapitulate morphogenesis, cell differentiation, and functions in accordance with the native organ. A promising strategy is 3D printing, which precisely controls the spatial distribution and layer-by-layer assembly of cells, ECMs, and other biomaterials. Owing to this unique advantage, integration of 3D printing into organ-on-a-chip engineering can facilitate the creation of micro-organs with heterogeneity, a desired 3D cellular arrangement, tissue-specific functions, or even cyclic movement within a microfluidic device. Moreover, fully 3D-printed organs-on-chips more easily incorporate other mechanical and electrical components with the chips, and can be commercialized via automated mass production. Herein, we discuss the recent advances and the potential of 3D cell-printing technology in engineering organs-on-chips, and provide future perspectives on this technology for establishing highly reliable and useful drug-screening platforms. PMID:28952489

  2. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

    The forthcoming STEREO mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which calls for suitable data presentation techniques. A key feature of STEREO is that it will provide, for the first time, a 3D view of the Sun and the solar corona. In our normal environment we see objects three-dimensionally because our left and right eyes view real 3D objects from slightly different positions. As a consequence, the two eyes receive slightly different images, which gives us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D visualization of scientific and other data on paper, TV, computer screens, in the cinema, etc. are well known, e.g. the two-colour anaglyph technique, shutter glasses, polarization filters, and head-mounted displays. We discuss the advantages and disadvantages of these techniques and how they can be applied to STEREO data. The 3D visualization techniques are not limited to visual images but can also be used to show the reconstructed coronal magnetic field and the energy and helicity distributions. In advance of STEREO, we test the methods with data from SOHO, which provides different viewpoints through solar rotation. This restricts the analysis to structures that remain stationary for several days; real STEREO data will not be affected by this limitation.
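
    As a small illustration of the two-colour anaglyph technique mentioned above (a generic sketch, not the mission's visualization software), a red-cyan anaglyph can be assembled by taking the red channel from the left-eye view and the green and blue channels from the right-eye view:

      import numpy as np

      def red_cyan_anaglyph(left_rgb, right_rgb):
          """Combine two equally sized RGB images (H x W x 3) into a red-cyan
          anaglyph: red from the left view, green and blue from the right view."""
          anaglyph = right_rgb.copy()
          anaglyph[..., 0] = left_rgb[..., 0]          # red channel <- left image
          return anaglyph

      # Synthetic example: two views of the same scene with a small horizontal shift.
      left = np.zeros((64, 64, 3), dtype=np.uint8)
      left[20:40, 20:40] = 200
      right = np.roll(left, 3, axis=1)                 # simulate parallax by shifting
      print(red_cyan_anaglyph(left, right).shape)      # (64, 64, 3)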

  3. CityGML - Interoperable semantic 3D city models

    NASA Astrophysics Data System (ADS)

    Gröger, Gerhard; Plümer, Lutz

    2012-07-01

    CityGML is the international standard of the Open Geospatial Consortium (OGC) for the representation and exchange of 3D city models. It defines the three-dimensional geometry, topology, semantics and appearance of the most relevant topographic objects in urban or regional contexts. These definitions are provided in different, well-defined Levels-of-Detail (multiresolution model). The focus of CityGML is on the semantical aspects of 3D city models, its structures, taxonomies and aggregations, allowing users to employ virtual 3D city models for advanced analysis and visualization tasks in a variety of application domains such as urban planning, indoor/outdoor pedestrian navigation, environmental simulations, cultural heritage, or facility management. This is in contrast to purely geometrical/graphical models such as KML, VRML, or X3D, which do not provide sufficient semantics. CityGML is based on the Geography Markup Language (GML), which provides a standardized geometry model. Due to this model and its well-defined semantics and structures, CityGML facilitates interoperable data exchange in the context of geo web services and spatial data infrastructures. Since its standardization in 2008, CityGML has become used on a worldwide scale: tools from notable companies in the geospatial field provide CityGML interfaces. Many applications and projects use this standard. CityGML is also having a strong impact on science: numerous approaches use CityGML, particularly its semantics, for disaster management, emergency responses, or energy-related applications as well as for visualizations, or they contribute to CityGML, improving its consistency and validity, or use CityGML, particularly its different Levels-of-Detail, as a source or target for generalizations. This paper gives an overview of CityGML, its underlying concepts, its Levels-of-Detail, how to extend it, its applications, its likely future development, and the role it plays in scientific research. Furthermore, its
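
    Because CityGML is a GML application schema, its files are ordinary XML and can be inspected with standard tooling. The fragment below is a hedged sketch (the file name is hypothetical, and tags are matched by local name so the exact CityGML/GML namespace versions do not matter) that lists the gml:id of every Building feature in a document:

      import xml.etree.ElementTree as ET

      def list_building_ids(path):
          """Yield the gml:id of every Building feature in a CityGML file."""
          root = ET.parse(path).getroot()
          for elem in root.iter():
              if elem.tag.rsplit('}', 1)[-1] == 'Building':      # ignore namespace prefix
                  for attr, value in elem.attrib.items():
                      if attr.rsplit('}', 1)[-1] == 'id':        # gml:id attribute
                          yield value

      # Hypothetical usage:
      # for building_id in list_building_ids('city_model.gml'):
      #     print(building_id)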

  4. Facilitating role of 3D multimodal visualization and learning rehearsal in memory recall.

    PubMed

    Do, Phuong T; Moreland, John R

    2014-04-01

    The present study investigated the influence of 3D multimodal visualization and learning rehearsal on memory recall. Participants (N = 175 college students ranging from 21 to 25 years) were assigned to different training conditions and rehearsal processes to learn a list of 14 terms associated with construction of a wood-frame house. They then completed a memory test determining their cognitive ability to free recall the definitions of the 14 studied terms immediately after training and rehearsal. The audiovisual modality training condition was associated with the highest accuracy, and the visual- and auditory-modality conditions with lower accuracy rates. The no-training condition indicated little learning acquisition. A statistically significant increase in performance accuracy for the audiovisual condition as a function of rehearsal suggested the relative importance of rehearsal strategies in 3D observational learning. Findings revealed the potential application of integrating virtual reality and cognitive sciences to enhance learning and teaching effectiveness.

  5. Human microbiome visualization using 3D technology.

    PubMed

    Moore, Jason H; Lari, Richard Cowper Sal; Hill, Douglas; Hibberd, Patricia L; Madan, Juliette C

    2011-01-01

    High-throughput sequencing technology has opened the door to the study of the human microbiome and its relationship with health and disease. This is both an opportunity and a significant biocomputing challenge. We present here a 3D visualization methodology and freely-available software package for facilitating the exploration and analysis of high-dimensional human microbiome data. Our visualization approach harnesses the power of commercial video game development engines to provide an interactive medium in the form of a 3D heat map for exploration of microbial species and their relative abundance in different patients. The advantage of this approach is that the third dimension provides additional layers of information that cannot be visualized using a traditional 2D heat map. We demonstrate the usefulness of this visualization approach using microbiome data collected from a sample of premature babies with and without sepsis.
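
    The display concept described above, patients and microbial taxa on two axes with relative abundance as bar height, can be mocked up in a few lines of matplotlib. The sketch below uses random data and is only an illustration of a 3D heat map, not the game-engine-based tool the authors describe.

      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(0)
      abundance = rng.random((8, 12))                  # 8 patients x 12 taxa (toy data)
      patients, taxa = np.meshgrid(np.arange(8), np.arange(12), indexing='ij')

      fig = plt.figure()
      ax = fig.add_subplot(projection='3d')
      ax.bar3d(patients.ravel(), taxa.ravel(), np.zeros(abundance.size),
               dx=0.8, dy=0.8, dz=abundance.ravel())
      ax.set_xlabel('patient'); ax.set_ylabel('taxon'); ax.set_zlabel('relative abundance')
      plt.show()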

  6. Use of 3D techniques for virtual production

    NASA Astrophysics Data System (ADS)

    Grau, Oliver; Price, Marc C.; Thomas, Graham A.

    2000-12-01

    Virtual production for broadcast is currently mainly used in the form of virtual studios, where the resulting media is a sequence of 2D images. With the steady increase of 3D computing power in home PCs and the technical progress in 3D display technology, the content industry is looking for new kinds of program material that make use of 3D technology. The applications range from the analysis of sports scenes and 3DTV to the creation of fully immersive content. In a virtual studio a camera films one or more actors in a controlled environment. The pictures of the actors can be segmented very accurately in real time using chroma keying techniques. The isolated silhouette can be integrated into a new synthetic virtual environment using a studio mixer. The resulting shape description of the actors is 2D so far. For the realization of more sophisticated optical interactions of the actors with the virtual environment, such as occlusions and shadows, an object-based 3D description of scenes is needed. However, the requirements of shape accuracy, and the kind of representation, differ in accordance with the application. This contribution gives an overview of requirements and approaches for the generation of an object-based 3D description in various applications studied by the BBC R and D department. An enhanced Virtual Studio for 3D programs is proposed that covers a range of applications for virtual production.
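
    A minimal illustration of the chroma-keying step described above (a generic green-screen mask in NumPy, not the broadcast studio mixer) is:

      import numpy as np

      def green_screen_mask(rgb, dominance=1.3):
          """True where a pixel belongs to the background, i.e. its green
          channel dominates both red and blue by the given factor."""
          r = rgb[..., 0].astype(float)
          g = rgb[..., 1].astype(float)
          b = rgb[..., 2].astype(float)
          return (g > dominance * r) & (g > dominance * b)

      # Toy frame: green background with a grey "actor" block in the middle.
      frame = np.zeros((48, 64, 3), dtype=np.uint8)
      frame[..., 1] = 255                              # green background
      frame[16:32, 24:40] = (120, 120, 120)            # neutral-grey actor region
      print("foreground pixels:", int((~green_screen_mask(frame)).sum()))   # 256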

  7. Mental Representation of Spatial Cues During Spaceflight (3D-SPACE)

    NASA Astrophysics Data System (ADS)

    Clement, Gilles; Lathan, Corinna; Skinner, Anna; Lorigny, Eric

    2008-06-01

    The 3D-SPACE experiment is a joint effort between ESA and NASA to develop a simple virtual reality platform to enable astronauts to complete a series of tests while aboard the International Space Station (ISS). These tests will provide insights into the effects of the space environment on: (a) depth perception, by presenting 2D geometric illusions and 3D objects that subjects adjust with a finger trackball; (b) distance perception, by presenting natural or computer-generated 3D scenes where subjects estimate and report absolute distances or adjust distances; and (c) handwriting/drawing, by analyzing trajectories and velocities when subjects write or draw memorized objects with an electronic pen on a digitizing tablet. The objective of these tasks is to identify problems associated with 3D perception in astronauts with the goal of developing countermeasures to alleviate any associated performance risks. The equipment has been uploaded to the ISS in April 2008, and the first measurements should take place during Increment 17.

  8. How Young Children and Chimpanzees ("Pan Troglodytes") Perceive Objects in a 2D Display: Putting an Assumption to the Test

    ERIC Educational Resources Information Center

    Leighty, Katherine A.; Menzel, Charles R.; Fragaszy, Dorothy M.

    2008-01-01

    Object recognition research is typically conducted using 2D stimuli in lieu of 3D objects. This study investigated the amount and complexity of knowledge gained from 2D stimuli in adult chimpanzees ("Pan troglodytes") and young children (aged 3 and 4 years) using a titrated series of cross-dimensional search tasks. Results indicate that 3-year-old…

  9. Attention to Multiple Objects Facilitates Their Integration in Prefrontal and Parietal Cortex.

    PubMed

    Kim, Yee-Joon; Tsai, Jeffrey J; Ojemann, Jeffrey; Verghese, Preeti

    2017-05-10

    Selective attention is known to interact with perceptual organization. In visual scenes, individual objects that are distinct and discriminable may occur on their own, or in groups such as a stack of books. The main objective of this study is to probe the neural interaction that occurs between individual objects when attention is directed toward one or more objects. Here we record steady-state visual evoked potentials via electrocorticography to directly assess the responses to individual stimuli and to their interaction. When human participants attend to two adjacent stimuli, prefrontal and parietal cortex shows a selective enhancement of only the neural interaction between stimuli, but not the responses to individual stimuli. When only one stimulus is attended, the neural response to that stimulus is selectively enhanced in prefrontal and parietal cortex. In contrast, early visual areas generally manifest responses to individual stimuli and to their interaction regardless of attentional task, although a subset of the responses is modulated similarly to prefrontal and parietal cortex. Thus, the neural representation of the visual scene as one progresses up the cortical hierarchy becomes more highly task-specific and represents either individual stimuli or their interaction, depending on the behavioral goal. Attention to multiple objects facilitates an integration of objects akin to perceptual grouping. SIGNIFICANCE STATEMENT Individual objects in a visual scene are seen as distinct entities or as parts of a whole. Here we examine how attention to multiple objects affects their neural representation. Previous studies measured single-cell or fMRI responses and obtained only aggregate measures that combined the activity to individual stimuli as well as their potential interaction. Here, we directly measure electrocorticographic steady-state responses corresponding to individual objects and to their interaction using a frequency-tagging technique. Attention to two
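
    The frequency-tagging logic referred to above can be summarized with a toy simulation: two stimuli flicker at tag frequencies f1 and f2, responses to the individual stimuli appear at f1 and f2 in the spectrum, and a nonlinear interaction between them shows up at intermodulation frequencies such as f1 + f2. All numbers below are arbitrary and purely illustrative.

      import numpy as np

      fs, dur = 1000.0, 10.0                           # sampling rate (Hz), duration (s)
      t = np.arange(0, dur, 1 / fs)
      f1, f2 = 7.0, 11.0                               # tag frequencies of the two stimuli

      s1 = np.sin(2 * np.pi * f1 * t)                  # response to stimulus 1
      s2 = np.sin(2 * np.pi * f2 * t)                  # response to stimulus 2
      interaction = 0.3 * s1 * s2                      # nonlinear interaction term
      noise = 0.5 * np.random.default_rng(0).standard_normal(t.size)
      signal = s1 + s2 + interaction + noise

      spectrum = np.abs(np.fft.rfft(signal)) / t.size
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      amp = lambda f: spectrum[np.argmin(np.abs(freqs - f))]

      print("f1:", amp(f1), " f2:", amp(f2), " intermodulation f1+f2:", amp(f1 + f2))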

  10. 3D Geo: An Alternative Approach

    NASA Astrophysics Data System (ADS)

    Georgopoulos, A.

    2016-10-01

    The expression GEO is mostly used to denote relation to the earth. However it should not be confined to what is related to the earth's surface, as other objects also need three dimensional representation and documentation, like cultural heritage objects. They include both tangible and intangible ones. In this paper the 3D data acquisition and 3D modelling of cultural heritage assets are briefly described and their significance is also highlighted. Moreover the organization of such information, related to monuments and artefacts, into relational data bases and its use for various purposes, other than just geometric documentation is also described and presented. In order to help the reader understand the above, several characteristic examples are presented and their methodology explained and their results evaluated.

  11. 3D abnormal behavior recognition in power generation

    NASA Astrophysics Data System (ADS)

    Wei, Zhenhua; Li, Xuesen; Su, Jie; Lin, Jie

    2011-06-01

    To date, most research on human behavior recognition has focused on simple individual actions such as waving, crouching, jumping, and bending. This paper focuses on abnormal behaviors involving carried objects in power generation settings, such as using a mobile communication device in the main control room, removing a helmet while working, and lying down in an elevated location. Because the color and shape of these objects are fixed, we recognize them on the worker by edge detection combined with color tracking. The paper introduces a method that uses geometric characteristics of the skeleton and its joint angles to express sequences of three-dimensional human behavior data. A semi-join critical-step Hidden Markov Model is then adopted, weighting the output probabilities of the critical steps to reduce computational complexity. A model is trained for every behavior, and skeleton frames are selected from the 3D behavior samples to form a critical-step set. This set serves as a bridge linking 2D observed behavior with 3D human joint features, so 3D reconstruction is not required during the 2D behavior recognition phase. At the beginning of recognition, the best match for every frame of the 2D observed sample is found in the 3D skeleton set; the observed sequence of 2D skeleton frames is then identified as a specific 3D behavior by the behavior classifier. The effectiveness of the proposed algorithm is demonstrated with experiments in an environment similar to a power generation plant.

  12. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing images with horizontally separated cameras. Objects at different depths will be projected with different horizontal displacement on the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task since the intended and/or unintended side effects from 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the 3D cameras' depth capture capabilities.
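
    The quantitative link between the horizontal displacement (disparity) discussed above and scene depth is, for an ideal rectified stereo pair, the standard triangulation relation (stated here as background; the proposed depth transfer curve characterizes how a real 3D camera departs from this ideal):

      \[ Z \;=\; \frac{f\,B}{d}, \]

    where Z is the distance to the object, f the focal length, B the baseline separating the two cameras, and d the disparity between the left and right projections. Because Z varies inversely with d, equal disparity steps correspond to increasingly large depth steps as the scene recedes.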

  13. Precise stacking of decellularized extracellular matrix based 3D cell-laden constructs by a 3D cell printing system equipped with heating modules.

    PubMed

    Ahn, Geunseon; Min, Kyung-Hyun; Kim, Changhwan; Lee, Jeong-Seok; Kang, Donggu; Won, Joo-Yun; Cho, Dong-Woo; Kim, Jun-Young; Jin, Songwan; Yun, Won-Soo; Shim, Jin-Hyung

    2017-08-17

    Three-dimensional (3D) cell printing systems allow the controlled and precise deposition of multiple cells in 3D constructs. Hydrogel materials have been used extensively as printable bioinks owing to their ability to safely encapsulate living cells. However, hydrogel-based bioinks have drawbacks for cell printing, e.g. inappropriate crosslinking and liquid-like rheological properties, which hinder precise 3D shaping. Therefore, in this study, we investigated the influence of various factors (e.g. bioink concentration, viscosity, and extent of crosslinking) on cell printing and established a new 3D cell printing system equipped with heating modules for the precise stacking of decellularized extracellular matrix (dECM)-based 3D cell-laden constructs. Because the pH-adjusted bioink isolated from native tissue is safely gelled at 37 °C, our heating system facilitated the precise stacking of dECM bioinks by enabling simultaneous gelation during printing. We observed greater printability compared with that of a non-heating system. These results were confirmed by mechanical testing and 3D construct stacking analyses. We also confirmed that our heating system did not elicit negative effects, such as cell death, in the printed cells. Conclusively, these results hold promise for the application of 3D bioprinting to tissue engineering and drug development.

  14. 3D displacement field measurement with correlation based on the micro-geometrical surface texture

    NASA Astrophysics Data System (ADS)

    Bubaker-Isheil, Halima; Serri, Jérôme; Fontaine, Jean-François

    2011-07-01

    Image correlation methods are widely used in experimental mechanics to obtain displacement field measurements. Currently, these methods are applied using digital images of the initial and deformed surfaces sprayed with black or white paint. Speckle patterns are then captured and the correlation is performed with a high degree of accuracy to an order of 0.01 pixels. In 3D, however, stereo-correlation leads to a lower degree of accuracy. Correlation techniques are based on the search for a sub-image (or pattern) displacement field. The work presented in this paper introduces a new correlation-based approach for 3D displacement field measurement that uses an additional 3D laser scanner and a CMM (Coordinate Measurement Machine). Unlike most existing methods that require the presence of markers on the observed object (such as black speckle, grids or random patterns), this approach relies solely on micro-geometrical surface textures such as waviness, roughness and aperiodic random defects. The latter are assumed to remain sufficiently small thus providing an adequate estimate of the particle displacement. The proposed approach can be used in a wide range of applications such as sheet metal forming with large strains. The method proceeds by first obtaining cloud points using the 3D laser scanner mounted on a CMM. These points are used to create 2D maps that are then correlated. In this respect, various criteria have been investigated for creating maps consisting of patterns, which facilitate the correlation procedure. Once the maps are created, the correlation between both configurations (initial and moved) is carried out using traditional methods developed for field measurements. Measurement validation was conducted using experiments in 2D and 3D with good results for rigid displacements in 2D, 3D and 2D rotations.

  15. 7 CFR 1778.3 - Objective.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 7 (Agriculture), Volume 12, 2010-01-01. Regulations of the Department of Agriculture (Continued), Rural Utilities Service: Emergency and Imminent Community Water Assistance Grants, § 1778.3 Objective. The objective of the...

  16. 2D approaches to 3D watermarking: state-of-the-art and perspectives

    NASA Astrophysics Data System (ADS)

    Mitrea, M.; Duţă, S.; Prêteux, F.

    2006-02-01

    With the advent of the Information Society, video, audio, speech, and 3D media represent the source of huge economic benefits. Consequently, there is a continuously increasing demand for protecting their related intellectual property rights. The solution can be provided by robust watermarking, a research field which exploded in the last 7 years. However, the largest part of the scientific effort was devoted to video and audio protection, the 3D objects being quite neglected. In the absence of any standardisation attempt, the paper starts by summarising the approaches developed in this respect and by further identifying the main challenges to be addressed in the next years. Then, it describes an original oblivious watermarking method devoted to the protection of the 3D objects represented by NURBS (Non uniform Rational B Spline) surfaces. Applied to both free form objects and CAD models, the method exhibited very good transparency (no visible differences between the marked and the unmarked model) and robustness (with respect to both traditional attacks and to NURBS processing).

  17. The Impact of Interactivity on Comprehending 2D and 3D Visualizations of Movement Data.

    PubMed

    Amini, Fereshteh; Rufiange, Sebastien; Hossain, Zahid; Ventura, Quentin; Irani, Pourang; McGuffin, Michael J

    2015-01-01

    GPS, RFID, and other technologies have made it increasingly common to track the positions of people and objects over time as they move through two-dimensional spaces. Visualizing such spatio-temporal movement data is challenging because each person or object involves three variables (two spatial variables as a function of the time variable), and simply plotting the data on a 2D geographic map can result in overplotting and occlusion that hides details. This also makes it difficult to understand correlations between space and time. Software such as GeoTime can display such data with a three-dimensional visualization, where the 3rd dimension is used for time. This allows for the disambiguation of spatially overlapping trajectories, and in theory, should make the data clearer. However, previous experimental comparisons of 2D and 3D visualizations have so far found little advantage in 3D visualizations, possibly due to the increased complexity of navigating and understanding a 3D view. We present a new controlled experimental comparison of 2D and 3D visualizations, involving commonly performed tasks that have not been tested before, and find advantages in 3D visualizations for more complex tasks. In particular, we tease out the effects of various basic interactions and find that the 2D view relies significantly on "scrubbing" the timeline, whereas the 3D view relies mainly on 3D camera navigation. Our work helps to improve understanding of 2D and 3D visualizations of spatio-temporal data, particularly with respect to interactivity.

  18. A 2D range Hausdorff approach to 3D facial recognition.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
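
    For reference, the symmetric Hausdorff distance that the algorithm above adapts to 2D range images is defined for two point sets A and B as

      \[ H(A,B) \;=\; \max\bigl\{\, h(A,B),\; h(B,A) \,\bigr\}, \qquad h(A,B) \;=\; \max_{a \in A} \; \min_{b \in B} \, \lVert a - b \rVert , \]

    where h(A,B) is the directed Hausdorff distance. Robust matching variants often replace the outer maximum in h with a rank statistic (partial Hausdorff matching) so that a small fraction of outlier points cannot dominate the score.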

  19. From 2D to 3D Supervised Segmentation and Classification for Cultural Heritage Applications

    NASA Astrophysics Data System (ADS)

    Grilli, E.; Dininno, D.; Petrucci, G.; Remondino, F.

    2018-05-01

    The digital management of architectural heritage information is still a complex problem, as a heritage object requires an integrated representation of various types of information in order to develop appropriate restoration or conservation strategies. Currently, there is extensive research focused on automatic procedures of segmentation and classification of 3D point clouds or meshes, which can accelerate the study of a monument and integrate it with heterogeneous information and attributes, useful to characterize and describe the surveyed object. The aim of this study is to propose an optimal, repeatable and reliable procedure to manage various types of 3D surveying data and associate them with heterogeneous information and attributes to characterize and describe the surveyed object. In particular, this paper presents an approach for classifying 3D heritage models, starting from the segmentation of their textures based on supervised machine learning methods. Experimental results run on three different case studies demonstrate that the proposed approach is effective and with many further potentials.

  20. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    NASA Astrophysics Data System (ADS)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted more attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is more desirable owing to its advantage of balancing labor cost and 3D effects. The location of key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity more reliable and to reduce the depth propagation errors caused by occlusion. Potential key-frames are localized in terms of clustered color variation and motion intensity. The key-frame interval is also taken into account to keep the accumulated propagation errors under control and to guarantee minimal user interaction. Once their depth maps are aligned through user interaction, the depth maps of non-key-frames are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom, a bi-directional depth propagation scheme is adopted in which a non-key frame is interpolated from two adjacent key frames. The experimental results show that the proposed scheme has better performance than existing 2D-to-3D schemes with a fixed key-frame interval.

  1. 3D and 4D magnetic susceptibility tomography based on complex MR images

    DOEpatents

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, .chi. (x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of .chi. (x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
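
    The inverse problem sketched in this abstract corresponds to the usual quantitative susceptibility mapping formulation. Written out as general background (not the patent's exact notation), the field perturbation carried by the T2* phase is a convolution of the susceptibility map with a dipole kernel, and the TV-regularized reconstruction solves

      \[ \Delta B(\mathbf{r}) \;=\; B_0 \,\bigl(d \ast \chi\bigr)(\mathbf{r}), \qquad \hat{\chi} \;=\; \arg\min_{\chi}\; \tfrac{1}{2}\,\bigl\lVert B_0\,(d \ast \chi) - \Delta B_{\mathrm{meas}} \bigr\rVert_2^2 \;+\; \lambda\,\mathrm{TV}(\chi), \]

    where d is the unit dipole kernel, whose Fourier-domain form is D(k) = 1/3 - k_z^2/|k|^2, ΔB_meas is the field map derived from the unwrapped phase, and λ sets the strength of the total-variation penalty, which can be minimized with split Bregman iterations as described above.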

  2. 3D Printing: How Much Will It Improve the DoD Supply Chain of the Future

    DTIC Science & Technology

    2014-06-01

    Defense AT&L, May–June 2014. 3D Printing: How Much Will It Improve the DoD Supply Chain of the Future? Robin Brown, Jim Davis, Mark Dobson ... so? DoD Enters the 3D Printing Arena. First let’s set the stage by defining 3D printing. To put it simply, 3D printing is a manufacturing process in ... where the object is built up from scratch, which is why 3D printing is also referred to as "additive manufacturing." This process is the opposite of the

  3. The three-dimensional Event-Driven Graphics Environment (3D-EDGE)

    NASA Technical Reports Server (NTRS)

    Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.

    1993-01-01

    Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for NASA GSFC's (GSFC) Communications Link Analysis and Simulation System (CLASS). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.

  4. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
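
    For readers who have not seen it, a complete VPython program in the spirit described above looks like the sketch below. It is written against the current vpython package API rather than the original visual module cited in the abstract, so treat the import line as an assumption.

      from vpython import sphere, box, vector, color, rate

      # A ball bouncing on a floor: the program only updates positions; the
      # VPython layer renders the 3D scene and handles mouse navigation.
      floor = box(pos=vector(0, -1, 0), size=vector(8, 0.2, 8), color=color.green)
      ball = sphere(pos=vector(0, 4, 0), radius=0.5, color=color.red)

      velocity = vector(0, 0, 0)
      g = vector(0, -9.8, 0)
      dt = 0.01

      while True:
          rate(100)                                    # at most 100 loop iterations per second
          velocity = velocity + g * dt                 # update velocity from gravity
          ball.pos = ball.pos + velocity * dt          # update position
          if ball.pos.y < floor.pos.y + ball.radius:   # bounce off the floor
              velocity.y = -velocity.y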

  5. Hands-On Data Analysis: Using 3D Printing to Visualize Reaction Progress Surfaces

    ERIC Educational Resources Information Center

    Higman, Carolyn S.; Situ, Henry; Blacklin, Peter; Hein, Jason E.

    2017-01-01

    Advances in 3D printing technology over the past decade have led to its expansion into all subfields of science, including chemistry. This technology provides useful teaching tools that facilitate communication of difficult chemical concepts to students and researchers. Presented here is the use of 3D printing technology to create tangible models…

  6. Design of 3D simulation engine for oilfield safety training

    NASA Astrophysics Data System (ADS)

    Li, Hua-Ming; Kang, Bao-Sheng

    2015-03-01

    To meet the demand for rapid custom development of 3D simulation systems for oilfield safety training, this paper designs and implements a 3D simulation engine based on a script-driven method, a multi-layer structure, pre-defined entity objects, and high-level tools such as a scene editor, a script editor, and a program loader. A scripting language has been defined to control the system's progress, events, and operating results. A training instructor can use this engine to edit 3D virtual scenes, set the properties of entity objects, define the logic script of a task, and produce a 3D simulation training system without any programming skills. By extending the entity classes, the engine can be quickly applied to other virtual training areas.

  7. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  8. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  9. 3-D lithium ion microbattery

    NASA Astrophysics Data System (ADS)

    Yeh, Yuting

    The lithium-ion battery has emerged as a common power source for portable consumer electronics since its debut two decades ago. Due to the low atomic weight and high electrochemical activity of lithium chemistry, lithium-ion battery has a higher energy density as compared to other battery systems, such as Ni-Cd, Ni-MH, and lead-acid batteries. As a result, use of lithium-ion batteries enables the size of batteries to be effectively reduced without compromising capacity. More importantly, as battery size is reduced, it enhances the applications of portable electronics, increasing the convenience of use. The 3-D battery architecture described in the dissertation is believed to be a new paradigm for future batteries. The architecture features coupled 3-D electrodes to provide better charge/discharge kinetics and a higher charge capacity per footprint area. The overarching objective of this dissertation is to implement the 3-D architecture using the lithium-ion chemistry. The 3-D lithium-ion batteries are designed to provide high areal energy density without compromising power density. The dissertation is comprised of four interrelated sections. First, a simulation was conducted to identify key battery parameters and to define an ideal three-dimensional cell structure. The second part of the research involved identifying fabrication routes to build the 3-D electrode, which was the key design element in the 3-D paradigm. The third part of the dissertation was to correlate the electrode performance with its geometric features. In particular, the influence of aspect ratio was investigated. Lastly, an electrolyte/separator was designed and fabricated based on the existing 3-D electrode configuration. This enabled 3-D battery to be assembled.

  10. SMART Security Cooperation Objectives: Improving DoD Planning and Guidance

    DTIC Science & Technology

    2016-01-01

    integrate them into a system for assessing, monitoring, and evaluating security cooperation programs and activities. This report evaluates DoD’s...effectiveness in developing SMART security cooperation objectives that facilitate assessment, monitoring, and evaluation. It also proposes a systematic...

  11. Towards next generation 3D cameras

    NASA Astrophysics Data System (ADS)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.
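
    The time-of-flight cameras mentioned above infer depth from the round-trip travel time of light. The two textbook relations (pulsed and continuous-wave modulation) are sketched below as a generic illustration; the example numbers are placeholders and are not taken from the abstract.

    C = 299_792_458.0  # speed of light, m/s

    def pulsed_tof_depth(round_trip_time_s):
        # Pulsed ToF: light covers the camera-object distance twice.
        return C * round_trip_time_s / 2.0

    def cw_tof_depth(phase_rad, mod_freq_hz):
        # Continuous-wave ToF: depth from the measured phase shift,
        # ambiguous beyond half the modulation wavelength.
        return C * phase_rad / (4.0 * 3.141592653589793 * mod_freq_hz)

    print(pulsed_tof_depth(6.67e-9))  # a ~6.67 ns round trip is about 1 m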

  12. Precipitation Processes Developed During ARM (1997), TOGA COARE (1992) GATE (1974), SCSMEX (1998), and KWAJEX (1999): Consistent 3D, Semi-3D and 3D Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Hou, A.; Atlas, R.; Starr, D.; Sud, Y.

    2003-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing; in these 3D simulations, the model domain was small, and the integration time was 6 hours. Only recently have 3D experiments been performed for multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research (NCAR) and at NASA Goddard Space Flight Center. At Goddard, a 3D Goddard Cumulus Ensemble (GCE) model was used to simulate periods during TOGA COARE, GATE, SCSMEX, ARM, and KWAJEX using a 512 by 512 km domain (with 2-km resolution). The results indicate that surface precipitation and latent heating profiles are very similar between the 2D and 3D GCE model simulations. The major objectives of this paper are: (1) to assess the performance of the super-parameterization technique, (2) to calculate and examine the surface energy (especially radiation) and water budgets, and (3) to identify the differences and similarities in the organization and entrainment rates of convection between simulated 2D and 3D cloud systems.

  13. Social interaction facilitates word learning in preverbal infants: Word-object mapping and word segmentation.

    PubMed

    Hakuno, Yoko; Omori, Takahide; Yamamoto, Jun-Ichi; Minagawa, Yasuyo

    2017-08-01

    In natural settings, infants learn spoken language with the aid of a caregiver who explicitly provides social signals. Although previous studies have demonstrated that young infants are sensitive to these signals that facilitate language development, the impact of real-life interactions on early word segmentation and word-object mapping remains elusive. We tested whether infants aged 5-6 months and 9-10 months could segment a word from continuous speech and acquire a word-object relation in an ecologically valid setting. In Experiment 1, infants were exposed to a live tutor, while in Experiment 2, another group of infants were exposed to a televised tutor. Results indicate that both younger and older infants were capable of segmenting a word and learning a word-object association only when the stimuli were derived from a live tutor in a natural manner, suggesting that real-life interaction enhances the learning of spoken words in preverbal infants. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Digital holographic 3D imaging spectrometry (a review)

    NASA Astrophysics Data System (ADS)

    Yoshimori, Kyu

    2017-09-01

    This paper reviews recent progress in digital holographic 3D imaging spectrometry. The principle of this method is a marriage of incoherent holography and Fourier transform spectroscopy. The review covers the principle, the signal-processing procedure, and experimental results in which a multispectral set of 3D images is obtained for spatially incoherent, polychromatic objects.

  15. Estimation of 3D shape from image orientations.

    PubMed

    Fleming, Roland W; Holtmann-Rice, Daniel; Bülthoff, Heinrich H

    2011-12-20

    One of the main functions of vision is to estimate the 3D shape of objects in our environment. Many different visual cues, such as stereopsis, motion parallax, and shading, are thought to be involved. One important cue that remains poorly understood comes from surface texture markings. When a textured surface is slanted in 3D relative to the observer, the surface patterns appear compressed in the retinal image, providing potentially important information about 3D shape. What is not known, however, is how the brain actually measures this information from the retinal image. Here, we explain how the key information could be extracted by populations of cells tuned to different orientations and spatial frequencies, like those found in the primary visual cortex. To test this theory, we created stimuli that selectively stimulate such cell populations, by "smearing" (filtering) images of 2D random noise into specific oriented patterns. We find that the resulting patterns appear vividly 3D, and that increasing the strength of the orientation signals progressively increases the sense of 3D shape, even though the filtering we apply is physically inconsistent with what would occur with a real object. This finding suggests we have isolated key mechanisms used by the brain to estimate shape from texture. Crucially, we also find that adapting the visual system's orientation detectors to orthogonal patterns causes unoriented random noise to look like a specific 3D shape. Together these findings demonstrate a crucial role of orientation detectors in the perception of 3D shape.
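
    The stimuli described above were produced by "smearing" (filtering) 2D random noise into oriented patterns. The sketch below shows one generic way to make such an oriented smear with NumPy/SciPy (rotate, blur strongly along one axis, rotate back); the angle and blur strength are arbitrary placeholders and the study's actual filters are not reproduced here.

    import numpy as np
    from scipy import ndimage

    def oriented_smear(shape=(256, 256), angle_deg=30.0, strength=8.0, seed=0):
        # Smear isotropic noise along a single orientation so that local
        # orientation (not real surface texture) carries the apparent 3D signal.
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal(shape)
        rotated = ndimage.rotate(noise, angle_deg, reshape=False, mode="reflect")
        smeared = ndimage.gaussian_filter(rotated, sigma=(0.5, strength))
        return ndimage.rotate(smeared, -angle_deg, reshape=False, mode="reflect")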

  16. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows context-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  17. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
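
    As a concrete illustration of the data the record above describes, the sketch below reads a multi-block PLOT3D solution (Q) file: block dimensions, per-block reference conditions, and the five conservative variables (density, x-, y-, z-momentum, stagnation energy) per grid point. It assumes one particular variant of the format (plain binary, no Fortran record markers, 32-bit integers and floats, variables stored one at a time in Fortran index order); real files may be formatted, record-delimited, or double precision. The function names and the ideal-gas pressure example are illustrative and not part of the PLOT3D distribution.

    import numpy as np

    def read_plot3d_q(path, dtype=np.float32):
        # Minimal multi-block Q-file reader (assumed layout, see note above):
        #   nblocks; (ni, nj, nk) per block; then per block
        #   mach, alpha, reyn, time followed by 5 variables over the block.
        with open(path, "rb") as f:
            nblocks = int(np.fromfile(f, dtype=np.int32, count=1)[0])
            dims = np.fromfile(f, dtype=np.int32, count=3 * nblocks).reshape(nblocks, 3)
            blocks = []
            for ni, nj, nk in dims:
                mach, alpha, reyn, time = np.fromfile(f, dtype=dtype, count=4)
                q = np.fromfile(f, dtype=dtype, count=5 * ni * nj * nk)
                # Fortran order: i varies fastest, variable index slowest.
                q = q.reshape(5, nk, nj, ni)
                blocks.append({"dims": (int(ni), int(nj), int(nk)),
                               "conditions": (mach, alpha, reyn, time),
                               "q": q})
        return blocks

    def pressure(q, gamma=1.4):
        # One example of a derived function a viewer could display:
        # ideal-gas pressure from the conservative variables.
        rho, ru, rv, rw, e = q
        return (gamma - 1.0) * (e - 0.5 * (ru**2 + rv**2 + rw**2) / rho)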

  18. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  19. Free and open-source automated 3-D microscope.

    PubMed

    Wijnen, Bas; Petersen, Emily E; Hunt, Emily J; Pearce, Joshua M

    2016-11-01

    Open-source technology not only has facilitated the expansion of the greater research community, but by lowering costs it has encouraged innovation and customizable design. The field of automated microscopy has continued to be a challenge in accessibility due to the expense and the inflexible, noninterchangeable stages. This paper presents a low-cost, open-source microscope 3-D stage. A RepRap 3-D printer was converted to an optical microscope equipped with a customized, 3-D printed holder for a USB microscope. Precision measurements were determined to have an average error of 10 μm at the maximum speed and 27 μm at the minimum recorded speed. Accuracy tests yielded an error of 0.15%. The machine is a true 3-D stage and thus able to operate with USB microscopes or conventional desktop microscopes. It is larger than all commercial alternatives, and is thus capable of capturing high-depth images over unprecedented areas and complex geometries. The repeatability is below that of 2-D microscope stages, but testing shows that it is adequate for the majority of scientific applications. The open-source microscope stage costs less than 3-9% of the closest proprietary commercial stages. This extreme affordability vastly improves accessibility for 3-D microscopy throughout the world. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
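
    A converted RepRap stage like the one above is typically driven with ordinary G-code motion commands while the USB microscope grabs a frame at each stop. The sketch below generates a serpentine raster scan with dwell pauses; the feed rate, dwell time, and the lack of any camera-trigger integration are assumptions made for illustration rather than details taken from the paper.

    def raster_scan_gcode(x_range_mm, y_range_mm, step_mm, feed_mm_min=300):
        # Serpentine XY scan: G21/G90 set units and absolute mode, G1 moves
        # the stage, G4 dwells so an external capture loop can grab a frame.
        lines = ["G21 ; millimetres", "G90 ; absolute positioning"]
        xs = [i * step_mm for i in range(int(x_range_mm / step_mm) + 1)]
        ys = [j * step_mm for j in range(int(y_range_mm / step_mm) + 1)]
        for row, y in enumerate(ys):
            cols = xs if row % 2 == 0 else list(reversed(xs))
            for x in cols:
                lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed_mm_min}")
                lines.append("G4 P200 ; dwell 200 ms for image capture")
        return "\n".join(lines)

    print(raster_scan_gcode(10, 10, 2.5))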

  20. 3D treatment planning systems.

    PubMed

    Saw, Cheng B; Li, Sicong

    2018-01-01

    Three-dimensional (3D) treatment planning systems have evolved and become crucial components of modern radiation therapy. The systems are computer-aided design or planning software packages that speed up the treatment planning process to arrive at the best dose plans for patients undergoing radiation therapy. Furthermore, the systems provide new technology to solve problems that would not have been considered without the use of computers, such as conformal radiation therapy (CRT), intensity-modulated radiation therapy (IMRT), and volumetric modulated arc therapy (VMAT). The 3D treatment planning systems vary amongst the vendors and also the dose delivery systems they are designed to support. As such, these systems have different planning tools to generate the treatment plans and convert the treatment plans into executable instructions that can be implemented by the dose delivery systems. The rapid advancements in computer technology and accelerators have facilitated constant upgrades and the introduction of dose delivery systems that differ from the traditional C-arm type medical linear accelerators. The focus of this special issue is to gather relevant 3D treatment planning systems for the radiation oncology community to keep abreast of technology advancement by assessing the planning tools available as well as those unique "tricks or tips" used to support the different dose delivery systems. Copyright © 2018 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  1. Improved 3D live-wire method with application to 3D CT chest image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Kongkuo; Higgins, William E.

    2006-03-01

    The definition of regions of interest (ROIs), such as suspect cancer nodules or lymph nodes in 3D CT chest images, is often difficult because of the complexity of the phenomena that give rise to them. Manual slice tracing has been used widely for years for such problems, because it is easy to implement and guaranteed to work. But the manual method is extremely time-consuming, especially for high-resolution 3D images which may have hundreds of slices, and it is subject to operator biases. Numerous automated image-segmentation methods have been proposed, but they are generally strongly application dependent, and even the "most robust" methods have difficulty in defining complex anatomical ROIs. To address this problem, the semi-automatic interactive paradigm referred to as "live wire" segmentation has been proposed by researchers. In live-wire segmentation, the human operator interactively defines an ROI's boundary guided by an active automated method which suggests what to define. This process in general is far faster, more reproducible and accurate than manual tracing, while, at the same time, permitting the definition of complex ROIs having ill-defined boundaries. We propose a 2D live-wire method employing an improved cost function over previous works. In addition, we define a new 3D live-wire formulation that enables rapid definition of 3D ROIs. The method only requires the human operator to consider a few slices in general. Experimental results indicate that the new 2D and 3D live-wire approaches are efficient, allow for high reproducibility, and are reliable for 2D and 3D object segmentation.
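
    Live-wire segmentation of this kind is usually formulated as a shortest-path search over the pixel graph, with low edge costs along strong image boundaries. The sketch below is a generic 2D Dijkstra-based live-wire step; the cost image is a simple placeholder (e.g. the reciprocal of gradient magnitude), and neither the paper's improved cost function nor its 3D formulation is reproduced.

    import heapq
    import numpy as np

    def live_wire_path(cost, seed, target):
        # Dijkstra from the user's seed pixel; the returned path is the
        # boundary segment the live wire "snaps" to. Assumes the target is
        # reachable and cost is a 2D array of non-negative per-pixel costs.
        h, w = cost.shape
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[seed] = 0.0
        heap = [(0.0, seed)]
        steps = [(-1, 0), (1, 0), (0, -1), (0, 1),
                 (-1, -1), (-1, 1), (1, -1), (1, 1)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == target:
                break
            if d > dist[r, c]:
                continue
            for dr, dc in steps:
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    nd = d + cost[nr, nc] * (1.414 if dr and dc else 1.0)
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
        path, node = [target], target
        while node != seed:
            node = prev[node]
            path.append(node)
        return path[::-1]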

  2. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    NASA Astrophysics Data System (ADS)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

    This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
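
    Chroma-key insertion, used above to ease object extraction, amounts to masking out pixels close to a known key colour. A minimal sketch follows; the green key colour and the Euclidean RGB threshold are assumptions for illustration, not values given in the paper.

    import numpy as np

    def chroma_key_mask(frame_rgb, key=(0, 255, 0), tol=80):
        # True where the pixel is far from the key colour, i.e. where the
        # foreground video object is; the masked object can then be scaled,
        # repositioned and composited into the 3D virtual meeting space.
        diff = frame_rgb.astype(np.int32) - np.array(key, dtype=np.int32)
        distance = np.sqrt((diff ** 2).sum(axis=-1))
        return distance > tol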

  3. Three-dimensional (3D) printing and its applications for aortic diseases.

    PubMed

    Hangge, Patrick; Pershad, Yash; Witting, Avery A; Albadawi, Hassan; Oklu, Rahmi

    2018-04-01

    Three-dimensional (3D) printing is a process which generates prototypes from virtual objects in computer-aided design (CAD) software. Since 3D printing enables the creation of customized objects, it is a rapidly expanding field in an age of personalized medicine. We discuss the use of 3D printing in surgical planning, training, and creation of devices for the treatment of aortic diseases. 3D printing can provide operators with a hands-on model to interact with complex anatomy, enable prototyping of devices for implantation based upon anatomy, or even provide pre-procedural simulation. Potential exists to expand upon current uses of 3D printing to create personalized implantable devices such as grafts. Future studies should aim to demonstrate the impact of 3D printing on outcomes to make this technology more accessible to patients with complex aortic diseases.

  4. Three-dimensional (3D) printing and its applications for aortic diseases

    PubMed Central

    Hangge, Patrick; Pershad, Yash; Witting, Avery A.; Albadawi, Hassan

    2018-01-01

    Three-dimensional (3D) printing is a process which generates prototypes from virtual objects in computer-aided design (CAD) software. Since 3D printing enables the creation of customized objects, it is a rapidly expanding field in an age of personalized medicine. We discuss the use of 3D printing in surgical planning, training, and creation of devices for the treatment of aortic diseases. 3D printing can provide operators with a hands-on model to interact with complex anatomy, enable prototyping of devices for implantation based upon anatomy, or even provide pre-procedural simulation. Potential exists to expand upon current uses of 3D printing to create personalized implantable devices such as grafts. Future studies should aim to demonstrate the impact of 3D printing on outcomes to make this technology more accessible to patients with complex aortic diseases. PMID:29850416

  5. Organizational Learning Goes Virtual?: A Study of Employees' Learning Achievement in Stereoscopic 3D Virtual Reality

    ERIC Educational Resources Information Center

    Lau, Kung Wong

    2015-01-01

    Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular to the stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…

  6. Recent Advances in Visualizing 3D Flow with LIC

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1998-01-01

    Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is an elegant and versatile technique for representing directional information via patterns of correlation in a texture. Although most commonly used to depict 2D flow, or flow over a surface in 3D, LIC methods can equivalently be used to portray 3D flow through a volume. However, the popularity of LIC as a device for illustrating 3D flow has historically been limited both by the computational expense of generating and rendering such a 3D texture and by the difficulties inherent in clearly and effectively conveying the directional information embodied in the volumetric output textures that are produced. In an earlier paper, we briefly discussed some of the factors that may underlie the perceptual difficulties that we can encounter with dense 3D displays and outlined several strategies for more effectively visualizing 3D flow with volume LIC. In this article, we review in more detail techniques for selectively emphasizing critical regions of interest in a flow and for facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines, and we demonstrate new methods for efficiently incorporating an indication of orientation into a flow representation and for conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations.
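
    For orientation, the sketch below gives a deliberately minimal 2D LIC: each output pixel averages an input noise texture along a short streamline traced forward and backward through the vector field, with unit steps and nearest-neighbour sampling. Production LIC codes, and the volume (3D) variant discussed above, use better integrators and filter kernels.

    import numpy as np

    def lic_2d(vx, vy, noise, length=15):
        # vx, vy: vector field components; noise: input texture (same shape).
        h, w = noise.shape
        out = np.zeros_like(noise)
        mag = np.hypot(vx, vy) + 1e-12
        ux, uy = vx / mag, vy / mag  # unit direction field
        for r in range(h):
            for c in range(w):
                total, count = 0.0, 0
                for sign in (+1.0, -1.0):
                    x, y = float(c), float(r)
                    for _ in range(length):
                        i, j = int(round(y)), int(round(x))
                        if not (0 <= i < h and 0 <= j < w):
                            break
                        total += noise[i, j]
                        count += 1
                        x += sign * ux[i, j]
                        y += sign * uy[i, j]
                out[r, c] = total / max(count, 1)
        return out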

  7. 3D Printing of CT Dataset: Validation of an Open Source and Consumer-Available Workflow.

    PubMed

    Bortolotto, Chandra; Eshja, Esmeralda; Peroni, Caterina; Orlandi, Matteo A; Bizzotto, Nicola; Poggi, Paolo

    2016-02-01

    The broad availability of cheap three-dimensional (3D) printing equipment has raised the need for a thorough analysis on its effects on clinical accuracy. Our aim is to determine whether the accuracy of 3D printing process is affected by the use of a low-budget workflow based on open source software and consumer's commercially available 3D printers. A group of test objects was scanned with a 64-slice computed tomography (CT) in order to build their 3D copies. CT datasets were elaborated using a software chain based on three free and open source software. Objects were printed out with a commercially available 3D printer. Both the 3D copies and the test objects were measured using a digital professional caliper. Overall, the objects' mean absolute difference between test objects and 3D copies is 0.23 mm and the mean relative difference amounts to 0.55 %. Our results demonstrate that the accuracy of 3D printing process remains high despite the use of a low-budget workflow.

  8. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Ting; Kim, Sung; Goyal, Sharad

    2010-01-15

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real time image guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based "demons" algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint in addition to the grayscale difference between CT and CBCT in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, prostate gland, seminal vesicle, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purpose. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were only used as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and the ground truth in validation. By registering the planning CT to the CBCT, a
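
    For orientation, the plain grayscale "demons" update that the constrained framework above builds on can be sketched as: warp the moving image with the current displacement field, compute Thirion's force from the intensity difference and the fixed-image gradient, and smooth the field (the diffusion step). The object-based global constraint and the seed constraint of the paper are not reproduced here, and the smoothing parameter is a placeholder.

    import numpy as np
    from scipy import ndimage

    def demons_step(fixed, moving, u, smooth_sigma=2.0):
        # u is a (2, H, W) displacement field (row and column components).
        rows, cols = np.indices(fixed.shape).astype(float)
        warped = ndimage.map_coordinates(moving, [rows + u[0], cols + u[1]],
                                         order=1, mode="nearest")
        diff = warped - fixed
        gy, gx = np.gradient(fixed)
        denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
        # Thirion's demons force, then Gaussian smoothing as the diffusion step.
        u[0] -= ndimage.gaussian_filter(diff * gy / denom, smooth_sigma)
        u[1] -= ndimage.gaussian_filter(diff * gx / denom, smooth_sigma)
        return u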

  9. 3-D video techniques in endoscopic surgery.

    PubMed

    Becker, H; Melzer, A; Schurr, M O; Buess, G

    1993-02-01

    Three-dimensional visualisation of the operative field is an important requisite for precise and fast handling of open surgical operations. Up to now it has only been possible to display a two-dimensional image on the monitor during endoscopic procedures. The increasing complexity of minimally invasive interventions requires endoscopic suturing and ligatures of larger vessels, which are difficult to perform without the impression of space. Three-dimensional vision therefore may decrease the operative risk, accelerate interventions and widen the operative spectrum. In April 1992 a 3-D video system developed at the Nuclear Research Center Karlsruhe, Germany (IAI Institute) was applied in various animal experimental procedures and clinically in laparoscopic cholecystectomy. The system works with a single monitor and active high-speed shutter glasses. Our first trials with this new 3-D imaging system clearly showed a facilitation of complex surgical manoeuvres like mobilisation of organs, preparation in the deep space and suture techniques. The 3-D system introduced in this article will enter the market in 1993 (Opticon Co., Karlsruhe, Germany).

  10. 3D Model Generation From the Engineering Drawing

    NASA Astrophysics Data System (ADS)

    Vaský, Jozef; Eliáš, Michal; Bezák, Pavol; Červeňanská, Zuzana; Izakovič, Ladislav

    2010-01-01

    The contribution deals with the transformation of engineering drawings in paper form into a 3D computer representation. A 3D computer model can be further processed in a CAD/CAM system, it can be modified and archived, and a technical drawing can then be generated from it as well. The transformation process from the paper form to digital data is a complex and difficult one, particularly owing to the different types of drawings, the forms of the displayed objects, and the errors and deviations from technical standards that are encountered. The algorithm for generating a 3D model from an orthogonal vector input representing a simplified technical drawing of a rotational part is described in this contribution. The algorithm was experimentally implemented as an ObjectARX application in the AutoCAD system, and a test sample representing a rotational part was used for verification.

  11. 3D reconstruction of SEM images by use of optical photogrammetry software.

    PubMed

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science and many biological questions require information about their true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscope reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry. In optical close range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaption. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. 3D printing from MRI Data: Harnessing strengths and minimizing weaknesses.

    PubMed

    Ripley, Beth; Levin, Dmitry; Kelil, Tatiana; Hermsen, Joshua L; Kim, Sooah; Maki, Jeffrey H; Wilson, Gregory J

    2017-03-01

    3D printing facilitates the creation of accurate physical models of patient-specific anatomy from medical imaging datasets. While the majority of models to date are created from computed tomography (CT) data, there is increasing interest in creating models from other datasets, such as ultrasound and magnetic resonance imaging (MRI). MRI, in particular, holds great potential for 3D printing, given its excellent tissue characterization and lack of ionizing radiation. There are, however, challenges to 3D printing from MRI data as well. Here we review the basics of 3D printing, explore the current strengths and weaknesses of printing from MRI data as they pertain to model accuracy, and discuss considerations in the design of MRI sequences for 3D printing. Finally, we explore the future of 3D printing and MRI, including creative applications and new materials. J. Magn. Reson. Imaging 2017;45:635-645. © 2016 International Society for Magnetic Resonance in Medicine.

  13. NASA VERVE: Interactive 3D Visualization Within Eclipse

    NASA Technical Reports Server (NTRS)

    Cohen, Tamar; Allan, Mark B.

    2014-01-01

    At NASA, we develop myriad Eclipse RCP applications to provide situational awareness for remote systems. The Intelligent Robotics Group at NASA Ames Research Center has developed VERVE - a high-performance robot user interface that provides scientists, robot operators, and mission planners with powerful, interactive 3D displays of remote environments. VERVE includes a 3D Eclipse view with an embedded Java Ardor3D scenario, including SWT and mouse controls which interact with the Ardor3D camera and objects in the scene. VERVE also includes Eclipse views for exploring and editing objects in the Ardor3D scene graph, and a HUD (Heads Up Display) framework allows Growl-style notifications and other textual information to be overlaid onto the 3D scene. We use VERVE to listen to telemetry from robots and display the robots and associated scientific data along the terrain they are exploring; VERVE can be used for any interactive 3D display of data. VERVE is now open source. VERVE derives from the prior Viz system, which was developed for Mars Polar Lander (2001) and used for the Mars Exploration Rover (2003) and the Phoenix Lander (2008). It has been used for ongoing research with IRG's K10 and KRex rovers in various locations. VERVE was used on the International Space Station during two experiments in 2013 - Surface Telerobotics, in which astronauts controlled robots on Earth from the ISS, and SPHERES, where astronauts controlled a free-flying robot on board the ISS. We will show in detail how to code with VERVE, how SWT controls interact with the Ardor3D scenario, and share example code.

  14. From molecular to macroscopic via the rational design of a self-assembled 3D DNA crystal.

    PubMed

    Zheng, Jianping; Birktoft, Jens J; Chen, Yi; Wang, Tong; Sha, Ruojie; Constantinou, Pamela E; Ginell, Stephan L; Mao, Chengde; Seeman, Nadrian C

    2009-09-03

    We live in a macroscopic three-dimensional (3D) world, but our best description of the structure of matter is at the atomic and molecular scale. Understanding the relationship between the two scales requires a bridge from the molecular world to the macroscopic world. Connecting these two domains with atomic precision is a central goal of the natural sciences, but it requires high spatial control of the 3D structure of matter. The simplest practical route to producing precisely designed 3D macroscopic objects is to form a crystalline arrangement by self-assembly, because such a periodic array has only conceptually simple requirements: a motif that has a robust 3D structure, dominant affinity interactions between parts of the motif when it self-associates, and predictable structures for these affinity interactions. Fulfilling these three criteria to produce a 3D periodic system is not easy, but should readily be achieved with well-structured branched DNA motifs tailed by sticky ends. Complementary sticky ends associate with each other preferentially and assume the well-known B-DNA structure when they do so; the helically repeating nature of DNA facilitates the construction of a periodic array. It is essential that the directions of propagation associated with the sticky ends do not share the same plane, but extend to form a 3D arrangement of matter. Here we report the crystal structure at 4 Å resolution of a designed, self-assembled, 3D crystal based on the DNA tensegrity triangle. The data demonstrate clearly that it is possible to design and self-assemble a well-ordered macromolecular 3D crystalline lattice with precise control.

  15. Precipitation Processes developed during ARM (1997), TOGA COARE (1992), GATE (1974), SCSMEX (1998), and KWAJEX (1999), Consistent 2D, semi-3D and 3D Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Hou, A.; Atlas, R.; Starr, D.; Sud, Y.

    2003-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. The major objectives of this paper are: (1) to assess the performance of the super-parameterization technique (i.e. is 2D or semi-3D CRM appropriate for the super-parameterization?); (2) calculate and examine the surface energy (especially radiation) and water budgets; (3) identify the differences and similarities in the organization and entrainment rates of convection between simulated 2D and 3D cloud systems.

  16. Virtual 3d City Modeling: Techniques and Applications

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Generally, three main Geomatics approaches are used for generating virtual 3-D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images with close-range photogrammetry, DSMs and texture mapping. We start this paper with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic and manual methods), and another based on the data-input technique (photogrammetry or laser techniques). After a detailed study of these, we give the conclusions of this research paper, together with a short justification and analysis and the present trends in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3-D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3-D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3-D city models. Photo-realistic, scalable, geo-referenced virtual 3

  17. a Proposal for Generalization of 3d Models

    NASA Astrophysics Data System (ADS)

    Uyar, A.; Ulugtekin, N. N.

    2017-11-01

    In recent years, 3D models have been created of many cities around the world. Most of these 3D city models have been introduced as purely graphic or geometric models, while the semantic and topographic aspects of the models have been neglected. In order to use 3D city models beyond this task, a generalization is necessary. CityGML is an open data model and XML-based format for the storage and exchange of virtual 3D city models. Level of Detail (LoD), which is an important concept for 3D modelling, describes the degree of abstraction at which real-world objects are represented. The paper first describes some requirements of 3D model generalization and then presents problems and approaches that have been developed in recent years. The paper concludes with a summary and an outlook on open problems and future work.

  18. Systems in Development: Motor Skill Acquisition Facilitates Three-Dimensional Object Completion

    ERIC Educational Resources Information Center

    Soska, Kasey C.; Adolph, Karen E.; Johnson, Scott P.

    2010-01-01

    How do infants learn to perceive the backs of objects that they see only from a limited viewpoint? Infants' 3-dimensional object completion abilities emerge in conjunction with developing motor skills--independent sitting and visual-manual exploration. Infants at 4.5 to 7.5 months of age (n = 28) were habituated to a limited-view object and tested…

  19. 3D printing: making things at the library.

    PubMed

    Hoy, Matthew B

    2013-01-01

    3D printers are a new technology that creates physical objects from digital files. Uses for these printers include printing models, parts, and toys. 3D printers are also being developed for medical applications, including printed bone, skin, and even complete organs. Although medical printing lags behind other uses for 3D printing, it has the potential to radically change the practice of medicine over the next decade. Falling costs for hardware have made 3D printers an inexpensive technology that libraries can offer their patrons. Medical librarians will want to be familiar with this technology, as it is sure to have wide-reaching effects on the practice of medicine.

  20. Improving 3d Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3d City Modeling

    NASA Astrophysics Data System (ADS)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

    The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, and computer games. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of, and especially searching, these 3D city models will not be done optimally. Even though current developments are based on an open data model adopted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research, we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. The advantages of implementing space-filling curves in 3D city modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert's curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its
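
    The clustering idea above is to map nearby 3D objects onto nearby positions of a one-dimensional space-filling curve so that they are stored close together. A correct 3D Hilbert encoder is fairly long, so the sketch below uses the simpler Morton (Z-order) curve as a stand-in for the same idea; it provides weaker locality than the 3D Hilbert curve the paper actually proposes. Coordinates are assumed to be pre-quantized 21-bit integers, and the building centroids in the usage example are made up.

    def part1by2(n):
        # Spread the bits of a 21-bit integer so they occupy every third bit.
        n &= 0x1FFFFF
        n = (n | (n << 32)) & 0x1F00000000FFFF
        n = (n | (n << 16)) & 0x1F0000FF0000FF
        n = (n | (n << 8))  & 0x100F00F00F00F00F
        n = (n | (n << 4))  & 0x10C30C30C30C30C3
        n = (n | (n << 2))  & 0x1249249249249249
        return n

    def morton3d(x, y, z):
        # Interleave the bits of quantized x, y, z into a single sort key.
        return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

    # Sorting building blocks by the key of their quantized centroid keeps
    # objects that are close in 3D close together in storage as well.
    buildings = [(12, 7, 3), (13, 7, 3), (500, 900, 40)]
    print(sorted(buildings, key=lambda b: morton3d(*b)))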

  1. Parallel CARLOS-3D code development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putnam, J.M.; Kotulski, J.D.

    1996-02-01

    CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method of moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body of revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines, and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance, while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.

  2. Laser Transfer of Metals and Metal Alloys for Digital Microfabrication of 3D Objects.

    PubMed

    Zenou, Michael; Sa'ar, Amir; Kotler, Zvi

    2015-09-02

    3D copper logos printed on epoxy glass laminates are demonstrated. The structures are printed using laser transfer of molten metal microdroplets. The example in the image shows letters of 50 µm width, with each letter being taller than the last, from a height of 40 µm ('s') to 190 µm ('l'). The scanning microscopy image is taken at a tilt, and the topographic image was taken using interferometric 3D microscopy, to show the effective control of this technique. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Sparse aperture 3D passive image sensing and recognition

    NASA Astrophysics Data System (ADS)

    Daneshpanah, Mehdi

    The way we perceive, capture, store, communicate and visualize the world has greatly changed in the past century. Novel three-dimensional (3D) imaging and display systems are being pursued both in academic and industrial settings. In many cases, these systems have revolutionized traditional approaches and/or enabled new technologies in other disciplines including medical imaging and diagnostics, industrial metrology, entertainment, robotics as well as defense and security. In this dissertation, we focus on novel aspects of sparse aperture multi-view imaging systems and their application in quantum-limited object recognition in two separate parts. In the first part, two concepts are proposed. First, a solution is presented that involves a generalized framework for 3D imaging using randomly distributed sparse apertures. Second, a method is suggested to extract the profile of objects in the scene through statistical properties of the reconstructed light field. In both cases, experimental results are presented that demonstrate the feasibility of the techniques. In the second part, the application of 3D imaging systems in sensing and recognition of objects is addressed. In particular, we focus on the scenario in which only tens of photons reach the sensor from the object of interest, as opposed to hundreds of billions of photons in normal imaging conditions. At this level, the quantum-limited behavior of light will dominate and traditional object recognition practices may fail. We suggest a likelihood-based object recognition framework that incorporates the physics of sensing at quantum-limited conditions. Sensor dark noise has been modeled and taken into account. This framework is applied to 3D sensing of thermal objects using visible spectrum detectors. Thermal objects as cold as 250 K are shown to provide enough signature photons to be sensed and recognized within background and dark noise with mature, visible band, image forming optics and detector arrays. The results
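
    The quantum-limited recognition framework described above scores object hypotheses by how well they explain a handful of detected photons. The sketch below illustrates the general idea with a per-pixel Poisson model plus a constant dark-count rate; the dissertation's actual sensor model, priors, and 3D reconstruction step are not reproduced, and the dark-rate value is a placeholder.

    import numpy as np

    def poisson_log_likelihood(photon_counts, expected, dark_rate=0.1):
        # log P(counts | hypothesis) summed over pixels, dropping the
        # count-only log(k!) term, which is the same for every hypothesis.
        lam = expected + dark_rate
        return np.sum(photon_counts * np.log(lam) - lam)

    def classify(photon_counts, templates, dark_rate=0.1):
        # Pick the object template whose expected photon image best
        # explains the sparse photon-count observation.
        scores = {name: poisson_log_likelihood(photon_counts, t, dark_rate)
                  for name, t in templates.items()}
        return max(scores, key=scores.get)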

  4. 3D printing in chemistry: past, present and future

    NASA Astrophysics Data System (ADS)

    Shatford, Ryan; Karanassios, Vassili

    2016-05-01

    In recent years, 3D printing for rapid prototyping using additive manufacturing has been receiving increased attention in the technical and scientific literature, including some chemistry-related journals. Furthermore, 3D printing technology (which defines the size and resolution of 3D objects) and the properties of printed materials (e.g., strength, resistance to chemical attack, electrical insulation) have proved to be important for chemistry-related applications. In this paper these are discussed in detail. In addition, the application of 3D printing to the development of Micro Plasma Devices (MPDs) is discussed, and 2D profilometry data of 3D-printed surfaces are reported. Finally, past and present chemistry and bio-related applications of 3D printing are reviewed and possible future directions are postulated.

  5. Assessment of 3D Models Used in Contours Studies

    ERIC Educational Resources Information Center

    Alvarez, F. J. Ayala; Parra, E. B. Blazquez; Tubio, F. Montes

    2015-01-01

    This paper presents an experimental research focusing on the view of first year students. The aim is to check the quality of implementing 3D models integrated in the curriculum. We search to determine students' preference between the various means facilitated in order to understand the given subject. Students have been respondents to prove the…

  6. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we will present our system architecture and component designs, hardware/software implementations, and experimental results. We will elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  7. Signal and Noise in 3D Environments

    DTIC Science & Technology

    2015-09-30

    complicated 3D environments. I have also been doing a great deal of work in modeling the noise field (the ocean soundscape) due to various sources... soundscape to learn about the ocean environment. I distinguish this from geoacoustic inversion and ocean tomography, in that the methods envisioned will rely...on broader features of the soundscape. OBJECTIVES: In the first phase of this effort we will focus on the 3D modeling solutions, documenting the

  8. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A binocular vision imaging system, which has a small field of view, cannot reconstruct the 3-D shape of a dynamic object. We propose a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. On the basis of the binocular vision imaging system, the linear array CCD binocular vision imaging system, which has a wider field of view, can reconstruct the 3-D morphology of objects in continuous motion, and the results are accurate. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction stages of the imaging system. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras are then used to capture images of the moving objects, and the results are matched and 3-D reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects; this work is of great significance for measuring the 3-D morphology of moving objects.
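
    Depth in a calibrated two-camera setup like the one above ultimately comes from triangulation; for rectified pinhole cameras the relation is Z = f * B / d. The sketch below shows this generic formula with placeholder numbers and does not model the linear-array CCD geometry or the calibration procedure described in the paper.

    def stereo_depth(disparity_px, focal_px, baseline_m):
        # Z = f * B / d for rectified cameras: focal length in pixels,
        # baseline in metres, disparity in pixels.
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px

    print(stereo_depth(25, 1500, 0.10))  # 25 px disparity, f = 1500 px, B = 10 cm -> 6.0 m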

  9. Use of 3D reconstruction cloacagrams and 3D printing in cloacal malformations.

    PubMed

    Ahn, Jennifer J; Shnorhavorian, Margarett; Amies Oelschlager, Anne-Marie E; Ripley, Beth; Shivaram, Giridhar M; Avansino, Jeffrey R; Merguerian, Paul A

    2017-08-01

    Cloacal anomalies are complex to manage, and the anatomy affects prognosis and management. Assessment historically includes examination under anesthesia, and genitography is often performed, but these do not consistently capture three-dimensional (3D) detail or spatial relationships of the anatomic structures. Three-dimensional reconstruction cloacagrams can provide a high level of detail including channel measurements and the level of the cloaca (<3 cm vs. >3 cm), which typically determines the approach for surgical reconstruction and can impact long-term prognosis. Yet this imaging modality has not yet been directly compared with intra-operative or endoscopic findings. Our objective was to compare 3D reconstruction cloacagrams with endoscopic and intraoperative findings, as well as to describe the use of 3D printing to create models for surgical planning and education. An IRB-approved retrospective review of all cloaca patients seen by our multi-disciplinary program from 2014 to 2016 was performed. All patients underwent examination under anesthesia, endoscopy, 3D reconstruction cloacagram, and subsequent reconstructive surgery at a later date. Patient characteristics, intraoperative details, and measurements from endoscopy and cloacagram were reviewed and compared. One of the 3D cloacagrams was reformatted for 3D printing to create a model for surgical planning. Four patients were included for review, with the Figure illustrating 3D cloacagram results. Measurements of common channel length and urethral length were similar between modalities, particularly with confirming the level of cloaca. No patient experienced any complications or adverse effects from cloacagram or endoscopy. A model was successfully created from cloacagram images with the use of 3D printing technology. Accurate preoperative assessment for cloacal anomalies is important for counseling and surgical planning. Three-dimensional cloacagrams have been shown to yield a high level of anatomic

  10. 3D Cell Printing of Functional Skeletal Muscle Constructs Using Skeletal Muscle-Derived Bioink.

    PubMed

    Choi, Yeong-Jin; Kim, Taek Gyoung; Jeong, Jonghyeon; Yi, Hee-Gyeong; Park, Ji Won; Hwang, Woonbong; Cho, Dong-Woo

    2016-10-01

    Engineered skeletal muscle tissues that mimic the structure and function of native muscle have been considered as an alternative strategy for the treatment of various muscular diseases and injuries. Here, it is demonstrated that 3D cell-printing of decellularized skeletal muscle extracellular matrix (mdECM)-based bioink facilitates the fabrication of functional skeletal muscle constructs. The cellular alignment and the shape of the tissue constructs are controlled by 3D cell-printing technology. mdECM bioink provides the 3D cell-printed muscle constructs with a myogenic environment that supports high viability and contractility as well as myotube formation, differentiation, and maturation. More interestingly, the preservation of agrin is confirmed in the mdECM, and significant increases in the formation of acetylcholine receptor clusters are exhibited in the 3D cell-printed muscle constructs. In conclusion, mdECM bioink and 3D cell-printing technology facilitate the mimicking of both the structural and functional properties of native muscle and hold great promise for producing clinically relevant engineered muscle for the treatment of muscular injuries. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Development of a 3D printer using scanning projection stereolithography

    PubMed Central

    Lee, Michael P.; Cooper, Geoffrey J. T.; Hinkley, Trevor; Gibson, Graham M.; Padgett, Miles J.; Cronin, Leroy

    2015-01-01

    We have developed a system for the rapid fabrication of low cost 3D devices and systems in the laboratory with micro-scale features yet cm-scale objects. Our system is inspired by maskless lithography, where a digital micromirror device (DMD) is used to project patterns with resolution up to 10 µm onto a layer of photoresist. Large area objects can be fabricated by stitching projected images over a 5cm2 area. The addition of a z-stage allows multiple layers to be stacked to create 3D objects, removing the need for any developing or etching steps but at the same time leading to true 3D devices which are robust, configurable and scalable. We demonstrate the applications of the system by printing a range of micro-scale objects as well as a fully functioning microfluidic droplet device and test its integrity by pumping dye through the channels. PMID:25906401

  12. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  13. surf3d: A 3-D finite-element program for the analysis of surface and corner cracks in solids subjected to mode-1 loadings

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Newman, J. C., Jr.

    1993-01-01

    A computer program, surf3d, that uses the 3D finite-element method to calculate the stress-intensity factors for surface, corner, and embedded cracks in finite-thickness plates with and without circular holes was developed. The cracks are assumed to be either elliptic or part-elliptic in shape. The program uses eight-noded hexahedral elements to model the solid and employs a skyline storage scheme and solver. The stress-intensity factors are evaluated using the force method, the crack-opening displacement method, and the 3-D virtual crack closure method. The manual describes the input to and the output of the surf3d program, demonstrates the use of the program, and describes the calculation of the stress-intensity factors. Several examples with sample data files are included with the manual. To facilitate modeling of the user's crack configuration and loading, a companion preprocessor program, gensurf, that generates the data for surf3d was also developed. The gensurf program is a three-dimensional mesh generator that requires minimal input and builds a complete data file for surf3d. The program surf3d is operational on Unix machines such as the CRAY Y-MP, CRAY-2, and Convex C-220.

  14. Surface functionalization of 3D-printed plastics via initiated chemical vapor deposition

    PubMed Central

    Cheng, Christine

    2017-01-01

    3D printing is a useful fabrication technique because it offers design flexibility and rapid prototyping. The ability to functionalize the surfaces of 3D-printed objects allows the bulk properties, such as material strength or printability, to be chosen separately from surface properties, which is critical to expanding the breadth of 3D printing applications. In this work, we studied the ability of the initiated chemical vapor deposition (iCVD) process to coat 3D-printed shapes composed of poly(lactic acid) and acrylonitrile butadiene styrene. The thermally insulating properties of 3D-printed plastics pose a challenge to the iCVD process due to large thermal gradients along the structures during processing. In this study, processing parameters such as the substrate temperature and the filament temperature were systematically varied to understand how these parameters affect the uniformity of the coatings along the 3D-printed objects. The 3D-printed objects were coated with both hydrophobic and hydrophilic polymers. Contact angle goniometry and X-ray photoelectron spectroscopy were used to characterize the functionalized surfaces. Our results can enable the use of iCVD to functionalize 3D-printed materials for a range of applications such as tissue scaffolds and microfluidics. PMID:28875099

  15. Low cost 3D scanning process using digital image processing

    NASA Astrophysics Data System (ADS)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper presents the design and construction of a low-cost 3D scanner able to digitize solid objects through contactless data acquisition using active object reflection. 3D scanners are used in many applications, such as science, engineering, and entertainment, and are classified into contact and contactless scanners; the latter are the most widely used but are expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a moving horizontal laser line, which is deformed by the three-dimensional surface of the solid. Digital image processing is used to analyze the deformation detected by the camera, which allows the 3D coordinates to be determined by triangulation. The resulting information is processed by a Matlab script, which gives the user a point cloud for each horizontal scan. The results show acceptable quality and significant detail in the digitized objects, making this prototype (built on the LEGO Mindstorms NXT kit) a versatile and cheap tool, usable for many applications, mainly by engineering students.
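
    To make the triangulation step concrete, the following Python sketch converts a single camera frame of a laser stripe into 3D points under a simplified camera-laser geometry; the focal length, baseline, laser angle, intensity threshold, and function names are illustrative assumptions, not values from the paper or its Matlab script.

        # A minimal sketch of single-stripe laser triangulation, assuming a camera
        # with focal length f (pixels), a laser source offset by baseline b (metres)
        # from the camera centre, and a laser sheet tilted by angle theta toward the
        # optical axis.  All values are illustrative.
        import numpy as np

        def stripe_to_points(image, f=1200.0, b=0.15, theta=np.deg2rad(30.0), cx=320.0):
            """Convert one grayscale frame of a vertical laser stripe into 3D points.

            For each image row, the brightest pixel is taken as the stripe location.
            With the laser plane modelled as x = b - z*tan(theta) in camera
            coordinates, the projection u = f*x/z gives depth z = f*b / (u + f*tan(theta)).
            """
            points = []
            for v, row in enumerate(image):
                u_px = int(np.argmax(row))            # stripe column in this row
                if row[u_px] < 50:                    # skip rows with no visible stripe
                    continue
                u = u_px - cx                         # pixel coordinate relative to centre
                z = f * b / (u + f * np.tan(theta))   # depth from triangulation
                x = u * z / f                         # back-project to metric X
                y = (v - image.shape[0] / 2) * z / f  # and metric Y
                points.append((x, y, z))
            return np.asarray(points)

        # Example with a synthetic frame: a bright stripe at column 400
        frame = np.zeros((480, 640), dtype=np.uint8)
        frame[:, 400] = 255
        cloud = stripe_to_points(frame)
        print(cloud.shape)   # one 3D point per image row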

  16. Actuator-Assisted Calibration of Freehand 3D Ultrasound System.

    PubMed

    Koo, Terry K; Silvia, Nathaniel

    2018-01-01

    Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need for imaging the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and the results were compared with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time relative to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified "collinear point target" phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both actuator-assisted cross wire phantom and modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration.
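
    The Python sketch below illustrates the general form of point-based probe calibration that such a system solves: find the image-to-probe transform (and pixel scales) that maps the segmented cross-wire pixel in every frame onto a single fixed point in tracker coordinates. The variable names, optimizer choice, and initial guesses are assumptions for illustration, not the authors' formulation.

        # Hedged sketch of point-based probe calibration for freehand 3D ultrasound.
        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation as R

        def calibrate(pixels, probe_rotations, probe_translations):
            """pixels: (N,2) segmented wire locations (u, v) in pixels.
            probe_rotations: (N,3,3) tracker-reported probe orientations.
            probe_translations: (N,3) tracker-reported probe positions (mm)."""

            def residuals(x):
                rot = R.from_rotvec(x[0:3]).as_matrix()   # image -> probe rotation
                t_cal = x[3:6]                            # image -> probe translation
                sx, sy = x[6], x[7]                       # pixel-to-mm scales
                wire = x[8:11]                            # fixed wire point, tracker frame
                res = []
                for (u, v), Rp, tp in zip(pixels, probe_rotations, probe_translations):
                    p_img = np.array([sx * u, sy * v, 0.0])
                    p_trk = Rp @ (rot @ p_img + t_cal) + tp
                    res.append(p_trk - wire)              # all frames should hit the wire
                return np.concatenate(res)

            x0 = np.zeros(11)
            x0[6:8] = 0.1                                 # rough initial scale guess (mm/pixel)
            return least_squares(residuals, x0).x

    In this formulation the unknown wire location is estimated jointly with the transform, which is one way to avoid imaging the target from many viewing angles; the actuator's role is simply to sweep the wire consistently across the image field of view.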

  17. Actuator-Assisted Calibration of Freehand 3D Ultrasound System

    PubMed Central

    2018-01-01

    Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need for imaging the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and the results were compared with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time relative to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified “collinear point target” phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both actuator-assisted cross wire phantom and modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration. PMID:29854371

  18. Comparing a quasi-3D to a full 3D nearshore circulation model: SHORECIRC and ROMS

    USGS Publications Warehouse

    Haas, Kevin A.; Warner, John C.

    2009-01-01

    Predictions of nearshore and surf zone processes are important for determining coastal circulation, impacts of storms, navigation, and recreational safety. Numerical modeling of these systems facilitates advancements in our understanding of coastal changes and can provide predictive capabilities for resource managers. Many nearshore coastal circulation models exist; however, they are mostly limited or typically applied only as depth-integrated models. SHORECIRC is an established surf zone circulation model that is quasi-3D, accounting for the effect of vertical variability in the current structure while maintaining the computational advantage of a 2DH model. Here we compare SHORECIRC to ROMS, a fully 3D ocean circulation model that now includes a three-dimensional formulation for wave-driven flows. We compare the models with three different test applications: (i) spectral waves approaching a plane beach at an oblique angle of incidence; (ii) monochromatic waves driving longshore currents in a laboratory basin; and (iii) monochromatic waves on a barred beach with rip channels in a laboratory basin. Results indicate that the models are very similar for the depth-integrated flows and qualitatively consistent for the vertically varying components. The differences are primarily the result of the vertically varying radiation stress used by ROMS and the use of long-wave theory for the radiation stress formulation in the vertically varying momentum balance of SHORECIRC. The quasi-3D model is faster; however, the fully 3D model is applicable to a broader range of processes and temporal and spatial scales.
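
    For orientation only, the depth-integrated radiation stress components from linear wave theory, for waves of energy E approaching at angle θ with group-to-phase speed ratio n = c_g/c, are commonly written as below (in LaTeX); the vertically varying forms actually compared in the paper refine these depth-integrated expressions.

        S_{xx} = E\left[\, n\,(\cos^2\theta + 1) - \tfrac{1}{2} \,\right], \qquad
        S_{yy} = E\left[\, n\,(\sin^2\theta + 1) - \tfrac{1}{2} \,\right], \qquad
        S_{xy} = E\, n \sin\theta \cos\theta, \qquad n = \frac{c_g}{c}.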

  19. Comparing a quasi-3D to a full 3D nearshore circulation model: SHORECIRC and ROMS

    USGS Publications Warehouse

    Haas, K.A.; Warner, J.C.

    2009-01-01

    Predictions of nearshore and surf zone processes are important for determining coastal circulation, impacts of storms, navigation, and recreational safety. Numerical modeling of these systems facilitates advancements in our understanding of coastal changes and can provide predictive capabilities for resource managers. Many nearshore coastal circulation models exist; however, they are mostly limited or typically applied only as depth-integrated models. SHORECIRC is an established surf zone circulation model that is quasi-3D, accounting for the effect of vertical variability in the current structure while maintaining the computational advantage of a 2DH model. Here we compare SHORECIRC to ROMS, a fully 3D ocean circulation model that now includes a three-dimensional formulation for wave-driven flows. We compare the models with three different test applications: (i) spectral waves approaching a plane beach at an oblique angle of incidence; (ii) monochromatic waves driving longshore currents in a laboratory basin; and (iii) monochromatic waves on a barred beach with rip channels in a laboratory basin. Results indicate that the models are very similar for the depth-integrated flows and qualitatively consistent for the vertically varying components. The differences are primarily the result of the vertically varying radiation stress used by ROMS and the use of long-wave theory for the radiation stress formulation in the vertically varying momentum balance of SHORECIRC. The quasi-3D model is faster; however, the fully 3D model is applicable to a broader range of processes and temporal and spatial scales. © 2008 Elsevier Ltd.

  20. Exploiting Mirrors in 3d Reconstruction of Small Artefacts

    NASA Astrophysics Data System (ADS)

    Kontogianni, G.; Thomaidis, A. T.; Chliverou, R.; Georgopoulos, A.

    2018-05-01

    3D reconstruction of small artefacts is very important for capturing the details of the whole object irrespective of the documentation method used (Range-Based or Image-Based). This is sometimes difficult to achieve because of hidden parts, occlusions, and obstructions on the object. More data are then necessary to digitise the whole artefact in 3D, increasing the time needed to collect and subsequently process the data. A methodology that reduces data collection, and therefore processing time, is needed, especially in cases of mass digitisation. In this paper, the use of mirrors, in particular high-quality mirrors, in the data acquisition phase for the 3D reconstruction of small artefacts is investigated. Two 3D reconstruction case studies are presented: the first concerns Range-Based modelling, in which a Time of Flight laser scanner is utilised, and in the second an Image-Based modelling technique is implemented.

  1. High-resolution mobile optical 3D scanner with color mapping

    NASA Astrophysics Data System (ADS)

    Ramm, Roland; Bräuer-Burchardt, Christian; Kühmstedt, Peter; Notni, Gunther

    2017-07-01

    A high-resolution mobile handheld scanning device developed at our institute is introduced. It is suitable for 3D data acquisition and analysis in forensic investigations, rapid prototyping, design, quality management, and archaeology, with a measurement volume of approximately 325 mm x 200 mm x 100 mm and a lateral object resolution of 170 µm. The scanner weighs 4.4 kg with an optional color DSLR camera. The PC for measurement control and point calculation is included inside the housing, and power is supplied by rechargeable batteries, giving an operation time of 30 to 60 minutes. The object distance is between 400 and 500 mm, and the scan time for one 3D shot varies between 0.1 and 0.5 seconds; the complete 3D result is obtained a few seconds after starting the scan. For higher-quality 3D and color images, the scanner can be mounted on a tripod. Objects larger than the measurement volume must be acquired in parts, and the resulting datasets are merged using a suitable software module. The scanner has been successfully used in various applications.

  2. Correlation and 3D-tracking of objects by pointing sensors

    DOEpatents

    Griesmeyer, J. Michael

    2017-04-04

    A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.
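
    A compact geometric sketch of the triangulation idea described in this record is given below in Python: for two pointing sensors, the shortest segment joining their lines of sight is found in closed form; its midpoint serves as a candidate triangulation point and its length as a statistic for deciding whether the sensors are tracking the same object. The function and its thresholds are illustrative, not the patented implementation.

        # Closest approach of two lines of sight; midpoint = candidate triangulation
        # point, gap = length of the common perpendicular segment.
        import numpy as np

        def triangulate(p1, d1, p2, d2):
            """p1, p2: sensor positions; d1, d2: unit pointing vectors."""
            w0 = p1 - p2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w0, d2 @ w0
            denom = a * c - b * b
            if abs(denom) < 1e-12:            # nearly parallel lines of sight
                s = 0.0
                t = e / c                     # closest point on line 2 to p1
            else:
                s = (b * e - c * d) / denom   # parameter along sensor 1's line of sight
                t = (a * e - b * d) / denom   # parameter along sensor 2's line of sight
            q1, q2 = p1 + s * d1, p2 + t * d2
            return 0.5 * (q1 + q2), np.linalg.norm(q1 - q2)

        # Two sensors both observing a target at (10, 10, 5)
        p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([20.0, 0.0, 0.0])
        d1 = np.array([10.0, 10.0, 5.0]) / 15.0     # unit vector toward the target
        d2 = np.array([-10.0, 10.0, 5.0]) / 15.0
        mid, gap = triangulate(p1, d1, p2, d2)
        print(mid, gap)   # midpoint near (10, 10, 5), gap near zero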

  3. Numerical study on 3D composite morphing actuators

    NASA Astrophysics Data System (ADS)

    Oishi, Kazuma; Saito, Makoto; Anandan, Nishita; Kadooka, Kevin; Taya, Minoru

    2015-04-01

    There are a number of actuators using the deformation of electroactive polymer (EAP), where fewer papers seem to have focused on the performance of 3D morphing actuators based on the analytical approach, due mainly to their complexity. The present paper introduces a numerical analysis approach on the large scale deformation and motion of a 3D half dome shaped actuator composed of thin soft membrane (passive material) and EAP strip actuators (EAP active coupon with electrodes on both surfaces), where the locations of the active EAP strips is a key parameter. Simulia/Abaqus Static and Implicit analysis code, whose main feature is the high precision contact analysis capability among structures, are used focusing on the whole process of the membrane to touch and wrap around the object. The unidirectional properties of the EAP coupon actuator are used as input data set for the material properties for the simulation and the verification of our numerical model, where the verification is made as compared to the existing 2D solution. The numerical results can demonstrate the whole deformation process of the membrane to wrap around not only smooth shaped objects like a sphere or an egg, but also irregularly shaped objects. A parametric study reveals the proper placement of the EAP coupon actuators, with the modification of the dome shape to induce the relevant large scale deformation. The numerical simulation for the 3D soft actuators shown in this paper could be applied to a wider range of soft 3D morphing actuators.

  4. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a
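
    A simplified sketch of the object-linking idea (features in successive 2D slices chained together whenever they fall within a threshold radius of a feature in the adjacent slice) is given below in Python; the data structures, greedy matching, and radius value are illustrative assumptions rather than the implemented algorithm.

        # Chain per-slice 2D detections into 3D tracks by nearest-neighbour linking
        # within a threshold radius; a slice with no match terminates an open track.
        import numpy as np

        def link_slices(detections, radius=3.0):
            """detections: list over slices; each entry is an (M_k, 2) array of (x, y)
            feature centroids.  Returns tracks, each a list of (slice, x, y) tuples."""
            tracks = []
            open_tracks = []                      # tracks that reached the previous slice
            for k, pts in enumerate(detections):
                next_open = []
                used = set()
                for track in open_tracks:
                    _, lx, ly = track[-1]
                    if len(pts) == 0:
                        continue                  # no detections: the track is closed
                    dist = np.hypot(pts[:, 0] - lx, pts[:, 1] - ly)
                    j = int(np.argmin(dist))
                    if dist[j] <= radius and j not in used:
                        track.append((k, float(pts[j, 0]), float(pts[j, 1])))
                        used.add(j)
                        next_open.append(track)
                for j, (x, y) in enumerate(pts):
                    if j not in used:             # unmatched feature starts a new track
                        t = [(k, float(x), float(y))]
                        tracks.append(t)
                        next_open.append(t)
                open_tracks = next_open
            return tracks

        # Example: three slices of a single slowly drifting feature
        slices = [np.array([[10.0, 10.0]]), np.array([[11.0, 10.5]]), np.array([[12.0, 11.0]])]
        print(link_slices(slices))   # one track spanning the three slices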

  5. 3D Printing of Shape Memory Polymers for Flexible Electronic Devices.

    PubMed

    Zarek, Matt; Layani, Michael; Cooperstein, Ido; Sachyani, Ela; Cohn, Daniel; Magdassi, Shlomo

    2016-06-01

    The formation of 3D objects composed of shape memory polymers for flexible electronics is described. Layer-by-layer photopolymerization of methacrylated semicrystalline molten macromonomers by a 3D digital light processing printer enables rapid fabrication of complex objects and imparts shape memory functionality for electrical circuits. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Advancing the field of 3D biomaterial printing.

    PubMed

    Jakus, Adam E; Rutz, Alexandra L; Shah, Ramille N

    2016-01-11

    3D biomaterial printing has emerged as a potentially revolutionary technology, promising to transform both research and medical therapeutics. Although there has been recent progress in the field, on-demand fabrication of functional and transplantable tissues and organs is still a distant reality. To advance to this point, there are two major technical challenges that must be overcome. The first is expanding upon the limited variety of available 3D printable biomaterials (biomaterial inks), which currently do not adequately represent the physical, chemical, and biological complexity and diversity of tissues and organs within the human body. Newly developed biomaterial inks and the resulting 3D printed constructs must meet numerous interdependent requirements, including those that lead to optimal printing, structural, and biological outcomes. The second challenge is developing and implementing comprehensive biomaterial ink and printed structure characterization combined with in vitro and in vivo tissue- and organ-specific evaluation. This perspective outlines considerations for addressing these technical hurdles that, once overcome, will facilitate rapid advancement of 3D biomaterial printing as an indispensable tool for both investigating complex tissue and organ morphogenesis and for developing functional devices for a variety of diagnostic and regenerative medicine applications.

  7. 3D modeling of underground objects with the use of SLAM technology on the example of historical mine in Ciechanowice (Ołowiane Range, The Sudetes)

    NASA Astrophysics Data System (ADS)

    Wajs, Jaroslaw; Kasza, Damian; Zagożdżon, Paweł P.; Zagożdżon, Katarzyna D.

    2018-01-01

    Terrestrial Laser Scanning is currently one of the most popular methods for producing representations of 3D objects. This paper presents the potential of applying the mobile laser scanning method to the inventory of underground objects. The examined location was a historic crystalline limestone mine situated in the vicinity of Ciechanowice village (Kaczawa Mts., SW Poland). The authors present a methodology for performing the measurements and for processing the obtained results, whose accuracy is additionally verified.

  8. 3-D vision and figure-ground separation by visual cortex.

    PubMed

    Grossberg, S

    1994-01-01

    A neural network theory of three-dimensional (3-D) vision, called FACADE theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a boundary contour system (BCS) and a feature contour system (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded object parts are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, Da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analyzed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the lateral geniculate nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-DEpth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with

  9. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydn

    2013-10-01

    In robotics and augmented reality (AR) applications, model-based 3-D tracking of rigid objects is generally required, with accurate pose estimates needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, while trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm based on fusion of vision and depth sensors via an extended Kalman filter is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape index map data from the 3-D point cloud, significantly increases both 2-D and 3-D tracking performance. The proposed method requires neither manual initialization of the pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes.
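
    As a heavily simplified stand-in for the sensor fusion described above, the Python sketch below runs a constant-velocity Kalman filter over 3D position and velocity and fuses a noisier vision-derived measurement with a tighter depth-derived one. The actual method is an extended Kalman filter over the full rigid-body pose, so this only illustrates the fusion mechanics; all noise values are assumed.

        # Linear constant-velocity Kalman filter fusing two position measurement
        # sources with different noise levels (a stand-in for EKF pose fusion).
        import numpy as np

        class FusionKF:
            def __init__(self, dt=1 / 30.0):
                self.x = np.zeros(6)                       # [position, velocity]
                self.P = np.eye(6)
                self.F = np.eye(6)
                self.F[:3, 3:] = dt * np.eye(3)            # constant-velocity motion model
                self.Q = 1e-3 * np.eye(6)                  # process noise (assumed)
                self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # both sensors observe position

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q

            def update(self, z, meas_var):
                R = meas_var * np.eye(3)
                S = self.H @ self.P @ self.H.T + R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ (z - self.H @ self.x)
                self.P = (np.eye(6) - K @ self.H) @ self.P

        kf = FusionKF()
        kf.predict()
        kf.update(np.array([0.10, 0.02, 1.50]), meas_var=1e-2)   # vision-derived position, noisier
        kf.update(np.array([0.11, 0.01, 1.48]), meas_var=1e-4)   # depth-derived position, tighter
        print(kf.x[:3])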

  10. 3D Cell Printed Tissue Analogues: A New Platform for Theranostics

    PubMed Central

    Choi, Yeong-Jin; Yi, Hee-Gyeong; Kim, Seok-Won; Cho, Dong-Woo

    2017-01-01

    Stem cell theranostics has received much attention for noninvasively monitoring and tracing transplanted therapeutic stem cells through imaging agents and imaging modalities. Despite the excellent regenerative capability of stem cells, their efficacy has been limited due to low cellular retention, low survival rate, and low engraftment after implantation. Three-dimensional (3D) cell printing provides stem cells with the similar architecture and microenvironment of the native tissue and facilitates the generation of a 3D tissue-like construct that exhibits remarkable regenerative capacity and functionality as well as enhanced cell viability. Thus, 3D cell printing can overcome the current concerns of stem cell therapy by delivering the 3D construct to the damaged site. Despite the advantages of 3D cell printing, the in vivo and in vitro tracking and monitoring of the performance of 3D cell printed tissue in a noninvasive and real-time manner have not been thoroughly studied. In this review, we explore the recent progress in 3D cell technology and its applications. Finally, we investigate their potential limitations and suggest future perspectives on 3D cell printing and stem cell theranostics. PMID:28839468

  11. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  12. Ionized Outflows in 3-D Insights from Herbig-Haro Objects and Applications to Nearby AGN

    NASA Technical Reports Server (NTRS)

    Cecil, Gerald

    1999-01-01

    HST shows that the gas distributions of these objects are complex and clump at the limit of resolution. HST spectra have lumpy emission-line profiles, indicating unresolved sub-structure. The advantages of 3D over slits on gas so distributed are: robust flux estimates of various dynamical systems projected along lines of sight, sensitivity to fainter spectral lines that are physical diagnostics (reddening-gas density, T, excitation mechanisms, abundances), and improved prospects for recovery of unobserved dimensions of phase-space. These advantages allow more confident modeling for more profound inquiry into underlying dynamics. The main complication is the effort required to link multi-frequency datasets that optimally track the energy flow through various phases of the ISM. This tedium has limited the number of objects that have been thoroughly analyzed to the a priori most spectacular systems. For HHOs, proper motions constrain the ambient B-field, shock velocity, gas abundances, mass-loss rates, source duty-cycle, and tie-ins with molecular flows. If the shock speed, hence ionization fraction, is indeed small then the ionized gas is a significant part of the flow energetics. For AGNs, nuclear beaming is a source of ionization ambiguity. Establishing the energetics of the outflow is critical to determining how the accretion disk loses its energy. CXO will provide new constraints (especially spectral) on AGN outflows, and STIS UV-spectroscopy is also constraining cloud properties (although limited by extinction). HHOs show some of the things that we will find around AGNs. I illustrate these points with results from ground-based and HST programs being pursued with collaborators.

  13. Medical image segmentation using 3D MRI data

    NASA Astrophysics Data System (ADS)

    Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.

    2017-05-01

    Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) images can be a very useful computer-aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from MRI images is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D MRA slice and the complex surrounding anatomical structures. Our objective is to develop a specific segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm based on a modified active contour method to extract parts of bones from MRI data sets. As a result, the proposed method demonstrates good accuracy in comparison with existing segmentation approaches on real MRI data.

  14. [The application progress of 3D printing technology in ophthalmology].

    PubMed

    Ji, Z K; Zhao, Y; Yu, S S; Zhao, H

    2018-01-11

    3D printing is a technology that builds 3D models from computer-aided designs through additive manufacturing, in which successive layers of material are deposited onto underlying layers to construct 3D objects. In recent years, 3D printing has gradually been applied in the field of ophthalmology, for example to the cornea, retina, orbital surgery, ocular tumor radiotherapy, ocular implants, and ophthalmology teaching. This article reviews the application status of 3D printing technology in basic research and clinical treatment in ophthalmology. (Chin J Ophthalmol, 2018, 54: 72-76).

  15. A reflection TIE system for 3D inspection of wafer structures

    NASA Astrophysics Data System (ADS)

    Yan, Yizhen; Qu, Weijuan; Yan, Lei; Wang, Zhaomin; Zhao, Hongying

    2017-10-01

    A reflection TIE system consisting of a reflecting microscope and a 4f relay system is presented in this paper, with which the transport of intensity equation (TIE) is applied to reconstruct the three-dimensional (3D) profile of opaque micro-objects such as wafer structures for 3D inspection. Because the shape of an object affects the phase of the reflected wave, the 3D information of the object can be readily acquired from the phases recovered at different refocusing planes. By electronically controlled refocusing, multi-focal images can be captured and used in solving the TIE to obtain the phase and depth of the object. To validate the accuracy and efficiency of the proposed system, the phase and depth values of several samples are calculated, and the experimental results are presented to demonstrate the performance of the system.
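
    For reference, the transport of intensity equation commonly solved in such systems relates the axial derivative of the measured intensity I to the transverse phase φ, with k = 2π/λ the wavenumber and ∇⊥ the transverse gradient (in LaTeX):

        -k\,\frac{\partial I(x,y;z)}{\partial z} \;=\; \nabla_{\!\perp}\cdot\big[\, I(x,y;z)\,\nabla_{\!\perp}\phi(x,y;z) \,\big], \qquad k = \frac{2\pi}{\lambda}.

    The multi-focal images provide the finite-difference estimate of the left-hand side, from which the phase, and hence the surface depth, is recovered.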

  16. Perceived crosstalk assessment on patterned retarder 3D display

    NASA Astrophysics Data System (ADS)

    Zou, Bochao; Liu, Yue; Huang, Yi; Wang, Yongtian

    2014-03-01

    CONTEXT: Nowadays, almost all stereoscopic displays suffer from crosstalk, which is one of the most dominant degradation factors of image quality and visual comfort for 3D display devices. To deal with such problems, it is worthwhile to quantify the amount of perceived crosstalk. OBJECTIVE: Crosstalk measurements are usually based on certain test patterns, but scene content effects are ignored. To evaluate the perceived crosstalk level for various scenes, subjective testing may give a more accurate evaluation; however, it is time consuming and unsuitable for real-time applications. Therefore, an objective metric that can reliably predict the perceived crosstalk is needed. A correct objective assessment of crosstalk for different scene contents would benefit the development of crosstalk minimization and cancellation algorithms, which could be used to bring a good quality of experience to viewers. METHOD: A patterned retarder 3D display is used to present 3D images in our experiment. By considering the mechanism of this kind of device, an appropriate simulation of crosstalk is realized by image processing techniques that introduce different levels of crosstalk between the images of a stereo pair. It can be seen from the literature that the structure of a scene has a significant impact on the perceived crosstalk, so we first extract the differences in structural information between the original and distorted image pairs through the Structural SIMilarity (SSIM) algorithm, which directly evaluates the structural changes between two complex-structured signals. The structural changes of the left and right views are computed separately and combined into an overall distortion map. Under 3D viewing conditions, because of the added value of depth, the crosstalk of pop-out objects may be more perceptible. To model this effect, the depth map of a stereo pair is generated and the depth information is filtered by the distortion map. Moreover, human attention
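
    A rough Python sketch of the distortion-map idea is shown below: the local SSIM maps between each original view and its simulated-crosstalk counterpart are combined into a per-pixel distortion map, optionally emphasised by a normalised depth map. The combination rule and the depth weighting are placeholders, not the authors' exact metric.

        # Per-pixel crosstalk distortion map from local SSIM of left/right views.
        import numpy as np
        from skimage.metrics import structural_similarity

        def crosstalk_distortion_map(left, left_xt, right, right_xt, depth=None):
            """left/right: original grayscale (uint8) views; *_xt: views with
            simulated crosstalk; depth: optional map normalised to [0, 1]."""
            _, s_left = structural_similarity(left, left_xt, data_range=255, full=True)
            _, s_right = structural_similarity(right, right_xt, data_range=255, full=True)
            distortion = 1.0 - 0.5 * (s_left + s_right)     # average structural change
            if depth is not None:
                distortion = distortion * (0.5 + 0.5 * depth)  # crude pop-out emphasis
            return distortion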

  17. RealityConvert: a tool for preparing 3D models of biochemical structures for augmented and virtual reality.

    PubMed

    Borrel, Alexandre; Fourches, Denis

    2017-12-01

    There is growing interest in the broad use of Augmented Reality (AR) and Virtual Reality (VR) in the fields of bioinformatics and cheminformatics to visualize complex biological and chemical structures. AR and VR technologies allow for stunning and immersive experiences, offering untapped opportunities for both research and education. However, preparing 3D models ready for AR and VR is time-consuming and requires technical expertise, which severely limits the development of new content of potential interest to structural biologists, medicinal chemists, molecular modellers and teachers. Herein we present the RealityConvert software tool and associated website, which allow users to easily convert molecular objects to high-quality 3D models directly compatible with AR and VR applications. For chemical structures, in addition to the 3D model generation, RealityConvert also generates image trackers, useful for universally calling and anchoring that particular 3D model in AR applications. The ultimate goal of RealityConvert is to facilitate and boost the development and accessibility of AR and VR content for bioinformatics and cheminformatics applications. http://www.realityconvert.com. dfourch@ncsu.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  18. 3DSEM++: Adaptive and intelligent 3D SEM surface reconstruction.

    PubMed

    Tafti, Ahmad P; Holz, Jessica D; Baghaie, Ahmadreza; Owen, Heather A; He, Max M; Yu, Zeyun

    2016-08-01

    Structural analysis of microscopic objects is a longstanding topic in several scientific disciplines, such as the biological, mechanical, and materials sciences. The scanning electron microscope (SEM), a promising imaging instrument, has been used for decades to determine the surface properties (e.g., compositions or geometries) of specimens, achieving increased magnification, contrast, and resolution greater than one nanometer. Whereas SEM micrographs remain two-dimensional (2D), many research and educational questions truly require knowledge about the three-dimensional (3D) structure of the specimens. 3D surface reconstruction from SEM images leads to remarkable understanding of microscopic surfaces, allowing informative and qualitative visualization of the samples being investigated. In this contribution, we integrate several computational technologies, including machine learning, the a contrario methodology, and epipolar geometry, to design and develop a novel and efficient method called 3DSEM++ for multi-view 3D SEM surface reconstruction in an adaptive and intelligent fashion. Experiments performed on real and synthetic data show that the approach reaches significant precision in both SEM extrinsic calibration and 3D surface modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Shape: A 3D Modeling Tool for Astrophysics.

    PubMed

    Steffen, Wolfgang; Koning, Nicholas; Wenger, Stephan; Morisset, Christophe; Magnor, Marcus

    2011-04-01

    We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions, data type, and amount that is required for a reconstruction of an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model prediction with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.

  20. Three-dimensional (3D) printed endovascular simulation models: a feasibility study.

    PubMed

    Mafeld, Sebastian; Nesbitt, Craig; McCaslin, James; Bagnall, Alan; Davey, Philip; Bose, Pentop; Williams, Rob

    2017-02-01

    Three-dimensional (3D) printing is a manufacturing process in which an object is created by specialist printers designed to print in additive layers to create a 3D object. Whilst there are initial promising medical applications of 3D printing, a lack of evidence to support its use remains a barrier for larger scale adoption into clinical practice. Endovascular virtual reality (VR) simulation plays an important role in the safe training of future endovascular practitioners, but existing VR models have disadvantages including cost and accessibility which could be addressed with 3D printing. This study sought to evaluate the feasibility of 3D printing an anatomically accurate human aorta for the purposes of endovascular training. A 3D printed model was successfully designed and printed and used for endovascular simulation. The stages of development and practical applications are described. Feedback from 96 physicians who answered a series of questions using a 5 point Likert scale is presented. Initial data supports the value of 3D printed endovascular models although further educational validation is required.

  1. 3D pancreatic carcinoma spheroids induce a matrix-rich, chemoresistant phenotype offering a better model for drug testing.

    PubMed

    Longati, Paola; Jia, Xiaohui; Eimer, Johannes; Wagman, Annika; Witt, Michael-Robin; Rehnmark, Stefan; Verbeke, Caroline; Toftgård, Rune; Löhr, Matthias; Heuchel, Rainer L

    2013-02-27

    Pancreatic ductal adenocarcinoma (PDAC) is the fourth most common cause of cancer-related death. It is lethal in nearly all patients, due to an almost complete chemoresistance. Most, if not all, drugs that pass preclinical tests successfully fail in patients. This raises the question of whether traditional 2D cell culture is the correct tool for drug screening. The objective of this study was to develop a simple, high-throughput 3D model of human PDAC cell lines and to explore the mechanisms underlying the transition from 2D to 3D that might be responsible for chemoresistance. Several established human PDAC cell lines and a KPC mouse cell line were tested, of which Panc-1 was studied in more detail. 3D spheroid formation was facilitated with methylcellulose. Spheroids were studied morphologically, by electron microscopy, and by qRT-PCR for selected matrix genes, related factors and miRNA. Metabolic studies were performed, and a panel of novel drugs was tested against gemcitabine. Comparing 3D to 2D cell culture, matrix proteins were significantly increased, as were lumican, SNED1, DARP32, and miR-146a. Cell metabolism in 3D was shifted towards glycolysis. All drugs tested were less effective in 3D, except for allicin, MT100 and AX, which remained effective. We developed a high-throughput 3D cell culture drug screening system for pancreatic cancer, which displays strongly increased chemoresistance. Features associated with the 3D cell model are increased expression of matrix proteins and miRNA as well as stromal markers such as PPP1R1B and SNED1. This supports the concept of cell adhesion mediated drug resistance.

  2. Remote gaze tracking system for 3D environments.

    PubMed

    Congcong Liu; Herrup, Karl; Shi, Bertram E

    2017-07-01

    Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two-dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker is attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three-dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive. It compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera and is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera to within one degree over a wide range of head positions.
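
    Mapping an estimated 3D gaze point into the 2D scene-camera image reduces to a standard pinhole projection, sketched below in Python; the intrinsic matrix and camera pose are assumed to come from a prior calibration and are purely illustrative, not the system's calibration values.

        # Pinhole projection of a 3D gaze point into the scene-camera image.
        import numpy as np

        def project_gaze(point_3d, K, R, t):
            """point_3d: gaze point in world coordinates.
            K: 3x3 scene-camera intrinsics; R, t: world-to-camera pose."""
            p_cam = R @ point_3d + t                 # express the point in camera coordinates
            u, v, w = K @ p_cam
            return np.array([u / w, v / w])          # pixel coordinates in the scene image

        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        print(project_gaze(np.array([0.1, 0.05, 1.0]), K, np.eye(3), np.zeros(3)))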

  3. Refined 3d-3d correspondence

    NASA Astrophysics Data System (ADS)

    Alday, Luis F.; Genolini, Pietro Benetti; Bullimore, Mathew; van Loon, Mark

    2017-04-01

    We explore aspects of the correspondence between Seifert 3-manifolds and 3d N = 2 supersymmetric theories with a distinguished abelian flavour symmetry. We give a prescription for computing the squashed three-sphere partition functions of such 3d N = 2 theories constructed from boundary conditions and interfaces in a 4d N = 2∗ theory, mirroring the construction of Seifert manifold invariants via Dehn surgery. This is extended to include links in the Seifert manifold by the insertion of supersymmetric Wilson-'t Hooft loops in the 4d N = 2∗ theory. In the presence of a mass parameter c for the distinguished flavour symmetry, we recover aspects of refined Chern-Simons theory with complex gauge group, and in particular construct an analytic continuation of the S-matrix of refined Chern-Simons theory.

  4. Summary on several key techniques in 3D geological modeling.

    PubMed

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
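
    As a small illustration of the spatial-interpolation step, the Python sketch below performs inverse-distance weighting of scattered interface elevations onto a planar grid, which could then serve as a discrete geological interface surface. IDW is only one of several interpolators such modeling packages may use, and the sample data are invented.

        # Inverse-distance-weighted interpolation of scattered interface elevations.
        import numpy as np

        def idw_surface(xy, z, grid_x, grid_y, power=2.0, eps=1e-9):
            """xy: (N,2) sample locations; z: (N,) interface elevations.
            grid_x, grid_y: 1D axes of the output mesh.  Returns a 2D elevation grid."""
            gx, gy = np.meshgrid(grid_x, grid_y)
            surface = np.empty_like(gx)
            for i in range(gx.shape[0]):
                for j in range(gx.shape[1]):
                    d = np.hypot(xy[:, 0] - gx[i, j], xy[:, 1] - gy[i, j]) + eps
                    w = 1.0 / d ** power
                    surface[i, j] = np.sum(w * z) / np.sum(w)
            return surface

        samples = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        elev = np.array([100.0, 102.0, 98.0, 101.0])
        grid = idw_surface(samples, elev, np.linspace(0, 10, 21), np.linspace(0, 10, 21))
        print(grid.shape)   # (21, 21) interpolated interface elevations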

  5. Summary on Several Key Techniques in 3D Geological Modeling

    PubMed Central

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized. PMID:24772029

  6. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method that images the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three different image modalities representing the same anatomy. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements to be performed. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image

  7. A new chapter in pharmaceutical manufacturing: 3D-printed drug products.

    PubMed

    Norman, James; Madurawe, Rapti D; Moore, Christine M V; Khan, Mansoor A; Khairuzzaman, Akm

    2017-01-01

    FDA recently approved a 3D-printed drug product in August 2015, which is indicative of a new chapter for pharmaceutical manufacturing. This review article summarizes progress with 3D printed drug products and discusses process development for solid oral dosage forms. 3D printing is a layer-by-layer process capable of producing 3D drug products from digital designs. Traditional pharmaceutical processes, such as tablet compression, have been used for decades with established regulatory pathways. These processes are well understood, but antiquated in terms of process capability and manufacturing flexibility. 3D printing, as a platform technology, has competitive advantages for complex products, personalized products, and products made on-demand. These advantages create opportunities for improving the safety, efficacy, and accessibility of medicines. Although 3D printing differs from traditional manufacturing processes for solid oral dosage forms, risk-based process development is feasible. This review highlights how product and process understanding can facilitate the development of a control strategy for different 3D printing methods. Overall, the authors believe that the recent approval of a 3D printed drug product will stimulate continual innovation in pharmaceutical manufacturing technology. FDA encourages the development of advanced manufacturing technologies, including 3D-printing, using science- and risk-based approaches. Published by Elsevier B.V.

  8. A Nonrigid Kernel-Based Framework for 2D-3D Pose Estimation and 2D Image Segmentation

    PubMed Central

    Sandhu, Romeil; Dambreville, Samuel; Yezzi, Anthony; Tannenbaum, Allen

    2013-01-01

    In this work, we present a nonrigid approach to jointly solving the tasks of 2D-3D pose estimation and 2D image segmentation. In general, most frameworks that couple both pose estimation and segmentation assume that one has exact knowledge of the 3D object. However, under nonideal conditions, this assumption may be violated if only a general class to which a given shape belongs is given (e.g., cars, boats, or planes). Thus, we propose to solve the 2D-3D pose estimation and 2D image segmentation via nonlinear manifold learning of 3D embedded shapes for a general class of objects or deformations for which one may not be able to associate a skeleton model. Thus, the novelty of our method is threefold: First, we present and derive a gradient flow for the task of nonrigid pose estimation and segmentation. Second, due to the possible nonlinear structures of one’s training set, we evolve the preimage obtained through kernel PCA for the task of shape analysis. Third, we show that the derivation for shape weights is general. This allows us to use various kernels, as well as other statistical learning methodologies, with only minimal changes needing to be made to the overall shape evolution scheme. In contrast with other techniques, we approach the nonrigid problem, which is an infinite-dimensional task, with a finite-dimensional optimization scheme. More importantly, we do not explicitly need to know the interaction between various shapes such as that needed for skeleton models as this is done implicitly through shape learning. We provide experimental results on several challenging pose estimation and segmentation scenarios. PMID:20733218

  9. The benefits of sensorimotor knowledge: body-object interaction facilitates semantic processing.

    PubMed

    Siakaluk, Paul D; Pexman, Penny M; Sears, Christopher R; Wilson, Kim; Locheed, Keri; Owen, William J

    2008-04-05

    This article examined the effects of body-object interaction (BOI) on semantic processing. BOI measures perceptions of the ease with which a human body can physically interact with a word's referent. In Experiment 1, BOI effects were examined in 2 semantic categorization tasks (SCT) in which participants decided if words are easily imageable. Responses were faster and more accurate for high BOI words (e.g., mask) than for low BOI words (e.g., ship). In Experiment 2, BOI effects were examined in a semantic lexical decision task (SLDT), which taps both semantic feedback and semantic processing. The BOI effect was larger in the SLDT than in the SCT, suggesting that BOI facilitates both semantic feedback and semantic processing. The findings are consistent with the embodied cognition perspective (e.g., Barsalou's, 1999, Perceptual Symbols Theory), which proposes that sensorimotor interactions with the environment are incorporated in semantic knowledge. 2008 Cognitive Science Society, Inc.

  10. Potential of 3D City Models to assess flood vulnerability

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Bochow, Mathias; Schüttig, Martin; Nagel, Claus; Ross, Lutz; Kreibich, Heidi

    2016-04-01

    Vulnerability, as the product of exposure and susceptibility, is a key factor of the flood risk equation. Furthermore, the estimation of flood loss is very sensitive to the choice of the vulnerability model. Still, in contrast to elaborate hazard simulations, vulnerability is often considered in a simplified manner concerning the spatial resolution and geo-location of exposed objects as well as the susceptibility of these objects at risk. Usually, area specific potential flood loss is quantified on the level of aggregated land-use classes, and both hazard intensity and resistance characteristics of affected objects are represented in highly simplified terms. We investigate the potential of 3D City Models and spatial features derived from remote sensing data to improve the differentiation of vulnerability in flood risk assessment. 3D City Models are based on CityGML, an application scheme of the Geography Markup Language (GML), which represents the 3D geometry, 3D topology, semantics and appearance of objects on different levels of detail. As such, 3D City Models offer detailed spatial information which is useful to describe the exposure and to characterize the susceptibility of residential buildings at risk. This information is further consolidated with spatial features of the building stock derived from remote sensing data. Using this database a spatially detailed flood vulnerability model is developed by means of data-mining. Empirical flood damage data are used to derive and to validate flood susceptibility models for individual objects. We present first results from a prototype application in the city of Dresden, Germany. The vulnerability modeling based on 3D City Models and remote sensing data is compared i) to the generally accepted good engineering practice based on area specific loss potential and ii) to a highly detailed representation of flood vulnerability based on a building typology using urban structure types. Comparisons are drawn in terms of

  11. Is phase measurement necessary for incoherent holographic 3D imaging?

    NASA Astrophysics Data System (ADS)

    Rosen, Joseph; Vijayakumar, A.; Rai, Mani Ratnam; Mukherjee, Saswata

    2018-02-01

    Incoherent digital holography can be used for several applications, among which are high-resolution fluorescence microscopy and imaging through a scattering medium. Historically, an incoherent digital hologram has usually been recorded by self-interference systems in which both interfering beams originate from the same observed object. The self-interference system makes it possible to read the phase distribution of the wavefronts propagating from an object and consequently to decode the 3D location of the object points. In this presentation, we survey several cases in which 3D holographic imaging can be done without the phase information and without two-wave interference.

  12. A method for mandibular dental arch superimposition using 3D cone beam CT and orthodontic 3D digital model

    PubMed Central

    Park, Tae-Joon; Lee, Sang-Hyun

    2012-01-01

    Objective The purpose of this study was to develop a superimposition method for the lower arch using 3-dimensional (3D) cone beam computed tomography (CBCT) images and an orthodontic 3D digital model. Methods Integrated 3D CBCT images were acquired by substituting the dental portion of the 3D CBCT images with precise dental images of an orthodontic 3D digital model. Images were acquired before and after treatment. Two superimposition methods were designed. Surface superimposition was based on the basal bone structure of the mandible using surface-to-surface matching (best-fit method). Plane superimposition was based on anatomical structures (mental and lingual foramen). For the evaluation, 10 landmarks including teeth and anatomic structures were assigned, and superimposition and measurement were performed 30 times to determine which method was more reproducible and reliable. Results All landmarks demonstrated that the surface superimposition method produced relatively more consistent coordinate values. The mean distances of the measured landmark values from their means were statistically significantly lower with the surface superimposition method. Conclusions Of the 2 superimposition methods designed for the evaluation of 3D changes in the lower arch, surface superimposition was the simpler, more reproducible, and more reliable method. PMID:23112948
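
    The surface-to-surface "best-fit" registration described above is essentially an iterative closest point (ICP) alignment. Below is a generic, hedged sketch using Open3D as one possible library; it is not the authors' software, and the file names and correspondence distance are placeholders.

      # Generic best-fit (ICP) surface registration sketch with Open3D.
      # Not the authors' pipeline; file names and thresholds are placeholders.
      import numpy as np
      import open3d as o3d

      source = o3d.io.read_point_cloud("mandible_pre.ply")    # pre-treatment basal bone surface
      target = o3d.io.read_point_cloud("mandible_post.ply")   # post-treatment basal bone surface

      result = o3d.pipelines.registration.registration_icp(
          source, target,
          1.0,          # max correspondence distance (mm)
          np.eye(4),    # initial alignment
          o3d.pipelines.registration.TransformationEstimationPointToPoint())

      print(result.transformation)             # best-fit 4x4 rigid transform
      source.transform(result.transformation)  # superimpose the two surfaces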

  13. Infrared Time Lapse of World’s Largest 3D-Printed Object

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Researchers at Oak Ridge National Laboratory have 3D-printed a large-scale trim tool for a Boeing 777X, the world's largest twin-engine jet airliner. The additively manufactured tool was printed on the Big Area Additive Manufacturing (BAAM) machine over a 30-hour period. The team used a thermoplastic pellet composed of 80% ABS plastic and 20% carbon fiber from a local material supplier. In preliminary testing, the tool decreased the time, labor, cost, and errors associated with traditional manufacturing techniques and increased energy savings; it will undergo further long-term testing.

  14. Comprehending 3D Diagrams: Sketching to Support Spatial Reasoning.

    PubMed

    Gagnier, Kristin M; Atit, Kinnari; Ormand, Carol J; Shipley, Thomas F

    2017-10-01

    Science, technology, engineering, and mathematics (STEM) disciplines commonly illustrate 3D relationships in diagrams, yet these are often challenging for students. Failing to understand diagrams can hinder success in STEM because scientific practice requires understanding and creating diagrammatic representations. We explore a new approach to improving student understanding of diagrams that convey 3D relations, based on students generating their own predictive diagrams. Participants' comprehension of 3D spatial diagrams was measured in a pre- and post-test design in which students selected the correct 2D slice through 3D geologic block diagrams. Generating sketches that predicted the internal structure of a model led to greater improvement in diagram understanding than visualizing the interior of the model without sketching, or sketching the model without attempting to predict unseen spatial relations. In addition, we found a positive correlation between sketched diagram accuracy and improvement on the diagram comprehension measure. Results suggest that generating a predictive diagram facilitates students' abilities to make inferences about spatial relationships in diagrams. Implications for the use of sketching in supporting STEM learning are discussed. Copyright © 2016 Cognitive Science Society, Inc.

  15. Streamlined, Inexpensive 3D Printing of the Brain and Skull.

    PubMed

    Naftulin, Jason S; Kimchi, Eyal Y; Cash, Sydney S

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional (3D) data that are typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient-specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good-quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine-instruction gcode files for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3-4 in consumable plastic filament as described, and the total process takes 14-17 hours, almost all of which is unsupervised (preprocessing = 4-6 hr; printing = 9-11 hr; post-processing = <30 min). Printing a matching portion of a skull costs $1-5 in consumable plastic filament and takes less than 14 hr in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients, who confirmed that rapid-prototype patient-specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes.
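
    One step of the workflow above, turning a segmented volume into an STL surface mesh, can be sketched generically as follows. This is a hedged illustration, not the authors' exact toolchain; the input file, iso-level, and output name are placeholders, and nibabel, scikit-image, and numpy-stl are assumed to be available.

      # Hedged sketch of the volume-to-STL step: extract an iso-surface from a
      # segmented brain volume and write it as an STL mesh for slicing to gcode.
      import nibabel as nib
      import numpy as np
      from skimage import measure
      from stl import mesh  # numpy-stl

      vol = nib.load("brain_mask.nii.gz").get_fdata()            # segmented volume (placeholder)
      verts, faces, normals, values = measure.marching_cubes(vol, level=0.5)

      surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
      for i, tri in enumerate(faces):
          surface.vectors[i] = verts[tri]                        # triangle vertex coordinates

      surface.save("brain_hemisphere.stl")                       # hand off to a slicer for gcode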

  16. Streamlined, Inexpensive 3D Printing of the Brain and Skull

    PubMed Central

    Cash, Sydney S.

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional (3D) data that are typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient-specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good-quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine-instruction gcode files for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3–4 in consumable plastic filament as described, and the total process takes 14–17 hours, almost all of which is unsupervised (preprocessing = 4–6 hr; printing = 9–11 hr; post-processing = <30 min). Printing a matching portion of a skull costs $1–5 in consumable plastic filament and takes less than 14 hr in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients, who confirmed that rapid-prototype patient-specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes. PMID:26295459

  17. Collaborative Multi-Scale 3d City and Infrastructure Modeling and Simulation

    NASA Astrophysics Data System (ADS)

    Breunig, M.; Borrmann, A.; Rank, E.; Hinz, S.; Kolbe, T.; Schilcher, M.; Mundani, R.-P.; Jubierre, J. R.; Flurl, M.; Thomsen, A.; Donaubauer, A.; Ji, Y.; Urban, S.; Laun, S.; Vilgertshofer, S.; Willenborg, B.; Menninghaus, M.; Steuer, H.; Wursthorn, S.; Leitloff, J.; Al-Doori, M.; Mazroobsemnani, N.

    2017-09-01

    Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group drawn from civil engineering informatics and geo-informatics, combining skills from both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructure. Experiences during this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  18. Flatbed-type 3D display systems using integral imaging method

    NASA Astrophysics Data System (ADS)

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

    We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and has continuous motion parallax. We have applied our technology to 15.4-inch displays and realized a horizontal resolution of 480 with 12 parallaxes by adopting a mosaic pixel arrangement on the display panel. This allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for viewers is very important, so we measured their effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time, and various biological effects were measured before and after the task of watching 3-D images. We found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  19. Using Cesium for 3D Thematic Visualisations on the Web

    NASA Astrophysics Data System (ADS)

    Gede, Mátyás

    2018-05-01

    Cesium (http://cesiumjs.org) is an open source, WebGL-based JavaScript library for virtual globes and 3D maps. It is an excellent tool for 3D thematic visualisations, but to use its full functionality it has to be fed with its own file format, CZML. Unfortunately, this format is not yet supported by any major GIS software. This paper introduces a plugin for QGIS, developed by the author, which facilitates the creation of CZML files for various types of visualisations. The usability of Cesium is also examined in various hardware/software environments.
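
    CZML is a JSON-based stream of packets, so a minimal file can be written by hand; the hedged sketch below produces a document packet plus one point entity that Cesium can load with its CzmlDataSource. It is not the author's QGIS plugin, and the ids, coordinates, and styling are placeholder values.

      # Minimal hand-written CZML document: a "document" packet followed by one
      # point entity. Illustrative only; not the QGIS plugin described above.
      import json

      czml = [
          {"id": "document", "name": "thematic-demo", "version": "1.0"},
          {
              "id": "site-1",
              "name": "Sample site",
              "position": {"cartographicDegrees": [19.06, 47.47, 0.0]},  # lon, lat, height
              "point": {"pixelSize": 12, "color": {"rgba": [255, 0, 0, 255]}},
          },
      ]

      with open("thematic.czml", "w") as f:
          json.dump(czml, f, indent=2)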

  20. BEST3D user's manual: Boundary Element Solution Technology, 3-Dimensional Version 3.0

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The theoretical basis and programming strategy utilized in the construction of the computer program BEST3D (boundary element solution technology - three dimensional) and detailed input instructions are provided for the use of the program. An extensive set of test cases and sample problems is included in the manual and is also available for distribution with the program. The BEST3D program was developed under the 3-D Inelastic Analysis Methods for Hot Section Components contract (NAS3-23697). The overall objective of this program was the development of new computer programs allowing more accurate and efficient three-dimensional thermal and stress analysis of hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The BEST3D program allows both linear and nonlinear analysis of static and quasi-static elastic problems and transient dynamic analysis for elastic problems. Calculation of elastic natural frequencies and mode shapes is also provided.

  1. 2D and 3D visualization methods of endoscopic panoramic bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometric distortion has often not been discussed. However, visualization of the distortion level is highly desirable for an objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters that were applied to the endoscopic images during the registration process of the mosaicking algorithm. For a global first-view impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons easily identify geometrically distorted structures in the panoramic image, allowing more objective medical interpretation of tumor tissue shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method to map panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. The panoramic image is then mapped by the Hammer-Aitoff equal-area projection onto the 3-D surface using texture mapping. Finally, the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation, documentation, and surgical planning.
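
    The Hammer-Aitoff equal-area projection mentioned above has a compact closed form; the snippet below is one plausible implementation of that mapping step (longitude/latitude in radians to flat map coordinates), not the authors' code.

      # Hammer-Aitoff equal-area forward projection (illustrative helper, not the
      # authors' implementation): map lon/lat in radians to 2D map coordinates.
      import numpy as np

      def hammer_aitoff(lon, lat):
          denom = np.sqrt(1.0 + np.cos(lat) * np.cos(lon / 2.0))
          x = 2.0 * np.sqrt(2.0) * np.cos(lat) * np.sin(lon / 2.0) / denom
          y = np.sqrt(2.0) * np.sin(lat) / denom
          return x, y

      # Example: project a grid of sphere coordinates for texture lookup.
      lon, lat = np.meshgrid(np.linspace(-np.pi, np.pi, 512),
                             np.linspace(-np.pi / 2, np.pi / 2, 256))
      u, v = hammer_aitoff(lon, lat)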

  2. Surface gloss and color perception of 3D objects.

    PubMed

    Xiao, Bei; Brainard, David H

    2008-01-01

    Two experiments explore the color perception of objects in complex scenes. The first experiment examines the color perception of objects across variation in surface gloss. Observers adjusted the color appearance of a matte sphere to match that of a test sphere. Across conditions we varied the body color and glossiness of the test sphere. The data indicate that observers do not simply match the average light reflected from the test. Indeed, the visual system compensates for the physical effect of varying the gloss, so that appearance is stabilized relative to what is predicted by the spatial average. The second experiment examines how people perceive color across locations on an object. We replaced the test sphere with a soccer ball that had one of its hexagonal faces colored. Observers were asked to adjust the match sphere to have the same color appearance as this test patch. The test patch could be located at either an upper or lower location on the soccer ball. In addition, we varied the surface gloss of the entire soccer ball (including the test patch). The data show that there is an effect of test patch location on observers' color matching, but this effect is small compared to the physical change in the average light reflected from the test patch across the two locations. In addition, the effect of glossy highlights on the color appearance of the test patch was consistent with the results from Experiment 1.

  3. Surface gloss and color perception of 3D objects

    PubMed Central

    Xiao, Bei; Brainard, David H.

    2008-01-01

    Two experiments explore the color perception of objects in complex scenes. The first experiment examines the color perception of objects across variation in surface gloss. Observers adjusted the color appearance of a matte sphere to match that of a test sphere. Across conditions we varied the body color and glossiness of the test sphere. The data indicate that observers do not simply match the average light reflected from the test. Indeed, the visual system compensates for the physical effect of varying the gloss, so that appearance is stabilized relative to what is predicted by the spatial average. The second experiment examines how people perceive color across locations on an object. We replaced the test sphere with a soccer ball that had one of its hexagonal faces colored. Observers were asked to adjust the match sphere to have the same color appearance as this test patch. The test patch could be located at either an upper or lower location on the soccer ball. In addition, we varied the surface gloss of the entire soccer ball (including the test patch). The data show that there is an effect of test patch location on observers' color matching, but this effect is small compared to the physical change in the average light reflected from the test patch across the two locations. In addition, the effect of glossy highlights on the color appearance of the test patch was consistent with the results from Experiment 1. PMID:18598406

  4. Incorporation of zinc oxide nanoparticles into chitosan-collagen 3D porous scaffolds: Effect on morphology, mechanical properties and cytocompatibility of 3D porous scaffolds.

    PubMed

    Ullah, Saleem; Zainol, Ismail; Idrus, Ruszymah Hj

    2017-11-01

    Zinc oxide nanoparticles (particle size <50 nm) were incorporated into chitosan-collagen 3D porous scaffolds, and the effect of zinc oxide nanoparticle incorporation on the microstructure, mechanical properties, biodegradation, and cytocompatibility of the 3D porous scaffolds was investigated. Chitosan-collagen 3D porous scaffolds containing 0.5%, 1.0%, 2.0%, and 4.0% zinc oxide nanoparticles were fabricated via a freeze-drying technique. The effects of zinc oxide nanoparticle incorporation were investigated by mechanical and swelling tests, and the effect on scaffold morphology was examined microscopically. Biodegradation and cytocompatibility tests were used to investigate the effects of zinc oxide nanoparticle incorporation on the suitability of the scaffolds for tissue engineering applications. The mean pore size and swelling ratio of the scaffolds decreased upon incorporation of zinc oxide nanoparticles; however, the porosity, tensile modulus, and biodegradation rate increased. In vitro culture of human fibroblasts and keratinocytes showed that the zinc oxide nanoparticles facilitated cell adhesion, proliferation, and infiltration of the chitosan-collagen 3D porous scaffolds. It was found that zinc oxide nanoparticle incorporation enhanced the porosity, tensile modulus, and cytocompatibility of chitosan-collagen 3D porous scaffolds. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Human perception considerations for 3D content creation

    NASA Astrophysics Data System (ADS)

    Green, G. Almont

    2011-03-01

    Observation of and interviews with people viewing autostereoscopic 3D imagery provide evidence that many human perception considerations are required for 3D content creation. In a study we undertook, certain test autostereoscopic imagery elicited a highly emotional response and engagement, while other test imagery was given only a passing glance. That an image can be viewed with a certain level of stereopsis does not make it compelling. By taking into consideration the manner in which humans perceive depth and the space between objects, 3D content can achieve a level of familiarity and realness that is not possible with single-perspective imagery. When human perception issues are ignored, 3D imagery can be undesirable to viewers and a negative bias against 3D imagery can occur. The preparation of 3D content is more important than the display technology. Where human perception, as it is used to interpret reality, is not mimicked in the creation of 3D content, the general public typically expresses a negative bias against that imagery (where choices are provided). For some viewers, 3D content that could not exist naturally induces physical discomfort.

  6. Impact of MPEG-4 3D mesh coding on watermarking algorithms for polygonal 3D meshes

    NASA Astrophysics Data System (ADS)

    Funk, Wolfgang

    2004-06-01

    The MPEG-4 multimedia standard addresses the scene-based composition of audiovisual objects. Natural and synthetic multimedia content can be mixed and transmitted over narrow and broadband communication channels. Synthetic natural hybrid coding (SNHC) within MPEG-4 provides tools for 3D mesh coding (3DMC). We investigate the robustness of two different 3D watermarking algorithms for polygonal meshes with respect to 3DMC. The first algorithm is a blind detection scheme designed for labelling applications that require high bandwidth and low robustness. The second algorithm is a robust non-blind one-bit watermarking scheme intended for copyright protection applications. Both algorithms have been proposed by Benedens. We expect 3DMC to have an impact on the watermarked 3D meshes, as the algorithms used for our simulations work on vertex coordinates to encode the watermark. We use the 3DMC implementation provided with the MPEG-4 reference software and the Princeton Shape Benchmark model database for our simulations. The watermarked models are sent through the 3DMC encoder and decoder, and the watermark decoding process is performed. For each algorithm under consideration we examine the detection properties as a function of the quantization of the vertex coordinates.

  7. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies are becoming attractive for movie theater operators, e.g., interactive 3D games. In this paper, we present a case study that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and describe current issues related to lighting and interaction. Our second focus is providing gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered in a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  8. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capture parameters can be converted into a common format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and the real-world scene with the desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  9. H-Ransac a Hybrid Point Cloud Segmentation Combining 2d and 3d Data

    NASA Astrophysics Data System (ADS)

    Adam, A.; Chatzilari, E.; Nikolopoulos, S.; Kompatsiaris, I.

    2018-05-01

    In this paper, we present a novel 3D segmentation approach operating on point clouds generated from overlapping images. The aim of the proposed hybrid approach is to effectively segment co-planar objects by leveraging the structural information originating from the 3D point cloud and the visual information from the 2D images, without resorting to learning-based procedures. More specifically, the proposed hybrid approach, H-RANSAC, is an extension of the well-known RANSAC plane-fitting algorithm, incorporating an additional consistency criterion based on the results of 2D segmentation. Our expectation that integrating 2D data into 3D segmentation achieves more accurate results is validated experimentally in the domain of 3D city models. Results show that H-RANSAC can successfully delineate building components like main facades and windows, and provides more accurate segmentation results than the typical RANSAC plane-fitting algorithm.
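
    The core idea is to accept a RANSAC plane hypothesis only when its 3D inliers also agree with a dominant 2D segmentation label; a hedged sketch of that idea is given below with placeholder data and thresholds. It is an illustration of the concept, not the published H-RANSAC implementation.

      # Illustration of a hybrid RANSAC: fit a plane to 3D points, then keep only
      # hypotheses whose inliers also share a dominant 2D segmentation label.
      import numpy as np

      def fit_plane(p1, p2, p3):
          n = np.cross(p2 - p1, p3 - p1)
          n = n / (np.linalg.norm(n) + 1e-12)
          return n, -np.dot(n, p1)                      # plane: n.x + d = 0

      def hybrid_ransac(points, labels, iters=500, dist_thr=0.05, label_frac=0.8, rng=None):
          rng = rng or np.random.default_rng(0)
          best = (None, np.zeros(len(points), dtype=bool))
          for _ in range(iters):
              i, j, k = rng.choice(len(points), size=3, replace=False)
              n, d = fit_plane(points[i], points[j], points[k])
              inl = np.abs(points @ n + d) < dist_thr   # geometric consistency
              if inl.sum() < 3:
                  continue
              lab, cnt = np.unique(labels[inl], return_counts=True)
              if cnt.max() / inl.sum() < label_frac:    # 2D-label consistency criterion
                  continue
              inl = inl & (labels == lab[cnt.argmax()])
              if inl.sum() > best[1].sum():
                  best = ((n, d), inl)
          return best

      points = np.random.default_rng(1).normal(size=(1000, 3))     # placeholder point cloud
      labels = np.random.default_rng(2).integers(0, 4, size=1000)  # per-point 2D segment ids
      plane, inliers = hybrid_ransac(points, labels)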

  10. Combinatorial clustering and Its Application to 3D Polygonal Traffic Sign Reconstruction From Multiple Images

    NASA Astrophysics Data System (ADS)

    Vallet, B.; Soheilian, B.; Brédif, M.

    2014-08-01

    The 3D reconstruction of similar 3D objects detected in 2D faces a major issue when it comes to grouping the 2D detections into clusters to be used to reconstruct the individual 3D objects. Simple clustering heuristics fail as soon as similar objects are close. This paper formulates a framework that uses the geometric quality of the reconstruction as a cue for proper clustering. We present a methodology to solve the resulting combinatorial optimization problem with some simplifications and approximations in order to make it tractable. The proposed method is applied to the reconstruction of 3D traffic signs from their 2D detections to demonstrate its capacity to resolve ambiguities.

  11. View subspaces for indexing and retrieval of 3D models

    NASA Astrophysics Data System (ADS)

    Dutagaci, Helin; Godil, Afzal; Sankur, Bülent; Yemez, Yücel

    2010-02-01

    View-based indexing schemes for 3D object retrieval are gaining popularity since they provide good retrieval results. These schemes are consistent with the theory that humans recognize objects based on their 2D appearances. View-based techniques also allow users to search with various queries such as binary images, range images, and even 2D sketches. Previous view-based techniques use classical 2D shape descriptors such as Fourier invariants, Zernike moments, Scale Invariant Feature Transform-based local features, and 2D Digital Fourier Transform coefficients. These methods describe each object independently of the others. In this work, we explore data-driven subspace models, such as Principal Component Analysis, Independent Component Analysis, and Nonnegative Matrix Factorization, to describe the shape information of the views. We treat the depth images obtained from various points on the view sphere as 2D intensity images and train a subspace to extract the inherent structure of the views within a database. We also show the benefit of categorizing shapes according to their eigenvalue spread. Both the shape categorization and data-driven feature set conjectures are tested on the PSB database and compared with competing view-based 3D shape retrieval algorithms.
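
    A hedged sketch of the subspace idea: flatten depth-image views into vectors, learn a PCA subspace from the database views, and retrieve by nearest neighbour in that subspace. The data, image size, and number of components are placeholder assumptions, not the paper's actual descriptor pipeline.

      # Sketch of view-subspace retrieval with PCA on flattened depth images.
      # Placeholder data; not the paper's exact feature pipeline.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      db_views = rng.random((2000, 64 * 64))          # database depth images, flattened
      query_view = rng.random((1, 64 * 64))           # query depth image (or sketch)

      pca = PCA(n_components=32).fit(db_views)
      db_coords = pca.transform(db_views)
      q_coords = pca.transform(query_view)

      dists = np.linalg.norm(db_coords - q_coords, axis=1)
      best_matches = np.argsort(dists)[:10]           # indices of the 10 closest views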

  12. Comparison of full 3-D, thin-film 3-D, and thin-film plate analyses of a postbuckled embedded delamination

    NASA Technical Reports Server (NTRS)

    Whitcomb, John D.

    1989-01-01

    Strain-energy release rates are often used to predict when delamination growth will occur in laminates under compression. Because of the inherently high computational cost of performing such analyses, less rigorous analyses such as thin-film plate analysis have been used. The assumptions imposed by plate theory restrict the analysis to the calculation of the total strain-energy release rate, G(sub t). The objective is to determine the accuracy of thin-film plate analysis by comparing the distribution of G(sub t) calculated using fully three-dimensional (3D), thin-film 3D, and thin-film plate analyses. Thin-film 3D analysis is the same as thin-film plate analysis, except that 3D analysis is used to model the sublaminate. The 3D stress analyses were performed using the finite element program NONLIN3D. The plate analysis results were obtained from published data, which used STAGS. Strain-energy release rates were calculated using variations of the virtual crack closure technique. The results demonstrate that thin-film plate analysis can predict the distribution of G(sub t) quite well, at least for the configurations considered. Also, these results verify the accuracy of the strain-energy release rate procedure for plate analysis.
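
    In the virtual crack closure technique mentioned above, the strain-energy release rate at a crack-front node is commonly estimated from the nodal force at the tip and the relative displacement of the node pair just behind it. The sketch below shows the standard textbook mode I expression, not the specific NONLIN3D or STAGS implementations used in the paper; the numerical values are placeholders.

      # Standard mode-I virtual crack closure estimate (generic textbook form,
      # not the paper's specific implementation): G_I = F_y * dv / (2 * da * b).
      def vcct_mode_I(F_y, dv, da, b):
          """F_y: nodal force at the crack tip [N]; dv: opening displacement of the
          node pair behind the tip [m]; da: element length along the crack [m];
          b: element width along the crack front [m]. Returns G_I in J/m^2."""
          return F_y * dv / (2.0 * da * b)

      G_I = vcct_mode_I(F_y=12.0, dv=2.0e-5, da=5.0e-4, b=1.0e-3)  # -> 240 J/m^2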

  13. Novel interactive virtual showcase based on 3D multitouch technology

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch virtual objects floating in the air from all four sides and interact with them by touching the four surfaces of the virtual showcase. Unlike a traditional multitouch system, this system can not only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing multi-touch input captured simultaneously from the four planes. Experimental results show the potential of the proposed system to be applied in the exhibition of historical relics and other precious goods.

  14. Repeated MDMA administration increases MDMA-produced locomotor activity and facilitates the acquisition of MDMA self-administration: role of dopamine D2 receptor mechanisms.

    PubMed

    van de Wetering, Ross; Schenk, Susan

    2017-04-01

    Repeated exposure to ±3,4-methylenedioxymethamphetamine (MDMA) produces sensitization to MDMA-produced hyperactivity, but the mechanisms underlying the development of this sensitized response, and its relationship to the reinforcing effects of MDMA, are unknown. This study determined the effect of a sensitizing regimen of MDMA exposure on the acquisition of MDMA self-administration and investigated the role of dopamine D2 receptor mechanisms. Rats received the selective D2 antagonist eticlopride (0.0 or 0.3 mg/kg, i.p.) and MDMA (0.0 or 10.0 mg/kg, i.p.) during a five-day pretreatment regimen. Two days following the final session, the locomotor activating effects of MDMA (5 mg/kg, i.p.) and the latency to acquisition of MDMA self-administration were determined. Pretreatment with MDMA enhanced the locomotor activating effects of MDMA and facilitated the acquisition of MDMA self-administration. Administration of eticlopride during MDMA pretreatment completely blocked the development of sensitization to MDMA-produced hyperactivity but failed to significantly alter the facilitated acquisition of MDMA self-administration. Pretreatment with eticlopride alone facilitated the acquisition of self-administration. These data suggest that repeated MDMA exposure sensitized both the locomotor activating and reinforcing effects of MDMA. Activation of D2 receptors during MDMA pretreatment appears critical for the development of sensitization to MDMA-produced hyperactivity. The role of D2 receptor mechanisms in the development of sensitization to the reinforcing effects of MDMA is equivocal.

  15. 3D X-Ray Luggage-Screening System

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth

    2006-01-01

    A three-dimensional (3D) x-ray luggage-screening system has been proposed to reduce the fatigue experienced by human inspectors and increase their ability to detect weapons and other contraband. The system and variants thereof could supplant thousands of x-ray scanners now in use at hundreds of airports in the United States and other countries. The device would be applicable to any security checkpoint where current two-dimensional scanners are in use. A conventional x-ray luggage scanner generates a single two-dimensional (2D) image that conveys no depth information. Therefore, a human inspector must scrutinize the image in an effort to understand ambiguous-appearing objects as they pass by at high speed on a conveyor belt. Such a high level of concentration can induce fatigue, causing the inspector's concentration and vigilance to lapse. In addition, because of the lack of depth information, contraband objects could be made more difficult to detect by positioning them near other objects so as to create x-ray images that confuse inspectors. The proposed system would make it unnecessary for a human inspector to interpret 2D images, which show objects at different depths as superimposed. Instead, the system would take advantage of the natural human ability to infer 3D information from stereographic or stereoscopic images. The inspector would be able to perceive two objects at different depths, in a more nearly natural manner, as distinct 3D objects lying at different depths. Hence, the inspector could recognize objects with greater accuracy and less effort. The major components of the proposed system would be similar to those of x-ray luggage scanners now in use. As in a conventional x-ray scanner, there would be an x-ray source. Unlike in a conventional scanner, there would be two x-ray image sensors, denoted the left and right sensors, located at positions along the conveyor that are upstream and downstream, respectively (see figure). X-ray illumination
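
    The depth cue produced by the two offset sensors follows the standard stereo parallax relation; the snippet below is a generic, hedged illustration of that relation (Z = f * B / d), not the proposed scanner's actual geometry, calibration, or units.

      # Generic stereo triangulation relation underlying the depth cue from two
      # offset views: Z = f * B / d. Illustrative only; not the scanner's geometry.
      def depth_from_disparity(focal_px, baseline_m, disparity_px):
          """Focal length in pixels, sensor baseline in metres, disparity in pixels."""
          return focal_px * baseline_m / disparity_px

      Z = depth_from_disparity(focal_px=1200.0, baseline_m=0.1, disparity_px=300.0)
      # -> 0.4 m for this toy example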

  16. TIPdb-3D: the three-dimensional structure database of phytochemicals from Taiwan indigenous plants

    PubMed Central

    Tung, Chun-Wei; Lin, Ying-Chi; Chang, Hsun-Shuo; Wang, Chia-Chi; Chen, Ih-Sheng; Jheng, Jhao-Liang; Li, Jih-Heng

    2014-01-01

    The rich indigenous and endemic plants in Taiwan serve as a resourceful bank for biologically active phytochemicals. Based on our TIPdb database curating bioactive phytochemicals from Taiwan indigenous plants, this study presents a three-dimensional (3D) chemical structure database named TIPdb-3D to support the discovery of novel pharmacologically active compounds. The Merck Molecular Force Field (MMFF94) was used to generate 3D structures of phytochemicals in TIPdb. The 3D structures could facilitate the analysis of 3D quantitative structure–activity relationships, the exploration of chemical space and the identification of potential pharmacologically active compounds using protein–ligand docking. Database URL: http://cwtung.kmu.edu.tw/tipdb. PMID:24930145
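
    MMFF94-based 3D structure generation of this kind can be reproduced generically with RDKit; the sketch below is a hedged illustration (not the TIPdb-3D build pipeline), and the caffeine SMILES string is merely a placeholder phytochemical.

      # Generic MMFF94 3D structure generation with RDKit (illustrative; not the
      # TIPdb-3D pipeline). The SMILES string is a placeholder phytochemical.
      from rdkit import Chem
      from rdkit.Chem import AllChem

      mol = Chem.MolFromSmiles("Cn1cnc2c1c(=O)n(C)c(=O)n2C")  # caffeine, as an example
      mol = Chem.AddHs(mol)
      AllChem.EmbedMolecule(mol, randomSeed=42)   # initial 3D embedding
      AllChem.MMFFOptimizeMolecule(mol)           # MMFF94 is the default force-field variant
      Chem.MolToMolFile(mol, "compound_3d.mol")   # write optimized 3D coordinates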

  17. Facilitation of extinction of operant behaviour in C57Bl/6 mice by chlordiazepoxide and D-cycloserine.

    PubMed

    Leslie, Julian C; Norwood, Kelly; Kennedy, Paul J; Begley, Michael; Shaw, David

    2012-09-01

    The effects on extinction of the GABAergic drug chlordiazepoxide (CDP) and the glutamatergic drug D-cycloserine (DCS) were compared in C57BL/6 mice. Following a palatability test (Experiment 1), Experiments 2-6 involved food-reinforced lever-press training followed by extinction sessions at 1- or 4-day intervals, during which the effects of the drugs were examined. Experiment 7 involved a two-lever task. CDP did not affect food palatability (Experiment 1), but facilitated extinction when administered prior to extinction sessions via intracerebral (Experiment 2) or peripheral administration at 1-day (Experiments 3-7) or 4-day intervals (Experiment 6). Reducing the amount of training prior to extinction reduced the delay typically seen in the effect of CDP, and CDP had a larger effect in early sessions on mice that had received less training (Experiment 3). There was some evidence that the effect of CDP could be blocked by flumazenil (Experiment 4), and CDP withdrawal reversed extinction facilitation (Experiments 5 and 7). With 4-day intervals, DCS administered immediately following extinction sessions, or pre-session CDP, facilitated extinction with 48-trial sessions (Experiment 6B). With six-trial sessions, the co-administration of post-session DCS enhanced the facilitation produced by pre-session CDP (Experiment 6A). Finally, CDP facilitated extinction in a dose-related fashion following training on a two-lever food-reinforced task (Experiment 7). The findings are consistent with the hypotheses that the two neurotransmitter systems have different roles in operant extinction: glutamatergic systems are involved in extinction learning, and GABAergic systems are involved in the expression of that learning. This parallels findings with extinction following Pavlovian conditioning, which has been more extensively investigated.

  18. Shape‐Controlled, Self‐Wrapped Carbon Nanotube 3D Electronics

    PubMed Central

    Wang, Huiliang; Wang, Yanming; Tee, Benjamin C.‐K.; Kim, Kwanpyo; Lopez, Jeffrey; Cai, Wei

    2015-01-01

    The mechanical flexibility and structural softness of ultrathin devices based on organic thin films and low-dimensional nanomaterials have enabled a wide range of applications including flexible displays, artificial skin, and health monitoring devices. However, both the living systems and the inanimate systems encountered in daily life are 3D. It is therefore desirable either to create freestanding electronics in a 3D form or to incorporate electronics onto 3D objects. Here, a technique is reported that utilizes shape-memory polymers together with carbon nanotube flexible electronics to achieve this goal. Temperature-assisted shape control of these freestanding electronics in a programmable manner is demonstrated, with theoretical analysis for understanding the shape evolution. The shape control process can be executed with prepatterned heaters, which is desirable for 3D shape formation in an enclosed environment. The incorporation of carbon nanotube transistors, gas sensors, temperature sensors, and memory devices capable of self-wrapping onto irregularly shaped objects without degradation in device performance is demonstrated. PMID:27980972

  19. 3D-printed tracheoesophageal puncture and prosthesis placement simulator.

    PubMed

    Barber, Samuel R; Kozin, Elliott D; Naunheim, Matthew R; Sethi, Rosh; Remenschneider, Aaron K; Deschler, Daniel G

    A tracheoesophageal prosthesis (TEP) allows for speech after total laryngectomy. However, TEP placement is technically challenging, requiring a coordinated series of steps. Surgical simulators improve technical skills and reduce operative time. We hypothesize that a reusable 3-dimensional (3D)-printed TEP simulator will facilitate comprehension and rehearsal prior to actual procedures. The simulator was designed using Fusion360 (Autodesk, San Rafael, CA). Components were 3D-printed in-house using an Ultimaker 2+ (Ultimaker, Netherlands). Squid simulated the common tracheoesophageal wall, and a Blom-Singer TEP (InHealth Technologies, Carpinteria, CA) replicated placement. Subjects watched an instructional video and completed pre- and post-simulation surveys. The simulator comprised 3D-printed parts: the esophageal lumen and the superficial stoma, with squid placed between the components. Ten trainees participated. Significant differences existed between junior and senior residents on survey items regarding anatomy knowledge (p < 0.05), technical details (p < 0.01), and equipment setup (p < 0.01). Subjects agreed that the simulation felt accurate and that rehearsal raised confidence in future procedures. A 3D-printed TEP simulator is feasible for surgical training, and simulation involving multiple steps may accelerate technical skills and improve education. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Current drug treatments targeting dopamine D3 receptor.

    PubMed

    Leggio, Gian Marco; Bucolo, Claudio; Platania, Chiara Bianca Maria; Salomone, Salvatore; Drago, Filippo

    2016-09-01

    Dopamine receptors (DR) have been extensively studied, but only in recent years have they become the object of investigations to elucidate the specific roles of the different subtypes (D1R, D2R, D3R, D4R, D5R) in neural transmission and circuitry. D1-like receptors (D1R and D5R) and D2-like receptors (D2R, D3R and D4R) differ in signal transduction, binding profile, localization in the central nervous system, and physiological effects. D3R is involved in a number of pathological conditions, including schizophrenia, Parkinson's disease, addiction, anxiety, depression, and glaucoma. Development of selective D3R ligands has so far been challenging, due to the high sequence identity and homology shared by D2R and D3R. As a consequence, although rational design of selective DR ligands has been carried out, none of the currently available medicines selectively targets a given D2-like receptor subtype. The availability of the D3R ligand [(11)C]-(+)-PHNO for positron emission tomography studies in animal models as well as in humans allows researchers to estimate the expression of D3R in vivo; displacement of [(11)C]-(+)-PHNO binding by concurrent drug treatments is used to estimate the in vivo occupancy of D3R. Here we provide an overview of studies indicating D3R as a target for pharmacological therapy, and a review of market-approved drugs endowed with significant affinity at D3R that are used to treat disorders where D3R plays a relevant role. Copyright © 2016 Elsevier Inc. All rights reserved.