Science.gov

Sample records for 3D pose estimation

  1. Neuromorphic Event-Based 3D Pose Estimation

    PubMed Central

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.

    2016-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real-time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
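
    A minimal sketch of the incremental idea (not the authors' exact update rule): each incoming event nudges the current pose toward reducing the reprojection error of the nearest projected model point. The intrinsics, model points, and step size below are placeholder assumptions, and only the translation is updated for brevity.

        import numpy as np

        # Hypothetical pinhole intrinsics and 3D model points (assumptions, not from the paper)
        fx = fy = 500.0
        cx = cy = 320.0
        model_pts = np.random.rand(100, 3) + np.array([0.0, 0.0, 2.0])  # points in front of the camera

        R = np.eye(3)                 # current rotation estimate
        t = np.zeros(3)               # current translation estimate
        lr = 1e-4                     # per-event step size (assumption)

        def project(P):
            """Project camera-frame points to pixels with a pinhole model."""
            return np.stack([fx * P[:, 0] / P[:, 2] + cx,
                             fy * P[:, 1] / P[:, 2] + cy], axis=1)

        def update_pose_with_event(event_xy, R, t):
            """One incremental update: move t to reduce the reprojection error
            of the model point whose projection is closest to the event."""
            P = model_pts @ R.T + t                   # model points in the camera frame
            uv = project(P)
            i = np.argmin(np.linalg.norm(uv - event_xy, axis=1))
            X, Y, Z = P[i]
            r = uv[i] - event_xy                      # 2D residual of the matched point
            # Jacobian of the projection w.r.t. the camera-frame point (= w.r.t. t)
            J = np.array([[fx / Z, 0.0, -fx * X / Z**2],
                          [0.0, fy / Z, -fy * Y / Z**2]])
            return t - lr * (J.T @ r)                 # gradient step on 0.5 * ||r||^2

        # Example: feed a stream of (x, y) events
        for event in np.random.rand(1000, 2) * np.array([640, 480]):
            t = update_pose_with_event(event, R, t)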

  2. SIFT algorithm-based 3D pose estimation of femur.

    PubMed

    Zhang, Xuehe; Zhu, Yanhe; Li, Changle; Zhao, Jie; Li, Ge

    2014-01-01

    To address the lack of 3D space information in the digital radiography of a patient femur, a pose estimation method based on 2D-3D rigid registration is proposed in this study. The method uses two digital radiography images to realize the preoperative 3D visualization of a fractured femur. Compared with pure Digital Radiography or Computed Tomography imaging diagnostic methods, the proposed method has the advantages of low cost, high precision, and minimal harmful radiation. First, stable matching point pairs in the frontal and lateral images of the patient femur and the universal femur are obtained by using the Scale Invariant Feature Transform method. Then, the 3D pose estimation registration parameters of the femur are calculated by using the Iterative Closest Point (ICP) algorithm. Finally, registration accuracy is evaluated based on the deviation between the six degrees-of-freedom parameters calculated by the proposed method and the preset posture parameters. After registration, the rotation error is less than 1.5°, and the translation error is less than 1.2 mm, which indicate that the proposed method has high precision and robustness. The proposed method provides 3D image information for effective preoperative orthopedic diagnosis and surgery planning. PMID:25226990
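
    A minimal sketch of the rigid-registration core used by ICP-style methods (the generic algorithm, not the paper's femur-specific pipeline; the point sets, ground-truth motion, and iteration count are placeholders):

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:            # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(src, dst, n_iter=30):
            """Iterative Closest Point: alternate nearest-neighbour matching and Kabsch."""
            tree = cKDTree(dst)
            cur = src.copy()
            R_total, t_total = np.eye(3), np.zeros(3)
            for _ in range(n_iter):
                _, idx = tree.query(cur)            # closest dst point for every src point
                R, t = best_rigid_transform(cur, dst[idx])
                cur = cur @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total

        # Toy usage with a small known ground-truth motion (placeholder data)
        dst = np.random.rand(200, 3)
        theta = 0.1                                            # small ground-truth rotation (rad)
        R_gt = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                         [np.sin(theta),  np.cos(theta), 0.0],
                         [0.0, 0.0, 1.0]])
        t_gt = np.array([0.05, -0.02, 0.1])
        src = (dst - t_gt) @ R_gt                              # so that R_gt @ src_i + t_gt == dst_i
        R_est, t_est = icp(src, dst)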

  3. Viewpoint Invariant Gesture Recognition and 3D Hand Pose Estimation Using RGB-D

    ERIC Educational Resources Information Center

    Doliotis, Paul

    2013-01-01

    The broad application domain of the work presented in this thesis is pattern classification with a focus on gesture recognition and 3D hand pose estimation. One of the main contributions of the proposed thesis is a novel method for 3D hand pose estimation using RGB-D. Hand pose estimation is formulated as a database retrieval problem. The proposed…

  4. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  5. Head pose estimation from a 2D face image using 3D face morphing with depth parameters.

    PubMed

    Kong, Seong G; Mbouna, Ralph Oyini

    2015-06-01

    This paper presents estimation of head pose angles from a single 2D face image using a 3D face model morphed from a reference face model. A reference model refers to a 3D face of a person of the same ethnicity and gender as the query subject. The proposed scheme minimizes the disparity between the two sets of prominent facial features on the query face image and the corresponding points on the 3D face model to estimate the head pose angles. The 3D face model used is morphed from a reference model to be more specific to the query face in terms of the depth error at the feature points. The morphing process produces a 3D face model more specific to the query image when multiple 2D face images of the query subject are available for training. The proposed morphing process is computationally efficient since the depth of a 3D face model is adjusted by a scalar depth parameter at feature points. Optimal depth parameters are found by minimizing the disparity between the 2D features of the query face image and the corresponding features on the morphed 3D model projected onto 2D space. The proposed head pose estimation technique was evaluated on two benchmarking databases: 1) the USF Human-ID database for depth estimation and 2) the Pointing'04 database for head pose estimation. Experiment results demonstrate that head pose estimation errors in nodding and shaking angles are as low as 7.93° and 4.65° on average for a single 2D input face image. PMID:25706638
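
    A hedged sketch of the underlying idea: estimate head pose angles (and a translation) by minimizing the disparity between detected 2D feature points and projected 3D model feature points. The intrinsics, landmark coordinates, and optimizer settings are placeholders, and the paper's depth-morphing step is omitted.

        import numpy as np
        from scipy.optimize import least_squares

        fx = fy = 800.0
        cx, cy = 320.0, 240.0   # placeholder intrinsics

        def euler_to_R(pitch, yaw, roll):
            """Rotation matrix from nodding (pitch), shaking (yaw) and tilting (roll) angles."""
            cp, sp = np.cos(pitch), np.sin(pitch)
            cyw, syw = np.cos(yaw), np.sin(yaw)
            cr, sr = np.cos(roll), np.sin(roll)
            Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
            Ry = np.array([[cyw, 0, syw], [0, 1, 0], [-syw, 0, cyw]])
            Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
            return Rz @ Ry @ Rx

        def residuals(params, model_pts, image_pts):
            """Reprojection disparity between projected 3D features and observed 2D features."""
            pitch, yaw, roll, tx, ty, tz = params
            P = model_pts @ euler_to_R(pitch, yaw, roll).T + np.array([tx, ty, tz])
            proj = np.stack([fx * P[:, 0] / P[:, 2] + cx,
                             fy * P[:, 1] / P[:, 2] + cy], axis=1)
            return (proj - image_pts).ravel()

        # Placeholder 3D facial landmarks (nose tip, eye corners, mouth corners) and observed 2D points
        model_pts = np.array([[0.0, 0.0, 0.0], [-30.0, -35.0, -20.0], [30.0, -35.0, -20.0],
                              [-25.0, 30.0, -15.0], [25.0, 30.0, -15.0]])
        image_pts = np.array([[320.0, 240.0], [280.0, 200.0], [360.0, 200.0],
                              [285.0, 280.0], [355.0, 280.0]])

        x0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 600.0])   # initial guess: frontal face at 600 mm
        fit = least_squares(residuals, x0, args=(model_pts, image_pts))
        pitch_deg, yaw_deg, roll_deg = np.degrees(fit.x[:3])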

  6. Augmenting ViSP's 3D Model-Based Tracker with RGB-D SLAM for 3D Pose Estimation in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Li-Chee-Ming, J.; Armenakis, C.

    2016-06-01

    This paper presents a novel application of the Visual Servoing Platform (ViSP) for pose estimation in indoor and GPS-denied outdoor environments. Our proposed solution integrates the trajectory solution from RGB-D SLAM into ViSP's pose estimation process. Li-Chee-Ming and Armenakis (2015) explored the application of ViSP in mapping large outdoor environments and tracking larger objects (i.e., building models). Their experiments revealed that tracking was often lost due to a lack of model features in the camera's field of view, and also because of rapid camera motion. Further, the pose estimate was often biased due to incorrect feature matches. This work proposes a solution to improve ViSP's pose estimation performance, aiming specifically to reduce the frequency of tracking losses and the biases present in the pose estimate. This paper explores the integration of ViSP with RGB-D SLAM. We discuss the performance of the combined tracker in mapping indoor environments and tracking 3D wireframe indoor building models, and present preliminary results from our experiments.

  7. System for conveyor belt part picking using structured light and 3D pose estimation

    NASA Astrophysics Data System (ADS)

    Thielemann, J.; Skotheim, Ø.; Nygaard, J. O.; Vollset, T.

    2009-01-01

    Automatic picking of parts is an important challenge to solve within factory automation, because it can remove tedious manual work and save labor costs. One such application involves parts that arrive with random position and orientation on a conveyor belt. The parts should be picked off the conveyor belt and placed systematically into bins. We describe a system that consists of a structured light instrument for capturing 3D data and robust methods for aligning an input 3D template with a 3D image of the scene. The method uses general and robust pre-processing steps based on geometric primitives that allow the well-known Iterative Closest Point algorithm to converge quickly and robustly to the correct solution. The method has been demonstrated for localization of car parts with random position and orientation. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  8. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate their position with the precision necessary for a gripping robot to pick them up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for an object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, exemplified here by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  9. ILIAD Testing; and a Kalman Filter for 3-D Pose Estimation

    NASA Technical Reports Server (NTRS)

    Richardson, A. O.

    1996-01-01

    This report presents the results of a two-part project. The first part presents results of performance assessment tests on an Internet Library Information Assembly Data Base (ILIAD). It was found that ILIAD performed best when queries were short (one to three keywords) and were made up of rare, unambiguous words. In such cases as many as 64% of the typically 25 returned documents were found to be relevant. It was also found that a query format that was not so rigid with respect to spelling errors and punctuation marks would be more user-friendly. The second part of the report shows the design of a Kalman Filter for estimating motion parameters of a three-dimensional object from sequences of noisy data derived from two-dimensional pictures. Given six measured deviation values representing X, Y, Z, pitch, yaw, and roll, twelve parameters were estimated comprising the six deviations and their time rates of change. Values for the state transition matrix, the observation matrix, the system noise covariance matrix, and the observation noise covariance matrix were determined. A simple way of initializing the error covariance matrix was pointed out.
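
    A minimal sketch of such a filter under a constant-velocity assumption: 12 states (six pose deviations plus their rates), 6 measurements, with placeholder noise covariances; the report's actual matrix values are not reproduced here.

        import numpy as np

        dt = 0.1                      # frame interval (assumption)
        n, m = 12, 6

        # State transition: each deviation is integrated from its rate (constant-velocity model)
        F = np.eye(n)
        F[:6, 6:] = dt * np.eye(6)

        H = np.hstack([np.eye(6), np.zeros((6, 6))])   # only the six deviations are measured
        Q = 1e-4 * np.eye(n)                           # process noise covariance (placeholder)
        Rn = 1e-2 * np.eye(m)                          # observation noise covariance (placeholder)

        x = np.zeros(n)               # [X, Y, Z, pitch, yaw, roll, and their rates]
        P = np.eye(n)                 # simple initialization of the error covariance

        def kalman_step(x, P, z):
            """One predict/update cycle for a measurement z of the six deviations."""
            # Predict
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # Update
            S = H @ P_pred @ H.T + Rn
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(n) - K @ H) @ P_pred
            return x_new, P_new

        # Example: feed a sequence of noisy pose measurements
        for z in np.random.randn(50, 6) * 0.1:
            x, P = kalman_step(x, P, z)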

  10. Simultaneous Multi-Structure Segmentation and 3D Nonrigid Pose Estimation in Image-Guided Robotic Surgery.

    PubMed

    Nosrati, Masoud S; Abugharbieh, Rafeef; Peyrat, Jean-Marc; Abinahed, Julien; Al-Alao, Osama; Al-Ansari, Abdulla; Hamarneh, Ghassan

    2016-01-01

    In image-guided robotic surgery, segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information provides surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a challenging problem due to a variety of complications including significant noise attributed to bleeding and smoke from cutting, poor appearance contrast between different tissue types, occluding surgical tools, and limited visibility of the objects' geometries on the projected camera views. In this paper, we propose a multi-modal approach to segmentation where preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and consider the models' non-rigid deformations to match with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters, an assumption that is often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom, our method embeds these parameters into the optimization hence correcting the calibration parameters within the segmentation process. We evaluate our technique on synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery with results demonstrating high accuracy and robustness. PMID:26151933

  11. NavOScan: hassle-free handheld 3D scanning with automatic multi-view registration based on combined optical and inertial pose estimation

    NASA Astrophysics Data System (ADS)

    Munkelt, C.; Kleiner, B.; Thorhallsson, T.; Mendoza, C.; Bräuer-Burchardt, C.; Kühmstedt, P.; Notni, G.

    2013-05-01

    Portable 3D scanners with low measurement uncertainty are ideally suited for capturing the 3D shape of objects right in their natural environment. However, elaborate manual post-processing was usually necessary to build a complete 3D model from several overlapping scans (multiple views), or expensive or complex additional hardware (like trackers) was needed. In contrast, the NavOScan project[1] aims at fully automatic multi-view 3D scan assembly through a Navigation Unit attached to the scanner. This lightweight device combines an optical tracking system with an inertial measurement unit (IMU) for robust relative scanner position estimation. The IMU provides robustness against swift scanner movements during view changes, while the wide-angle, high dynamic range (HDR) optical tracker, focused on the measurement object and its background, ensures accurate sensor position estimates. The underlying software framework, partly implemented in hardware (FPGA) for performance reasons, fuses both data streams in real time and estimates the navigation unit's current pose. Using this pose to calculate the starting solution of the Iterative Closest Point registration approach allows for automatic registration of multiple 3D scans. After finishing the individual scans required to fully acquire the object in question, the operator is readily presented with the finalized, complete 3D model. The paper presents an overview of the NavOScan architecture, highlights key aspects of the registration and navigation pipeline, and shows several measurement examples obtained with the Navigation Unit attached to a hand-held structured-light 3D scanner.

  12. 3-D Pose Presentation for Training Applications

    ERIC Educational Resources Information Center

    Fox, Kaitlyn; Whitehead, Anthony

    2011-01-01

    Purpose: In the authors' experience, the biggest issue with pose-based exergames is the difficulty in effectively communicating a three-dimensional pose to a user to facilitate a thorough understanding for accurate pose replication. The purpose of this paper is to examine options for pose presentation. Design/methodology/approach: The authors…

  13. 3D sensor algorithms for spacecraft pose determination

    NASA Astrophysics Data System (ADS)

    Trenkle, John M.; Tchoryk, Peter, Jr.; Ritter, Greg A.; Pavlich, Jane C.; Hickerson, Aaron S.

    2006-05-01

    Researchers at the Michigan Aerospace Corporation have developed accurate and robust 3-D algorithms for pose determination (position and orientation) of satellites as part of an on-going effort supporting autonomous rendezvous, docking and space situational awareness activities. 3-D range data from a LAser Detection And Ranging (LADAR) sensor is the expected input; however, the approach is unique in that the algorithms are designed to be sensor independent. Parameterized inputs allow the algorithms to be readily adapted to any sensor of opportunity. The cornerstone of our approach is the ability to simulate realistic range data that may be tailored to the specifications of any sensor. We were able to modify an open-source raytracing package to produce point cloud information from which high-fidelity simulated range images are generated. The assumptions made in our experimentation are as follows: 1) we have access to a CAD model of the target including information about the surface scattering and reflection characteristics of the components; 2) the satellite of interest may appear at any 3-D attitude; 3) the target is not necessarily rigid, but does have a limited number of configurations; and, 4) the target is not obscured in any way and is the only object in the field of view of the sensor. Our pose estimation approach then involves rendering a large number of exemplars (100k to 5M), extracting 2-D (silhouette- and projection-based) and 3-D (surface-based) features, and then training ensembles of decision trees to predict: a) the 4-D regions on a unit hypersphere into which the unit quaternion that represents the vehicle [Qx, Qy, Qz, Qw] is pointing, and, b) the components of that unit quaternion. Results have been quite promising and the tools and simulation environment developed for this application may also be applied to non-cooperative spacecraft operations, Autonomous Hazard Detection and Avoidance (AHDA) for landing craft, terrain mapping, vehicle
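
    A sketch of the final regression stage under simplified assumptions: an ensemble of decision trees (here scikit-learn's RandomForestRegressor, standing in for the trained ensembles described above) regresses the four quaternion components from a feature vector, and the output is renormalized to a unit quaternion. The feature extraction from range images is mocked with random data.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Mock training set: feature vectors extracted from rendered range images (placeholder),
        # each labelled with the unit quaternion [Qx, Qy, Qz, Qw] of the rendered attitude.
        n_exemplars, n_features = 5000, 64
        X_train = rng.normal(size=(n_exemplars, n_features))
        q_train = rng.normal(size=(n_exemplars, 4))
        q_train /= np.linalg.norm(q_train, axis=1, keepdims=True)

        # Ensemble of decision trees regressing the four quaternion components jointly
        forest = RandomForestRegressor(n_estimators=100, max_depth=12)
        forest.fit(X_train, q_train)

        def predict_attitude(features):
            """Predict a unit quaternion for one feature vector extracted from a range image."""
            q = forest.predict(features.reshape(1, -1))[0]
            return q / np.linalg.norm(q)     # project back onto the unit hypersphere

        q_est = predict_attitude(rng.normal(size=n_features))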

  14. 3D face recognition under expressions, occlusions, and pose variations.

    PubMed

    Drira, Hassen; Ben Amor, Boulbaba; Srivastava, Anuj; Daoudi, Mohamed; Slama, Rim

    2013-09-01

    We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes. PMID:23868784

  15. Piecewise-rigid 2D-3D registration for pose estimation of snake-like manipulator using an intraoperative x-ray projection

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Murphy, R. J.; Kutzer, M. D.; Taylor, R. H.; Armand, M.

    2014-03-01

    Background: Snake-like dexterous manipulators may offer significant advantages in minimally-invasive surgery in areas not reachable with conventional tools. Precise control of a wire-driven manipulator is challenging due to factors such as cable deformation and unknown internal (cable friction) and external forces, thus requiring intraoperative correction of the calibration by determining the actual pose of the manipulator. Method: A method for simultaneously estimating pose and kinematic configuration of a piecewise-rigid object such as a snake-like manipulator from a single x-ray projection is presented. The method parameterizes kinematics using a small number of variables (e.g., 5), and optimizes them simultaneously with the 6 degree-of-freedom pose parameters of the base link using an image similarity between digitally reconstructed radiographs (DRRs) of the manipulator's attenuation model and the real x-ray projection. Result: Simulation studies assumed various geometric magnifications (1.2-2.6) and out-of-plane angulations (0°-90°) in a scenario of hip osteolysis treatment, and demonstrated that the median joint angle error was 0.04° (for 2.0 magnification, +/-10° out-of-plane rotation). Average computation time was 57.6 sec with 82,953 function evaluations on a mid-range GPU. The joint angle error remained lower than 0.07° while out-of-plane rotation was 0°-60°. An experiment using video images of a real manipulator demonstrated a similar trend as the simulation study, except for slightly larger error around the tip, attributed to accumulation of errors induced by deformation around each joint not modeled with a simple pin joint. Conclusions: The proposed approach enables high precision tracking of a piecewise-rigid object (i.e., a series of connected rigid structures) using a single projection image by incorporating prior knowledge about the shape and kinematic behavior of the object (e.g., each rigid structure connected by a pin joint parameterized by a

  16. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  17. Determination of vertebral pose in 3D by minimization of vertebral asymmetry

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Pernuš, Franjo; Likar, Boštjan

    2011-03-01

    The vertebral pose in three dimensions (3D) may provide valuable information for quantitative clinical measurements or aid the initialization of image analysis techniques. We propose a method for automated determination of the vertebral pose in 3D that, in an iterative registration scheme, estimates the position and rotation of the vertebral coordinate system in 3D images. By searching for the hypothetical points, which are located where the boundaries of anatomical structures would have maximal symmetrical correspondences when mirrored over the vertebral planes, the asymmetry of vertebral anatomical structures is minimized. The method was evaluated on 14 normal and 14 scoliotic vertebrae in images acquired by computed tomography (CT). For each vertebra, 1000 randomly initialized experiments were performed. The results show that the vertebral pose can be successfully determined in 3D with a mean accuracy of 0.5 mm and 0.6° and a mean precision of 0.17 mm and 0.17°, according to the 3D position and 3D rotation, respectively.
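
    A sketch of the asymmetry measure at the heart of this idea (generic, not the authors' implementation): reflect boundary points over a candidate symmetry plane and score how well the mirrored set matches the original; the pose search would then minimize this score over plane position and orientation. The data and plane parameters are placeholders.

        import numpy as np
        from scipy.spatial import cKDTree

        def asymmetry(points, plane_point, plane_normal):
            """Mean distance between boundary points and their mirror images
            reflected over the plane (lower = more symmetric)."""
            n = plane_normal / np.linalg.norm(plane_normal)
            d = (points - plane_point) @ n                    # signed distance to the plane
            mirrored = points - 2.0 * d[:, None] * n          # reflection over the plane
            dist, _ = cKDTree(points).query(mirrored)         # closest original point to each mirror image
            return dist.mean()

        # Placeholder "boundary points" of a vertebra-like structure, symmetric about x = 0
        half = np.random.rand(500, 3)
        points = np.vstack([half, half * np.array([-1.0, 1.0, 1.0])])

        good = asymmetry(points, np.zeros(3), np.array([1.0, 0.0, 0.0]))   # true symmetry plane
        bad = asymmetry(points, np.zeros(3), np.array([1.0, 1.0, 0.0]))    # misoriented plane
        assert good < bad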

  18. Particle Filters and Occlusion Handling for Rigid 2D-3D Pose Tracking.

    PubMed

    Lee, Jehoon; Sandhu, Romeil; Tannenbaum, Allen

    2013-08-01

    In this paper, we address the problem of 2D-3D pose estimation. Specifically, we propose an approach to jointly track a rigid object in a 2D image sequence and to estimate its pose (position and orientation) in 3D space. We revisit a joint 2D segmentation/3D pose estimation technique, and then extend the framework by incorporating a particle filter to robustly track the object in a challenging environment, and by developing an occlusion detection and handling scheme to continuously track the object in the presence of occlusions. In particular, we focus on partial occlusions that prevent the tracker from extracting exact region properties of the object, which play a pivotal role for region-based tracking methods in maintaining the track. To this end, a dynamic choice of how to invoke the objective functional is made online based on the degree of dependency between predictions and measurements of the system, in accordance with the degree of occlusion and the variation of the object's pose. This scheme provides the robustness to deal with occlusions of an obstacle with different statistical properties from those of the object of interest. Experimental results demonstrate the practical applicability and robustness of the proposed method in several challenging scenarios. PMID:24058277
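
    A minimal, generic particle-filter skeleton of the kind referred to above (a sketch only; the state here is a 6-vector pose, the motion and measurement models are placeholders, and the paper's region-based likelihood and occlusion handling are not reproduced):

        import numpy as np

        rng = np.random.default_rng(1)
        n_particles = 500
        particles = rng.normal(scale=0.1, size=(n_particles, 6))    # [x, y, z, roll, pitch, yaw]
        weights = np.full(n_particles, 1.0 / n_particles)

        def motion_model(particles):
            """Random-walk prediction (placeholder for a learned dynamic model)."""
            return particles + rng.normal(scale=0.01, size=particles.shape)

        def likelihood(particles, observation):
            """Placeholder measurement model: how well each pose explains the observation.
            In the paper this would be a region-based segmentation energy."""
            err = np.linalg.norm(particles - observation, axis=1)
            return np.exp(-0.5 * (err / 0.05) ** 2)

        def particle_filter_step(particles, weights, observation):
            particles = motion_model(particles)                      # predict
            weights = weights * likelihood(particles, observation)   # weight by the measurement
            weights /= weights.sum()
            # Resample (multinomial) to avoid weight degeneracy
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            return particles[idx], np.full(len(particles), 1.0 / len(particles))

        # Example: track a slowly drifting "true" pose
        true_pose = np.zeros(6)
        for _ in range(100):
            true_pose = true_pose + 0.005
            particles, weights = particle_filter_step(particles, weights, true_pose)
        pose_estimate = particles.mean(axis=0)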

  19. Aircraft recognition and pose estimation

    NASA Astrophysics Data System (ADS)

    Hmam, Hatem; Kim, Jijoong

    2000-05-01

    This work presents a geometry based vision system for aircraft recognition and pose estimation using single images. Pose estimation improves the tracking performance of guided weapons with imaging seekers, and is useful in estimating target manoeuvres and aim-point selection required in the terminal phase of missile engagements. After edge detection and straight-line extraction, a hierarchy of geometric reasoning algorithms is applied to form line clusters (or groupings) for image interpretation. Assuming a scaled orthographic projection and coplanar wings, lateral symmetry inherent in the airframe provides additional constraints to further reject spurious line clusters. Clusters that accidentally pass all previous tests are checked against the original image and are discarded. Valid line clusters are then used to deduce aircraft viewing angles. By observing that the leading edges of wings of a number of aircraft of interest are within 45 to 65 degrees from the symmetry axis, a bounded range of aircraft viewing angles can be found. This generic property offers the advantage of not requiring the storage of complete aircraft models viewed from all aspects, and can handle aircraft with flexible wings (e.g. F111). Several aircraft images associated with various spectral bands (i.e. visible and infra-red) are finally used to evaluate the system's performance.

  20. Global regular solutions for the 3D Kawahara equation posed on unbounded domains

    NASA Astrophysics Data System (ADS)

    Larkin, Nikolai A.; Simões, Márcio Hiran

    2016-08-01

    An initial boundary value problem for the 3D Kawahara equation posed on a channel-type domain was considered. The existence and uniqueness results for global regular solutions as well as exponential decay of small solutions in the H2-norm were established.

  1. Global regular solutions for the 3D Zakharov-Kuznetsov equation posed on unbounded domains

    NASA Astrophysics Data System (ADS)

    Larkin, N. A.

    2015-09-01

    An initial-boundary value problem for the 3D Zakharov-Kuznetsov equation posed on unbounded domains is considered. Existence and uniqueness of a global regular solution as well as exponential decay of the H2-norm for small initial data are proven.

  2. Human Pose Estimation Using Consistent Max Covering.

    PubMed

    Jiang, Hao

    2011-09-01

    A novel consistent max-covering method is proposed for human pose estimation. We focus on problems in which a rough foreground estimation is available. Pose estimation is formulated as a jigsaw puzzle problem in which the body part tiles maximally cover the foreground region, match local image features, and satisfy body plan and color constraints. This method explicitly imposes a global shape constraint on the body part assembly. It anchors multiple body parts simultaneously and introduces hyperedges in the part relation graph, which is essential for detecting complex poses. Using multiple cues in pose estimation, our method is resistant to cluttered foregrounds. We propose an efficient linear method to solve the consistent max-covering problem. A two-stage relaxation finds the solution in polynomial time. Our experiments on a variety of images and videos show that the proposed method is more robust than previous locally constrained methods. PMID:21576747

  3. Combining focused MACE filters for pose estimation

    NASA Astrophysics Data System (ADS)

    Al-Ghoneim, Khaled A.; Vijaya Kumar, Bhagavatula

    1998-03-01

    In this paper we introduce the notion of a focused filter and discuss its application to the problem of pose estimation. A focused filter is a correlation filter designed to give a maximum response at one pose of the target. This pose is called the focus of the filter. As the actual pose of the target deviates from the focus, the filter's response should exhibit a graceful (and controlled) degradation. When presented with a test image, the responses of all focused filters are collected in a vector. This new vector will have a peak with the vector elements exhibiting the same shape as that used in designing one focused filter. This similarity is exploited for pose estimation by matching the filter responses to the designed shape. Simulation experiments are used to illustrate the potential of the new design method.

  4. Particle swarm optimization on low dimensional pose manifolds for monocular human pose estimation

    NASA Astrophysics Data System (ADS)

    Brauer, Jürgen; Hübner, Wolfgang; Arens, Michael

    2013-10-01

    Automatic assessment of situations with modern security and surveillance systems requires sophisticated discrimination capabilities. Therefore, action recognition, e.g. in terms of person-person or person-object interactions, is an essential core component of any surveillance system. A subclass of recent action recognition approaches is based on space time volumes, which are generated from trajectories of multiple anatomical landmarks like hands or shoulders. A general prerequisite of these methods is the robust estimation of the body pose, i.e. a simplified body model consisting of several anatomical landmarks. In this paper we address the problem of estimating 3D poses from monocular person image sequences. The first stage of our algorithm is the localization of body parts in the 2D image. For this, a part based object detection method is used, which in previous work has been shown to provide a sufficient basis for person detection and landmark estimation in a single step. The output of this processing step is a probability distribution for each landmark and image indicating possible locations of this landmark in image coordinates. The second stage of our algorithm searches for 3D pose estimates that best fit the 15 landmark probability distributions. For resolving ambiguities introduced by uncertainty in the locations of the landmarks, we perform an optimization within a Particle Swarm Optimization (PSO) framework, where each pose hypothesis is represented by a particle. Since the search in the high-dimensional 3D pose search space needs further guidance to deal with the inherently restricted 2D input information, we propose a new compact representation of motion sequences provided by motion capture databases. Poses of a motion sequence are embedded in a low-dimensional manifold. We represent each motion sequence by a compact representation referred to as pose splines using a small number of supporting point poses. The PSO algorithm can be extended to perform
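
    A compact, generic PSO skeleton of the kind used in the second stage (a sketch; the cost function below is a placeholder standing in for the fit between a 3D pose hypothesis and the landmark probability distributions, and the low-dimensional pose-spline parameterization is not reproduced):

        import numpy as np

        rng = np.random.default_rng(2)

        def cost(pose_params):
            """Placeholder cost: in the paper this would score how well the projected
            3D pose explains the 2D landmark probability maps."""
            return np.sum((pose_params - 0.3) ** 2, axis=-1)

        dim, n_particles, n_iter = 8, 40, 100     # low-dimensional manifold coordinates (assumption)
        w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration coefficients

        x = rng.uniform(-1.0, 1.0, size=(n_particles, dim))   # particle positions (pose hypotheses)
        v = np.zeros_like(x)
        pbest, pbest_cost = x.copy(), cost(x)
        gbest = pbest[np.argmin(pbest_cost)]

        for _ in range(n_iter):
            r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            c = cost(x)
            better = c < pbest_cost
            pbest[better], pbest_cost[better] = x[better], c[better]
            gbest = pbest[np.argmin(pbest_cost)]

        best_pose_params = gbest          # would be decoded back into a 3D skeleton pose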

  5. Factoring Algebraic Error for Relative Pose Estimation

    SciTech Connect

    Lindstrom, P; Duchaineau, M

    2009-03-09

    We address the problem of estimating the relative pose, i.e. translation and rotation, of two calibrated cameras from image point correspondences. Our approach is to factor the nonlinear algebraic pose error functional into translational and rotational components, and to optimize translation and rotation independently. This factorization admits subproblems that can be solved using direct methods with practical guarantees on global optimality. That is, for a given translation, the corresponding optimal rotation can directly be determined, and vice versa. We show that these subproblems are equivalent to computing the least eigenvector of second- and fourth-order symmetric tensors. When neither translation nor rotation is known, alternating translation and rotation optimization leads to a simple, efficient, and robust algorithm for pose estimation that improves on the well-known 5- and 8-point methods.
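
    The "least eigenvector" idea can be illustrated with the classical 8-point step for the essential matrix (a simplified sketch, not the authors' factored translation/rotation solver): stack one linear constraint per normalized correspondence and take the singular vector with the smallest singular value.

        import numpy as np

        def essential_from_correspondences(x1, x2):
            """Estimate the essential matrix E (up to scale) from normalized image
            correspondences x1 <-> x2 using the linear 8-point constraint x2^T E x1 = 0."""
            rows = []
            for (u1, v1), (u2, v2) in zip(x1, x2):
                rows.append([u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0])
            A = np.asarray(rows)
            # Least eigenvector of A^T A = right singular vector of A with smallest singular value
            _, _, Vt = np.linalg.svd(A)
            E = Vt[-1].reshape(3, 3)
            # Project onto the essential-matrix manifold (two equal singular values, one zero)
            U, S, Vt = np.linalg.svd(E)
            s = (S[0] + S[1]) / 2.0
            return U @ np.diag([s, s, 0.0]) @ Vt

        # Toy usage with placeholder normalized (calibrated) correspondences
        rng = np.random.default_rng(3)
        X = np.hstack([rng.uniform(-1, 1, (20, 2)), rng.uniform(4, 8, (20, 1))])  # 3D points
        t = np.array([0.2, 0.0, 0.0])                                             # second camera offset
        x1 = X[:, :2] / X[:, 2:]                   # projections in camera 1 (identity pose)
        x2 = (X - t)[:, :2] / (X - t)[:, 2:]       # projections in camera 2 (pure translation)
        E = essential_from_correspondences(x1, x2)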

  6. Joint 3D Estimation of Vehicles and Scene Flow

    NASA Astrophysics Data System (ADS)

    Menze, M.; Heipke, C.; Geiger, A.

    2015-08-01

    3D reconstruction of dynamic scenes is an important prerequisite for applications such as autonomous driving. While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a proof of concept and demonstrate the usefulness of our method.

  7. Pose estimation of non-cooperative targets without feature tracking

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Liu, Zongming; Lu, Shan; Sang, Nong

    2015-03-01

    Pose estimation plays a vital role in the final approach phase between two spacecraft: the target spacecraft and the observation spacecraft. Traditional techniques are usually based on feature tracking, which will not work when sufficient features are unavailable. To deal with this problem, we present a stereo camera-based pose estimation method without feature tracking. First, stereo vision is used to obtain a 2.5D reconstruction of the target spacecraft, and a 3D reconstruction is built by merging the point clouds from all viewpoints. Then a target coordinate system is constructed using the reconstruction results. Finally, a point cloud registration algorithm is used to solve for the current pose between the observation spacecraft and the target spacecraft. Experimental results show that both the position errors and the attitude errors satisfy the requirements of pose estimation. The method provides a solution for pose estimation without knowing the information of the targets, and this algorithm has a wider application range than algorithms based on feature tracking.

  8. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Herren, Kenneth

    2007-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  9. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John M.; Herren, Kenneth A.

    2008-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  10. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods are proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation and 2) they fail to learn personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database. PMID:25163062

  11. Pose Estimation and Mapping Using Catadioptric Cameras with Spherical Mirrors

    NASA Astrophysics Data System (ADS)

    Ilizirov, Grigory; Filin, Sagi

    2016-06-01

    Catadioptric cameras have the advantage of broadening the field of view and revealing otherwise occluded object parts. However, they differ geometrically from standard central perspective cameras because of light reflection from the mirror surface which alters the collinearity relation and introduces severe non-linear distortions of the imaged scene. Accommodating for these features, we present in this paper a novel modeling for pose estimation and reconstruction while imaging through spherical mirrors. We derive a closed-form equivalent to the collinearity principle via which we estimate the system's parameters. Our model yields a resection-like solution which can be developed into a linear one. We show that accurate estimates can be derived with only a small set of control points. Analysis shows that control configuration in the orientation scheme is rather flexible and that high levels of accuracy can be reached in both pose estimation and mapping. Clearly, the ability to model objects which fall outside of the immediate camera field-of-view offers an appealing means to supplement 3-D reconstruction and modeling.

  12. Automatic pose initialization for accurate 2D/3D registration applied to abdominal aortic aneurysm endovascular repair

    NASA Astrophysics Data System (ADS)

    Miao, Shun; Lucas, Joseph; Liao, Rui

    2012-02-01

    Minimally invasive abdominal aortic aneurysm (AAA) stenting can be greatly facilitated by overlaying the preoperative 3-D model of the abdominal aorta onto the intra-operative 2-D X-ray images. Accurate 2-D/3-D registration in 3-D space makes the 2-D/3-D overlay robust to the change of C-Arm angulations. So far, 2-D/3-D registration methods based on simulated X-ray projection images using multiple image planes have been shown to be able to provide satisfactory 3-D registration accuracy. However, one drawback of the intensity-based 2-D/3-D registration methods is that the similarity measure is usually highly non-convex and hence the optimizer can easily be trapped in local minima. User interaction is therefore often needed in the initialization of the position of the 3-D model in order to get a successful 2-D/3-D registration. In this paper, a novel 3-D pose initialization technique is proposed, as an extension of our previously proposed bi-plane 2-D/3-D registration method for AAA intervention [4]. The proposed method detects vessel bifurcation points and the spine centerline in both 2-D and 3-D images, and utilizes this landmark information to bring the 3-D volume into a 15 mm capture range. The proposed landmark detection method was validated on a real dataset, and is shown to be able to provide a good initialization for the 2-D/3-D registration in [4], thus making the workflow fully automatic.

  13. Spatio-Temporal Matching for Human Pose Estimation in Video.

    PubMed

    Zhou, Feng; Torre, Fernando De la

    2016-08-01

    Detecting and tracking humans in videos have been long-standing problems in computer vision. Most successful approaches (e.g., deformable parts models) heavily rely on discriminative models to build appearance detectors for body joints and generative models to constrain possible body configurations (e.g., trees). While these 2D models have been successfully applied to images (and with less success to videos), a major challenge is to generalize these models to cope with varying camera views. In order to achieve view-invariance, these 2D models typically require a large amount of training data across views that is difficult to gather and time-consuming to label. Unlike existing 2D models, this paper formulates the problem of human detection in videos as spatio-temporal matching (STM) between a 3D motion capture model and trajectories in videos. Our algorithm estimates the camera view and selects a subset of tracked trajectories that matches the motion of the 3D model. The STM is efficiently solved with linear programming, and it is robust to tracking mismatches, occlusions and outliers. To the best of our knowledge this is the first paper that solves the correspondence between video and 3D motion capture data for human pose detection. Experiments on the CMU motion capture, Human3.6M, Berkeley MHAD and CMU MAD databases illustrate the benefits of our method over state-of-the-art approaches. PMID:26863647

  14. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
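
    A sketch of one building block: triangulating a landmark's 3D position from its 2D observations in several frames with known camera matrices. This is a standard linear (DLT) triangulation, not claimed to be LMDB Builder's "optimal ray projection" method, and the cameras below are placeholders.

        import numpy as np

        def triangulate(observations, camera_matrices):
            """Linear (DLT) triangulation of one 3D landmark.
            observations: list of (u, v) pixel measurements, one per frame.
            camera_matrices: list of 3x4 projection matrices P = K [R | t] for those frames."""
            rows = []
            for (u, v), P in zip(observations, camera_matrices):
                rows.append(u * P[2] - P[0])
                rows.append(v * P[2] - P[1])
            A = np.asarray(rows)
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]                     # dehomogenize

        # Toy usage: one landmark seen by two placeholder cameras
        K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
        X_true = np.array([0.2, -0.1, 4.0, 1.0])
        obs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
        X_est = triangulate(obs, [P1, P2])          # approximately [0.2, -0.1, 4.0]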

  15. Exhaustive linearization for robust camera pose and focal length estimation.

    PubMed

    Penate-Sanchez, Adrian; Andrade-Cetto, Juan; Moreno-Noguer, Francesc

    2013-10-01

    We propose a novel approach for the estimation of the pose and focal length of a camera from a set of 3D-to-2D point correspondences. Our method compares favorably to competing approaches: it is more accurate than existing closed-form solutions, and both faster and more accurate than iterative ones. Our approach is inspired by the EPnP algorithm, a recent O(n) solution for the calibrated case. Yet we show that considering the focal length as an additional unknown renders the linearization and relinearization techniques of the original approach no longer valid, especially with large amounts of noise. We present new methodologies to circumvent this limitation, termed exhaustive linearization and exhaustive relinearization, which perform a systematic exploration of the solution space in closed form. The method is evaluated on both real and synthetic data, and our results show that besides producing precise focal length estimates, the retrieved camera pose is almost as accurate as the one computed using the EPnP, which assumes a calibrated camera. PMID:23969384

  16. Interferometric synthetic aperture radar detection and estimation based 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Moses, Randolph L.

    2006-05-01

    This paper explores three-dimensional (3D) interferometric synthetic aperture radar (IFSAR) image reconstruction when multiple scattering centers and noise are present in a radar resolution cell. We introduce an IFSAR scattering model that accounts for both multiple scattering centers and noise. The problem of 3D image reconstruction is then posed as a multiple hypothesis detection and estimation problem; resolution cells containing a single scattering center are detected and the 3D location of these cells' pixels are estimated; all other pixels are rejected from the image. Detection and estimation statistics are derived using the multiple scattering center IFSAR model. A 3D image reconstruction algorithm using these statistics is then presented, and its performance is evaluated for a 3D reconstruction of a backhoe from noisy IFSAR data.

  17. A Survey on Model Based Approaches for 2D and 3D Visual Human Pose Recovery

    PubMed Central

    Perez-Sala, Xavier; Escalera, Sergio; Angulo, Cecilio; Gonzàlez, Jordi

    2014-01-01

    Human Pose Recovery has been studied in the field of Computer Vision for the last 40 years. Several approaches have been reported, and significant improvements have been obtained in both data representation and model design. However, the problem of Human Pose Recovery in uncontrolled environments is far from being solved. In this paper, we define a general taxonomy to group model based approaches for Human Pose Recovery, which is composed of five main modules: appearance, viewpoint, spatial relations, temporal consistency, and behavior. Subsequently, a methodological comparison is performed following the proposed taxonomy, evaluating current state-of-the-art (SoA) approaches in the aforementioned five group categories. As a result of this comparison, we discuss the main advantages and drawbacks of the reviewed literature. PMID:24594613

  18. Face Pose Recognition Based on Monocular Digital Imagery and Stereo-Based Estimation of its Precision

    NASA Astrophysics Data System (ADS)

    Gorbatsevich, V.; Vizilter, Yu.; Knyaz, V.; Zheltov, S.

    2014-06-01

    A technique for automated face detection and pose estimation using a single image is developed. The algorithm includes: face detection, facial feature localization, face/background segmentation, face pose estimation, and image transformation to a frontal view. Automatic face/background segmentation is performed by an original graph-cut technique based on detected feature points. The precision of face orientation estimation based on monocular digital imagery is addressed. The approach for precision estimation is based on the comparison of synthesized 2D facial images with a scanned 3D face model. Software for modelling and measurement is developed, and a special system for non-contact measurements is created. The required set of real 3D face models and colour facial textures is obtained using this system. The precision estimation results demonstrate that the precision of face pose estimation is sufficient for further successful face recognition.

  19. Efficient human pose estimation from single depth images.

    PubMed

    Shotton, Jamie; Girshick, Ross; Fitzgibbon, Andrew; Sharp, Toby; Cook, Mat; Finocchio, Mark; Moore, Richard; Kohli, Pushmeet; Criminisi, Antonio; Kipman, Alex; Blake, Andrew

    2013-12-01

    We describe two new approaches to human pose estimation. Both can quickly and accurately predict the 3D positions of body joints from a single depth image without using any temporal information. The key to both approaches is the use of a large, realistic, and highly varied synthetic set of training images. This allows us to learn models that are largely invariant to factors such as pose, body shape, field-of-view cropping, and clothing. Our first approach employs an intermediate body parts representation, designed so that an accurate per-pixel classification of the parts will localize the joints of the body. The second approach instead directly regresses the positions of body joints. By using simple depth pixel comparison features and parallelizable decision forests, both approaches can run super-real time on consumer hardware. Our evaluation investigates many aspects of our methods, and compares the approaches to each other and to the state of the art. Results on silhouettes suggest broader applicability to other imaging modalities. PMID:24136424
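
    The depth comparison features mentioned above are simple offset depth differences, with the offsets normalized by the depth at the probe pixel so the feature is depth-invariant. A hedged sketch (array layout, background handling and the offsets used are placeholder choices):

        import numpy as np

        BACKGROUND = 1e6   # large constant depth used for pixels off the body / out of bounds

        def depth_at(depth, px):
            """Depth lookup with out-of-bounds pixels mapped to a large background value."""
            x, y = int(round(px[0])), int(round(px[1]))
            h, w = depth.shape
            if 0 <= x < w and 0 <= y < h:
                return depth[y, x]
            return BACKGROUND

        def depth_feature(depth, pixel, u, v):
            """f(I, x) = d(x + u / d(x)) - d(x + v / d(x)): the offsets u, v are scaled by
            1 / d(x) so the feature responds to the same body-relative offset regardless
            of how far the person stands from the camera."""
            d0 = depth_at(depth, pixel)
            p_u = (pixel[0] + u[0] / d0, pixel[1] + u[1] / d0)
            p_v = (pixel[0] + v[0] / d0, pixel[1] + v[1] / d0)
            return depth_at(depth, p_u) - depth_at(depth, p_v)

        # Toy usage on a synthetic depth map; in training, many random (u, v) pairs would be
        # evaluated at many pixels and the responses fed to decision forests.
        depth = np.full((480, 640), 2.0)          # person 2 m away (placeholder)
        depth[:, 320:] = BACKGROUND               # right half is background
        f = depth_feature(depth, (310.0, 240.0), u=(60.0, 0.0), v=(-60.0, 0.0))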

  20. Efficient Human Pose Estimation from Single Depth Images.

    PubMed

    Shotton, Jamie; Girshick, Ross; Fitzgibbon, Andrew; Sharp, Toby; Cook, Mat; Finocchio, Mark; Moore, Richard; Kohli, Pushmeet; Criminisi, Antonio; Kipman, Alex; Blake, Andrew

    2012-10-26

    We describe two new approaches to human pose estimation. Both can quickly and accurately predict the 3D positions of body joints from a single depth image, without using any temporal information. The key to both approaches is the use of a large, realistic, and highly varied synthetic set of training images. This allows us to learn models that are largely invariant to factors such as pose, body shape, field-of-view cropping, and clothing. Our first approach employs an intermediate body parts representation, designed so that an accurate per-pixel classification of the parts will localize the joints of the body. The second approach instead directly regresses the positions of body joints. By using simple depth pixel comparison features, and parallelizable decision forests, both approaches can run super-realtime on consumer hardware. Our evaluation investigates many aspects of our methods, and compares the approaches to each other and to the state of the art. Results on silhouettes suggest broader applicability to other imaging modalities. PMID:23109523

  1. A model-based 3D template matching technique for pose acquisition of an uncooperative space object.

    PubMed

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2015-01-01

    This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is presented. PMID:25785309
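
    A sketch of the core scoring step in such template matching (generic; the on-line template generation and LIDAR model of the paper are not reproduced, and the clouds and candidate poses below are placeholders): each candidate pose is scored by how closely the acquired point cloud lies to the template cloud posed accordingly, and the best-scoring pose initializes tracking.

        import numpy as np
        from scipy.spatial import cKDTree

        def rot_z(theta):
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

        def match_score(acquired, template_pts, R, t):
            """Mean distance from acquired LIDAR points to the posed template cloud (lower is better)."""
            posed = template_pts @ R.T + t
            dist, _ = cKDTree(posed).query(acquired)
            return dist.mean()

        def acquire_pose(acquired, template_pts, candidate_poses):
            """Pick the candidate (R, t) whose posed template best explains the acquired cloud."""
            scores = [match_score(acquired, template_pts, R, t) for R, t in candidate_poses]
            return candidate_poses[int(np.argmin(scores))]

        # Toy usage: target model and an acquired cloud rotated by 40 degrees (placeholders;
        # the translation is assumed known here for brevity)
        template_pts = np.random.rand(500, 3)
        true_R, true_t = rot_z(np.radians(40.0)), np.array([0.2, -0.1, 0.0])
        acquired = template_pts @ true_R.T + true_t
        candidates = [(rot_z(np.radians(a)), true_t) for a in range(0, 360, 10)]
        best_R, best_t = acquire_pose(acquired, template_pts, candidates)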

  2. A Model-Based 3D Template Matching Technique for Pose Acquisition of an Uncooperative Space Object

    PubMed Central

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2015-01-01

    This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is presented. PMID:25785309

  3. Human Pose Estimation from Video and IMUs.

    PubMed

    Marcard, Timo von; Pons-Moll, Gerard; Rosenhahn, Bodo

    2016-08-01

    In this work, we present an approach to fuse video with sparse orientation data obtained from inertial sensors to improve and stabilize full-body human motion capture. Even though video data is a strong cue for motion analysis, tracking artifacts occur frequently due to ambiguities in the images, rapid motions, occlusions or noise. As a complementary data source, inertial sensors allow for accurate estimation of limb orientations even under fast motions. However, accurate position information cannot be obtained in continuous operation. Therefore, we propose a hybrid tracker that combines video with a small number of inertial units to compensate for the drawbacks of each sensor type: on the one hand, we obtain drift-free and accurate position information from video data and, on the other hand, we obtain accurate limb orientations and good performance under fast motions from inertial sensors. In several experiments we demonstrate the increased performance and stability of our human motion tracker. PMID:26829774

  4. Pose estimation quality assessment for intra-operative image guidance systems

    NASA Astrophysics Data System (ADS)

    Egli, Adrian; Kleinszig, Gerhard; John, Adrian; Fernandez, Alberto; Cardelino, Juan

    2013-03-01

    In trauma and orthopedic surgery, screw assessment and trajectory prediction from two-dimensional X-ray images is very difficult because the 3D information is lost in projection. Screw assessment can, however, be performed with multiple X-ray images. If an X-ray image contains the projected implant geometry, the implant can be used as a global coordinate reference, so that multiple independent X-ray images can be synchronized by estimating the implant pose in each single image. Consequently, high-accuracy pose estimation is fundamental. To measure the outcome quality, an evaluation process has been designed. In its first step, the evaluation process examines several clinical intra-operative anterior-posterior (AP) and medio-lateral (ML) X-ray images that have been analyzed with a manual pose estimation method. The manual method estimates the six 3D parameters of the implant pose; these parameters also define the camera pose relative to the implant. Based on the pose parameters of all clinical cases, the capturing range for typical AP and ML images is defined statistically. The implant was attached to a phantom with 16 steel balls, which allows the ground-truth pose to be calculated. Several X-ray images of the phantom are then taken within the statistically defined capturing range. With the known ground truth, different pose estimation methods can be compared and the estimation quality of each method can be calculated. In addition, this error calculation can be used to adjust the initial, manually determined capturing range. This paper explains the error evaluation process and describes how to validate pose estimation methods for clinical applications.
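
    As a minimal illustration of the kind of error calculation involved in comparing an estimated pose against a known ground-truth pose (not the paper's exact metric), one can report a translation error and a rotation angle error as follows:

        import numpy as np

        def pose_error(R_est, t_est, R_gt, t_gt):
            """Translation error (same units as t) and rotation error in degrees
            between an estimated pose and a ground-truth pose."""
            t_err = np.linalg.norm(t_est - t_gt)
            R_rel = R_est @ R_gt.T
            cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
            return t_err, np.degrees(np.arccos(cos_angle))

        # toy check: 2 degree rotation about z, 1 mm translation offset
        theta = np.radians(2.0)
        R_gt = np.eye(3)
        R_est = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                          [np.sin(theta),  np.cos(theta), 0.0],
                          [0.0, 0.0, 1.0]])
        print(pose_error(R_est, np.array([1.0, 0.0, 0.0]), R_gt, np.zeros(3)))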

  5. Uncooperative pose estimation with a LIDAR-based system

    NASA Astrophysics Data System (ADS)

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2015-05-01

    This paper aims at investigating the performance of a LIDAR-based system for pose determination of uncooperative targets. This problem is relevant to both debris removal and on-orbit servicing missions, and requires the adoption of suitable electro-optical sensors on board a chaser platform, as well as model-based techniques for target detection and pose estimation. In this paper, a three-dimensional approach is pursued in which the point cloud generated by a LIDAR is exploited for pose estimation. Specifically, the condition of close-proximity flight to a large debris object is considered, in which the relative motion determines a large variation of debris appearance and coverage in the sensor field of view, thus producing challenging conditions for pose estimation. A customized three-dimensional Template Matching approach is proposed for fast and reliable initial pose acquisition, while pose tracking is carried out with an Iterative Closest Point algorithm exploiting different measurement-model matching techniques. Specific solutions are envisaged to speed up algorithm convergence and to limit the size of the point clouds used for initial pose acquisition and tracking, to allow autonomous on-board operation. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, a numerical simulation environment is developed implementing realistic debris geometry, debris-chaser close-proximity flight, and sensor operation. Results demonstrate the algorithm's capability of operating with sparse point clouds and large pose variations, while achieving sub-degree and sub-centimeter accuracy in relative attitude and position, respectively.
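
    The tracking stage described above relies on the Iterative Closest Point algorithm; a minimal single-iteration sketch (synthetic clouds, nearest-neighbour matching via a k-d tree, closed-form rigid alignment) is shown below. It is illustrative only and omits the measurement-model matching variants used in the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(source, target_tree, target):
            """One ICP step: match each source point to its nearest target point,
            then solve the best rigid alignment in closed form (Kabsch/SVD)."""
            _, idx = target_tree.query(source)
            matched = target[idx]
            src_c, tgt_c = source.mean(0), matched.mean(0)
            H = (source - src_c).T @ (matched - tgt_c)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:      # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = tgt_c - R @ src_c
            return source @ R.T + t, R, t

        rng = np.random.default_rng(1)
        target = rng.normal(size=(300, 3))
        source = target + np.array([0.2, -0.1, 0.05])   # translated copy of the target
        aligned, R, t = icp_step(source, cKDTree(target), target)
        print(t)   # the estimated translation moves the source back towards the target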

  6. Developing rigid constraint for the estimation of pose and structure from a single image.

    PubMed

    Wei, Bao-Gang; Liu, Yong-Huai

    2004-07-01

    Pose and structure estimation from a single image is a fundamental problem in machine vision and multiple sensor fusion and integration. In this paper we propose using rigid constraints described in different coordinate frames to iteratively estimate structural and camera pose parameters. Using geometric properties of reflected correspondences we put forward a new concept, the reflected pole of a rigid transformation. The reflected pole represents a general analysis of transformations that can be applied to both 2D and 3D transformations. We demonstrate how the concept is applied to calibration by proposing an iterative method to estimate the structural parameters of objects. The method is based on a coarse-to-fine strategy in which initial estimation is obtained through a classical linear algorithm which is then refined by iteration. For a comparative study of performance, we also implemented an extended motion estimation algorithm (from 2D-2D to 3D-2D case) based on epipolar geometry. PMID:15495305

  7. Accurate pose estimation using single marker single camera calibration system

    NASA Astrophysics Data System (ADS)

    Pati, Sarthak; Erat, Okan; Wang, Lejing; Weidert, Simon; Euler, Ekkehard; Navab, Nassir; Fallavollita, Pascal

    2013-03-01

    Visual marker-based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various marker-based methods for calibrating relative marker poses have already been proposed. However, the calibration accuracy of these methods relies on the order of the image sequence and on a pre-evaluation of pose-estimation errors, making them offline methods. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose an online method, based on the Scaled Unscented Transform (SUT), to accurately model the error in the estimated pose and translation of a camera using a single marker. Thus, the pose of each marker can be estimated with highly accurate calibration results, independent of the order of the image sequence, compared to cases where this knowledge is not used. This removes the need for multiple markers and an offline estimation system to calculate the camera pose in an AR application.

  8. Vision-based pose estimation for cooperative space objects

    NASA Astrophysics Data System (ADS)

    Zhang, Haopeng; Jiang, Zhiguo; Elgammal, Ahmed

    2013-10-01

    Imaging sensors have recently become widely used in aerospace. In this paper, a vision-based approach for estimating the pose of cooperative space objects is proposed. We learn a generative model for each space object based on homeomorphic manifold analysis. A conceptual manifold is used to represent the pose variation of captured images of the object in visual space, and nonlinear functions mapping between the conceptual manifold representation and the visual inputs are learned. Given such a learned model, we estimate the pose of a new image by minimizing a reconstruction error via a traversal procedure along the conceptual manifold. Experimental results on a simulated image dataset show that our approach is effective for 1D and 2D pose estimation.

  9. Pose estimation and frontal face detection for face recognition

    NASA Astrophysics Data System (ADS)

    Lim, Eng Thiam; Wang, Jiangang; Xie, Wei; Ronda, Venkarteswarlu

    2005-05-01

    This paper proposes a pose estimation and frontal face detection algorithm for face recognition. Considering its application in a real-world environment, the algorithm has to be robust yet computationally efficient. The main contribution of this paper is efficient face localization, scale and pose estimation using color models. Simulation results show a very low computational load compared to other face detection algorithms. The second contribution is the introduction of a low-dimensional statistical face geometry model. Compared to other statistical face models, the proposed method models the face geometry efficiently. The algorithm is demonstrated on a real-time system. The simulation results indicate that the proposed algorithm is computationally efficient.

  10. A pose estimation method for unmanned ground vehicles in GPS denied environments

    NASA Astrophysics Data System (ADS)

    Tamjidi, Amirhossein; Ye, Cang

    2012-06-01

    This paper presents a pose estimation method based on the 1-Point RANSAC EKF (Extended Kalman Filter) framework. The method fuses the depth data from a LIDAR and the visual data from a monocular camera to estimate the pose of an Unmanned Ground Vehicle (UGV) in a GPS-denied environment. Its estimation framework continuously updates the vehicle's 6D pose state and temporary estimates of the extracted visual features' 3D positions. In contrast to conventional EKF-SLAM (Simultaneous Localization And Mapping) frameworks, the proposed method discards feature estimates from the extended state vector once they have not been observed for several steps. As a result, the extended state vector always maintains a reasonable size that is suitable for online calculation. The fusion of laser and visual data is performed both in the feature initialization part of the EKF-SLAM process and in the motion prediction stage. A RANSAC pose calculation procedure is devised to produce a pose estimate for the motion model. The proposed method has been successfully tested on the Ford Campus LIDAR-vision dataset. The results are compared with the ground-truth data of the dataset, and the estimation error is ~1.9% of the path length.

  11. Coverage Estimation of Geosensor in 3D Vector Environments

    NASA Astrophysics Data System (ADS)

    Afghantoloee, A.; Doodman, S.; Karimipour, F.; Mostafavi, M. A.

    2014-10-01

    Sensor deployment optimization to achieve maximum spatial coverage is one of the main issues in Wireless geoSensor Networks (WSN). The model of the environment is a key parameter that influences the accuracy of geosensor coverage estimation. In most recent studies, the environment has been modeled by a Digital Surface Model (DSM). However, advances in the technology for collecting 3D vector data at different levels, especially for urban models, can enhance the quality of geosensor deployment and yield more accurate coverage estimates. This paper proposes an approach to calculate geosensor coverage in 3D vector environments. The approach is applied to several case studies and compared with DSM-based methods.

  12. Vision based object pose estimation for mobile robots

    NASA Technical Reports Server (NTRS)

    Wu, Annie; Bidlack, Clint; Katkere, Arun; Feague, Roy; Weymouth, Terry

    1994-01-01

    Mobile robot navigation using visual sensors requires that a robot be able to detect landmarks and obtain pose information from a camera image. This paper presents a vision system for finding man-made markers of known size and calculating the pose of these markers. The algorithm detects and identifies the markers using a weighted pattern-matching template. Geometric constraints are then used to calculate the position of the markers relative to the robot. The geometric constraints are drawn from the typical pose of man-made signs, such as the sign standing vertically and having known dimensions. This system has been tested successfully on a wide range of real images. Marker detection is reliable, even in cluttered environments; under certain marker orientations, estimation of the orientation has proven accurate to within 2 degrees, and distance estimation to within 0.3 meters.
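
    The distance part of such geometric constraints can be illustrated with a simple pinhole-camera relation using the known physical size of the sign; the numbers below are hypothetical, and the sketch is not the paper's full pose computation.

        def marker_distance(focal_px, real_height_m, pixel_height):
            """Pinhole-camera range estimate for an upright marker of known size."""
            return focal_px * real_height_m / pixel_height

        # a 0.60 m tall sign imaged 80 px tall with an 800 px focal length
        print(marker_distance(800.0, 0.60, 80.0))   # -> 6.0 m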

  13. Hand surface area estimation formula using 3D anthropometry.

    PubMed

    Hsu, Yao-Wen; Yu, Chi-Yuang

    2010-11-01

    Hand surface area is an important reference in occupational hygiene and many other applications. This study derives formulas for the palm surface area (PSA) and hand surface area (HSA) based on three-dimensional (3D) scan data. Two hundred and seventy subjects, 135 males and 135 females, were recruited for this study. The hand was measured using a high-resolution 3D hand scanner whose precision and accuracy are within 0.67%. Both the PSA and HSA were computed using the triangular mesh summation method. A comparison between this study and previous textbook values (such as the U.K. teaching text and the Lund and Browder chart discussed in the article) was performed first, showing that previous textbooks overestimated the PSA by 12.0% and the HSA by 8.7% (for males, PSA 8.5% and HSA 4.7%; for females, PSA 16.2% and HSA 13.4%). Six 1D measurements were then extracted semiautomatically for use as candidate estimators in the PSA and HSA estimation formulas. Stepwise regressions on these six 1D measurements and a variable dependency test were performed. Results show that a pair of measurements (hand length and hand breadth) was able to account for 96% of the HSA variance and up to 98% of the PSA variance. A test of the gender-specific formula indicated that gender is not a significant factor in either the PSA or the HSA estimation. PMID:20865628
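
    As a toy illustration of a two-predictor estimation formula of this kind (synthetic data and hypothetical coefficients, not the study's published formula), a least-squares fit of surface area on hand length and hand breadth can be set up as follows:

        import numpy as np

        # synthetic "measurements": hand length and breadth in cm, HSA in cm^2
        rng = np.random.default_rng(2)
        length = rng.uniform(16, 21, size=100)
        breadth = rng.uniform(7, 10, size=100)
        hsa = -200.0 + 15.0 * length + 20.0 * breadth + rng.normal(scale=5.0, size=100)

        # fit HSA = b0 + b1 * length + b2 * breadth by ordinary least squares
        X = np.column_stack([np.ones_like(length), length, breadth])
        coef, *_ = np.linalg.lstsq(X, hsa, rcond=None)
        print(coef)   # intercept and weights of the two 1D estimators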

  14. Joint tracking, pose estimation, and identification using HRRR data

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald P. S.; Rago, Constantino; Zajic, Tim; Musick, Stanton; Mehra, Raman K.

    2000-08-01

    The work presented here is part of a generalization of Bayesian filtering and estimation theory to the problem of multisource, multitarget, multi-evidence unified joint detection, tracking, and target ID, developed by Lockheed Martin Tactical Defense Systems and Scientific Systems Co., Inc. Our approach to robust joint target identification and tracking was to take the StaF algorithm and integrate it with a Bayesian nonlinear filter, so that target position, velocity, pose, and type could be determined simultaneously via maximum a posteriori estimation. The basis for the integration of the tracker and classifier is 'finite-set statistics' (FISST). The theoretical development of FISST has been an ongoing Lockheed Martin project since 1994. The specific problem addressed in this paper is robust joint target identification and tracking via fusion of high range resolution radar (HRRR) signatures, drawn from the automatic radar target identification (ARTI) database, with radar track data. A major problem in HRRR ATR is the computational load created by having to match observations against target models for every feasible pose. If pose could be estimated efficiently by a filtering algorithm from track data, the ATR search space would be greatly reduced. On the other hand, HRRR ATR algorithms produce useful information about pose which could potentially aid the track-filtering process as well. We have successfully demonstrated the former concept of 'loose integration' of the tracker and classifier for three different types of targets moving on 2D tracks.

  15. An anti-disturbing real time pose estimation method and system

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Zhang, Xiao-hu

    2011-08-01

    Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object needs some known features to track. In practice, there are many algorithms that perform this task with high accuracy, but all of them suffer from feature loss. This paper investigates pose estimation when a number of the known features, or even all of them, are invisible. Firstly, known features are tracked to calculate the pose in the current and the next image. Secondly, some unknown but good features to track are automatically detected in the current and the next image. Thirdly, those unknown features which lie on the rigid object and can be matched between the two images are retained. Because of the motion characteristics of the rigid object, the 3D information of those unknown features on the rigid object can be solved from the object's pose at the two moments and their 2D information in the two images, except in only two cases: the first is that both camera and object have no relative motion and camera parameters such as the focal length and principal point do not change between the two moments; the second is that there is no shared scene or no matched feature between the two images. Finally, because those previously unknown features are now known, pose estimation can continue in the following images despite the loss of the original known features, by repeating the process mentioned above. The robustness of pose estimation with different feature detection algorithms, such as Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed in this paper. Graphics Processing Unit (GPU) parallel computing was also used to extract and match hundreds of features for real-time pose estimation, which is hard to achieve on a Central Processing Unit (CPU). Compared with other pose estimation methods, this new

  16. Articulated and Generalized Gaussian Kernel Correlation for Human Pose Estimation.

    PubMed

    Ding, Meng; Fan, Guoliang

    2016-02-01

    In this paper, we propose an articulated and generalized Gaussian kernel correlation (GKC)-based framework for human pose estimation. We first derive a unified GKC representation that generalizes the previous sum of Gaussians (SoG)-based methods for the similarity measure between a template and an observation both of which are represented by various SoG variants. Then, we develop an articulated GKC (AGKC) by integrating a kinematic skeleton in a multivariate SoG template that supports subject-specific shape modeling and articulated pose estimation for both the full body and the hands. We further propose a sequential (body/hand) pose tracking algorithm by incorporating three regularization terms in the AGKC function, including visibility, intersection penalty, and pose continuity. Our tracking algorithm is simple yet effective and computationally efficient. We evaluate our algorithm on two benchmark depth data sets. The experimental results are promising and competitive when compared with the state-of-the-art algorithms. PMID:26672042
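
    One common closed form for the kernel correlation between two sums of isotropic Gaussians integrates the product of every template/observation Gaussian pair; the sketch below uses that form with illustrative data and does not reproduce the paper's articulated or anisotropic extensions.

        import numpy as np

        def gaussian_kernel_correlation(mu_a, sig_a, mu_b, sig_b):
            """Closed-form kernel correlation between two sums of isotropic Gaussians:
            sum over all pairs of the integral of the product of two Gaussians."""
            d = mu_a.shape[1]
            total = 0.0
            for ma, sa in zip(mu_a, sig_a):
                for mb, sb in zip(mu_b, sig_b):
                    var = sa ** 2 + sb ** 2
                    dist2 = np.sum((ma - mb) ** 2)
                    total += (2 * np.pi * var) ** (-d / 2) * np.exp(-dist2 / (2 * var))
            return total

        # template vs. observation, each a small sum of Gaussians (SoG)
        template = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.5]])
        observed = np.array([[0.05, 0.0, 0.0], [0.0, 0.02, 0.52]])
        print(gaussian_kernel_correlation(template, np.full(2, 0.1),
                                          observed, np.full(2, 0.1)))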

  17. A novel regularization method for optical flow-based head pose estimation

    NASA Astrophysics Data System (ADS)

    Vater, Sebastian; Mann, Guillermo; Puente León, Fernando

    2015-05-01

    This paper presents a method for appearance-based 3D head pose tracking utilizing optical flow computation. The task is to recover the head pose parameters for extreme head pose angles based on 2D images. A novel method is presented that enables a robust recovery of the full motion by employing a motion-dependent regularization term within the optical flow algorithm. Thereby, the rigid motion parameters are coupled directly with a regularization term in the image alignment method, affecting translation and rotation independently. The ill-conditioned, nonlinear optimization problem is stabilized by the proposed regularization term, yielding suitable conditioning of the Hessian matrix. It is shown that the regularization of the motion parameters can be extended to full 3D motion consisting of six parameters. Experiments on the Boston University head pose dataset demonstrate the enhanced robustness of head pose estimation compared to conventional regularization methods. Using well-defined values for the regularization parameters, the proposed method shows a significant improvement in accuracy in head-tracking scenarios compared to existing methods.

  18. Relative pose estimation of satellites using PMD-/CCD-sensor data fusion

    NASA Astrophysics Data System (ADS)

    Tzschichholz, Tristan; Boge, Toralf; Schilling, Klaus

    2015-04-01

    Rendezvous & Docking to passive objects, as of relevance for space debris removal, raises new challenges with respect to relative navigation. Whenever the position and orientation (pose) of an object is required in terrestrial and in space applications, sensor systems such as laser scanners and stereo vision systems are often employed. This paper presents an approach to pose estimation using a 3D time-of-flight camera for ranging information in combination with a high resolution grayscale camera. We have designed a pose estimation method that fuses the data streams of the two sensors in order to benefit from each sensors' advantages. A rigorous test campaign on a Real-Time Hardware-In-The-Loop Rendezvous and Docking Simulator - the European Proximity Operations Simulator (EPOS) - was performed in order to evaluate the performance of the resulting algorithm. The proposed pose estimation method does not exceed an average distance error of 3 cm while being capable of providing pose estimates at up to 60 FPS on recent hardware. Thus, when regarding proximity operations, an attractive sensor system is used to characterize the dynamics of the target object for safe approach results.

  19. Marker detection evaluation by phantom and cadaver experiments for C-arm pose estimation pattern

    NASA Astrophysics Data System (ADS)

    Steger, Teena; Hoßbach, Martin; Wesarg, Stefan

    2013-03-01

    C-arm fluoroscopy is used for guidance during several clinical exams, e.g. in bronchoscopy to locate the bronchoscope inside the airways. Unfortunately, these images provide only 2D information. However, if the C-arm pose is known, it can be used to overlay the intra-interventional fluoroscopy images with 3D visualizations of the airways acquired from pre-interventional CT images. Thus, the physician's view is enhanced and localization of the instrument at the correct position inside the bronchial tree is facilitated. We present a novel method for C-arm pose estimation that introduces a marker-based pattern placed on the patient table. The steel markers form a pattern that allows the C-arm pose to be deduced using the projective-invariant cross-ratio. Simulations show that the C-arm pose estimation is reliable and accurate for translations inside an imaging area of 30 cm x 50 cm and rotations up to 30°. Mean error values are 0.33 mm in 3D space and 0.48 px in the 2D imaging plane. First tests on C-arm images resulted in similarly compelling accuracy values and high reliability in an imaging area of 30 cm x 42.5 cm. Even in the presence of interfering structures, tested with both anatomy phantoms and a turkey cadaver, success rates above 90% and execution times below 4 s for 1024 px × 1024 px images were achieved.
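
    The projective invariance the marker pattern exploits can be illustrated with the cross-ratio of four collinear points, which is unchanged by a perspective mapping; the points and the mapping below are illustrative, not the clinical pattern.

        import numpy as np

        def cross_ratio(a, b, c, d):
            """Cross-ratio of four collinear points, invariant under projection."""
            return (np.linalg.norm(c - a) * np.linalg.norm(d - b)) / \
                   (np.linalg.norm(c - b) * np.linalg.norm(d - a))

        # four collinear marker positions (in mm) ...
        p = [np.array([x, 0.0]) for x in (0.0, 20.0, 50.0, 90.0)]
        # ... and their images under an illustrative 1D perspective map
        q = [np.array([(2.0 * x + 5.0) / (0.01 * x + 1.0), 0.0]) for x in (0.0, 20.0, 50.0, 90.0)]
        print(cross_ratio(*p), cross_ratio(*q))   # the two values agree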

  20. Pose estimation for one-dimensional object with general motion

    NASA Astrophysics Data System (ADS)

    Liu, Jinbo; Song, Ge; Zhang, Xiaohu

    2014-11-01

    Our primary interest is real-time pose estimation for one-dimensional objects. In this paper, a method is proposed for estimating the pose, that is, the position and attitude parameters, of a one-dimensional object undergoing general motion, using a single camera. Centroid movement is necessarily continuous and orderly in time, which means it follows, at least approximately, a certain motion law over a short period of time. Therefore, the centroid trajectory in the camera frame can be described as a combination of temporal polynomials. Two endpoints on the one-dimensional object, A and B, are projected onto the corresponding image plane at each time instant. From the relationship between A, B and the centroid C, we obtain a linear equation system in the temporal polynomials' coefficients, in which the camera has been calibrated and the image coordinates of A and B are known. Then, when the object moves continuously within the view of a stationary camera, the positions of the endpoints of the one-dimensional object can be located and the attitude can be estimated from the two endpoints. Moreover, the position of any other point aligned on the one-dimensional object can also be solved. Scene information is not needed in the proposed method. If the distance between the endpoints is not known, a scale factor between the object's real positions and the estimated results remains. In order to improve the algorithm's accuracy and robustness, we derive a pair of linear and optimal algorithms. Simulation and experimental results show that the method is valid and robust with respect to various Gaussian noise levels. This work contributes to making self-calibration algorithms using one-dimensional objects applicable in practice. Furthermore, the method can also be used to estimate the pose and shape parameters of parallelogram, prism or cylinder objects.
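
    The temporal-polynomial trajectory model can be illustrated with a simple per-axis least-squares fit (synthetic trajectory and noise; the paper solves for the coefficients jointly with the projection constraints rather than from known 3D positions):

        import numpy as np

        # a centroid trajectory C(t) modelled per axis as a low-order temporal polynomial
        t = np.linspace(0.0, 1.0, 30)
        true_traj = np.stack([1.0 + 2.0 * t, 0.5 * t ** 2, 3.0 - t], axis=1)
        noisy = true_traj + np.random.default_rng(3).normal(scale=0.01, size=true_traj.shape)

        coeffs = [np.polyfit(t, noisy[:, k], deg=2) for k in range(3)]   # one polynomial per axis
        recovered = np.stack([np.polyval(c, t) for c in coeffs], axis=1)
        print(np.abs(recovered - true_traj).max())   # small residual error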

  1. Shape recognition and pose estimation for mobile Augmented Reality.

    PubMed

    Hagbi, Nate; Bergig, Oriel; El-Sana, Jihad; Billinghurst, Mark

    2011-10-01

    Nestor is a real-time recognition and camera pose estimation system for planar shapes. The system allows shapes that carry contextual meanings for humans to be used as Augmented Reality (AR) tracking targets. The user can teach the system new shapes in real time. New shapes can be shown to the system frontally, or they can be automatically rectified according to previously learned shapes. Shapes can be automatically assigned virtual content by classification according to a shape class library. Nestor performs shape recognition by analyzing contour structures and generating projective-invariant signatures from their concavities. The concavities are further used to extract features for pose estimation and tracking. Pose refinement is carried out by minimizing the reprojection error between sample points on each image contour and its library counterpart. Sample points are matched by evolving an active contour in real time. Our experiments show that the system provides stable and accurate registration, and runs at interactive frame rates on a Nokia N95 mobile phone. PMID:21041876

  2. Rate-constrained 3D surface estimation from noise-corrupted multiview depth videos.

    PubMed

    Sun, Wenxiu; Cheung, Gene; Chou, Philip A; Florencio, Dinei; Zhang, Cha; Au, Oscar C

    2014-07-01

    Transmitting compactly represented geometry of a dynamic 3D scene from a sender can enable a multitude of imaging functionalities at a receiver, such as synthesis of virtual images at freely chosen viewpoints via depth-image-based rendering. While depth maps (projections of 3D geometry onto 2D image planes at chosen camera viewpoints) can nowadays be readily captured by inexpensive depth sensors, they are often corrupted by non-negligible acquisition noise. Given that depth maps need to be denoised and compressed at the encoder for efficient network transmission to the decoder, in this paper we consider the denoising and compression problems jointly, arguing that doing so yields better overall performance than solving the two problems separately in two stages. Specifically, we formulate a rate-constrained estimation problem where, given a set of observed noise-corrupted depth maps, the most probable (maximum a posteriori, MAP) 3D surface is sought within a search space of surfaces with representation size no larger than a prespecified rate constraint. Our rate-constrained MAP solution reduces to the conventional unconstrained MAP 3D surface reconstruction solution if the rate constraint is loose. To solve the posed rate-constrained estimation problem, we propose an iterative algorithm in which the structure (object boundaries) and the texture (surfaces within the object boundaries) of the depth maps are optimized alternately in each iteration. Using the MVC codec for compression of multiview depth video and MPEG free-viewpoint video sequences as input, experimental results show that the rate-constrained 3D surfaces estimated by our algorithm can reduce the coding rate of depth maps by up to 32% compared with unconstrained estimated surfaces for the same quality of synthesized virtual views at the decoder. PMID:24876124

  3. Efficient intensity-based camera pose estimation in presence of depth

    NASA Astrophysics Data System (ADS)

    El Choubassi, Maha; Nestares, Oscar; Wu, Yi; Kozintsev, Igor; Haussecker, Horst

    2013-03-01

    The widespread success of Kinect enables users to acquire both image and depth information with satisfying accuracy at relatively low cost. We leverage the Kinect output to efficiently and accurately estimate the camera pose in the presence of rotation, translation, or both. The applications of our algorithm are vast, ranging from camera tracking to 3D point cloud registration and video stabilization. The state-of-the-art approach uses point correspondences for estimating the pose. More explicitly, it extracts point features from images, e.g., SURF or SIFT, builds their descriptors, and matches features from different images to obtain point correspondences. However, while feature-based approaches are widely used, they perform poorly in scenes lacking texture, due to the scarcity of features, or in scenes with repetitive structure, due to false correspondences. Our algorithm is intensity-based and requires neither point feature extraction nor descriptor generation and matching. In the absence of depth, an intensity-based approach alone cannot handle camera translation. With Kinect capturing both image and depth frames, we extend the intensity-based algorithm to estimate the camera pose in the case of both 3D rotation and translation. The results are quite promising.

  4. Robust feature tracking for endoscopic pose estimation and structure recovery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Krappe, S.; Röhl, S.; Bodenstedt, S.; Müller-Stich, B.; Dillmann, R.

    2013-03-01

    Minimally invasive surgery is a highly complex medical discipline with several difficulties for the surgeon. To alleviate these difficulties, augmented reality can be used for intraoperative assistance. For visualization, the endoscope pose must be known which can be acquired with a SLAM (Simultaneous Localization and Mapping) approach using the endoscopic images. In this paper we focus on feature tracking for SLAM in minimally invasive surgery. Robust feature tracking and minimization of false correspondences is crucial for localizing the endoscope. As sensory input we use a stereo endoscope and evaluate different feature types in a developed SLAM framework. The accuracy of the endoscope pose estimation is validated with synthetic and ex vivo data. Furthermore we test the approach with in vivo image sequences from da Vinci interventions.

  5. Using glint to perform geometric signature prediction and pose estimation

    NASA Astrophysics Data System (ADS)

    Paulson, Christopher; Zelnio, Edmund; Gorham, LeRoy; Wu, Dapeng

    2012-05-01

    We consider two problems in this paper. The first problem is to construct a dictionary of elements without using synthetic data or a subset of the data collection; the second problem is to estimate the orientation of the vehicle, independent of the elevation angle. These problems are important to the SAR community because solving them will alleviate the cost of creating the dictionary and reduce the number of elements in the dictionary needed for classification. In order to accomplish these tasks, we utilize the glint phenomenology, which is usually viewed as a hindrance in most algorithms but is valuable information in our research. One way to capitalize on the glint information is to predict the location of the glint by using the geometry of the single- and double-bounce phenomenology. After qualitative examination of the results, we were able to deduce that the geometry information was sufficient for accurately predicting the location of the glint. Another way that we exploited the glint characteristics was by using them to extract the angle feature which we use for pose estimation. Using this technique we were able to predict the cardinal heading of the vehicle within +/-2°, with 96.6% of cases having 0° error. This research will have an impact on the classification of SAR images, because the geometric prediction will reduce the cost and time to develop and maintain the database for SAR ATR systems, and the pose estimation will reduce the computational time and improve the accuracy of vehicle classification.

  6. Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

    2009-12-01

    The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007

  7. Robust endoscopic pose estimation for intraoperative organ-mosaicking

    NASA Astrophysics Data System (ADS)

    Reichard, Daniel; Bodenstedt, Sebastian; Suwelack, Stefan; Wagner, Martin; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie

    2016-03-01

    The number of minimally invasive procedures is growing every year. These procedures are highly complex and very demanding for the surgeons. It is therefore important to provide intraoperative assistance to alleviate these difficulties. For most computer-assistance systems, like visualizing target structures with augmented reality, a registration step is required to map preoperative data (e.g. CT images) to the ongoing intraoperative scene. Without additional hardware, the (stereo-) endoscope is the prime intraoperative data source and with it, stereo reconstruction methods can be used to obtain 3D models from target structures. To link reconstructed parts from different frames (mosaicking), the endoscope movement has to be known. In this paper, we present a camera tracking method that uses dense depth and feature registration which are combined with a Kalman Filter scheme. It provides a robust position estimation that shows promising results in ex vivo and in silico experiments.

  8. A combined vision-inertial fusion approach for 6-DoF object pose estimation

    NASA Astrophysics Data System (ADS)

    Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.

    2015-02-01

    The estimation of the 3D position and orientation of moving objects ('pose' estimation) is a critical process for many applications in robotics, computer vision or mobile services. Although major research efforts have been carried out to design accurate, fast and robust indoor pose estimation systems, it remains an open challenge to provide a low-cost, easy-to-deploy and reliable solution. Addressing this issue, this paper describes a hybrid approach for 6-degrees-of-freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single-technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located so that the object to be tracked is visible most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation, and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy while satisfactorily dealing with the real-time constraints.

  9. Hand Pose Estimation by Fusion of Inertial and Magnetic Sensing Aided by a Permanent Magnet.

    PubMed

    Kortier, Henk G; Antonsson, Jacob; Schepers, H Martin; Gustafsson, Fredrik; Veltink, Peter H

    2015-09-01

    Tracking human body motions using inertial sensors has become a well-accepted method in ambulatory applications, since the subject is not confined to a lab-bounded volume. However, a major drawback is the inability to estimate relative body positions over time, because inertial sensor information only allows position tracking through strapdown integration and does not provide any information about relative positions. In addition, strapdown integration inherently results in drift of the estimated position over time. We propose a novel method in which a permanent magnet combined with 3-D magnetometers and 3-D inertial sensors is used to estimate the global trunk orientation and the relative pose of the hand with respect to the trunk. An Extended Kalman Filter is presented to fuse estimates obtained from inertial sensors with magnetic updates, such that the position and orientation between the human hand and trunk as well as the global trunk orientation can be estimated robustly. This has been demonstrated in multiple experiments in which various hand tasks were performed. The most complex task, involving simultaneous movements of both trunk and hand, resulted in an average rms position difference with an optical reference system of 19.7±2.2 mm, whereas the relative trunk-hand and global trunk orientation errors were 2.3±0.9 and 8.6±8.7 deg, respectively. PMID:25222952

  10. Soft tissue navigation for laparoscopic prostatectomy: evaluation of camera pose estimation for enhanced visualization

    NASA Astrophysics Data System (ADS)

    Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.

    2007-03-01

    We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.

  11. An asynchronous modulation/demodulation technique for robust identification of a target for 3-D pose determination

    SciTech Connect

    Ferrell, R.K.; Jatko, W.B.; Sitter, D.N. Jr.

    1996-03-01

    Engineers at Oak Ridge National Laboratory have been investigating the feasibility of computer-controlled docking in resupply missions, sponsored by the US Army. The goal of this program is to autonomously dock an articulating robotic boom with a special receiving port. A video camera mounted on the boom provides video images of the docking port to an image processing computer that calculates the position and orientation (pose) of the target relative to the camera. The control system can then move the boom into docking position. This paper describes a method of uniquely identifying and segmenting the receiving port from its background in a sequence of video images. An array of light-emitting diodes was installed to mark the vertices of the port. The markers have a fixed geometric pattern and are modulated at a fixed frequency. An asynchronous demodulation technique to segment flashing markers from an image of the port was developed and tested under laboratory conditions. The technique acquires a sequence of images and digitally processes them in the time domain to suppress all image features except the flashing markers. Pixels that vary at frequencies within the filter bandwidth are passed unattenuated, while variations outside the passband are suppressed. The image coordinates of the segmented markers are computed and then used to calculate the pose of the receiving port. The technique has been robust and reliable in a laboratory demonstration of autodocking.
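
    As an illustration of the idea of passing only pixels that vary at the marker flash frequency, the sketch below uses a simple FFT-based band-energy test on a synthetic image stack; the published technique is an asynchronous time-domain demodulation, so this is an analogy rather than the reported algorithm.

        import numpy as np

        def flashing_pixel_mask(frames, fps, target_hz, bandwidth_hz=0.5):
            """Keep only pixels whose intensity varies near the marker flash frequency.
            `frames` is a (T, H, W) stack of grayscale images."""
            spectrum = np.abs(np.fft.rfft(frames, axis=0))
            freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
            band = np.abs(freqs - target_hz) <= bandwidth_hz
            energy_in_band = spectrum[band].sum(axis=0)
            energy_total = spectrum[1:].sum(axis=0) + 1e-9   # skip the DC bin
            return (energy_in_band / energy_total) > 0.5

        # toy sequence: one pixel blinks at 5 Hz, the rest are static
        T, fps = 64, 30.0
        frames = np.zeros((T, 8, 8))
        frames[:, 3, 4] = 0.5 + 0.5 * np.sin(2 * np.pi * 5.0 * np.arange(T) / fps)
        print(np.argwhere(flashing_pixel_mask(frames, fps, target_hz=5.0)))   # -> [[3 4]]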

  12. Eigenvalue Contributon Estimator for Sensitivity Calculations with TSUNAMI-3D

    SciTech Connect

    Rearden, Bradley T; Williams, Mark L

    2007-01-01

    Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.

  13. Multiple receptor conformation docking, dock pose clustering and 3D QSAR studies on human poly(ADP-ribose) polymerase-1 (PARP-1) inhibitors.

    PubMed

    Fatima, Sabiha; Jatavath, Mohan Babu; Bathini, Raju; Sivan, Sree Kanth; Manga, Vijjulatha

    2014-10-01

    Poly(ADP-ribose) polymerase-1 (PARP-1) functions as a DNA damage sensor and signaling molecule. It plays a vital role in the repair of DNA strand breaks induced by radiation and chemotherapeutic drugs; inhibitors of this enzyme have the potential to improve cancer chemotherapy or radiotherapy. Three-dimensional quantitative structure activity relationship (3D QSAR) models were developed using comparative molecular field analysis, comparative molecular similarity indices analysis and docking studies. A set of 88 molecules were docked into the active site of six X-ray crystal structures of poly(ADP-ribose)polymerase-1 (PARP-1), by a procedure called multiple receptor conformation docking (MRCD), in order to improve the 3D QSAR models through the analysis of binding conformations. The docked poses were clustered to obtain the best receptor binding conformation. These dock poses from clustering were used for 3D QSAR analysis. Based on MRCD and QSAR information, some key features have been identified that explain the observed variance in the activity. Two receptor-based QSAR models were generated; these models showed good internal and external statistical reliability that is evident from the [Formula: see text], [Formula: see text] and [Formula: see text]. The identified key features enabled us to design new PARP-1 inhibitors. PMID:25046176

  14. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  15. Integration of a Generalised Building Model Into the Pose Estimation of Uas Images

    NASA Astrophysics Data System (ADS)

    Unger, J.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    A hybrid bundle adjustment is presented that allows for the integration of a generalised building model into the pose estimation of image sequences. These images are captured by an Unmanned Aerial System (UAS) equipped with a camera flying in between the buildings. The relation between the building model and the images is described by distances between the object coordinates of the tie points and the building model planes. Relations are found by a simple 3D distance criterion and are modelled as fictitious observations in a Gauss-Markov adjustment. The coordinates of the model vertices are part of the adjustment as directly observed unknowns, which allows for changes in the model. Results of first experiments using a synthetic and a real image sequence demonstrate improvements in image orientation compared to an adjustment without the building model, but also reveal limitations of the current state of the method.

  16. Distributed observers for pose estimation in the presence of inertial sensory soft faults.

    PubMed

    Sadeghzadeh-Nokhodberiz, Nargess; Poshtan, Javad; Wagner, Achim; Nordheimer, Eugen; Badreddin, Essameddin

    2014-07-01

    Distributed Particle-Kalman-Filter-based observers are designed in this paper for inertial sensor (gyroscope and accelerometer) soft faults (biases and drifts) and rigid-body pose estimation. The observers fuse inertial sensors with a photogrammetric camera. Linear and angular accelerations, as unknown inputs of the velocity and attitude rate dynamics, respectively, along with sensory biases and drifts, are modeled and augmented to the moving body's state parameters. To reduce the complexity of the high-dimensional and nonlinear model, the graph-theoretic tearing technique (structural decomposition) is employed to decompose the system into smaller observable subsystems. Separate interacting observers are designed for the subsystems, which interact through well-defined interfaces. Kalman Filters are employed for the linear subsystems, and a Modified Particle Filter is proposed for a nonlinear, non-Gaussian subsystem that includes imperfect attitude rate dynamics. The main idea behind the proposed Modified Particle Filtering approach is to engage both the system and measurement models in the particle generation process. Experimental results based on data from a 3D MEMS IMU and a 3D camera system are used to demonstrate the efficiency of the method. PMID:24852356

  17. Impact of Building Heights on 3d Urban Density Estimation from Spaceborne Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, the estimated building heights may contain errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must therefore be determined both across and within a study area. In our research, accurate planar information from city authorities is used as reference data during 3D density estimation, to avoid the errors inherent in planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). At the local level, land-use blocks with low FAR values often have small errors, because the height errors for the low buildings in those blocks are small, while blocks with high FAR values often have large errors, because the height errors for the tall buildings in those blocks are large. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of the buildings in a scene; building heights must therefore be considered when spaceborne stereo imagery is used to estimate 3D density, in order to improve precision.
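
    The Floor Area Ratio itself is simple arithmetic, which makes the error propagation easy to see: storey counts are derived from estimated heights, so height errors feed directly into the FAR. The numbers below are hypothetical.

        # Floor Area Ratio (FAR) for one land-use block: total floor area / block area.
        floor_height = 3.0                      # metres per storey
        buildings = [                           # (footprint m^2, estimated height m)
            (400.0, 30.0),
            (250.0, 12.0),
            (600.0, 45.0),
        ]
        block_area = 10_000.0
        far = sum(fp * round(h / floor_height) for fp, h in buildings) / block_area
        print(far)   # underestimated heights -> fewer storeys -> underestimated FAR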

  18. Real-time Human Pose and Shape Estimation for Virtual Try-On Using a Single Commodity Depth Camera.

    PubMed

    Ye, Mao; Wang, Huamin; Deng, Nianchen; Yang, Xubo; Yang, Ruigang

    2014-04-01

    We present a system that allows the user to virtually try on new clothes. It uses a single commodity depth camera to capture the user in 3D. Both the pose and the shape of the user are estimated with a novel real-time template-based approach that performs tracking and shape adaptation jointly. The result is then used to drive realistic cloth simulation, in which the synthesized clothes are overlaid on the input image. The main challenge is to handle missing data and pose ambiguities due to the monocular setup, which captures less than 50 percent of the full body. Our solution is to incorporate automatic shape adaptation and novel constraints in pose tracking. The effectiveness of our system is demonstrated with a number of examples. PMID:24650982

  19. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    PubMed Central

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-01-01

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of suitable sensor on board of a chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model by using point cloud data generated by a flash LIDAR sensor. A novel model based pose estimation method is proposed; it includes a fast and reliable pose initial acquisition method based on global optimal searching by processing the dense point cloud data directly, and a pose tracking method based on Iterative Closest Point algorithm. Also, a simulation system is presented in this paper in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and achievable pose accuracy, numerical simulation experiments are performed; results demonstrate algorithm capability of operating with point cloud directly and large pose variations. Also, a field testing experiment is conducted and results show that the proposed method is effective. PMID:27271633

  20. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range.

    PubMed

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-01-01

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of suitable sensor on board of a chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model by using point cloud data generated by a flash LIDAR sensor. A novel model based pose estimation method is proposed; it includes a fast and reliable pose initial acquisition method based on global optimal searching by processing the dense point cloud data directly, and a pose tracking method based on Iterative Closest Point algorithm. Also, a simulation system is presented in this paper in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and achievable pose accuracy, numerical simulation experiments are performed; results demonstrate algorithm capability of operating with point cloud directly and large pose variations. Also, a field testing experiment is conducted and results show that the proposed method is effective. PMID:27271633

  1. Model-based 3D human shape estimation from silhouettes for virtual fitting

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Kouchi, Makiko; Mochimaru, Masaaki; Aoki, Yoshimitsu

    2014-03-01

    We propose a model-based 3D human shape reconstruction system that works from two silhouettes. Firstly, we synthesize a deformable body model from a 3D human shape database consisting of a hundred whole-body mesh models. The mesh models are homologous, so they share the same topology and the same number of vertices. We perform principal component analysis (PCA) on the database and synthesize an Active Shape Model (ASM). The ASM allows the body type of the model to be changed with a few parameters. The pose of our model can be changed by reconstructing the skeleton structure from joints implanted in the model. By applying pose changes after body-type deformation, our model can represent various body types in any pose. We apply the model to the problem of 3D human shape reconstruction from front and side silhouettes. Our approach simply compares the contours of the model's silhouettes with those of the input silhouettes; we then use only the torso contour of the model to reconstruct the whole shape. We optimize the model parameters by minimizing the difference between corresponding silhouettes using CMA-ES, a stochastic, derivative-free nonlinear optimization method.
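
    A toy sketch of this silhouette-difference fitting is shown below, with a generic derivative-free optimizer (SciPy's Nelder-Mead standing in for CMA-ES) and an ellipse in place of the deformable body model; both substitutions are assumptions for illustration only.

        import numpy as np
        from scipy.optimize import minimize

        H, W = 64, 32
        yy, xx = np.mgrid[0:H, 0:W]

        def soft_silhouette(params):
            """Toy 'body model': a smooth ellipse mask controlled by two shape parameters."""
            a, b = params
            r2 = ((xx - W / 2) / a) ** 2 + ((yy - H / 2) / b) ** 2
            return 1.0 / (1.0 + np.exp(np.clip(10.0 * (r2 - 1.0), -50.0, 50.0)))

        target = soft_silhouette([8.0, 25.0]) > 0.5     # the "input" silhouette

        def cost(params):
            # pixel-wise disagreement between the model silhouette and the input silhouette
            return np.sum((soft_silhouette(params) - target) ** 2)

        res = minimize(cost, x0=[5.0, 15.0], method="Nelder-Mead")
        print(res.x)    # recovered shape parameters, close to (8, 25)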

  2. Foot Pose Estimation Using an Inertial Sensor Unit and Two Distance Sensors

    PubMed Central

    Duong, Pham Duy; Suh, Young Soo

    2015-01-01

    There are many inertial sensor-based foot pose estimation algorithms. In this paper, we present a methodology to improve the accuracy of foot pose estimation using two low-cost distance sensors (VL6180) in addition to an inertial sensor unit. The distance sensor is a time-of-flight range finder and can measure distance up to 20 cm. A Kalman filter with 21 states is proposed to estimate both the calibration parameter (relative pose of distance sensors with respect to the inertial sensor unit) and foot pose. Once the calibration parameter is obtained, a Kalman filter with nine states can be used to estimate foot pose. Through four activities (walking, dancing step, ball kicking, jumping), it is shown that the proposed algorithm significantly improves the vertical position estimation. PMID:26151205

  3. Foot Pose Estimation Using an Inertial Sensor Unit and Two Distance Sensors.

    PubMed

    Duong, Pham Duy; Suh, Young Soo

    2015-01-01

    There are many inertial sensor-based foot pose estimation algorithms. In this paper, we present a methodology to improve the accuracy of foot pose estimation using two low-cost distance sensors (VL6180) in addition to an inertial sensor unit. The distance sensor is a time-of-flight range finder and can measure distance up to 20 cm. A Kalman filter with 21 states is proposed to estimate both the calibration parameter (relative pose of distance sensors with respect to the inertial sensor unit) and foot pose. Once the calibration parameter is obtained, a Kalman filter with nine states can be used to estimate foot pose. Through four activities (walking, dancing step, ball kicking, jumping), it is shown that the proposed algorithm significantly improves the vertical position estimation. PMID:26151205
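
    For readers unfamiliar with the filtering machinery, the sketch below shows a generic linear Kalman filter predict/update cycle of the kind used in such estimators. It is a toy two-state example (position and velocity with position-only measurements), not the paper's 21-state or nine-state filter:

        import numpy as np

        class KalmanFilter:
            """Minimal linear Kalman filter: x' = F x + w,  z = H x + v."""
            def __init__(self, F, H, Q, R, x0, P0):
                self.F, self.H, self.Q, self.R = F, H, Q, R
                self.x, self.P = x0, P0

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q

            def update(self, z):
                y = z - self.H @ self.x                      # innovation
                S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
                K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
                self.x = self.x + K @ y
                self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P

        # Toy example: position/velocity state, position-only measurements at 100 Hz.
        dt = 0.01
        kf = KalmanFilter(F=np.array([[1, dt], [0, 1]]),
                          H=np.array([[1.0, 0.0]]),
                          Q=1e-4 * np.eye(2), R=np.array([[1e-2]]),
                          x0=np.zeros(2), P0=np.eye(2))
        for z in [0.02, 0.05, 0.04]:
            kf.predict()
            kf.update(np.array([z]))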

  4. Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation

    NASA Astrophysics Data System (ADS)

    Lim, Tae W.

    2015-06-01

    A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
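
    The homogeneous-transformation bookkeeping that underlies such point cloud modeling can be written compactly. The sketch below (generic notation, not the paper's code) builds 4x4 transforms, composes an object-to-world and a world-to-sensor transform, and applies the result to a synthetic cloud:

        import numpy as np

        def homogeneous(R, t):
            """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
            T = np.eye(4)
            T[:3, :3] = R
            T[:3, 3] = t
            return T

        def transform_points(T, points):
            """Apply a homogeneous transform to an Nx3 point cloud."""
            homog = np.hstack([points, np.ones((points.shape[0], 1))])   # Nx4
            return (homog @ T.T)[:, :3]

        # Example: rotate a cloud 30 degrees about z and translate it 2 m along x,
        # then express it in a sensor frame 5 m behind the world origin.
        theta = np.deg2rad(30)
        Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
        T_obj_to_world = homogeneous(Rz, np.array([2.0, 0.0, 0.0]))
        T_world_to_sensor = homogeneous(np.eye(3), np.array([0.0, 0.0, -5.0]))
        cloud = np.random.default_rng(2).uniform(-0.5, 0.5, size=(1000, 3))
        cloud_in_sensor = transform_points(T_world_to_sensor @ T_obj_to_world, cloud)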

  5. Pose estimation with a Kinect for ergonomic studies: evaluation of the accuracy using a virtual mannequin.

    PubMed

    Plantard, Pierre; Auvinet, Edouard; Pierres, Anne-Sophie Le; Multon, Franck

    2015-01-01

    Analyzing human poses with a Kinect is a promising method to evaluate potential risks of musculoskeletal disorders at workstations. In ecological situations, complex 3D poses and constraints imposed by the environment make it difficult to obtain reliable kinematic information. Thus, being able to predict the potential accuracy of the measurement for such complex 3D poses and sensor placements is challenging in classical experimental setups. To tackle this problem, we propose a new evaluation method based on a virtual mannequin. In this study, we apply this method to the evaluation of joint positions (shoulder, elbow, and wrist), joint angles (shoulder and elbow), and the corresponding RULA (a popular ergonomics assessment grid) upper-limb score for a large set of poses and sensor placements. Thanks to this evaluation method, more than 500,000 configurations have been automatically tested, which would be almost impossible to evaluate with classical protocols. The results show that the kinematic information obtained by the Kinect software is generally accurate enough to fill in ergonomic assessment grids. However, inaccuracy increases strongly for some specific poses and sensor positions. Using this evaluation method enabled us to report configurations that could lead to these high inaccuracies. As supplementary material, we provide a software tool to help designers evaluate the expected accuracy of this sensor for a set of upper-limb configurations. Results obtained with the virtual mannequin are in accordance with those obtained from a real subject for a limited set of poses and sensor placements. PMID:25599426
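
    For context, the joint angles that feed an ergonomic grid such as RULA can be computed directly from estimated 3D joint positions. The sketch below is a hedged illustration with made-up Kinect-style coordinates, not the authors' evaluation tool:

        import numpy as np

        def elbow_flexion_deg(shoulder, elbow, wrist):
            """Angle at the elbow between the upper arm and forearm, in degrees."""
            upper = shoulder - elbow
            fore = wrist - elbow
            cosang = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

        # Example with made-up joint positions (metres, camera frame).
        shoulder = np.array([0.00, 1.40, 2.50])
        elbow    = np.array([0.05, 1.15, 2.55])
        wrist    = np.array([0.30, 1.20, 2.40])
        print(elbow_flexion_deg(shoulder, elbow, wrist))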

  6. Pose estimation using linearized rotations and quaternion algebra

    NASA Astrophysics Data System (ADS)

    Barfoot, Timothy; Forbes, James R.; Furgale, Paul T.

    2011-01-01

    In this paper we revisit the topic of how to formulate error terms for estimation problems that involve rotational state variables. We present a first-principles linearization approach that yields multiplicative error terms for unit-length quaternion representations of rotations, as well as for canonical rotation matrices. Quaternion algebra is employed throughout our derivations. We show the utility of our approach through two examples: (i) linearizing a sun sensor measurement error term, and (ii) weighted-least-squares point-cloud alignment.
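
    The multiplicative-error idea can be made concrete with a few lines of quaternion algebra. The sketch below (generic notation following the usual first-order small-angle convention, not the paper's exact derivation) perturbs a rotation estimate by an error quaternion built from a small rotation vector, instead of adding errors to the quaternion components:

        import numpy as np

        def quat_mult(q, p):
            """Hamilton product of quaternions q, p given as [w, x, y, z]."""
            w1, x1, y1, z1 = q
            w2, x2, y2, z2 = p
            return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                             w1*x2 + x1*w2 + y1*z2 - z1*y2,
                             w1*y2 - x1*z2 + y1*w2 + z1*x2,
                             w1*z2 + x1*y2 - y1*x2 + z1*w2])

        def error_quat(dtheta):
            """First-order unit quaternion for a small rotation vector dtheta (radians)."""
            return np.array([1.0, *(0.5 * np.asarray(dtheta))])

        # Multiplicative perturbation: q_true is approximately dq(dtheta) * q_estimate.
        q_est = np.array([np.cos(0.25), 0.0, 0.0, np.sin(0.25)])   # 0.5 rad about z
        dq = error_quat([0.001, -0.002, 0.0005])
        q_true = quat_mult(dq, q_est)
        q_true /= np.linalg.norm(q_true)                            # re-normalize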

  7. Swimmer detection and pose estimation for continuous stroke-rate determination

    NASA Astrophysics Data System (ADS)

    Zecha, Dan; Greif, Thomas; Lienhart, Rainer

    2012-02-01

    In this work we propose a novel approach to automatically detect a swimmer and estimate his/her pose continuously in order to derive an estimate of his/her stroke rate given that we observe the swimmer from the side. We divide a swimming cycle of each stroke into several intervals. Each interval represents a pose of the stroke. We use specifically trained object detectors to detect each pose of a stroke within a video and count the number of occurrences per time unit of the most distinctive poses (so-called key poses) of a stroke to continuously infer the stroke rate. We extensively evaluate the overall performance and the influence of the selected poses for all swimming styles on a data set consisting of a variety of swimmers.
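
    Once key poses have been detected frame by frame, the stroke rate follows from simple counting over a time window. The sketch below uses made-up detections and an assumed frame rate, not the authors' trained detectors:

        import numpy as np

        def stroke_rate(key_pose_frames, fps, window_s=30.0):
            """Strokes per minute from frame indices where a key pose was detected."""
            times = np.asarray(key_pose_frames) / fps
            recent = times[times >= times[-1] - window_s] if len(times) else times
            if len(recent) < 2:
                return 0.0
            cycles = len(recent) - 1
            return 60.0 * cycles / (recent[-1] - recent[0])

        # Example: one key-pose detection roughly every 1.5 s of 25 fps video.
        frames = [12, 50, 88, 126, 165, 203]
        print(stroke_rate(frames, fps=25))   # roughly 40 strokes per minute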

  8. Head Pose Estimation Using Multilinear Subspace Analysis for Robot Human Awareness

    NASA Technical Reports Server (NTRS)

    Ivanov, Tonislav; Matthies, Larry; Vasilescu, M. Alex O.

    2009-01-01

    Mobile robots, operating in unconstrained indoor and outdoor environments, would benefit in many ways from perception of the human awareness around them. Knowledge of people's head pose and gaze directions would enable the robot to deduce which people are aware of its presence, and to predict future motions of the people for better path planning. Making such inferences requires estimating head pose from facial images that are a combination of multiple varying factors, such as identity, appearance, head pose, and illumination. By applying multilinear algebra, the algebra of higher-order tensors, we can separate these factors and estimate head pose regardless of the subject's identity or image conditions. Furthermore, we can automatically handle uncertainty in the size of the face and its location. We demonstrate a pipeline of on-the-move detection of pedestrians with a robot stereo vision system, segmentation of the head, and head pose estimation in cluttered urban street scenes.

  9. Head Pose Estimation on Eyeglasses Using Line Detection and Classification Approach

    NASA Astrophysics Data System (ADS)

    Setthawong, Pisal; Vannija, Vajirasak

    This paper proposes a unique approach for head pose estimation of subjects with eyeglasses by using a combination of line detection and classification approaches. Head pose is considered an important non-verbal form of communication and could also be used in the area of Human-Computer Interfaces. A major improvement of the proposed approach is that it allows estimation of head poses at high yaw/pitch angles when compared with existing geometric approaches, does not require expensive data preparation and training, and is generally fast when compared with other approaches.

  10. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation.

    PubMed

    Airò Farulla, Giuseppe; Pianu, Daniele; Cempini, Marco; Cortese, Mario; Russo, Ludovico O; Indaco, Marco; Nerino, Roberto; Chimienti, Antonio; Oddo, Calogero M; Vitiello, Nicola

    2016-01-01

    Vision-based Pose Estimation (VPE) represents a non-invasive solution that allows smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master-slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While performing rehabilitative exercises, the master unit evaluates the 3D position of a human operator's hand joints in real-time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers' hand movements. PMID:26861333

  11. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

    PubMed Central

    Airò Farulla, Giuseppe; Pianu, Daniele; Cempini, Marco; Cortese, Mario; Russo, Ludovico O.; Indaco, Marco; Nerino, Roberto; Chimienti, Antonio; Oddo, Calogero M.; Vitiello, Nicola

    2016-01-01

    Vision-based Pose Estimation (VPE) represents a non-invasive solution that allows smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While performing rehabilitative exercises, the master unit evaluates the 3D position of a human operator’s hand joints in real-time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers’ hand movements. PMID:26861333

  12. Estimating satellite pose and motion parameters using a novelty filter and neural net tracker

    NASA Technical Reports Server (NTRS)

    Lee, Andrew J.; Casasent, David; Vermeulen, Pieter; Barnard, Etienne

    1989-01-01

    A system for determining the position, orientation, and motion of a satellite with respect to a robotic spacecraft using video data is advanced. This system utilizes two levels of pose and motion estimation: an initial system which provides coarse estimates of pose and motion, and a second system which uses the coarse estimates and further processing to provide finer pose and motion estimates. The present paper emphasizes the initial coarse pose and motion estimation subsystem. This subsystem utilizes novelty detection and filtering for locating novel parts and a neural net tracker to track these parts over time. Results of using this system on a sequence of images of a spin-stabilized satellite are presented.

  13. Estimation of daily dietary fluoride intake: 3-d food diary v. 2-d duplicate plate.

    PubMed

    Omid, N; Maguire, A; O'Hare, W T; Zohoori, F V

    2015-12-28

    The 3-d food diary (3-d FD) and 2-d duplicate plate (2-d DP) methods have been used to measure dietary fluoride (F) intake in many studies. This study aimed to compare daily dietary F intake (DDFI) estimated by the 3-d FD and 2-d DP methods at group and individual levels. Dietary data for sixty-one healthy children aged 4-6 years were collected using the 3-d FD and 2-d DP methods with a 1-week gap between each collection. Food diary data were analysed for F using the Weighed Intake Analysis Software Package, whereas duplicate diets were analysed by an acid diffusion method using an F ion-selective electrode. A paired t test and linear regression were used to compare dietary data at the group and individual levels, respectively. At the group level, mean DDFI was 0·025 (sd 0·016) and 0·028 (sd 0·013) mg/kg body weight (bw) per d estimated by 3-d FD and 2-d DP, respectively. No statistically significant difference (P=0·10) was observed in estimated DDFI by each method at the group level. At an individual level, the agreement in estimating F intake (mg/kg bw per d) using the 3-d FD method compared with the 2-d DP method was within ±0·011 (95 % CI 0·009, 0·013) mg/kg bw per d. At the group level, DDFI data obtained by either the 2-d DP method or the 3-d FD method can be interchanged. At an individual level, the typical error and the narrow margin between optimal and excessive F intake suggest that the DDFI data obtained by one method cannot replace the dietary data estimated from the other method. PMID:26568435

  14. Estimating aquatic hazards posed by prescription pharmaceutical residues from municipal wastewater

    EPA Science Inventory

    Risks posed by pharmaceuticals in the environment are hard to estimate due to limited monitoring capacity and difficulty interpreting monitoring results. In order to partially address these issues, we suggest a method for prioritizing pharmaceuticals for monitoring, and a framewo...

  15. Incorporating structure from motion uncertainty into image-based pose estimation

    NASA Astrophysics Data System (ADS)

    Ludington, Ben T.; Brown, Andrew P.; Sheffler, Michael J.; Taylor, Clark N.; Berardi, Stephen

    2015-05-01

    A method for generating and utilizing structure from motion (SfM) uncertainty estimates within image-based pose estimation is presented. The method is applied to a class of problems in which SfM algorithms are utilized to form a geo-registered reference model of a particular ground area using imagery gathered during flight by a small unmanned aircraft. The model is then used to form camera pose estimates in near real-time from imagery gathered later. The resulting pose estimates can be utilized by any of the other onboard systems (e.g. as a replacement for GPS data) or downstream exploitation systems, e.g., image-based object trackers. However, many of the consumers of pose estimates require an assessment of the pose accuracy. The method for generating the accuracy assessment is presented. First, the uncertainty in the reference model is estimated. Bundle Adjustment (BA) is utilized for model generation. While the high-level approach for generating a covariance matrix of the BA parameters is straightforward, typical computing hardware is not able to support the required operations due to the scale of the optimization problem within BA. Therefore, a series of sparse matrix operations is utilized to form an exact covariance matrix for only the parameters that are needed at a particular moment. Once the uncertainty in the model has been determined, it is used to augment Perspective-n-Point pose estimation algorithms to improve the pose accuracy and to estimate the resulting pose uncertainty. The implementation of the described method is presented along with results, including those gathered from flight test data.
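
    The Perspective-n-Point stage that consumes the geo-registered reference model can be illustrated with OpenCV's standard solver. The sketch below uses synthetic 2D-3D correspondences and omits the covariance-weighted augmentation the paper describes:

        import numpy as np
        import cv2

        # Synthetic 3D landmarks from a reference model (metres) and a pinhole camera.
        object_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                               [1, 1, 0], [0.5, 0.5, 0.3], [0.2, 0.8, 0.5]], dtype=np.float64)
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        rvec_true = np.array([0.1, -0.2, 0.05])
        tvec_true = np.array([0.2, -0.1, 4.0])
        image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

        # Recover the camera pose from the 2D-3D correspondences.
        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
        R, _ = cv2.Rodrigues(rvec)        # rotation matrix of the estimated pose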

  16. Human Body 3D Posture Estimation Using Significant Points and Two Cameras

    PubMed Central

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422
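
    The per-pixel linear SVM classification step can be sketched with scikit-learn. The features below are synthetic stand-ins for the normalized color differences and included angle mentioned above, not the authors' exact feature set:

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(3)
        # Per-pixel features: e.g. normalized RGB differences to the background model
        # plus the included angle between current and background color vectors.
        n = 4000
        bg_feats = np.hstack([rng.normal(0.0, 0.03, (n, 3)), rng.normal(0.0, 0.05, (n, 1))])
        fg_feats = np.hstack([rng.normal(0.3, 0.10, (n, 3)), rng.normal(0.6, 0.20, (n, 1))])
        X = np.vstack([bg_feats, fg_feats])
        y = np.hstack([np.zeros(n), np.ones(n)])      # 0 = background, 1 = human body

        clf = LinearSVC(C=1.0).fit(X, y)

        # At run time, every pixel's feature vector is classified to build the body mask.
        new_pixels = rng.normal(0.25, 0.15, (5, 4))
        mask_labels = clf.predict(new_pixels)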

  17. The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

    NASA Astrophysics Data System (ADS)

    Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard

    2010-04-01

    The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point-cloud) in GPS denied regions from a sequence of co-bore sighted visible and 3D LIDAR images. Both the visible and 3D LADAR cameras were hard mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for various spatial and geometric alignments of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point-cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range and intensity estimates of imagery collected during urban terrain mapping using a co-bore sighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point-clouds for the drive-through data will also be presented.

  18. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements and without any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.

  19. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868

  20. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868

  1. Volume estimation of tonsil phantoms using an oral camera with 3D imaging.

    PubMed

    Das, Anshuman J; Valdez, Tulio A; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C; Raskar, Ramesh

    2016-04-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance (MRI) imaging are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which provide only a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld, portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as intraoperative volume estimations. PMID:27446667

  2. Estimating the complexity of 3D structural models using machine learning methods

    NASA Astrophysics Data System (ADS)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metric for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects, calculated from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated, using machine learning algorithms, by the quantity of partial data necessary to simulate the actual 3D model at a given precision without error.

  3. Volume estimation of tonsil phantoms using an oral camera with 3D imaging

    PubMed Central

    Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh

    2016-01-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance (MRI) imaging are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which provide only a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld, portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D printed realistic models. Our results correlate well with Brodsky’s classification of tonsillar hypertrophy as well as intraoperative volume estimations. PMID:27446667

  4. Peach Bottom 2 Turbine Trip Simulation Using TRAC-BF1/COS3D, a Best-Estimate Coupled 3-D Core and Thermal-Hydraulic Code System

    SciTech Connect

    Ui, Atsushi; Miyaji, Takamasa

    2004-10-15

    The best-estimate coupled three-dimensional (3-D) core and thermal-hydraulic code system TRAC-BF1/COS3D has been developed. COS3D, based on a modified one-group neutronic model, is a 3-D core simulator used for licensing analyses and core management of commercial boiling water reactor (BWR) plants in Japan. TRAC-BF1 is a plant simulator based on a two-fluid model. TRAC-BF1/COS3D is a coupled system of both codes, which are connected using a parallel computing tool. This code system was applied to the OECD/NRC BWR Turbine Trip Benchmark. Since the two-group cross-section tables are provided by the benchmark team, COS3D was modified to apply to this specification. Three best-estimate scenarios and four hypothetical scenarios were calculated using this code system. In the best-estimate scenario, the predicted core power with TRAC-BF1/COS3D is slightly underestimated compared with the measured data. The reason seems to be a slight difference in the core boundary conditions, that is, pressure changes and the core inlet flow distribution, because the peak in this analysis is sensitive to them. However, the results of this benchmark analysis show that TRAC-BF1/COS3D gives good precision for the prediction of the actual BWR transient behavior on the whole. Furthermore, the results with the modified one-group model and the two-group model were compared to verify the application of the modified one-group model to this benchmark. This comparison shows that the results of the modified one-group model are appropriate and sufficiently precise.

  5. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  6. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.

  7. Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method

    NASA Astrophysics Data System (ADS)

    Guerrero, Thomas; Zhang, Geoffrey; Huang, Tzung-Chi; Lin, Kang-Ping

    2004-09-01

    The purpose of this work was to develop and validate an automated method for intrathoracic tumour motion estimation from breath-hold computed tomography (BH CT) imaging using the three-dimensional optical flow method (3D OFM). A modified 3D OFM algorithm provided 3D displacement vectors for each voxel, which were used to map tumour voxels on expiration BH CT onto inspiration BH CT images. A thoracic phantom and simulated expiration/inspiration BH CT pairs were used for validation. The 3D OFM was applied to the measured inspiration and expiration BH CT images from one lung cancer and one oesophageal cancer patient. The resulting displacements were plotted in histogram format and analysed to provide insight regarding the tumour motion. The phantom tumour displacement was measured as 1.20 and 2.40 cm with full-width at tenth maximum (FWTM) for the distribution of displacement estimates of 0.008 and 0.006 cm, respectively. The maximum error of any single voxel's motion estimate was 1.1 mm along the z-dimension, or approximately one-third of the z-dimension voxel size. The simulated BH CT pairs revealed an rms error of less than 0.25 mm. The displacement of the oesophageal tumours was nonuniform and up to 1.4 cm; this was a new finding. A lung tumour maximum displacement of 2.4 cm was found in the case evaluated. In conclusion, 3D OFM provided an accurate estimation of intrathoracic tumour motion, with estimated errors less than the voxel dimension in a simulated motion phantom study. Surprisingly, oesophageal tumour motion was large and nonuniform, with the greatest motion occurring at the gastro-oesophageal junction. Presented at The IASTED Second International Conference on Biomedical Engineering (BioMED 2004), Innsbruck, Austria, 16-18 February 2004.

  8. Real-Time Simultaneous Pose and Shape Estimation for Articulated Objects Using a Single Depth Camera.

    PubMed

    Ye, Mao; Shen, Yang; Du, Chao; Pan, Zhigeng; Yang, Ruigang

    2016-08-01

    In this paper we present a novel real-time algorithm for simultaneous pose and shape estimation for articulated objects, such as human beings and animals. The key of our pose estimation component is to embed the articulated deformation model with exponential-maps-based parametrization into a Gaussian Mixture Model. Benefiting from this probabilistic measurement model, our algorithm requires no explicit point correspondences, as opposed to most existing methods. Consequently, our approach is less sensitive to local minima and handles fast and complex motions well. Moreover, our novel shape adaptation algorithm based on the same probabilistic model automatically captures the shape of the subjects during the dynamic pose estimation process. The personalized shape model in turn improves the tracking accuracy. Furthermore, we propose novel approaches to use either a mesh model or a sphere-set model as the template for both pose and shape estimation under this unified framework. Extensive evaluations on publicly available data sets demonstrate that our method outperforms most state-of-the-art pose estimation algorithms by a large margin, especially in the case of challenging motions. Furthermore, our shape estimation method achieves accuracy comparable with the state of the art, yet requires neither a statistical shape model nor an extra calibration procedure. Our algorithm is not only accurate but also fast: we have implemented the entire processing pipeline on the GPU. It can achieve up to 60 frames per second on a mid-range graphics card. PMID:27116732

  9. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0 °) and side pose (34 °). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98 ° for frontal pose and 2.87 ° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
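
    To make the geometry concrete, the sketch below shows one way a horizontal gaze angle can be derived from the two eye corners, the midpupil, and an anthropometric eye radius. It is a deliberate simplification of a spherical eye model, not the paper's exact equations, and the pixel-to-millimetre scale is an assumed value:

        import numpy as np

        def horizontal_gaze_deg(corner_left, corner_right, midpupil, eye_radius_mm=12.0,
                                mm_per_pixel=0.5):
            """Approximate horizontal gaze angle from 2D eye features (pixel coordinates)."""
            eye_center = 0.5 * (np.asarray(corner_left) + np.asarray(corner_right))
            # Horizontal offset of the pupil from the eye centre, converted to millimetres.
            offset_mm = (midpupil[0] - eye_center[0]) * mm_per_pixel
            ratio = np.clip(offset_mm / eye_radius_mm, -1.0, 1.0)
            return np.degrees(np.arcsin(ratio))

        # Example with made-up pixel coordinates of the two corners and the midpupil.
        print(horizontal_gaze_deg((100, 60), (140, 62), (126, 61)))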

  10. Predicting 3D pose in partially overlapped X-ray images of knee prostheses using model-based Roentgen stereophotogrammetric analysis (RSA).

    PubMed

    Hsu, Chi-Pin; Lin, Shang-Chih; Shih, Kao-Shang; Huang, Chang-Hung; Lee, Chian-Her

    2014-12-01

    After total knee replacement, the model-based Roentgen stereophotogrammetric analysis (RSA) technique has been used to monitor the status of prosthetic wear, misalignment, and even failure. However, the overlap of the prosthetic outlines inevitably increases errors in the estimation of prosthetic poses due to the limited amount of available outlines. In the literature, quite a few studies have investigated the problems induced by the overlapped outlines, and manual adjustment is still the mainstream. This study proposes two methods to automate the image processing of overlapped outlines prior to the pose registration of prosthetic models. The outline-separated method defines the intersected points and segments the overlapped outlines. The feature-recognized method uses the point and line features of the remaining outlines to initiate registration. Overlap percentage is defined as the ratio of overlapped to non-overlapped outlines. The simulated images with five overlapping percentages are used to evaluate the robustness and accuracy of the proposed methods. Compared with non-overlapped images, overlapped images reduce the number of outlines available for model-based RSA calculation. The maximum and root mean square errors for a prosthetic outline are 0.35 and 0.04 mm, respectively. The mean translation and rotation errors are 0.11 mm and 0.18°, respectively. The errors of the model-based RSA results are increased when the overlap percentage is beyond about 9%. In conclusion, both outline-separated and feature-recognized methods can be seamlessly integrated to automate the calculation of rough registration. This can significantly increase the clinical practicability of the model-based RSA technique. PMID:25293422

  11. 3D position estimation using an artificial neural network for a continuous scintillator PET detector

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Zhu, W.; Cheng, X.; Li, D.

    2013-03-01

    Continuous-crystal-based PET detectors offer a simple design, low cost, good energy resolution and high detection efficiency. Through single-end readout of scintillation light, direct three-dimensional (3D) position estimation could be another advantage of the continuous crystal detector. In this paper, we propose to use artificial neural networks to simultaneously estimate the plane coordinates and the DOI coordinate of incident γ photons from the detected scintillation light. Using our experimental setup with an ‘8 + 8’ simplified signal readout scheme, training data for perpendicular irradiation on the front surface and one side surface are obtained, and the plane (x, y) networks and DOI networks are trained and evaluated. The test results show that the artificial neural network for DOI estimation is as effective as that for plane estimation. The performance of both estimators is reported in terms of resolution and bias. Without bias correction, the resolution of the plane estimator is on average better than 2 mm and that of the DOI estimator is about 2 mm over the whole area of the detector. With bias correction, the resolution at the edge area for plane estimation, or at the end of the block away from the readout PMT for DOI estimation, becomes worse, as we expect. The comprehensive performance of the 3D positioning by a neural network is assessed using the experimental test data of oblique irradiations. To show the combined effect of the 3D positioning over the whole area of the detector, the 2D flood images of oblique irradiation are presented with and without bias correction.
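
    The network's role is to regress a 3D interaction position from the readout signals. The sketch below is a generic scikit-learn stand-in with synthetic light-distribution features and an arbitrary network size, not the experimental '8 + 8' setup:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)
        # Stand-in training data: 16 readout signals per event -> known (x, y, DOI) position.
        n_events = 5000
        signals = rng.random((n_events, 16))
        positions = rng.uniform(0.0, 50.0, (n_events, 3))          # mm, within the crystal

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
        net.fit(signals, positions)

        # Estimate the 3D interaction position of a new event from its light distribution.
        xyz = net.predict(rng.random((1, 16)))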

  12. Effects of scatter on model parameter estimates in 3D PET studies of the human brain

    SciTech Connect

    Cherry, S.R.; Huang, S.C.

    1995-08-01

    Phantom measurements and simulated data were used to characterize the effects of scatter on 3D PET projection data, reconstructed images and model parameter estimates. Scatter distributions were estimated from studies of the 3D Hoffman brain phantom by the 2D/3D difference method. The total scatter fraction in the projection data was 40%, but reduces to 27% when only those counts within the boundary of the brain are considered. After reconstruction, the whole-brain scatter fraction is 20%, averaging 10% in cortical gray matter, 21% in basal ganglia and 40% in white matter. The scatter contribution varies by almost a factor of two from the edge to the center of the brain due to the shape of the scatter distribution and the effects of attenuation correction. The effect of scatter on estimates of the cerebral metabolic rate for glucose (CMRGlc) and cerebral blood flow (CBF) is evaluated by simulating typical gray matter time activity curves (TACs) and adding a scatter component based on whole-brain activity. Both CMRGlc and CBF change in a linear fashion with scatter fraction. Errors of between 10 and 30% will typically result if 3D studies are not corrected for scatter. The authors also present results from a simple and fast scatter correction which fits a Gaussian function to the scattered events outside the brain. This reduced the scatter fraction to <2% in a range of phantom studies with different activity distributions. Using this correction, quantitative errors in 3D PET studies of CMRGlc and CBF can be reduced to well below 10%.
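
    The tail-fitting correction mentioned at the end can be sketched in one dimension: fit a Gaussian to the counts measured outside the object and subtract the fitted estimate everywhere. The profile below is synthetic and the object boundaries are illustrative, not the authors' data:

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss(x, a, mu, sigma):
            return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

        # Synthetic projection profile: true activity inside |x| < 10 cm plus broad scatter.
        x = np.linspace(-30, 30, 121)
        scatter = gauss(x, 8.0, 0.0, 15.0)
        trues = np.where(np.abs(x) < 10, 50.0, 0.0)
        profile = trues + scatter + np.random.default_rng(5).normal(0, 0.5, x.size)

        # Fit the Gaussian only to samples outside the object boundary ...
        outside = np.abs(x) > 12
        popt, _ = curve_fit(gauss, x[outside], profile[outside], p0=[5.0, 0.0, 20.0])
        # ... and subtract the fitted scatter estimate everywhere.
        corrected = profile - gauss(x, *popt)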

  13. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture.

    PubMed

    Xu, Gang; Xing, Mengdao; Xia, Xiang-Gen; Zhang, Lei; Chen, Qianqian; Bao, Zheng

    2016-05-01

    In the current scenario of high-resolution inverse synthetic aperture radar (ISAR) imaging, the non-cooperative targets may have strong maneuverability, which tends to cause time-variant Doppler modulation and imaging plane in the echoed data. Furthermore, it is still a challenge to realize ISAR imaging of maneuvering targets from sparse aperture (SA) data. In this paper, we focus on the problem of 3D geometry and motion estimations of maneuvering targets for interferometric ISAR (InISAR) with SA. For a target of uniformly accelerated rotation, the rotational modulation in echo is formulated as chirp sensing code under a chirp-Fourier dictionary to represent the maneuverability. In particular, a joint multi-channel imaging approach is developed to incorporate the multi-channel data and treat the multi-channel ISAR image formation as a joint-sparsity constraint optimization. Then, a modified orthogonal matching pursuit (OMP) algorithm is employed to solve the optimization problem to produce high-resolution range-Doppler (RD) images and chirp parameter estimation. The 3D target geometry and the motion estimations are followed by using the acquired RD images and chirp parameters. Herein, a joint estimation approach of 3D geometry and rotation motion is presented to realize outlier removing and error reduction. In comparison with independent single-channel processing, the proposed joint multi-channel imaging approach performs better in 2D imaging, 3D imaging, and motion estimation. Finally, experiments using both simulated and measured data are performed to confirm the effectiveness of the proposed algorithm. PMID:26930684
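
    The sparse-recovery step can be illustrated with a generic orthogonal matching pursuit on a synthetic dictionary, here using scikit-learn rather than the modified multi-channel OMP developed in the paper:

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(6)
        # Synthetic dictionary (standing in for chirp-Fourier atoms) and a 5-sparse scene.
        n_samples, n_atoms = 128, 512
        D = rng.normal(size=(n_samples, n_atoms))
        D /= np.linalg.norm(D, axis=0)
        x_true = np.zeros(n_atoms)
        x_true[rng.choice(n_atoms, 5, replace=False)] = rng.normal(size=5)
        y = D @ x_true + 0.01 * rng.normal(size=n_samples)

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(D, y)
        x_hat = omp.coef_                       # recovered sparse reflectivity vector
        support = np.flatnonzero(x_hat)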

  14. Bone Pose Estimation in the Presence of Soft Tissue Artifact Using Triangular Cosserat Point Elements.

    PubMed

    Solav, Dana; Rubin, M B; Cereatti, Andrea; Camomilla, Valentina; Wolf, Alon

    2016-04-01

    Accurate estimation of the position and orientation (pose) of a bone from a cluster of skin markers is limited mostly by the relative motion between the bone and the markers, which is known as the soft tissue artifact (STA). This work presents a method, based on continuum mechanics, to describe the kinematics of a cluster affected by STA. The cluster is characterized by triangular cosserat point elements (TCPEs) defined by all combinations of three markers. The effects of the STA on the TCPEs are quantified using three parameters describing the strain in each TCPE and the relative rotation and translation between TCPEs. The method was evaluated using previously collected ex vivo kinematic data. Femur pose was estimated from 12 skin markers on the thigh, while its reference pose was measured using bone pins. Analysis revealed that instantaneous subsets of TCPEs exist which estimate bone position and orientation more accurately than the Procrustes Superimposition applied to the cluster of all markers. It has been shown that some of these parameters correlate well with femur pose errors, which suggests that they can be used to select, at each instant, subsets of TCPEs leading to an improved estimation of the underlying bone pose. PMID:26194039

  15. Empirical mode decomposition-based facial pose estimation inside video sequences

    NASA Astrophysics Data System (ADS)

    Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing

    2010-03-01

    We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all of these negative effects can be minimized. Extensive experiments were carried out in comparison with existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.

  16. Pose Estimation of Unmanned Aerial Vehicles Based on a Vision-Aided Multi-Sensor Fusion

    NASA Astrophysics Data System (ADS)

    Abdi, G.; Samadzadegan, F.; Kurz, F.

    2016-06-01

    GNSS/IMU navigation systems offer a low-cost and robust solution for navigating UAVs. Since redundant measurements greatly improve the reliability of navigation systems, extensive research has been carried out to enhance the efficiency and robustness of GNSS/IMU with additional sensors. This paper presents a method for integrating reference data, images taken from UAVs, barometric height data and GNSS/IMU data to estimate accurate and reliable pose parameters of UAVs. We provide improved pose estimates by integrating multi-sensor observations in an EKF algorithm with an IMU motion model. The implemented methodology has been demonstrated to be very efficient and reliable for automatic pose estimation. The calculated position and attitude of the UAV, especially when the GNSS was removed from the working cycle, clearly indicate the capability of the proposed methodology.

  17. Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.

    PubMed

    Van, Anh T; Hernando, Diego; Sutton, Bradley P

    2011-11-01

    A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method. PMID:21652284

  18. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing, sequentially, data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.

  19. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    NASA Astrophysics Data System (ADS)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can only be reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose a fast, efficient, low-latency, line-scanning-based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition, we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention tends mostly toward faces, our solution addresses the most distracting errors. A subjective assessment by paired comparison on a set of challenging low-depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation results in a significant improvement.

  20. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is shorter than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454
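
    The height-histogram step lends itself to a compact sketch. The following is a toy version of that single step under assumed bin size and tolerance values; the Gibbs-Markov random field refinement and the boundary estimation described in the abstract are not reproduced.

```python
# Toy ground segmentation: pick the most populated height bin as the ground
# level and label points near it as ground. Bin size and tolerance are assumptions.
import numpy as np

def segment_ground(points, bin_size=0.1, tolerance=0.3):
    """points: (N, 3) array of x, y, z samples from the reconstructed map."""
    z = points[:, 2]
    hist, edges = np.histogram(z, bins=np.arange(z.min(), z.max() + bin_size, bin_size))
    ground_z = edges[np.argmax(hist)]          # left edge of the densest height bin
    return np.abs(z - ground_z) <= tolerance

rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 10, 500), rng.uniform(0, 10, 500),
                          rng.normal(0.0, 0.05, 500)])
tree = np.column_stack([rng.uniform(4, 5, 100), rng.uniform(4, 5, 100),
                        rng.uniform(0.5, 4.0, 100)])
cloud = np.vstack([ground, tree])
print("ground points found:", segment_ground(cloud).sum(), "of", len(cloud))
```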

  1. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is shorter than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454

  2. Detecting and estimating errors in 3D restoration methods using analog models.

    NASA Astrophysics Data System (ADS)

    José Ramón, Ma; Pueyo, Emilio L.; Briz, José Luis

    2015-04-01

    Some geological scenarios may be important for a number of socio-economic reasons, such as water or energy resources, but the available underground information is often limited, scarce and heterogeneous. A truly 3D reconstruction, which is still necessary during the decision-making process, may have important social and economic implications. For this reason, restoration methods were developed. By honoring some geometric or mechanical laws, they help build a reliable image of the subsurface. Pioneer methods were first applied in 2D (balanced and restored cross-sections) during the sixties and seventies. Later on, owing to improvements in computational capabilities, they were extended to 3D. Currently, there are several academic and commercial restoration solutions: Unfold by the Université de Grenoble, Move by Midland Valley Exploration, Kine3D (on gOcad code) by Paradigm, and Dynel3D by igeoss-Schlumberger. We have developed our own restoration method, Pmag3Drest (IGME-Universidad de Zaragoza), which is designed to tackle complex geometrical scenarios using paleomagnetic vectors as a pseudo-3D indicator of deformation. However, all these methods have limitations arising from the assumptions they need to make. For this reason, detecting and estimating uncertainty in 3D restoration methods is of key importance for trusting the reconstructions. Checking the reliability and internal consistency of every method, as well as comparing the results among restoration tools, is a critical issue that has never been tackled so far because the results cannot be tested against Nature. To overcome this problem we have developed a technique using analog models. We built complex geometric models, inspired by real cases of superposed and/or conical folding, at laboratory scale. The stratigraphic volumes were modeled using EVA (ethylene vinyl acetate) sheets. Their rheology (tensile and tear strength, elongation, density, etc.) and thickness can be chosen among a large number of values

  3. Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera

    NASA Technical Reports Server (NTRS)

    Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.

    1987-01-01

    A new approach is introduced for estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. Researchers introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and amount of processing required.
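
    To make the gradient-descent step concrete, here is a hedged toy version for the sphere case: it fits a sphere's centre and radius to noisy 3D points by nonlinear least squares. The paper's estimator works on image sequences with unknown camera geometry, so this only illustrates the optimization machinery, not the full method.

```python
# Gradient descent on the radial least-squares cost for a sphere (centre c, radius r).
import numpy as np

rng = np.random.default_rng(1)
true_c, true_r = np.array([1.0, -2.0, 0.5]), 3.0
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = true_c + true_r * dirs + rng.normal(scale=0.02, size=(200, 3))   # noisy samples

c, r, lr = np.zeros(3), 1.0, 0.1
for _ in range(2000):
    diff = pts - c
    dist = np.linalg.norm(diff, axis=1)
    resid = dist - r                                        # signed radial residuals
    grad_c = (-diff / dist[:, None] * resid[:, None]).mean(axis=0)
    grad_r = -resid.mean()
    c, r = c - lr * grad_c, r - lr * grad_r

print("estimated centre:", np.round(c, 3), "estimated radius:", round(r, 3))
```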

  4. Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.

  5. Accurate estimation of forest carbon stocks by 3-D remote sensing of individual trees.

    PubMed

    Omasa, Kenji; Qiu, Guo Yu; Watanuki, Kenichi; Yoshimi, Kenji; Akiyama, Yukihide

    2003-03-15

    Forests are one of the most important carbon sinks on Earth. However, owing to the complex structure, variable geography, and large area of forests, accurate estimation of forest carbon stocks is still a challenge for both site surveying and remote sensing. For these reasons, the Kyoto Protocol requires the establishment of methodologies for estimating the carbon stocks of forests (Kyoto Protocol, Article 5). A possible solution to this challenge is to remotely measure the carbon stocks of every tree in an entire forest. Here, we present a methodology for estimating carbon stocks of a Japanese cedar forest by using a high-resolution, helicopter-borne 3-dimensional (3-D) scanning lidar system that measures the 3-D canopy structure of every tree in a forest. Results show that a digital image (10-cm mesh) of woody canopy can be acquired. The treetop can be detected automatically with a reasonable accuracy. The absolute error ranges for tree height measurements are within 42 cm. Allometric relationships of height to carbon stocks then permit estimation of total carbon storage by measurement of carbon stocks of every tree. Thus, we suggest that our methodology can be used to accurately estimate the carbon stocks of Japanese cedar forests at a stand scale. Periodic measurements will reveal changes in forest carbon stocks. PMID:12680675
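
    The final book-keeping step, going from per-tree lidar heights to a stand-level carbon figure via an allometric relation, is simple enough to sketch; the power-law coefficients below are purely illustrative assumptions, not the calibration used in the study.

```python
# Apply an assumed allometric relation carbon = a * height**b to lidar-derived
# tree heights and sum over the stand. Coefficients a, b are made up.
heights_m = [18.2, 21.5, 16.9, 24.1, 19.8]   # per-tree heights from the canopy model
a, b = 0.8, 2.3                              # illustrative allometric coefficients

carbon_per_tree = [a * h ** b for h in heights_m]
print("stand carbon stock estimate: %.1f kg C" % sum(carbon_per_tree))
```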

  6. A New Pose Estimation Algorithm Using a Perspective-Ray-Based Scaled Orthographic Projection with Iteration

    PubMed Central

    Sun, Pengfei; Sun, Changku; Li, Wenqiang; Wang, Peng

    2015-01-01

    Pose estimation aims at measuring the position and orientation of a calibrated camera using known image features. The pinhole model is the dominant camera model in this field. However, the imaging precision of this model is not accurate enough for an advanced pose estimation algorithm. In this paper, a new camera model, called the incident ray tracking model, is introduced. More importantly, an advanced pose estimation algorithm based on the perspective ray in the new camera model is proposed. The perspective ray, determined by two positioning points, is an abstract mathematical equivalent of the incident ray. In the proposed pose estimation algorithm, called perspective-ray-based scaled orthographic projection with iteration (PRSOI), an approximate ray-based projection is calculated by a linear system and refined by iteration. Experiments on PRSOI have been conducted, and the results demonstrate that it achieves high accuracy over six degrees of freedom (DOF) of motion and outperforms three other state-of-the-art algorithms in terms of accuracy in the comparison experiment. PMID:26197272

  7. 3D visualization and biovolume estimation of motile cells by digital holography

    NASA Astrophysics Data System (ADS)

    Merola, F.; Miccio, L.; Memmolo, P.; Di Caprio, G.; Coppola, G.; Netti, P.

    2014-05-01

    For the monitoring of biological samples, physical parameters such as size, shape and refractive index are of crucial importance. However, up to now the morphological analysis of in-vitro cells has been limited to 2D analysis by classical optical microscopy such as phase contrast or DIC. Here we show an approach that exploits the capability of optical tweezers to trap bovine spermatozoa flowing in a microfluidic channel and set them into self-rotation. At the same time, digital holographic microscopy allows imaging of the cell in phase-contrast modality at each angular position during the rotation. From the collected information about the cell's phase-contrast signature, we demonstrate that it is possible to reconstruct the 3D shape of the cell and estimate its volume. The method can open new pathways for rapid measurement of in-vitro cell volume in microfluidic lab-on-a-chip platforms, thus giving access to the 3D shape of the object while avoiding tomographic microscopy, which is a cumbersome and very complex approach to measuring 3D shape and estimating biovolume.

  8. Parametric estimation of 3D tubular structures for diffuse optical tomography

    PubMed Central

    Larusson, Fridrik; Anderson, Pamela G.; Rosenberg, Elizabeth; Kilmer, Misha E.; Sassaroli, Angelo; Fantini, Sergio; Miller, Eric L.

    2013-01-01

    We explore the use of diffuse optical tomography (DOT) for the recovery of 3D tubular shapes representing vascular structures in breast tissue. Using a parametric level set (PaLS) method, our approach incorporates the connectedness of vascular structures in breast tissue to reconstruct shape and absorption values from severely limited data sets. The approach is based on a decomposition of the unknown structure into a series of two-dimensional slices. Using a simplified physical model that ignores 3D effects of the complete structure, we develop a novel inter-slice regularization strategy to obtain global regularity. We report on simulated and experimental reconstructions using realistic optical contrasts where our method provides a more accurate estimate compared to an unregularized approach and a pixel-based reconstruction. PMID:23411913

  9. 3D Porosity Estimation of the Nankai Trough Sediments from Core-log-seismic Integration

    NASA Astrophysics Data System (ADS)

    Park, J. O.

    2015-12-01

    The Nankai Trough off southwest Japan is one of the best subduction zones in which to study megathrust earthquake faults. Historic, great megathrust earthquakes with a recurrence interval of 100-200 yr have generated strong motion and large tsunamis along the Nankai Trough subduction zone. At the Nankai Trough margin, the Philippine Sea Plate (PSP) is being subducted beneath the Eurasian Plate to the northwest at a convergence rate of ~4 cm/yr. The Shikoku Basin, the northern part of the PSP, is estimated to have opened between 25 and 15 Ma by backarc spreading of the Izu-Bonin arc. The >100-km-wide Nankai accretionary wedge, which has developed landward of the trench since the Miocene, mainly consists of offscraped and underplated materials from the trough-fill turbidites and the Shikoku Basin hemipelagic sediments. In particular, the physical properties of the incoming hemipelagic sediments may be critical for the seismogenic behavior of the megathrust fault. We have carried out core-log-seismic integration (CLSI) to estimate 3D acoustic impedance and porosity for the incoming sediments in the Nankai Trough. For the CLSI, we used 3D seismic reflection data together with P-wave velocity and density data obtained during IODP (Integrated Ocean Drilling Program) Expeditions 322 and 333. We computed acoustic impedance depth profiles for the IODP drilling sites from the P-wave velocity and density data. We constructed seismic convolution models with the acoustic impedance profiles and a source wavelet extracted from the seismic data, adjusting the seismic models to the observed seismic traces with an inversion method. As a result, we obtained a 3D acoustic impedance volume and then converted it to a 3D porosity volume. In general, the 3D porosities decrease with depth. We found a porosity anomaly zone with alternating high and low porosities seaward of the trough axis. In this talk, we will show detailed 3D porosity of the incoming sediments, and present implications of the porosity anomaly zone for the
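
    The last conversion step described above (impedance from velocity and density, then porosity from density) can be sketched with generic rock-physics relations; the grain and fluid densities below are standard assumptions, not the calibration used for the Nankai sediments.

```python
# Acoustic impedance Z = Vp * rho and a density-porosity transform
# phi = (rho_grain - rho_bulk) / (rho_grain - rho_fluid). Values are illustrative.
rho_grain, rho_fluid = 2.70, 1.03          # g/cm^3, assumed matrix and seawater

samples = [                                # (P-wave velocity km/s, bulk density g/cm^3)
    (1.60, 1.55),
    (1.75, 1.68),
    (2.10, 1.85),
]

for vp, rho_bulk in samples:
    impedance = vp * rho_bulk
    porosity = (rho_grain - rho_bulk) / (rho_grain - rho_fluid)
    print(f"Vp={vp:.2f} km/s  Z={impedance:.2f} (km/s)(g/cm^3)  porosity={porosity:.2f}")
```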

  10. A multi-camera system for real-time pose estimation

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates upon the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
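
    One common simplification of such a spherical-head model (not necessarily the exact equations derived in the paper) is that a frontal feature at the head centre moves horizontally by R*sin(yaw) in the image, so the yaw can be read back with an arcsine; the sketch below uses made-up pixel geometry.

```python
# Recover yaw from the horizontal offset of a facial feature, assuming a
# spherical head of known image radius. All geometry here is illustrative.
import math

def yaw_from_offset(feature_x, head_centre_x, head_radius_px):
    offset = (feature_x - head_centre_x) / head_radius_px
    offset = max(-1.0, min(1.0, offset))      # clamp for numerical safety
    return math.degrees(math.asin(offset))

# synthetic check: a head of radius 60 px rotated by 25 degrees about the vertical axis
true_yaw = 25.0
feature_x = 320 + 60 * math.sin(math.radians(true_yaw))
print("recovered yaw: %.1f deg" % yaw_from_offset(feature_x, 320, 60))
```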

  11. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388
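
    The 3D point reconstruction at the core of such a stereo rig can be illustrated with a standard linear (DLT) triangulation; the projection matrices below are assumed values, and the paper's robust target identification and calibration pipeline are not reproduced.

```python
# Linear triangulation of one 3D point from two calibrated views.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates in each view."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])     # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # first camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])   # 0.5 m baseline

X_true = np.array([0.2, -0.1, 4.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print("reconstructed point:", np.round(triangulate(P1, P2, x1, x2), 3))
```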

  12. A Multi-Task Learning Framework for Head Pose Estimation under Target Motion.

    PubMed

    Yan, Yan; Ricci, Elisa; Subramanian, Ramanathan; Liu, Gaowen; Lanz, Oswald; Sebe, Nicu

    2016-06-01

    Recently, head pose estimation (HPE) from low-resolution surveillance data has gained in importance. However, monocular and multi-view HPE approaches still work poorly under target motion, as facial appearance distorts owing to camera perspective and scale changes when a person moves around. To this end, we propose FEGA-MTL, a novel framework based on Multi-Task Learning (MTL) for classifying the head pose of a person who moves freely in an environment monitored by multiple, large field-of-view surveillance cameras. Upon partitioning the monitored scene into a dense uniform spatial grid, FEGA-MTL simultaneously clusters grid partitions into regions with similar facial appearance, while learning region-specific head pose classifiers. In the learning phase, guided by two graphs which a-priori model the similarity among (1) grid partitions based on camera geometry and (2) head pose classes, FEGA-MTL derives the optimal scene partitioning and associated pose classifiers. Upon determining the target's position using a person tracker at test time, the corresponding region-specific classifier is invoked for HPE. The FEGA-MTL framework naturally extends to a weakly supervised setting where the target's walking direction is employed as a proxy in lieu of head orientation. Experiments confirm that FEGA-MTL significantly outperforms competing single-task and multi-task learning methods in multi-view settings. PMID:26372209

  13. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    USGS Publications Warehouse

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000x, thereby greatly improving the applicability of the method.
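
    For orientation, the sketch below computes an ordinary (not movement-based) 3D Gaussian kernel density estimate from synthetic GPS fixes; the paper's movement model and the HPC optimizations that give the reported speed-up are not reproduced here.

```python
# Plain 3D kernel density estimate of space use from (x, y, altitude) fixes.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
fixes = np.vstack([                                    # two synthetic activity centres
    rng.normal([0, 0, 100], [200, 200, 30], size=(300, 3)),
    rng.normal([800, 400, 300], [150, 150, 50], size=(200, 3)),
]).T                                                   # shape (3, n) as gaussian_kde expects

kde = gaussian_kde(fixes)
query = np.array([[0.0, 800.0], [0.0, 400.0], [100.0, 300.0]])   # the two centres
print("relative use density at the two activity centres:", kde(query))
```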

  14. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0–1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.

  15. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy.

    PubMed

    Stemkens, Bjorn; Tijssen, Rob H N; de Senneville, Baudouin Denis; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2016-07-21

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy. PMID:27362636
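
    The core fitting idea, compressing the pre-beam DVFs with PCA and then estimating the mode weights from a partial fast-2D observation by least squares, can be sketched as follows; the array shapes, the random stand-in "training" DVFs and the slice-selection operator are all illustrative assumptions.

```python
# PCA motion model fitted to a partial observation by linear least squares.
import numpy as np

rng = np.random.default_rng(3)
n_phases, n_voxels = 10, 5000
training_dvfs = rng.normal(size=(n_phases, n_voxels))       # stand-in for 4D-MRI DVFs

mean_dvf = training_dvfs.mean(axis=0)
_, _, Vt = np.linalg.svd(training_dvfs - mean_dvf, full_matrices=False)
components = Vt[:2]                                         # first two motion modes

observed = rng.choice(n_voxels, size=400, replace=False)    # voxels seen by the 2D slice
true_w = np.array([1.5, -0.7])
slice_dvf = mean_dvf[observed] + true_w @ components[:, observed]

A = components[:, observed].T                               # (400, 2) design matrix
w, *_ = np.linalg.lstsq(A, slice_dvf - mean_dvf[observed], rcond=None)
full_dvf = mean_dvf + w @ components                        # full 3D DVF at this time point
print("recovered mode weights:", np.round(w, 3))
```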

  16. Fast myocardial strain estimation from 3D ultrasound through elastic image registration with analytic regularization

    NASA Astrophysics Data System (ADS)

    Chakraborty, Bidisha; Heyde, Brecht; Alessandrini, Martino; D'hooge, Jan

    2016-04-01

    Image registration techniques using free-form deformation models have shown promising results for 3D myocardial strain estimation from ultrasound. However, the use of this technique has mostly been limited to research institutes due to the high computational demand, which is primarily due to the computational load of the regularization term ensuring spatially smooth cardiac strain estimates. Indeed, this term typically requires evaluating derivatives of the transformation field numerically in each voxel of the image during every iteration of the optimization process. In this paper, we replace this time-consuming step with a closed-form solution directly associated with the transformation field, resulting in a speed-up factor of ~10-60,000 for a typical 3D B-mode image of 250³ and 500³ voxels, depending upon the size and the parametrization of the transformation field. The performance of the numeric and the analytic solutions was contrasted by computing tracking and strain accuracy on two realistic synthetic 3D cardiac ultrasound sequences, mimicking two ischemic motion patterns. Mean and standard deviation of the displacement errors over the cardiac cycle for the numeric and analytic solutions were 0.68 ± 0.40 mm and 0.75 ± 0.43 mm, respectively. Correlations for the radial, longitudinal and circumferential strain components at end-systole were 0.89, 0.83 and 0.95 versus 0.90, 0.88 and 0.92 for the numeric and analytic regularization, respectively. The analytic solution matched the performance of the numeric solution as no statistically significant differences (p>0.05) were found when expressed in terms of bias or limits-of-agreement.

  17. Accuracy Evaluation for a Precise Indoor Multi-Camera Pose Estimation System

    NASA Astrophysics Data System (ADS)

    Götz, C.; Tuttas, S.; Hoegner, L.; Eder, K.; Stilla, U.

    2011-04-01

    Pose estimation is used for different applications such as indoor positioning, simultaneous localization and mapping (SLAM), industrial measurement and robot calibration. For industrial applications, several approaches dealing with pose estimation employ photogrammetric methods. Cameras that observe an object from a given point of view are utilized, as well as cameras that are firmly mounted on the object to be oriented. Since it is not always possible to create an environment in which cameras can observe the object, we concentrate on the latter option. A camera system shall be developed that is flexibly applicable in an indoor environment and can cope with different occlusion situations, varying distances and varying densities of reference marks. For this purpose, in a first step a conception has been designed and a test scenario was created to evaluate different camera configurations and reference mark distributions. Both issues, the theoretical concept as well as the experimental setup, are the subject of this document.

  18. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    PubMed

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and the video are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556

  19. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    PubMed Central

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and the video are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
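
    One way such an analytic-geometry estimate can work, sketched below under the assumption that the clicked target lies on the floor plane, is to back-project the pixel through a calibrated camera and intersect the ray with z = 0; the intrinsics, camera height and orientation are made-up values, not the paper's calibration.

```python
# Back-project a touchscreen click and intersect the ray with the floor plane z = 0.
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])   # assumed intrinsics
R = np.diag([1.0, -1.0, -1.0])           # camera looking straight down at the floor
cam_pos = np.array([0.0, 0.0, 0.5])      # camera 0.5 m above the floor (assumed)

def click_to_floor(u, v):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])    # viewing ray, camera frame
    ray_world = R.T @ ray_cam                             # same ray in world frame
    t = -cam_pos[2] / ray_world[2]                        # intersect the plane z = 0
    return cam_pos + t * ray_world

print("clicked target at:", np.round(click_to_floor(400, 300), 3), "m")
```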

  20. 3D global estimation and augmented reality visualization of intra-operative X-ray dose.

    PubMed

    Rodas, Nicolas Loy; Padoy, Nicolas

    2014-01-01

    The growing use of image-guided minimally-invasive surgical procedures is confronting clinicians and surgical staff with new radiation exposure risks from X-ray imaging devices. The accurate estimation of intra-operative radiation exposure can increase staff awareness of radiation exposure risks and enable the implementation of well-adapted safety measures. The current surgical practice of wearing a single dosimeter at chest level to measure radiation exposure does not provide a sufficiently accurate estimation of radiation absorption throughout the body. In this paper, we propose an approach that combines data from wireless dosimeters with the simulation of radiation propagation in order to provide a global radiation risk map in the area near the X-ray device. We use a multi-camera RGBD system to obtain a 3D point cloud reconstruction of the room. The positions of the table, C-arm and clinician are then used 1) to simulate the propagation of radiation in a real-world setup and 2) to overlay the resulting 3D risk-map onto the scene in an augmented reality manner. By using real-time wireless dosimeters in our system, we can both calibrate the simulation and validate its accuracy at specific locations in real-time. We demonstrate our system in an operating room equipped with a robotised X-ray imaging device and validate the radiation simulation on several X-ray acquisition setups. PMID:25333145

  1. Learning a Tracking and Estimation Integrated Graphical Model for Human Pose Tracking.

    PubMed

    Zhao, Lin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-12-01

    We investigate the tracking of 2-D human poses in a video stream to determine the spatial configuration of body parts in each frame, but this is not a trivial task because people may wear different kinds of clothing and may move very quickly and unpredictably. The technology of pose estimation is typically applied, but it ignores the temporal context and cannot provide smooth, reliable tracking results. Therefore, we develop a tracking and estimation integrated model (TEIM) to fully exploit temporal information by integrating pose estimation with visual tracking. However, joint parsing of multiple articulated parts over time is difficult, because a full model with edges capturing all pairwise relationships within and between frames is loopy and intractable. In previous models, approximate inference was usually resorted to, but it cannot promise good results and the computational cost is large. We overcome these problems by exploring the idea of divide and conquer, which decomposes the full model into two much simpler tractable submodels. In addition, a novel two-step iteration strategy is proposed to efficiently conquer the joint parsing problem. Algorithmically, we design TEIM very carefully so that: 1) it enables pose estimation and visual tracking to compensate for each other to achieve desirable tracking results; 2) it is able to deal with the problem of tracking loss; and 3) it only needs past information and is capable of tracking online. Experiments are conducted on two public data sets in the wild with ground truth layout annotations, and the experimental results indicate the effectiveness of the proposed TEIM framework. PMID:25826809

  2. The investigation and implementation of real-time face pose and direction estimation on mobile computing devices

    NASA Astrophysics Data System (ADS)

    Fu, Deqian; Gao, Lisheng; Jhang, Seong Tae

    2012-04-01

    Mobile computing devices have many limitations, such as a relatively small user interface and slow computing speed. Augmented reality usually requires face pose estimation, which can also serve as an HCI and entertainment tool. As far as the real-time implementation of head pose estimation on relatively resource-limited mobile platforms is concerned, different constraints must be met while retaining sufficient face pose estimation accuracy. The proposed face pose estimation method meets this objective. Experimental results on a test Android mobile device show that it runs in real time and estimates pose accurately.

  3. On-line 3D motion estimation using low resolution MRI

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2015-08-01

    Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the needed tissue signal for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimations. Potentially, spatially undersampled images yield comparable motion estimations; at the same time, their acquisition times would be greatly reduced due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations are extracted consistently using the same motion estimation method for each downsampled dataset. Errors between the original and the respectively downsampled version of the dataset are then evaluated. Compared to ground truth, results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In an MRI therapy guidance scenario, imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.

  4. Pose and Motion Estimation Using Dual Quaternion-Based Extended Kalman Filtering

    SciTech Connect

    Goddard, J.S.; Abidi, M.A.

    1998-06-01

    A solution to the remote three-dimensional (3-D) measurement problem is presented for a dynamic system given a sequence of two-dimensional (2-D) intensity images of a moving object. The 3-D transformation is modeled as a nonlinear stochastic system with the state estimate providing the six-degree-of-freedom motion and position values as well as structure. The stochastic model uses the iterated extended Kalman filter (IEKF) as a nonlinear estimator and a screw representation of the 3-D transformation based on dual quaternions. Dual quaternions, whose elements are dual numbers, provide a means to represent both rotation and translation in a unified notation. Linear object features, represented as dual vectors, are transformed using the dual quaternion transformation and are then projected to linear features in the image plane. The method has been implemented and tested with both simulated and actual experimental data. Simulation results are provided, along with comparisons to a point-based IEKF method using rotation and translation, to show the relative advantages of this method. Experimental results from testing using a camera mounted on the end effector of a robot arm are also given.

  5. Pose and motion estimation using dual quaternion-based extended Kalman filtering

    NASA Astrophysics Data System (ADS)

    Goddard, J. S.; Abidi, Mongi A.

    1998-03-01

    A solution to the remote three-dimensional (3-D) measurement problem is presented for a dynamic system given a sequence of two-dimensional (2-D) intensity images of a moving object. The 3-D transformation is modeled as a nonlinear stochastic system with the state estimate providing the six-degree-of-freedom motion and position values as well as structure. The stochastic model uses the iterated extended Kalman filter (IEKF) as a nonlinear estimator and a screw representation of the 3-D transformation based on dual quaternions. Dual quaternions, whose elements are dual numbers, provide a means to represent both rotation and translation in a unified notation. Linear object features, represented as dual vectors, are transformed using the dual quaternion transformation and are then projected to linear features in the image plane. The method has been implemented and tested with both simulated and actual experimental data. Simulation results are provided, along with comparisons to a point-based IEKF method using rotation and translation, to show the relative advantages of this method. Experimental results from testing using a camera mounted on the end effector of a robot arm are also given.
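
    The dual-quaternion bookkeeping itself is compact enough to sketch (the IEKF and the image-plane measurement model are not reproduced): a rotation quaternion q_r and translation t are packed as q_r + eps*q_d with q_d = 0.5*(0,t)*q_r, and the translation is recovered from the dual part as 2*q_d*conj(q_r).

```python
# Minimal dual-quaternion encode/decode and point transform (w-first quaternions).
import numpy as np

def qmult(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

angle = np.radians(40.0)                                  # 40 degrees about z
q_r = np.array([np.cos(angle / 2), 0.0, 0.0, np.sin(angle / 2)])
t = np.array([1.0, 2.0, 3.0])
q_d = 0.5 * qmult(np.concatenate([[0.0], t]), q_r)        # dual part of the dual quaternion

t_back = 2.0 * qmult(q_d, qconj(q_r))[1:]                 # translation recovered from q_d
p = np.array([0.5, -0.5, 1.0])
p_rot = qmult(qmult(q_r, np.concatenate([[0.0], p])), qconj(q_r))[1:]
print("recovered translation:", np.round(t_back, 6))
print("transformed point:", np.round(p_rot + t_back, 4))
```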

  6. Object recognition and pose estimation of planar objects from range data

    NASA Technical Reports Server (NTRS)

    Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael

    1994-01-01

    The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on orientations of extracted planes and their intersection points. This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and

  7. Automated Segmentation of the Right Ventricle in 3D Echocardiography: A Kalman Filter State Estimation Approach.

    PubMed

    Bersvendsen, Jorn; Orderud, Fredrik; Massey, Richard John; Fosså, Kristian; Gerard, Olivier; Urheim, Stig; Samset, Eigil

    2016-01-01

    As the right ventricle's (RV) role in cardiovascular diseases is being more widely recognized, interest in RV imaging, function and quantification is growing. However, there are currently few RV quantification methods for 3D echocardiography presented in the literature or commercially available. In this paper we propose an automated RV segmentation method for 3D echocardiographic images. We represent the RV geometry by a Doo-Sabin subdivision surface with deformation modes derived from a training set of manual segmentations. The segmentation is then represented as a state estimation problem and solved with an extended Kalman filter by combining the RV geometry with a motion model and edge detection. Validation was performed by comparing surface-surface distances, volumes and ejection fractions in 17 patients with aortic insufficiency between the proposed method, magnetic resonance imaging (MRI), and a manual echocardiographic reference. The algorithm was efficient with a mean computation time of 2.0 s. The mean absolute distances between the proposed and manual segmentations were 3.6 ± 0.7 mm. Good agreements of end diastolic volume, end systolic volume and ejection fraction with respect to MRI (-26 ± 24 mL, -16 ± 26 mL and 0 ± 10%, respectively) and a manual echocardiographic reference (7 ± 30 mL, 13 ± 17 mL and -5 ± 7%, respectively) were observed. PMID:26168434

  8. Estimation of foot pressure from human footprint depths using 3D scanner

    NASA Astrophysics Data System (ADS)

    Wibowo, Dwi Basuki; Haryadi, Gunawan Dwi; Priambodo, Agus

    2016-03-01

    The analysis of normal and pathological variation in human foot morphology is central to several biomedical disciplines, including orthopedics, orthotic design, sports sciences, and physical anthropology, and it is also important for efficient footwear design. A classic and frequently used approach to studying foot morphology is analysis of the footprint shape and footprint depth. Footprints are relatively easy to produce and to measure, and they can be preserved naturally in different soils. In this study, we correlate footprint depths with the corresponding foot pressures of an individual using a 3D scanner. Several approaches are used for modeling and estimating footprint depths and foot pressures. The deepest footprint point is calculated as the z-max coordinate minus the z-min coordinate, and the average foot pressure is calculated as the ground reaction force (GRF) divided by the foot contact area, which is taken to correspond to the average footprint depth. Footprint depth was evaluated by importing the 3D scanner file (dxf) into AutoCAD; the z-coordinates were then sorted from the highest to the lowest value using Microsoft Excel to render footprint depth in different colors. This research is only a qualitative study because it does not use a foot pressure device as a comparator; the resulting maximum pressures are 3.02 N/cm2 on the calcaneus, 3.66 N/cm2 on the lateral arch, and 3.68 N/cm2 on the metatarsal and hallux.
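
    The arithmetic described above is simple enough to spell out; the body mass, contact area and depth values below are illustrative assumptions chosen only to land in the same range as the reported pressures.

```python
# Mean pressure = ground reaction force / contact area, with regional pressures
# scaled in proportion to the local footprint depth. All numbers are assumptions.
body_mass_kg, g = 70.0, 9.81
contact_area_cm2 = 200.0

grf_n = body_mass_kg * g
mean_pressure = grf_n / contact_area_cm2                  # N/cm2
print("mean pressure: %.2f N/cm2" % mean_pressure)

depths_mm = {"calcaneus": 6.1, "lateral arch": 7.4, "metatarsal and hallux": 7.5}
mean_depth = sum(depths_mm.values()) / len(depths_mm)
for region, depth in depths_mm.items():
    print("%s: %.2f N/cm2" % (region, mean_pressure * depth / mean_depth))
```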

  9. 3D pre- versus post-season comparisons of surface and relative pose of the corpus callosum in contact sport athletes

    NASA Astrophysics Data System (ADS)

    Lao, Yi; Gajawelli, Niharika; Haas, Lauren; Wilkins, Bryce; Hwang, Darryl; Tsao, Sinchai; Wang, Yalin; Law, Meng; Leporé, Natasha

    2014-03-01

    Mild traumatic brain injury (MTBI) or concussive injury affects 1.7 million Americans annually, of which 300,000 cases are due to recreational activities and contact sports, such as football, rugby, and boxing [1]. Finding the neuroanatomical correlates of brain TBI non-invasively and precisely is crucial for diagnosis and prognosis. Several studies have shown the influence of traumatic brain injury (TBI) on the integrity of brain WM [2-4]. The vast majority of these works focus on athletes with diagnosed concussions. However, in contact sports, athletes are subjected to repeated hits to the head throughout the season, and we hypothesize that these have an influence on white matter integrity. In particular, the corpus callosum (CC), as a small structure connecting the brain hemispheres, may be particularly affected by torques generated by collisions, even in the absence of full-blown concussions. Here, we use combined surface-based morphometry and relative pose analyses, applied to the point distribution model (PDM) of the CC, to investigate TBI-related brain structural changes between 9 pre-season and 9 post-season contact sport athlete MRIs. All the data are fed into surface-based morphometry analysis and relative pose analysis. The former looks at surface area and thickness changes between the two groups, while the latter consists of detecting the relative translation, rotation and scale between them.

  10. Joint azimuth and elevation localization estimates in 3D synthetic aperture radar scenarios

    NASA Astrophysics Data System (ADS)

    Pepin, Matthew

    2015-05-01

    The location of point scatterers in Synthetic Aperture Radar (SAR) data is exploited in several modern analyses, including persistent scatterer tracking, terrain deformation, and object identification. The changes in scatterers over time (pulse-to-pulse, including vibration and movement, or pass-to-pass, including direct follow-on, time of day, and season) can be used to draw more information about the data collection. Multiple-pass and multiple-antenna SAR scenarios have extended these analyses to localization in three dimensions. Either multiple passes at different elevation angles may be flown, or an antenna array with an elevation baseline performs a single pass. Parametric spectral estimation in each dimension allows sub-pixel localization of point scatterers, in some cases additionally exploiting the multiple samples in each cross dimension. The accuracy of parametric estimation is increased when several azimuth passes or elevations (snapshots) are summed to mitigate measurement noise. Inherent range curvature across the aperture, however, limits the accuracy in the range dimension to that attained from a single pulse. Unlike the stationary case, where radar returns may be averaged, the movement necessary to create the synthetic aperture is only approximately (to pixel-level accuracy) removed to form SAR images. In parametric estimation, increased accuracy is attained when two dimensions are used to jointly estimate locations. This paper involves jointly estimating azimuth and elevation to attain increased-accuracy 3D location estimates. In this way the full 2D array of azimuth and elevation samples is used to obtain the maximum possible accuracy. In addition, with independent per-dimension estimation the collection geometry requires choosing whether azimuth or elevation attains the highest accuracy, while joint estimation increases accuracy in both dimensions. When maximum parametric estimation accuracy in azimuth is selected the standard interferometric SAR scenario results. When

  11. Estimation of single cell volume from 3D confocal images using automatic data processing

    NASA Astrophysics Data System (ADS)

    Chorvatova, A.; Cagalinec, M.; Mateasik, A.; Chorvat, D., Jr.

    2012-06-01

    Cardiac cells are highly structured with a non-uniform morphology. Although precise estimation of their volume is essential for correct evaluation of hypertrophic changes of the heart, simple and unified techniques that allow determination of single-cardiomyocyte volume with sufficient precision are still limited. Here, we describe a novel approach to assess cell volume from confocal microscopy 3D images of living cardiac myocytes. We propose a fast procedure based on segmentation using active deformable contours. This technique is independent of laser gain and/or pinhole settings, and it is also applicable to images of cells stained with low-fluorescence markers. The presented approach is a promising new tool for investigating changes in cell volume during normal as well as pathological growth, as we demonstrate in the case of cell enlargement during hypertension in rats.

  12. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as by on-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or a camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted calibration (on arbitrary scenes). In this paper, we propose an algorithm that relies on detection and matching of keypoints between the left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between the left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
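
    The final evaluation step, estimating a vertical offset and scale between matched keypoints and reporting the residual vertical disparity, can be sketched with a plain least-squares fit; the keypoint matches below are simulated, and the detection/matching and roll/pitch estimation of the actual algorithm are not reproduced.

```python
# Fit y_right = scale * y_left + shift to matched keypoints and report residual
# vertical disparity. Matches are simulated with an assumed miscalibration.
import numpy as np

rng = np.random.default_rng(4)
y_left = rng.uniform(0, 720, 200)
true_scale, true_shift = 1.01, 4.0
y_right = true_scale * y_left + true_shift + rng.normal(0, 0.3, 200)

A = np.column_stack([y_left, np.ones_like(y_left)])
(scale, shift), *_ = np.linalg.lstsq(A, y_right, rcond=None)
residual = y_right - (scale * y_left + shift)
print("estimated scale %.4f, vertical shift %.2f px" % (scale, shift))
print("remaining vertical disparity (RMS): %.2f px" % np.sqrt((residual ** 2).mean()))
```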

  13. 3D viscosity maps for Greenland and effect on GRACE mass balance estimates

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Xu, Zheng

    2016-04-01

    The GRACE satellite mission measures mass loss of the Greenland ice sheet. To correct for glacial isostatic adjustment (GIA), numerical models are used. Although generally found to be a small signal, the full range of possible GIA models has not been explored yet. In particular, low viscosities due to a wet mantle and high temperatures due to the nearby Iceland hotspot could have a significant effect on GIA gravity rates. The goal of this study is to present a range of possible viscosity maps and to investigate their effect on GRACE mass balance estimates. Viscosity is derived using flow laws for olivine. Mantle temperature is computed from global seismology models, based on temperature derivatives for different mantle compositions. An indication of grain sizes is obtained from xenolith findings at a few locations. We also investigate the weakening effect of the presence of melt. To calculate gravity rates, we use a finite-element GIA model with the 3D viscosity maps and the ICE-5G loading history. GRACE mass balances for mascons in Greenland are derived with a least-squares inversion, using separate constraints for the inland and coastal areas of Greenland. Biases in the least-squares inversion are corrected using scale factors estimated from a simulation based on a surface mass balance model (Xu et al., submitted to The Cryosphere). Model results show enhanced gravity rates in the west and south of Greenland with 3D viscosity maps, compared to GIA models with 1D viscosity. The effect on regional mass balance is up to 5 Gt/year. Regionally low viscosity can make present-day gravity rates sensitive to ice thickness changes over the last decades. Therefore, an improved ice loading history for these time scales is needed.

  14. A hierarchical Bayesian approach for earthquake location and data uncertainty estimation in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Arroucau, Pierre; Custódio, Susana

    2015-04-01

    Solving inverse problems requires an estimate of data uncertainties. This usually takes the form of a data covariance matrix, which determines the shape of the model posterior distribution. Those uncertainties are yet not always known precisely and it is common practice to simply set them to a fixed, reasonable value. In the case of earthquake location, the hypocentral parameters (longitude, latitude, depth and origin time) are typically inverted for using seismic phase arrival times. But quantitative data variance estimates are rarely provided. Instead, arrival time catalogs usually associate phase picks with a quality factor, which is subsequently interpreted more or less arbitrarily in terms of data uncertainty in the location procedure. Here, we present a hierarchical Bayesian algorithm for earthquake location in 3D heterogeneous media, in which not only the earthquake hypocentral parameters, but also the P- and S-wave arrival time uncertainties, are inverted for, hence allowing more realistic posterior model covariance estimates. Forward modeling is achieved by means of the Fast Marching Method (FMM), an eikonal solver which has the ability to take interfaces into account, so direct, reflected and refracted phases can be used in the inversion. We illustrate the ability of our algorithm to retrieve earthquake hypocentral parameters as well as data uncertainties through synthetic examples and using a subset of arrival time catalogs for mainland Portugal and its Atlantic margin.

  15. A Hierarchical Bayesian Approach for Earthquake Location and Data Uncertainty Estimation in 3D Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Arroucau, P.; Custodio, S.

    2014-12-01

    Solving inverse problems requires an estimate of data uncertainties. This usually takes the form of a data covariance matrix, which determines the shape of the model posterior distribution. Those uncertainties are yet not always known precisely and it is common practice to simply set them to a fixed, reasonable value. In the case of earthquake location, the hypocentral parameters (longitude, latitude, depth and origin time) are typically inverted for using seismic phase arrival times. But quantitative data variance estimates are rarely provided. Instead, arrival time catalogs usually associate phase picks with a quality factor, which is subsequently interpreted more or less arbitrarily in terms of data uncertainty in the location procedure. Here, we present a hierarchical Bayesian algorithm for earthquake location in 3D heterogeneous media, in which not only the earthquake hypocentral parameters, but also the P- and S-wave arrival time uncertainties, are inverted for, hence allowing more realistic posterior model covariance estimates. Forward modeling is achieved by means of the Fast Marching Method (FMM), an eikonal solver which has the ability to take interfaces into account, so direct, reflected and refracted phases can be used in the inversion. We illustrate the ability of our algorithm to retrieve earthquake hypocentral parameters as well as data uncertainties through synthetic examples and using a subset of arrival time catalogs for mainland Portugal and its Atlantic margin.
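
    The hierarchical aspect, inverting for the data noise level alongside the source parameters, can be illustrated with a toy Metropolis sampler in a homogeneous medium; the constant P-wave velocity, the flat priors and the 2D (epicentre-only) geometry are simplifying assumptions, and the paper's 3D fast-marching forward model is not reproduced.

```python
# Toy hierarchical Bayesian location: sample epicentre (x, y), origin time t0 and
# noise sigma from P arrival times with a Metropolis random walk.
import numpy as np

rng = np.random.default_rng(5)
stations = rng.uniform(-50, 50, size=(8, 2))                # station coordinates [km]
v_p = 6.0                                                   # assumed P velocity [km/s]
true_src, true_t0, true_sigma = np.array([12.0, -8.0]), 3.0, 0.15

def forward(src, t0):
    return t0 + np.linalg.norm(stations - src, axis=1) / v_p

obs = forward(true_src, true_t0) + rng.normal(0, true_sigma, len(stations))

def log_post(src, t0, sigma):                               # flat priors, Gaussian noise
    if sigma <= 0:
        return -np.inf
    r = obs - forward(src, t0)
    return -len(obs) * np.log(sigma) - 0.5 * np.sum(r ** 2) / sigma ** 2

state = (np.zeros(2), 0.0, 1.0)
lp = log_post(*state)
samples = []
for _ in range(20000):
    prop = (state[0] + rng.normal(0, 0.5, 2),
            state[1] + rng.normal(0, 0.1),
            state[2] + rng.normal(0, 0.03))
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:                # Metropolis accept/reject
        state, lp = prop, lp_prop
    samples.append((state[0][0], state[0][1], state[1], state[2]))

x, y, t0, sig = np.mean(samples[5000:], axis=0)             # discard burn-in
print(f"posterior mean: x={x:.1f} km, y={y:.1f} km, t0={t0:.2f} s, sigma={sig:.2f} s")
```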

  16. A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators

    PubMed Central

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2015-01-01

    In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas the lowest correlation was found for one axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented. PMID:26703603

  17. Correlation techniques as applied to pose estimation in space station docking

    NASA Astrophysics Data System (ADS)

    Rollins, John M.; Juday, Richard D.; Monroe, Stanley E., Jr.

    2002-08-01

    The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not necessarily provide for direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed on each connecting module at carefully surveyed positions. The appearance of a subset of spots must form a constellation of specific relative positions in the incoming image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. The precision of centroid estimation is required to be as fine as 1/20th pixel, in some cases. This paper presents an approach to spot centroid estimation using cross correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for shadow and lighting irregularity compensation are discussed.
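
    As an illustration of the centroid-by-correlation idea, the following sketch (Python, with a synthetic Gaussian spot and template rather than the actual ISS target imagery) cross-correlates a simulated frame with a synthetic spot model of precise centration and refines the correlation peak to sub-pixel precision with a parabolic fit; it is not the flight implementation.

```python
# Sketch of sub-pixel spot centroid estimation by cross-correlation with a
# synthetic spot model (Gaussian blob here; the real targets are circular dots).
# The sub-pixel refinement uses a parabolic fit around the correlation peak.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_spot(shape, cx, cy, sigma=2.0):
    y, x = np.indices(shape)
    return np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))

# Simulated camera frame with one spot at a known sub-pixel position plus noise.
true_cx, true_cy = 40.3, 27.8
image = gaussian_spot((64, 96), true_cx, true_cy) \
        + np.random.default_rng(1).normal(0, 0.02, (64, 96))

# Synthetic spot model of precise centration (centred template, odd size).
tpl = gaussian_spot((21, 21), 10.0, 10.0)
tpl -= tpl.mean()

# Cross-correlation implemented as convolution with the flipped template.
corr = fftconvolve(image - image.mean(), tpl[::-1, ::-1], mode="same")
py, px = np.unravel_index(np.argmax(corr), corr.shape)

def parabolic_offset(c_minus, c_0, c_plus):
    # Vertex of the parabola through three neighbouring correlation samples.
    denom = c_minus - 2 * c_0 + c_plus
    return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

dx = parabolic_offset(corr[py, px - 1], corr[py, px], corr[py, px + 1])
dy = parabolic_offset(corr[py - 1, px], corr[py, px], corr[py + 1, px])
print(f"estimated centroid: ({px + dx:.2f}, {py + dy:.2f})  true: ({true_cx}, {true_cy})")
```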

  18. A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators.

    PubMed

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2015-01-01

    In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas the lowest correlation was found for one axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented. PMID:26703603

  19. Correlation Techniques as Applied to Pose Estimation in Space Station Docking

    NASA Technical Reports Server (NTRS)

    Rollins, J. Michael; Juday, Richard D.; Monroe, Stanley E., Jr.

    2002-01-01

    The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not provide for direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed on each connecting module at carefully surveyed positions. The appearance of a subset of spots essentially must form a constellation of specific relative positions in the incoming digital image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. The precision of centroid estimation is required to be as fine as 1/20th pixel, in some cases. This paper presents an approach to spot centroid estimation using cross correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for shadow, obscuration and lighting irregularity compensation are discussed.

  20. Comparison of different methods for gender estimation from face image of various poses

    NASA Astrophysics Data System (ADS)

    Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko

    2003-04-01

    Recently, gender estimation from face images has been studied for frontal facial images. However, it is difficult to obtain such facial images consistently in application systems for security, surveillance and marketing research. In order to build such systems, a method is required to estimate gender from images of various facial poses. In this paper, three different classifiers are compared for appearance-based gender estimation, all using four directional features (FDF). The classifiers are linear discriminant analysis (LDA), Support Vector Machines (SVMs) and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, with the viewing direction varied by +/-45 degrees horizontally and +/-30 degrees vertically at 15-degree intervals. Although LDA showed the best performance for frontal facial images, the SVM with a Gaussian kernel gave the best performance (86.0%) over the facial images from all 35 viewpoints. These results suggest that the SVM with a Gaussian kernel is robust to changes in viewpoint when estimating gender. Furthermore, the estimation rate at each of the 35 viewpoints was quite close to the average estimation rate, which suggests that the methods can reasonably estimate gender within the range of tested viewpoints by learning face images from multiple directions as a single class.
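
    A minimal comparison in the spirit of the experiment can be set up with scikit-learn, as sketched below. Synthetic feature vectors stand in for the four directional features and the 35-viewpoint face data, and SNoW is omitted since no standard implementation is assumed; only LDA and the Gaussian-kernel SVM are compared.

```python
# Sketch comparing LDA and a Gaussian-kernel SVM for a binary (gender-like)
# classification task; synthetic features stand in for the four directional
# features (FDF) extracted from multi-view face images.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for FDF vectors from 35 viewpoints (not the original data).
X, y = make_classification(n_samples=1400, n_features=200, n_informative=40,
                           n_classes=2, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM (Gaussian kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale")),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```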

  1. UAV based 3D digital surface model to estimate paleolandscape in high mountainous environment

    NASA Astrophysics Data System (ADS)

    Mészáros, János; Árvai, Mátyás; Kohán, Balázs; Deák, Márton; Nagy, Balázs

    2016-04-01

    reliable results and resolution. Based on the sediment layers of the peat bog together with the generated 3D surface model the paleoenvironment, the largest paleowater level can be reconstructed and we can estimate the dimension of the landslide which created the basin of the peat bog.

  2. Population Estimation Using a 3D City Model: A Multi-Scale Country-Wide Study in the Netherlands

    PubMed Central

    Arroyo Ohori, Ken; Ledoux, Hugo; Peters, Ravi; Stoter, Jantien

    2016-01-01

    The remote estimation of a region’s population has for decades been a key application of geographic information science in demography. Most studies have used 2D data (maps, satellite imagery) to estimate population avoiding field surveys and questionnaires. As the availability of semantic 3D city models is constantly increasing, we investigate to what extent they can be used for the same purpose. Based on the assumption that housing space is a proxy for the number of its residents, we use two methods to estimate the population with 3D city models in two directions: (1) disaggregation (areal interpolation) to estimate the population of small administrative entities (e.g. neighbourhoods) from that of larger ones (e.g. municipalities); and (2) a statistical modelling approach to estimate the population of large entities from a sample composed of their smaller ones (e.g. one acquired by a government register). Starting from a complete Dutch census dataset at the neighbourhood level and a 3D model of all 9.9 million buildings in the Netherlands, we compare the population estimates obtained by both methods with the actual population as reported in the census, and use it to evaluate the quality that can be achieved by estimations at different administrative levels. We also analyse how the volume-based estimation enabled by 3D city models fares in comparison to 2D methods using building footprints and floor areas, as well as how it is affected by different levels of semantic detail in a 3D city model. We conclude that 3D city models are useful for estimations of large areas (e.g. for a country), and that the 3D approach has clear advantages over the 2D approach. PMID:27254151
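
    The disaggregation direction can be illustrated with a toy sketch: a known municipal population is spread over neighbourhoods in proportion to the building volume each contains. All names and volumes below are invented, and the real workflow additionally filters buildings by use and exploits the semantic detail of the 3D city model.

```python
# Sketch of volume-weighted disaggregation (areal interpolation): a known
# municipal population is distributed over neighbourhoods in proportion to the
# total building volume each neighbourhood contains. All figures are made up.
municipal_population = 120_000

# Hypothetical building volumes (m^3) aggregated per neighbourhood from a 3D city model.
neighbourhood_volume = {
    "centre": 2_400_000.0,
    "harbour": 900_000.0,
    "garden_city": 1_600_000.0,
    "industrial_edge": 300_000.0,  # mostly non-residential; ideally filtered out by building use
}

total_volume = sum(neighbourhood_volume.values())
estimated_population = {
    name: municipal_population * volume / total_volume
    for name, volume in neighbourhood_volume.items()
}

for name, pop in estimated_population.items():
    print(f"{name:>16}: {pop:8.0f} residents (estimated)")
```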

  3. Population Estimation Using a 3D City Model: A Multi-Scale Country-Wide Study in the Netherlands.

    PubMed

    Biljecki, Filip; Arroyo Ohori, Ken; Ledoux, Hugo; Peters, Ravi; Stoter, Jantien

    2016-01-01

    The remote estimation of a region's population has for decades been a key application of geographic information science in demography. Most studies have used 2D data (maps, satellite imagery) to estimate population avoiding field surveys and questionnaires. As the availability of semantic 3D city models is constantly increasing, we investigate to what extent they can be used for the same purpose. Based on the assumption that housing space is a proxy for the number of its residents, we use two methods to estimate the population with 3D city models in two directions: (1) disaggregation (areal interpolation) to estimate the population of small administrative entities (e.g. neighbourhoods) from that of larger ones (e.g. municipalities); and (2) a statistical modelling approach to estimate the population of large entities from a sample composed of their smaller ones (e.g. one acquired by a government register). Starting from a complete Dutch census dataset at the neighbourhood level and a 3D model of all 9.9 million buildings in the Netherlands, we compare the population estimates obtained by both methods with the actual population as reported in the census, and use it to evaluate the quality that can be achieved by estimations at different administrative levels. We also analyse how the volume-based estimation enabled by 3D city models fares in comparison to 2D methods using building footprints and floor areas, as well as how it is affected by different levels of semantic detail in a 3D city model. We conclude that 3D city models are useful for estimations of large areas (e.g. for a country), and that the 3D approach has clear advantages over the 2D approach. PMID:27254151

  4. 3D pore-network analysis and permeability estimation of deformation bands hosted in carbonate grainstones.

    NASA Astrophysics Data System (ADS)

    Zambrano, Miller; Tondi, Emanuele; Mancini, Lucia; Trias, F. Xavier; Arzilli, Fabio; Lanzafame, Gabriele; Aibibula, Nijiati

    2016-04-01

    In porous rocks, strain is commonly localized in narrow deformation bands (DBs), where the petrophysical properties are significantly modified with respect to the pristine rock. As a consequence, DBs could have an important effect on the production and development of porous reservoirs, acting as baffle zones or, in some cases, contributing to reservoir compartmentalization. Considering that the decrease of permeability within DBs is related to changes in the pore network properties (porosity, connectivity) and pore morphology (size distribution, specific surface area), an accurate pore network characterization is useful for understanding both the effect of deformation banding on the pore network and its influence upon fluid flow through the deformed rocks. In this work, a 3D characterization of the microstructure and texture of DBs hosted in porous carbonate grainstones was obtained at the Elettra laboratory (Trieste, Italy) by using two different techniques: phase-contrast synchrotron radiation computed microtomography (micro-CT) and microfocus X-ray micro-CT. These techniques are suitable for quantitative analysis of the pore network and for implementing computational fluid dynamics (CFD) experiments in porous rocks. The evaluated samples correspond to grainstones highly affected by DBs, exposed in the San Vito Lo Capo peninsula (Sicily, Italy), on Favignana Island (Sicily, Italy) and on the Majella Mountain (Abruzzo, Italy). For the analysis, the data were segmented into two main components, the porous and solid phases. The properties of interest are porosity, connectivity, and grain and/or pore textural properties, in order to differentiate the host rock and DBs in different zones. The permeability of the DBs and of the surrounding host rock was estimated by CFD experiments, and the permeability results are validated by comparison with in situ measurements. In agreement with previous studies, the 3D image analysis and flow simulation indicate that DBs could constitute
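
    Once a micro-CT volume has been segmented into pore and solid phases, basic pore-network statistics such as porosity and connectivity can be computed directly from the binary volume. The sketch below (Python with scipy, a smoothed random field standing in for real tomographic data) illustrates this step; the CFD permeability estimation itself is not reproduced.

```python
# Sketch of basic pore-network statistics from a segmented (binary) 3D micro-CT
# volume: porosity and connectivity of the pore phase. A random synthetic volume
# stands in for the real data; the CFD permeability step is not reproduced here.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
# Smooth a random field and threshold it to obtain a blobby binary pore phase.
field = ndimage.gaussian_filter(rng.random((100, 100, 100)), sigma=2)
pores = field > np.quantile(field, 0.75)          # True = pore voxel

porosity = pores.mean()

# Label connected pore clusters using 6-connectivity in 3D.
labels, n_clusters = ndimage.label(pores)
sizes = np.bincount(labels.ravel())[1:]           # voxel count per cluster
connected_fraction = sizes.max() / sizes.sum()    # share of pore space in the largest cluster

print(f"porosity: {porosity:.3f}")
print(f"pore clusters: {n_clusters}, largest cluster holds {connected_fraction:.1%} of pore space")
```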

  5. Scoliosis corrective force estimation from the implanted rod deformation using 3D-FEM analysis

    PubMed Central

    2015-01-01

    Background Improvement of material properties in spinal instrumentation has brought better deformity correction in scoliosis surgery in recent years. The increase in mechanical strength of the instruments directly implies an increase in the force acting on the bone-implant interface during scoliosis surgery. However, the actual corrective force during the correction maneuver and the safety margin of the pull-out force on each screw were not well known. In the present study, estimated corrective forces and pull-out forces were analyzed using a novel method based on finite element analysis (FEA). Methods Twenty adolescent idiopathic scoliosis patients (1 boy and 19 girls) who underwent reconstructive scoliosis surgery between June 2009 and June 2011 were included in this study. Scoliosis correction was performed with a 6 mm diameter titanium rod (Ti6Al7Nb) using the simultaneous double rod rotation technique (SDRRT) in all cases. The pre-maneuver and post-maneuver rod geometry was collected from intraoperative tracing and postoperative 3D-CT images, and 3D-FEA was performed with ANSYS. The Cobb angle of the major curve, the correction rate and thoracic kyphosis were measured on X-ray images. Results The average age at surgery was 14.8 years, and the average fusion length was 8.9 segments. The major curve was corrected from 63.1 to 18.1 degrees on average, and the correction rate was 71.4%. The rod geometry showed significant change on the concave side. The curvature of the rod on the concave and convex sides decreased from 33.6 to 17.8 degrees and from 25.9 to 23.8 degrees, respectively. Estimated pull-out forces at the apical vertebrae were 160.0 N for the concave-side screw and 35.6 N for the convex-side screw. Estimated push-in forces at the LIV and UIV were 305.1 N for the concave-side screw and 86.4 N for the convex-side screw. Conclusions The corrective force during scoliosis surgery was demonstrated to be about four times greater on the concave side than on the convex side. Averaged pull-out and push-in forces fell below previously reported safety

  6. An Inertial and Optical Sensor Fusion Approach for Six Degree-of-Freedom Pose Estimation

    PubMed Central

    He, Changyu; Kazanzides, Peter; Sen, Hasan Tutkun; Kim, Sungmin; Liu, Yue

    2015-01-01

    Optical tracking provides relatively high accuracy over a large workspace but requires line-of-sight between the camera and the markers, which may be difficult to maintain in actual applications. In contrast, inertial sensing does not require line-of-sight but is subject to drift, which may cause large cumulative errors, especially during the measurement of position. To handle cases where some or all of the markers are occluded, this paper proposes an inertial and optical sensor fusion approach in which the bias of the inertial sensors is estimated when the optical tracker provides full six degree-of-freedom (6-DOF) pose information. As long as the position of at least one marker can be tracked by the optical system, the 3-DOF position can be combined with the orientation estimated from the inertial measurements to recover the full 6-DOF pose information. When all the markers are occluded, the position tracking relies on the inertial sensors that are bias-corrected by the optical tracking system. Experiments are performed with an augmented reality head-mounted display (ARHMD) that integrates an optical tracking system (OTS) and inertial measurement unit (IMU). Experimental results show that under partial occlusion conditions, the root mean square errors (RMSE) of orientation and position are 0.04° and 0.134 mm, and under total occlusion conditions for 1 s, the orientation and position RMSE are 0.022° and 0.22 mm, respectively. Thus, the proposed sensor fusion approach can provide reliable 6-DOF pose under long-term partial occlusion and short-term total occlusion conditions. PMID:26184191
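
    The bias-correction idea can be reduced to a one-axis toy example: while the optical tracker is available, the gyroscope bias is estimated from the rate difference between the two sensors, and during an occlusion the orientation is propagated with the bias-corrected gyroscope alone. The sketch below uses invented noise figures and simple first-order updates rather than the authors' filter.

```python
# One-axis sketch of the bias-correction idea: while the optical tracker is
# available, the gyroscope bias is learned from the rate difference; during an
# occlusion, orientation is propagated with the bias-corrected gyroscope only.
# (The published system uses full 6-DOF fusion; this is a toy illustration.)
import numpy as np

dt, n = 0.01, 2000
t = np.arange(n) * dt
true_rate = 0.5 * np.sin(0.8 * t)                         # rad/s
true_angle = np.cumsum(true_rate) * dt

rng = np.random.default_rng(2)
gyro = true_rate + 0.05 + rng.normal(0, 0.01, n)          # constant 0.05 rad/s bias + noise
optical_angle = true_angle + rng.normal(0, 0.002, n)      # optical orientation, when visible
visible = t < 15.0                                        # markers occluded for the last 5 s

bias, angle, est = 0.0, 0.0, np.zeros(n)
for k in range(1, n):
    angle += (gyro[k] - bias) * dt
    if visible[k]:
        optical_rate = (optical_angle[k] - optical_angle[k - 1]) / dt
        bias += 0.01 * (gyro[k] - bias - optical_rate)    # slow bias estimate update
        angle += 0.05 * (optical_angle[k] - angle)        # pull towards optical orientation
    est[k] = angle

print(f"final drift with bias correction: {abs(est[-1] - true_angle[-1]):.4f} rad")
```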

  7. An Inertial and Optical Sensor Fusion Approach for Six Degree-of-Freedom Pose Estimation.

    PubMed

    He, Changyu; Kazanzides, Peter; Sen, Hasan Tutkun; Kim, Sungmin; Liu, Yue

    2015-01-01

    Optical tracking provides relatively high accuracy over a large workspace but requires line-of-sight between the camera and the markers, which may be difficult to maintain in actual applications. In contrast, inertial sensing does not require line-of-sight but is subject to drift, which may cause large cumulative errors, especially during the measurement of position. To handle cases where some or all of the markers are occluded, this paper proposes an inertial and optical sensor fusion approach in which the bias of the inertial sensors is estimated when the optical tracker provides full six degree-of-freedom (6-DOF) pose information. As long as the position of at least one marker can be tracked by the optical system, the 3-DOF position can be combined with the orientation estimated from the inertial measurements to recover the full 6-DOF pose information. When all the markers are occluded, the position tracking relies on the inertial sensors that are bias-corrected by the optical tracking system. Experiments are performed with an augmented reality head-mounted display (ARHMD) that integrates an optical tracking system (OTS) and inertial measurement unit (IMU). Experimental results show that under partial occlusion conditions, the root mean square errors (RMSE) of orientation and position are 0.04° and 0.134 mm, and under total occlusion conditions for 1 s, the orientation and position RMSE are 0.022° and 0.22 mm, respectively. Thus, the proposed sensor fusion approach can provide reliable 6-DOF pose under long-term partial occlusion and short-term total occlusion conditions. PMID:26184191

  8. 3D-MAD: A Full Reference Stereoscopic Image Quality Estimator Based on Binocular Lightness and Contrast Perception.

    PubMed

    Zhang, Yi; Chandler, Damon M

    2015-11-01

    Algorithms for a stereoscopic image quality assessment (IQA) aim to estimate the qualities of 3D images in a manner that agrees with human judgments. The modern stereoscopic IQA algorithms often apply 2D IQA algorithms on stereoscopic views, disparity maps, and/or cyclopean images, to yield an overall quality estimate based on the properties of the human visual system. This paper presents an extension of our previous 2D most apparent distortion (MAD) algorithm to a 3D version (3D-MAD) to evaluate 3D image quality. The 3D-MAD operates via two main stages, which estimate perceived quality degradation due to 1) distortion of the monocular views and 2) distortion of the cyclopean view. In the first stage, the conventional MAD algorithm is applied on the two monocular views, and then the combined binocular quality is estimated via a weighted sum of the two estimates, where the weights are determined based on a block-based contrast measure. In the second stage, intermediate maps corresponding to the lightness distance and the pixel-based contrast are generated based on a multipathway contrast gain-control model. Then, the cyclopean view quality is estimated by measuring the statistical-difference-based features obtained from the reference stereopair and the distorted stereopair, respectively. Finally, the estimates obtained from the two stages are combined to yield an overall quality score of the stereoscopic image. Tests on various 3D image quality databases demonstrate that our algorithm significantly improves upon many other state-of-the-art 2D/3D IQA algorithms. PMID:26186775
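
    The first stage, combining the two monocular quality estimates with weights derived from a block-based contrast measure, can be sketched as follows. The 2D MAD metric itself is replaced by a simple placeholder score, and the block size and weighting rule are assumptions rather than the published parameters.

```python
# Sketch of the first stage of 3D-MAD: per-view quality scores for the left and
# right images are combined by weights derived from a block-based contrast
# measure, so the higher-contrast view influences the result more.
# The underlying 2D MAD metric is replaced here by a placeholder score.
import numpy as np

def block_contrast(img, block=16):
    h, w = (s - s % block for s in img.shape)
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.std(axis=(1, 3)).mean()            # mean per-block RMS contrast

def placeholder_quality(reference, distorted):
    return float(np.mean((reference - distorted) ** 2))   # stand-in for 2D MAD

rng = np.random.default_rng(9)
ref_L, ref_R = rng.random((128, 128)), rng.random((128, 128))
dst_L = ref_L + rng.normal(0, 0.05, ref_L.shape)
dst_R = ref_R + rng.normal(0, 0.10, ref_R.shape)

q_L, q_R = placeholder_quality(ref_L, dst_L), placeholder_quality(ref_R, dst_R)
c_L, c_R = block_contrast(dst_L), block_contrast(dst_R)
binocular_q = (c_L * q_L + c_R * q_R) / (c_L + c_R)   # contrast-weighted combination
print(f"left: {q_L:.4f}, right: {q_R:.4f}, combined: {binocular_q:.4f}")
```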

  9. Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction

    NASA Astrophysics Data System (ADS)

    Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.

    2016-04-01

    Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line holography), is reasonably immune to variations in the experimental environment making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques, based on numerical light back-propagation are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get round this issue, the hologram reconstruction as a parametric inverse problem has been shown to accurately estimate 3D positions and the size of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it led to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σx × σy × σz = 0.15 × 0.15 × 1 pixels.

  10. Estimation of Hydraulic Fracturing in the Earth Fill Dam by 3-D Analysis

    NASA Astrophysics Data System (ADS)

    Nishimura, Shin-Ichi

    It is necessary to calculate strength and strain to estimate hydraulic fracturing in an earth fill dam, and the FEM is effective for this purpose. 2-D analysis can produce good results to some extent if an embankment is linear and the plane strain condition can be applied to the cross section. However, there may be some conditions that cannot be expressed in the 2-D plane, because the actual embankments of agricultural reservoirs are formed by straight and curved lines. Moreover, strain in the direction of the dam axis may not be calculated precisely, because 2-D analysis of the cross section cannot take the shape of the vertical section into consideration. Therefore, we performed a 3-D built-up analysis of an agricultural reservoir that had actually leaked, to examine the hazard of hydraulic fracturing arising from the shape of the embankment and from rapid impoundment of water. The analysis indicated that hydraulic fracturing can develop when water pressure acts on vertical cracks caused by tensile strain in the valley and refractive section of the foundation.

  11. Angle Estimation of Simultaneous Orthogonal Rotations from 3D Gyroscope Measurements

    PubMed Central

    Stančin, Sara; Tomažič, Sašo

    2011-01-01

    A 3D gyroscope provides measurements of angular velocities around its three intrinsic orthogonal axes, enabling angular orientation estimation. Because the measured angular velocities represent simultaneous rotations, it is not appropriate to consider them sequentially. Rotations in general are not commutative, and each possible rotation sequence has a different resulting angular orientation. None of these angular orientations is the correct simultaneous rotation result. However, every angular orientation can be represented by a single rotation. This paper presents an analytic derivation of the axis and angle of the single rotation equivalent to three simultaneous rotations around orthogonal axes when the measured angular velocities or their proportions are approximately constant. Based on the resulting expressions, a vector called the simultaneous orthogonal rotations angle (SORA) is defined, with components equal to the angles of three simultaneous rotations around coordinate system axes. The orientation and magnitude of this vector are equal to the equivalent single rotation axis and angle, respectively. As long as the orientation of the actual rotation axis is constant, given the SORA, the angular orientation of a rigid body can be calculated in a single step, thus making it possible to avoid computing the iterative infinitesimal rotation approximation. The performed test measurements confirm the validity of the SORA concept. SORA is simple and well-suited for use in the real-time calculation of angular orientation based on angular velocity measurements derived using a gyroscope. Moreover, because of its demonstrated simplicity, SORA can also be used in general angular orientation notation. PMID:22164090
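
    For a constant angular velocity, the SORA vector is simply the measured angular velocity multiplied by the integration interval; its direction gives the equivalent rotation axis and its norm the rotation angle. The sketch below checks this against a step-wise (infinitesimal-rotation) integration using Rodrigues' formula; the numbers are arbitrary.

```python
# Sketch of the SORA idea: for an (approximately) constant 3D angular velocity,
# the vector of rotation angles phi = omega * T gives the equivalent single
# rotation directly, with axis phi/|phi| and angle |phi| (Rodrigues' formula).
import numpy as np

def rotation_from_axis_angle(axis, angle):
    x, y, z = axis
    K = np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

omega = np.array([0.3, -0.5, 0.8])     # rad/s, constant angular velocity (gyroscope reading)
T = 2.0                                # integration interval in seconds

# SORA: single-step equivalent rotation.
sora = omega * T
R_sora = rotation_from_axis_angle(sora / np.linalg.norm(sora), np.linalg.norm(sora))

# Reference: many small sequential rotations (infinitesimal-rotation approximation).
R_ref, dt = np.eye(3), 1e-4
for _ in range(int(T / dt)):
    d = omega * dt
    R_ref = rotation_from_axis_angle(d / np.linalg.norm(d), np.linalg.norm(d)) @ R_ref

print("max deviation between SORA and step-wise integration:",
      np.abs(R_sora - R_ref).max())
```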

  12. Edge preserving motion estimation with occlusions correction for assisted 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Pohl, Petr; Sirotenko, Michael; Tolstaya, Ekaterina; Bucha, Victor

    2014-02-01

    In this article we propose high-quality motion estimation based on a variational optical flow formulation with a non-local regularization term. To improve motion in occlusion areas, we introduce occlusion motion inpainting based on 3-frame motion clustering. The variational formulation of optical flow has proved very successful; however, global optimization of the cost function can be time consuming. To achieve acceptable computation times, we adapted an algorithm that optimizes the convex function in a coarse-to-fine pyramid strategy and is suitable for implementation on modern GPU hardware. We also introduce two simplifications of the cost function that significantly decrease computation time with an acceptable decrease in quality. For motion-clustering-based motion inpainting in occlusion areas, we introduce an effective method of occlusion-aware joint 3-frame motion clustering using the RANSAC algorithm. Occlusion areas are inpainted with a motion model taken from the cluster that shows consistency in the opposite direction. We tested our algorithm on the Middlebury optical flow benchmark, where we scored around 20th position while being one of the fastest methods near the top. We also successfully used this algorithm in a semi-automatic 2D to 3D conversion tool for spatio-temporal background inpainting, automatic adaptive key frame detection and key point tracking.

  13. Relative Scale Estimation and 3D Registration of Multi-Modal Geometry Using Growing Least Squares.

    PubMed

    Mellado, Nicolas; Dellepiane, Matteo; Scopigno, Roberto

    2016-09-01

    The advent of low cost scanning devices and the improvement of multi-view stereo techniques have made the acquisition of 3D geometry ubiquitous. Data gathered from different devices, however, result in large variations in detail, scale, and coverage. Registration of such data is essential before visualizing, comparing and archiving them. However, state-of-the-art methods for geometry registration cannot be directly applied due to intrinsic differences between the models, e.g., sampling, scale, noise. In this paper we present a method for the automatic registration of multi-modal geometric data, i.e., acquired by devices with different properties (e.g., resolution, noise, data scaling). The method uses a descriptor based on Growing Least Squares, and is robust to noise, variation in sampling density, details, and enables scale-invariant matching. It allows not only the measurement of the similarity between the geometry surrounding two points, but also the estimation of their relative scale. As it is computed locally, it can be used to analyze large point clouds composed of millions of points. We implemented our approach in two registration procedures (assisted and automatic) and applied them successfully on a number of synthetic and real cases. We show that using our method, multi-modal models can be automatically registered, regardless of their differences in noise, detail, scale, and unknown relative coverage. PMID:26672045

  14. Digital holography as a method for 3D imaging and estimating the biovolume of motile cells.

    PubMed

    Merola, F; Miccio, L; Memmolo, P; Di Caprio, G; Galli, A; Puglisi, R; Balduzzi, D; Coppola, G; Netti, P; Ferraro, P

    2013-12-01

    Sperm morphology is regarded as a significant prognostic factor for fertilization, as abnormal sperm structure is one of the most common factors in male infertility. Furthermore, obtaining accurate morphological information is an important issue with strong implications in zoo-technical industries, for example to perform sorting of species X from species Y. A challenging step forward would be the availability of a fast, high-throughput and label-free system for the measurement of physical parameters and visualization of the 3D shape of such biological specimens. Here we show a quantitative imaging approach to estimate simply and quickly the biovolume of sperm cells, combining the optical tweezers technique with digital holography, in a single and integrated set-up for a biotechnology assay process on the lab-on-a-chip scale. This approach can open the way for fast and high-throughput analysis in label-free microfluidic based "cytofluorimeters" and prognostic examination based on sperm morphology, thus allowing advancements in reproductive science. PMID:24129638

  15. Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling

    PubMed Central

    Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Strutiomimum sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future
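
    The sensitivity-analysis step can be mimicked with a small Monte Carlo sketch: segment volumes are perturbed within an assumed range, and the total mass and centre of mass are recomputed for each draw. The segment list, densities and positions below are invented for illustration and are not the published reconstructions.

```python
# Sketch of a volumetric sensitivity analysis: segment volumes are perturbed
# within +/-20%, and total mass and the whole-body centre of mass (along the
# cranio-caudal axis) are recomputed. Segment values are invented, not the
# published dinosaur reconstructions.
import numpy as np

rng = np.random.default_rng(3)

# name: (volume m^3, density kg/m^3, COM position along body axis in m)
segments = {
    "head":      (0.6,  1050, 5.5),
    "torso":     (4.5,  1000, 1.0),
    "tail":      (1.8,  1050, -3.0),
    "hindlimbs": (1.2,  1050, 0.2),
    "lungs":     (0.9,     0, 1.2),   # zero-density respiratory volume
}

masses, coms = [], []
for _ in range(10_000):
    scale = rng.uniform(0.8, 1.2, size=len(segments))          # +/-20% volume perturbation
    m = np.array([v * d for v, d, _ in segments.values()]) * scale
    x = np.array([x for _, _, x in segments.values()])
    masses.append(m.sum())
    coms.append((m * x).sum() / m.sum())

print(f"mass: {np.mean(masses):.0f} kg (range {np.min(masses):.0f}-{np.max(masses):.0f})")
print(f"centre of mass along body axis: {np.mean(coms):.2f} m (sd {np.std(coms):.2f})")
```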

  16. The spatial accuracy of cellular dose estimates obtained from 3D reconstructed serial tissue autoradiographs.

    PubMed

    Humm, J L; Macklis, R M; Lu, X Q; Yang, Y; Bump, K; Beresford, B; Chin, L M

    1995-01-01

    In order to better predict and understand the effects of radiopharmaceuticals used for therapy, it is necessary to determine more accurately the radiation absorbed dose to cells in tissue. Using thin-section autoradiography, the spatial distribution of sources relative to the cells can be obtained from a single section with micrometre resolution. By collecting and analysing serial sections, the 3D microscopic distribution of radionuclide relative to the cellular histology, and therefore the dose rate distribution, can be established. In this paper, a method of 3D reconstruction of serial sections is proposed, and measurements are reported of (i) the accuracy and reproducibility of quantitative autoradiography and (ii) the spatial precision with which tissue features from one section can be related to adjacent sections. Uncertainties in the activity determination for the specimen result from activity losses during tissue processing (4-11%), and the variation of grain count per unit activity between batches of serial sections (6-25%). Correlation of the section activity to grain count densities showed deviations ranging from 6-34%. The spatial alignment uncertainties were assessed using nylon fibre fiduciary markers incorporated into the tissue block, and compared to those for alignment based on internal tissue landmarks. The standard deviation for the variation in nylon fibre fiduciary alignment was measured to be 41 microns cm-1, compared to 69 microns cm-1 when internal tissue histology landmarks were used. In addition, tissue shrinkage during histological processing of up to 10% was observed. The implications of these measured activity and spatial distribution uncertainties upon the estimate of cellular dose rate distribution depends upon the range of the radiation emissions. For long-range beta particles, uncertainties in both the activity and spatial distribution translate linearly to the uncertainty in dose rate of < 15%. For short-range emitters (< 100

  17. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    PubMed

    Bates, Karl T; Manning, Phillip L; Hodgetts, David; Sellers, William I

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Strutiomimum sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  18. Head Pose Estimation on Top of Haar-Like Face Detection: A Study Using the Kinect Sensor

    PubMed Central

    Saeed, Anwar; Al-Hamadi, Ayoub; Ghoneim, Ahmed

    2015-01-01

    Head pose estimation is a crucial initial task for human face analysis, which is employed in several computer vision systems, such as: facial expression recognition, head gesture recognition, yawn detection, etc. In this work, we propose a frame-based approach to estimate the head pose on top of the Viola and Jones (VJ) Haar-like face detector. Several appearance and depth-based feature types are employed for the pose estimation, where comparisons between them in terms of accuracy and speed are presented. It is clearly shown through this work that using the depth data, we improve the accuracy of the head pose estimation. Additionally, we can spot positive detections, faces in profile views detected by the frontal model, that are wrongly cropped due to background disturbances. We introduce a new depth-based feature descriptor that provides competitive estimation results with a lower computation time. Evaluation on a benchmark Kinect database shows that the histogram of oriented gradients and the developed depth-based features are more distinctive for the head pose estimation, where they compare favorably to the current state-of-the-art approaches. Using a concatenation of the aforementioned feature types, we achieved a head pose estimation with average errors not exceeding 5.1°, 4.6° and 4.2° for pitch, yaw and roll angles, respectively. PMID:26343651
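
    The appearance-based part of such a pipeline, extracting histogram-of-oriented-gradients descriptors from detected face crops and regressing the three head angles, can be sketched with scikit-image and scikit-learn as below. Random images stand in for the Kinect benchmark, so the reported errors are meaningless; the paper's depth-based descriptor is not reproduced.

```python
# Sketch of the appearance-based part of the pipeline: HOG descriptors are
# extracted from (already detected and cropped) face images and fed to a
# regressor predicting pitch, yaw and roll. Random images stand in for the
# Kinect benchmark; the paper's depth-based descriptor is not reproduced.
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 200
faces = rng.random((n, 64, 64))                       # stand-in for cropped grey-level faces
angles = rng.uniform(-45, 45, size=(n, 3))            # pitch, yaw, roll ground truth (degrees)

X = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for img in faces])

X_tr, X_te, y_tr, y_te = train_test_split(X, angles, test_size=0.25, random_state=0)
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
err = np.abs(reg.predict(X_te) - y_te).mean(axis=0)
print("mean absolute error (pitch, yaw, roll) in degrees:", err.round(1))
```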

  19. Head Pose Estimation on Top of Haar-Like Face Detection: A Study Using the Kinect Sensor.

    PubMed

    Saeed, Anwar; Al-Hamadi, Ayoub; Ghoneim, Ahmed

    2015-01-01

    Head pose estimation is a crucial initial task for human face analysis, which is employed in several computer vision systems, such as: facial expression recognition, head gesture recognition, yawn detection, etc. In this work, we propose a frame-based approach to estimate the head pose on top of the Viola and Jones (VJ) Haar-like face detector. Several appearance and depth-based feature types are employed for the pose estimation, where comparisons between them in terms of accuracy and speed are presented. It is clearly shown through this work that using the depth data, we improve the accuracy of the head pose estimation. Additionally, we can spot positive detections, faces in profile views detected by the frontal model, that are wrongly cropped due to background disturbances. We introduce a new depth-based feature descriptor that provides competitive estimation results with a lower computation time. Evaluation on a benchmark Kinect database shows that the histogram of oriented gradients and the developed depth-based features are more distinctive for the head pose estimation, where they compare favorably to the current state-of-the-art approaches. Using a concatenation of the aforementioned feature types, we achieved a head pose estimation with average errors not exceeding 5.1°, 4.6° and 4.2° for pitch, yaw and roll angles, respectively. PMID:26343651

  20. Unified structured learning for simultaneous human pose estimation and garment attribute classification.

    PubMed

    Shen, Jie; Liu, Guangcan; Chen, Jia; Fang, Yuqiang; Xie, Jianbin; Yu, Yong; Yan, Shuicheng

    2014-11-01

    In this paper, we utilize structured learning to simultaneously address two intertwined problems: 1) human pose estimation (HPE) and 2) garment attribute classification (GAC), which are valuable for a variety of computer vision and multimedia applications. Unlike previous works that usually handle the two problems separately, our approach aims to produce an optimal joint estimation for both HPE and GAC via a unified inference procedure. To this end, we adopt a preprocessing step to detect potential human parts from each image (i.e., a set of candidates) that allows us to have a manageable input space. In this way, the simultaneous inference of HPE and GAC is converted to a structured learning problem, where the inputs are the collections of candidate ensembles, outputs are the joint labels of human parts and garment attributes, and joint feature representation involves various cues such as pose-specific features, garment-specific features, and cross-task features that encode correlations between human parts and garment attributes. Furthermore, we explore the strong edge evidence around the potential human parts so as to derive more powerful representations for oriented human parts. Such evidences can be seamlessly integrated into our structured learning model as a kind of energy function, and the learning process could be performed by standard structured support vector machines algorithm. However, the joint structure of the two problems is a cyclic graph, which hinders efficient inference. To resolve this issue, we compute instead approximate optima using an iterative procedure, where in each iteration, the variables of one problem are fixed. In this way, satisfactory solutions can be efficiently computed by dynamic programming. Experimental results on two benchmark data sets show the state-of-the-art performance of our approach. PMID:25248181

  1. Unified Structured Learning for Simultaneous Human Pose Estimation and Garment Attribute Classification

    NASA Astrophysics Data System (ADS)

    Shen, Jie; Liu, Guangcan; Chen, Jia; Fang, Yuqiang; Xie, Jianbin; Yu, Yong; Yan, Shuicheng

    2014-11-01

    In this paper, we utilize structured learning to simultaneously address two intertwined problems: human pose estimation (HPE) and garment attribute classification (GAC), which are valuable for a variety of computer vision and multimedia applications. Unlike previous works that usually handle the two problems separately, our approach aims to produce a jointly optimal estimation for both HPE and GAC via a unified inference procedure. To this end, we adopt a preprocessing step to detect potential human parts from each image (i.e., a set of "candidates") that allows us to have a manageable input space. In this way, the simultaneous inference of HPE and GAC is converted to a structured learning problem, where the inputs are the collections of candidate ensembles, the outputs are the joint labels of human parts and garment attributes, and the joint feature representation involves various cues such as pose-specific features, garment-specific features, and cross-task features that encode correlations between human parts and garment attributes. Furthermore, we explore the "strong edge" evidence around the potential human parts so as to derive more powerful representations for oriented human parts. Such evidences can be seamlessly integrated into our structured learning model as a kind of energy function, and the learning process could be performed by standard structured Support Vector Machines (SVM) algorithm. However, the joint structure of the two problems is a cyclic graph, which hinders efficient inference. To resolve this issue, we compute instead approximate optima by using an iterative procedure, where in each iteration the variables of one problem are fixed. In this way, satisfactory solutions can be efficiently computed by dynamic programming. Experimental results on two benchmark datasets show the state-of-the-art performance of our approach.

  2. Estimation of 3-D pore network coordination number of rocks from watershed segmentation of a single 2-D image

    NASA Astrophysics Data System (ADS)

    Rabbani, Arash; Ayatollahi, Shahab; Kharrat, Riyaz; Dashti, Nader

    2016-08-01

    In this study, we have utilized 3-D micro-tomography images of real and synthetic rocks to introduce two mathematical correlations which estimate the distribution parameters of the 3-D coordination number using a single 2-D cross-sectional image. By applying a watershed segmentation algorithm, it is found that the distribution of the 3-D coordination number is acceptably predictable by statistical analysis of the network extracted from 2-D images. In this study, we have utilized 25 volumetric images of rocks in order to propose two mathematical formulas. These formulas aim to approximate the average and standard deviation of the coordination number in 3-D pore networks. Then, the formulas are applied to five independent test samples to evaluate their reliability. Finally, pore network flow modeling is used to find the error of absolute permeability prediction using estimated and measured coordination numbers. Results show that the 2-D images are considerably informative about the 3-D network of the rocks and can be utilized to approximate the 3-D connectivity of the pore space, with a determination coefficient of about 0.85, which seems acceptable considering the variety of the studied samples.
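
    The 2-D part of the workflow can be sketched as follows: the pore space of a binary cross-section is split into individual pores by marker-based watershed segmentation, and the number of distinct touching neighbours of each pore gives a 2-D analogue of the coordination number. A synthetic image of overlapping disks stands in for a real thin section, and the paper's regression formulas relating 2-D to 3-D statistics are not reproduced.

```python
# Sketch of the 2D step of the workflow: watershed segmentation splits the pore
# space of a binary cross-section into individual pores, and the number of
# distinct neighbours of each pore gives a 2D analogue of the coordination number.
# A synthetic image of overlapping disks stands in for a real thin section.
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

rng = np.random.default_rng(5)
img = np.zeros((300, 300), dtype=bool)
yy, xx = np.indices(img.shape)
for cx, cy in rng.uniform(20, 280, size=(60, 2)):          # 60 overlapping circular "pores"
    img |= (xx - cx) ** 2 + (yy - cy) ** 2 < rng.uniform(8, 18) ** 2

distance = ndimage.distance_transform_edt(img)
peaks = peak_local_max(distance, labels=img, min_distance=8)
markers = np.zeros(img.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-distance, markers, mask=img)

# Count distinct touching neighbours for each watershed region (4-connectivity).
adjacency = set()
for a, b in ((labels[1:, :], labels[:-1, :]), (labels[:, 1:], labels[:, :-1])):
    touching = (a != b) & (a > 0) & (b > 0)
    pairs = np.stack([a[touching], b[touching]], axis=-1)
    adjacency.update(map(tuple, np.sort(pairs, axis=1)))

counts = np.bincount(np.array(list(adjacency)).ravel(), minlength=labels.max() + 1)[1:]
print(f"mean 2D coordination number: {counts.mean():.2f} over {labels.max()} pores")
```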

  3. The estimation of 3D SAR distributions in the human head from mobile phone compliance testing data for epidemiological studies

    NASA Astrophysics Data System (ADS)

    Wake, Kanako; Varsier, Nadège; Watanabe, Soichi; Taki, Masao; Wiart, Joe; Mann, Simon; Deltour, Isabelle; Cardis, Elisabeth

    2009-10-01

    A worldwide epidemiological study called 'INTERPHONE' has been conducted to estimate the hypothetical relationship between brain tumors and mobile phone use. In this study, we proposed a method to estimate the 3D distribution of the specific absorption rate (SAR) in the human head due to mobile phone use, to provide the exposure gradient for epidemiological studies. 3D SAR distributions due to exposure to an electromagnetic field from mobile phones are estimated from mobile phone compliance testing data for actual devices. The data for compliance testing are measured only on the surface in the region near the device and in a small 3D region around the maximum on the surface, in a homogeneous phantom with a specific shape. The method includes an interpolation/extrapolation and a head shape conversion. With the interpolation/extrapolation, SAR distributions in the whole head are estimated from the limited measured data. 3D SAR distributions in the numerical head models, where the tumor location is identified in the epidemiological studies, are obtained from the measured SAR data with the head shape conversion by projection. Validation of the proposed method was performed experimentally and numerically. It was confirmed that the proposed method provided a good estimation of the 3D SAR distribution in the head, especially in the brain, which is the tissue of major interest in epidemiological studies. We conclude that it is possible to estimate 3D SAR distributions in a realistic head model from the data obtained by compliance testing measurements, to provide a measure of the exposure gradient in specific locations of the brain for the purpose of exposure assessment in epidemiological studies. The proposed method has been used in several studies within the INTERPHONE project.

  4. Comparative assessment of bone pose estimation using Point Cluster Technique and OpenSim.

    PubMed

    Lathrop, Rebecca L; Chaudhari, Ajit M W; Siston, Robert A

    2011-11-01

    Estimating the position of the bones from optical motion capture data is a challenge associated with human movement analysis. Bone pose estimation techniques such as the Point Cluster Technique (PCT) and simulations of movement through software packages such as OpenSim are used to minimize soft tissue artifact and estimate skeletal position; however, using different methods for analysis may produce differing kinematic results which could lead to differences in clinical interpretation such as a misclassification of normal or pathological gait. This study evaluated the differences present in knee joint kinematics as a result of calculating joint angles using various techniques. We calculated knee joint kinematics from experimental gait data using the standard PCT, the least squares approach in OpenSim applied to experimental marker data, and the least squares approach in OpenSim applied to the results of the PCT algorithm. Maximum and resultant RMS differences in knee angles were calculated between all techniques. We observed differences in flexion/extension, varus/valgus, and internal/external rotation angles between all approaches. The largest differences were between the PCT results and all results calculated using OpenSim. The RMS differences averaged nearly 5° for flexion/extension angles with maximum differences exceeding 15°. Average RMS differences were relatively small (< 1.08°) between results calculated within OpenSim, suggesting that the choice of marker weighting is not critical to the results of the least squares inverse kinematics calculations. The largest difference between techniques appeared to be a constant offset between the PCT and all OpenSim results, which may be due to differences in the definition of anatomical reference frames, scaling of musculoskeletal models, and/or placement of virtual markers within OpenSim. Different methods for data analysis can produce largely different kinematic results, which could lead to the misclassification
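
    The comparison metrics themselves are straightforward, as the short sketch below shows for a pair of knee flexion/extension curves; the curves and the constant offset between them are synthetic, chosen only to mimic the kind of difference reported between the PCT and OpenSim results.

```python
# Sketch of the comparison metrics: maximum and RMS differences between knee
# flexion/extension curves produced by two processing pipelines. The two curves
# here are synthetic, with an artificial constant offset similar in spirit to
# the offset reported between PCT and OpenSim results.
import numpy as np

gait_cycle = np.linspace(0, 100, 101)                         # % of gait cycle
flexion_pct = 35 * np.sin(np.pi * gait_cycle / 100) ** 2 + 5  # degrees, pipeline A (e.g. PCT)
flexion_opensim = flexion_pct + 4.8 + np.random.default_rng(6).normal(0, 1.0, gait_cycle.size)

diff = flexion_opensim - flexion_pct
rms_diff = np.sqrt(np.mean(diff ** 2))
print(f"max difference: {np.abs(diff).max():.1f} deg, RMS difference: {rms_diff:.1f} deg")
```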

  5. Coupling the 3D hydro-morphodynamic model Telemac-3D-sisyphe and seismic measurements to estimate bedload transport rates in a small gravel-bed river.

    NASA Astrophysics Data System (ADS)

    Hostache, Renaud; Krein, Andreas; Barrière, Julien

    2014-05-01

    During flood events, amounts of river bed material are transported via bedload. This causes problems, like the silting of reservoirs or the disturbance of biological habitats. Some current bedload measuring techniques have limited possibilities for studies at high temporal resolution. Optical systems are usually not applicable because of the high turbidity caused by concentrated suspended sediment transport. Sediment traps or bedload samplers yield only summative information on bedload transport with low temporal resolution. An alternative bedload measuring technique is the use of seismological systems installed next to the rivers. The potential advantages are observations in real time and under undisturbed conditions. The study area is a 120 m long reach of the River Colpach (21.5 km2), a small gravel-bed river in Northern Luxembourg. A combined approach of hydro-climatological observations, hydraulic measurements, sediment sampling, and seismological measurements is used in order to investigate bedload transport phenomena. Information derived from seismic measurements and results from a 3-dimensional hydro-morphodynamic model are discussed exemplarily for a November 2013 flood event. The 3-dimensional hydro-morphodynamic model is based on the Telemac hydroinformatic system. This allows for dynamically coupling a 3D hydrodynamic model (Telemac-3D) and a morphodynamic model (Sisyphe). The coupling is dynamic as these models exchange their information during simulations. This is a main advantage as it allows for taking into account the effects of the morphologic changes of the riverbed on the water hydrodynamics and the bedload processes. The coupled model has been calibrated using time series of gauged water depths and time series of bed material collected sequentially (after

  6. Estimation of uncertainties in geological 3D raster layer models as integral part of modelling procedures

    NASA Astrophysics Data System (ADS)

    Maljers, Denise; den Dulk, Maryke; ten Veen, Johan; Hummelman, Jan; Gunnink, Jan; van Gessel, Serge

    2016-04-01

    The Geological Survey of the Netherlands (GSN) develops and maintains subsurface models with regional to national coverage. These models are paramount for petroleum exploration in conventional reservoirs, for understanding the distribution of unconventional reservoirs, for mapping geothermal aquifers, for assessing the potential to store carbon, and for groundwater and aggregate resources. Depending on the application domain, these models differ in depth range, scale, data used, modelling software and modelling technique. Depth uncertainty information is available for the Geological Survey's 3D raster layer models DGM Deep and DGM Shallow. These models cover different depth intervals and are constructed using different data types and different modelling software. Quantifying the uncertainty of geological models that are constructed using multiple data types as well as geological expert knowledge is not straightforward. Examples of geological expert knowledge are trend surfaces displaying the regional thickness trends of basin fills, or steering points that are used to guide the pinching out of geological formations or the modelling of the complex stratal geometries associated with salt domes and salt ridges. This added a-priori knowledge, combined with the assumptions underlying kriging (normality and second-order stationarity), makes the kriging standard error an incorrect measure of uncertainty for our geological models. Therefore the methods described below were developed. For the DGM Deep model, a workflow has been developed to assess uncertainty by combining precision (giving information on the reproducibility of the model results) and accuracy (reflecting the proximity of estimates to the true value). This was achieved by centering the resulting standard deviations around well-tied depth surfaces. The standard deviations are subsequently modified by three other possible error sources: data error, structural complexity and velocity model error. The uncertainty workflow

  7. Theoretical and experimental study of DOA estimation using AML algorithm for an isotropic and non-isotropic 3D array

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz; Ali, Andreas M.; Collier, Travis C.; Yao, Yuan; Hudson, Ralph E.; Yao, Kung; Taylor, Charles E.

    2007-09-01

    The focus of most direction-of-arrival (DOA) estimation problems has been on a two-dimensional (2D) scenario in which only the azimuth angle needs to be estimated. In various practical situations, however, a three-dimensional scenario must be dealt with, and the ability to estimate both azimuth and elevation angles with high accuracy and low complexity is of interest. We present the theoretical and practical issues of DOA estimation using the Approximate-Maximum-Likelihood (AML) algorithm in a 3D scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. Various numerical results are presented. We use two acoustic arrays, each consisting of 8 microphones, to perform field measurements. The processing of the measured data from the acoustic arrays for different azimuth and elevation angles confirms the effectiveness of the proposed methods.
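
    A greatly simplified stand-in for joint azimuth/elevation estimation is a grid search that maximizes the output power of a phase-aligned (delay-and-sum) beamformer over a narrowband far-field model, as sketched below for a hypothetical eight-microphone 3D array. This is not the AML algorithm itself, which is considerably more refined, but it illustrates the 3D search space.

```python
# Sketch of grid-search DOA estimation for azimuth AND elevation with a small
# 3D microphone array, using a narrowband far-field model and the output power
# of a phase-aligned (delay-and-sum) beamformer as the objective. This is a
# simplified stand-in for the AML algorithm described in the paper.
import numpy as np

c, f = 343.0, 1000.0                       # speed of sound (m/s), tone frequency (Hz)
rng = np.random.default_rng(8)
mics = rng.uniform(-0.2, 0.2, size=(8, 3)) # 8 microphones scattered in a 40 cm cube

def steering(az, el):
    # Unit vector pointing towards the source, spherical convention (degrees).
    az, el = np.radians(az), np.radians(el)
    u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    delays = mics @ u / c                  # relative propagation delays per microphone
    return np.exp(-2j * np.pi * f * delays)

true_az, true_el = 62.0, 25.0
snapshots = steering(true_az, true_el)[:, None] * np.exp(2j * np.pi * f * np.arange(200) / 8000)
snapshots += 0.1 * (rng.normal(size=snapshots.shape) + 1j * rng.normal(size=snapshots.shape))

best, best_p = (None, None), -np.inf
for az in np.arange(0, 360, 2.0):
    for el in np.arange(0, 90, 2.0):
        w = steering(az, el)
        p = np.sum(np.abs(w.conj() @ snapshots) ** 2)   # beamformer output power
        if p > best_p:
            best, best_p = (az, el), p

print(f"estimated azimuth/elevation: {best}, true: ({true_az}, {true_el})")
```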

  8. Video reframing relying on panoramic estimation based on a 3D representation of the scene

    NASA Astrophysics Data System (ADS)

    de Simon, Agnes; Figue, Jean; Nicolas, Henri

    2000-05-01

    This paper describes a new method for creating mosaic images from an original video and for computing a new sequence in which some camera parameters, such as image size, scale factor and view angle, are modified. A mosaic image is a representation of the full scene observed by a moving camera during its displacement. It provides a wide angle of view of the scene from a sequence of images shot with a narrow angle of view camera. This paper proposes a method to create a virtual sequence from a calibrated original video and a rough 3D model of the scene. A 3D relationship between original and virtual images gives the pixel correspondences between different images for the same 3D point of the scene model. To texture the model with natural textures obtained from the original sequence, a criterion based on constraints related to the temporal variations of the background and on 3D geometric considerations is used. Finally, in the presented method, the textured 3D model is used to recompute a new sequence of images with a possibly different point of view and camera aperture angle. The algorithm has been tested with virtual sequences, and the results obtained so far are encouraging.

  9. Joint tracking, pose estimation, and target recognition using HRRR and track data: new results

    NASA Astrophysics Data System (ADS)

    Zajic, Tim; Rago, Constantino; Mahler, Ronald P. S.; Huff, Melvyn; Noviskey, Michael J.

    2001-08-01

    The work presented here is a continuation of research first reported in Mahler et al. Our goal is a generalization of Bayesian filtering and estimation theory to the problem of multisensor, multitarget, multi-evidence unified joint detection, tracking and target identification. Our earlier efforts were focused on integrating the Statistical Features (StaF) algorithm with a Bayesian nonlinear filter, allowing simultaneous determination of target position, velocity, pose and type via maximum a posteriori estimation. In this paper we continue to address the problem of target classification based on high range resolution radar signatures. We continue to consider feature-based techniques, as in StaF and our earlier work, but instead of using the location and magnitude of peaks in a signature as features, we consider three alternatives. The features arise from applying either a Wavelet Decomposition, Principal Component Analysis or Linear Discriminant Analysis to the signature. We also briefly discuss, in the wavelet decomposition setting, the challenge of assigning a measure of uncertainty to a classification decision.
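
    As a rough illustration of one of the alternative feature sets mentioned above, the sketch below projects a stack of high range resolution (HRR) signatures onto their leading principal components; the signature matrix is placeholder data and scikit-learn's PCA is an assumed tool, not the implementation used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stack of HRR signatures: one range profile per row (placeholder data).
signatures = np.abs(np.random.randn(200, 128))

pca = PCA(n_components=10)                 # keep the 10 leading components
features = pca.fit_transform(signatures)   # low-dimensional feature vector per signature
# These features would replace peak location/magnitude features in the Bayesian classifier.
```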

  10. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    PubMed

    Tracey, Jeff A; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R; Fisher, Robert N

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114

  11. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    USGS Publications Warehouse

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  12. Movement-Based Estimation and Visualization of Space Use in 3D for Wildlife Ecology and Conservation

    PubMed Central

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114

  13. Precision estimation and imaging of normal and shear components of the 3D strain tensor in elastography.

    PubMed

    Konofagou, E E; Ophir, J

    2000-06-01

    In elastography we have previously developed a tracking and correction method that estimates the axial and lateral strain components, along and perpendicular to the compressor/scanning axis, following an externally applied compression. However, the resulting motion is a three-dimensional problem. Therefore, in order to fully describe this motion we need to consider a 3D model and estimate all three principal strain components, i.e. axial, lateral and elevational (out-of-plane), for a full 3D tensor description. Since motion is coupled in all three dimensions, the three motion components have to be decoupled prior to their estimation. In this paper, we describe a method that estimates and corrects motion in three dimensions, which is an extension of the 2D motion tracking and correction method discussed before. As in the 2D motion estimation, and assuming that ultrasonic frames are available in more than one parallel elevational plane, we used methods of interpolation and cross-correlation between elevationally displaced RF echo segments to estimate the elevational displacement and strain. In addition, the axial, lateral and elevational displacements were used to estimate all three shear strain components that, together with the normal strain estimates, fully describe the 3D strain tensor resulting from the uniform compression. Results of this method from three-dimensional finite-element simulations are shown. PMID:10870710

  14. A hybrid antenna array design for 3-d direction of arrival estimation.

    PubMed

    Saqib, Najam-Us; Khan, Imdad

    2015-01-01

    A 3-D beam-scanning antenna array design is proposed that provides full 3-D spherical coverage and is also suitable for various radar and body-worn devices in Body Area Network applications. The Array Factor (AF) of the proposed antenna is derived, and parameters such as directivity, Half Power Beam Width (HPBW) and Side Lobe Level (SLL) are calculated by varying the size of the proposed antenna array. Simulations were carried out in MATLAB 2012b. The radiators are considered isotropic and hence mutual coupling effects are ignored. The proposed array shows a considerable improvement over the existing cylindrical and coaxial cylindrical arrays in terms of 3-D scanning, size, directivity, HPBW and SLL. PMID:25790103
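
    The array factor of an arbitrary 3-D arrangement of isotropic radiators can be evaluated numerically as in the sketch below; scanning it over an azimuth/elevation grid then yields directivity, HPBW and SLL estimates. This is a generic Python illustration (the paper's simulations were run in MATLAB), with element positions, weights and frequency left as hypothetical user inputs.

```python
import numpy as np

def array_factor(positions, weights, az, el, freq, c=3e8):
    """Array factor of an isotropic-element array for the direction (az, el), in radians."""
    k = 2 * np.pi * freq / c                                   # wavenumber
    u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    return np.sum(weights * np.exp(1j * k * (positions @ u)))

# Example: 8 elements placed randomly in a 10 cm cube, uniform weighting, 2.4 GHz.
pos = np.random.rand(8, 3) * 0.1
af = array_factor(pos, np.ones(8), az=0.3, el=0.1, freq=2.4e9)
print(abs(af))
```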

  15. Estimation of Atmospheric Methane Surface Fluxes Using a Global 3-D Chemical Transport Model

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Prinn, R.

    2003-12-01

    Accurate determination of atmospheric methane surface fluxes is an important and challenging problem in global biogeochemical cycles. We use inverse modeling to estimate annual, seasonal, and interannual CH4 fluxes between 1996 and 2001. The fluxes include 7 time-varying seasonal (3 wetland, rice, and 3 biomass burning) and 3 steady aseasonal (animals/waste, coal, and gas) global processes. To simulate atmospheric methane, we use the 3-D chemical transport model MATCH driven by NCEP reanalyzed observed winds at a resolution of T42 (~2.8° × 2.8°) in the horizontal and 28 levels (1000 - 3 mb) in the vertical. By combining existing datasets of individual processes, we construct a reference emissions field that represents our prior guess of the total CH4 surface flux. For the methane sink, we use a prescribed, annually-repeating OH field scaled to fit methyl chloroform observations. MATCH is used to produce both the reference run from the reference emissions, and the time-dependent sensitivities that relate individual emission processes to observations. The observational data include CH4 time-series from ~15 high-frequency (in-situ) and ~50 low-frequency (flask) observing sites. Most of the high-frequency data, at a time resolution of 40-60 minutes, have not previously been used in global scale inversions. In the inversion, the high-frequency data generally have greater weight than the weekly flask data because they better define the observational monthly means. The Kalman Filter is used as the optimal inversion technique to solve for emissions between 1996-2001. At each step in the inversion, new monthly observations are utilized and new emissions estimates are produced. The optimized emissions represent deviations from the reference emissions that lead to a better fit to the observations. The seasonal processes are optimized for each month, and contain the methane seasonality and interannual variability. The aseasonal processes, which are less variable, are

  16. Vegetation Height Estimation Near Power transmission poles Via satellite Stereo Images using 3D Depth Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Qayyum, A.; Malik, A. S.; Saad, M. N. M.; Iqbal, M.; Abdullah, F.; Rahseed, W.; Abdullah, T. A. R. B. T.; Ramli, A. Q.

    2015-04-01

    Monitoring vegetation encroachment under overhead high-voltage power lines is a challenging problem for electricity distribution companies. Absence of proper monitoring could result in damage to the power lines and consequently cause blackouts. This would affect electric power supply to industries, businesses, and daily life. Therefore, to avoid blackouts, it is mandatory to monitor the vegetation/trees near power transmission lines. Unfortunately, existing approaches are time consuming and expensive. In this paper, we propose a novel approach to monitor the vegetation/trees near or under power transmission poles using satellite stereo images acquired with the Pleiades satellites. The 3D depth of vegetation near the power transmission lines is measured using stereo algorithms. The area of interest scanned by the Pleiades satellite sensors is 100 square kilometres. Our dataset covers power transmission poles in the state of Sabah in East Malaysia, encompassing a total of 52 poles within the 100 km² area. We compared results on the Pleiades stereo images using dynamic programming and graph-cut algorithms, thereby comparing both the imaging sensors and the depth-estimation algorithms. Our results show that the graph-cut algorithm performs better than dynamic programming (DP) in terms of accuracy and speed.
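
    For readers who want to experiment with dense depth estimation on rectified stereo pairs, a generic semi-global matcher such as OpenCV's StereoSGBM can serve as a baseline, as sketched below. This is not the dynamic-programming or graph-cut implementation evaluated in the paper, and the file names, focal length and baseline are placeholders, not the Pleiades imaging geometry.

```python
import cv2
import numpy as np

# Placeholder rectified stereo pair (grayscale, same size).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# With a known focal length f (pixels) and baseline B (metres), depth = f * B / disparity.
f, B = 10000.0, 200.0            # hypothetical values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]
```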

  17. Estimating Fiber Orientation Distribution Functions in 3D-Polarized Light Imaging.

    PubMed

    Axer, Markus; Strohmer, Sven; Gräßel, David; Bücker, Oliver; Dohmen, Melanie; Reckfort, Julia; Zilles, Karl; Amunts, Katrin

    2016-01-01

    Research of the human brain connectome requires multiscale approaches derived from independent imaging methods ideally applied to the same object. Hence, comprehensible strategies for data integration across modalities and across scales are essential. We have successfully established a concept to bridge the spatial scales from microscopic fiber orientation measurements based on 3D-Polarized Light Imaging (3D-PLI) to meso- or macroscopic dimensions. By creating orientation distribution functions (pliODFs) from high-resolution vector data via series expansion with spherical harmonics utilizing high performance computing and supercomputing technologies, data fusion with Diffusion Magnetic Resonance Imaging has become feasible, even for a large-scale dataset such as the human brain. Validation of our approach was done effectively by means of two types of datasets that were transferred from fiber orientation maps into pliODFs: simulated 3D-PLI data showing artificial, but clearly defined fiber patterns and real 3D-PLI data derived from sections through the human brain and the brain of a hooded seal. PMID:27147981
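
    As a rough sketch of the idea of expanding an orientation distribution in spherical harmonics, the snippet below computes band-limited coefficients from a set of unit fiber-orientation vectors. The weighting scheme, band limit and use of scipy's sph_harm are assumptions for illustration only; the actual pliODF pipeline described above operates on full high-resolution fiber orientation maps with high performance computing.

```python
import numpy as np
from scipy.special import sph_harm

def odf_sh_coefficients(directions, l_max=4):
    """Spherical-harmonic coefficients of a discrete orientation distribution.

    directions: (N, 3) array of unit fiber-orientation vectors (hypothetical input).
    Returns a dict {(l, m): complex coefficient}. Only even bands are kept, since
    fiber orientations are antipodally symmetric.
    """
    x, y, z = directions.T
    theta = np.arctan2(y, x) % (2 * np.pi)   # azimuth in [0, 2*pi)
    phi = np.arccos(np.clip(z, -1.0, 1.0))   # polar angle in [0, pi]
    w = np.full(len(directions), 1.0 / len(directions))
    coeffs = {}
    for l in range(0, l_max + 1, 2):
        for m in range(-l, l + 1):
            coeffs[(l, m)] = np.sum(w * np.conj(sph_harm(m, l, theta, phi)))
    return coeffs
```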

  18. Estimating Fiber Orientation Distribution Functions in 3D-Polarized Light Imaging

    PubMed Central

    Axer, Markus; Strohmer, Sven; Gräßel, David; Bücker, Oliver; Dohmen, Melanie; Reckfort, Julia; Zilles, Karl; Amunts, Katrin

    2016-01-01

    Research of the human brain connectome requires multiscale approaches derived from independent imaging methods ideally applied to the same object. Hence, comprehensible strategies for data integration across modalities and across scales are essential. We have successfully established a concept to bridge the spatial scales from microscopic fiber orientation measurements based on 3D-Polarized Light Imaging (3D-PLI) to meso- or macroscopic dimensions. By creating orientation distribution functions (pliODFs) from high-resolution vector data via series expansion with spherical harmonics utilizing high performance computing and supercomputing technologies, data fusion with Diffusion Magnetic Resonance Imaging has become feasible, even for a large-scale dataset such as the human brain. Validation of our approach was done effectively by means of two types of datasets that were transferred from fiber orientation maps into pliODFs: simulated 3D-PLI data showing artificial, but clearly defined fiber patterns and real 3D-PLI data derived from sections through the human brain and the brain of a hooded seal. PMID:27147981

  19. Estimating Hydraulic Conductivities in a Fractured Shale Formation from Pressure Pulse Testing and 3d Modeling

    NASA Astrophysics Data System (ADS)

    Courbet, C.; DICK, P.; Lefevre, M.; Wittebroodt, C.; Matray, J.; Barnichon, J.

    2013-12-01

    logging, porosity varies by a factor of 2.5 whilst hydraulic conductivity varies by 2 to 3 orders of magnitude. In addition, a 3D numerical reconstruction of the internal structure of the fault zone, inferred from borehole imagery, has been built to estimate the permeability tensor variations. First results indicate that hydraulic conductivity values calculated for this structure are 2 to 3 orders of magnitude above those measured in situ. Such high values are due to the imaging method, which only takes into account open fractures of simple geometry (sine waves). Even though improvements are needed to handle more complex geometries, the outcomes are promising, as the fault damage zone clearly appears as the highest-permeability zone, where stress analysis shows that the actual stress state may favor tensile reopening of fractures. Using shale samples cored from the different internal structures of the fault zone, we now aim to characterize advection and diffusion using laboratory petrophysical tests combined with radial and through-diffusion experiments.

  20. SU-E-J-135: An Investigation of Ultrasound Imaging for 3D Intra-Fraction Prostate Motion Estimation

    SciTech Connect

    O'Shea, T; Harris, E; Bamber, J; Evans, P

    2014-06-01

    Purpose: This study investigates the use of a mechanically swept 3D ultrasound (US) probe to estimate intra-fraction motion of the prostate during radiation therapy, using an US phantom and simulated transperineal imaging. Methods: A 3D motion platform was used to translate an US speckle phantom while simulating transperineal US imaging. Motion patterns for five representative types of prostate motion, generated from patient data previously acquired with a Calypso system, were used to move the phantom in 3D. The phantom was also implanted with fiducial markers and subsequently tracked using the CyberKnife kV x-ray system for comparison. A normalised cross-correlation block matching algorithm was used to track speckle patterns in 3D and 2D US data. Motion estimation results were compared with the known phantom translations. Results: Transperineal 3D US could track superior-inferior (axial) and anterior-posterior (lateral) motion to better than 0.8 mm root-mean-square error (RMSE) at a volume rate of 1.7 Hz (comparable with the kV x-ray tracking RMSE). Motion estimation accuracy was poorest along the US probe's swept axis (right-left; RL; RMSE < 4.2 mm), but simple regularisation methods could be used to improve the RMSE (< 2 mm). 2D US was found to be feasible for slowly varying motion (RMSE < 0.5 mm). 3D US could also allow accurate radiation beam gating, with displacement thresholds of 2 mm and 5 mm exhibiting an RMSE of less than 0.5 mm. Conclusion: 2D and 3D US speckle tracking is feasible for prostate motion estimation during radiation delivery. Since RL prostate motion is small in magnitude and frequency, 2D or a hybrid (2D/3D) US imaging approach which also accounts for potential prostate rotations could be used. Regularisation methods could be used to ensure the accuracy of tracking data, making US a feasible approach for gating or tracking in standard or hypo-fractionated prostate treatments.
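
    A minimal 2D illustration of the normalised cross-correlation block matching at the core of the speckle-tracking step is sketched below (the study applies it in 3D and at clinical volume rates); the exhaustive search is written for clarity rather than speed, and the input arrays are hypothetical image patches.

```python
import numpy as np

def ncc_displacement(ref_block, search_region):
    """Locate ref_block inside search_region by normalised cross-correlation."""
    bh, bw = ref_block.shape
    ref = (ref_block - ref_block.mean()) / (ref_block.std() + 1e-12)
    best, best_score = (0, 0), -np.inf
    for i in range(search_region.shape[0] - bh + 1):
        for j in range(search_region.shape[1] - bw + 1):
            win = search_region[i:i + bh, j:j + bw]
            win = (win - win.mean()) / (win.std() + 1e-12)
            score = float(np.mean(ref * win))
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score   # top-left offset of the best match and its NCC value
```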

  1. Autonomous proximity operations using machine vision for trajectory control and pose estimation

    NASA Technical Reports Server (NTRS)

    Cleghorn, Timothy F.; Sternberg, Stanley R.

    1991-01-01

    A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running on an 80386-based personal computer and using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple that, following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single-camera images of the target vehicle, upon which radial transforms were performed. Selected points of the resulting radial signatures are fed through a decision tree to determine whether the signature matches one of the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.

  2. Real-time upper-body human pose estimation from depth data using Kalman filter for simulator

    NASA Astrophysics Data System (ADS)

    Lee, D.; Chi, S.; Park, C.; Yoon, H.; Kim, J.; Park, C. H.

    2014-08-01

    Recently, many studies have shown that indoor horse riding exercise has a positive effect on health promotion and diet. However, if a rider has an incorrect posture, it can cause back pain. Despite this problem, there has been little research on analyzing the rider's posture. Therefore, the purpose of this study is to estimate the rider's pose from a depth image acquired with the Asus Xtion sensor in real time. In the experiments, we show the performance of our pose estimation algorithm by comparing the estimated joints with ground truth data.
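
    A minimal constant-velocity Kalman filter for one joint coordinate is sketched below to illustrate the kind of temporal smoothing referred to in the title; the 30 Hz frame interval and noise parameters are assumptions for illustration, not values from the paper.

```python
import numpy as np

def kalman_smooth_joint(measurements, dt=1 / 30, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter for one joint coordinate from depth data."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                 # only position is observed
    Q, R = q * np.eye(2), np.array([[r]])      # process / measurement noise
    x, P = np.array([[measurements[0]], [0.0]]), np.eye(2)
    out = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                          # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)                  # update
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out
```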

  3. Estimating elastic moduli of rocks from thin sections: Digital rock study of 3D properties from 2D images

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Mavko, Gary

    2016-03-01

    Estimation of elastic rock moduli using 2D plane strain computations from thin sections has several numerical and analytical advantages over using 3D rock images, including faster computation, smaller memory requirements, and the availability of cheap thin sections. These advantages, however, must be weighed against the accuracy with which 3D rock properties can be estimated from thin sections. We present a new method for predicting elastic properties of natural rocks using thin sections. Our method is based on a simple power-law transform that correlates computed 2D thin-section moduli with the corresponding 3D rock moduli. The validity of this transform is established using a dataset comprised of FEM-computed elastic moduli of rock samples from various geologic formations, including Fontainebleau sandstone, Berea sandstone, Bituminous sand, and Grossmont carbonate. We note that a power-law coefficient between 0.4 and 0.6 brackets the 2D-to-3D moduli transformation for all rocks considered in this study. We also find that reliable estimates of P-wave (Vp) and S-wave (Vs) velocity trends can be obtained using 2D thin sections.

  4. Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2014-05-01

    A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome, formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 shells of Crassostrea gryphoides, up to 80 cm long, cover an area of 400 m². Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g. tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as an extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well-known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase-shift measuring principle, which provides an accurate geometric basis (< 3 mm). However, the situation is difficult in this multiple-object scenario, where more than 15,000 complete or fragmentary parts of an object with random orientation are found. The goal

  5. Estimation of gold potentials using 3D restoration modeling, Mount Pleasant Area, Western Australia

    NASA Astrophysics Data System (ADS)

    Mejia-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2015-04-01

    A broad variety of gold deposits are related to fault systems developed during a deformation event. Such discontinuities control metal transport and provide the relatively high permeability necessary for metal accumulation during ore-deposit formation. However, some gold deposits formed during the same deformation event occur at locations far from the main faults. In those cases, the fracture systems are related to the rock heterogeneity that partially controls the development of damage in the rock mass. A geo-mechanical 3D restoration modeling approach was used to simulate the strain developed during a stretching episode that occurred in the Mount Pleasant region, Western Australia. First, a 3D solid model was created from geological maps and interpreted structural cross-sections available for the studied region. The backward model was obtained by flattening a reference surface representative of the stretching, selected from the lithological sequence. The deformation modeling was carried out on a 3D model built in Gocad/Skua and restored using full geo-mechanical modeling based on a finite element method to compute the volume restoration in a solid with 600 m tetrahedral mesh resolution. The 3D structural restoration of the region was performed by flattening surfaces using a flexural-slip deformation style. Results show how the rock heterogeneity allows damage to develop in locations far from the fault systems. The distant off-fault damage areas are located preferentially at lithological contacts and also follow the deformation trend of the region. Using a logistic regression method, it is shown that off-fault zones with high gold occurrences correlate spatially with locations having a locally high gradient of the first deformation parameter obtained from the restoration strain field. This contribution may provide some explanation for the presence of gold accumulations away from main fault systems, and the method could be used for inferring favorable areas in exploration surveys.

  6. Estimation of the thermal conductivity of hemp based insulation material from 3D tomographic images

    NASA Astrophysics Data System (ADS)

    El-Sawalhi, R.; Lux, J.; Salagnac, P.

    2016-08-01

    In this work, we are interested in the structural and thermal characterization of natural fiber insulation materials. The thermal performance of these materials depends on the arrangement of the fibers, which is a consequence of the manufacturing process. In order to optimize these materials, thermal conductivity models can be used to correlate relevant structural parameters with the effective thermal conductivity. However, only a few models are able to take into account the anisotropy of such materials related to the fiber orientation, and these models still need realistic input data (fiber orientation distribution, porosity, etc.). Here, the structural characteristics are measured directly on a 3D tomographic image using advanced image analysis techniques. Critical structural parameters such as porosity, pore and fiber size distributions, as well as the local fiber orientation distribution, are measured. The results of the tested conductivity models are then compared with the conductivity tensor obtained by numerical simulation on the discretized 3D microstructure, as well as with available experimental measurements. We show that 1D analytical models are generally not suitable for assessing the thermal conductivity of such anisotropic media. Yet, a few anisotropic models can still be of interest to relate some structural parameters, like the fiber orientation distribution, to the thermal properties. Finally, our results emphasize that numerical simulation on a realistic 3D microstructure is a very interesting alternative to experimental measurements.

  7. Anthropological facial approximation in three dimensions (AFA3D): computer-assisted estimation of the facial morphology using geometric morphometrics.

    PubMed

    Guyomarc'h, Pierre; Dutailly, Bruno; Charton, Jérôme; Santos, Frédéric; Desbarats, Pascal; Coqueugniot, Hélène

    2014-11-01

    This study presents Anthropological Facial Approximation in Three Dimensions (AFA3D), a new computerized method for estimating face shape based on computed tomography (CT) scans of 500 French individuals. Facial soft tissue depths are estimated based on age, sex, corpulence, and craniometrics, and projected using reference planes to obtain the global facial appearance. Position and shape of the eyes, nose, mouth, and ears are inferred from cranial landmarks through geometric morphometrics. The 100 estimated cutaneous landmarks are then used to warp a generic face to the target facial approximation. A validation by re-sampling on a subsample demonstrated an average accuracy of c. 4 mm for the overall face. The resulting approximation is an objective probable facial shape, but is also synthetic (i.e., without texture), and therefore needs to be enhanced artistically prior to its use in forensic cases. AFA3D, integrated in the TIVMI software, is available freely for further testing. PMID:25088006

  8. Atmospheric Nitrogen Trifluoride: Optimized emission estimates using 2-D and 3-D Chemical Transport Models from 1973-2008

    NASA Astrophysics Data System (ADS)

    Ivy, D. J.; Rigby, M. L.; Prinn, R. G.; Muhle, J.; Weiss, R. F.

    2009-12-01

    We present optimized annual global emissions from 1973-2008 of nitrogen trifluoride (NF3), a powerful greenhouse gas which is not currently regulated by the Kyoto Protocol. In the past few decades, NF3 production has dramatically increased due to its usage in the semiconductor industry. Emissions were estimated through the 'pulse-method' discrete Kalman filter using both a simple, flexible 2-D 12-box model used in the Advanced Global Atmospheric Gases Experiment (AGAGE) network and the Model for Ozone and Related Tracers (MOZART v4.5), a full 3-D atmospheric chemistry model. No official audited reports of industrial NF3 emissions are available, and with limited information on production, a priori emissions were estimated using both a bottom-up and top-down approach with two different spatial patterns based on semiconductor perfluorocarbon (PFC) emissions from the Emission Database for Global Atmospheric Research (EDGAR v3.2) and Semiconductor Industry Association sales information. Both spatial patterns used in the models gave consistent results, showing the robustness of the estimated global emissions. Differences between estimates using the 2-D and 3-D models can be attributed to transport rates and resolution differences. Additionally, new NF3 industry production and market information is presented. Emission estimates from both the 2-D and 3-D models suggest that either the assumed industry release rate of NF3 or industry production information is still underestimated.

  9. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System.

    PubMed

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R² = 0.98) and 0.57 mm (R² = 0.99), respectively. The accuracy of the 3D reconstruction of leaf and stem was highest for the 28-mm lens at the first and third camera positions, which also produced the largest number of reconstructed fine-scale 3D surface patches for leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system captures high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  10. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    PubMed Central

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracy of the 3D reconstruction of leaf and stem was highest for the 28-mm lens at the first and third camera positions, which also produced the largest number of reconstructed fine-scale 3D surface patches for leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system captures high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  11. Relative pose estimation of a lander using crater detection and matching

    NASA Astrophysics Data System (ADS)

    Lu, Tingting; Hu, Weiduo; Liu, Chang; Yang, Daguang

    2016-02-01

    Future space exploration missions require precise information about the lander pose during the descent and landing phases. An effective algorithm that utilizes crater detection and matching is presented to determine the lander pose with respect to the planetary surface. First, the projections of the circular crater rims in the descent image are detected and fitted to ellipses based on geometric distance and a coplanar-circles constraint. Second, the detected craters are metrically rectified through a two-dimensional homography and matched against the crater database using a similarity transformation. Finally, the lander pose is calculated by a norm-based optimization method. The algorithm is tested in synthetic and real trials. The experimental results show that the presented algorithm can determine the lander pose accurately and robustly.
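
    For the ellipse-fitting step, a generic least-squares fit such as OpenCV's fitEllipse can be applied to candidate rim points, as sketched below; the points are made up for illustration, and this generic call does not enforce the coplanar-circles constraint used by the authors.

```python
import cv2
import numpy as np

# Hypothetical edge points sampled from one candidate crater rim in a descent image.
rim_points = np.array([[120, 80], [135, 78], [150, 85], [158, 100],
                       [150, 115], [132, 120], [118, 112], [114, 96]], dtype=np.float32)

(cx, cy), (major, minor), angle = cv2.fitEllipse(rim_points)   # least-squares ellipse fit
print(f"centre=({cx:.1f}, {cy:.1f}), axes=({major:.1f}, {minor:.1f}), angle={angle:.1f} deg")
```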

  12. Selecting best-fit models for estimating the body mass from 3D data of the human calcaneus.

    PubMed

    Jung, Go-Un; Lee, U-Young; Kim, Dong-Ho; Kwak, Dai-Soon; Ahn, Yong-Woo; Han, Seung-Ho; Kim, Yi-Suk

    2016-05-01

    Body mass (BM) estimation could facilitate the interpretation of skeletal materials in terms of an individual's body size and physique in forensic anthropology. However, few metric studies have tried to estimate BM by focusing on the prominent biomechanical properties of the calcaneus. The purpose of this study was to prepare best-fit models for estimating BM from the 3D human calcaneus using two major linear regression approaches (the heuristic statistical and all-possible-regressions techniques) and to validate the models through predicted residual sum of squares (PRESS) statistics. A metric analysis was conducted on 70 human calcaneus samples (29 males and 41 females) taken from 3D models in the Digital Korean Database, and 10 variables were measured for each sample. Three best-fit models were selected using F-statistics, Mallows' Cp, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) among the available candidate models. The most accurate regression model yields the lowest %SEE and an R² of 0.843. Leave-one-out cross-validation indicated a high level of predictive accuracy. This study also confirms that the equations for estimating BM using 3D models of the human calcaneus will be helpful for establishing identification in forensic cases with consistent reliability. PMID:26970867
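
    As a sketch of the validation step, the snippet below runs leave-one-out cross-validation of a linear body-mass regression and computes a PRESS statistic; the measurement matrix is placeholder random data, and scikit-learn is an assumed tool rather than the software used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Placeholder design matrix (70 samples x 10 calcaneal measurements) and body masses.
rng = np.random.default_rng(0)
X = rng.normal(size=(70, 10))
y = 60 + 5 * X[:, 0] + rng.normal(scale=2, size=70)

y_pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
press = np.sum((y - y_pred) ** 2)                    # predicted residual sum of squares
see_pct = 100 * np.sqrt(press / len(y)) / y.mean()   # a %SEE-style summary
print(f"PRESS = {press:.1f}, %SEE = {see_pct:.1f}%")
```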

  13. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, which involve lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras, and 3D point cloud data were generated using structure-from-motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling was applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
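
    A minimal sketch of the convex-hulling step, assuming an (N, 3) point cloud for one manually separated segment and a uniform segment density; scipy's ConvexHull is used here purely for illustration and is not necessarily the tool used by the authors.

```python
import numpy as np
from scipy.spatial import ConvexHull

def segment_mass_from_points(points, density=1000.0):
    """Estimate a body-segment mass from its 3D point cloud via convex hulling.

    points: (N, 3) coordinates in metres; density in kg/m^3 is an assumed
    uniform value for the segment.
    """
    hull = ConvexHull(points)
    return density * hull.volume   # kg

# Example with a synthetic 10 cm cube of random points (roughly 1 kg at 1000 kg/m^3):
pts = np.random.rand(2000, 3) * 0.1
print(segment_mass_from_points(pts))
```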

  14. Scatterer size and concentration estimation technique based on a 3D acoustic impedance map from histologic sections

    NASA Astrophysics Data System (ADS)

    Mamou, Jonathan; Oelze, Michael L.; O'Brien, William D.; Zachary, James F.

    2001-05-01

    Accurate estimates of scatterer parameters (size and acoustic concentration) are beneficial adjuncts for characterizing disease from ultrasonic backscatter measurements. An estimation technique was developed to obtain parameter estimates from the Fourier transform of the spatial autocorrelation function (SAF). A 3D impedance map (3DZM) is used to obtain the SAF of tissue. 3DZMs are obtained by aligning digitized light microscope images from histologic preparations of tissue. Estimates were obtained for simulated 3DZMs containing randomly located spherical scatterers; relative errors were less than 3%. Estimates were also obtained from a rat fibroadenoma and a 4T1 mouse mammary tumor (MMT). Tissues were fixed (10% neutral-buffered formalin), embedded in paraffin, serially sectioned and stained with H&E. The 3DZM results were compared to estimates obtained independently from ultrasonic backscatter measurements. For the fibroadenoma and MMT, average scatterer diameters were 91 and 31.5 μm, respectively. Ultrasonic measurements yielded average scatterer diameters of 105 and 30 μm, respectively. The 3DZM estimation scheme showed results similar to those obtained by the independent ultrasonic measurements. The 3D impedance maps show promise as a powerful tool to characterize ultrasonic scattering sites of tissue. [Work supported by the University of Illinois Research Board.]

  15. Rigid and non-rigid geometrical transformations of a marker-cluster and their impact on bone-pose estimation.

    PubMed

    Bonci, T; Camomilla, V; Dumas, R; Chèze, L; Cappozzo, A

    2015-11-26

    When stereophotogrammetry and skin-markers are used, bone-pose estimation is jeopardised by the soft tissue artefact (STA). At marker-cluster level, this can be represented using a modal series of rigid (RT; translation and rotation) and non-rigid (NRT; homothety and scaling) geometrical transformations. The NRT has been found to be smaller than the RT and has been claimed to have a limited impact on bone-pose estimation. This study aims to investigate this matter by comparatively assessing the propagation of both STA components to the bone-pose estimate, using different numbers of markers. Twelve skin-markers distributed over the anterior aspect of a thigh were considered, and STA time functions were generated for each of them, as plausibly occur during walking, using an ad hoc model, and were represented through the geometrical transformations. Using marker-clusters made of four to 12 markers affected by these STAs, and a Procrustes superimposition approach, the bone pose and the relevant accuracy were estimated. This was also done for a selected four-marker cluster affected by STAs randomly simulated by modifying the original NRT component, so that its energy fell in the range of 30-90% of the total STA energy. The pose error, which decreased slightly as the number of markers in the marker-cluster increased, was independent of the NRT amplitude, and was always null when the RT component was removed. It was thus demonstrated that only the RT component impacts pose estimation accuracy and should thus be accounted for when designing algorithms aimed at compensating for STA. PMID:26555716
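
    Since the bone pose here is recovered from the marker-cluster via Procrustes superimposition, the sketch below shows a standard SVD-based least-squares rigid fit (Kabsch solution) between a model marker-cluster and its measured, STA-affected counterpart; it is a generic illustration, not the authors' exact implementation, and the marker arrays are hypothetical inputs.

```python
import numpy as np

def rigid_pose_svd(model_markers, measured_markers):
    """Least-squares rigid pose (R, t) minimising sum ||R @ m_i + t - d_i||^2."""
    cm, cd = model_markers.mean(axis=0), measured_markers.mean(axis=0)
    H = (model_markers - cm).T @ (measured_markers - cd)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])     # avoid reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cm
    return R, t
```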

  16. Effect of GIA models with 3D composite mantle viscosity on GRACE mass balance estimates for Antarctica

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Whitehouse, Pippa L.; Schrama, Ernst J. O.

    2015-03-01

    Seismic data indicate that there are large viscosity variations in the mantle beneath Antarctica. Consideration of such variations would affect predictions of models of Glacial Isostatic Adjustment (GIA), which are used to correct satellite measurements of ice mass change. However, most GIA models used for that purpose have assumed the mantle to be uniformly stratified in terms of viscosity. The goal of this study is to estimate the effect of lateral variations in viscosity on Antarctic mass balance estimates derived from the Gravity Recovery and Climate Experiment (GRACE) data. To this end, recently-developed global GIA models based on lateral variations in mantle temperature are tuned to fit constraints in the northern hemisphere and then compared to GPS-derived uplift rates in Antarctica. We find that these models can provide a better fit to GPS uplift rates in Antarctica than existing GIA models with a radially-varying (1D) rheology. When 3D viscosity models in combination with specific ice loading histories are used to correct GRACE measurements, mass loss in Antarctica is smaller than previously found for the same ice loading histories and their preferred 1D viscosity profiles. The variation in mass balance estimates arising from using different plausible realizations of 3D viscosity amounts to 20 Gt/yr for the ICE-5G ice model and 16 Gt/yr for the W12a ice model; these values are larger than the GRACE measurement error, but smaller than the variation arising from unknown ice history. While there exist 1D Earth models that can reproduce the total mass balance estimates derived using 3D Earth models, the spatial pattern of gravity rates can be significantly affected by 3D viscosity in a way that cannot be reproduced by GIA models with 1D viscosity. As an example, models with 1D viscosity always predict maximum gravity rates in the Ross Sea for the ICE-5G ice model, however, for one of the three preferred 3D models the maximum (for the same ice model) is found

  17. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    SciTech Connect

    Glitzner, M; Lagendijk, J; Raaymakers, B; Crijns, S; Senneville, B Denis de

    2015-06-15

    Recent developments have made MRI-guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about the changing anatomy by means of deformable image registration, for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data were sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of 2.5 mm³ for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data were downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. At kidney-liver boundaries and in the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of the high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on the synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the

  18. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    PubMed Central

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-01-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if left untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency in the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the “non-progressing” and “progressing” glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection. PMID:25606299

  19. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    NASA Astrophysics Data System (ADS)

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-03-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if left untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency in the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.

  20. CO2 mass estimation visible in time-lapse 3D seismic data from a saline aquifer and uncertainties

    NASA Astrophysics Data System (ADS)

    Ivanova, A.; Lueth, S.; Bergmann, P.; Ivandic, M.

    2014-12-01

    At Ketzin (Germany), the first European onshore pilot-scale project for geological storage of CO2 was initiated in 2004. This project is multidisciplinary and includes 3D time-lapse seismic monitoring. A 3D pre-injection seismic survey was acquired in 2005. CO2 injection into a sandstone saline aquifer then started at a depth of 650 m in 2008. A first 3D seismic repeat survey was acquired in 2009, after 22 kilotons had been injected; the imaged CO2 signature was concentrated around the injection well (200-300 m). A second 3D seismic repeat survey was acquired in 2012, after 61 kilotons had been injected; the imaged CO2 signature had extended further (100-200 m). The injection was terminated in 2013. In total, 67 kilotons of CO2 were injected. Time-lapse seismic processing, petrophysical data and geophysical logging of CO2 saturation have allowed an estimate of the amount of CO2 visible in the seismic data. This estimate depends on the choice of a number of parameters and contains a number of uncertainties, the main ones being the following. The constant reservoir porosity and CO2 density used for the estimation are probably an over-simplification, since the reservoir is quite heterogeneous. Velocity dispersion may be present in the Ketzin reservoir rocks, but we do not consider it large enough to affect the estimated mass of CO2. There are only a small number of direct petrophysical observations, providing a weak statistical basis for the determination of seismic velocities as a function of CO2 saturation, and we have assumed that the petrophysical experiments were carried out on samples representative of the average properties of the whole reservoir. Finally, most of the time-delay values within the amplitude anomaly in both 3D seismic repeat surveys are near the noise level of 1-2 ms; however, a change of 1 ms in the time delay significantly affects the mass estimate, so the choice of the time-delay cutoff is crucial. In spite

  1. Hierarchical estimation of a dense deformation field for 3-D robust registration.

    PubMed

    Hellier, P; Barillot, C; Mémin, E; Pérez, P

    2001-05-01

    A new method for medical image registration is formulated as a minimization problem involving robust estimators. We propose an efficient hierarchical optimization framework which is both multiresolution and multigrid. An anatomical segmentation of the cortex is introduced into the adaptive partitioning of the volume on which the multigrid minimization is based. This makes it possible to limit the estimation to the areas of interest, to accelerate the algorithm, and to refine the estimation in specified areas. At each stage of the hierarchical estimation, we refine the current estimate by seeking a piecewise affine model for the incremental deformation field. The performance of this method is numerically evaluated on simulated data, and its benefits and robustness are shown on a database of 18 magnetic resonance imaging scans of the head. PMID:11403198

  2. Building continental-scale 3D subsurface layers in the Digital Crust project: constrained interpolation and uncertainty estimation.

    NASA Astrophysics Data System (ADS)

    Yulaeva, E.; Fan, Y.; Moosdorf, N.; Richard, S. M.; Bristol, S.; Peters, S. E.; Zaslavsky, I.; Ingebritsen, S.

    2015-12-01

    The Digital Crust EarthCube building block creates a framework for integrating disparate 3D/4D information from multiple sources into a comprehensive model of the structure and composition of the Earth's upper crust, and demonstrates the utility of this model in several research scenarios. One such scenario is the estimation of various crustal properties related to fluid dynamics (e.g. permeability and porosity) at each node of an arbitrary unstructured 3D grid, to support continental-scale numerical models of fluid flow and transport. Starting from Macrostrat, an existing 4D database of 33,903 chronostratigraphic units, and employing GeoDeepDive, a software system for extracting structured information from unstructured documents, we construct 3D gridded fields of sediment/rock porosity, permeability and geochemistry for large sedimentary basins of North America, which will be used to improve our understanding of large-scale fluid flow, chemical weathering rates, and geochemical fluxes into the ocean. In this talk, we discuss the methods, data gaps (particularly in geologically complex terrain), and various physical and geological constraints on interpolation and uncertainty estimation.

  3. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    SciTech Connect

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-06-01

    Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1), the tumor localization errors and intensity differences for the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2), the tumor localization error and intensity difference were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT
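
    As a rough illustration of the motion-model step described above, the sketch below builds a PCA model from displacement vector fields and reconstructs a new field from a set of coefficients; the array shapes, component count and placeholder random data are assumptions, and the actual method optimizes the coefficients iteratively against measured cone-beam projections.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder displacement vector fields from deformable registration of the 4DCBCT
# phases: one flattened field (3 * n_voxels values) per respiratory phase.
n_phases, n_voxels = 10, 50_000
dvfs = np.random.randn(n_phases, 3 * n_voxels).astype(np.float32)

pca = PCA(n_components=2)            # a two-mode motion model
scores = pca.fit_transform(dvfs)     # per-phase coefficients

def synthesize_dvf(coeffs):
    """Reconstruct a displacement field from motion-model coefficients."""
    return pca.mean_ + coeffs @ pca.components_

new_field = synthesize_dvf(scores[0])   # e.g. reproduce the first phase
```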

  4. A statistical approach to estimate the 3D size distribution of spheres from 2D size distributions

    USGS Publications Warehouse

    Kong, M.; Bhattacharya, R.N.; James, C.; Basu, A.

    2005-01-01

    Size distribution of rigidly embedded spheres in a groundmass is usually determined from measurements of the radii of the two-dimensional (2D) circular cross sections of the spheres in random flat planes of a sample, such as in thin sections or polished slabs. Several methods have been devised to find a simple factor to convert the mean of such 2D size distributions to the actual 3D mean size of the spheres, without consensus. We derive an entirely theoretical solution based on well-established probability laws and not constrained by limitations of absolute size, which indicates that the ratio of the means of the measured 2D and estimated 3D grain size distributions should be π/4 (= 0.785). The actual 2D size distribution of the radii of submicron-sized, pure Fe0 globules in lunar agglutinitic glass, determined from backscattered electron images, is found to fit the gamma size distribution model better than the log-normal model. Numerical analysis of 2D size distributions of Fe0 globules in 9 lunar soils shows that the average 2D/3D ratio of means is 0.84, which is very close to the theoretical value. These results converge with the ratio 0.8 that Hughes (1978) determined for millimeter-sized chondrules from empirical measurements. We recommend that a factor of 1.273 (the reciprocal of 0.785) be used to convert the determined 2D mean size (radius or diameter) of a population of spheres to estimate their actual 3D size. © 2005 Geological Society of America.
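
    The conversion factor quoted above (π/4, reciprocal 1.273) can be applied directly; the snippet below is a trivial illustration with made-up 2D measurements.

```python
import numpy as np

def mean_3d_size_from_2d(sizes_2d):
    """Convert the mean of measured 2D cross-section sizes (radius or diameter)
    to the estimated 3D sphere size using the pi/4 ratio (factor ~1.273)."""
    return np.mean(sizes_2d) * 4.0 / np.pi

print(mean_3d_size_from_2d([0.8, 1.0, 1.2]))   # ~1.27 for a mean 2D size of 1.0
```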

  5. Real-time estimation of FLE statistics for 3-D tracking with point-based registration.

    PubMed

    Wiles, Andrew D; Peters, Terry M

    2009-09-01

    Target registration error (TRE) has become a widely accepted error metric in point-based registration since the error metric was introduced in the 1990s. It is particularly prominent in image-guided surgery (IGS) applications where point-based registration is used in both image registration and optical tracking. In point-based registration, the TRE is a function of the fiducial marker geometry, location of the target and the fiducial localizer error (FLE). While the first two items are easily obtained, the FLE is usually estimated using an a priori technique and applied without any knowledge of real-time information. However, if the FLE can be estimated in real-time, particularly as it pertains to optical tracking, then the TRE can be estimated more robustly. In this paper, a method is presented where the FLE statistics are estimated from the latest measurement of the fiducial registration error (FRE) statistics. The solution is obtained by solving a linear system of equations of the form Ax = b for each marker at each time frame where x are the six independent FLE covariance parameters and b are the six independent estimated FRE covariance parameters. The A matrix is only a function of the tool geometry and hence the inverse of the matrix can be computed a priori and used at each instant in which the FLE estimation is required, hence minimizing the level of computation at each frame. When using a good estimate of the FRE statistics, Monte Carlo simulations demonstrate that the root mean square of the FLE can be computed within a range of 70-90 μm. Robust estimation of the TRE for an optically tracked tool, using a good estimate of the FLE, will provide two enhancements in IGS. First, better patient to image registration will be obtained by using the TRE of the optical tool as a weighting factor of point-based registration used to map the patient to image space. Second, the directionality of the TRE can be relayed back to the surgeon giving the surgeon the option
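
    The per-frame computation described above reduces to a single matrix-vector product once the geometry matrix has been inverted; a minimal sketch, assuming the six covariance parameters are stored as plain 6-element vectors:

    # Minimal sketch of the Ax = b solve with a precomputed inverse.
    import numpy as np

    def precompute_geometry_inverse(A):
        """A: fixed 6x6 matrix determined by the tool geometry (per marker);
        its inverse is computed once, before tracking starts."""
        return np.linalg.inv(A)

    def estimate_fle_covariance(A_inv, fre_covariance_params):
        """Per frame: x = A^-1 b, where b holds the six estimated FRE
        covariance parameters and x the six FLE covariance parameters."""
        b = np.asarray(fre_covariance_params)   # shape (6,)
        return A_inv @ b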

  6. Leaf Area Index Estimation in Vineyards from Uav Hyperspectral Data, 2d Image Mosaics and 3d Canopy Surface Models

    NASA Astrophysics Data System (ADS)

    Kalisperakis, I.; Stentoumis, Ch.; Grammatikopoulos, L.; Karantzalos, K.

    2015-08-01

    The indirect estimation of leaf area index (LAI) at large spatial scales is crucial for several environmental and agricultural applications. To this end, in this paper, we compare and evaluate LAI estimation in vineyards from different UAV imaging datasets. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthophotomosaics and (iii) 3D crop surface models. The computed canopy levels have been used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (r2 > 73%) with the in-situ, ground truth LAI measurements. As expected, the lowest correlations were derived from the calculated greenness levels from the 2D RGB orthomosaics. The highest correlation rates were established with the hyperspectral canopy greenness and the 3D canopy surface models. For the latter, accurate detection of canopy, soil and other materials in between the vine rows is required. All approaches tend to overestimate LAI in cases with sparse, weak, unhealthy plants and canopy.

  7. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    NASA Astrophysics Data System (ADS)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of dental implant-supported prostheses is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points along the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implants' pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
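
    A minimal sketch of the final voxel-based rigid registration step, written here with SimpleITK as a stand-in toolkit (an assumption; the paper does not name its implementation) and with mutual information as the similarity metric:

    # Sketch only: aligns the simulated implant CBCT to the patient ROI and
    # returns the rigid transform from which the implant pose is read.
    import SimpleITK as sitk

    def rigid_register(patient_roi, simulated_cbct):
        fixed = sitk.Cast(patient_roi, sitk.sitkFloat32)
        moving = sitk.Cast(simulated_cbct, sitk.sitkFloat32)

        # Coarse initialization (steps 1-2 in the paper provide a better one).
        initial = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsRegularStepGradientDescent(
            learningRate=1.0, minStep=1e-4, numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(initial, inPlace=False)

        return reg.Execute(fixed, moving)   # rigid transform encoding the pose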

  8. Far and proximity maneuvers of a constellation of service satellites and autonomous pose estimation of customer satellite using machine vision

    NASA Astrophysics Data System (ADS)

    Arantes, Gilberto, Jr.; Marconi Rocco, Evandro; da Fonseca, Ijar M.; Theil, Stephan

    2010-05-01

    Space robotics has a substantial interest in achieving on-orbit satellite servicing operations autonomously, e.g. rendezvous and docking/berthing (RVD) with customer and malfunctioning satellites. An on-orbit servicing vehicle requires the ability to estimate position and attitude in situations where the target is uncooperative. Such a situation arises when the target is damaged. In this context, this work presents a robust autonomous pose estimation system applied to RVD missions. Our approach is based on computer vision, using a single camera and some previous knowledge of the target, i.e. the customer spacecraft. A rendezvous mission analysis tool for an autonomous service satellite has been developed and is presented, covering far maneuvers, e.g. distances above 1 km from the target, and close maneuvers. The far operations consist of orbit transfer using the Lambert formulation. The close operations include the inspection phase (during which the pose estimation is computed) and the final approach phase. Our approach is based on the Lambert problem for far maneuvers, and the Hill equations are used to simulate and analyze the approach and final trajectory between target and chaser during the last phase of the rendezvous operation. A method for optimally estimating the relative orientation and position between the camera system and the target is presented in detail. The target is modelled as an assembly of points. The pose of the target is represented by a dual quaternion in order to develop a simple quadratic error function, in such a way that the pose estimation task becomes a least-squares minimization problem. The pose problem is solved and several nonlinear least-squares optimization methods (Newton, Gauss-Newton, and Levenberg-Marquardt) are compared and discussed in terms of accuracy and computational cost.
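
    The least-squares formulation of the pose problem can be illustrated with a short sketch. For simplicity the pose is parameterized here by a rotation vector and a translation rather than the dual quaternion used in the paper, and a unit-focal pinhole camera is assumed; the Levenberg-Marquardt variant is used via SciPy.

    # Illustrative sketch only; initial pose must place the target in front
    # of the camera (positive z) and at least 3 point correspondences are
    # needed for the Levenberg-Marquardt solver.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def reprojection_residuals(pose, model_points, image_points):
        """pose = [rotation vector (3), translation (3)]."""
        rot, t = Rotation.from_rotvec(pose[:3]), pose[3:]
        cam = rot.apply(model_points) + t       # target points in camera frame
        proj = cam[:, :2] / cam[:, 2:3]         # pinhole projection, unit focal
        return (proj - image_points).ravel()

    def estimate_pose(model_points, image_points, pose0):
        sol = least_squares(reprojection_residuals, pose0,
                            args=(model_points, image_points),
                            method="lm")        # Levenberg-Marquardt
        return sol.x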

  9. Body mass estimations for Plateosaurus engelhardti using laser scanning and 3D reconstruction methods

    NASA Astrophysics Data System (ADS)

    Gunga, Hanns-Christian; Suthau, Tim; Bellmann, Anke; Friedrich, Andreas; Schwanebeck, Thomas; Stoinski, Stefan; Trippel, Tobias; Kirsch, Karl; Hellwich, Olaf

    2007-08-01

    Both body mass and surface area are factors determining the essence of any living organism. This should also hold true for an extinct organism such as a dinosaur. The present report discusses the use of a new 3D laser scanner method to establish body masses and surface areas of an Asian elephant (Zoological Museum of Copenhagen, Denmark) and of Plateosaurus engelhardti, a prosauropod from the Upper Triassic, exhibited at the Paleontological Museum in Tübingen (Germany). This method was used to study the effect that slight changes in body shape had on body mass for P. engelhardti. It was established that body volumes varied between 0.79 m3 (slim version) and 1.14 m3 (robust version), resulting in a presumable body mass of 630 and 912 kg, respectively. The total body surface areas ranged between 8.8 and 10.2 m2, of which, in both reconstructions of P. engelhardti, ~33% is accounted for by the thorax alone. The main difference between the two models is in the tail and hind limb reconstruction. The tail of the slim version has a surface area of 1.98 m2, whereas that of the robust version has a surface area of 2.73 m2. The body volumes calculated for the slim version were as follows: head 0.006 m3, neck 0.016 m3, fore limbs 0.020 m3, hind limbs 0.08 m3, thoracic cavity 0.533 m3, and tail 0.136 m3. For the robust model, the following volumes were established: head 0.01 m3, neck 0.026 m3, fore limbs 0.025 m3, hind limbs 0.18 m3, thoracic cavity 0.616 m3, and finally, tail 0.28 m3. Based on these body volumes, scaling equations were used to estimate the sizes of the organs of this extinct dinosaur.
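
    The volume-to-mass step implied by the reported figures is a one-line conversion; the density below (~800 kg/m3) is not stated in the abstract and is simply inferred from the reported volume/mass pairs.

    # Assumed density, inferred from 0.79 m^3 -> 630 kg and 1.14 m^3 -> 912 kg;
    # it is not a value taken from the paper.
    ASSUMED_DENSITY = 800.0            # kg per cubic metre

    def body_mass_kg(volume_m3, density=ASSUMED_DENSITY):
        return volume_m3 * density

    print(body_mass_kg(0.79))          # ~632 kg, close to the reported slim estimate
    print(body_mass_kg(1.14))          # ~912 kg, the reported robust estimate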

  10. Estimation of vocal fold plane in 3D CT images for diagnosis of vocal fold abnormalities.

    PubMed

    Hewavitharanage, Sajini; Gubbi, Jayavardhana; Thyagarajan, Dominic; Lau, Ken; Palaniswami, Marimuthu

    2015-01-01

    Vocal folds are the key body structures that are responsible for phonation and for regulating air movement into and out of the lungs. Various vocal fold disorders may seriously impact the quality of life. When diagnosing vocal fold disorders, CT of the neck is the commonly used imaging method. However, the vocal folds do not align with the normal axial plane of the neck, and the plane containing the vocal cords and arytenoids varies during phonation. It is therefore important to develop an algorithm for detecting the actual plane containing the vocal folds. In this paper, we propose a method to automatically estimate the vocal fold plane using vertebral column and anterior commissure localization. Gray-level thresholding, connected component analysis, rule-based segmentation and unsupervised k-means clustering were used in the proposed algorithm. The anterior commissure segmentation method achieved an accuracy of 85%, in good agreement with the expert assessment. PMID:26736949

  11. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for the purpose of extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component of robotic navigation and mapping applications.

  12. Estimation and 3-D modeling of seismic parameters for fluvial systems

    SciTech Connect

    Brown, R.L.; Levey, R.A.

    1994-12-31

    Borehole measurements of parameters related to seismic propagation (Vp, Vs, Qp and Qs) are seldom available at all the wells within an area of study. Well logs and other available data can be used along with certain results from laboratory measurements to predict seismic parameters at wells where these measurements are not available. Next, three dimensional interpolation techniques based upon geological constraints can then be used to estimate the spatial distribution of geophysical parameters within a given environment. The net product is a more realistic model of the distribution of geophysical parameters which can be used in the design of surface and borehole seismic methods for probing the reservoir.

  13. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiducial markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phones' displays, as if the display was a window into this space. Besides lines, our prototype application also supports 3D geometry creation, geometry transformation operations, and it shows the location of the other user's phone.
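
    The per-phone pose recovery from room-mounted markers can be sketched as a standard PnP solve; OpenCV's solvePnP is used here only as a stand-in, since the abstract does not name the prototype's marker-tracking library.

    # Sketch only: needs at least four marker point correspondences and the
    # phone camera's intrinsic matrix.
    import numpy as np
    import cv2

    def phone_pose(marker_points_3d, marker_points_2d, camera_matrix):
        dist_coeffs = np.zeros((5, 1))            # assume undistorted images
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(marker_points_3d, dtype=np.float64),
            np.asarray(marker_points_2d, dtype=np.float64),
            camera_matrix, dist_coeffs)
        if not ok:
            raise RuntimeError("PnP solve failed")
        R, _ = cv2.Rodrigues(rvec)                # rotation matrix of the phone
        return R, tvec                            # pose in the marker frame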

  14. Real-Time Estimation of 3-D Needle Shape and Deflection for MRI-Guided Interventions

    PubMed Central

    Park, Yong-Lae; Elayaperumal, Santhi; Daniel, Bruce; Ryu, Seok Chang; Shin, Mihye; Savall, Joan; Black, Richard J.; Moslehi, Behzad; Cutkosky, Mark R.

    2015-01-01

    We describe an MRI-compatible biopsy needle instrumented with optical fiber Bragg gratings for measuring bending deflections of the needle as it is inserted into tissues. During procedures, such as diagnostic biopsies and localized treatments, it is useful to track any tool deviation from the planned trajectory to minimize positioning errors and procedural complications. The goal is to display tool deflections in real time, with greater bandwidth and accuracy than when viewing the tool in MR images. A standard 18 ga × 15 cm inner needle is prepared using a fixture, and 350-μm-deep grooves are created along its length. Optical fibers are embedded in the grooves. Two sets of sensors, located at different points along the needle, provide an estimate of the bent profile, as well as temperature compensation. Tests of the needle in a water bath showed that it produced no adverse imaging artifacts when used with the MR scanner. PMID:26405428

  15. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most

  16. Estimation of aortic valve leaflets from 3D CT images using local shape dictionaries and linear coding

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James

    2016-03-01

    Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images through many hours of manual annotation, and therefore an automatic valve shape reconstruction method is desired. In this paper, we present a method for estimating the aortic valve shape from 3D cardiac CT images, which is represented by triangle meshes. We propose a pipeline for aortic valve shape estimation which includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on a real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.

  17. A computational model for estimating tumor margins in complementary tactile and 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Shamsil, Arefin; Escoto, Abelardo; Naish, Michael D.; Patel, Rajni V.

    2016-03-01

    Conventional surgical methods are effective for treating lung tumors; however, they impose significant trauma and pain on patients. Minimally invasive surgery is a safer alternative as smaller incisions are required to reach the lung; however, it is challenging due to inadequate intraoperative tumor localization. To address this issue, a mechatronic palpation device was developed that incorporates tactile and ultrasound sensors capable of acquiring surface and cross-sectional images of palpated tissue. Initial work focused on tactile image segmentation and fusion of position-tracked tactile images, resulting in a reconstruction of the palpated surface to compute the spatial locations of underlying tumors. This paper presents a computational model capable of analyzing orthogonally-paired tactile and ultrasound images to compute the surface circumference and depth margins of a tumor. The framework also integrates an error compensation technique and an algebraic model to align all of the image pairs and to estimate the tumor depths within the tracked thickness of a palpated tissue. For validation, an ex vivo experimental study was conducted involving the complete palpation of 11 porcine liver tissues injected with iodine-agar tumors of varying sizes and shapes. The resulting tactile and ultrasound images were then processed using the proposed model to compute the tumor margins and compare them to fluoroscopy-based physical measurements. The results show a good negative correlation (r = -0.783, p = 0.004) for the tumor surface margins and a good positive correlation (r = 0.743, p = 0.009) for the tumor depth margins.

  18. Accurate 3D rigid-body target motion and structure estimation by using GMTI/HRR with template information

    NASA Astrophysics Data System (ADS)

    Wu, Shunguang; Hong, Lang

    2008-04-01

    A framework of simultaneously estimating the motion and structure parameters of a 3D object by using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information is given. By decoupling the motion and structure information and employing rigid-body constraints, we have developed the kinematic and measurement equations of the problem. Since the kinematic system is unobservable by using only one scan HRR and GMTI measurements, we designed an architecture to run the motion and structure filters in parallel by using multi-scan measurements. Moreover, to improve the estimation accuracy in large noise and/or false alarm environments, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results have shown that the averaged root mean square errors for both motion and structure state vectors have been significantly reduced by using the template information.

  19. Landscape scale estimation of soil carbon stock using 3D modelling.

    PubMed

    Veronesi, F; Corstanje, R; Mayr, T

    2014-07-15

    Soil C is the largest pool of carbon in the terrestrial biosphere, and yet the processes of C accumulation, transformation and loss are poorly accounted for. This, in part, is due to the fact that soil C is not uniformly distributed through the soil depth profile, and most current landscape-level predictions of C do not adequately account for the vertical distribution of soil C. In this study, we apply a method based on simple soil-specific depth functions to map the soil C stock in three dimensions at the landscape scale. We used soil C and bulk density data from the Soil Survey for England and Wales to map an area in the West Midlands region of approximately 13,948 km². We applied a method which describes the variation through the soil profile and interpolates this across the landscape using well-established soil drivers such as relief, land cover and geology. The results indicate that this mapping method can effectively reproduce the observed variation in the soil profile samples. The mapping results were validated using cross-validation and an independent validation. The cross-validation resulted in an R² of 36% for soil C and 44% for BULKD. These results are generally in line with previous validated studies. In addition, an independent validation was undertaken, comparing the predictions against the National Soil Inventory (NSI) dataset. The majority of the residuals of this validation are within ±5% of soil C. This indicates a high level of accuracy in replicating topsoil values. In addition, the results were compared to a previous study estimating the carbon stock of the UK. We discuss the implications of our results within the context of soil C loss factors such as erosion and the impact on regional C process models. PMID:24636454

  20. Baseline Face Detection, Head Pose Estimation, and Coarse Direction Detection for Facial Data in the SHRP2 Naturalistic Driving Study

    SciTech Connect

    Paone, Jeffrey R; Bolme, David S; Ferrell, Regina Kay; Aykac, Deniz; Karnowski, Thomas Paul

    2015-01-01

    Keeping a driver focused on the road is one of the most critical steps in ensuring the safe operation of a vehicle. The Strategic Highway Research Program 2 (SHRP2) has over 3,100 recorded videos of volunteer drivers during a period of 2 years. This extensive naturalistic driving study (NDS) contains over one million hours of video and associated data that could aid safety researchers in understanding where the driver's attention is focused. Manual analysis of this data is infeasible; therefore, efforts are underway to develop automated feature extraction algorithms to process and characterize the data. The real-world nature, volume, and acquisition conditions are unmatched in the transportation community, but there are also challenges because the data has relatively low resolution, high compression rates, and differing illumination conditions. A smaller dataset, the head pose validation study, is available, which used the same recording equipment as SHRP2 but is more easily accessible with fewer privacy constraints. In this work we report initial head pose accuracy using commercial and open-source face pose estimation algorithms on the head pose validation data set.

  1. 3D Wind Reconstruction and Turbulence Estimation in the Boundary Layer from Doppler Lidar Measurements using Particle Method

    NASA Astrophysics Data System (ADS)

    Rottner, L.; Baehr, C.

    2014-12-01

    Turbulent phenomena in the atmospheric boundary layer (ABL) are characterized by small spatial and temporal scales which make them difficult to observe and to model. New remote sensing instruments, like Doppler Lidar, give access to fine and high-frequency observations of wind in the ABL. This study proposes a nonlinear estimation method based on these observations to reconstruct 3D wind in a hemispheric volume, and to estimate atmospheric turbulence parameters. The wind observations are associated with particle systems which are driven by a local turbulence model. The particles have both fluid and stochastic properties. Therefore, spatial averages and covariances may be deduced from the particles. Among the innovative aspects, we point out the absence of the common hypothesis of stationary-ergodic turbulence and the non-use of a particle model closure hypothesis. Every time observations are available, 3D wind is reconstructed and turbulence parameters such as turbulent kinetic energy, dissipation rate, and turbulence intensity (TI) are provided. This study presents some results obtained using real wind measurements provided by a five-line-of-sight Lidar. Compared with classical methods (e.g. eddy covariance), our technique yields equivalent long-time results. Moreover, it provides finer, real-time turbulence estimates. To assess this new method, we compute TI independently using different observation types. First, anemometer data are used as the TI reference. Then raw and filtered Lidar observations are compared. The TI obtained from raw data is significantly higher than the reference, whereas the TI estimated with the new algorithm is of the same order. In this study we have presented a new class of algorithms to reconstruct local random media. It offers a new way to understand turbulence in the ABL, in both stable and convective conditions. Later, it could be used to refine turbulence parametrization in meteorological meso-scale models.

  2. Estimating porosity with ground-penetrating radar reflection tomography: A controlled 3-D experiment at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Bradford, John H.; Clement, William P.; Barrash, Warren

    2009-04-01

    To evaluate the uncertainty of water-saturated sediment velocity and porosity estimates derived from surface-based, ground-penetrating radar reflection tomography, we conducted a controlled field experiment at the Boise Hydrogeophysical Research Site (BHRS). The BHRS is an experimental well field located near Boise, Idaho. The experimental data set consisted of 3-D multioffset radar acquired on an orthogonal 20 × 30 m surface grid that encompassed a set of 13 boreholes. Experimental control included (1) 1-D vertical velocity functions determined from traveltime inversion of vertical radar profiles (VRP) and (2) neutron porosity logs. We estimated the porosity distribution in the saturated zone using both the Topp and Complex Refractive Index Method (CRIM) equations and found the CRIM estimates in better agreement with the neutron logs. We found that when averaged over the length of the borehole, surface-derived velocity measurements were within 5% of the VRP velocities and that the porosity differed from the neutron log by less than 0.05. The uncertainty, however, is scale dependent. We found that the standard deviation of differences between ground-penetrating-radar-derived and neutron-log-derived porosity values was as high as 0.06 at an averaging length of 0.25 m but decreased to less than 0.02 at length scale of 11 m. Additionally, we used the 3-D porosity distribution to identify a relatively high-porosity anomaly (i.e., local sedimentary body) within a lower-porosity unit and verified the presence of the anomaly using the neutron porosity logs. Since the reflection tomography approach requires only surface data, it can provide rapid assessment of bulk hydrologic properties, identify meter-scale anomalies of hydrologic significance, and may provide input for other higher-resolution measurement methods.
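
    The CRIM-based step from radar velocity to porosity in the saturated zone can be sketched as follows; the matrix and water permittivities are typical assumed values, not those calibrated in the study.

    # Sketch only: two-phase (water-saturated) CRIM mixing model.
    import numpy as np

    C = 0.2998   # free-space radar velocity, m/ns

    def bulk_permittivity(velocity_m_per_ns):
        return (C / velocity_m_per_ns) ** 2

    def crim_porosity(velocity_m_per_ns, eps_matrix=4.6, eps_water=80.0):
        eps_bulk = bulk_permittivity(velocity_m_per_ns)
        return ((np.sqrt(eps_bulk) - np.sqrt(eps_matrix))
                / (np.sqrt(eps_water) - np.sqrt(eps_matrix)))

    print(crim_porosity(0.085))   # ~0.20 for a typical saturated-sediment velocity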

  3. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput complement to direct brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s-1, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark—the control of the video arcade game ‘Pong’.
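
    One simple way to realize the 3D (vergence-based) part of the gaze estimate is to intersect the two eye rays in a least-squares sense; this is an illustrative sketch, not the calibration pipeline of the described system.

    # Sketch only: point closest to both gaze rays (one ray per eye).
    import numpy as np

    def closest_point_to_rays(origins, directions):
        """origins, directions: (2, 3) arrays; directions need not be unit."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, np.asarray(directions, dtype=float)):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
            A += P
            b += P @ o
        return np.linalg.solve(A, b)          # least-squares intersection point

    eyes = np.array([[-0.03, 0.0, 0.0], [0.03, 0.0, 0.0]])    # eye centres, m
    gaze = np.array([[0.06, 0.0, 1.0], [-0.06, 0.0, 1.0]])    # gaze directions
    print(closest_point_to_rays(eyes, gaze))                  # ~[0, 0, 0.5]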

  4. Comparison of different approaches of estimating effective dose from reported exposure data in 3D imaging with interventional fluoroscopy systems

    NASA Astrophysics Data System (ADS)

    Svalkvist, Angelica; Hansson, Jonny; Bâth, Magnus

    2014-03-01

    Three-dimensional (3D) imaging with interventional fluoroscopy systems is today a common examination. The examination includes acquisition of two-dimensional projection images, used to reconstruct section images of the patient. The aim of the present study was to investigate the difference in resulting effective dose obtained using different levels of complexity in calculations of effective doses from these examinations. In the study the Siemens Artis Zeego interventional fluoroscopy system (Siemens Medical Solutions, Erlangen, Germany) was used. Images of anthropomorphic chest and pelvis phantoms were acquired. The exposure values obtained were used to calculate the resulting effective doses from the examinations, using the computer software PCXMC (STUK, Helsinki, Finland). The dose calculations were performed using three different methods: 1. using individual exposure values for each projection image, 2. using the mean tube voltage and the total DAP value, evenly distributed over the projection images, and 3. using the mean tube voltage and the total DAP value, evenly distributed over a smaller selection of projection images. The results revealed that the difference in resulting effective dose between the first two methods was smaller than 5%. When only a selection of projection images was used in the dose calculations, the difference increased to over 10%. Given the uncertainties associated with the effective dose concept, the results indicate that dose calculations based on average exposure values distributed over a smaller selection of projection angles can provide reasonably accurate estimations of the radiation doses from 3D imaging using interventional fluoroscopy systems.

  5. Estimation of the environmental risk posed by landfills using chemical, microbiological and ecotoxicological testing of leachates.

    PubMed

    Matejczyk, Marek; Płaza, Grażyna A; Nałęcz-Jawecki, Grzegorz; Ulfig, Krzysztof; Markowska-Szczupak, Agata

    2011-02-01

    parameters of the landfill leachates should be analyzed together to assess the environmental risk posed by landfill emissions. PMID:21087786

  6. Distributed consensus on camera pose.

    PubMed

    Jorstad, Anne; DeMenthon, Daniel; Wang, I-Jeng; Burlina, Philippe

    2010-09-01

    Our work addresses pose estimation in a distributed camera framework. We examine how processing cameras can best reach a consensus about the pose of an object when they are each given a model of the object, defined by a set of point coordinates in the object frame of reference. The cameras can only see a subset of the object feature points in the midst of background clutter points, not knowing which image points match with which object points, nor which points are object points or background points. The cameras individually recover a prediction of the object's pose using their knowledge of the model, and then exchange information with their neighbors, performing consensus updates locally to obtain a single estimate consistent across all cameras, without requiring a common centralized processor. Our main contributions are: 1) we present a novel algorithm performing consensus updates in 3-D world coordinates penalized by a 3-D model, and 2) we perform a thorough comparison of our method with other current consensus methods. Our method is consistently the most accurate, and we confirm that the existing consensus method based upon calculating the Karcher mean of rotations is also reliable and fast. Experiments on simulated and real imagery are reported. PMID:20363678
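
    The Karcher mean of rotations referenced above can be computed with a short fixed-point iteration in the tangent space; this sketch shows only the averaging step, not the distributed neighbour-to-neighbour consensus updates of the paper.

    # Sketch only: geodesic (Karcher) mean of a set of rotations.
    import numpy as np
    from scipy.spatial.transform import Rotation

    def karcher_mean(rotations, iters=20, tol=1e-9):
        mean = rotations[0]
        for _ in range(iters):
            # Average the relative rotations in the tangent space at the mean.
            residual = np.mean([(mean.inv() * r).as_rotvec() for r in rotations],
                               axis=0)
            if np.linalg.norm(residual) < tol:
                break
            mean = mean * Rotation.from_rotvec(residual)
        return mean

    cams = [Rotation.from_euler("z", a, degrees=True) for a in (8.0, 10.0, 12.0)]
    print(karcher_mean(cams).as_euler("zyx", degrees=True))   # ~[10, 0, 0]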

  7. Reproducing Electric Field Observations during Magnetic Storms by means of Rigorous 3-D Modelling and Distortion Matrix Co-estimation

    NASA Astrophysics Data System (ADS)

    Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey

    2015-04-01

    Electric fields induced in the conducting Earth during magnetic storms drive currents in power transmission grids, telecommunication lines or buried pipelines. These geomagnetically induced currents (GIC) can cause severe service disruptions. The prediction of GIC is thus of great importance for the public and for industry. A key step in the prediction of the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we developed a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a model of the magnetospheric source. The latter is described by low-degree spherical harmonics; its temporal evolution is derived from observatory magnetic data. Time series of the electric field can be computed for every location on Earth's surface. The actual electric field, however, is known to be perturbed by galvanic effects, arising from very local near-surface heterogeneities or topography, which cannot be included in the conductivity model. Galvanic effects are commonly accounted for with a real-valued time-independent distortion matrix, which linearly relates measured and computed electric fields. Using data of various magnetic storms that occurred between 2000 and 2003, we estimated distortion matrices for observatory sites onshore and on the ocean bottom. Strong correlations between modelled and measured fields validate our method. The distortion matrix estimates prove to be reliable, as they are accurately reproduced for different magnetic storms. We further show that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of electric field time series during magnetic storms. Since the required computational resources are negligible, our approach is suitable for a real-time prediction of GIC. For this purpose, a reliable forecast of the source field, e.g. based on data from satellites
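
    The distortion-matrix step can be sketched as an ordinary linear least-squares fit between modelled and measured horizontal electric-field time series at a site; the shapes and component ordering below are assumptions, not the paper's implementation.

    # Sketch only: E_measured ~= D @ E_modelled, with a real 2x2 matrix D.
    import numpy as np

    def estimate_distortion_matrix(e_modelled, e_measured):
        """e_modelled, e_measured: (n_times, 2) horizontal field components."""
        # Solve e_modelled @ X = e_measured in least squares, then D = X.T.
        X, *_ = np.linalg.lstsq(e_modelled, e_measured, rcond=None)
        return X.T

    # Quick check on synthetic data with a known distortion.
    rng = np.random.default_rng(0)
    D_true = np.array([[1.2, 0.3], [-0.1, 0.8]])
    e_mod = rng.normal(size=(500, 2))
    e_meas = e_mod @ D_true.T + 0.01 * rng.normal(size=(500, 2))
    print(estimate_distortion_matrix(e_mod, e_meas))   # ~D_true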

  8. Robust Vision-Based Pose Estimation Algorithm for AN Uav with Known Gravity Vector

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2016-06-01

    Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of autonomous UAV flight. Such applications require real-time performance and robustness of the external orientation estimation algorithm. The accuracy of the solution is strongly dependent on the number of reference points visible in the given image. The problem only has an analytical solution if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In such a case the solution can be found if the gravity vector direction in the camera coordinate system is known. A number of algorithms for external orientation estimation for the case of 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation that is subject to large errors in the case of complex reference point configurations. This paper is focused on the development of a new computationally efficient and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. The experimental evaluation of the algorithm proved its computational efficiency and robustness against errors in reference point positions and complex configurations.

  9. Geometrical pose and structural estimation from a single image for automatic inspection of filter components

    NASA Astrophysics Data System (ADS)

    Liu, Yonghuai; Rodrigues, Marcos A.

    2000-03-01

    This paper describes research on the application of machine vision techniques to a real time automatic inspection task of air filter components in a manufacturing line. A novel calibration algorithm is proposed based on a special camera setup where defective items would show a large calibration error. The algorithm makes full use of rigid constraints derived from the analysis of geometrical properties of reflected correspondence vectors which have been synthesized into a single coordinate frame and provides a closed form solution to the estimation of all parameters. For a comparative study of performance, we also developed another algorithm based on this special camera setup using epipolar geometry. A number of experiments using synthetic data have shown that the proposed algorithm is generally more accurate and robust than the epipolar geometry based algorithm and that the geometric properties of reflected correspondence vectors provide effective constraints to the calibration of rigid body transformations.

  10. A hybrid 3D-Var data assimilation scheme for joint state and parameter estimation: application to morphodynamic modelling

    NASA Astrophysics Data System (ADS)

    Smith, P.; Nichols, N. K.; Dance, S.

    2011-12-01

    Data assimilation is typically used to provide initial conditions for state estimation, combining model predictions with observational data to produce an updated model state that most accurately characterises the true system state whilst keeping the model parameters fixed. This updated model state is then used to initiate the next model forecast. However, even with perfect initial data, inaccurate representation of model parameters will lead to the growth of model error and therefore affect the ability of our model to accurately predict the true system state. A key question in model development is how to estimate parameters a priori. In most cases, parameter estimation is addressed as a separate issue to state estimation and model calibration is performed offline in a separate calculation. Here we demonstrate how, by employing the technique of state augmentation, it is possible to use data assimilation to estimate uncertain model parameters concurrently with the model state as part of the assimilation process. We present a novel hybrid data assimilation algorithm developed for application to parameter estimation in morphodynamic models. The new approach is based on a computationally inexpensive 3D-Var scheme, where the specification of the covariance matrices is crucial for success. For combined state-parameter estimation, it is particularly important that the cross-covariances between the parameters and the state are given a good a priori specification. Early experiments indicated that in order to yield reliable estimates of the true parameters, a flow-dependent representation of the state-parameter cross-covariances is required. By combining ideas from 3D-Var and the extended Kalman filter we have developed a novel hybrid assimilation scheme that captures the flow-dependent nature of the state-parameter cross-covariances without the computational expense of explicitly propagating the full system covariance matrix. We will give details of the formulation of this
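
    The state-augmentation idea can be illustrated with a small sketch in which the parameters are appended to the state vector and both are updated in a single analysis step; the matrices here are generic placeholders, not the morphodynamic system or the hybrid covariance model of the paper.

    # Sketch only: one joint state-parameter analysis with an augmented vector.
    import numpy as np

    def augmented_analysis(xb, pb, B, R, y, H):
        """xb: background state, pb: background parameters, B: background
        error covariance of the augmented vector [x; p] (its off-diagonal
        blocks are the state-parameter cross-covariances), R: observation
        error covariance, y: observations, H: observation operator acting on
        the augmented vector (its parameter columns are zero when only the
        state is observed)."""
        zb = np.concatenate([xb, pb])
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # augmented gain
        za = zb + K @ (y - H @ zb)                      # joint analysis
        return za[:len(xb)], za[len(xb):]               # updated state, parameters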

  11. Dosimetry in radiotherapy using a-Si EPIDs: Systems, methods, and applications focusing on 3D patient dose estimation

    NASA Astrophysics Data System (ADS)

    McCurdy, B. M. C.

    2013-06-01

    An overview is provided of the use of amorphous silicon electronic portal imaging devices (EPIDs) for dosimetric purposes in radiation therapy, focusing on 3D patient dose estimation. EPIDs were originally developed to provide on-treatment radiological imaging to assist with patient setup, but there has also been a natural interest in using them as dosimeters since they use the megavoltage therapy beam to form images. The current generation of clinically available EPID technology, amorphous-silicon (a-Si) flat panel imagers, possesses many characteristics that make them much better suited to dosimetric applications than earlier EPID technologies. Features such as linearity with dose/dose rate, high spatial resolution, real-time capability, minimal optical glare, and digital operation combine with the convenience of a compact, retractable detector system directly mounted on the linear accelerator to provide a system that is well-suited to dosimetric applications. This review will discuss clinically available a-Si EPID systems, highlighting dosimetric characteristics and remaining limitations. Methods for using EPIDs in dosimetry applications will be discussed. Dosimetric applications using a-Si EPIDs to estimate three-dimensional dose in the patient during treatment will be overviewed. Clinics throughout the world are implementing increasingly complex treatments such as dynamic intensity modulated radiation therapy and volumetric modulated arc therapy, as well as specialized treatment techniques using large doses per fraction and short treatment courses (i.e. hypofractionation and stereotactic radiosurgery). These factors drive the continued strong interest in using EPIDs as dosimeters for patient treatment verification.

  12. Application of unscented Kalman filter for robust pose estimation in image-guided surgery

    NASA Astrophysics Data System (ADS)

    Vaccarella, Alberto; De Momi, Elena; Valenti, Marta; Ferrigno, Giancarlo; Enquobahrie, Andinet

    2012-02-01

    Image-guided surgery (IGS) allows clinicians to view current, intra-operative scenes superimposed on preoperative images (typically MRI or CT scans). IGS systems use localization systems to track and visualize surgical tools overlaid on top of preoperative images of the patient during surgery. The most commonly used localization systems in the Operating Room (OR) are optical tracking systems (OTSs) due to their ease of use and cost effectiveness. However, OTSs suffer from the major drawback of line-of-sight requirements. State space approaches based on different implementations of the Kalman filter have recently been investigated in order to compensate for short line-of-sight occlusion. However, the proposed parameterizations for the rigid body orientation suffer from singularities at certain values of rotation angles. The purpose of this work is to develop a quaternion-based Unscented Kalman Filter (UKF) for robust optical tracking of both position and orientation of surgical tools in order to compensate for marker occlusion issues. This paper presents preliminary results towards a Kalman-based Sensor Management Engine (SME). The engine will filter and fuse multimodal tracking streams of data. This work was motivated by our experience working in robot-based applications for keyhole neurosurgery (ROBOCAST project). The algorithm was evaluated using real data from an NDI Polaris tracker. The results show that our estimation technique is able to compensate for marker occlusion with a maximum error of 2.5° for orientation and 2.36 mm for position. The proposed approach will be useful in over-crowded state-of-the-art ORs where achieving continuous visibility of all tracked objects is difficult.

  13. Psychophysical estimation of 3D virtual depth of united, synthesized and mixed type stereograms by means of simultaneous observation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    2007-02-01

    Recently, many types of computer-generated stereograms (CGSs), i.e. various works of art produced by using computers, have been published for hobby and entertainment purposes. It is said that brain activation, improvement of visual eyesight, reduction of mental stress, healing effects, etc. can be expected when a CGS is properly appreciated as a stereoscopic view. There is a great deal of information on internet web sites concerning all aspects of stereogram history, science, social organization, various types of stereograms, and free software for generating CGSs. Generally, CGSs are classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each stereogram has advantages and disadvantages when viewed directly with two eyes, which requires training and a little patience. In this study, the characteristics of united, synthesized and mixed type stereograms, the role and composition of the depth map image (DMI), also called the hidden image or picture, and the effect of irregular shifts of the texture pattern image, called wallpaper, are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and the visual quality of the virtual image by means of simultaneous observation.

  14. Estimation of water saturated permeability of soils, using 3D soil tomographic images and pore-level transport phenomena modelling

    NASA Astrophysics Data System (ADS)

    Lamorski, Krzysztof; Sławiński, Cezary; Barna, Gyöngyi

    2014-05-01

    Important macroscopic properties of soil porous media include saturated permeability and water retention characteristics. These soil characteristics are very important as they determine soil transport processes and are commonly used as parameters of general models of soil transport processes, which are used extensively in scientific development and engineering practice. These characteristics are usually measured or estimated using statistical or phenomenological modelling, i.e. pedotransfer functions. On a physical basis, saturated soil permeability arises from physical transport processes occurring at the pore level. Current progress in modelling techniques, computational methods and X-ray micro-tomographic technology gives the opportunity to use direct methods of physical modelling for pore-level transport processes. A physically valid description of transport processes at the micro-scale, based on a Navier-Stokes type modelling approach, gives the chance to recover macroscopic porous medium characteristics from micro-flow modelling. Water micro-flow transport processes occurring at the pore level depend on the microstructure of the porous body and the interactions between the fluid and the medium. In the case of soils, relatively big pores exist in which water can move easily, but finer pores are also present in which water transport processes are dominated by strong interactions between the medium and the fluid - a full physical description of these phenomena is a challenge. Ten samples of different soils were scanned using an X-ray computational microtomograph. The diameter of the samples was 5 mm. The voxel resolution of the CT scan was 2.5 µm. The resulting 3D soil sample images were used for reconstruction of the pore space for further modelling. 3D image thresholding was performed to determine the soil grain surface. This surface was triangulated and used for computational mesh construction for the pore space. Numerical modelling of water flow through the

  15. Extended Kalman Filter-Based Methods for Pose Estimation Using Visual, Inertial and Magnetic Sensors: Comparative Analysis and Performance Evaluation

    PubMed Central

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2013-01-01

    In this paper measurements from a monocular vision system are fused with inertial/magnetic measurements from an Inertial Measurement Unit (IMU) rigidly connected to the camera. Two Extended Kalman filters (EKFs) were developed to estimate the pose of the IMU/camera sensor moving relative to a rigid scene (ego-motion), based on a set of fiducials. The two filters were identical with respect to the state equation and the measurement equations of the inertial/magnetic sensors. The DLT-based EKF exploited visual estimates of the ego-motion using a variant of the Direct Linear Transformation (DLT) method; the error-driven EKF exploited pseudo-measurements based on the projection errors from measured two-dimensional point features to the corresponding three-dimensional fiducials. The two filters were analyzed off-line in different experimental conditions and compared to a purely IMU-based EKF used for estimating the orientation of the IMU/camera sensor. The DLT-based EKF was more accurate than the error-driven EKF, less robust against loss of visual features, and equivalent in terms of computational complexity. Orientation root mean square errors (RMSEs) of 1° (1.5°), and position RMSEs of 3.5 mm (10 mm) were achieved in our experiments by the DLT-based EKF (error-driven EKF); by contrast, orientation RMSEs of 1.6° were achieved by the purely IMU-based EKF. PMID:23385409

  16. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydın

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required. Accurate pose estimates are required to increase reliability and decrease jitter overall. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initializations or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, which is based on fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, which is based on estimation of optical flow using intensity and shape index map data of the 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes. PMID:23955795

  17. Protocol for Translabial 3D-Ultrasonography for diagnosing levator defects (TRUDIL): a multicentre cohort study for estimating the diagnostic accuracy of translabial 3D-ultrasonography of the pelvic floor as compared to MR imaging

    PubMed Central

    2011-01-01

    Background Pelvic organ prolapse (POP) is a condition affecting more than half of the women above age 40. The estimated lifetime risk of needing surgical management for POP is 11%. In patients undergoing POP surgery of the anterior vaginal wall, the re-operation rate is 30%. The recurrence risk is especially high in women with a levator ani defect. Such a defect is present if there is a partial or complete detachment of the levator ani from the inferior ramus of the symphysis. Detecting levator ani defects is relevant for counseling, and probably also for treatment. Levator ani defects can be imaged with MRI and also with translabial 3D ultrasonography of the pelvic floor. The primary aim of this study is to assess the diagnostic accuracy of translabial 3D ultrasonography for diagnosing levator defects in women with POP, with Magnetic Resonance Imaging as the reference standard. Secondary goals of this study include quantification of the inter-observer agreement about levator ani defects and determining the association between levator defects and recurrent POP after anterior repair. In addition, the cost-effectiveness of adding translabial ultrasonography to the diagnostic work-up in patients with POP will be estimated in a decision analytic model. Methods/Design A multicentre cohort study will be performed in nine Dutch hospitals. 140 consecutive women with POPQ stage 2 or higher anterior vaginal wall prolapse, who are indicated for anterior colporrhaphy, will be included. Patients undergoing additional prolapse procedures will also be included. Prior to surgery, patients will undergo MR imaging and translabial 3D ultrasound examination of the pelvic floor. Patients will be asked to complete validated disease-specific quality of life questionnaires before surgery and at six and twelve months after surgery. Pelvic examination will be performed at the same time points. Assuming a sensitivity and specificity of 90% of 3D ultrasound for diagnosing levator defects in a

  18. Advances in 3D soil mapping and water content estimation using multi-channel ground-penetrating radar

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.

    2011-12-01

    Multi-channel ground-penetrating radar systems have recently become widely available, thereby opening new possibilities for shallow imaging of the subsurface. One advantage of these systems is that they can significantly reduce survey times by simultaneously collecting multiple lines of GPR reflection data. As a result, it is becoming more practical to complete 3D surveys - particularly in situations where the subsurface undergoes rapid changes, e.g., when monitoring infiltration and redistribution of water in soils. While 3D and 4D surveys can provide a degree of clarity that significantly improves interpretation of the subsurface, an even more powerful feature of the new multi-channel systems for hydrologists is their ability to collect data using multiple antenna offsets. Common mid-point (CMP) surveys have been widely used to estimate radar wave velocities, which can be related to water contents, by sequentially increasing the distance, i.e., offset, between the source and receiver antennas. This process is highly labor-intensive using single-channel systems and therefore such surveys are often only performed at a few locations at any given site. In contrast, with multi-channel GPR systems it is possible to physically arrange an array of antennas at different offsets, such that a CMP-style survey is performed at every point along a radar transect. It is then possible to process this data to obtain detailed maps of wave velocity with a horizontal resolution on the order of centimeters. In this talk I review concepts underlying multi-channel GPR imaging with an emphasis on multi-offset profiling for water content estimation. Numerical simulations are used to provide examples that illustrate situations where multi-offset GPR profiling is likely to be successful, with an emphasis on considering how issues like noise, soil heterogeneity, vertical variations in water content and weak reflection returns affect algorithms for automated analysis of the data. Overall
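
    A common way to carry out the velocity-to-water-content step in multi-offset GPR profiling is Topp's empirical relation; using Topp's equation here is an assumption, as the abstract does not prescribe a particular petrophysical model.

    # Sketch only: radar velocity -> apparent dielectric constant -> volumetric
    # water content via Topp's (1980) polynomial.
    C = 0.2998   # free-space velocity, m/ns

    def dielectric_constant(velocity_m_per_ns):
        return (C / velocity_m_per_ns) ** 2

    def topp_water_content(velocity_m_per_ns):
        k = dielectric_constant(velocity_m_per_ns)
        return -5.3e-2 + 2.92e-2 * k - 5.5e-4 * k**2 + 4.3e-6 * k**3

    print(topp_water_content(0.06))   # ~0.40 for a wet soil (v = 0.06 m/ns)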

  19. Potential Geophysical Field Transformations and Combined 3D Modelling for Estimation the Seismic Site Effects on Example of Israel

    NASA Astrophysics Data System (ADS)

    Eppelbaum, Lev; Meirova, Tatiana

    2015-04-01

    , EGU2014-2424, Vienna, Austria, 1-5. Eppelbaum, L.V. and Katz, Y.I., 2014b. First Maps of Mesozoic and Cenozoic Structural-Sedimentation Floors of the Easternmost Mediterranean and their Relationship with the Deep Geophysical-Geological Zonation. Proceed. of the 19th Intern. Congress of Sedimentologists, Geneva, Switzerland, 1-3. Eppelbaum, L.V. and Katz, Yu.I., 2015a. Newly Developed Paleomagnetic Map of the Easternmost Mediterranean Unmasks Geodynamic History of this Region. Central European Jour. of Geosciences, 6, No. 4 (in Press). Eppelbaum, L.V. and Katz, Yu.I., 2015b. Application of Integrated Geological-Geophysical Analysis for Development of Paleomagnetic Maps of the Easternmost Mediterranean. In: (Eppelbaum L., Ed.), New Developments in Paleomagnetism Research, Nova Publisher, NY (in Press). Eppelbaum, L.V. and Khesin, B.E., 2004. Advanced 3-D modelling of gravity field unmasks reserves of a pyrite-polymetallic deposit: A case study from the Greater Caucasus. First Break, 22, No. 11, 53-56. Eppelbaum, L.V., Nikolaev, A.V. and Katz, Y.I., 2014. Space location of the Kiama paleomagnetic hyperzone of inverse polarity in the crust of the eastern Mediterranean. Doklady Earth Sciences (Springer), 457, No. 6, 710-714. Haase, J.S., Park, C.H., Nowack, R.L. and Hill, J.R., 2010. Probabilistic seismic hazard estimates incorporating site effects - An example from Indiana, U.S.A. Environmental and Engineering Geoscience, 16, No. 4, 369-388. Hough, S.E., Borcherdt, R. D., Friberg, P. A., Busby, R., Field, E. and Jacob, K. N., 1990. The role of sediment-induced amplification in the collapse of the Nimitz freeway. Nature, 344, 853-855. Khesin, B.E. Alexeyev, V.V. and Eppelbaum, L.V., 1996. Interpretation of Geophysical Fields in Complicated Environments. Kluwer Academic Publ., Ser.: Advanced Appr. in Geophysics, Dordrecht - London - Boston. Klokočník, J., Kostelecký, J., Eppelbaum, L. and Bezděk, A., 2014. Gravity Disturbances, the Marussi Tensor, Invariants and

  20. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  1. An Investigation on the Feasibility of Uncalibrated and Unconstrained Gaze Tracking for Human Assistive Applications by Using Head Pose Estimation

    PubMed Central

    Cazzato, Dario; Leo, Marco; Distante, Cosimo

    2014-01-01

    This paper investigates the possibility of accurately detecting and tracking human gaze by using an unconstrained and noninvasive approach based on the head pose information extracted by an RGB-D device. The main advantages of the proposed solution are that it can operate in a totally unconstrained environment, it does not require any initial calibration and it can work in real-time. These features make it suitable for assisting humans in everyday life (e.g., remote device control) or in specific actions (e.g., rehabilitation), and in general in all those applications where it is not possible to ask for user cooperation (e.g., when users with neurological impairments are involved). To evaluate gaze estimation accuracy, the proposed approach has been extensively tested and the results are compared with the leading methods in the state of the art, which, in general, make use of strong constraints on people's movements, invasive/additional hardware and supervised pattern recognition modules. Experimental tests demonstrated that, in most cases, the errors in gaze estimation are comparable to those of state-of-the-art methods, although the approach works without additional constraints, calibration and supervised learning. PMID:24824369

  2. Estimating 3D variation in active-layer thickness beneath arctic streams using ground-penetrating radar

    USGS Publications Warehouse

    Brosten, T.R.; Bradford, J.H.; McNamara, J.P.; Gooseff, M.N.; Zarnetske, J.P.; Bowden, W.B.; Johnston, M.E.

    2009-01-01

    We acquired three-dimensional (3D) ground-penetrating radar (GPR) data across three stream sites on the North Slope, AK, in August 2005, to investigate the dependence of thaw depth on channel morphology. Data were migrated with mean velocities derived from multi-offset GPR profiles collected across a stream section within each of the 3D survey areas. GPR data interpretations from the alluvial-lined stream site illustrate greater thaw depths beneath riffle and gravel bar features relative to neighboring pool features. The peat-lined stream sites indicate the opposite; greater thaw depths beneath pools and shallower thaw beneath the connecting runs. Results provide detailed 3D geometry of active-layer thaw depths that can support hydrological studies seeking to quantify transport and biogeochemical processes that occur within the hyporheic zone.

  3. An investigation into multi-dimensional prediction models to estimate the pose error of a quadcopter in a CSP plant setting

    NASA Astrophysics Data System (ADS)

    Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann

    2016-05-01

    The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.

  4. Pose Estimation using 1D Fourier Transform and Euclidean Distance Matching of CAD Model and Inspected Model Part

    NASA Astrophysics Data System (ADS)

    Zulkoffli, Zuliani; Abu Bakar, Elmi

    2016-02-01

    This paper presents pose estimation by relating a CAD model object to a projected real object (PRI). Image sequences of the PRI and the CAD model, rotated about the z axis at 10 degree intervals in simulated and real scenes, are used in this experiment. All images pass through a preprocessing stage that rescales object and image size and converts every image into a silhouette, after which the CAD and PRI images are correlated. The magnitude spectrum correlation is consistently high, in the range 0.99 to 1.00, whereas the phase spectrum correlation fluctuates between 0.56 and 0.97. The Euclidean distance graph for CAD and PRI shows two zones of similar values because the object shape is nearly symmetrical. Retrieval of an inspected PRI image from the CAD database was carried out using the phase spectrum range and the maximum magnitude spectrum value within a ±10% tolerance, and additionally using the Euclidean distance within a ±5% tolerance. Euclidean matching gives more reliable results than the phase and magnitude spectrum criteria, at the cost of more than five times the processing time.
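
    As a rough illustration of the matching pipeline summarized above, the sketch below (an assumption-laden reconstruction, not the authors' code) reduces each silhouette to a 1D radial signature, takes its FFT magnitude spectrum, and retrieves the closest CAD view by Euclidean distance over the 10-degree pose database.

```python
# Minimal sketch (assumptions, not the authors' implementation): describe each
# silhouette by a 1D radial signature, keep its FFT magnitude spectrum, and
# retrieve the best CAD view by Euclidean distance over a 10-degree database.
import numpy as np

def radial_signature(mask, n_samples=360):
    """Normalised centroid-to-boundary distance versus angle for a binary silhouette."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    angles = np.arctan2(ys - cy, xs - cx)
    radii = np.hypot(ys - cy, xs - cx)
    bins = np.linspace(-np.pi, np.pi, n_samples + 1)
    sig = np.zeros(n_samples)
    for i in range(n_samples):
        sel = (angles >= bins[i]) & (angles < bins[i + 1])
        sig[i] = radii[sel].max() if sel.any() else 0.0
    return sig / (sig.max() + 1e-9)

def match_pose(pri_mask, cad_masks, cad_angles):
    """Return the CAD view angle whose signature is closest in Euclidean distance."""
    q = radial_signature(pri_mask)
    q_mag = np.abs(np.fft.rfft(q))          # 1D magnitude spectrum of the query
    best_angle, best_dist = None, np.inf
    for angle, mask in zip(cad_angles, cad_masks):
        d = np.linalg.norm(q - radial_signature(mask))   # Euclidean criterion
        if d < best_dist:
            best_angle, best_dist = angle, d
    return best_angle, best_dist, q_mag

# usage: pri_mask and cad_masks are boolean silhouette images;
# cad_angles = range(0, 360, 10) mirrors the 10-degree sampling in the paper.
```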

  5. Monocular 3-D gait tracking in surveillance scenes.

    PubMed

    Rogez, Grégory; Rihan, Jonathan; Guerrero, Jose J; Orrite, Carlos

    2014-06-01

    Gait recognition can potentially provide a noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems will suffer in real surveillance scenarios with multiple interacting individuals and where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene's ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and is matched in the image to estimate its likelihood. Our framework is able to track 3-D human walking poses in a 3-D environment exploring only a 4-D state space with success. In our experimental evaluation, we demonstrate the significant improvements of the homographic alignment over a commonly used similarity transformation and provide quantitative pose tracking results for the monocular sequences with a high perspective effect from the CAVIAR dataset. PMID:23955796

  6. Mechanistic and quantitative studies of bystander response in 3D tissues for low-dose radiation risk estimations

    SciTech Connect

    Amundson, Sally A.

    2013-06-12

    We have used the MatTek 3-dimensional human skin model to study the gene expression response of a 3D model to low and high dose low LET radiation, and to study the radiation bystander effect as a function of distance from the site of irradiation with either alpha particles or low LET protons. We have found response pathways that appear to be specific for low dose exposures, that could not have been predicted from high dose studies. We also report the time- and distance-dependent expression of a large number of genes in bystander tissue. The bystander response in 3D tissues showed many similarities to that described previously in 2D cultured cells, but also showed some differences.

  7. The 2011 Eco3D Flight Campaign: Vegetation Structure and Biomass Estimation from Simultaneous SAR, Lidar and Radiometer Measurements

    NASA Technical Reports Server (NTRS)

    Fatoyinbo, Temilola; Rincon, Rafael; Harding, David; Gatebe, Charles; Ranson, Kenneth Jon; Sun, Guoqing; Dabney, Phillip; Roman, Miguel

    2012-01-01

    The Eco3D campaign was conducted in the Summer of 2011. As part of the campaign three unique and innovative NASA Goddard Space Flight Center airborne sensors were flown simultaneously: The Digital Beamforming Synthetic Aperture Radar (DBSAR), the Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) and the Cloud Absorption Radiometer (CAR). The campaign covered sites from Quebec to Southern Florida and thereby acquired data over forests ranging from Boreal to tropical wetlands. This paper describes the instruments and sites covered and presents the first images resulting from the campaign.

  8. Potential Geophysical Field Transformations and Combined 3D Modelling for Estimation the Seismic Site Effects on Example of Israel

    NASA Astrophysics Data System (ADS)

    Eppelbaum, Lev; Meirova, Tatiana

    2015-04-01

    It is well known that local seismic site effects may contribute significantly to the intensity of damage and destruction (e.g., Hough et al., 1990; Regnier et al., 2000; Bonnefoy-Claudet et al., 2006; Haase et al., 2010). The thicknesses of sediments, which play a large role in amplification, are usually derived from seismic velocities. At the same time, the thickness of sediments may be determined (or defined) on the basis of 3D combined gravity-magnetic modeling joined with available geological materials, seismic data and borehole section examination. The final result of such an investigation is a 3D physical-geological model (PGM) reflecting the main geological peculiarities of the area under study. Such a combined study requires a reliable 3D computational algorithm together with an advanced 3D modeling methodology. For this analysis the GSFC software was selected. The GSFC (Geological Space Field Calculation) program was developed for solving the direct 3-D gravity and magnetic prospecting problem under complex geological conditions (Khesin et al., 1996; Eppelbaum and Khesin, 2004). This program has been designed for computing the field of Δg (Bouguer, free-air or observed value anomalies), ΔZ, ΔX, ΔY, ΔT, as well as second derivatives of the gravitational potential under conditions of rugged relief and inclined magnetization. The geological space can be approximated by (1) three-dimensional bodies, (2) semi-infinite bodies and (3) bodies infinite along the strike (closed, L.H. non-closed, R.H. non-closed and open). Geological bodies are approximated by horizontal polygonal prisms. The program has the following main advantages (besides the abovementioned ones): (1) simultaneous computing of gravity and magnetic fields; (2) description of the terrain relief by irregularly placed characteristic points; (3) computation of the effect of the earth-air boundary by the method of selection directly in the process of interpretation; (4

  9. Single view-based 3D face reconstruction robust to self-occlusion

    NASA Astrophysics Data System (ADS)

    Lee, Youn Joo; Lee, Sung Joo; Park, Kang Ryoung; Jo, Jaeik; Kim, Jaihie

    2012-12-01

    State-of-the-art 3D morphable model (3DMM) is used widely for 3D face reconstruction based on a single image. However, this method has a high computational cost, and hence, a simplified 3D morphable model (S3DMM) was proposed as an alternative. Unlike the original 3DMM, S3DMM uses only a sparse 3D facial shape, and therefore, it incurs a lower computational cost. However, this method is vulnerable to self-occlusion due to head rotation. Therefore, we propose a solution to the self-occlusion problem in S3DMM-based 3D face reconstruction. This research is novel compared with previous works, in the following three respects. First, self-occlusion of the input face is detected automatically by estimating the head pose using a cylindrical head model. Second, a 3D model fitting scheme is designed based on selected visible facial feature points, which facilitates 3D face reconstruction without any effect from self-occlusion. Third, the reconstruction performance is enhanced by using the estimated pose as the initial pose parameter during the 3D model fitting process. The experimental results showed that the self-occlusion detection had high accuracy and our proposed method delivered a noticeable improvement in the 3D face reconstruction performance compared with previous methods.

  10. Age Estimation in Living Adults using 3D Volume Rendered CT Images of the Sternal Plastron and Lower Chest.

    PubMed

    Oldrini, Guillaume; Harter, Valentin; Witte, Yannick; Martrille, Laurent; Blum, Alain

    2016-01-01

    Age estimation is commonly of interest in a judicial context. In adults, it is less documented than in children. The aim of this study was to evaluate age estimation in adults using CT images of the sternal plastron with the volume rendering technique (VRT). The evaluation criteria are derived from known methods used for age estimation and are applicable in living or dead subjects. The VRT images of 456 patients were analyzed. Two radiologists performed age estimation independently from an anterior view of the plastron. Interobserver agreement and correlation coefficients between each reader's classification and real age were calculated. The interobserver agreement was 0.86, and the correlation coefficients between the readers' classifications and the real age classes were 0.60 and 0.65. Spearman correlation coefficients were, respectively, 0.89, 0.67, and 0.71. Analysis of the plastron using VRT allows age estimation in vivo quickly and with results similar to those of methods such as Iscan, Suchey-Brooks, and radiographs used to estimate age at death. PMID:27092960
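
    For readers unfamiliar with the agreement statistics quoted above, the short sketch below shows one way such numbers can be computed. It is illustrative only: the labels are hypothetical and a quadratically weighted kappa stands in for whichever agreement measure the authors actually used.

```python
# Illustrative only: hypothetical age-class labels, with a quadratically
# weighted kappa standing in for whichever agreement measure the study used.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

reader1 = np.array([2, 3, 3, 1, 4, 2, 5, 3])
reader2 = np.array([2, 3, 4, 1, 4, 2, 5, 2])
true_cls = np.array([2, 3, 3, 1, 5, 2, 5, 3])

kappa = cohen_kappa_score(reader1, reader2, weights="quadratic")  # inter-observer agreement
rho1, _ = spearmanr(reader1, true_cls)                            # reader 1 vs. real age class
rho2, _ = spearmanr(reader2, true_cls)                            # reader 2 vs. real age class
print(f"agreement = {kappa:.2f}, Spearman r = {rho1:.2f}, {rho2:.2f}")
```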

  11. The 2D versus 3D imaging trade-off: The impact of over- or under-estimating small throats for simulating permeability in porous media

    NASA Astrophysics Data System (ADS)

    Peters, C. A.; Crandell, L. E.; Um, W.; Jones, K. W.; Lindquist, W. B.

    2011-12-01

    Geochemical reactions in the subsurface can alter the porosity and permeability of a porous medium through mineral precipitation and dissolution. While effects on porosity are relatively well understood, changes in permeability are more difficult to estimate. In this work, pore-network modeling is used to estimate the permeability of a porous medium using pore and throat size distributions. These distributions can be determined from 2D Scanning Electron Microscopy (SEM) images of thin sections or from 3D X-ray Computed Tomography (CT) images of small cores. Each method has unique advantages as well as unique sources of error. 3D CT imaging has the advantage of reconstructing a 3D pore network without the inherent geometry-based biases of 2D images but is limited by resolutions around 1 μm. 2D SEM imaging has the advantage of higher resolution, and the ability to examine sub-grain scale variations in porosity and mineralogy, but is limited by the small size of the sample of pores that are quantified. A pore network model was created to estimate flow permeability in a sand-packed experimental column investigating reaction of sediments with caustic radioactive tank wastes in the context of the Hanford, WA site. Before, periodically during, and after reaction, 3D images of the porous medium in the column were produced using the X2B beam line facility at the National Synchrotron Light Source (NSLS) at Brookhaven National Lab. These images were interpreted using 3DMA-Rock to characterize the pore and throat size distributions. After completion of the experiment, the column was sectioned and imaged using 2D SEM in backscattered electron mode. The 2D images were interpreted using erosion-dilation to estimate the pore and throat size distributions. A bias correction was determined by comparison with the 3D image data. A special image processing method was developed to infer the pore space before reaction by digitally removing the precipitate. The different sets of pore

  12. Simultaneous estimation of size, radial and angular locations of a malignant tumor in a 3-D human breast - A numerical study.

    PubMed

    Das, Koushik; Mishra, Subhash C

    2015-08-01

    This article reports a numerical study pertaining to simultaneous estimation of the size, radial location and angular location of a malignant tumor in a 3-D human breast. The breast skin surface temperature profile is specific to a tumor of specific size and location. The temperature profiles are always Gaussian, though their peak magnitudes and areas differ according to the size and location of the tumor. The temperature profiles are obtained by solving the Pennes bioheat equation using the finite element method based solver COMSOL 4.3a. With the temperature profiles known, simultaneous estimation of the size, radial location and angular location of the tumor is done using the curve fitting method. The effect of measurement errors is also included in the study. The estimates are accurate, and since the curve fitting method does not require solution of the governing bioheat equation in the inverse analysis, the estimation is very fast. PMID:26267509
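
    The curve-fitting step can be sketched as follows. This snippet is a minimal illustration under assumed values, not the paper's COMSOL-based workflow: it fits a Gaussian to a synthetic skin-surface temperature profile; in the actual inverse analysis the fitted peak and width would then be mapped back to tumor size and location using precomputed forward solutions.

```python
# Minimal sketch with assumed values, not the paper's COMSOL-based workflow.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, baseline):
    return baseline + amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

x = np.linspace(-0.05, 0.05, 101)                 # position along the skin [m]
t_meas = gaussian(x, 0.8, 0.01, 0.012, 33.0)      # synthetic surface temperature [C]
t_meas += np.random.normal(0.0, 0.02, x.size)     # simulated measurement error

popt, _ = curve_fit(gaussian, x, t_meas, p0=[0.5, 0.0, 0.02, 33.0])
amp, mu, sigma, base = popt
# In the inverse analysis the fitted (amp, mu, sigma) would be looked up against
# precomputed forward solutions to recover tumor size and location.
print(f"peak rise {amp:.2f} C at {mu * 100:.1f} cm, sigma {sigma * 100:.1f} cm")
```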

  13. Estimating the subsurface temperature of Hessen/Germany based on a GOCAD 3D structural model - a comparison of numerical and geostatistical approaches

    NASA Astrophysics Data System (ADS)

    Rühaak, W.; Bär, K.; Sass, I.

    2012-04-01

    Based on a 3D structural GOCAD model of the German federal state Hessen the subsurface temperature distribution is computed. Since subsurface temperature data for greater depths are typically sparse, two different approaches for estimating the spatial subsurface temperature distribution are tested. One approach is the numerical computation of a 3D purely conductive steady state temperature distribution. This numerical model is based on measured thermal conductivity data for all relevant geological units, together with heat flow measurements and surface temperatures. The model is calibrated using continuous temperature logs. Here only conductive heat transfer is considered, as data on convective heat transport at great depth are currently not available. The other approach is 3D ordinary kriging, applying a modified formulation in which the quality of the temperature measurements is taken into account. A difficult but important part here is to derive good variograms for the horizontal and vertical directions. The variograms give the necessary information about the spatial dependence. Both approaches are compared and discussed. Differences are mainly due to convective processes, which are reflected in the interpolation result but not in the numerical model. Therefore, a comparison of the two results is a good way to obtain information about flow processes at such great depths. In this way an improved understanding of this mid-enthalpy geothermal reservoir (1000 - 6000 m) is possible. Future work will address the reduction of the small but, especially for depths up to approximately 1000 m, relevant paleoclimate signal.
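
    A compact illustration of the kriging approach mentioned above is given below. It uses illustrative assumptions throughout (an isotropic exponential variogram with made-up sill, range and nugget), whereas the study fits separate horizontal and vertical variograms and weights observations by their quality.

```python
# Illustrative 3D ordinary kriging with an assumed isotropic exponential
# variogram; not the study's workflow.
import numpy as np

def exp_variogram(h, sill=1.0, rng=2000.0, nugget=0.05):
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_krige(xyz_obs, z_obs, xyz_new):
    """xyz_obs: (n,3) observation coordinates, z_obs: (n,) values, xyz_new: (3,) target."""
    n = len(z_obs)
    d = np.linalg.norm(xyz_obs[:, None, :] - xyz_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d)
    np.fill_diagonal(A[:n, :n], 0.0)     # gamma(0) = 0
    A[n, n] = 0.0                        # Lagrange multiplier row/column
    b = np.append(exp_variogram(np.linalg.norm(xyz_obs - xyz_new, axis=-1)), 1.0)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z_obs)          # kriged estimate at xyz_new

# usage: xyz_obs = borehole (x, y, depth) coordinates [m], z_obs = logged
# temperatures [C], xyz_new = a node of the structural model grid.
```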

  14. Forest Inventory Attribute Estimation Using Airborne Laser Scanning, Aerial Stereo Imagery, Radargrammetry and Interferometry-Finnish Experiences of the 3d Techniques

    NASA Astrophysics Data System (ADS)

    Holopainen, M.; Vastaranta, M.; Karjalainen, M.; Karila, K.; Kaasalainen, S.; Honkavaara, E.; Hyyppä, J.

    2015-03-01

    Three-dimensional (3D) remote sensing has enabled detailed mapping of terrain and vegetation heights. Consequently, forest inventory attributes are estimated more and more using point clouds and normalized surface models. In practical applications, mainly airborne laser scanning (ALS) has been used in forest resource mapping. The current status is that ALS-based forest inventories are widespread, and the popularity of ALS has also raised interest toward alternative 3D techniques, including airborne and spaceborne techniques. Point clouds can be generated using photogrammetry, radargrammetry and interferometry. Airborne stereo imagery can be used in deriving photogrammetric point clouds, as very-high-resolution synthetic aperture radar (SAR) data are used in radargrammetry and interferometry. ALS is capable of mapping both the terrain and tree heights in mixed forest conditions, which is an advantage over aerial images or SAR data. However, in many jurisdictions, a detailed ALS-based digital terrain model is already available, and that enables linking photogrammetric or SAR-derived heights to heights above the ground. In other words, in forest conditions, the height of single trees, height of the canopy and/or density of the canopy can be measured and used in estimation of forest inventory attributes. In this paper, first we review experiences of the use of digital stereo imagery and spaceborne SAR in estimation of forest inventory attributes in Finland, and we compare techniques to ALS. In addition, we aim to present new implications based on our experiences.

  15. Thermal infrared exploitation for 3D face reconstruction

    NASA Astrophysics Data System (ADS)

    Abayowa, Bernard O.

    2009-05-01

    Despite the advances in face recognition research, current face recognition systems are still not accurate or robust enough to be deployed in uncontrolled environments. A pose- and illumination-invariant face recognition system is still lacking. This research exploits the relationship between thermal infrared and visible imagery to estimate a 3D face with visible texture from infrared imagery. The relationship between visible and thermal infrared texture is learned using kernel canonical correlation analysis (KCCA), and then a 3D modeler is used to estimate the geometric structure from the predicted visual imagery. This research will find its application in uncontrolled environments where illumination- and pose-invariant identification or tracking is required at long range, such as urban search and rescue (Amber alert, missing dementia patient) and manhunt scenarios.
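
    The cross-modal mapping idea can be sketched with linear CCA as a stand-in for the kernel CCA used in the paper; all data below are synthetic placeholders. The model learns a correlated subspace between thermal and visible feature vectors and then predicts visible features for a new thermal input.

```python
# Synthetic placeholder data; linear CCA stands in for the kernel CCA (KCCA)
# used in the paper.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
thermal = rng.normal(size=(500, 64))                           # thermal-IR patch features
visible = thermal @ rng.normal(size=(64, 48)) + 0.1 * rng.normal(size=(500, 48))

cca = CCA(n_components=16)
cca.fit(thermal, visible)                                      # learn the correlated subspace
visible_pred = cca.predict(rng.normal(size=(1, 64)))           # predict visible features
print(visible_pred.shape)
```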

  16. Simultaneous estimation of the 3-D soot temperature and volume fraction distributions in asymmetric flames using high-speed stereoscopic images.

    PubMed

    Huang, Qunxing; Wang, Fei; Yan, Jianhua; Chi, Yong

    2012-05-20

    An inverse radiation analysis using soot emission measured by a high-speed stereoscopic imaging system is described for simultaneous estimation of the 3-D soot temperature and volume fraction distributions in unsteady sooty flames. A new iterative reconstruction method taking self attenuation into account is developed based on the least squares minimum-residual algorithm. Numerical assessment and experimental measurement results of an ethylene/air diffusive flame show that the proposed method is efficient and capable of reconstructing the soot temperature and volume fraction distributions in unsteady flames. The accuracy is improved when self attenuation is considered. PMID:22614600

  17. 3d morphometric analysis of lunar impact craters: a tool for degradation estimates and interpretation of maria stratigraphy

    NASA Astrophysics Data System (ADS)

    Vivaldi, Valerio; Massironi, Matteo; Ninfo, Andrea; Cremonese, Gabriele

    2015-04-01

    In this study we have applied 3D morphometric analysis of impact craters on the Moon by means of high resolution DTMs derived from LROC (Lunar Reconnaissance Orbiter Camera) NAC (Narrow Angle Camera) (0.5 to 1.5 m/pixel). The objective is twofold: i) evaluating crater degradation and ii) exploring the potential of this approach for maria stratigraphic interpretation. In relation to the first objective we have considered several craters with different diameters, representative of the four classes of degradation, with C1 the freshest and C4 the most degraded (Arthur et al., 1963; Wilhelms, 1987). DTMs of these craters were processed according to a multiscalar approach (Wood, 1996) by testing different ranges of kernel sizes (e.g. 15-35-50-75-100), in order to retrieve morphometric variables such as slope, curvatures and openness. In particular, curvatures were calculated along different planes (e.g. profile curvature and plan curvature) and used to characterize the different sectors of a crater (rim crest, floor, internal slope and related boundaries), enabling us to evaluate its degradation. The gradient of the internal slope of different craters representative of the four classes shows a decrease of the mean slope value from C1 to C4 in relation to crater age and diameter. Indeed, degradation is influenced by gravitational processes (landslides, dry flows), as well as space weathering that induces both smoothing effects on the morphologies and infilling processes within the crater, with the main results being a lowering and enlarging of the rim crest and a shallowing of the crater depth. As far as the stratigraphic application is concerned, morphometric analysis was applied to recognize morphologic features within some simple craters, in order to understand the stratigraphic relationships among different lava layers within Mare Serenitatis. A clear-cut rheological boundary at a depth of 200 m within the small fresh Linnè crater (diameter: 2.22 km), firstly hypothesized
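
    As a concrete, generic illustration of the morphometric variables discussed above, the sketch below computes slope and a simple curvature proxy from a gridded DTM with finite differences. The function names and parameters are assumptions; this is not the LROC processing chain used in the study.

```python
# Generic DTM morphometry (assumed parameters), not the study's processing chain.
import numpy as np

def slope_deg(dtm, cell_size):
    """Slope in degrees from a gridded DTM via finite differences."""
    gy, gx = np.gradient(dtm, cell_size)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

def curvature_proxy(dtm, cell_size):
    """Simple total-curvature proxy (Laplacian of elevation)."""
    gy, gx = np.gradient(dtm, cell_size)
    gxx = np.gradient(gx, cell_size, axis=1)
    gyy = np.gradient(gy, cell_size, axis=0)
    return gxx + gyy

# usage: dtm is a 2D elevation array (e.g. from an LROC NAC DTM), cell_size in
# metres; averaging slope_deg over an interior-slope mask gives the mean value
# compared across the degradation classes C1-C4.
```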

  18. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is the depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in a three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, improving the user's comfort and acceptance. A 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel and advanced technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar arrangement of four infrared LEDs with known and exact positions, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips either uses the Z-value of a 3D volume or computes the depth information from a 2D image, to provide a real 3D experience without the need for special glasses. Within this paper we present a new framework for manipulation and annotation of medical landmarks directly in a three-dimensional volume.
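
    The standard pose estimation step referred to above can be sketched with OpenCV's solvePnP. The LED coordinates, image points and camera intrinsics below are illustrative assumptions, not values from the paper.

```python
# Assumed geometry and intrinsics, not values from the paper.
import numpy as np
import cv2

# Known LED positions on the pointing device, in its own frame [mm] (illustrative).
object_pts = np.array([[0, 0, 0], [80, 0, 0], [0, 60, 0], [40, 30, 25]], dtype=np.float64)
# LED centroids tracked in the IR camera image [pixels] (illustrative).
image_pts = np.array([[312, 248], [402, 251], [318, 175], [368, 204]], dtype=np.float64)

K = np.array([[1380.0, 0.0, 512.0],     # assumed camera intrinsics
              [0.0, 1380.0, 384.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist, flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)              # rotation matrix of the device pose
print(ok, tvec.ravel())
```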

  19. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations in camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a-priorly trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439

  20. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system

    SciTech Connect

    Lee, J.; Yun, G. S. Lee, J. E.; Kim, M.; Choi, M. J.; Lee, W.; Park, H. K.; Domier, C. W.; Luhmann, N. C.; Sabbagh, S. A.; Park, Y. S.; Lee, S. G.; Bak, J. G.

    2014-06-15

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.

  1. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system.

    PubMed

    Lee, J; Yun, G S; Lee, J E; Kim, M; Choi, M J; Lee, W; Park, H K; Domier, C W; Luhmann, N C; Sabbagh, S A; Park, Y S; Lee, S G; Bak, J G

    2014-06-01

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils. PMID:24985817

  2. 3D dosimetry estimation for selective internal radiation therapy (SIRT) using SPECT/CT images: a phantom study

    NASA Astrophysics Data System (ADS)

    Debebe, Senait A.; Franquiz, Juan; McGoron, Anthony J.

    2015-03-01

    Selective Internal Radiation Therapy (SIRT) is a common way to treat liver cancer that cannot be treated surgically. SIRT involves administration of Yttrium-90 (90Y) microspheres via the hepatic artery after a diagnostic procedure using 99mTechnetium (Tc)-macroaggregated albumin (MAA) to detect extrahepatic shunting to the lung or the gastrointestinal tract. Accurate quantification of the radionuclide administered to patients and the radiation dose absorbed by different organs is of importance in SIRT. Accurate dosimetry for SIRT allows optimization of dose delivery to the target tumor and may make it possible to assess the efficacy of the treatment. In this study, we proposed a method that can efficiently estimate radiation absorbed dose from 90Y bremsstrahlung SPECT/CT images of the liver and the surrounding organs. Bremsstrahlung radiation from 90Y was simulated using the Compton window of 99mTc (78 keV at 57%). 99mTc images acquired at the photopeak energy window were used as a standard to examine the accuracy of dosimetry prediction by the simulated bremsstrahlung images. A Liqui-Phil abdominal phantom with liver, stomach and two tumor inserts was imaged using a Philips SPECT/CT scanner. The Dose Point Kernel convolution method was used to find the radiation absorbed dose at a voxel level for a three-dimensional dose distribution. This method will allow for a complete estimate of the distribution of radiation absorbed dose by tumors, liver, stomach and other surrounding organs at the voxel level. The approach provides a quantitative predictive tool for SIRT treatment outcome and administered dose response for patients who undergo the treatment.
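
    The dose point kernel convolution mentioned above can be sketched in a few lines. The snippet below is a rough illustration with assumed array shapes and units, not the clinical implementation: voxel-level absorbed dose is approximated as the 3D convolution of the quantified activity map with a dose kernel.

```python
# Rough illustration with assumed shapes/units, not the clinical implementation.
import numpy as np
from scipy.signal import fftconvolve

def absorbed_dose(activity_bq, kernel_gy_per_decay, decays_per_bq):
    """3D convolution of an activity map with a dose point kernel."""
    dose_per_decay_map = fftconvolve(activity_bq, kernel_gy_per_decay, mode="same")
    return dose_per_decay_map * decays_per_bq

# usage: activity_bq is the quantified bremsstrahlung SPECT/CT volume, the
# kernel comes from published 90Y dose point kernel tables resampled to the
# SPECT voxel size, and decays_per_bq is the time-integrated number of decays
# per unit activity (for 90Y, roughly the mean lifetime if all decays occur
# in place).
```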

  3. A fast 3D surface reconstruction and volume estimation method for grain storage based on priori model

    NASA Astrophysics Data System (ADS)

    Liang, Xian-hua; Sun, Wei-dong

    2011-06-01

    Inventory checking is one of the most significant tasks for grain reserves, and plays a very important role in the macro-control of food and food security. A simple, fast and accurate method to obtain internal structure information and further to estimate the volume of the grain storage is needed. In our developed system, a specially designed multi-site laser scanning system is used to acquire the range data clouds of the internal structure of the grain storage. However, due to the seriously uneven distribution of the range data, these data are first preprocessed by an adaptive re-sampling method to reduce data redundancy as well as noise. Then the range data are segmented and useful features, such as plane and cylinder information, are extracted. With these features a coarse registration between all of these single-site range data is done, and then an Iterative Closest Point (ICP) algorithm is carried out to achieve fine registration. Taking advantage of the fact that the structure of a grain storage facility is well defined and its types are limited, a fast automatic registration method based on the a priori model is proposed to register the multi-site range data more efficiently. Then, after the integration of the multi-site range data, the grain surface is finally reconstructed by a Delaunay-based algorithm and the grain volume is estimated by a numerical integration method. This proposed new method has been applied to two common types of grain storage, and experimental results show that the method is effective and accurate, and that it avoids the cumulative errors of pair-wise registration of overlapping areas.
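
    The fine-registration step can be illustrated with a generic point-to-point ICP. This is a minimal sketch using a Kabsch/SVD update; it is not the deployed system and ignores the coarse, feature-based initialization described above.

```python
# A generic point-to-point ICP with a Kabsch/SVD update; not the deployed system.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """source (N,3), target (M,3); returns aligned source points, R, t."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)                    # closest-point correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                          # best-fit rotation (no reflection)
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return src, R_total, t_total
```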

  4. Estimating 3D L5/S1 moments and ground reaction forces during trunk bending using a full-body ambulatory inertial motion capture system.

    PubMed

    Faber, G S; Chang, C C; Kingma, I; Dennerlein, J T; van Dieën, J H

    2016-04-11

    Inertial motion capture (IMC) systems have become increasingly popular for ambulatory movement analysis. However, few studies have attempted to use these measurement techniques to estimate kinetic variables, such as joint moments and ground reaction forces (GRFs). Therefore, we investigated the performance of a full-body ambulatory IMC system in estimating 3D L5/S1 moments and GRFs during symmetric, asymmetric and fast trunk bending, performed by nine male participants. Using an ambulatory IMC system (Xsens/MVN), L5/S1 moments were estimated based on the upper-body segment kinematics using a top-down inverse dynamics analysis, and GRFs were estimated based on full-body segment accelerations. As a reference, a laboratory measurement system was utilized: GRFs were measured with Kistler force plates (FPs), and L5/S1 moments were calculated using a bottom-up inverse dynamics model based on FP data and lower-body kinematics measured with an optical motion capture system (OMC). Correspondence between the OMC+FP and IMC systems was quantified by calculating root-mean-square errors (RMSerrors) of moment/force time series and the intraclass correlation (ICC) of the absolute peak moments/forces. Averaged over subjects, L5/S1 moment RMSerrors remained below 10Nm (about 5% of the peak extension moment) and 3D GRF RMSerrors remained below 20N (about 2% of the peak vertical force). ICCs were high for the peak L5/S1 extension moment (0.971) and vertical GRF (0.998). Due to lower amplitudes, smaller ICCs were found for the peak asymmetric L5/S1 moments (0.690-0.781) and horizontal GRFs (0.559-0.948). In conclusion, close correspondence was found between the ambulatory IMC-based and laboratory-based estimates of back load. PMID:26795123

  5. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  6. Principal curves for lumen center extraction and flow channel width estimation in 3-D arterial networks: theory, algorithm, and validation.

    PubMed

    Wong, Wilbur C K; So, Ronald W K; Chung, Albert C S

    2012-04-01

    We present an energy-minimization-based framework for locating the centerline and estimating the width of tubelike objects from their structural network with a nonparametric model. The nonparametric representation promotes simple modeling of nested branches and n-way furcations, i.e., structures that abound in an arterial network, e.g., a cerebrovascular circulation. Our method is capable of extracting the entire vascular tree from an angiogram in a single execution with a proper initialization. A succinct initial model from the user with arterial network inlets, outlets, and branching points is sufficient for complex vasculature. The novel method is based upon the theory of principal curves. In this paper, theoretical extension to grayscale angiography is discussed, and an algorithm to find an arterial network as principal curves is also described. Quantitative validation on a number of simulated data sets, synthetic volumes of 19 BrainWeb vascular models, and 32 Rotterdam Coronary Artery volumes was conducted. We compared the algorithm to a state-of-the-art method and further tested it on two clinical data sets. Our algorithmic outputs, lumen centers and flow channel widths, are important to various medical and clinical applications, e.g., vasculature segmentation, registration and visualization, virtual angioscopy, and vascular atlas formation and population study. PMID:22167625

  7. Improving the Accuracy of Estimated 3d Positions Using Multi-Temporal Alos/prism Triplet Images

    NASA Astrophysics Data System (ADS)

    Susaki, J.; Kishimoto, H.

    2015-03-01

    In this paper, we present a method to improve the accuracy of a digital surface model (DSM) by utilizing multi-temporal triplet images. The Advanced Land Observing Satellite (ALOS) / Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) measures triplet images in the forward, nadir, and backward view directions, and a DSM is generated from the obtained set of triplet images. To generate a certain period of DSM, multiple DSMs generated from individual triplet images are compared, and outliers are removed. Our proposed method uses a traditional surveying approach to increase observations and solves multiple observation equations from all triplet images via the bias-corrected rational polynomial coefficient (RPC) model. Experimental results from using five sets of PRISM triplet images taken of the area around Saitama, north of Tokyo, Japan, showed that the average planimetric and height errors in the coordinates estimated from multi-temporal triplet images were 3.26 m and 2.71 m, respectively, and that they were smaller than those generated by using each set of triplet images individually. As a result, we conclude that the proposed method is effective for stably generating accurate DSMs from multi-temporal triplet images.

  8. Scarce water resources and scarce data: Estimating recharge for a complex 3D groundwater flow model in arid regions

    NASA Astrophysics Data System (ADS)

    Gräbe, A. C.; Guttman, J.; Rödiger, T.; Siebert, C.; Merz, R.; Kolditz, O.

    2012-12-01

    Semi-arid to arid regions are usually characterized by a scarcity of precipitation and a lack of stream flow. Especially in desert environments, groundwater is one of the most important fresh water sources and its recharge is basically controlled by two main mechanisms: first, the direct regional infiltration of precipitation in the mountains and interdrainage areas, and second, flood water infiltration through ephemeral channel beds (transmission loss). Due to extensive spatio-temporal data scarcity, direct quantitative estimations of groundwater recharge are often difficult to perform, and numerical models simulating the water fluxes have to be applied to enable a quantitative approximation of the groundwater recharge. We made an assumption about the quantity of recharge for the subsurface catchment of the western Dead Sea escarpment, which is at the same time the input for the complex groundwater flow model of the Judea Group Aquifer. Such an assumption can only be made if the hydrogeological situation in this tectonically complex region is fully understood. A number of simplified models of the Judea Group aquifer have been formulated and employed using a two-dimensional (one horizontal layered) numerical simulation of groundwater flow (Baida et al. 1978; Goldschtoff & Shachnai, 1980; Guttman, 2000; Laronne Ben-Itzhak & Gvirtzmann, 2005). However, all previous approaches focused only on a limited area of the Judea Group aquifer. We developed a high resolution regional groundwater flow model for the entire western basin of the Dead Sea. Whereas the structural model could be defined using a large geological dataset, the challenge was to generate the groundwater flow model with only limited well data. With the help of the scientific software OpenGeoSys (OGS) the challenge was reliably solved, resulting in a simulation of the hydraulic characteristics (hydraulic conductivity and hydraulic head) of the Cretaceous aquifer system, which was calibrated using PEST.

  9. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices, and can thereafter directly be calibrated using standard calibration algorithms of photogrammetry and computer vision, on that device. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server to run AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  10. 3-D inversion of magnetotelluric Phase Tensor

    NASA Astrophysics Data System (ADS)

    Patro, Prasanta; Uyeshima, Makoto

    2010-05-01

    Three-dimensional (3-D) inversion of magnetotelluric (MT) data has become routine practice among the MT community due to progress in algorithms for 3-D inverse problems (e.g. Mackie and Madden, 1993; Siripunvaraporn et al., 2005). While the availability of such 3-D inversion codes has increased the resolving power of MT data and improved its interpretation, galvanic effects still pose difficulties in the interpretation of the resistivity structure obtained from MT data. In order to tackle the galvanic distortion of MT data, Caldwell et al. (2004) introduced the concept of the phase tensor. They demonstrated how the regional phase information can be retrieved from the observed impedance tensor without any assumptions about structural dimensionality, where both the near-surface inhomogeneity and the regional conductivity structures can be 3-D. We made an attempt to modify a 3-D inversion code (Siripunvaraporn et al., 2005) to directly invert the phase tensor elements. We present here the main modification made to the sensitivity calculation and then show a few synthetic studies and an application to real data. The synthetic model study suggests that the prior model (m_0) setting is important in retrieving the true model. This is because the phase tensor inversion process lacks an estimate of the correct induction scale length. Comparison between results from conventional impedance inversion and the new phase tensor inversion suggests that, in spite of the presence of galvanic distortion (due to near-surface checkerboard anomalies in our case), the new inversion algorithm retrieves the regional conductivity structure reliably. We applied the new inversion to real data from the Indian subcontinent and compared it with the results from conventional impedance inversion.
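
    For reference, the quantity being inverted above has a compact definition. The sketch below uses the textbook definition from Caldwell et al. (2004), with an illustrative impedance value rather than data from the study: the real 2x2 phase tensor is Phi = X^(-1) Y, computed from the complex impedance tensor Z = X + iY at a single period.

```python
# Textbook definition (Caldwell et al., 2004); the impedance value is illustrative.
import numpy as np

def phase_tensor(Z):
    """Z: 2x2 complex impedance tensor; returns the 2x2 real phase tensor X^{-1} Y."""
    X, Y = Z.real, Z.imag
    return np.linalg.solve(X, Y)    # equivalent to inv(X) @ Y

Z = np.array([[0.1 + 0.3j, 2.0 + 1.5j],
              [-1.8 - 1.2j, -0.05 + 0.2j]])
print(phase_tensor(Z))
```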

  11. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging by real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the estimated volumes from 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated the processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria developed by us for the 3D imaging of processed data. Great differences were found between the estimated volumes of the liver findings obtained with the three different techniques applied. 3D ultrasound represents a valuable method to judge morphological appearance in abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to find out whether definite differentiation between benign and malignant findings is possible. PMID:7919882

  12. Fully automated 2D-3D registration and verification.

    PubMed

    Varnavas, Andreas; Carrell, Tom; Penney, Graeme

    2015-12-01

    Clinical application of 2D-3D registration technology often requires a significant amount of human interaction during initialisation and result verification. This is one of the main barriers to more widespread clinical use of this technology. We propose novel techniques for automated initial pose estimation of the 3D data and verification of the registration result, and show how these techniques can be combined to enable fully automated 2D-3D registration, particularly in the case of a vertebra based system. The initialisation method is based on preoperative computation of 2D templates over a wide range of 3D poses. These templates are used to apply the Generalised Hough Transform to the intraoperative 2D image and the sought 3D pose is selected with the combined use of the generated accumulator arrays and a Gradient Difference Similarity Measure. On the verification side, two algorithms are proposed: one using normalised features based on the similarity value and the other based on the pose agreement between multiple vertebra based registrations. The proposed methods are employed here for CT to fluoroscopy registration and are trained and tested with data from 31 clinical procedures with 417 low dose, i.e. low quality, high noise interventional fluoroscopy images. When similarity value based verification is used, the fully automated system achieves a 95.73% correct registration rate, whereas a no registration result is produced for the remaining 4.27% of cases (i.e. incorrect registration rate is 0%). The system also automatically detects input images outside its operating range. PMID:26387052

  13. Estimation of Effective Transmission Loss Due to Subtropical Hydrometeor Scatters using a 3D Rain Cell Model for Centimeter and Millimeter Wave Applications

    NASA Astrophysics Data System (ADS)

    Ojo, J. S.; Owolawi, P. A.

    2014-12-01

    The problem of hydrometeor scattering on microwave radio communication downlinks continues to be of interest as the number of ground and earth-space terminals continually grows. The interference resulting from hydrometeor scattering usually leads to a reduction in the signal-to-noise ratio (SNR) at the affected terminal and at worst can even end up in total link outage. In this paper, an attempt has been made to compute the effective transmission loss due to subtropical hydrometeors on vertically polarized signals in Earth-satellite propagation paths at the Ku, Ka and V band frequencies based on the modified Capsoni 3D rain cell model. The 3D rain cell model has been adopted and modified using the subtropical log-normal distributions of raindrop sizes and introducing the equivalent path length through rain in the estimation of the attenuation, instead of the usual specific attenuation, in order to account for the attenuation of both wanted and unwanted paths to the receiver. Co-channel interference at the same frequency is very prone to a higher amount of unwanted signal at the elevations considered. The importance of joint transmission is also considered.

  14. Three-dimensional (3D) coseismic deformation map produced by the 2014 South Napa Earthquake estimated and modeled by SAR and GPS data integration

    NASA Astrophysics Data System (ADS)

    Polcari, Marco; Albano, Matteo; Fernández, José; Palano, Mimmo; Samsonov, Sergey; Stramondo, Salvatore; Zerbini, Susanna

    2016-04-01

    In this work we present a 3D map of coseismic displacements due to the 2014 Mw 6.0 South Napa earthquake, California, obtained by integrating displacement information from SAR Interferometry (InSAR), Multiple Aperture Interferometry (MAI), Pixel Offset Tracking (POT) and GPS data acquired by both permanent stations and campaign sites. This seismic event produced significant surface deformation along all three components, causing damage to vineyards, roads and houses. The remote sensing results, i.e. InSAR, MAI and POT, were obtained from a pair of SAR images provided by the Sentinel-1 satellite, launched on April 3rd, 2014. They were acquired on August 7th and 31st along descending orbits with an incidence angle of about 23°. The GPS dataset includes measurements from 32 stations belonging to the Bay Area Regional Deformation Network (BARDN), 301 continuous stations available from the UNAVCO and CDDIS archives, and 13 additional campaign sites from Barnhart et al., 2014 [1]. These data constrain the horizontal and vertical displacement components, proving to be helpful for the adopted integration method. We exploit Bayesian theory to search for the 3D coseismic displacement components. In particular, for each point, we construct an energy function and solve the problem to find a global minimum. Experimental results are consistent with a strike-slip fault mechanism with an approximately NW-SE fault plane. Indeed, the 3D displacement map shows a strong North-South (NS) component, peaking at about 15 cm, a few kilometers from the epicenter. The East-West (EW) displacement component reaches its maximum (~10 cm) south of the city of Napa, whereas the vertical (UP) component is smaller, although a subsidence on the order of 8 cm on the east side of the fault can be observed. Source modelling was performed by inverting the estimated displacement components. The best fitting model is given by a ~N330° E-oriented and ~70° dipping fault with a prevailing
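
    As a highly simplified, per-pixel stand-in for the Bayesian energy minimisation described above, the sketch below combines an InSAR line-of-sight observation, an MAI along-track observation and nearby GPS components into an east/north/up displacement estimate by weighted least squares; all geometry vectors, values and uncertainties are illustrative assumptions.

```python
# Per-pixel weighted least squares; all vectors, values and sigmas are assumed.
import numpy as np

def solve_3d(obs, unit_vectors, sigmas):
    """Each obs[i] = unit_vectors[i] . (dE, dN, dU) + noise; returns the 3D displacement."""
    G = np.asarray(unit_vectors, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float)
    d, *_ = np.linalg.lstsq(G * w[:, None], np.asarray(obs) * w, rcond=None)
    return d

rows = [[0.37, -0.09, 0.92],               # InSAR line-of-sight unit vector (assumed)
        [-0.20, -0.98, 0.00],              # MAI along-track unit vector (assumed)
        [1, 0, 0], [0, 1, 0], [0, 0, 1]]   # GPS east, north, up components
obs = [0.04, -0.12, 0.06, -0.13, -0.02]    # observations [m] (illustrative)
print(solve_3d(obs, rows, sigmas=[0.01, 0.03, 0.005, 0.005, 0.01]))
```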

  15. Computer generation and application of 3-D model porous media: From pore-level geostatistics to the estimation of formation factor

    SciTech Connect

    Ioannidis, M.; Kwiecien, M.; Chatzis, I.

    1995-12-31

    This paper describes a new method for the computer generation of 3-D stochastic realizations of porous media using geostatistical information obtained from high-contrast 2-D images of pore casts. The stochastic method yields model porous media with statistical properties identical to those of their real counterparts. Synthetic media obtained in this manner can form the basis for a number of studies related to the detailed characterization of the porous microstructure and, ultimately, the prediction of important petrophysical and reservoir engineering properties. In this context, direct computer estimation of the formation resistivity factor is examined using a discrete random walk algorithm. The dependence of formation factor on measurable statistical properties of the pore space is also investigated.

  16. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this at the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  17. 3D vision assisted flexible robotic assembly of machine components

    NASA Astrophysics Data System (ADS)

    Ogun, Philips S.; Usman, Zahid; Dharmaraj, Karthick; Jackson, Michael R.

    2015-12-01

    Robotic assembly systems either make use of expensive fixtures to hold components in predefined locations, or the poses of the components are determined using various machine vision techniques. Vision-guided assembly robots can handle subtle variations in geometries and poses of parts. Therefore, they provide greater flexibility than the use of fixtures. However, the currently established vision-guided assembly systems use 2D vision, which is limited to three degrees of freedom. The work reported in this paper is focused on flexible automated assembly of clearance fit machine components using 3D vision. The recognition and the estimation of the poses of the components are achieved by matching their CAD models with the acquired point cloud data of the scene. Experimental results obtained from a robot demonstrating the assembly of a set of rings on a shaft show that the developed system is not only reliable and accurate, but also fast enough for industrial deployment.

  18. Bayesian Estimation of 3D Non-planar Fault Geometry and Slip: An application to the 2011 Megathrust (Mw 9.1) Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Dutta, Rishabh; Jónsson, Sigurjón

    2016-04-01

    Earthquake faults are generally considered planar (or of other simple geometry) in earthquake source parameter estimations. However, simplistic fault geometries likely result in biases in estimated slip distributions and increased fault slip uncertainties. In the case of large subduction zone earthquakes, these biases and uncertainties propagate into tsunami waveform modeling and other calculations related to postseismic studies, Coulomb failure stresses, etc. In this research, we parameterize 3D non-planar fault geometry for the 2011 Tohoku-Oki earthquake (Mw 9.1) and estimate these geometrical parameters along with fault slip parameters from onland and offshore GPS using Bayesian inference. This non-planar fault is formed using several 3rd degree polynomials in the along-strike (X-Y plane) and along-dip (X-Z plane) directions that are tied together using a triangular mesh. The coefficients of these polynomials constitute the fault geometrical parameters. We use the trench and locations of past seismicity as a priori information to constrain these fault geometrical parameters and the Laplacian to characterize the fault slip smoothness. Hyper-parameters associated with these a priori constraints are estimated empirically, and the posterior probability distribution of the model (fault geometry and slip) parameters is sampled using an adaptive Metropolis-Hastings algorithm. The across-strike uncertainties in the fault geometry (effectively the local fault location) around high-slip patches increase from 6 km at 10 km depth to about 35 km at 50 km depth, whereas around low-slip patches the uncertainties are larger (from 7 km to 70 km). Uncertainties in reverse slip are found to be higher at high-slip patches than at low-slip patches. In addition, there appears to be high correlation between adjacent patches of high slip. Our results demonstrate that we can constrain complex non-planar fault geometry together with fault slip from GPS data using past seismicity as a priori
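
    The record above samples the posterior over fault geometry and slip with an adaptive Metropolis-Hastings algorithm. As a minimal, non-adaptive sketch of that sampling step (the design matrix, data and prior weight below are toy stand-ins, not the study's fault model), a plain random-walk Metropolis sampler over a parameter vector could look like this:

        import numpy as np

        def metropolis(log_post, x0, n_samples, step=0.05, seed=0):
            """Random-walk Metropolis sampling of a log-posterior."""
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            lp = log_post(x)
            chain = []
            for _ in range(n_samples):
                prop = x + step * rng.standard_normal(x.shape)  # Gaussian proposal
                lp_prop = log_post(prop)
                if np.log(rng.uniform()) < lp_prop - lp:        # accept/reject
                    x, lp = prop, lp_prop
                chain.append(x.copy())
            return np.array(chain)

        # Toy posterior: Gaussian data misfit plus a weak damping prior.
        G = np.random.default_rng(1).standard_normal((20, 4))   # toy design matrix
        m_true = np.array([1.0, -0.5, 0.3, 2.0])
        d_obs = G @ m_true
        log_post = lambda m: -0.5 * np.sum((G @ m - d_obs) ** 2) - 0.01 * np.sum(m ** 2)
        chain = metropolis(log_post, np.zeros(4), 5000)
        print(chain[2500:].mean(axis=0))   # posterior mean after burn-in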

  19. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  20. Accurate three-dimensional pose recognition from monocular images using template matched filtering

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Diaz-Ramirez, Victor H.; Kober, Vitaly; Montemayor, Antonio S.; Pantrigo, Juan J.

    2016-06-01

    An accurate algorithm for three-dimensional (3-D) pose recognition of a rigid object is presented. The algorithm is based on adaptive template matched filtering and local search optimization. When a scene image is captured, a bank of correlation filters is constructed to find the best correspondence between the current view of the target in the scene and a target image synthesized by means of computer graphics. The synthetic image is created using a known 3-D model of the target and an iterative procedure based on local search. Computer simulation results obtained with the proposed algorithm in synthetic and real-life scenes are presented and discussed in terms of accuracy of pose recognition in the presence of noise, cluttered background, and occlusion. Experimental results show that our proposal presents high accuracy for 3-D pose estimation using monocular images.
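
    The record above finds the best match between the scene and synthesized target views with a bank of correlation filters. A heavily simplified sketch of the core operation, plain FFT-based cross-correlation of a zero-mean template against the scene (not the adaptive matched filters of the paper), is shown below; the image sizes and crop location are arbitrary.

        import numpy as np

        def correlate_fft(scene, template):
            """Cross-correlate a zero-mean template with the scene via the FFT
            and return the (row, col) position of the correlation peak."""
            t = template - template.mean()
            pad = np.zeros_like(scene, dtype=float)
            pad[:t.shape[0], :t.shape[1]] = t
            corr = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(pad))))
            return np.unravel_index(np.argmax(corr), corr.shape)

        # In the full method each template would be a view of the 3D model
        # rendered at a candidate pose; here we just crop a patch of the scene.
        rng = np.random.default_rng(1)
        scene = rng.random((128, 128))
        template = scene[40:60, 70:90].copy()
        print(correlate_fft(scene, template))   # expected near (40, 70)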

  1. 3D Transient Hydraulic Tomography (3DTHT): An Efficient Field and Modeling Method for High-Resolution Estimation of Aquifer Heterogeneity

    NASA Astrophysics Data System (ADS)

    Barrash, W.; Cardiff, M. A.; Kitanidis, P. K.

    2012-12-01

    The distribution of hydraulic conductivity (K) is a major control on groundwater flow and contaminant transport. Our limited ability to determine 3D heterogeneous distributions of K is a major reason for increased costs and uncertainties associated with virtually all aspects of groundwater contamination management (e.g., site investigations, risk assessments, remediation method selection/design/operation, monitoring system design/operation). Hydraulic tomography (HT) is an emerging method for directly estimating the spatially variable distribution of K - in a similar fashion to medical or geophysical imaging. Here we present results from 3D transient field-scale experiments (3DTHT) which capture the heterogeneous K distribution in a permeable, moderately heterogeneous, coarse fluvial unconfined aquifer at the Boise Hydrogeophysical Research Site (BHRS). The results are verified against high-resolution K profiles from multi-level slug tests at BHRS wells. The 3DTHT field system for well instrumentation and data acquisition/feedback is fully modular and portable, and the in-well packer-and-port system is easily assembled and disassembled without expensive support equipment or need for gas pressurization. Tests are run for 15-20 min and the aquifer is allowed to recover while the pumping equipment is repositioned between tests. The tomographic modeling software developed uses as input observations of temporal drawdown behavior from each of numerous zones isolated in numerous observation wells during a series of pumping tests conducted from numerous isolated intervals in one or more pumping wells. The software solves for distributed K (as well as storage parameters Ss and Sy, if desired) and estimates parameter uncertainties using: a transient 3D unconfined forward model in MODFLOW, the adjoint state method for calculating sensitivities (Clemo 2007), and the quasi-linear geostatistical inverse method (Kitanidis 1995) for the inversion. We solve for K at >100,000 sub-m3

  2. Different scenarios for inverse estimation of soil hydraulic parameters from double-ring infiltrometer data using HYDRUS-2D/3D

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Parisa; Ghorbani-Dashtaki, Shoja; Mosaddeghi, Mohammad Reza; Shirani, Hossein; Nodoushan, Ali Reza Mohammadi

    2016-04-01

    In this study, HYDRUS-2D/3D was used to simulate ponded infiltration through double-ring infiltrometers into a hypothetical loamy soil profile. Twelve scenarios of inverse modelling (divided into three groups) were considered for estimation of the Mualem-van Genuchten hydraulic parameters. In the first group, simulation was carried out solely using cumulative infiltration data. In the second group, cumulative infiltration data plus the water content at h = -330 cm (field capacity) were used as inputs. In the third group, cumulative infiltration data plus the water contents at h = -330 cm (field capacity) and h = -15 000 cm (permanent wilting point) were used simultaneously as predictors. The results showed that numerical inverse modelling of the double-ring infiltrometer data provided a reliable alternative method for determining soil hydraulic parameters. The results also indicated that by reducing the number of hydraulic parameters involved in the optimization process, the simulation error is reduced. The best infiltration simulation was obtained in the scenario in which the parameters α, n, and Ks were optimized using the infiltration data and field capacity as inputs. Including field capacity as additional data was important for better optimization/definition of the soil hydraulic functions, but using field capacity and permanent wilting point simultaneously as additional data increased the simulation error.
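
    The parameters being optimized (α, n, Ks, together with θr and θs) define the Mualem-van Genuchten retention curve. As a small reference sketch, the function below evaluates the standard van Genuchten water content θ(h); the loam-like parameter values in the example are generic textbook values, not those estimated in the study.

        import numpy as np

        def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
            """van Genuchten retention curve theta(h); h is the pressure head
            (negative under suction), alpha in 1/cm when h is in cm."""
            m = 1.0 - 1.0 / n
            Se = np.where(h < 0, (1.0 + np.abs(alpha * h) ** n) ** (-m), 1.0)
            return theta_r + (theta_s - theta_r) * Se

        # Water contents at field capacity and permanent wilting point for
        # illustrative loam-like parameters.
        print(van_genuchten_theta(np.array([-330.0, -15000.0]),
                                  theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56))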

  3. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

    In this work, we propose a feasible 3D video generation method to enable high quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a 2-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting. It is followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position, real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation, the extracted foreground is placed in front of the background image that was captured at the initial position. The constructed full view from the initial position, combined with the view from the secondary (current) position, forms the complete binocular pair during real-time video shooting. The subjective evaluation results indicate a competent depth perception quality through the proposed system.

  4. The Impact of 3D Volume-of-Interest Definition on Accuracy and Precision of Activity Estimation in Quantitative SPECT and Planar Processing Methods

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise, and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT), and planar (QPlanar) processing. Another important effect impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimations. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in the same transaxial plane in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g., in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from −1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ

  5. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo imaging of cancer. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies have mostly been validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted of the combined use of commercially available anthropomorphic phantoms and irregular molds generated using 3D printing technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
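
    The record above combines an adaptive threshold for the lesion with a k-means estimate of the background. A toy sketch of that combination is given below; the threshold fraction, cluster count and synthetic image are illustrative assumptions, not the calibrated values of the study.

        import numpy as np

        def segment_mtv(pet, roi_mask, frac=0.42, n_iter=50):
            """Toy adaptive-threshold MTV segmentation: estimate the background
            as the lower of two 1D k-means clusters inside the ROI, then keep
            voxels above background + frac * (lesion peak - background)."""
            vals = pet[roi_mask]
            c = np.array([vals.min(), vals.max()], dtype=float)   # two centroids
            for _ in range(n_iter):
                lab = np.abs(vals[:, None] - c[None, :]).argmin(axis=1)
                for k in (0, 1):
                    if np.any(lab == k):
                        c[k] = vals[lab == k].mean()
            background, peak = c.min(), vals.max()
            thr = background + frac * (peak - background)
            return (pet >= thr) & roi_mask

        # Synthetic example: a hot cube on a warm background inside a ROI.
        rng = np.random.default_rng(0)
        pet = rng.normal(1.0, 0.1, size=(32, 32, 32))
        pet[12:20, 12:20, 12:20] += 4.0
        roi = np.zeros_like(pet, dtype=bool)
        roi[8:24, 8:24, 8:24] = True
        print(int(segment_mtv(pet, roi).sum()), "voxels in the segmented MTV")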

  6. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an issue of education of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policies, constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is very much reduced compared to that of the 1990s, the holographic concept is spreading in all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do the seen and the unseen interfere? What else has to be taken into consideration to communicate in 3D? How to handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How to learn to create 3D visualization, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In what subject matter? For whom?

  7. Pose-Invariant Face Recognition via RGB-D Images

    PubMed Central

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle large pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions. PMID:26819581

  8. The yearly rate of Relative Thalamic Atrophy (yrRTA): a simple 2D/3D method for estimating deep gray matter atrophy in Multiple Sclerosis

    PubMed Central

    Menéndez-González, Manuel; Salas-Pacheco, José M.; Arias-Carrión, Oscar

    2014-01-01

    Despite a strong correlation with outcome, the measurement of gray matter (GM) atrophy is not being used in daily clinical practice as a prognostic factor or to monitor the effect of treatments in Multiple Sclerosis (MS). This is mainly because the volumetric methods available to date are sophisticated and difficult to implement for routine use in most hospitals. In addition, the meaning of raw results from volumetric studies on regions of interest is not always easy to understand. Thus, there is a huge need for a methodology suitable for daily clinical practice in order to estimate GM atrophy in a convenient and comprehensible way. Given that the thalamus is the brain structure found to be most consistently implicated in MS, both in terms of extent of atrophy and in terms of prognostic value, we propose a solution based on this structure. In particular, we propose to compare the extent of thalamus atrophy with the extent of unspecific, global brain atrophy, represented by ventricular enlargement. We name this ratio the “yearly rate of Relative Thalamic Atrophy” (yrRTA). In this report we aim to describe the concept of yrRTA and the guidelines for computing it under 2D and 3D approaches and to explain the rationale behind this method. We have also conducted a very short cross-sectional retrospective study to prove the concept of yrRTA. However, we do not seek to describe here the validity of this parameter since this research is currently being conducted and results will be addressed in future publications. PMID:25206331

  9. The yearly rate of Relative Thalamic Atrophy (yrRTA): a simple 2D/3D method for estimating deep gray matter atrophy in Multiple Sclerosis.

    PubMed

    Menéndez-González, Manuel; Salas-Pacheco, José M; Arias-Carrión, Oscar

    2014-01-01

    Despite a strong correlation with outcome, the measurement of gray matter (GM) atrophy is not being used in daily clinical practice as a prognostic factor or to monitor the effect of treatments in Multiple Sclerosis (MS). This is mainly because the volumetric methods available to date are sophisticated and difficult to implement for routine use in most hospitals. In addition, the meaning of raw results from volumetric studies on regions of interest is not always easy to understand. Thus, there is a huge need for a methodology suitable for daily clinical practice in order to estimate GM atrophy in a convenient and comprehensible way. Given that the thalamus is the brain structure found to be most consistently implicated in MS, both in terms of extent of atrophy and in terms of prognostic value, we propose a solution based on this structure. In particular, we propose to compare the extent of thalamus atrophy with the extent of unspecific, global brain atrophy, represented by ventricular enlargement. We name this ratio the "yearly rate of Relative Thalamic Atrophy" (yrRTA). In this report we aim to describe the concept of yrRTA and the guidelines for computing it under 2D and 3D approaches and to explain the rationale behind this method. We have also conducted a very short cross-sectional retrospective study to prove the concept of yrRTA. However, we do not seek to describe here the validity of this parameter since this research is currently being conducted and results will be addressed in future publications. PMID:25206331

  10. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  11. 3D face reconstruction from limited images based on differential evolution

    NASA Astrophysics Data System (ADS)

    Wang, Qun; Li, Jiang; Asari, Vijayan K.; Karim, Mohammad A.

    2011-09-01

    3D face modeling has been one of the greatest challenges for researchers in computer graphics for many years. Various methods have been used to model the shape and texture of faces under varying illumination and pose conditions from a single given image. In this paper, we propose a novel method for 3D face synthesis and reconstruction by using a simple and efficient global optimizer. A 3D-2D matching algorithm which integrates the 3D morphable model (3DMM) and the differential evolution (DE) algorithm is addressed. In 3DMM, the process of fitting shape and texture information to 2D images is considered as the problem of searching for the global minimum in a high-dimensional feature space, in which optimization is prone to becoming trapped in local minima. Unlike the traditional scheme used in 3DMM, DE appears to be robust against stagnation in local minima and against sensitivity to initial values in face reconstruction. Benefiting from DE's successful performance, 3D face models can be created from a single 2D image under various illumination and pose conditions. Preliminary results demonstrate that we are able to automatically create a virtual 3D face from a single 2D image with high performance. The validation process shows that there is only an insignificant difference between the input image and the 2D face image projected by the 3D model.
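
    The record above uses differential evolution as the global optimizer for fitting the morphable model. A minimal DE/rand/1/bin sketch is shown below; the population size, mutation and crossover constants, and the toy quadratic cost are illustrative choices standing in for the 3DMM fitting cost.

        import numpy as np

        def differential_evolution(cost, bounds, pop_size=30, F=0.8, CR=0.9,
                                   n_gen=200, seed=0):
            """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
            apply binomial crossover, keep the trial if it improves the cost."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            dim = lo.size
            pop = lo + rng.random((pop_size, dim)) * (hi - lo)
            fit = np.array([cost(x) for x in pop])
            for _ in range(n_gen):
                for i in range(pop_size):
                    a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True          # force at least one gene
                    trial = np.where(cross, mutant, pop[i])
                    f = cost(trial)
                    if f < fit[i]:
                        pop[i], fit[i] = trial, f
            return pop[fit.argmin()], fit.min()

        # Toy stand-in for the model-fitting cost (sum of squared residuals).
        best, val = differential_evolution(lambda x: np.sum((x - 0.3) ** 2),
                                           bounds=[(-1.0, 1.0)] * 5)
        print(best, val)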

  12. 2D-3D Registration of CT Vertebra Volume to Fluoroscopy Projection: A Calibration Model Assessment

    NASA Astrophysics Data System (ADS)

    Bifulco, P.; Cesarelli, M.; Allen, R.; Romano, M.; Fratini, A.; Pasquariello, G.

    2009-12-01

    This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. Accurate measurement of the 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; then, the vertebra 3D pose was estimated and the results compared. Error analysis revealed accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement.

  13. TRACE 3-D documentation

    SciTech Connect

    Crandall, K.R.

    1987-08-01

    TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instruction for using the code. Several examples are described in detail.

  14. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°±1.19°, 0.45°±2.17°, 0.23°±1.05°) and (0.03±0.55, -0.03±0.54, -2.73±1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53±0.30 mm distance errors.

  15. Posing Einstein's Question: Questioning Einstein's Pose.

    ERIC Educational Resources Information Center

    Topper, David; Vincent, Dwight E.

    2000-01-01

    Discusses the events surrounding a famous picture of Albert Einstein in which he poses near a blackboard containing a tensor form of his 10 field equations for pure gravity with a question mark after it. Speculates as to the content of Einstein's lecture and the questions he might have had about the equation. (Contains over 30 references.) (WRM)

  16. RGB-D Indoor Plane-based 3D-Modeling using Autonomous Robot

    NASA Astrophysics Data System (ADS)

    Mostofi, N.; Moussa, A.; Elhabiby, M.; El-Sheimy, N.

    2014-11-01

    3D models of indoor environments provide rich information that can facilitate the disambiguation of different places and speed up the familiarization process with any indoor environment for remote users. In this research work, we describe a system for visual odometry and 3D modeling using information from an RGB-D sensor (camera). The visual odometry method estimates the relative pose of consecutive RGB-D frames through feature extraction and matching techniques. The pose estimated by the visual odometry algorithm is then refined with the iterative closest point (ICP) method. A switching technique between ICP and visual odometry in the case of no visible features suppresses inconsistency in the final developed map. Finally, we add loop closure to remove the deviation between the first and last frames. In order to extract semantic meaning from the 3D models, planar patches are segmented from the RGB-D point cloud data using a region growing technique, followed by a convex hull method to assign boundaries to the extracted patches. In order to build the final semantic 3D model, the segmented patches are merged using the relative pose information obtained from the first step.
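
    At the core of both the feature-based visual odometry step and each ICP iteration is the closed-form, least-squares rigid alignment of matched 3D point pairs. The sketch below shows that alignment (the SVD/Kabsch solution) on synthetic correspondences; the full pipeline would additionally establish the correspondences (feature matching or nearest neighbours) and iterate.

        import numpy as np

        def rigid_transform(P, Q):
            """Least-squares rigid transform (R, t) mapping points P onto their
            matched points Q (both N x 3), via the SVD (Kabsch) solution."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)                 # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
            R = Vt.T @ D @ U.T
            t = cQ - R @ cP
            return R, t

        # Toy check: recover a known rotation and translation between two "frames".
        rng = np.random.default_rng(2)
        P = rng.random((100, 3))
        ang = np.deg2rad(10.0)
        R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                           [np.sin(ang),  np.cos(ang), 0.0],
                           [0.0, 0.0, 1.0]])
        Q = P @ R_true.T + np.array([0.05, -0.02, 0.10])
        R, t = rigid_transform(P, Q)
        print(np.allclose(R, R_true, atol=1e-6), t)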

  17. Estimating a structural bottle neck for eye-brain transfer of visual information from 3D-volumes of the optic nerve head from a commercial OCT device

    NASA Astrophysics Data System (ADS)

    Malmberg, Filip; Sandberg-Melin, Camilla; Söderberg, Per G.

    2016-03-01

    The aim of this project was to investigate the possibility of using OCT optic nerve head 3D information captured with a Topcon OCT 2000 device for detection of the shortest distance between the inner limit of the retina and the central limit of the pigment epithelium around the circumference of the optic nerve head. The shortest distance between these boundaries reflects the nerve fiber layer thickness and measurement of this distance is interesting for follow-up of glaucoma.

  18. 3D model-based detection and tracking for space autonomous and uncooperative rendezvous

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Zhang, Yueqiang; Liu, Haibo

    2015-10-01

    In order to navigate fully using a vision sensor, a 3D edge-model-based detection and tracking technique was developed. Firstly, we proposed a target detection strategy over a sequence of several images using the 3D model to initialize the tracking. The overall purpose of this approach is to robustly match each image with the model views of the target. Thus, we designed a line segment detection and matching method based on multi-scale space technology. Experiments on real images showed that our method is highly robust under various image changes. Secondly, we proposed a method based on a 3D particle filter (PF) coupled with M-estimation to track the target and estimate its pose efficiently. In the proposed approach, a similarity observation model was designed according to a new distance function for line segments. Then, based on the tracking results of the PF, the pose was optimized using M-estimation. Experiments indicated that the proposed method can effectively track and accurately estimate the pose of a freely moving target in an unconstrained environment.
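
    The tracking step above is a particle filter over the target pose with a line-segment similarity likelihood, followed by M-estimation refinement. The sketch below shows only the generic predict-reweight-resample cycle of a SIR particle filter on a toy 3-parameter state with a Gaussian likelihood; the motion noise, likelihood and state dimension are placeholders, and the M-estimation refinement is omitted.

        import numpy as np

        def particle_filter_step(particles, weights, likelihood, motion_std, rng):
            """One predict-update-resample cycle of a SIR particle filter."""
            # predict: diffuse particles with a random-walk motion model
            particles = particles + motion_std * rng.standard_normal(particles.shape)
            # update: reweight each particle by its observation likelihood
            weights = weights * np.array([likelihood(p) for p in particles])
            weights /= weights.sum()
            # resample (multinomial, for brevity)
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            return particles[idx], np.full(len(particles), 1.0 / len(particles))

        # Toy "pose" (3 parameters) drifting over time; the likelihood peaks at it.
        rng = np.random.default_rng(3)
        true_pose = np.zeros(3)
        particles = 0.5 * rng.standard_normal((500, 3))
        weights = np.full(500, 1.0 / 500)
        for _ in range(20):
            true_pose = true_pose + 0.05
            lik = lambda p, tp=true_pose: np.exp(-0.5 * np.sum((p - tp) ** 2) / 0.2 ** 2)
            particles, weights = particle_filter_step(particles, weights, lik, 0.1, rng)
        print(particles.mean(axis=0), "vs true", true_pose)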

  19. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and for 3D dose measurement in general.

  20. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50k 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101

  1. Bootstrapping 3D fermions

    NASA Astrophysics Data System (ADS)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  2. Interior Reconstruction Using the 3d Hough Transform

    NASA Astrophysics Data System (ADS)

    Dumitru, R.-C.; Borrmann, D.; Nüchter, A.

    2013-02-01

    Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need to characterize and quantify complex environments in an automatic fashion arises, posing challenges for data analysis. This paper presents a system for 3D modeling by detecting planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level through automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
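
    Plane detection with a 3D Hough transform votes in a (theta, phi, rho) parameter space, where the plane is x · n = rho and n is the unit normal given by the two angles. The sketch below is a naive accumulator version of that idea (the paper evaluates more efficient Hough variants); the grid resolutions and the synthetic floor-plane data are arbitrary.

        import numpy as np

        def hough_planes(points, n_theta=30, n_phi=60, rho_step=0.05):
            """Accumulate votes over plane parameters (theta, phi, rho) and
            return the normal of the strongest plane."""
            thetas = np.linspace(0.0, np.pi, n_theta)
            phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
            t, p = np.meshgrid(thetas, phis, indexing="ij")
            normals = np.stack([np.sin(t) * np.cos(p),
                                np.sin(t) * np.sin(p),
                                np.cos(t)], axis=-1).reshape(-1, 3)
            rho = points @ normals.T                       # signed distances (N, cells)
            rho_idx = np.round(rho / rho_step).astype(int)
            rho_idx -= rho_idx.min()
            acc = np.zeros((normals.shape[0], rho_idx.max() + 1), dtype=int)
            for cell in range(acc.shape[0]):
                np.add.at(acc[cell], rho_idx[:, cell], 1)  # vote per (angle cell, rho bin)
            best_cell, _ = np.unravel_index(acc.argmax(), acc.shape)
            return normals[best_cell]

        # Synthetic scene: a horizontal plane z = 1 plus uniform clutter.
        rng = np.random.default_rng(4)
        plane = np.column_stack([rng.uniform(-1, 1, 2000), rng.uniform(-1, 1, 2000),
                                 np.full(2000, 1.0)])
        clutter = rng.uniform(-1, 1, (500, 3))
        print(hough_planes(np.vstack([plane, clutter])))   # expected close to (0, 0, 1)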

  3. 3D microscope

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2008-02-01

    In order to circumvent the fact that only one observer can view the image from a stereoscopic microscope, an attachment was devised for displaying the 3D microscopic image on a large LCD monitor for viewing by multiple observers in real time. The principle of operation, design, fabrication, and performance are presented, along with tolerance measurements relating to the properties of the cellophane half-wave plate used in the design.

  4. Estimation of three-dimensional knee joint movement using bi-plane x-ray fluoroscopy and 3D-CT

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Fujita, Satoshi; Kohno, Takahiro; Suzuki, Masahiko; Miyagi, Jin; Moriya, Hideshige

    2005-04-01

    Acquisition of exact information on three-dimensional knee joint movement is desired in plastic surgery. Conventional X-ray fluoroscopy provides dynamic but only two-dimensional projected images. On the other hand, three-dimensional CT provides three-dimensional but static images. In this paper, a method for acquiring three-dimensional knee joint movement using both bi-plane dynamic X-ray fluoroscopy and static three-dimensional CT is proposed. The basic idea is the use of 2D/3D registration with digitally reconstructed radiographs (DRR), i.e. virtual projections of the CT data. The original idea is not new, but the application of bi-plane fluoroscopy to the natural bones of the knee is reported for the first time. The technique was applied to two volunteers and successful results were obtained. Accuracy evaluations through computer simulation and a phantom experiment with the knee joint of a pig were also conducted.

  5. Sensing and compressing 3-D models

    SciTech Connect

    Krumm, J.

    1998-02-01

    The goal of this research project was to create a passive and robust computer vision system for producing 3-D computer models of arbitrary scenes. Although the authors were unsuccessful in achieving the overall goal, several components of this research have shown significant potential. Of particular interest is the application of parametric eigenspace methods for planar pose measurement of partially occluded objects in gray-level images. The techniques presented provide a simple, accurate, and robust solution to the planar pose measurement problem. In addition, the representational efficiency of eigenspace methods used with gray-level features were successfully extended to binary features, which are less sensitive to illumination changes. The results of this research are presented in two papers that were written during the course of this project. The papers are included in sections 2 and 3. The first section of this report summarizes the 3-D modeling efforts.

  6. Automatic pose correction for image-guided nonhuman primate brain surgery planning

    NASA Astrophysics Data System (ADS)

    Ghafurian, Soheil; Chen, Antong; Hines, Catherine; Dogdas, Belma; Bone, Ashleigh; Lodge, Kenneth; O'Malley, Stacey; Winkelmann, Christopher T.; Bagchi, Ansuman; Lubbers, Laura S.; Uslaner, Jason M.; Johnson, Colena; Renger, John; Zariwala, Hatim A.

    2016-03-01

    Intracranial delivery of recombinant DNA and neurochemical analysis in nonhuman primates (NHP) requires precise targeting of various brain structures via imaging-derived coordinates in stereotactic surgeries. To attain targeting precision, the surgical planning needs to be done on preoperative three-dimensional (3D) CT and/or MR images in which the animal's head is fixed in a pose identical to the pose during the stereotactic surgery. The matching of the image to the pose in the stereotactic frame can be done manually by detecting key anatomical landmarks on the 3D MR and CT images, such as the ear canal and the ear bar zero position. This is not only time-intensive but also prone to error due to the varying initial poses in the images, which affect both the landmark detection and the rotation estimation. We have introduced a fast, reproducible, and semi-automatic method to detect the stereotactic coordinate system in the image and correct the pose. The method begins with a rigid registration of the subject images to an atlas and proceeds to detect the anatomical landmarks through a sequence of optimization, deformable and multimodal registration algorithms. The results showed precision similar to manual pose correction (maximum difference of 1.71 in average in-plane rotation).

  7. Geodesy-based estimates of loading rates on faults beneath the Los Angeles basin with a new, computationally efficient method to model dislocations in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Rollins, C.; Argus, D. F.; Avouac, J. P.; Landry, W.; Barbot, S.

    2015-12-01

    North-south compression across the Los Angeles basin is accommodated by slip on thrust faults beneath the basin that may present significant seismic hazard to Los Angeles. Previous geodesy-based efforts to constrain the distributions and rates of elastic strain accumulation on these faults [Argus et al 2005, 2012] have found that the elastic model used has a first-order impact on the inferred distribution of locking and creep, underlining the need to accurately incorporate the laterally heterogeneous elastic structure and complex fault geometries of the Los Angeles basin into this analysis. We are using Gamra [Landry and Barbot, in prep.], a newly developed adaptive-meshing finite-difference solver, to compute elastostatic Green's functions that incorporate the full 3D regional elastic structure provided by the SCEC Community Velocity Model. Among preliminary results from benchmarks, forward models and inversions, we find that: 1) for a modeled creep source on the edge dislocation geometry from Argus et al [2005], the use of the SCEC CVM material model produces surface velocities in the hanging wall that are up to ~50% faster than those predicted in an elastic halfspace model; 2) in sensitivity-modulated inversions of the Argus et al [2005] GPS velocity field for slip on the same dislocation source, the use of the CVM deepens the inferred locking depth by ≥3 km compared to an elastic halfspace model; 3) when using finite-difference or finite-element models with Dirichlet boundary conditions (except for the free surface) for problems of this scale, it is necessary to set the boundaries at least ~100 km away from any slip source or data point to guarantee convergence within 5% of analytical solutions (a result which may be applicable to other static dislocation modeling problems and which may scale with the size of the area of interest). Here we will present finalized results from inversions of an updated GPS velocity field [Argus et al, AGU 2015] for the inferred

  8. Non-Iterative Rigid 2D/3D Point-Set Registration Using Semidefinite Programming

    NASA Astrophysics Data System (ADS)

    Khoo, Yuehaw; Kapoor, Ankur

    2016-07-01

    We describe a convex programming framework for pose estimation in 2D/3D point-set registration with unknown point correspondences. We give two mixed-integer nonlinear program (MINP) formulations of the 2D/3D registration problem when there are multiple 2D images, and propose convex relaxations of both MINPs to semidefinite programs (SDP) that can be solved efficiently by interior point methods. Our approach to the 2D/3D registration problem is non-iterative in nature as we jointly solve for pose and correspondence. Furthermore, these convex programs can readily incorporate feature descriptors of points to enhance registration results. We prove that the convex programs exactly recover the solution to the original nonconvex 2D/3D registration problem under noiseless conditions. We apply these formulations to the registration of 3D models of coronary vessels to their 2D projections obtained from multiple intra-operative fluoroscopic images. For this application, we experimentally corroborate the exact recovery property in the absence of noise and further demonstrate robustness of the convex programs in the presence of noise.

  9. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match. The program could be a core for building application programs for systems
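
    The geometric core of re-pointing the mast is computing pan and tilt angles from the target's 3D position expressed in the mast frame (the full system also folds in the mast kinematic model and the estimated change in rover pose). A minimal sketch of that geometry, with an assumed frame convention (x forward, y left, z up) and made-up numbers, is:

        import numpy as np

        def pan_tilt_to_target(target_xyz):
            """Pan/tilt angles (radians) that point a camera at a 3D target
            given in the mast base frame; joint offsets are ignored."""
            x, y, z = target_xyz
            pan = np.arctan2(y, x)                    # rotate about the vertical axis
            tilt = np.arctan2(z, np.hypot(x, y))      # elevate toward the target
            return pan, tilt

        # Target roughly 2 m ahead, slightly to the left and below the mast head.
        print(np.degrees(pan_tilt_to_target([2.0, 0.5, -0.3])))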

  10. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  11. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  12. 3D face recognition based on matching of facial surfaces

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, Beatriz A.; Kober, Vitaly

    2015-09-01

    Face recognition is an important task in pattern recognition and computer vision. In this work a method for 3D face recognition in the presence of facial expression and pose variations is proposed. The method uses 3D shape data without color or texture information. A new matching algorithm based on conformal mapping of the original facial surfaces onto a Riemannian manifold, followed by comparison of conformal and isometric invariants computed in the manifold, is suggested. Experimental results are presented using common 3D face databases that contain a significant amount of expression and pose variations.

  13. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  14. Estimating subthreshold tumor on MRI using a 3D-DTI growth model for GBM: An adjunct to radiation therapy planning.

    PubMed

    Hathout, Leith; Patel, Vishal

    2016-08-01

    Mathematical modeling and serial magnetic resonance imaging (MRI), used to calculate patient-specific rates of tumor diffusion, D, and proliferation, ρ, can be combined to simulate glioblastoma multiforme (GBM) growth. We showed that the proportion and distribution of tumor cells below the MRI threshold are determined by the D/ρ ratio of the tumor. As most radiation fields incorporate a 1-3 cm margin to account for subthreshold tumor, accurate characterization of subthreshold tumor aids the design of optimal radiation fields. This study compared two models, a standard one-dimensional (1D) isotropic model and a three-dimensional (3D) anisotropic model using the advanced imaging method of diffusion tensor imaging (DTI), with regard to the D/ρ ratio's effect on the proportion and spatial extent of the subthreshold tumor. A validated reaction-diffusion equation accounting for tumor diffusion and proliferation modeled tumor concentration in time and space. For both the isotropic and anisotropic models, nine tumors with different D/ρ ratios were grown to a T1 radius of 1.5 cm. For each tumor, the percentage and extent of tumor cells beyond the T2 radius were calculated. For both models, higher D/ρ ratios were correlated with a greater proportion and extent of subthreshold tumor. Anisotropic modeling demonstrated a higher proportion and extent of subthreshold tumor than predicted by isotropic modeling. Because the quantity and distribution of subthreshold tumor depend on the D/ρ ratio, this ratio should influence radiation field demarcation. Furthermore, the use of DTI data to account for anisotropic tumor growth allows for a more refined characterization of the subthreshold tumor based on the patient-specific D/ρ ratio. PMID:27374420
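
    The growth model referred to above is a reaction-diffusion equation with a diffusion term and a proliferation term. Assuming the commonly used Fisher-KPP form with logistic proliferation (the record does not spell out the exact form), a single explicit time step on a 3D grid can be sketched as below; the isotropic D, rho, grid size and seed are placeholders, and the DTI-based variant would replace the scalar D with a spatially varying tensor.

        import numpy as np

        def rd_step(c, D, rho, dx, dt):
            """One explicit Euler step of dc/dt = D * laplacian(c) + rho * c * (1 - c),
            with c a normalized cell density on a 3D grid (periodic boundaries
            here, purely for brevity)."""
            lap = -6.0 * c
            for ax in range(3):
                lap += np.roll(c, 1, axis=ax) + np.roll(c, -1, axis=ax)
            lap /= dx ** 2
            return np.clip(c + dt * (D * lap + rho * c * (1.0 - c)), 0.0, 1.0)

        # Illustrative growth from a small seed.
        c = np.zeros((40, 40, 40))
        c[20, 20, 20] = 1.0
        for _ in range(200):
            c = rd_step(c, D=0.05, rho=0.2, dx=1.0, dt=0.1)
        print(float(c.sum()))   # total (dimensionless) tumor burden after 20 time units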

  15. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for (99m)Tc-hynic-Tyr(3)-octreotide Imaging.

    PubMed

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of (99m)Tc-hydrazinonicotinamide (hynic)-Tyr(3)-octreotide as a SPECT radiotracer. (99m)Tc patient-specific S values and the absorbed doses were calculated with the GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of (99m)hynic-Tyr(3)-octreotide. The patient-specific S values calculated by the GATE Monte Carlo code and the corresponding S values obtained by the MIRDOSE program differed within 4.3% on average for self-irradiation, and within 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by the GATE code and the MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used, since it provides more reliable dosimetric results. PMID:27134562

  16. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for 99mTc-hynic-Tyr3-octreotide Imaging

    PubMed Central

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of 99mTc-hydrazinonicotinamide (hynic)-Tyr3-octreotide as a SPECT radiotracer. 99mTc patient-specific S values and the absorbed doses were calculated with the GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of 99mhynic-Tyr3-octreotide. The patient-specific S values calculated by the GATE Monte Carlo code and the corresponding S values obtained by the MIRDOSE program differed within 4.3% on average for self-irradiation, and within 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by the GATE code and the MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used, since it provides more reliable dosimetric results. PMID:27134562

  17. Multimodal Deep Autoencoder for Human Pose Recovery.

    PubMed

    Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng

    2015-12-01

    Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieving process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationships are inherently non-linear, which limits recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error has been reduced by 20%-25%, which demonstrates the effectiveness of the proposed method. PMID:26452284
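
    The unified feature description above comes from eigen-decomposition of a hypergraph Laplacian. A small sketch of that step, assuming a Zhou-style normalized hypergraph Laplacian and an invented incidence matrix (the paper's low-rank construction of the hyperedges is not reproduced here):

      import numpy as np

      # Hypothetical incidence matrix H (n samples x m hyperedges): H[i, e] = 1 when
      # sample i belongs to hyperedge e. In the paper the hyperedges come from
      # multimodal image features; here they are invented for illustration.
      H = np.array([
          [1, 1, 0],
          [1, 0, 1],
          [0, 1, 1],
          [1, 1, 1],
          [0, 0, 1],
      ], dtype=float)

      w = np.ones(H.shape[1])                  # hyperedge weights
      dv = H @ w                               # vertex degrees
      de = H.sum(axis=0)                       # hyperedge degrees

      Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
      De_inv = np.diag(1.0 / de)

      # Normalized hypergraph Laplacian: L = I - Dv^-1/2 H W De^-1 H^T Dv^-1/2
      L = np.eye(H.shape[0]) - Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt

      # Unified feature description: eigenvectors of L with the smallest eigenvalues
      eigvals, eigvecs = np.linalg.eigh(L)
      unified_features = eigvecs[:, :2]        # one 2-D descriptor per sample
      print(unified_features.round(3))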

  18. 3D polarimetric purity

    NASA Astrophysics Data System (ADS)

    Gil, José J.; San José, Ignacio

    2010-11-01

    From our previous definition of the indices of polarimetric purity for 3D light beams [J.J. Gil, J.M. Correas, P.A. Melero and C. Ferreira, Monogr. Semin. Mat. G. de Galdeano 31, 161 (2004)], an analysis of their geometric and physical interpretation is presented. It is found that, in agreement with previous results, the first parameter is a measure of the degree of polarization, whereas the second parameter (called the degree of directionality) is a measure of the mean angular aperture of the direction of propagation of the corresponding light beam. This pair of invariant, non-dimensional, indices of polarimetric purity contains complete information about the polarimetric purity of a light beam. The overall degree of polarimetric purity is obtained as a weighted quadratic average of the degree of polarization and the degree of directionality.
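
    For reference, with the usual eigenvalue-based definitions (λ1 ≥ λ2 ≥ λ3 the eigenvalues of the trace-normalized 3x3 coherency matrix), the two indices and the weighted quadratic average mentioned above can be written as follows; this is a standard formulation consistent with the description, and the notation in the cited work may differ slightly:

      P_1 = \lambda_1 - \lambda_2, \qquad
      P_2 = \lambda_1 + \lambda_2 - 2\lambda_3, \qquad
      P_\Delta = \sqrt{\tfrac{1}{4}\left(3 P_1^{2} + P_2^{2}\right)}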

  19. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r, θ, z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  20. 'Bonneville' in 3-D!

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called 'Bonneville' after driving approximately 13 meters (42.7 feet) to get a better vantage point. Spirit's current position is close enough to the edge to see the interior of the crater, but high enough and far enough back to get a view of all of the walls. Because scientists and rover controllers are so pleased with this location, they will stay here for at least two more martian days, or sols, to take high resolution panoramic camera images of 'Bonneville' in its entirety. Just above the far crater rim, on the left side, is the rover's heatshield, which is visible as a tiny reflective speck.

  1. A 3D interactive method for estimating body segmental parameters in animals: application to the turning and running performance of Tyrannosaurus rex.

    PubMed

    Hutchinson, John R; Ng-Thow-Hing, Victor; Anderson, Frank C

    2007-06-21

    We developed a method based on interactive B-spline solids for estimating and visualizing biomechanically important parameters for animal body segments. Although the method is most useful for assessing the importance of unknowns in extinct animals, such as body contours, muscle bulk, or inertial parameters, it is also useful for non-invasive measurement of segmental dimensions in extant animals. Points measured directly from bodies or skeletons are digitized and visualized on a computer, and then a B-spline solid is fitted to enclose these points, allowing quantification of segment dimensions. The method is computationally fast enough so that software implementations can interactively deform the shape of body segments (by warping the solid) or adjust the shape quantitatively (e.g., expanding the solid boundary by some percentage or a specific distance beyond measured skeletal coordinates). As the shape changes, the resulting changes in segment mass, center of mass (CM), and moments of inertia can be recomputed immediately. Volumes of reduced or increased density can be embedded to represent lungs, bones, or other structures within the body. The method was validated by reconstructing an ostrich body from a fleshed and defleshed carcass and comparing the estimated dimensions to empirically measured values from the original carcass. We then used the method to calculate the segmental masses, centers of mass, and moments of inertia for an adult Tyrannosaurus rex, with measurements taken directly from a complete skeleton. We compare these results to other estimates, using the model to compute the sensitivities of unknown parameter values based upon 30 different combinations of trunk, lung and air sac, and hindlimb dimensions. The conclusion that T. rex was not an exceptionally fast runner remains strongly supported by our models-the main area of ambiguity for estimating running ability seems to be estimating fascicle lengths, not body dimensions. Additionally, the
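
    Once a body segment is represented as a closed solid, its mass, center of mass, and inertia tensor follow from integration over the enclosed volume. A hedged NumPy sketch of that step over a voxelized solid of uniform density; the B-spline fitting and interactive warping described above are omitted, and the toy inputs are invented:

      import numpy as np

      def mass_properties(occupancy, density, voxel_size):
          """Mass, center of mass, and inertia tensor (about the CM) of a voxelized solid.

          occupancy  : 3D boolean array, True where the solid is present
          density    : scalar or 3D array (kg/m^3), e.g. lower values for lungs/air sacs
          voxel_size : edge length of one voxel in metres
          """
          dv = voxel_size ** 3
          idx = np.argwhere(occupancy)                     # occupied voxel indices
          pts = (idx + 0.5) * voxel_size                   # voxel centre coordinates
          m_vox = np.broadcast_to(density, occupancy.shape)[tuple(idx.T)] * dv

          mass = m_vox.sum()
          com = (pts * m_vox[:, None]).sum(axis=0) / mass

          r = pts - com                                    # positions relative to the CM
          x, y, z = r[:, 0], r[:, 1], r[:, 2]
          inertia = np.array([
              [(m_vox * (y**2 + z**2)).sum(), -(m_vox * x * y).sum(), -(m_vox * x * z).sum()],
              [-(m_vox * x * y).sum(), (m_vox * (x**2 + z**2)).sum(), -(m_vox * y * z).sum()],
              [-(m_vox * x * z).sum(), -(m_vox * y * z).sum(), (m_vox * (x**2 + y**2)).sum()],
          ])
          return mass, com, inertia

      # Toy example: a 0.2 m cube of roughly flesh-like density
      occ = np.ones((20, 20, 20), dtype=bool)
      print(mass_properties(occ, density=1000.0, voxel_size=0.01))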

  2. Regoliths in 3-D

    NASA Technical Reports Server (NTRS)

    Grant, John; Cheng, Andrew; Delamere, Allen; Gorevan, Steven; Korotev, Randy; McKay, David; Schmitt, Harrison; Zarnecki, John

    1996-01-01

    A planetary regolith is any layer of fragmental, unconsolidated material that may or may not be texturally or compositionally altered relative to the underlying substrate and occurs on the outer surface of a solar system body. This includes fragmental material from volcanic, sedimentary, and meteoritic infall sources, derived by any process (e.g., impact and all other endogenic or exogenic processes). Many measurements that can be made from orbit or from Earth-based observations provide information only about the uppermost portions of a regolith and not the underlying substrate(s). Thus an understanding of the formation processes, physical properties, composition, and evolution of planetary regoliths is essential to answering scientific questions posed by the Committee on Planetary and Lunar Exploration (COMPLEX). This paper provides examples of measurements required to answer these critical science questions.

  3. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialized values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registration of disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and to estimate standardized uptake values (SUVs) of region of interests (ROIs) in PET images. Therefore, simulation studies are conducted to apply spherical targets to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex by the K-Means method and the proposed method by volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than those by the K-Means method. PMID:22948355
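
    Segmentation accuracy above is scored with Tanimoto's similarity, i.e. the ratio of the intersection to the union of two binary masks. A small sketch of that metric on two synthetic spherical targets (the GMM/kernel-density segmentation itself is not reproduced):

      import numpy as np

      def tanimoto(mask_a, mask_b):
          """Tanimoto (Jaccard) similarity between two binary segmentation masks."""
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          union = np.logical_or(a, b).sum()
          return np.logical_and(a, b).sum() / union if union else 1.0

      # Two overlapping spherical targets on a small grid
      z, y, x = np.mgrid[:32, :32, :32]
      sphere1 = (x - 16)**2 + (y - 16)**2 + (z - 16)**2 <= 8**2
      sphere2 = (x - 18)**2 + (y - 16)**2 + (z - 16)**2 <= 8**2
      print(f"Tanimoto similarity: {tanimoto(sphere1, sphere2):.3f}")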

  4. FUN3D Manual: 12.7

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  5. FUN3D Manual: 12.9

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  6. FUN3D Manual: 13.0

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  7. FUN3D Manual: 12.8

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  8. FUN3D Manual: 12.4

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  9. FUN3D Manual: 12.5

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  10. FUN3D Manual: 12.6

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  11. 3D Surface Reconstruction of Plant Seeds by Volume Carving: Performance and Accuracies

    PubMed Central

    Roussel, Johanna; Geiger, Felix; Fischbach, Andreas; Jahnke, Siegfried; Scharr, Hanno

    2016-01-01

    We describe a method for 3D reconstruction of plant seed surfaces, focusing on small seeds with diameters as small as 200 μm. The method considers robotized systems allowing single seed handling in order to rotate a single seed in front of a camera. Even though such systems feature high position repeatability, at sub-millimeter object scales, camera pose variations have to be compensated. We do this by robustly estimating the tool center point from each acquired image. 3D reconstruction can then be performed by a simple shape-from-silhouette approach. In experiments we investigate runtimes, theoretically achievable accuracy, experimentally achieved accuracy, and show as a proof of principle that the proposed method is well sufficient for 3D seed phenotyping purposes. PMID:27375628
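
    The shape-from-silhouette step amounts to keeping only those voxels whose projections fall inside the object silhouette in every calibrated view. A hedged sketch of that carving loop; the camera matrices and silhouettes are assumed inputs, and the tool-center-point compensation described above is omitted:

      import numpy as np

      def carve(voxels_xyz, silhouettes, projections):
          """Keep voxels whose projection lies inside the silhouette in every view.

          voxels_xyz  : (N, 3) voxel centre coordinates in world units
          silhouettes : list of 2D boolean masks (True = object)
          projections : list of 3x4 camera projection matrices, one per view
          """
          keep = np.ones(len(voxels_xyz), dtype=bool)
          hom = np.hstack([voxels_xyz, np.ones((len(voxels_xyz), 1))])   # homogeneous coords
          for sil, P in zip(silhouettes, projections):
              uvw = hom @ P.T                                            # project to image plane
              u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
              v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
              h, w = sil.shape
              inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
              ok = np.zeros(len(voxels_xyz), dtype=bool)
              ok[inside] = sil[v[inside], u[inside]]
              keep &= ok                                                 # carve away outside voxels
          return voxels_xyz[keep]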

  12. 3D camera tracking from disparity images

    NASA Astrophysics Data System (ADS)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between lenses in the 3D camera and the intrinsic parameters are known. The proposed method reduces camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between initial lenses using a normalized correlation method. In conjunction with matching features, we get disparity images. When the camera moves, the corresponding feature points, obtained from each lens of the 3D camera, are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via Essential matrices. Essential matrices are computed from the Fundamental matrix calculated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation matrix by d-motion. This is required because the camera motion obtained from the Essential matrix is up to scale. Finally, we optimize camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using a 3D camera, and fine surveillance systems which need not only depth information but also camera motion parameters in real time.
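
    The pose-from-epipolar-geometry step described above can be sketched with standard OpenCV calls; this is a generic stand-in rather than the authors' implementation, and the d-motion scale recovery and multi-lens optimization are omitted:

      import cv2
      import numpy as np

      def relative_pose(pts1, pts2, K):
          """Relative rotation R and unit translation t from matched image points.

          pts1, pts2 : (N, 2) float arrays of corresponding points (e.g. from KLT tracking)
          K          : 3x3 camera intrinsic matrix
          """
          # Essential matrix with a RANSAC scheme (OpenCV wraps the normalized algorithms)
          E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                            prob=0.999, threshold=1.0)
          # Decompose E and pick the physically valid (R, t) via the cheirality check
          _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
          return R, t   # t is recovered only up to scale; disparity/depth can fix the scale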

  13. Real-Time Large Scale 3d Reconstruction by Fusing Kinect and Imu Data

    NASA Astrophysics Data System (ADS)

    Huai, J.; Zhang, Y.; Yilmaz, A.

    2015-08-01

    Kinect-style RGB-D cameras have been used to build large scale dense 3D maps for indoor environments. These maps can serve many purposes such as robot navigation, and augmented reality. However, to generate dense 3D maps of large scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) Large scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) The coarse-to-fine iterative closest point (ICP) algorithm, the SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles which can reduce long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8Hz on average while integrating color images to the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures, and has smaller drift than the state-of-the-art systems in large scale reconstruction.

  14. 3D ladar ATR based on recognition by parts

    NASA Astrophysics Data System (ADS)

    Sobel, Erik; Douglas, Joel; Ettinger, Gil

    2003-09-01

    LADAR imaging is unique in its potential to accurately measure the 3D surface geometry of targets. We exploit this 3D geometry to perform automatic target recognition on targets in the domain of military and civilian ground vehicles. Here we present a robust model based 3D LADAR ATR system which efficiently searches through target hypothesis space by reasoning hierarchically from vehicle parts up to identification of a whole vehicle with specific pose and articulation state. The LADAR data consists of one or more 3D point clouds generated by laser returns from ground vehicles viewed from multiple sensor locations. The key to this approach is an automated 3D registration process to precisely align and match multiple data views to model based predictions of observed LADAR data. We accomplish this registration using robust 3D surface alignment techniques which we have also used successfully in 3D medical image analysis applications. The registration routine seeks to minimize a robust 3D surface distance metric to recover the best six-degree-of-freedom pose and fit. We process the observed LADAR data by first extracting salient parts, matching these parts to model based predictions and hierarchically constructing and testing increasingly detailed hypotheses about the identity of the observed target. This cycle of prediction, extraction, and matching efficiently partitions the target hypothesis space based on the distinctive anatomy of the target models and achieves effective recognition by progressing logically from a target's constituent parts up to its complete pose and articulation state.

  15. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera

    PubMed Central

    Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio

    2016-01-01

    The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimation of the unknown geometric transformation is obtained by a registration of the two 3D point clouds, but it ends up to be strongly affected by noise and data dispersion. A robust and optimal estimation is obtained by a statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera. PMID:27089344

  16. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera.

    PubMed

    Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio

    2016-01-01

    The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimation of the unknown geometric transformation is obtained by a registration of the two 3D point clouds, but it ends up to be strongly affected by noise and data dispersion. A robust and optimal estimation is obtained by a statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera. PMID:27089344

  17. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    SciTech Connect

    Mishra, Pankaj Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.; Li, Ruijiang

    2014-08-15

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) Eight digital eXtended CArdiac-Torso phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is (0.73 ± 0.63 mm) for the XCAT data and (0.90 ± 0.65 mm) for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model
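
    The core of the motion model is a PCA decomposition of displacement vector fields (DVFs) into spatial eigenvectors and temporal eigen-coefficients; a new volumetric image follows from updated coefficients. A hedged NumPy sketch with synthetic stand-in DVFs (the DRR-based cost function used to tune the coefficients against the EPID image is omitted):

      import numpy as np

      # Stand-in DVFs: one flattened displacement field per 4DCT breathing phase,
      # as produced by deformable registration of the reference phase to each phase.
      n_phases, n_voxels = 10, 5000
      dvfs = np.random.rand(n_phases, 3 * n_voxels)

      # PCA decouples space (eigenvectors) and time (eigen-coefficients)
      mean_dvf = dvfs.mean(axis=0)
      U, s, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
      n_modes = 3
      eigenvectors = Vt[:n_modes]                          # spatial modes
      coeffs = (dvfs - mean_dvf) @ eigenvectors.T          # per-phase coefficients

      # "Tuning" step: in the paper the coefficients are optimized so that a DRR of the
      # deformed volume matches the cine EPID image; here they are simply perturbed.
      new_coeffs = coeffs[0] + np.array([0.5, -0.2, 0.1])
      new_dvf = mean_dvf + new_coeffs @ eigenvectors       # updated volumetric motion field
      print(new_dvf.shape)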

  18. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

    In recent years, the emergence of 3D shape in face recognition is due to its robustness to pose and illumination changes. These attractive benefits do not remove all the challenges to achieving a satisfactory recognition rate. Other challenges, such as facial expressions and the computing time of matching algorithms, remain to be explored. In this context, we propose a 3D face recognition approach using 3D wavelet networks. Our approach contains two stages: a learning stage and a recognition stage. For the training, we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x, y, z), we proceed to voxelization to get a 3D volume, which is then decomposed by the 3D fast wavelet transform and modeled with a wavelet network; the associated weights are considered as feature vectors representing each training face. For the recognition stage, a face of unknown identity is projected on all the training wavelet networks to obtain a new feature vector after every projection. A similarity score is computed between the stored and the obtained feature vectors. To show the efficiency of our approach, experiments were performed on the FRGC v.2 benchmark.

  19. Estimation of the maximum allowable loading amount of COD in Luoyuan Bay by a 3-D COD transport and transformation model

    NASA Astrophysics Data System (ADS)

    Wu, Jialin; Li, Keqiang; Shi, Xiaoyong; Liang, Shengkang; Han, Xiurong; Ma, Qimin; Wang, Xiulin

    2014-08-01

    The rapid economic and social developments in the Luoyuan and Lianjiang counties of Fujian Province, China, raise certain environmental and ecosystem issues. Unusual phytoplankton blooms and eutrophication, for example, have increased in severity in Luoyuan Bay (LB). The constant increase of nutrient loads has largely caused the environmental degradation in LB. Several countermeasures have been implemented to solve these environmental problems. The most effective of these strategies is the reduction of pollutant loadings into the sea in accordance with total pollutant load control (TPLC) plans. A combined three-dimensional hydrodynamic transport-transformation model was constructed to estimate the marine environmental capacity of chemical oxygen demand (COD). The maximum allowable loadings for each discharge unit in LB were calculated from the simulation results. The simulation results indicated that the environmental capacity of COD is approximately 11 × 10⁴ t year⁻¹ when the water quality complies with the marine functional zoning standards for LB. A pollutant reduction scheme to diminish the present levels of mariculture- and domestic-based COD loadings is based on the estimated marine COD environmental capacity. The obtained values imply that the LB waters could comply with the targeted water quality criteria. To meet the revised marine functional zoning standards, discharge loadings from discharge units 1 and 11 should be reduced to 996 and 3236 t year⁻¹, respectively.

  20. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  1. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. Measurements of the acoustic signature of a small Unmanned Aerial Vehicle (UAV), recorded onboard the aircraft, are compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delays between the UAV and the ground microphones, which are affected by the atmospheric temperature and wind speed vectors along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allow local meteorological measurements made at the UAV and ground receivers to supplement any acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments, otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.

  2. 3D Ion Temperature Reconstruction

    NASA Astrophysics Data System (ADS)

    Tanabe, Hiroshi; You, Setthivoine; Balandin, Alexander; Inomoto, Michiaki; Ono, Yasushi

    2009-11-01

    The TS-4 experiment at the University of Tokyo collides two spheromaks to form a single high-beta compact toroid. Magnetic reconnection during the merging process heats and accelerates the plasma in toroidal and poloidal directions. The reconnection region has a complex 3D topology determined by the pitch of the spheromak magnetic fields at the merging plane. A pair of multichord passive spectroscopic diagnostics have been established to measure the ion temperature and velocity in the reconnection volume. One setup measures spectral lines across a poloidal plane, retrieving velocity and temperature from Abel inversion. The other, novel setup records spectral lines across another section of the plasma and reconstructs velocity and temperature from 3D vector and 2D scalar tomography techniques. The magnetic field linking both measurement planes is determined from in situ magnetic probe arrays. The ion temperature is then estimated within the volume between the two measurement planes and at the reconnection region. The measurement is followed over several repeatable discharges to follow the heating and acceleration process during the merging reconnection.

  3. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore the 3D compression design space from a different angle, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
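
    A rough sketch of the first scheme (color reduction followed by lossless zlib coding of color plus depth); the actual color-reduction algorithm used in the system may differ from the simple bit truncation assumed here:

      import zlib
      import numpy as np

      def compress_frame(color, depth, color_bits=4):
          """Scheme-1-style compression of one 3D video frame (sketch).

          color : (H, W, 3) uint8 image
          depth : (H, W) uint16 depth map
          """
          # Crude colour reduction: keep only the top `color_bits` bits per channel
          reduced = (color >> (8 - color_bits)) << (8 - color_bits)
          payload = reduced.tobytes() + depth.tobytes()
          return zlib.compress(payload, 6)

      # Toy frame; real, spatially coherent 3D video compresses far better than noise
      color = np.zeros((240, 320, 3), dtype=np.uint8)
      depth = np.zeros((240, 320), dtype=np.uint16)
      blob = compress_frame(color, depth)
      print(f"compression ratio: {(color.nbytes + depth.nbytes) / len(blob):.1f}")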

  4. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  5. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  6. A new automatic method for estimation of magnetization and density contrast by using three-dimensional (3D) magnetic and gravity anomalies

    NASA Astrophysics Data System (ADS)

    Bektas, Ozcan; Ates, Abdullah; Aydemir, Attila

    2012-09-01

    In this paper, a new method for estimating the ratio of magnetic intensity to density contrast of a body that creates magnetic and gravity anomalies is presented. Although the magnetic intensity and density of an anomalous body can be measured in the laboratory from surface samples, the proposed new method is developed to determine the magnetic intensity and density contrast from the magnetic and gravity anomalies when surface samples are not available. In this method, density contrast diagrams of a synthetic model are produced, and these diagrams are prepared as graphics where the magnetic intensity (J) is given on the vertical axis and Psg (pseudogravity)/Grv (gravity) values on the horizontal axis. The density contrast diagrams can be prepared as three sub-diagrams to show the low, middle and high ranges, allowing one to obtain the density contrast of the body. The proposed method is successfully tested on synthetic models with and without error. In order to verify the results of the method, an alternative method known as root-mean-square (RMS) is also applied to the same models to determine the density contrast. In this manner, the maximum correlation between the observed and calculated gravity anomalies is sought, and confirmation of the results is supported by the RMS method. In order to check the reliability of the new method on field data, the proposed method is applied to the Tetbury (England) and Hanobasi (Central Turkey) magnetic and gravity anomalies. Field models are correlated with available geological, seismic and borehole data. The results are found to be consistent and reliable for estimating the magnetic intensity and density contrast of the causative bodies.

  7. A joint data assimilation system (Tan-Tracker) to simultaneously estimate surface CO2 fluxes and 3-D atmospheric CO2 concentrations from observations

    NASA Astrophysics Data System (ADS)

    Tian, X.; Xie, Z.; Liu, Y.; Cai, Z.; Fu, Y.; Zhang, H.; Feng, L.

    2014-12-01

    We have developed a novel framework ("Tan-Tracker") for assimilating observations of atmospheric CO2 concentrations, based on the POD-based (proper orthogonal decomposition) ensemble four-dimensional variational data assimilation method (PODEn4DVar). The high flexibility and the high computational efficiency of the PODEn4DVar approach allow us to include both the atmospheric CO2 concentrations and the surface CO2 fluxes as part of the large state vector to be simultaneously estimated from assimilation of atmospheric CO2 observations. Compared to most modern top-down flux inversion approaches, where only surface fluxes are considered as control variables, one major advantage of our joint data assimilation system is that, in principle, no assumption on perfect transport models is needed. In addition, the possibility for Tan-Tracker to use a complete dynamic model to consistently describe the time evolution of CO2 surface fluxes (CFs) and the atmospheric CO2 concentrations represents a better use of observation information for recycling the analyses at each assimilation step in order to improve the forecasts for the following assimilations. An experimental Tan-Tracker system has been built based on a complete augmented dynamical model, where (1) the surface atmosphere CO2 exchanges are prescribed by using a persistent forecasting model for the scaling factors of the first-guess net CO2 surface fluxes and (2) the atmospheric CO2 transport is simulated by using the GEOS-Chem three-dimensional global chemistry transport model. Observing system simulation experiments (OSSEs) for assimilating synthetic in situ observations of surface CO2 concentrations are carefully designed to evaluate the effectiveness of the Tan-Tracker system. In particular, detailed comparisons are made with its simplified version (referred to as TT-S) with only CFs taken as the prognostic variables. It is found that our Tan-Tracker system is capable of outperforming TT-S with higher assimilation

  8. Estimation of pulmonary arterial volume changes in the normal and hypertensive fawn-hooded rat from 3D micro-CT data

    NASA Astrophysics Data System (ADS)

    Molthen, Robert C.; Wietholt, Christian; Haworth, Steven T.; Dawson, Christopher A.

    2002-04-01

    In the study of pulmonary vascular remodeling, much can be learned from observing the morphological changes undergone in the pulmonary arteries of the rat lung when exposed to chronic hypoxia or other challenges which elicit a remodeling response. Remodeling effects include thickening of vessel walls, and loss of wall compliance. Morphometric data can be used to localize the hemodynamic and functional consequences. We developed a CT imaging method for measuring the pulmonary arterial tree over a range of pressures in rat lungs. X-ray micro-focal isotropic volumetric imaging of the arterial tree in the intact rat lung provides detailed information on the size, shape and mechanical properties of the arterial network. In this study, we investigate the changes in arterial volume with step changes in pressure for both normoxic and hypoxic Fawn-Hooded (FH) rats. We show that FH rats exposed to hypoxia tend to have reduced arterial volume changes for the same preload when compared to FH controls. A secondary objective of this work is to quantify various phenotypes to better understand the genetic contribution of vascular remodeling in the lungs. This volume estimation method shows promise in high throughput phenotyping, distinguishing differences in the pulmonary hypertensive rat model.

  9. Estimation of water distribution and degradation mechanisms in polymer electrolyte membrane fuel cell gas diffusion layers using a 3D Monte Carlo model

    NASA Astrophysics Data System (ADS)

    Seidenberger, K.; Wilhelm, F.; Schmitt, T.; Lehnert, W.; Scholta, J.

    The understanding of water management in PEM fuel cells, of the degradation mechanisms of the gas diffusion layer (GDL), and of their mutual impact is still incomplete. Different modelling approaches contribute to gaining deeper insight into the processes occurring during fuel cell operation. Considering the GDL, the models can help to obtain information about the distribution of liquid water within the material. In particular, flooded regions can be identified, and the water distribution can be linked to the system geometry. Employed for material development, this information can help to increase the lifetime of the GDL as a fuel cell component and of the fuel cell as the entire system. The Monte Carlo (MC) model presented here helps to simulate and analyse the water balance in PEM fuel cell GDLs. This model comprises a three-dimensional, voxel-based representation of the GDL substrate, a section of the flowfield channel and the corresponding rib. Information on the water distribution within the substrate part of the GDL can be estimated.

  10. Estimating the relationship between urban 3D morphology and land surface temperature using airborne LiDAR and Landsat-8 Thermal Infrared Sensor data

    NASA Astrophysics Data System (ADS)

    Lee, J. H.

    2015-12-01

    Urban forests are known for mitigating the urban heat island effect and heat-related health issues by reducing air and surface temperature. Beyond the amount of canopy area, however, little is known about what kinds of spatial patterns and structures of urban forests best contribute to reducing temperatures and mitigating urban heat effects. Previous studies attempted to find the relationship between the land surface temperature and various indicators of vegetation abundance using remotely sensed data, but the majority of those studies relied on two-dimensional area-based metrics, such as tree canopy cover, impervious surface area, and the Normalized Difference Vegetation Index. This study investigates the relationship between the three-dimensional spatial structure of urban forests and urban surface temperature, focusing on vertical variance. We use a Landsat-8 Thermal Infrared Sensor image (acquired on July 24, 2014) to estimate the land surface temperature of the City of Sacramento, CA. We extract the height and volume of urban features (both vegetation and non-vegetation) using airborne LiDAR (Light Detection and Ranging) and high spatial resolution aerial imagery. Using regression analysis, we apply an empirical approach to find the relationship between the land surface temperature and different sets of variables, which describe spatial patterns and structures of various urban features including trees. Our analysis demonstrates that incorporating vertical variance parameters improves the accuracy of the model. The results of the study suggest urban tree planting is an effective and viable solution to mitigate urban heat by increasing the variance of the urban surface as well as the evaporative cooling effect.
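
    The regression analysis described above can be sketched as an ordinary least squares fit of land surface temperature against 2D cover metrics, with and without a vertical-variance predictor; all values below are synthetic placeholders rather than Sacramento data:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 500
      canopy_cover = rng.random(n)                 # fraction of tree canopy
      impervious   = rng.random(n)                 # fraction of impervious surface
      height_var   = rng.random(n) * 20.0          # variance of LiDAR feature heights (m^2)
      lst = (40 - 8 * canopy_cover + 5 * impervious - 0.1 * height_var
             + rng.normal(size=n))                 # synthetic land surface temperature (deg C)

      # Ordinary least squares with and without the vertical-variance term
      X2d = np.column_stack([np.ones(n), canopy_cover, impervious])
      X3d = np.column_stack([X2d, height_var])
      for name, X in [("2D metrics only", X2d), ("with vertical variance", X3d)]:
          beta, *_ = np.linalg.lstsq(X, lst, rcond=None)
          r2 = 1 - np.sum((lst - X @ beta) ** 2) / np.sum((lst - lst.mean()) ** 2)
          print(f"{name}: R^2 = {r2:.3f}")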

  11. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 s for each pose estimate), which can be improved by implementation in C++. Error analysis produced an average distance error of 3 mm and an orientation error of 2.5 degrees. The sources of these errors are: 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
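
    Camera pose recovery of this kind reduces to minimizing the reprojection error of known 3D model features against their observed image positions. A simplified stand-in for the constrained bundle adjustment named above, using SciPy's least-squares solver; the intrinsics, points, and poses are invented:

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def project(points_3d, rvec, tvec, K):
          """Pinhole projection given a rotation vector, translation, and intrinsics."""
          cam = Rotation.from_rotvec(rvec).apply(points_3d) + tvec
          uvw = cam @ K.T
          return uvw[:, :2] / uvw[:, 2:3]

      def estimate_pose(points_3d, points_2d, K):
          """Camera pose from 2D-3D matches by minimizing reprojection error (sketch)."""
          def residuals(x):
              return (project(points_3d, x[:3], x[3:], K) - points_2d).ravel()
          sol = least_squares(residuals, np.zeros(6), method="lm")
          return sol.x[:3], sol.x[3:]                 # rotation vector, translation

      # Toy check: recover a known pose from noiseless correspondences
      K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
      pts3d = np.random.rand(20, 3) + np.array([0, 0, 4.0])
      true_r, true_t = np.array([0.05, -0.1, 0.02]), np.array([0.1, -0.05, 0.3])
      pts2d = project(pts3d, true_r, true_t, K)
      print(estimate_pose(pts3d, pts2d, K))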

  12. Rubber Impact on 3D Textile Composites

    NASA Astrophysics Data System (ADS)

    Heimbs, Sebastian; Van Den Broucke, Björn; Duplessis Kergomard, Yann; Dau, Frederic; Malherbe, Benoit

    2012-06-01

    A low velocity impact study of aircraft tire rubber on 3D textile-reinforced composite plates was performed experimentally and numerically. In contrast to regular unidirectional composite laminates, no delaminations occur in such a 3D textile composite. Yarn decohesions, matrix cracks and yarn ruptures have been identified as the major damage mechanisms under impact load. An increase in the number of 3D warp yarns is proposed to improve the impact damage resistance. The characteristic of a rubber impact is the high amount of elastic energy stored in the impactor during impact, which was more than 90% of the initial kinetic energy. This large geometrical deformation of the rubber during impact leads to a less localised loading of the target structure and poses great challenges for the numerical modelling. A hyperelastic Mooney-Rivlin constitutive law was used in Abaqus/Explicit based on a step-by-step validation with static rubber compression tests and low velocity impact tests on aluminium plates. Simulation models of the textile weave were developed on the meso- and macro-scale. The final correlation between impact simulation results on 3D textile-reinforced composite plates and impact test data was promising, highlighting the potential of such numerical simulation tools.
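
    For reference, the two-parameter Mooney-Rivlin strain energy density in its common Abaqus-style form is shown below; the material coefficients C10, C01 and D1 are specimen-specific and would be identified from tests such as the static rubber compression tests mentioned above (values are not given in the record):

      W = C_{10}\left(\bar{I}_1 - 3\right) + C_{01}\left(\bar{I}_2 - 3\right) + \frac{1}{D_1}\left(J - 1\right)^{2}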

  13. Accurate 3D kinematic measurement of temporomandibular joint using X-ray fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Yamazaki, Takaharu; Matsumoto, Akiko; Sugamoto, Kazuomi; Matsumoto, Ken; Kakimoto, Naoya; Yura, Yoshiaki

    2014-04-01

    Accurate measurement and analysis of 3D kinematics of temporomandibular joint (TMJ) is very important for assisting clinical diagnosis and treatment of prosthodontics and orthodontics, and oral surgery. This study presents a new 3D kinematic measurement technique of the TMJ using X-ray fluoroscopic images, which can easily obtain the TMJ kinematic data in natural motion. In vivo kinematics of the TMJ (maxilla and mandibular bone) is determined using a feature-based 2D/3D registration, which uses beads silhouette on fluoroscopic images and 3D surface bone models with beads. The 3D surface models of maxilla and mandibular bone with beads were created from CT scans data of the subject using the mouthpiece with the seven strategically placed beads. In order to validate the accuracy of pose estimation for the maxilla and mandibular bone, computer simulation test was performed using five patterns of synthetic tantalum beads silhouette images. In the clinical applications, dynamic movement during jaw opening and closing was conducted, and the relative pose of the mandibular bone with respect to the maxilla bone was determined. The results of computer simulation test showed that the root mean square errors were sufficiently smaller than 1.0 mm and 1.0 degree. In the results of clinical application, during jaw opening from 0.0 to 36.8 degree of rotation, mandibular condyle exhibited 19.8 mm of anterior sliding relative to maxillary articular fossa, and these measurement values were clinically similar to the previous reports. Consequently, present technique was thought to be suitable for the 3D TMJ kinematic analysis.

  14. A 3D approach for object recognition in illuminated scenes with adaptive correlation filters

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Díaz-Ramírez, Víctor H.

    2015-09-01

    In this paper we solve the problem of pose recognition of a 3D object in non-uniformly illuminated and noisy scenes. The recognition system employs a bank of space-variant correlation filters constructed with an adaptive approach based on local statistical parameters of the input scene. The position and orientation of the target are estimated with the help of the filter bank. For an observed input frame, the algorithm computes the correlation process between the observed image and the bank of filters using a combination of data and task parallelism by taking advantage of a graphics processing unit (GPU) architecture. The pose of the target is estimated by finding the template that better matches the current view of target within the scene. The performance of the proposed system is evaluated in terms of recognition accuracy, location and orientation errors, and computational performance.

  15. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high dynamic range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. The multidirectional depth estimation can effectively achieve high dynamic range 3D imaging. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method to perform high-quality 3D imaging for highly and lowly reflective surfaces. PMID:27607639
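
    The ray-based calibration can be sketched as fitting, independently for every ray (pixel), a low-order polynomial mapping from unwrapped phase to depth using calibration planes at known depths; the polynomial form and order are assumptions for illustration, not taken from the paper:

      import numpy as np

      def calibrate_rays(phases, depths, order=2):
          """Fit an independent polynomial phase-to-depth mapping for every ray (pixel).

          phases : (K, H, W) unwrapped phase maps at K calibration plane positions
          depths : (K,) known depths of the calibration planes
          Returns per-ray polynomial coefficients of shape (order+1, H, W).
          """
          K, H, W = phases.shape
          coeffs = np.empty((order + 1, H, W))
          for i in range(H):
              for j in range(W):
                  A = np.vander(phases[:, i, j], order + 1)       # rows [phi^2, phi, 1]
                  coeffs[:, i, j], *_ = np.linalg.lstsq(A, depths, rcond=None)
          return coeffs

      def phase_to_depth(phase, coeffs):
          """Apply the per-ray mapping to a measured phase map."""
          order = coeffs.shape[0] - 1
          powers = np.stack([phase ** (order - k) for k in range(order + 1)])
          return np.sum(coeffs * powers, axis=0)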

  16. On detailed 3D reconstruction of large indoor environments

    NASA Astrophysics Data System (ADS)

    Bondarev, Egor

    2015-03-01

    In this paper we present techniques for highly detailed 3D reconstruction of extra large indoor environments. We discuss the benefits and drawbacks of low-range, far-range and hybrid sensing and reconstruction approaches. The proposed techniques for low-range and hybrid reconstruction, enabling a reconstruction density of 125 points/cm³ on large 100,000 m³ models, are presented in detail. The techniques tackle the core challenges for the above requirements, such as multi-modal data fusion (fusion of LIDAR data with Kinect data), accurate sensor pose estimation, high-density scanning and depth data noise filtering. Other important aspects for extra large 3D indoor reconstruction are point cloud decimation and real-time rendering. In this paper, we present a method for planar-based point cloud decimation, allowing for a reduction of the point cloud size by 80-95%. Besides this, we introduce a method for online rendering of extra large point clouds, enabling real-time visualization of huge cloud spaces in conventional web browsers.

  17. Student-Posed Problems

    NASA Astrophysics Data System (ADS)

    Harper, Kathleen A.; Etkina, Eugenia

    2002-10-01

    As part of weekly reports [1], structured journals in which students answer three standard questions each week, they respond to the prompt, "If I were the instructor, what questions would I ask or problems assign to determine if my students understood the material?" An initial analysis of the results shows that some student-generated problems indicate fundamental misunderstandings of basic physical concepts. A further investigation explores the relevance of the problems to the week's material, whether the problems are solvable, and the type of problems (conceptual or calculation-based) written. Also, possible links between various characteristics of the problems and conceptual achievement are being explored. The results of this study spark many more questions for further work. A summary of current findings will be presented, along with its relationship to previous work concerning problem posing [2]. [1] Etkina, E., "Weekly Reports: A Two-Way Feedback Tool," Science Education, 84, 594-605 (2000). [2] Mestre, J.P., "Probing Adults' Conceptual Understanding and Transfer of Learning Via Problem Posing," Journal of Applied Developmental Psychology, 23, 9-50 (2002).

  18. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    PubMed

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 [2001]) enables simultaneous acquisition of spectral information and 3D spatial information for incoherently illuminated or self-luminous objects. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method for the 3D spatial resolution of digital holography. PMID:27139648

  19. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  20. Towards a mobility diagnostic tool: tracking rollator users' leg pose with a monocular vision system.

    PubMed

    Ng, Samantha; Fakih, Adel; Fourney, Adam; Poupart, Pascal; Zelek, John

    2009-01-01

    Cognitive assistance of a rollator (wheeled walker) user tends to reduce the attentional capacity of the user and may impact her stability. Hence, it is important to understand and track the pose of rollator users before augmenting a rollator with some form of cognitive assistance. While the majority of current markerless vision systems focus on estimating 2D and 3D walking motion in the sagittal plane, we wish to estimate the 3D pose of rollator users' lower limbs by observing image sequences in the coronal (frontal) plane. Our apparatus poses a unique set of challenges: a single monocular view of only the lower limbs and a frontal perspective of the rollator user. Since motion in the coronal plane is relatively subtle, we explore multiple cues within a Bayesian probabilistic framework to formulate a posterior estimate of a given subject's leg pose. In this work, our focus is on evaluating the appearance model (the cues). Preliminary experiments indicate that texture and colour cues conditioned on the appearance of a rollator user outperform more general cues, at the cost of manually initializing the appearance offline. PMID:19963744
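
    The Bayesian combination of appearance cues can be sketched very simply for a discrete set of candidate leg poses: the posterior is proportional to the prior times the product of the per-cue likelihoods. This is only the generic probabilistic skeleton; the specific cues and models of the paper are not reproduced here.

      import numpy as np

      def pose_posterior(prior, cue_likelihoods):
          """prior: (K,) over candidate poses; cue_likelihoods: list of (K,)
          arrays, one per cue (e.g. texture, colour), assumed independent."""
          log_post = np.log(prior)
          for lik in cue_likelihoods:
              log_post += np.log(lik)
          log_post -= log_post.max()          # guard against underflow
          post = np.exp(log_post)
          return post / post.sum()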

  1. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  2. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  3. Lifting Object Detection Datasets into 3D.

    PubMed

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat, and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions such as 3D scanning or manual design, which scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  4. Visualization of liver in 3-D

    NASA Astrophysics Data System (ADS)

    Chen, Chin-Tu; Chou, Jin-Shin; Giger, Maryellen L.; Kahn, Charles E., Jr.; Bae, Kyongtae T.; Lin, Wei-Chung

    1991-05-01

    Visualization of the liver in three dimensions (3-D) can improve the accuracy of volumetric estimation and also aid in surgical planning. We have developed a method for 3-D visualization of the liver using x-ray computed tomography (CT) or magnetic resonance (MR) images. This method includes four major components: (1) segmentation algorithms for extracting liver data from tomographic images; (2) interpolation techniques for both shape and intensity; (3) schemes for volume rendering and display, and (4) routines for electronic surgery and image analysis. This method has been applied to cases from a living-donor liver transplant project and appears to be useful for surgical planning.

  5. Real-Time Head Pose Tracking with Online Face Template Reconstruction.

    PubMed

    Li, Songnan; Ngan, King Ngi; Paramesran, Raveendran; Sheng, Lu

    2016-09-01

    We propose a real-time method to accurately track the human head pose in the 3-dimensional (3D) world. Using a RGB-Depth camera, a face template is reconstructed by fitting a 3D morphable face model, and the head pose is determined by registering this user-specific face template to the input depth video. PMID:26584487

  6. Modular 3-D Transport model

    EPA Science Inventory

    MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab...

  7. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  8. LLNL-Earth3D

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  9. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  10. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  11. Vision-based endoscope tracking for 3D ultrasound image-guided surgical navigation.

    PubMed

    Yang, L; Wang, J; Ando, T; Kubota, A; Yamashita, H; Sakuma, I; Chiba, T; Kobayashi, E

    2015-03-01

    This work introduces a self-contained framework for endoscopic camera tracking by combining 3D ultrasonography with endoscopy. The approach can be readily incorporated into surgical workflows without installing external tracking devices. By fusing the ultrasound-constructed scene geometry with endoscopic vision, this integrated approach addresses issues related to initialization, scale ambiguity, and interest point inadequacy that may be faced by conventional vision-based approaches when applied to fetoscopic procedures. Vision-based pose estimations were demonstrated by phantom and ex vivo monkey placenta imaging. The potential contribution of this method may extend beyond fetoscopic procedures to include general augmented reality applications in minimally invasive procedures. PMID:25263644

  12. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky (``spaxels'') onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread though the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  13. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they have been modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time can be robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]). PMID:26440264

  14. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  15. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  16. Gender and ethnicity specific generic elastic models from a single 2D image for novel 2D pose face synthesis and recognition.

    PubMed

    Heo, Jingu; Savvides, Marios

    2012-12-01

    In this paper, we propose a novel method for generating a realistic 3D human face from a single 2D face image for the purpose of synthesizing new 2D face images at arbitrary poses using gender and ethnicity specific models. We employ the Generic Elastic Model (GEM) approach, which elastically deforms a generic 3D depth-map based on the sparse observations of an input face image in order to estimate the depth of the face image. Particularly, we show that Gender and Ethnicity specific GEMs (GE-GEMs) can approximate the 3D shape of the input face image more accurately, achieving a better generalization of 3D face modeling and reconstruction compared to the original GEM approach. We qualitatively validate our method using publicly available databases by showing each reconstructed 3D shape generated from a single image and new synthesized poses of the same person at arbitrary angles. For quantitative comparisons, we compare our synthesized results against 3D scanned data and also perform face recognition using synthesized images generated from a single enrollment frontal image. We obtain promising results for handling pose and expression changes based on the proposed method. PMID:22201062

  17. Quasi 3D dosimetry (EPID, conventional 2D/3D detector matrices)

    NASA Astrophysics Data System (ADS)

    Bäck, A.

    2015-01-01

    Patient specific pretreatment measurement for IMRT and VMAT QA should preferably give information with a high resolution in 3D. The ability to distinguish complex treatment plans, i.e. treatment plans with a difference between measured and calculated dose distributions that exceeds a specified tolerance, puts high demands on the dosimetry system used for the pretreatment measurements, and the results of the measurement evaluation need a clinical interpretation. There are a number of commercial dosimetry systems designed for pretreatment IMRT QA measurements. 2D arrays such as MapCHECK® (Sun Nuclear), MatriXXEvolution (IBA Dosimetry) and OCTAVIUS® 1500 (PTW), 3D phantoms such as OCTAVIUS® 4D (PTW), ArcCHECK® (Sun Nuclear) and Delta4 (ScandiDos) and software for EPID dosimetry and 3D reconstruction of the dose in the patient geometry such as EPIDoseTM (Sun Nuclear) and Dosimetry CheckTM (Math Resolutions) are available. None of those dosimetry systems can measure the 3D dose distribution with a high resolution (full 3D dose distribution). Those systems can be called quasi 3D dosimetry systems. To be able to estimate the delivered dose in full 3D, the user is dependent on a calculation algorithm in the software of the dosimetry system. All the vendors of the dosimetry systems mentioned above provide calculation algorithms to reconstruct a full 3D dose in the patient geometry. This enables analysis of the difference between measured and calculated dose distributions in DVHs of the structures of clinical interest, which facilitates the clinical interpretation and is a promising tool to be used for pretreatment IMRT QA measurements. However, independent validation studies on the accuracy of those algorithms are scarce. Pretreatment IMRT QA using the quasi 3D dosimetry systems mentioned above relies on both measurement uncertainty and the accuracy of the calculation algorithms. In this article, these quasi 3D dosimetry systems and their use in patient specific pretreatment IMRT

  18. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    Since many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems where a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, a huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps that are involved in the calibration of the system as well as the technique of generating human readable holoscopic images from the recorded data are discussed.

  19. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  20. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  1. 3D dynamic roadmapping for abdominal catheterizations.

    PubMed

    Bender, Frederik; Groher, Martin; Khamene, Ali; Wein, Wolfgang; Heibel, Tim Hauke; Navab, Nassir

    2008-01-01

    Despite rapid advances in interventional imaging, the navigation of a guide wire through abdominal vasculature remains a difficult task, not only for novice radiologists. Since this navigation is mostly based on 2D fluoroscopic image sequences from one view, the process is slowed down significantly by missing depth information and patient motion. We propose a novel approach for 3D dynamic roadmapping in deformable regions by predicting the location of the guide wire tip in a 3D vessel model from the tip's 2D location, respiratory motion analysis, and view geometry. In a first step, the method compensates for the apparent respiratory motion in 2D space before backprojecting the 2D guide wire tip into three-dimensional space using a given projection matrix. To counteract errors in the projection parameters and the motion compensation, as well as the ambiguity caused by vessel deformation, we establish a statistical framework that computes a reliable estimate of the guide wire tip location within the 3D vessel model. With this 2D-to-3D transfer, the navigation can be performed from arbitrary viewing angles, disconnected from the static perspective view of the fluoroscopic sequence. Tests on a realistic breathing phantom and on synthetic data with a known ground truth clearly reveal the superiority of our approach compared to naive methods for 3D roadmapping. The concepts and information presented in this paper are based on research and are not commercially available. PMID:18982662
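
    The geometric core of the 2D-to-3D transfer can be sketched as follows: backproject the motion-compensated 2D tip through the 3x4 projection matrix to obtain a viewing ray, then pick the point on the 3D vessel centerline closest to that ray. This omits the paper's statistical framework and is only an illustrative simplification.

      import numpy as np

      def backproject_ray(P, uv):
          """Viewing ray (origin, unit direction) for pixel uv under a 3x4 projection P."""
          M, p4 = P[:, :3], P[:, 3]
          origin = -np.linalg.inv(M) @ p4                # camera centre
          direction = np.linalg.inv(M) @ np.array([uv[0], uv[1], 1.0])
          return origin, direction / np.linalg.norm(direction)

      def nearest_centerline_point(origin, direction, centerline):
          """Point of an (N, 3) vessel centerline closest to the viewing ray."""
          t = (centerline - origin) @ direction          # ray parameter of each foot point
          foot = origin + np.outer(t, direction)
          dist = np.linalg.norm(centerline - foot, axis=1)
          return centerline[np.argmin(dist)]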

  2. Exemplar-based human action pose correction.

    PubMed

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen

    2014-07-01

    The launch of the Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over the contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems. PMID:24058046

  3. Bioprinting of 3D hydrogels.

    PubMed

    Stanton, M M; Samitier, J; Sánchez, S

    2015-08-01

    Three-dimensional (3D) bioprinting has recently emerged as an extension of 3D material printing, by using biocompatible or cellular components to build structures in an additive, layer-by-layer methodology for encapsulation and culture of cells. These 3D systems allow for cell culture in a suspension for formation of highly organized tissue or controlled spatial orientation of cell environments. The in vitro 3D cellular environments simulate the complexity of an in vivo environment and natural extracellular matrices (ECM). This paper will focus on bioprinting utilizing hydrogels as 3D scaffolds. Hydrogels are advantageous for cell culture as they are highly permeable to cell culture media, nutrients, and waste products generated during metabolic cell processes. They have the ability to be fabricated in customized shapes with various material properties with dimensions at the micron scale. 3D hydrogels are a reliable method for biocompatible 3D printing and have applications in tissue engineering, drug screening, and organ on a chip models. PMID:26066320

  4. Arena3D: visualization of biological networks in 3D

    PubMed Central

    Pavlopoulos, Georgios A; O'Donoghue, Seán I; Satagopam, Venkata P; Soldatos, Theodoros G; Pafilis, Evangelos; Schneider, Reinhard

    2008-01-01

    Background Complexity is a key problem when visualizing biological networks; as the number of entities increases, most graphical views become incomprehensible. Our goal is to enable many thousands of entities to be visualized meaningfully and with high performance. Results We present a new visualization tool, Arena3D, which introduces a new concept of staggered layers in 3D space. Related data – such as proteins, chemicals, or pathways – can be grouped onto separate layers and arranged via layout algorithms, such as Fruchterman-Reingold, distance geometry, and a novel hierarchical layout. Data on a layer can be clustered via k-means, affinity propagation, Markov clustering, neighbor joining, tree clustering, or UPGMA ('unweighted pair-group method with arithmetic mean'). A simple input format defines the name and URL for each node, and defines connections or similarity scores between pairs of nodes. The use of Arena3D is illustrated with datasets related to Huntington's disease. Conclusion Arena3D is a user friendly visualization tool that is able to visualize biological or any other network in 3D space. It is free for academic use and runs on any platform. It can be downloaded or launched directly from . Java3D library and Java 1.5 need to be pre-installed for the software to run. PMID:19040715

  5. Image-based 3D scene analysis for navigation of autonomous airborne systems

    NASA Astrophysics Data System (ADS)

    Jaeger, Klaus; Bers, Karl-Heinz

    2001-10-01

    In this paper we describe a method for automatic determination of sensor pose (position and orientation) relative to a 3D landmark or scene model. The method is based on geometrical matching of 2D image structures with projected elements of the associated 3D model. For structural image analysis and scene interpretation, a blackboard-based production system is used, resulting in a symbolic description of the image data. Knowledge of the approximate sensor pose, measured for example by an IMU or GPS, enables estimation of an expected model projection used to solve the correspondence problem between image structures and model elements. These correspondences are prerequisites for the pose computation, which is carried out by nonlinear numerical optimization algorithms. We demonstrate the efficiency of the proposed method with navigation updates while approaching a bridge and flying over an urban area, where data were taken with airborne infrared sensors in a high oblique view. In doing so we simulated image-based navigation for target engagement and midcourse guidance suited to the concepts of future autonomous systems such as missiles and drones.
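
    The pose computation from matched 2D image structures and 3D model elements is, at its core, a PnP problem refined by nonlinear optimization. A minimal sketch using OpenCV's iterative solver, seeded with the approximate IMU/GPS pose, is given below; the solver and argument names are OpenCV's, not the authors'.

      import numpy as np
      import cv2

      def refine_sensor_pose(model_pts, image_pts, K, rvec0, tvec0):
          """Refine an approximate pose from (N, 3) model points and their
          (N, 2) image correspondences, given the camera matrix K.
          rvec0, tvec0: initial Rodrigues rotation and translation (3x1 float64)."""
          ok, rvec, tvec = cv2.solvePnP(
              model_pts.astype(np.float64), image_pts.astype(np.float64),
              K, None,                       # lens distortion assumed negligible
              rvec=rvec0, tvec=tvec0, useExtrinsicGuess=True,
              flags=cv2.SOLVEPNP_ITERATIVE)
          if not ok:
              raise RuntimeError("pose refinement did not converge")
          return rvec, tvec                  # refined rotation and translation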

  6. Fdf in US3D

    NASA Astrophysics Data System (ADS)

    Otis, Collin; Ferrero, Pietro; Candler, Graham; Givi, Peyman

    2013-11-01

    The scalar filtered mass density function (SFMDF) methodology is implemented into the computer code US3D. This is an unstructured Eulerian finite volume hydrodynamic solver and has proven very effective for simulation of compressible turbulent flows. The resulting SFMDF-US3D code is employed for large eddy simulation (LES) on unstructured meshes. Simulations are conducted of subsonic and supersonic flows under non-reacting and reacting conditions. The consistency and the accuracy of the simulated results are assessed along with appraisal of the overall performance of the methodology. The SFMDF-US3D is now capable of simulating high speed flows in complex configurations.

  7. Measurement error analysis of the 3D four-wheel aligner

    NASA Astrophysics Data System (ADS)

    Zhao, Qiancheng; Yang, Tianlong; Huang, Dongzhao; Ding, Xun

    2013-10-01

    The positioning parameters of the four wheels have significant effects on the maneuverability, safety and energy efficiency of automobiles. Aiming at this issue, the error factors of the 3D four-wheel aligner, which arise in extracting image feature points, calibrating the internal and external camera parameters, calculating positional parameters and measuring target pose, are analyzed respectively, based on an elaboration of the structure and measurement principle of the 3D four-wheel aligner as well as of the major positional parameters: toe-in and camber of the four wheels, kingpin inclination and caster. After that, technical solutions are proposed for reducing the above error factors; on this basis, a new type of aligner has been developed and marketed, and it is highly regarded among customers because its technical indicators meet requirements well.

  8. Robust pose determination for autonomous docking

    SciTech Connect

    Goddard, J.S.; Jatko, W.B.; Ferrell, R.K.; Gleason, S.S.

    1995-12-31

    This paper describes current work at the Oak Ridge National Laboratory to develop a robotic vision system capable of recognizing designated objects by their intrinsic geometry. This method, based on single camera vision, combines point features and a model-based technique using geometric feature matching for the pose calculation. In this approach, 2-D point features are connected into higher-order shapes and then matched with corresponding features of the model. Pose estimates are made using a closed-form point solution based on model features of four coplanar points. Rotations are represented by quaternions that simplify the calculations in determining the least squares solution for the coordinate transformation. This pose determination method including image acquisition, feature extraction, feature correspondence, and pose calculation has been implemented on a real-time system using a standard camera and image processing hardware. Experimental results are given for relative error measurements.
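
    The quaternion representation makes the least-squares rotation between matched, centred 3D point sets a closed-form 4x4 eigenvalue problem (Horn's method). The sketch below shows that step in isolation; it is not the full pose pipeline described above, which also covers feature extraction and the coplanar four-point geometry.

      import numpy as np

      def horn_rotation(A, B):
          """Least-squares rotation R (3x3) aligning point set A (N, 3) onto B (N, 3),
          both assumed centred, via the unit quaternion maximising q^T N q."""
          M = A.T @ B                                  # 3x3 correlation matrix
          Sxx, Sxy, Sxz = M[0]
          Syx, Syy, Syz = M[1]
          Szx, Szy, Szz = M[2]
          N = np.array([
              [Sxx + Syy + Szz, Syz - Szy,       Szx - Sxz,       Sxy - Syx],
              [Syz - Szy,       Sxx - Syy - Szz, Sxy + Syx,       Szx + Sxz],
              [Szx - Sxz,       Sxy + Syx,       Syy - Sxx - Szz, Syz + Szy],
              [Sxy - Syx,       Szx + Sxz,       Syz + Szy,       Szz - Sxx - Syy]])
          w, V = np.linalg.eigh(N)
          qw, qx, qy, qz = V[:, -1]                    # eigenvector of largest eigenvalue
          return np.array([
              [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
              [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
              [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)]])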

  9. Inter-point procrustes: identifying regional and large differences in 3D anatomical shapes.

    PubMed

    Lekadir, Karim; Frangi, Alejandro F; Yang, Guang-Zhong

    2012-01-01

    This paper presents a new approach for the robust alignment and interpretation of 3D anatomical structures with large and localized shape differences. In such situations, existing techniques based on the well-known Procrustes analysis can be significantly affected due to the introduced non-Gaussian distribution of the residuals. In the proposed technique, influential points that induce large dissimilarities are identified and displaced with the aim of obtaining an intermediate template with an improved distribution of the residuals. The key element of the algorithm is the use of pose invariant shape variables to robustly guide both the influential point detection and displacement steps. The intermediate template is then used as the basis for the estimation of the final pose parameters between the source and destination shapes, making it possible to effectively highlight the regional differences of interest. The validation using synthetic and real datasets of different morphologies demonstrates robustness up to 50% regional differences and potential for shape classification. PMID:23286119
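
    For reference, the ordinary Procrustes fit that the robust scheme builds on is a closed-form similarity alignment; a minimal SVD-based sketch (not the authors' influential-point variant) is:

      import numpy as np

      def procrustes_align(X, Y):
          """Similarity transform (s, R, t) minimising ||s * X @ R.T + t - Y||_F,
          where X and Y are corresponding (N, 3) landmark sets."""
          mx, my = X.mean(axis=0), Y.mean(axis=0)
          Xc, Yc = X - mx, Y - my
          U, S, Vt = np.linalg.svd(Xc.T @ Yc)
          D = np.eye(3)
          D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
          R = Vt.T @ D @ U.T
          s = np.trace(np.diag(S) @ D) / (Xc ** 2).sum()
          t = my - s * R @ mx
          return s, R, t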

  10. Wavefront construction in 3-D

    SciTech Connect

    Chilcoat, S.R. Hildebrand, S.T.

    1995-12-31

    Travel time computation in inhomogeneous media is essential for pre-stack Kirchhoff imaging in areas such as the sub-salt province in the Gulf of Mexico. The 2D algorithm published by Vinje et al. has been extended to 3D to compute wavefronts in complicated inhomogeneous media. The 3D wavefront construction algorithm provides many advantages over conventional ray tracing and other methods of computing travel times in 3D. The algorithm dynamically maintains a reasonably consistent ray density without making a priori guesses at the number of rays to shoot. The determination of caustics in 3D is a straightforward geometric procedure. The wavefront algorithm also enables the computation of multi-valued travel time surfaces.

  11. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target based on a pulse time-of-flight measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images with a low sampling rate.

  12. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  13. Use of 3D DCE-MRI for the estimation of renal perfusion and glomerular filtration rate: an intrasubject comparison of FLASH and KWIC with a comprehensive framework for evaluation.

    PubMed

    Eikefjord, Eli; Andersen, Erling; Hodneland, Erlend; Zöllner, Frank; Lundervold, Arvid; Svarstad, Einar; Rørvik, Jarle

    2015-03-01

    OBJECTIVE. The purpose of this article is to compare two 3D dynamic contrast-enhanced (DCE) MRI measurement techniques for MR renography, a radial k-space weighted image contrast (KWIC) sequence and a Cartesian FLASH sequence, in terms of intrasubject differences in estimates of renal functional parameters and image quality characteristics. SUBJECTS AND METHODS. Ten healthy volunteers underwent repeated breath-hold KWIC and FLASH sequence examinations with temporal resolutions of 2.5 and 2.8 seconds, respectively. A two-compartment model was used to estimate MRI-derived perfusion parameters and glomerular filtration rate (GFR). The latter was compared with the iohexol GFR and the estimated GFR. Image quality was assessed using a visual grading characteristic analysis of relevant image quality criteria and signal-to-noise ratio calculations. RESULTS. Perfusion estimates from FLASH were closer to literature reference values than were those from KWIC. In relation to the iohexol GFR (mean [± SD], 103 ± 11 mL/min/1.73 m²), KWIC produced significant underestimations and larger bias in GFR values (mean, 70 ± 30 mL/min/1.73 m²; bias = -33.2 mL/min/1.73 m²) compared with the FLASH GFR (110 ± 29 mL/min/1.73 m²; bias = 6.4 mL/min/1.73 m²). KWIC was statistically significantly (p < 0.005) more impaired by artifacts than was FLASH (AUC = 0.18). The average signal-enhancement ratio (delta ratio) in the cortex was significantly lower for KWIC (delta ratio = 0.99) than for FLASH (delta ratio = 1.40). Other visually graded image quality characteristics and signal-to-noise ratio measurements were not statistically significantly different. CONCLUSION. Using the same postprocessing scheme and pharmacokinetic model, FLASH produced more accurate perfusion and filtration parameters than did KWIC compared with clinical reference methods. Our data suggest an apparent relationship between image quality characteristics and the degree of stability in the numeric model

  14. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing which allows a solid object to be obtained from a 3D model created with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it simple to realize very complex shapes, which would be quite difficult to produce with dedicated conventional facilities. Because 3D printing builds an object by superposing one layer on the others, it does not need any particular workflow; it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions, respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed afterwards. A further issue is the resolution of the printed material, which is expressed in terms of layers

  15. Origin of hepatitis C virus genotype 3 in Africa as estimated through an evolutionary analysis of the full-length genomes of nine subtypes, including the newly sequenced 3d and 3e.

    PubMed

    Li, Chunhua; Lu, Ling; Murphy, Donald G; Negro, Francesco; Okamoto, Hiroaki

    2014-08-01

    We characterized the full-length genomes of nine hepatitis C virus genotype 3 (HCV-3) isolates: QC7, QC8, QC9, QC10, QC34, QC88, NE145, NE274 and 811. To the best of our knowledge, NE274 and NE145 were the first full-length genomes for confirming the provisionally assigned subtypes 3d and 3e, respectively, whereas 811 represented the first HCV-3 isolate that had its extreme 3' UTR terminus sequenced. Based on these full-length genomes, together with 42 references representing eight assigned subtypes and an unclassified variant of HCV-3, and 10 sequences of six other genotypes, a timescaled phylogenetic tree was reconstructed after an evolutionary analysis using a coalescent Bayesian procedure. The results indicated that subtypes 3a, 3d and 3e formed a subset with a common ancestor dated to ~202.89 [95% highest posterior density (HPD): 160.11, 264.6] years ago. The analysis of all of the HCV-3 sequences as a single lineage resulted in the dating of the divergence time to ~457.81 (95% HPD: 350.62, 587.53) years ago, whereas the common ancestor of all of the seven HCV genotypes dated to ~780.86 (95% HPD: 592.15, 1021.34) years ago. As subtype 3h and the unclassified variant were relatives, and represented the oldest HCV-3 lineages with origins in Africa and the Middle East, these findings may indicate the ancestral origin of HCV-3 in Africa. We speculate that the ancestral HCV-3 strains may have been brought to South Asia from Africa by land and/or across the sea to result in its indigenous circulation in that region. The spread was estimated to have occurred in the era after Vasco da Gama had completed his expeditions by sailing along the eastern coast of Africa to India. However, before this era, Arabians had practised slave trading from Africa to the Middle East and South Asia for centuries, which may have mediated the earliest spread of HCV-3. PMID:24795446

  16. Image-based indoor localization system based on 3D SfM model

    NASA Astrophysics Data System (ADS)

    Lu, Guoyu; Kambhamettu, Chandra

    2013-12-01

    Indoor localization is an important research topic for both the robotics and signal processing communities. In recent years, image-based localization has also been employed in indoor environments because the necessary equipment is readily available. After an image is captured and sent to an image database, the best matching image is returned together with navigation information. By allowing further camera pose estimation, an image-based localization system that uses a Structure-from-Motion reconstruction model can achieve higher accuracy than methods that search through a 2D image database. However, this emerging technique has so far been applied only in outdoor environments. In this paper, we introduce the 3D SfM model based image localization system into the indoor localization task. We capture images of the indoor environment and reconstruct the 3D model. For the localization task, we simply match images captured by a mobile device against the reconstructed 3D model to localize the query image. In this process, we use visual words and approximate nearest neighbor methods to accelerate the search for the query features' correspondences. Within each visual word, we conduct a linear search to detect the correspondences. The experiments show that the image-based localization method based on the 3D SfM model gives good localization results in terms of both accuracy and speed.
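
    The query side of such a localization system can be sketched as follows: descriptors from the mobile image are matched against the descriptors of the SfM points with an approximate nearest-neighbour index, and the resulting 2D-3D correspondences feed a RANSAC PnP solve. The OpenCV calls below (FLANN matcher, solvePnPRansac) are stand-ins chosen for illustration, not necessarily what the authors used.

      import numpy as np
      import cv2

      def localize_query(query_desc, query_kpts, model_desc, model_xyz, K, ratio=0.8):
          """query_desc: (Nq, 128) SIFT descriptors, query_kpts: (Nq, 2) pixels,
          model_desc: (Nm, 128) descriptors of SfM points, model_xyz: (Nm, 3)."""
          flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=4), dict(checks=64))
          matches = flann.knnMatch(query_desc.astype(np.float32),
                                   model_desc.astype(np.float32), k=2)
          good = [m for m, n in matches if m.distance < ratio * n.distance]
          pts2d = np.float32([query_kpts[m.queryIdx] for m in good])
          pts3d = np.float32([model_xyz[m.trainIdx] for m in good])
          ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None,
                                                       reprojectionError=4.0)
          return (rvec, tvec, inliers) if ok else (None, None, None)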

  17. Interactive initialization for 2D/3D intra-operative registration using the Microsoft Kinect

    NASA Astrophysics Data System (ADS)

    Gong, Ren Hui; Güler, Özgur; Yaniv, Ziv

    2013-03-01

    All 2D/3D anatomy-based rigid registration algorithms are iterative, requiring an initial estimate of the 3D data pose. Current initialization methods have limited applicability in the operating room setting, due to the constraints imposed by this environment or due to insufficient accuracy. In this work we use the Microsoft Kinect device to allow the surgeon to interactively initialize the registration process. A Kinect sensor is used to simulate the mouse-based operations in a conventional manual initialization approach, obviating the need for physical contact with an input device. Different gestures from both arms are detected from the sensor in order to set or switch the required working contexts. 3D hand motion provides the six degree-of-freedom controls for manipulating the pre-operative data in the 3D space. We evaluated our method for both X-ray/CT and X-ray/MR initialization using three publicly available reference data sets. Results show that, with initial target registration errors of 117.7 +/- 28.9 mm, a user is able to achieve final errors of 5.9 +/- 2.6 mm within 158 +/- 65 sec using the Kinect-based approach, compared to 4.8 +/- 2.0 mm and 88 +/- 60 sec when using the mouse for interaction. Based on these results we conclude that this method is sufficiently accurate for initialization of X-ray/CT and X-ray/MR registration in the OR.

  18. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  19. Quaternion epipolar decomposition for camera pose identification and animation

    NASA Astrophysics Data System (ADS)

    Skarbek, W.; Tomaszewski, M.

    2013-03-01

    In the computer vision, computer graphics and robotics literature, the use of quaternions is exclusively related to 3D rotation representation and interpolation. In this research we show how epipoles in multi-camera systems can be used to represent camera poses in the quaternion domain. The rotational quaternion is decomposed into two epipole rotational quaternions and one z-axis rotational quaternion. The quadratic form of the essential matrix is also related to the quaternion factors. Thus, five pose parameters are distributed over three independent rotational quaternions, resulting in measurement error separation during camera pose identification and greater flexibility in virtual camera animation. The experimental results relate to the design of free viewpoint television.
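
    Composing a camera rotation from several quaternion factors reduces to Hamilton products; a small sketch is shown below. The particular factor axes and angles are illustrative placeholders, not the paper's epipolar parameterisation.

      import numpy as np

      def qmul(a, b):
          """Hamilton product of quaternions a and b given as (w, x, y, z)."""
          aw, ax, ay, az = a
          bw, bx, by, bz = b
          return np.array([
              aw*bw - ax*bx - ay*by - az*bz,
              aw*bx + ax*bw + ay*bz - az*by,
              aw*by - ax*bz + ay*bw + az*bx,
              aw*bz + ax*by - ay*bx + az*bw])

      def axis_angle_quat(axis, angle):
          """Unit quaternion for a rotation of `angle` radians about `axis`."""
          axis = np.asarray(axis, dtype=float)
          axis /= np.linalg.norm(axis)
          return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

      # Example composition: two hypothetical "epipole" rotations, then a z-axis roll.
      q_epi1 = axis_angle_quat([1.0, 0.0, 0.0], 0.10)
      q_epi2 = axis_angle_quat([0.0, 1.0, 0.0], -0.05)
      q_roll = axis_angle_quat([0.0, 0.0, 1.0], 0.30)
      q_cam = qmul(q_roll, qmul(q_epi2, q_epi1))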

  20. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  1. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  2. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information.

    PubMed

    Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun

    2016-01-01

    In studies of the SLAM problem using an RGB-D camera, depth information and visual information, the two types of primary measurement data, are rarely tightly coupled during refinement of the camera pose estimate. In this paper, a new method of RGB-D camera SLAM is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method has a notably better performance than the traditional method, and the experimental results demonstrate the effectiveness of the proposed method in improving localization accuracy. PMID:27529256
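
    The tight coupling of 2D and 3D measurements can be sketched as a joint residual for a single observation: the usual reprojection error is stacked with a weighted depth error. The weighting and parameterisation below are illustrative assumptions, not the paper's exact projection model.

      import numpy as np

      def rgbd_residual(R, t, X, uv_obs, depth_obs, K, w_depth=1.0):
          """Stacked residual for one 3D point X observed in one RGB-D frame:
          2D reprojection error plus a weighted depth error."""
          Xc = R @ X + t                         # point in camera coordinates
          u = K[0, 0] * Xc[0] / Xc[2] + K[0, 2]  # pinhole projection
          v = K[1, 1] * Xc[1] / Xc[2] + K[1, 2]
          r_img = np.array([u - uv_obs[0], v - uv_obs[1]])
          r_depth = w_depth * (Xc[2] - depth_obs)
          return np.concatenate([r_img, [r_depth]])

    Stacking such residuals over all points and frames and passing them to a sparse nonlinear least-squares solver (for example scipy.optimize.least_squares) would give the joint refinement; this is a generic sketch rather than the paper's implementation.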

  3. Using a wireless motion controller for 3D medical image catheter interactions

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    State-of-the-art morphological imaging techniques usually provide high resolution 3D images with a huge number of slices. In clinical practice, however, 2D slice-based examinations are still the method of choice even for these large amounts of data. Providing intuitive interaction methods for specific 3D medical visualization applications is therefore a critical feature for clinical imaging applications. For the domain of catheter navigation and surgery planning, it is crucial to assist the physician with appropriate visualization techniques, such as 3D segmentation maps, fly-through cameras or virtual interaction approaches. There has been ongoing development and improvement of controllers for interacting with 3D environments in the domain of computer games. These controllers are based on both motion and infrared sensors and are typically used to detect 3D position and orientation. We have investigated how a state-of-the-art wireless motion sensor controller (Wiimote), developed by Nintendo, can be used for catheter navigation and planning purposes. By default, the Wiimote controller only measures rough acceleration over a range of +/- 3g with 10% sensitivity, together with orientation. Therefore, a pose estimation algorithm was developed for computing accurate position and orientation in 3D space with respect to four infrared LEDs. Current results show that it is possible to obtain a mean error of (0.38 cm, 0.41 cm, 4.94 cm) for the translation and (0.16, 0.28) for the rotation, respectively. Within this paper we introduce a clinical prototype that allows steering of a virtual fly-through camera attached to the catheter tip with the Wii controller, on the basis of a segmented vessel tree.

  4. Tomographic compressive holographic reconstruction of 3D objects

    NASA Astrophysics Data System (ADS)

    Nehmetallah, G.; Williams, L.; Banerjee, P. P.

    2012-10-01

    Compressive holography with multiple projection tomography is applied to solve the ill-posed inverse problem of reconstructing 3D objects with high axial accuracy. To visualize the 3D shape, we propose Digital Tomographic Compressive Holography (DiTCH), where projections from more than one direction, as in tomographic imaging systems, can be employed, so that a 3D shape with better axial resolution can be reconstructed. We compare DiTCH with single-beam holographic tomography (SHOT), which is based on Fresnel back-propagation. A brief theory of DiTCH is presented, and experimental results of 3D shape reconstruction of objects using DiTCH and SHOT are compared.

  5. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia. PMID:26854878

  6. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  7. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2003-05-12

    This project is in its first full year after the combining of two previously funded projects: ''3D Code Development'' and ''Dynamic Material Properties''. The motivation behind this move was to emphasize and strengthen the ties between the experimental work and the computational model development in the materials area. The next year's activities will indicate the merging of the two efforts. The current activity is structured in two tasks. Task A, ''Simulations and Measurements'', combines all the material model development and associated numerical work with the materials-oriented experimental activities. Task B, ''ALE3D Development'', is a continuation of the non-materials related activities from the previous project.

  8. Improvements in Intrinsic Feature Pose Measurement for Awake Animal Imaging

    SciTech Connect

    Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon; Weisenberger, A G; McKisson, J; Smith, M F; Stolin, Alexander

    2010-01-01

    Development has continued with intrinsic feature optical motion tracking for awake animal imaging to measure 3D position and orientation (pose) for motion compensated reconstruction. Prior imaging results have been directed towards head motion measurement for SPECT brain studies in awake unrestrained mice. This work improves on those results in extracting and tracking intrinsic features from multiple camera images and computing pose changes from the tracked features over time. Previously, most motion tracking for 3D imaging has been limited to measuring extrinsic features such as retro-reflective markers applied to an animal's head. While this approach has been proven to be accurate, the use of external markers is undesirable for several reasons. The intrinsic feature approach has been further developed from previous work to provide full pose measurements for a live mouse scan. Surface feature extraction, matching, and pose change calculation with point tracking and accuracy results are described. Experimental pose calculation and 3D reconstruction results from live images are presented.

  9. Improvements in intrinsic feature pose measurement for awake animal imaging

    SciTech Connect

    J.S. Goddard, J.S. Baba, S.J. Lee, A.G. Weisenberger, A. Stolin, J. McKisson, M.F. Smith

    2011-06-01

    Development has continued with intrinsic feature optical motion tracking for awake animal imaging to measure 3D position and orientation (pose) for motion compensated reconstruction. Prior imaging results have been directed towards head motion measurement for SPECT brain studies in awake unrestrained mice. This work improves on those results in extracting and tracking intrinsic features from multiple camera images and computing pose changes from the tracked features over time. Previously, most motion tracking for 3D imaging has been limited to measuring extrinsic features such as retro-reflective markers applied to an animal's head. While this approach has been proven to be accurate, the use of external markers is undesirable for several reasons. The intrinsic feature approach has been further developed from previous work to provide full pose measurements for a live mouse scan. Surface feature extraction, matching, and pose change calculation with point tracking and accuracy results are described. Experimental pose calculation and 3D reconstruction results from live images are presented.
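
    The pose change between two sets of matched 3D surface features can be estimated with a standard Kabsch/SVD rigid fit; the sketch below is a generic least-squares alignment of that kind and is not the specific implementation described in these records.

      import numpy as np

      def rigid_pose_change(P, Q):
          """Least-squares rotation R and translation t with Q ~ R @ P + t,
          for matched 3D feature points P, Q of shape (N, 3) (Kabsch/SVD)."""
          cP, cQ = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          t = cQ - R @ cP
          return R, t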

  10. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  11. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  12. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
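
    A minimal sketch of the PCA-plus-FLDA feature projection mentioned above, written with scikit-learn; the input matrices, labels, and number of retained components are placeholders, and the ICP alignment and surface deformation steps of the actual system are not reproduced here.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def pca_flda_features(X_train, y_train, X_query, n_pca=100):
          """Project flattened, normalized-face vectors (e.g. XYZ or depth values)
          through PCA and then Fisher LDA; inputs and n_pca are illustrative."""
          pca = PCA(n_components=n_pca).fit(X_train)
          lda = LinearDiscriminantAnalysis().fit(pca.transform(X_train), y_train)
          return lda.transform(pca.transform(X_query))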

  13. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  14. Optimization of multi-image pose recovery of fluoroscope tracking (FTRAC) fiducial in an image-guided femoroplasty system

    NASA Astrophysics Data System (ADS)

    Liu, Wen P.; Armand, Mehran; Otake, Yoshito; Taylor, Russell H.

    2011-03-01

    Percutaneous femoroplasty [1], or femoral bone augmentation, is a prospective alternative treatment for reducing the risk of fracture in patients with severe osteoporosis. We are developing a surgical robotics system that will assist orthopaedic surgeons in planning and performing a patient-specific augmentation of the femur with bone cement. This collaborative project, sponsored by the National Institutes of Health (NIH), has been the topic of previous publications [2],[3] from our group. This paper presents modifications to the pose recovery of a fluoroscope tracking (FTRAC) fiducial during our process of 2D/3D registration of intraoperative X-ray images to preoperative CT data. We show improved automation of the initial pose estimation as well as lower projection errors with the addition of a multi-image pose optimization step.

  15. Efficient feature-based 2D/3D registration of transesophageal echocardiography to x-ray fluoroscopy for cardiac interventions

    NASA Astrophysics Data System (ADS)

    Hatt, Charles R.; Speidel, Michael A.; Raval, Amish N.

    2014-03-01

    We present a novel 2D/3D registration algorithm for fusion between transesophageal echocardiography (TEE) and X-ray fluoroscopy (XRF). The TEE probe is modeled as a subset of 3D gradient and intensity point features, which facilitates efficient 3D-to-2D perspective projection. A novel cost function, based on a combination of intensity and edge features, evaluates the registration cost value without the need for time-consuming generation of digitally reconstructed radiographs (DRRs). Validation experiments were performed with simulations and phantom data. For simulations, in silico XRF images of a TEE probe were generated in a number of different pose configurations using a previously acquired CT image. Random misregistrations were applied and our method was used to recover the TEE probe pose and compare the result to the ground truth. Phantom experiments were performed by attaching fiducial markers externally to a TEE probe, imaging the probe with an interventional cardiac angiographic x-ray system, and comparing the pose estimated from the external markers to that estimated from the TEE probe using our algorithm. Simulations found a 3D target registration error of 1.08 (1.92) mm for biplane (monoplane) geometries, while the phantom experiment found a 2D target registration error of 0.69 mm. For phantom experiments, we demonstrated a monoplane tracking frame rate of 1.38 fps. The proposed feature-based registration method is computationally efficient, resulting in near real-time, accurate image-based registration between TEE and XRF.
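
    A simplified sketch of a DRR-free, feature-based cost of the kind described: project 3D probe point features with an assumed pinhole model and sum the intensity and edge responses sampled at the projected pixels. The function names, weighting, and nearest-pixel sampling are illustrative assumptions rather than the authors' exact cost function.

      import numpy as np

      def project_points(K, R, t, pts3d):
          """Perspective projection of 3D probe features into the X-ray image."""
          cam = (R @ pts3d.T + t.reshape(3, 1)).T     # camera coordinates
          uvw = (K @ cam.T).T
          return uvw[:, :2] / uvw[:, 2:3]             # pixel coordinates

      def registration_cost(K, R, t, pts3d, intensity_img, edge_img, w_edge=0.5):
          """Sample intensity and edge images at projected feature locations and
          combine them; lower cost means a better pose (no DRR is rendered)."""
          uv = np.round(project_points(K, R, t, pts3d)).astype(int)
          h, w = intensity_img.shape
          inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
          uv = uv[inside]
          score = (intensity_img[uv[:, 1], uv[:, 0]].sum()
                   + w_edge * edge_img[uv[:, 1], uv[:, 0]].sum())
          return -score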

  16. Posing Problems: Two Classroom Examples.

    ERIC Educational Resources Information Center

    Leung, Shukkwan S.; Wu, Rui-xiang

    1999-01-01

    Shares two lessons in which students help teachers pose problems and discover the importance of posing problems properly. Presents a fifth-grade lesson in which students found a mistake in a proportion problem, and an eighth-grade lesson that discusses a geometry problem with insufficient information. (ASK)

  17. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  18. Context-based automatic reconstruction and texturing of 3D urban terrain for quick-response tasks

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Häufel, Gisela; Meidow, Jochen; Pohl, Melanie; Solbrig, Peter; Wernerus, Peter

    2014-07-01

    Highly detailed 3D urban terrain models are the base for quick response tasks with indispensable human participation, e.g., disaster management. Thus, it is important to automate and accelerate the process of urban terrain modeling from sensor data such that the resulting 3D model is semantic, compact, recognizable, and easily usable for training and simulation purposes. To provide essential geometric attributes, buildings and trees must be identified among elevated objects in digital surface models. After building ground-plan estimation and roof details analysis, images from oblique airborne imagery are used to cover building faces with up-to-date texture thus achieving a better recognizability of the model. The three steps of the texturing procedure are sensor pose estimation, assessment of polygons projected into the images, and texture synthesis. Free geographic data, providing additional information about streets, forest areas, and other topographic object types, suppress false alarms and enrich the reconstruction results.

  19. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, which is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  20. Robust elastic 2D/3D geometric graph matching

    NASA Astrophysics Data System (ADS)

    Serradell, Eduard; Kybic, Jan; Moreno-Noguer, Francesc; Fua, Pascal

    2012-02-01

    We present an algorithm for geometric matching of graphs embedded in 2D or 3D space. It is applicable for registering any graph-like structures appearing in biomedical images, such as blood vessels, pulmonary bronchi, nerve fibers, or dendritic arbors. Our approach does not rely on the similarity of local appearance features, so it is suitable for multimodal registration with a large difference in appearance. Unlike earlier methods, the algorithm uses edge shape, does not require an initial pose estimate, can handle partial matches, and can cope with nonlinear deformations and topological differences. The matching consists of two steps. First, we find an affine transform that roughly aligns the graphs by exploring the set of all consistent correspondences between the nodes. This can be done at an acceptably low computational expense by using parameter uncertainties for pruning, backtracking as needed. Parameter uncertainties are updated in a Kalman-like scheme with each match. In the second step we allow for a nonlinear part of the deformation, modeled as a Gaussian Process. Short sequences of edges are grouped into superedges, which are then matched between graphs. This allows for topological differences. A maximum consistent set of superedge matches is found using a dedicated branch-and-bound solver, which is over 100 times faster than a standard linear programming approach. Geometrical and topological consistency of candidate matches is determined in a fast hierarchical manner. We demonstrate the effectiveness of our technique at registering angiography and retinal fundus images, as well as neural image stacks.

  1. Optoplasmonics: hybridization in 3D

    NASA Astrophysics Data System (ADS)

    Rosa, L.; Gervinskas, G.; Žukauskas, A.; Malinauskas, M.; Brasselet, E.; Juodkazis, S.

    2013-12-01

    Femtosecond laser fabrication has been used to make hybrid refractive and diffractive micro-optical elements in the photo-polymer SZ2080. For applications in microfluidics, axicon lenses were fabricated (both single and arrays) for the generation of light intensity patterns extending through the entire depth of a typically tens-of-micrometers-deep channel. A further hybridisation of an axicon with a plasmonic slot is fabricated and demonstrated numerically. Spiralling chiral grooves were inscribed into a 100-nm-thick gold coating sputtered over polymerized micro-axicon lenses, using a focused ion beam. This demonstrates the possibility of hybridisation between optical and plasmonic 3D micro-optical elements. Numerical modelling of the optical performance by the 3D-FDTD method is presented.

  2. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Koide, S.; Sakai, J.-I.; Christodoulou, D. M.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W = 4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy and so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure.

  3. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  4. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  5. 360-degree 3D profilometry

    NASA Astrophysics Data System (ADS)

    Song, Yuanhe; Zhao, Hong; Chen, Wenyi; Tan, Yushan

    1997-12-01

    A new method of 360-degree turning 3D shape measurement, in which light sectioning and phase shifting techniques are both used, is presented in this paper. A sinusoidal light field is applied to the projected light stripe, and the phase shifting technique is used to calculate the phases of the light slit. The wrapped phase distribution of the slit is thereby formed, and unwrapping is performed by means of the height information obtained from the light sectioning method. Phase measuring results with better precision can therefore be obtained. Finally, the target 3D shape data are produced according to the geometric relationships between the phases and the object heights. The principles of this method are discussed in detail and experimental results are shown in this paper.
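
    For the phase-shifting part, a standard four-step formula recovers the wrapped phase from four fringe images shifted by 0, π/2, π and 3π/2; the sketch below shows only that formula, with unwrapping (here guided by the light-sectioning height) left out.

      import numpy as np

      def four_step_phase(I0, I1, I2, I3):
          """Wrapped phase from four fringe images with phase shifts of
          0, pi/2, pi and 3*pi/2 (standard four-step phase-shifting formula)."""
          return np.arctan2(I3 - I1, I0 - I2)   # wrapped to (-pi, pi]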

  6. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material which could potentially initiate another new material age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate an additional stream to push the graphene revolution into a new phase. Here we demonstrate, for the first time, that a graphene composite with a graphene loading of up to 5.6 wt% can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress minute during the printing process. PMID:26153673

  7. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than other robotic hands without the actuators since they have more complex assembly processes.

  8. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  9. 3D-graphite structure

    SciTech Connect

    Belenkov, E. A.; Ali-Pasha, V. A.

    2011-01-15

    The structure of clusters of some new carbon 3D-graphite phases has been calculated using molecular-mechanics methods. It is established that the 3D-graphite polytypes α1,1, α1,3, α1,5, α2,1, α2,3, α3,1, β1,2, β1,4, β1,6, β2,1, and β3,2 consist of sp²-hybridized atoms, have hexagonal unit cells, and differ in the structure of the layers and the order of their alternation. A possible way to experimentally synthesize the new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  10. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to increase its user-friendliness. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  11. [Real time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to increase its user-friendliness. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  12. 3-D model-based tracking for UAV indoor localization.

    PubMed

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of standard model-based approaches lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypothesis tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem, where a GPS signal is not available, we validate the algorithm on real image sequences from UAV flights. PMID:25099967
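
    A generic sampling-importance-resampling step of the kind a particle filter runs per frame is sketched below; the likelihood function, diffusion noise, and the way candidate poses from an edge-based multiple-hypothesis stage are injected are placeholder assumptions, not the authors' formulation.

      import numpy as np

      def particle_filter_step(particles, weights, likelihood, motion_noise=0.01,
                               candidate_poses=None, rng=np.random.default_rng()):
          """One SIR step for pose tracking: diffuse, (optionally) inject
          candidate poses, reweight against the image, and resample."""
          particles = particles + rng.normal(0.0, motion_noise, particles.shape)
          if candidate_poses is not None:             # external pose hypotheses
              particles[: len(candidate_poses)] = candidate_poses
          weights = np.array([likelihood(p) for p in particles]) + 1e-12
          weights /= weights.sum()
          idx = rng.choice(len(particles), size=len(particles), p=weights)
          return particles[idx], np.full(len(particles), 1.0 / len(particles))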

  13. 3D surface analysis and classification in neuroimaging segmentation.

    PubMed

    Zagar, Martin; Mlinarić, Hrvoje; Knezović, Josip

    2011-06-01

    This work emphasizes new algorithms for 3D edge and corner detection used in surface extraction, and a new concept of image segmentation in neuroimaging based on multidimensional shape analysis and classification. We propose using the NIfTI standard for describing input data, which enables interoperability with and enhancement of existing computing tools widely used in neuroimaging research. In the methods section we present our newly developed algorithm for 3D edge and corner detection, together with an algorithm for estimating local 3D shape. The surface of the estimated shape is analyzed and segmented according to kernel shapes. PMID:21755723
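
    A minimal sketch of reading a NIfTI volume and computing a crude 3D edge map from the gradient magnitude is given below, assuming the nibabel package and a hypothetical file name; the detector described in the record is considerably more elaborate.

      import numpy as np
      import nibabel as nib

      img = nib.load("scan.nii.gz")                 # hypothetical NIfTI file
      vol = img.get_fdata()
      gx, gy, gz = np.gradient(vol)                 # finite-difference gradients
      edge_strength = np.sqrt(gx**2 + gy**2 + gz**2)
      edges = edge_strength > np.percentile(edge_strength, 95)  # crude 3D edge mask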

  14. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
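
    The parameter-sweep idea can be illustrated in a few lines; Gaussian smoothing stands in here for the bilateral/NLM filters mentioned in the record, and the parameter grid and inputs are placeholders.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def best_parameter(noisy, reference, sigmas=(0.5, 1.0, 1.5, 2.0)):
          """Sweep a denoising parameter and keep the value with the lowest MSE
          against a noiseless reference volume (illustrative stand-in filter)."""
          scores = {}
          for s in sigmas:
              denoised = gaussian_filter(noisy, sigma=s)
              scores[s] = float(np.mean((denoised - reference) ** 2))
          return min(scores, key=scores.get), scores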

  15. Integrated Biogeomorphological Modeling Using Delft3D

    NASA Astrophysics Data System (ADS)

    Ye, Q.; Jagers, B.

    2011-12-01

    The skill of numerical morphological models has improved significantly from the early 2D uniform, total load sediment models (with steady state or infrequent wave updates) to recent 3D hydrodynamic models with multiple suspended and bed load sediment fractions and bed stratigraphy (online coupled with waves). Although there remain many open questions within this combined field of hydro- and morphodynamics, we observe an increasing need to include biological processes in the overall dynamics. In riverine and inter-tidal environments, there is often an important influence by riparian vegetation and macrobenthos. Over the past decade more and more researchers have started to extend the simulation environment with wrapper scripts and other quick code hacks to estimate their influence on morphological development in coastal, estuarine and riverine environments. Although one can in this way quickly analyze different approaches, these research tools have generally not been designed with reuse, performance and portability in mind. We have now implemented a reusable, flexible, and efficient two-way link between the Delft3D open source framework for hydrodynamics, waves and morphology, and the water quality and ecology modules. The same link will be used for 1D, 2D and 3D modeling on networks and both structured and unstructured grids. We will describe the concepts of the overall system, and illustrate it with some first results.

  16. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated

  17. Timescales of quartz crystallization estimated from glass inclusion faceting using 3D propagation phase-contrast x-ray tomography: examples from the Bishop (California, USA) and Oruanui (Taupo Volcanic Zone, New Zealand) Tuffs

    NASA Astrophysics Data System (ADS)

    Pamukcu, A.; Gualda, G. A.; Anderson, A. T.

    2012-12-01

    Compositions of glass inclusions have long been studied for the information they provide on the evolution of magma bodies. Textures - sizes, shapes, positions - of glass inclusions have received less attention, but they can also provide important insight into magmatic processes, including the timescales over which magma bodies develop and erupt. At magmatic temperatures, initially round glass inclusions will become faceted (attain a negative crystal shape) through the process of dissolution and re-precipitation, such that the extent to which glass inclusions are faceted can be used to estimate timescales. The size and position of the inclusion within a crystal will influence how much faceting occurs: a larger inclusion will facet more slowly; an inclusion closer to the rim will have less time to facet. As a result, it is critical to properly document the size, shape, and position of glass inclusions to assess faceting timescales. Quartz is an ideal mineral to study glass inclusion faceting, as Si is the only diffusing species of concern, and Si diffusion rates are relatively well-constrained. Faceting time calculations to date (Gualda et al., 2012) relied on optical microscopy to document glass inclusions. Here we use 3D propagation phase-contrast x-ray tomography to image glass inclusions in quartz. This technique enhances inclusion edges such that images can be processed more successfully than with conventional tomography. We have developed a set of image processing tools to isolate inclusions and more accurately obtain information on the size, shape, and position of glass inclusions than with optical microscopy. We are studying glass inclusions from two giant tuffs. The Bishop Tuff is ~1000 km3 of high-silica rhyolite ash fall, ignimbrite, and intracaldera deposits erupted ~760 ka in eastern California (USA). Glass inclusions in early-erupted Bishop Tuff range from non-faceted to faceted, and faceting times determined using both optical microscopy and x

  18. Videometrics technology of flyers' pose

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoli; Su, Xiuqin; Zhang, Sanxi; Liu, Biao; Zhou, Zhiqiang

    2015-10-01

    In this paper, pose measurement refers to measuring the flying pose of a rigid body, including the pitch, yaw and roll angles. Pose measurement is of vital importance for tasks such as weapon settings, fault analysis and design optimization. Pose measurement based on optical images has many merits, such as being intuitive and non-contact, and is currently a main method of pose measurement. According to the parameters used and the principles of the algorithms, the existing methods of image-based pose measurement are classified systematically and comprehensively for the first time as follows: single-station methods that do not use the camera's interior parameters are divided into the feature-length-ratio method and the direct linear transformation (DLT) method, while those that do use them are divided into the perspective-n-point (PnP) problem and the combined optical/radar method; the intersection of axes from planes is used with two stations and is extensible to multiple stations; and model matching can be applied to one or more stations. These classes of methods are then comparatively analyzed. Finally, considering practical factors such as the number of stations, the availability of a model, and whether interior parameters are used, guidance on method selection and improvements of key points is given.

  19. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  20. Inferential modeling of 3D chromatin structure

    PubMed Central

    Wang, Siyu; Xu, Jinbo; Zeng, Jianyang

    2015-01-01

    For eukaryotic cells, the biological processes involving regulatory DNA elements play an important role in cell cycle. Understanding 3D spatial arrangements of chromosomes and revealing long-range chromatin interactions are critical to decipher these biological processes. In recent years, chromosome conformation capture (3C) related techniques have been developed to measure the interaction frequencies between long-range genome loci, which have provided a great opportunity to decode the 3D organization of the genome. In this paper, we develop a new Bayesian framework to derive the 3D architecture of a chromosome from 3C-based data. By modeling each chromosome as a polymer chain, we define the conformational energy based on our current knowledge on polymer physics and use it as prior information in the Bayesian framework. We also propose an expectation-maximization (EM) based algorithm to estimate the unknown parameters of the Bayesian model and infer an ensemble of chromatin structures based on interaction frequency data. We have validated our Bayesian inference approach through cross-validation and verified the computed chromatin conformations using the geometric constraints derived from fluorescence in situ hybridization (FISH) experiments. We have further confirmed the inferred chromatin structures using the known genetic interactions derived from other studies in the literature. Our test results have indicated that our Bayesian framework can compute an accurate ensemble of 3D chromatin conformations that best interpret the distance constraints derived from 3C-based data and also agree with other sources of geometric constraints derived from experimental evidence in the previous studies. The source code of our approach can be found in https://github.com/wangsy11/InfMod3DGen. PMID:25690896

  1. Learning the spherical harmonic features for 3-D face recognition.

    PubMed

    Liu, Peijiang; Wang, Yunhong; Huang, Di; Zhang, Zhaoxiang; Chen, Liming

    2013-03-01

    In this paper, a competitive method for 3-D face recognition (FR) using spherical harmonic features (SHF) is proposed. With this solution, 3-D face models are characterized by the energies contained in spherical harmonics with different frequencies, thereby enabling the capture of both gross shape and fine surface details of a 3-D facial surface. This is in clear contrast to most 3-D FR techniques which are either holistic or feature based, using local features extracted from distinctive points. First, 3-D face models are represented in a canonical representation, namely, spherical depth map, by which SHF can be calculated. Then, considering the predictive contribution of each SHF feature, especially in the presence of facial expression and occlusion, feature selection methods are used to improve the predictive performance and provide faster and more cost-effective predictors. Experiments have been carried out on three public 3-D face datasets, SHREC2007, FRGC v2.0, and Bosphorus, with increasing difficulties in terms of facial expression, pose, and occlusion, and which demonstrate the effectiveness of the proposed method. PMID:23060332
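
    A rough sketch of turning a spherical depth map into per-degree spherical-harmonic energies with scipy is shown below; the grid convention, band limit, and simple quadrature are simplifying assumptions rather than the authors' exact feature pipeline.

      import numpy as np
      from scipy.special import sph_harm

      def sh_energy_features(depth_map, l_max=16):
          """Energy per spherical-harmonic degree of a spherical depth map sampled
          on a regular (polar, azimuth) grid of shape (n_polar, n_azimuth)."""
          n_pol, n_az = depth_map.shape
          polar = (np.arange(n_pol) + 0.5) * np.pi / n_pol      # polar angle in (0, pi)
          azim = np.arange(n_az) * 2.0 * np.pi / n_az           # azimuth in [0, 2*pi)
          pol_grid, az_grid = np.meshgrid(polar, azim, indexing="ij")
          d_omega = np.sin(pol_grid) * (np.pi / n_pol) * (2.0 * np.pi / n_az)
          energies = []
          for l in range(l_max + 1):
              e_l = 0.0
              for m in range(-l, l + 1):
                  ylm = sph_harm(m, l, az_grid, pol_grid)       # scipy: (m, l, azimuth, polar)
                  c_lm = np.sum(depth_map * np.conj(ylm) * d_omega)
                  e_l += np.abs(c_lm) ** 2
              energies.append(e_l)
          return np.asarray(energies)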

  2. 3D Modeling from Photos Given Topological Information.

    PubMed

    Kim, Young Min; Cho, Junghyun; Ahn, Sang Chul

    2016-09-01

    Reconstructing 3D models given single-view 2D information is inherently an ill-posed problem and requires additional information such as a shape prior or user input. We introduce a method to generate multiple 3D models of a particular category given corresponding photographs when the topological information is known. While there is a wide range of shapes for an object of a particular category, the basic topology usually remains constant. In consequence, the topological prior needs to be provided only once for each category and can be easily acquired by consulting an existing database of 3D models or by user input. The input of topological description is only connectivity information between parts; this is in contrast to previous approaches that have required users to interactively mark individual parts. Given the silhouette of an object and the topology, our system automatically finds a skeleton and generates a textured 3D model by jointly fitting multiple parts. The proposed method, therefore, opens the possibility of generating a large number of 3D models by consulting a massive number of photographs. We demonstrate examples of the topological prior and reconstructed 3D models using photos. PMID:26661474

  3. Sparse Feature Extraction for Pose-Tolerant Face Recognition.

    PubMed

    Abiantun, Ramzi; Prabhu, Utsav; Savvides, Marios

    2014-10-01

    Automatic face recognition performance has been steadily improving over years of research; however, it remains significantly affected by a number of factors such as illumination, pose, expression, and resolution that can impact matching scores. The focus of this paper is the pose problem, which remains largely overlooked in most real-world applications. Specifically, we focus on one-to-one matching scenarios where a query face image of a random pose is matched against a set of gallery images. We propose a method that relies on two fundamental components: (a) A 3D modeling step to geometrically correct the viewpoint of the face. For this purpose, we extend a recent technique for efficient synthesis of 3D face models called 3D Generic Elastic Model. (b) A sparse feature extraction step using subspace modeling and ℓ1-minimization to induce pose-tolerance in coefficient space. This in turn enables the synthesis of an equivalent frontal-looking face, which can be used towards recognition. We show significant performance improvements in verification rates compared to commercial matchers, and also demonstrate the resilience of the proposed method with respect to degrading input quality. We find that the proposed technique is able to match non-frontal images to other non-frontal images of varying angles. PMID:26352635
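
    The ℓ1-regularized coefficient recovery can be sketched with scikit-learn's Lasso, where the dictionary of gallery features, the query vector, and the regularization weight are hypothetical; the subspace construction and frontal-face synthesis of the actual method are not shown.

      import numpy as np
      from sklearn.linear_model import Lasso

      def sparse_coefficients(D, y, alpha=0.01):
          """Solve min_x ||y - D x||^2 + alpha * ||x||_1, where the columns of D
          are gallery feature vectors and y is the query feature vector."""
          model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
          model.fit(D, y)
          return model.coef_                    # sparse coefficients over the gallery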

  4. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. This set currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  5. A review of recent advances in 3D face recognition

    NASA Astrophysics Data System (ADS)

    Luo, Jing; Geng, Shuze; Xiao, Zhaoxia; Xiu, Chunbo

    2015-03-01

    Face recognition based on machine vision has achieved great advances and been widely used in various fields. However, there are still challenges in face recognition, such as facial pose, variations in illumination, and facial expression. This paper therefore reviews recent advances in 3D face recognition. 3D face recognition approaches are categorized into four groups: minutiae approaches, space transform approaches, geometric feature approaches, and model approaches. Several typical approaches are compared in detail, including their feature extraction, recognition algorithms, and performance. Finally, the paper summarizes the remaining challenges in 3D face recognition and future trends. It aims to help researchers working on face recognition.

  6. Investigation of MM-PBSA rescoring of docking poses.

    PubMed

    Thompson, David C; Humblet, Christine; Joseph-McCarthy, Diane

    2008-05-01

    Target-based virtual screening is increasingly used to generate leads for targets for which high quality three-dimensional (3D) structures are available. To allow large molecular databases to be screened rapidly, a tiered scoring scheme is often employed whereby a simple scoring function is used as a fast filter of the entire database and a more rigorous and time-consuming scoring function is used to rescore the top hits to produce the final list of ranked compounds. Molecular mechanics Poisson-Boltzmann surface area (MM-PBSA) approaches are currently thought to be quite effective at incorporating implicit solvation into the estimation of ligand binding free energies. In this paper, the ability of a high-throughput MM-PBSA rescoring function to discriminate between correct and incorrect docking poses is investigated in detail. Various initial scoring functions are used to generate docked poses for a subset of the CCDC/Astex test set and to dock one set of actives/inactives from the DUD data set. The effectiveness of each of these initial scoring functions is discussed. Overall, the ability of the MM-PBSA rescoring function to (i) regenerate the set of X-ray complexes when docking the bound conformation of the ligand, (ii) regenerate the X-ray complexes when docking conformationally expanded databases for each ligand which include "conformation decoys" of the ligand, and (iii) enrich known actives in a virtual screen for the mineralocorticoid receptor in the presence of "ligand decoys" is assessed. While a pharmacophore-based molecular docking approach, PhDock, is used to carry out the docking, the results are expected to be general to use with any docking method. PMID:18465849

  7. A Clean Adirondack (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.

  8. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  9. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  10. Vacant Lander in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D image captured by the Mars Exploration Rover Opportunity's rear hazard-identification camera shows the now-empty lander that carried the rover 283 million miles to Meridiani Planum, Mars. Engineers received confirmation that Opportunity's six wheels successfully rolled off the lander and onto martian soil at 3:01 a.m. PST, January 31, 2004, on the seventh martian day, or sol, of the mission. The rover is approximately 1 meter (3 feet) in front of the lander, facing north.

  11. 3-D reconstruction of the spine from biplanar radiographs based on contour matching using the Hough transform.

    PubMed

    Zhang, Junhua; Lv, Liang; Shi, Xinling; Wang, Yuanyuan; Guo, Fei; Zhang, Yufeng; Li, Hongjian

    2013-07-01

    The purpose of this study was to develop and evaluate a method for three-dimensional (3-D) reconstruction of the spine from biplanar radiographs. The approach was based on vertebral contour matching for estimating vertebral orientations and locations. Vertebral primitives were initially positioned under the constraint of the 3-D spine midline, which was estimated from manually identified control points. Vertebral orientations and locations were automatically adjusted by matching projections of 3-D primitives with vertebral edges on biplanar radiographs based on the generalized Hough transform technique with a deformation-tolerant matching strategy. We used a graphics processing unit to accelerate reconstruction. Accuracy and precision were evaluated using radiographs from 15 scoliotic patients and a spine model in 24 poses. On in vivo radiographs, accuracy was within 2.8° for orientation and 2.4 mm for location; precision was within 2.3° for orientation and 2.1 mm for location. Results were slightly better on model radiographs than on in vivo radiographs but without significance (p>0.05). The duration for user intervention was less than 2 min, and the computation time was within 3 min. Results indicated the method's reliability. It is a promising tool to determine 3-D spinal geometry with acceptable user interaction. PMID:23412567

  12. Enhanced Rgb-D Mapping Method for Detailed 3d Modeling of Large Indoor Environments

    NASA Astrophysics Data System (ADS)

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-06-01

    RGB-D sensors are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping of indoor environments. First, they have a limited measurement range (e.g., within 3 m) and a limited field of view. Second, the error of the depth measurement increases with distance from the sensor. In this paper, we propose an enhanced RGB-D mapping method for detailed 3D modeling of large indoor environments by combining RGB image-based modeling and depth-based modeling. The scale ambiguity problem during pose estimation with RGB image sequences is resolved by integrating the depth and visual information provided by the proposed system. A robust rigid-transformation recovery method is developed to register the RGB image-based and depth-based 3D models together. The proposed method is examined on two datasets collected in indoor environments; the experimental results demonstrate its feasibility and robustness.
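
    The paper's robust rigid-transformation recovery is not detailed in this abstract; as a minimal, non-robust sketch of the closed-form alignment such a step typically builds on (an Umeyama-style similarity estimate, which also recovers the scale that is ambiguous in image-only reconstruction), assuming point correspondences between the two models are already known:

        import numpy as np

        def similarity_transform(src, dst):
            # Estimate scale s, rotation R, and translation t that minimize
            # ||dst - (s * R @ src + t)||^2 for corresponding (N, 3) point sets.
            mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
            src_c, dst_c = src - mu_s, dst - mu_d
            cov = dst_c.T @ src_c / len(src)
            U, S, Vt = np.linalg.svd(cov)
            D = np.eye(3)
            if np.linalg.det(U @ Vt) < 0:        # guard against a reflection
                D[2, 2] = -1.0
            R = U @ D @ Vt
            var_src = (src_c ** 2).sum() / len(src)
            s = np.trace(np.diag(S) @ D) / var_src
            t = mu_d - s * R @ mu_s
            return s, R, t

    A robust variant would wrap this closed-form estimate in an outlier-rejection loop (e.g., RANSAC over candidate correspondences), which is presumably where the method described in the paper differs from this sketch.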

  13. An Image-Based Technique for 3d Building Reconstruction Using Multi-View Uav Images

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2015-12-01

    Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects among complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior orientation parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model is computed from the points using 2.5D Delaunay triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. Based on visual assessment, the resulting model provides sufficient detail of the building.
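
    As a small illustration of the 2.5D Delaunay meshing step (the pose estimation and SGM dense matching stages are not reproduced), the sketch below triangulates the planimetric (x, y) projection of a point cloud with SciPy and keeps z as the vertex height; the random points stand in for a real UAV-derived cloud and are an assumption of this example.

        import numpy as np
        from scipy.spatial import Delaunay

        def mesh_2_5d(points_xyz):
            # 2.5D triangulation: connectivity is computed in the (x, y) plane,
            # while each vertex keeps its original height z.
            tri = Delaunay(points_xyz[:, :2])
            return points_xyz, tri.simplices     # vertices, (n_triangles, 3) indices

        points = np.random.rand(1000, 3)         # placeholder for a dense cloud
        vertices, faces = mesh_2_5d(points)
        print(faces.shape)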

  14. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and the loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, including the ability to monitor test flight vehicles in real time, and add improvements such as high-resolution imagery and true 3-dimensional capability. This paper discusses the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  15. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material, one that could potentially initiate another new material age. However, conventional processing methods, even when pushed to their full extent, fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing link between graphene materials and the digital mainstream; their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal expansion coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial for keeping the thermal stress built up during printing minimal.
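
    To put the quoted coefficient in perspective, the standard linear-expansion relation ΔL = α · L · ΔT gives the dimensional drift of a printed feature; the 10 mm feature size and 60 °C temperature rise below are illustrative assumptions, not values from the paper.

        # Linear thermal expansion: delta_L = alpha * L * delta_T
        alpha = 75e-6        # 1/°C, upper bound quoted in the abstract
        length_mm = 10.0     # hypothetical printed feature size
        delta_T = 60.0       # hypothetical rise from room temperature toward Tg
        delta_L = alpha * length_mm * delta_T
        print(f"expansion ≈ {delta_L:.3f} mm")   # ≈ 0.045 mm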

  16. 3D Printed Bionic Ears

    PubMed Central

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  17. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Christodoulou, D. M.; Koide, S.; Sakai, J.-I.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with a Lorentz factor of W = 4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: relativistic simulations have consistently shown that these jets are effectively heavy, so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese "noren" or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure. We also simulate jets with more realistic injection conditions, including a helical magnetic field and perturbed density, velocity, and internal energy, which are expected to arise during jet generation. Three possible explanations for the observed variability are (i) tidal disruption of a star falling into the black hole, (ii) instabilities in the relativistic accretion disk, and (iii) jet-related processes.
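
    For reference, the quoted Lorentz factor can be converted to a beam speed with the standard relation W = 1 / sqrt(1 - (v/c)^2); the short check below is only an interpretation of that number, not part of the simulation code.

        import math

        W = 4.56                                 # Lorentz factor from the abstract
        beta = math.sqrt(1.0 - 1.0 / W**2)       # v/c = sqrt(1 - 1/W^2)
        print(f"v ≈ {beta:.3f} c")               # ≈ 0.976 c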