Science.gov

Sample records for 3D pose estimation

  1. Neuromorphic Event-Based 3D Pose Estimation

    PubMed Central

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.

    2016-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real-time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
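
    As a toy illustration of the event-driven update idea (not the authors' algorithm), the sketch below nudges a translation-only pose estimate toward reducing the reprojection distance of the nearest model point for each incoming event; the projection model, gain, and data are all assumptions.

```python
# Toy sketch (not the paper's method): each incoming event nudges a
# translation-only pose estimate toward the nearest projected model point.
import numpy as np

def project(points_3d, t, f=500.0, c=(320.0, 240.0)):
    """Pinhole projection of model points shifted by translation t."""
    p = points_3d + t
    return np.stack([f * p[:, 0] / p[:, 2] + c[0],
                     f * p[:, 1] / p[:, 2] + c[1]], axis=1)

def update_pose(t, event_xy, model_points, gain=1e-4):
    """Move the estimate so the closest projected model point approaches the event."""
    proj = project(model_points, t)
    i = np.argmin(np.linalg.norm(proj - event_xy, axis=1))
    err = event_xy - proj[i]                      # 2D reprojection error (pixels)
    # crude image-plane-to-translation coupling; a real system uses the full Jacobian
    return t + gain * np.array([err[0], err[1], 0.0])

model = np.random.rand(200, 3) + np.array([0.0, 0.0, 2.0])   # synthetic object points
t_est = np.zeros(3)
for ev in np.random.rand(1000, 2) * [640, 480]:               # stand-in event stream
    t_est = update_pose(t_est, ev, model)
```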

  2. Fast human pose estimation using 3D Zernike descriptors

    NASA Astrophysics Data System (ADS)

    Berjón, Daniel; Morán, Francisco

    2012-03-01

    Markerless video-based human pose estimation algorithms face a high-dimensional problem that is frequently broken down into several lower-dimensional ones by estimating the pose of each limb separately. However, in order to do so they need to reliably locate the torso, for which they typically rely on time coherence and tracking algorithms. Losing track usually results in catastrophic failure of the process, requiring human intervention and thus precluding their use in real-time applications. We propose a very fast rough pose estimation scheme based on global shape descriptors built on 3D Zernike moments. Using an articulated model that we configure in many poses, a large database of descriptor/pose pairs can be computed off-line. Thus, the only steps that must be done on-line are the extraction of the descriptors for each input volume and a search against the database to get the most likely poses. While the result of such a process is not a fine pose estimate, it can be useful to help more sophisticated algorithms to regain track or make more educated guesses when creating new particles in particle-filter-based tracking schemes. We have achieved a performance of about ten fps on a single computer using a database of about one million entries.
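
    A minimal sketch of the offline database / online lookup structure described above, with hypothetical 32-D descriptors and pose vectors standing in for the 3D Zernike descriptors, and a KD-tree standing in for the database search.

```python
# Minimal sketch (assumed data): build a descriptor/pose database offline,
# then retrieve the k most likely poses for a new input volume online.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Offline: descriptors (e.g., 3D Zernike moments) computed for many model poses.
descriptors = rng.normal(size=(100_000, 32))    # hypothetical 32-D shape descriptors
poses = rng.normal(size=(100_000, 10))          # hypothetical pose parameter vectors
tree = cKDTree(descriptors)

# Online: describe the input volume, then look up the k nearest database entries.
query = rng.normal(size=32)
dists, idx = tree.query(query, k=5)
candidate_poses = poses[idx]                    # rough pose hypotheses for later refinement
```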

  3. Viewpoint Invariant Gesture Recognition and 3D Hand Pose Estimation Using RGB-D

    ERIC Educational Resources Information Center

    Doliotis, Paul

    2013-01-01

    The broad application domain of the work presented in this thesis is pattern classification with a focus on gesture recognition and 3D hand pose estimation. One of the main contributions of the proposed thesis is a novel method for 3D hand pose estimation using RGB-D. Hand pose estimation is formulated as a database retrieval problem. The proposed…

  4. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  5. Head pose free 3D gaze estimation using RGB-D camera

    NASA Astrophysics Data System (ADS)

    Kacete, Amine; Séguier, Renaud; Collobert, Michel; Royan, Jérôme

    2017-02-01

    In this paper, we propose an approach for 3D gaze estimation under head pose variation using an RGB-D camera. Our method uses a 3D eye model to determine the 3D optical axis and infer the 3D visual axis. For this, we robustly estimate the user's head pose parameters and eye pupil locations with an ensemble of randomized trees trained on a large annotated training set. After projecting the eye pupil locations into the sensor coordinate system using the sensor intrinsic parameters and a simple one-time calibration in which the user gazes at a known 3D target from different directions, the 3D eyeball centers are determined for both eyes of a specific user, yielding the determination of the visual axis. Experimental results demonstrate that our method achieves good gaze estimation accuracy even in highly unconstrained settings, namely large user-sensor distances (>1.5 m), unlike state-of-the-art methods which deal with relatively small distances (<1 m).

  6. An evaluation of 3D head pose estimation using the Microsoft Kinect v2.

    PubMed

    Darby, John; Sánchez, María B; Butler, Penelope B; Loram, Ian D

    2016-07-01

    The Kinect v2 sensor supports real-time non-invasive 3D head pose estimation. Because the sensor is small, widely available and relatively cheap, it has great potential as a tool for groups interested in measuring head posture. In this paper we compare the Kinect's head pose estimates with a marker-based record of ground truth in order to establish its accuracy. During movement of the head and neck alone (with static torso), we find average errors in absolute yaw, pitch and roll angles of 2.0±1.2°, 7.3±3.2° and 2.6±0.7°, and in rotations relative to the rest pose of 1.4±0.5°, 2.1±0.4° and 2.0±0.8°. Larger head rotations where it becomes difficult to see facial features can cause estimation to fail (10.2±6.1% of all poses in our static torso range of motion tests), but we found no significant changes in performance with the participant standing further away from the Kinect - additionally enabling full-body pose estimation - or without performing face shape calibration, something which is not always possible for younger or disabled participants. Where facial features remain visible, the sensor has applications in the non-invasive assessment of postural control, e.g. during a programme of physical therapy. In particular, a multi-Kinect setup covering the full range of head (and body) movement would appear to be a promising way forward.

  7. 3D pose estimation and motion analysis of the articulated human hand-forearm limb in an industrial production environment

    NASA Astrophysics Data System (ADS)

    Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz

    2010-09-01

    This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.
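
    The fusion step described above combines the pose parameters delivered by MOCCD and ICP through weighting. Below is a minimal per-component inverse-variance weighting sketch under assumed uncertainties; it is an illustration of the idea, not the paper's exact iterative scheme.

```python
# Minimal sketch (assumptions: small angles, diagonal uncertainties): fuse two
# pose parameter vectors by inverse-variance weighting, as a stand-in for the
# iterative weighted combination of MOCCD and ICP results described above.
import numpy as np

def fuse(pose_a, var_a, pose_b, var_b):
    """Per-component inverse-variance weighted average of two pose vectors."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * pose_a + w_b * pose_b) / (w_a + w_b)

pose_moccd = np.array([0.41, -0.12, 1.05, 0.02, 0.10, -0.03])  # x, y, z, roll, pitch, yaw
pose_icp   = np.array([0.43, -0.10, 1.02, 0.01, 0.08, -0.01])
var_moccd  = np.array([4e-4, 4e-4, 9e-4, 1e-3, 1e-3, 1e-3])    # assumed variances
var_icp    = np.array([1e-3, 1e-3, 4e-4, 4e-3, 4e-3, 4e-3])
fused_pose = fuse(pose_moccd, var_moccd, pose_icp, var_icp)
```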

  8. Augmenting ViSP's 3D Model-Based Tracker with RGB-D SLAM for 3D Pose Estimation in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Li-Chee-Ming, J.; Armenakis, C.

    2016-06-01

    This paper presents a novel application of the Visual Servoing Platform (ViSP) for pose estimation in indoor and GPS-denied outdoor environments. Our proposed solution integrates the trajectory solution from RGB-D SLAM into ViSP's pose estimation process. Li-Chee-Ming and Armenakis (2015) explored the application of ViSP in mapping large outdoor environments, and tracking larger objects (i.e., building models). Their experiments revealed that tracking was often lost due to a lack of model features in the camera's field of view, and also because of rapid camera motion. Further, the pose estimate was often biased due to incorrect feature matches. This work proposes a solution to improve ViSP's pose estimation performance, aiming specifically to reduce the frequency of tracking losses and reduce the biases present in the pose estimate. This paper explores the integration of ViSP with RGB-D SLAM. We discuss the performance of the combined tracker in mapping indoor environments and tracking 3D wireframe indoor building models, and present preliminary results from our experiments.

  9. Integration of camera and range sensors for 3D pose estimation in robot visual servoing

    NASA Astrophysics Data System (ADS)

    Hulls, Carol C. W.; Wilson, William J.

    1998-10-01

    Range-vision sensor systems can incorporate range images or single point measurements. Research incorporating point range measurements has focused on the area of map generation for mobile robots. These systems can utilize the fact that the objects sensed tend to be large and planar. The approach presented in this paper fuses information obtained from a point range measurement with visual information to produce estimates of the relative 3D position and orientation of a small, non-planar object with respect to a robot end-effector. The paper describes a real-time sensor fusion system for performing dynamic visual servoing using a camera and a point laser range sensor. The system is based upon the object model reference approach. This approach, which can be used to develop multi-sensor fusion systems that fuse dynamic sensor data from diverse sensors in real-time, uses a description of the object to be sensed in order to develop a combined observation-dependency sensor model. The range-vision sensor system is evaluated in terms of accuracy and robustness. The results show that the use of a range sensor significantly improves the system performance when there is poor or insufficient camera information. The system developed is suitable for visual servoing applications, particularly robot assembly operations.

  10. System for conveyor belt part picking using structured light and 3D pose estimation

    NASA Astrophysics Data System (ADS)

    Thielemann, J.; Skotheim, Ø.; Nygaard, J. O.; Vollset, T.

    2009-01-01

    Automatic picking of parts is an important challenge to solve within factory automation, because it can remove tedious manual work and save labor costs. One such application involves parts that arrive with random position and orientation on a conveyor belt. The parts should be picked off the conveyor belt and placed systematically into bins. We describe a system that consists of a structured light instrument for capturing 3D data and robust methods for aligning an input 3D template with a 3D image of the scene. The method uses general and robust pre-processing steps based on geometric primitives that allow the well-known Iterative Closest Point algorithm to converge quickly and robustly to the correct solution. The method has been demonstrated for localization of car parts with random position and orientation. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  11. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate the position with the precision necessary for a gripping robot to pick it up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, in this paper exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  12. ILIAD Testing; and a Kalman Filter for 3-D Pose Estimation

    NASA Technical Reports Server (NTRS)

    Richardson, A. O.

    1996-01-01

    This report presents the results of a two-part project. The first part presents results of performance assessment tests on an Internet Library Information Assembly Data Base (ILIAD). It was found that ILIAD performed best when queries were short (one-to-three keywords), and were made up of rare, unambiguous words. In such cases as many as 64% of the typically 25 returned documents were found to be relevant. It was also found that a query format that was not so rigid with respect to spelling errors and punctuation marks would be more user-friendly. The second part of the report shows the design of a Kalman Filter for estimating motion parameters of a three-dimensional object from sequences of noisy data derived from two-dimensional pictures. Given six measured deviation values representing X, Y, Z, pitch, yaw, and roll, twelve parameters were estimated comprising the six deviations and their time rate of change. Values for the state transition matrix, the observation matrix, the system noise covariance matrix, and the observation noise covariance matrix were determined. A simple way of initializing the error covariance matrix was pointed out.
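
    Given the dimensions stated above (six measured deviations, twelve estimated parameters consisting of the deviations and their rates), a minimal constant-velocity Kalman filter might look like the following sketch; the time step and noise covariance values are illustrative assumptions, not those of the report.

```python
# Minimal constant-velocity Kalman filter sketch for a 12-state / 6-measurement
# setup (X, Y, Z, pitch, yaw, roll plus their rates). Noise levels and the time
# step are illustrative assumptions.
import numpy as np

dt = 0.1                                        # frame interval (s), assumed
F = np.eye(12)
F[:6, 6:] = dt * np.eye(6)                      # state transition: deviation += rate * dt
H = np.hstack([np.eye(6), np.zeros((6, 6))])    # only the six deviations are observed
Q = 1e-4 * np.eye(12)                           # system noise covariance (assumed)
R = 1e-2 * np.eye(6)                            # observation noise covariance (assumed)

x = np.zeros(12)                                # state estimate
P = np.eye(12)                                  # simple error-covariance initialization

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(12) - K @ H) @ P
    return x, P

for z in np.random.normal(size=(50, 6)):        # stand-in measurement sequence
    x, P = kalman_step(x, P, z)
```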

  13. Neural network system for 3-D object recognition and pose estimation from a single arbitrary 2-D view

    NASA Astrophysics Data System (ADS)

    Khotanzad, Alireza R.; Liou, James H.

    1992-09-01

    In this paper, a robust and fast system for recognition as well as pose estimation of a 3-D object from a single 2-D perspective of it taken from an arbitrary viewpoint is developed. The approach is invariant to location, orientation, and scale of the object in the perspective. The silhouette of the object in the 2-D perspective is first normalized with respect to location and scale. A set of rotation invariant features derived from complex and orthogonal pseudo-Zernike moments of the image are then extracted. The next stage includes a bank of multilayer feed-forward neural networks (NN) each of which classifies the extracted features. The training set for these nets consists of perspective views of each object taken from several different viewing angles. The NNs in the bank differ in the size of their hidden layers as well as their initial conditions but receive the same input. The classification decisions of all the nets are combined through a majority voting scheme. It is shown that this collective decision making yields better results compared to a single NN operating alone. After the object is classified, two of its pose parameters, namely elevation and aspect angles, are estimated by another module of NNs in a two-stage process. The first stage identifies the likely region of the space that the object is being viewed from. In the second stage, an NN estimator for the identified region is used to compute the pose angles. Extensive experimental studies involving clean and noisy images of seven military ground vehicles are carried out. The performance is compared to two other traditional methods, namely a nearest neighbor rule and a binary decision tree classifier, and it is shown that our approach has major advantages over them.

  14. 3-D Pose Presentation for Training Applications

    ERIC Educational Resources Information Center

    Fox, Kaitlyn; Whitehead, Anthony

    2011-01-01

    Purpose: In the authors' experience, the biggest issue with pose-based exergames is the difficulty in effectively communicating a three-dimensional pose to a user to facilitate a thorough understanding for accurate pose replication. The purpose of this paper is to examine options for pose presentation. Design/methodology/approach: The authors…

  15. 3D sensor algorithms for spacecraft pose determination

    NASA Astrophysics Data System (ADS)

    Trenkle, John M.; Tchoryk, Peter, Jr.; Ritter, Greg A.; Pavlich, Jane C.; Hickerson, Aaron S.

    2006-05-01

    Researchers at the Michigan Aerospace Corporation have developed accurate and robust 3-D algorithms for pose determination (position and orientation) of satellites as part of an on-going effort supporting autonomous rendezvous, docking and space situational awareness activities. 3-D range data from a LAser Detection And Ranging (LADAR) sensor is the expected input; however, the approach is unique in that the algorithms are designed to be sensor independent. Parameterized inputs allow the algorithms to be readily adapted to any sensor of opportunity. The cornerstone of our approach is the ability to simulate realistic range data that may be tailored to the specifications of any sensor. We were able to modify an open-source raytracing package to produce point cloud information from which high-fidelity simulated range images are generated. The assumptions made in our experimentation are as follows: 1) we have access to a CAD model of the target including information about the surface scattering and reflection characteristics of the components; 2) the satellite of interest may appear at any 3-D attitude; 3) the target is not necessarily rigid, but does have a limited number of configurations; and, 4) the target is not obscured in any way and is the only object in the field of view of the sensor. Our pose estimation approach then involves rendering a large number of exemplars (100k to 5M), extracting 2-D (silhouette- and projection-based) and 3-D (surface-based) features, and then training ensembles of decision trees to predict: a) the 4-D regions on a unit hypersphere into which the unit quaternion that represents the vehicle [Qx, Qy, Qz, Qw] is pointing, and, b) the components of that unit quaternion. Results have been quite promising and the tools and simulation environment developed for this application may also be applied to non-cooperative spacecraft operations, Autonomous Hazard Detection and Avoidance (AHDA) for landing craft, terrain mapping, vehicle
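
    As a rough sketch of the regression stage described above (predicting the components of the attitude quaternion from extracted features), the following uses a scikit-learn random forest on synthetic stand-in data; it is not the authors' decision-tree ensemble, feature set, or hypersphere-region classifier.

```python
# Illustrative sketch (synthetic data): regress the attitude quaternion from
# pose-dependent features with a tree ensemble, then renormalize to unit length.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
features = rng.normal(size=(5000, 64))          # stand-in 2-D/3-D shape features
quats = rng.normal(size=(5000, 4))
quats /= np.linalg.norm(quats, axis=1, keepdims=True)   # unit quaternions [Qx, Qy, Qz, Qw]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features, quats)

q_pred = model.predict(rng.normal(size=(1, 64)))[0]
q_pred /= np.linalg.norm(q_pred)                # project back onto the unit hypersphere
```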

  16. 3D face recognition under expressions, occlusions, and pose variations.

    PubMed

    Drira, Hassen; Ben Amor, Boulbaba; Srivastava, Anuj; Daoudi, Mohamed; Slama, Rim

    2013-09-01

    We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.

  17. Pose detection of a 3D object using template matched filtering

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Díaz-Ramírez, Víctor H.

    2016-09-01

    The problem of 3D pose recognition of a rigid object is difficult to solve because the pose in a 3D space can vary with multiple degrees of freedom. In this work, we propose an accurate method for 3D pose estimation based on template matched filtering. The proposed method utilizes a bank of space-variant filters which take into account different pose states of the target and local statistical properties of the input scene. The state parameters of location coordinates, orientation angles, and scaling parameters of the target are estimated with high accuracy in the input scene. Experimental tests are performed for real and synthetic scenes. The proposed system yields good performance for 3D pose recognition in terms of detection efficiency, location and orientation errors.
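
    A minimal sketch of matching against a bank of pose-specific templates and keeping the strongest response, using OpenCV template matching on synthetic data; the bank here is a hypothetical stand-in for the space-variant filters described above.

```python
# Minimal sketch (OpenCV, synthetic data): correlate the scene against a bank of
# pose-specific templates and keep the best response over all discretized poses.
import cv2
import numpy as np

scene = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
# hypothetical bank: one template per discretized pose (orientation/scale state)
bank = {f"pose_{k}": np.random.randint(0, 256, (64, 64), dtype=np.uint8) for k in range(36)}

best = None
for pose_id, tmpl in bank.items():
    response = cv2.matchTemplate(scene, tmpl, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)
    if best is None or max_val > best[0]:
        best = (max_val, pose_id, max_loc)   # (score, pose state, location)

score, pose_id, (x, y) = best                # estimated pose state and position
```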

  18. Piecewise-rigid 2D-3D registration for pose estimation of snake-like manipulator using an intraoperative x-ray projection

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Murphy, R. J.; Kutzer, M. D.; Taylor, R. H.; Armand, M.

    2014-03-01

    Background: Snake-like dexterous manipulators may offer significant advantages in minimally-invasive surgery in areas not reachable with conventional tools. Precise control of a wire-driven manipulator is challenging due to factors such as cable deformation, unknown internal (cable friction) and external forces, thus requiring correcting the calibration intraoperatively by determining the actual pose of the manipulator. Method: A method for simultaneously estimating pose and kinematic configuration of a piecewise-rigid object such as a snake-like manipulator from a single x-ray projection is presented. The method parameterizes kinematics using a small number of variables (e.g., 5), and optimizes them simultaneously with the 6 degree-of-freedom pose parameter of the base link using an image similarity between digitally reconstructed radiographs (DRRs) of the manipulator's attenuation model and the real x-ray projection. Result: Simulation studies assumed various geometric magnifications (1.2-2.6) and out-of-plane angulations (0°-90°) in a scenario of hip osteolysis treatment, which demonstrated the median joint angle error was 0.04° (for 2.0 magnification, +/-10° out-of-plane rotation). Average computation time was 57.6 sec with 82,953 function evaluations on a mid-range GPU. The joint angle error remained lower than 0.07° while out-of-plane rotation was 0°-60°. An experiment using video images of a real manipulator demonstrated a similar trend as the simulation study except for slightly larger error around the tip attributed to accumulation of errors induced by deformation around each joint not modeled with a simple pin joint. Conclusions: The proposed approach enables high precision tracking of a piecewise-rigid object (i.e., a series of connected rigid structures) using a single projection image by incorporating prior knowledge about the shape and kinematic behavior of the object (e.g., each rigid structure connected by a pin joint parameterized by a

  19. Pose invariant face recognition: 3D model from single photo

    NASA Astrophysics Data System (ADS)

    Napoléon, Thibault; Alfalou, Ayman

    2017-02-01

    Face recognition is widely studied in the literature for its possibilities in surveillance and security. In this paper, we report a novel algorithm for the identification task. This technique is based on an optimized 3D modeling allowing the reconstruction of faces in different poses from a limited number of references (i.e. one image per class/person). In particular, we propose to use an active shape model to detect a set of keypoints on the face necessary to deform our synthetic model with our optimized finite element method. Indeed, in order to improve our deformation, we propose a regularization based on graph distances. To perform the identification we use the VanderLugt correlator, well known to effectively address this task. Additionally, we add a difference-of-Gaussians filtering step to highlight the edges and a description step based on local binary patterns. The experiments are performed on the PHPID database enhanced with our 3D reconstructed faces of each person with an azimuth and an elevation ranging from -30° to +30°. The obtained results prove the robustness of our new method with 88.76% of good identification when the classic 2D approach (based on the VLC) obtains just 44.97%.

  20. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  1. Particle Filters and Occlusion Handling for Rigid 2D-3D Pose Tracking

    PubMed Central

    Lee, Jehoon; Sandhu, Romeil; Tannenbaum, Allen

    2013-01-01

    In this paper, we address the problem of 2D-3D pose estimation. Specifically, we propose an approach to jointly track a rigid object in a 2D image sequence and to estimate its pose (position and orientation) in 3D space. We revisit a joint 2D segmentation/3D pose estimation technique, and then extend the framework by incorporating a particle filter to robustly track the object in a challenging environment, and by developing an occlusion detection and handling scheme to continuously track the object in the presence of occlusions. In particular, we focus on partial occlusions that prevent the tracker from extracting exact region properties of the object, which play a pivotal role for region-based tracking methods in maintaining the track. To this end, a dynamic choice of how to invoke the objective functional is made online based on the degree of dependencies between predictions and measurements of the system in accordance with the degree of occlusion and the variation of the object’s pose. This scheme provides the robustness to deal with occlusions of an obstacle with different statistical properties from that of the object of interest. Experimental results demonstrate the practical applicability and robustness of the proposed method in several challenging scenarios. PMID:24058277

  2. Using Facial Symmetry to Handle Pose Variations in Real-World 3D Face Recognition.

    PubMed

    Passalis, Georgios; Perakis, Panagiotis; Theoharis, Theoharis; Kakadiaris, Ioannis A

    2011-10-01

    The uncontrolled conditions of real-world biometric applications pose a great challenge to any face recognition approach. The unconstrained acquisition of data from uncooperative subjects may result in facial scans with significant pose variations along the yaw axis. Such pose variations can cause extensive occlusions, resulting in missing data. In this paper, a novel 3D face recognition method is proposed that uses facial symmetry to handle pose variations. It employs an automatic landmark detector that estimates pose and detects occluded areas for each facial scan. Subsequently, an Annotated Face Model is registered and fitted to the scan. During fitting, facial symmetry is used to overcome the challenges of missing data. The result is a pose invariant geometry image. Unlike existing methods that require frontal scans, the proposed method performs comparisons among interpose scans using a wavelet-based biometric signature. It is suitable for real-world applications as it only requires half of the face to be visible to the sensor. The proposed method was evaluated using databases from the University of Notre Dame and the University of Houston that, to the best of our knowledge, include the most challenging pose variations publicly available. The average rank-one recognition rate of the proposed method in these databases was 83.7 percent.

  3. MINACE-filter-based facial pose estimation

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Patnaik, Rohit

    2005-03-01

    A facial pose estimation system is presented that functions with illumination variations present. Pose estimation is a useful first stage in a face recognition system. A separate minimum noise and correlation energy (MINACE) filter is synthesized for each pose. To select the MINACE parameter c for the filter for a pose, a training set of illumination differences of several faces at that pose, and a validation set of other poses (illumination differences of several faces at a few other poses), are used in the automated filter-synthesis step. However, the filter for each pose is a combination of faces at only that pose. The pose estimation system is evaluated using images from the CMU Pose, Illumination and Expression (PIE) database. The classification performance (PC) scores are presented for several pose estimation tests. The pose estimate will be used for a subsequent image transformation of a test face to a reference pose for face identification.

  4. A pose prediction approach based on ligand 3D shape similarity

    NASA Astrophysics Data System (ADS)

    Kumar, Ashutosh; Zhang, Kam Y. J.

    2016-06-01

    Molecular docking predicts the best pose of a ligand in the target protein binding site by sampling and scoring numerous conformations and orientations of the ligand. Failures in pose prediction are often due to either insufficient sampling or scoring function errors. To improve the accuracy of pose prediction by tackling the sampling problem, we have developed a method of pose prediction using shape similarity. It first places a ligand conformation of the highest 3D shape similarity with known crystal structure ligands into the protein binding site and then refines the pose by repacking the side-chains and performing energy minimization with a Monte Carlo algorithm. We have assessed our method utilizing CSARdock 2012 and 2014 benchmark exercise datasets consisting of co-crystal structures from eight proteins. Our results revealed that ligand 3D shape similarity could substitute for conformational and orientational sampling if at least one suitable co-crystal structure is available. Our method identified poses within 2 Å RMSD as the top-ranking pose for 85.7 % of the test cases. The median RMSD for our pose prediction method was found to be 0.81 Å and was better than methods performing extensive conformational and orientational sampling within target protein binding sites. Furthermore, our method was better than similar methods utilizing ligand 3D shape similarity for pose prediction.

  5. Generalized Hough transform based time invariant action recognition with 3D pose information

    NASA Astrophysics Data System (ADS)

    Muench, David; Huebner, Wolfgang; Arens, Michael

    2014-10-01

    Human action recognition has emerged as an important field in the computer vision community due to its large number of applications such as automatic video surveillance, content-based video search and human-robot interaction. In order to cope with the challenges that this large variety of applications present, recent research has focused more on developing classifiers able to detect several actions in more natural and unconstrained video sequences. The invariance-discrimination tradeoff in action recognition has been addressed by utilizing a Generalized Hough Transform. As a basis for action representation we transform 3D poses into a robust feature space, referred to as pose descriptors. For each action class a one-dimensional temporal voting space is constructed. Votes are generated from associating pose descriptors with their position in time relative to the end of an action sequence. Training data consists of manually segmented action sequences. In the detection phase valid human 3D poses are assumed as input, e.g. originating from 3D sensors or monocular pose reconstruction methods. The human 3D poses are normalized to gain view-independence and transformed into (i) relative limb-angle space to ensure independence of non-adjacent joints or (ii) geometric features. In (i) an action descriptor consists of the relative angles between limbs and their temporal derivatives. In (ii) the action descriptor consists of different geometric features. In order to circumvent the problem of time-warping we propose to use a codebook of prototypical 3D poses which is generated from sample sequences of 3D motion capture data. This idea is in accordance with the concept of equivalence classes in action space. Results of the codebook method are presented using the Kinect sensor and the CMU Motion Capture Database.
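
    A toy sketch of the one-dimensional temporal voting idea described above: each frame's pose descriptor is matched to a codebook of prototypical poses, and the matched codeword casts class-specific votes for the time remaining until the action ends. The codebook, vote offsets, and data below are synthetic assumptions.

```python
# Toy sketch of Hough-style temporal voting: codeword lookup per frame, votes
# accumulated in a 1D "time until action end" space per action class.
import numpy as np

rng = np.random.default_rng(2)
codebook = rng.normal(size=(50, 16))                 # prototypical pose descriptors (assumed)
# offsets[c][w]: time-to-end votes that codeword w casts for class c (learned in training)
offsets = {c: {w: rng.integers(0, 40, size=3).tolist() for w in range(50)} for c in range(3)}

def vote(sequence, max_len=100):
    """Accumulate per-class votes over time-to-end for a descriptor sequence."""
    spaces = {c: np.zeros(max_len) for c in offsets}
    for desc in sequence:
        w = int(np.argmin(np.linalg.norm(codebook - desc, axis=1)))  # nearest codeword
        for c in offsets:
            for dt in offsets[c][w]:
                spaces[c][dt] += 1.0
    return spaces

test_seq = rng.normal(size=(30, 16))                 # stand-in pose descriptor sequence
voting_spaces = vote(test_seq)
detected_class = max(voting_spaces, key=lambda c: voting_spaces[c].max())
```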

  6. A Gaussian process guided particle filter for tracking 3D human pose in video.

    PubMed

    Sedai, Suman; Bennamoun, Mohammed; Huynh, Du Q

    2013-11-01

    In this paper, we propose a hybrid method that combines Gaussian process learning, a particle filter, and annealing to track the 3D pose of a human subject in video sequences. Our approach, which we refer to as annealed Gaussian process guided particle filter, comprises two steps. In the training step, we use a supervised learning method to train a Gaussian process regressor that takes the silhouette descriptor as an input and produces multiple output poses modeled by a mixture of Gaussian distributions. In the tracking step, the output pose distributions from the Gaussian process regression are combined with the annealed particle filter to track the 3D pose in each frame of the video sequence. Our experiments show that the proposed method does not require initialization and does not lose tracking of the pose. We compare our approach with a standard annealed particle filter using the HumanEva-I dataset and with other state of the art approaches using the HumanEva-II dataset. The evaluation results show that our approach can successfully track the 3D human pose over long video sequences and give more accurate pose tracking results than the annealed particle filter.
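
    A minimal sketch of the training-step regressor described above (silhouette descriptor in, pose out) using scikit-learn's Gaussian process regression on synthetic data; the actual method models multimodal outputs with a mixture of Gaussians and feeds them to an annealed particle filter, which this simplification omits.

```python
# Minimal sketch (scikit-learn, synthetic data): Gaussian process regression from
# a silhouette descriptor to pose parameters, usable as a proposal for a particle filter.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 20))      # stand-in silhouette descriptors
Y = rng.normal(size=(300, 6))       # stand-in pose parameters (e.g., joint angles)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2))
gp.fit(X, Y)

pose_mean, pose_std = gp.predict(rng.normal(size=(1, 20)), return_std=True)
```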

  7. Texture mapping 3D models of indoor environments with noisy camera poses

    NASA Astrophysics Data System (ADS)

    Cheng, Peter; Anderson, Michael; He, Stewart; Zakhor, Avideh

    2013-03-01

    Automated 3D modeling of building interiors is used in applications such as virtual reality and environment mapping. Texturing these models allows for photo-realistic visualizations of the data collected by such modeling systems. While data acquisition times for mobile mapping systems are considerably shorter than for static ones, their recovered camera poses often suffer from inaccuracies, resulting in visible discontinuities when successive images are projected onto a surface for texturing. We present a method for texture mapping models of indoor environments that starts by selecting images whose camera poses are well-aligned in two dimensions. We then align images to geometry as well as to each other, producing visually consistent textures even in the presence of inaccurate surface geometry and noisy camera poses. Images are then composited into a final texture mosaic and projected onto surface geometry for visualization. The effectiveness of the proposed method is demonstrated on a number of different indoor environments.

  8. Joint albedo estimation and pose tracking from video.

    PubMed

    Taheri, Sima; Sankaranarayanan, Aswin C; Chellappa, Rama

    2013-07-01

    The albedo of a Lambertian object is a surface property that contributes to an object's appearance under changing illumination. As a signature independent of illumination, the albedo is useful for object recognition. Single image-based albedo estimation algorithms suffer due to shadows and non-Lambertian effects of the image. In this paper, we propose a sequential algorithm to estimate the albedo from a sequence of images of a known 3D object in varying poses and illumination conditions. We first show that by knowing/estimating the pose of the object at each frame of a sequence, the object's albedo can be efficiently estimated using a Kalman filter. We then extend this for the case of unknown pose by simultaneously tracking the pose as well as updating the albedo through a Rao-Blackwellized particle filter (RBPF). More specifically, the albedo is marginalized from the posterior distribution and estimated analytically using the Kalman filter, while the pose parameters are estimated using importance sampling and by minimizing the projection error of the face onto its spherical harmonic subspace, which results in an illumination-insensitive pose tracking algorithm. Illustrations and experiments are provided to validate the effectiveness of the approach using various synthetic and real sequences followed by applications to unconstrained, video-based face recognition.

  9. Appearance learning for 3D pose detection of a satellite at close-range

    NASA Astrophysics Data System (ADS)

    Oumer, Nassir W.; Kriegel, Simon; Ali, Haider; Reinartz, Peter

    2017-03-01

    In this paper we present a learning-based 3D detection of a highly challenging specular object exposed to direct sunlight at very close range. Object detection is one of the most important areas of image processing, and can also be used for initialization of local visual tracking methods. While object detection in 3D space is generally a difficult problem, it poses more difficulties when the object is specular and exposed to direct sunlight, as in a space environment. Our solution to such a problem relies on appearance learning of a real satellite mock-up based on vector quantization and a vocabulary tree. Our method, implemented on a standard computer (CPU), exploits a full perspective projection model and provides near real-time 3D pose detection of a satellite for close-range approach and manipulation. The time-consuming parts of the training (feature description, building the vocabulary tree and indexing, depth buffering and back-projection) are performed offline, while fast image retrieval and 3D-2D registration are performed on-line. In contrast, the state-of-the-art image-based 3D pose detection methods are slower on CPU or assume a weak perspective camera projection model. In our case the dimension of the satellite is larger than the distance to the camera, hence the assumption of the weak perspective model does not hold. To evaluate the proposed method, the appearance of a full-scale mock-up of the rear part of the TerraSAR-X satellite is trained under various illumination conditions and camera views. The training images are captured with a camera mounted on a six-degree-of-freedom robot, which enables positioning the camera in a desired view, sampled over a sphere. The views that are not within the workspace of the robot are interpolated using image-based rendering. Moreover, we generate ground truth poses to verify the accuracy of the detection algorithm. The achieved results are robust and accurate even under noise due to specular reflection.

  10. Factoring Algebraic Error for Relative Pose Estimation

    SciTech Connect

    Lindstrom, P; Duchaineau, M

    2009-03-09

    We address the problem of estimating the relative pose, i.e. translation and rotation, of two calibrated cameras from image point correspondences. Our approach is to factor the nonlinear algebraic pose error functional into translational and rotational components, and to optimize translation and rotation independently. This factorization admits subproblems that can be solved using direct methods with practical guarantees on global optimality. That is, for a given translation, the corresponding optimal rotation can directly be determined, and vice versa. We show that these subproblems are equivalent to computing the least eigenvector of second- and fourth-order symmetric tensors. When neither translation nor rotation is known, alternating translation and rotation optimization leads to a simple, efficient, and robust algorithm for pose estimation that improves on the well-known 5- and 8-point methods.

  11. Space vehicle pose estimation via optical correlation and nonlinear estimation

    NASA Astrophysics Data System (ADS)

    Rakoczy, John M.; Herren, Kenneth A.

    2008-03-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  12. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Herren, Kenneth

    2007-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  13. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John M.; Herren, Kenneth A.

    2008-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  14. Robotic-surgical instrument wrist pose estimation.

    PubMed

    Fabel, Stephan; Baek, Kyungim; Berkelman, Peter

    2010-01-01

    The Compact Lightweight Surgery Robot from the University of Hawaii includes two teleoperated instruments and one endoscope manipulator which act in accord to perform assisted interventional medicine. The relative positions and orientations of the robotic instruments and endoscope must be known to the teleoperation system so that the directions of the instrument motions can be controlled to correspond closely to the directions of the motions of the master manipulators, as seen by the endoscope and displayed to the surgeon. If the manipulator bases are mounted in known locations and all manipulator joint variables are known, then the necessary coordinate transformations between the master and slave manipulators can be easily computed. The versatility and ease of use of the system can be increased, however, by allowing the endoscope or instrument manipulator bases to be moved to arbitrary positions and orientations without reinitializing each manipulator or remeasuring their relative positions. The aim of this work is to find the pose of the instrument end effectors using the video image from the endoscope camera. The P3P pose estimation algorithm is used with a Levenberg-Marquardt optimization to ensure convergence. The correct transformations between the master and slave coordinate frames can then be calculated and updated when the bases of the endoscope or instrument manipulators are moved to new, unknown, positions at any time before or during surgical procedures.
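
    The pose computation described above (P3P followed by Levenberg-Marquardt refinement) can be sketched with OpenCV as below; the intrinsics, fiducial layout, and ground-truth pose used to synthesize the correspondences are made-up values for illustration, not the robot's calibration.

```python
# Illustrative sketch with OpenCV (synthetic data): P3P pose estimate from four
# point correspondences, refined with Levenberg-Marquardt.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])               # assumed endoscope intrinsics
dist = np.zeros(5)

# four non-coplanar fiducial points on the instrument wrist (metres, assumed)
object_pts = np.array([[0.00, 0.00, 0.00],
                       [0.02, 0.00, 0.00],
                       [0.00, 0.02, 0.00],
                       [0.00, 0.00, 0.02]])

# synthesize their projections from a known pose so the example is self-consistent
rvec_true = np.array([0.1, -0.3, 0.2])
tvec_true = np.array([0.01, -0.02, 0.25])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist, flags=cv2.SOLVEPNP_P3P)
if ok:
    # Levenberg-Marquardt refinement of the P3P solution
    rvec, tvec = cv2.solvePnPRefineLM(object_pts, image_pts, K, dist, rvec, tvec)
```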

  15. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods are proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation and 2) they fail to learn personalized correspondence for each probe image. To this end, we first build a model, termed as morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the usage of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database.

  16. Automatic pose initialization for accurate 2D/3D registration applied to abdominal aortic aneurysm endovascular repair

    NASA Astrophysics Data System (ADS)

    Miao, Shun; Lucas, Joseph; Liao, Rui

    2012-02-01

    Minimally invasive abdominal aortic aneurysm (AAA) stenting can be greatly facilitated by overlaying the preoperative 3-D model of the abdominal aorta onto the intra-operative 2-D X-ray images. Accurate 2-D/3-D registration in 3-D space makes the 2-D/3-D overlay robust to the change of C-Arm angulations. To date, the 2-D/3-D registration methods based on simulated X-ray projection images using multiple image planes have been shown to be able to provide satisfactory 3-D registration accuracy. However, one drawback of the intensity-based 2-D/3-D registration methods is that the similarity measure is usually highly non-convex and hence the optimizer can easily be trapped into local minima. User interaction therefore is often needed in the initialization of the position of the 3-D model in order to get a successful 2-D/3-D registration. In this paper, a novel 3-D pose initialization technique is proposed, as an extension of our previously proposed bi-plane 2-D/3-D registration method for AAA intervention [4]. The proposed method detects vessel bifurcation points and the spine centerline in both 2-D and 3-D images, and utilizes the landmark information to bring the 3-D volume into a 15 mm capture range. The proposed landmark detection method was validated on a real dataset, and is shown to be able to provide a good initialization for the 2-D/3-D registration in [4], thus making the workflow fully automatic.

  17. Pose Estimation and Mapping Using Catadioptric Cameras with Spherical Mirrors

    NASA Astrophysics Data System (ADS)

    Ilizirov, Grigory; Filin, Sagi

    2016-06-01

    Catadioptric cameras have the advantage of broadening the field of view and revealing otherwise occluded object parts. However, they differ geometrically from standard central perspective cameras because of light reflection from the mirror surface which alters the collinearity relation and introduces severe non-linear distortions of the imaged scene. Accommodating for these features, we present in this paper a novel modeling for pose estimation and reconstruction while imaging through spherical mirrors. We derive a closed-form equivalent to the collinearity principle via which we estimate the system's parameters. Our model yields a resection-like solution which can be developed into a linear one. We show that accurate estimates can be derived with only a small set of control points. Analysis shows that control configuration in the orientation scheme is rather flexible and that high levels of accuracy can be reached in both pose estimation and mapping. Clearly, the ability to model objects which fall outside of the immediate camera field-of-view offers an appealing means to supplement 3-D reconstruction and modeling.

  18. Spatio-Temporal Matching for Human Pose Estimation in Video.

    PubMed

    Zhou, Feng; Torre, Fernando De la

    2016-08-01

    Detecting and tracking humans in videos have been long-standing problems in computer vision. Most successful approaches (e.g., deformable parts models) heavily rely on discriminative models to build appearance detectors for body joints and generative models to constrain possible body configurations (e.g., trees). While these 2D models have been successfully applied to images (and with less success to videos), a major challenge is to generalize these models to cope with changing camera views. In order to achieve view-invariance, these 2D models typically require a large amount of training data across views that is difficult to gather and time-consuming to label. Unlike existing 2D models, this paper formulates the problem of human detection in videos as spatio-temporal matching (STM) between a 3D motion capture model and trajectories in videos. Our algorithm estimates the camera view and selects a subset of tracked trajectories that matches the motion of the 3D model. The STM is efficiently solved with linear programming, and it is robust to tracking mismatches, occlusions and outliers. To the best of our knowledge this is the first paper that solves the correspondence between video and 3D motion capture data for human pose detection. Experiments on the CMU motion capture, Human3.6M, Berkeley MHAD and CMU MAD databases illustrate the benefits of our method over state-of-the-art approaches.

  19. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
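
    One building block of the pipeline above is triangulating a landmark position from several camera poses. A minimal least-squares ray-intersection sketch follows (not necessarily the software's "optimal ray projection method"); the camera centers and ray directions are assumed to be already computed from the pose estimates and matched image features.

```python
# Minimal sketch: triangulate a landmark as the least-squares intersection of
# viewing rays x = c_i + s * d_i from several camera poses.
import numpy as np

def triangulate(centers, directions):
    """Point minimizing the summed squared distances to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)     # projector onto the plane normal to d
        A += M
        b += M @ c
    return np.linalg.solve(A, b)

# stand-in camera centers and viewing rays toward one landmark
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dirs = np.array([[0.0, 0.0, 1.0], [-0.05, 0.0, 1.0], [0.0, -0.05, 1.0]])
landmark_xyz = triangulate(centers, dirs)
```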

  20. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    PubMed Central

    Park, Jae Byung; Lee, Seung Hun; Lee, Il Jae

    2009-01-01

    In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle used to carry the plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For lug pose acquisition, four laser lines are projected onto both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination changes, vertical thresholding, thinning, the Hough transform and a separated Hough transform algorithm are successively applied to the camera image. The lug pose acquisition is carried out in two stages: top view alignment and side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor. PMID:22400007
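
    As a rough illustration of the line-extraction stage, a thresholding step followed by a probabilistic Hough transform can recover the projected laser stripes from a camera image; this is a simplified stand-in (the paper's vertical-threshold, thinning and separated Hough steps are not reproduced), and the image path is hypothetical.

      import cv2
      import numpy as np

      def detect_laser_lines(gray, intensity_thresh=200):
          """Detect bright laser stripes as line segments (simplified sketch)."""
          _, binary = cv2.threshold(gray, intensity_thresh, 255, cv2.THRESH_BINARY)
          # probabilistic Hough transform returns segments as (x1, y1, x2, y2)
          lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                                  threshold=80, minLineLength=40, maxLineGap=10)
          return [] if lines is None else [l[0] for l in lines]

      gray = cv2.imread("lug_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
      if gray is not None:
          for x1, y1, x2, y2 in detect_laser_lines(gray):
              print("segment:", (x1, y1), "->", (x2, y2))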

  1. Exhaustive linearization for robust camera pose and focal length estimation.

    PubMed

    Penate-Sanchez, Adrian; Andrade-Cetto, Juan; Moreno-Noguer, Francesc

    2013-10-01

    We propose a novel approach for the estimation of the pose and focal length of a camera from a set of 3D-to-2D point correspondences. Our method compares favorably to competing approaches in that it is both more accurate than existing closed-form solutions, and faster and more accurate than iterative ones. Our approach is inspired by the EPnP algorithm, a recent O(n) solution for the calibrated case. Yet we show that considering the focal length as an additional unknown renders the linearization and relinearization techniques of the original approach no longer valid, especially with large amounts of noise. We present new methodologies to circumvent this limitation, termed exhaustive linearization and exhaustive relinearization, which perform a systematic exploration of the solution space in closed form. The method is evaluated on both real and synthetic data, and our results show that besides producing precise focal length estimates, the retrieved camera pose is almost as accurate as the one computed using EPnP, which assumes a calibrated camera.
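
    For reference, the calibrated EPnP case on which the method builds is available in OpenCV; the sketch below solves that calibrated problem from a handful of 3D-to-2D correspondences. The point coordinates and intrinsics here are illustrative placeholders, and the paper's joint focal-length estimation is not part of this call.

      import cv2
      import numpy as np

      object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                                [0, 0, 1], [1, 0, 1]], dtype=np.float64)
      image_points = np.array([[320, 240], [420, 238], [423, 331], [322, 334],
                               [318, 144], [419, 141]], dtype=np.float64)
      K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

      ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                                    flags=cv2.SOLVEPNP_EPNP)
      if ok:
          R, _ = cv2.Rodrigues(rvec)
          print("rotation:\n", R, "\ntranslation:", tvec.ravel())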

  2. A Survey on Model Based Approaches for 2D and 3D Visual Human Pose Recovery

    PubMed Central

    Perez-Sala, Xavier; Escalera, Sergio; Angulo, Cecilio; Gonzàlez, Jordi

    2014-01-01

    Human Pose Recovery has been studied in the field of Computer Vision for the last 40 years. Several approaches have been reported, and significant improvements have been obtained in both data representation and model design. However, the problem of Human Pose Recovery in uncontrolled environments is far from being solved. In this paper, we define a general taxonomy to group model based approaches for Human Pose Recovery, which is composed of five main modules: appearance, viewpoint, spatial relations, temporal consistence, and behavior. Subsequently, a methodological comparison is performed following the proposed taxonomy, evaluating current SoA approaches in the aforementioned five group categories. As a result of this comparison, we discuss the main advantages and drawbacks of the reviewed literature. PMID:24594613

  3. Motion estimation in the 3-D Gabor domain.

    PubMed

    Feng, Mu; Reed, Todd R

    2007-08-01

    Motion estimation methods can be broadly classified as being spatiotemporal or frequency domain in nature. The Gabor representation is an analysis framework providing localized frequency information. When applied to image sequences, the 3-D Gabor representation displays spatiotemporal/spatiotemporal-frequency (st/stf) information, enabling the application of robust frequency domain methods with adjustable spatiotemporal resolution. In this work, the 3-D Gabor representation is applied to motion analysis. We demonstrate that piecewise uniform translational motion can be estimated by using a uniform translation motion model in the st/stf domain. The resulting motion estimation method exhibits both good spatiotemporal resolution and substantial noise resistance compared to existing spatiotemporal methods. To form the basis of this model, we derive the signature of the translational motion in the 3-D Gabor domain. Finally, to obtain higher spatiotemporal resolution for more complex motions, a dense motion field estimation method is developed to find a motion estimate for every pixel in the sequence.
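
    The localized st/stf analysis rests on filtering the sequence with 3-D Gabor kernels, i.e., Gaussian-windowed spatiotemporal plane waves; for a uniform translation (vx, vy), signal energy concentrates on the plane ft = -(vx*fx + vy*fy) in the frequency domain, which is the kind of translational signature the paper exploits. The kernel construction below is a generic illustration with made-up parameters, not the authors' filter bank.

      import numpy as np

      def gabor_3d(size, sigma, freq):
          """Complex 3-D Gabor kernel: Gaussian envelope times a plane wave.

          size: odd kernel side length; sigma: envelope std (samples);
          freq: (fx, fy, ft) centre frequency in cycles per sample.
          """
          r = size // 2
          x, y, t = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1),
                                np.arange(-r, r + 1), indexing="ij")
          envelope = np.exp(-(x**2 + y**2 + t**2) / (2.0 * sigma**2))
          carrier = np.exp(2j * np.pi * (freq[0] * x + freq[1] * y + freq[2] * t))
          return envelope * carrier

      kernel = gabor_3d(size=9, sigma=2.0, freq=(0.15, 0.0, 0.1))
      print(kernel.shape)  # (9, 9, 9); convolve with the sequence for st/stf responses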

  4. Spherical blurred shape model for 3-D object and pose recognition: quantitative analysis and HCI applications in smart environments.

    PubMed

    Lopes, Oscar; Reyes, Miguel; Escalera, Sergio; Gonzàlez, Jordi

    2014-12-01

    The use of depth maps is of increasing interest after the advent of cheap multisensor devices based on structured light, such as Kinect. In this context, there is a strong need for powerful 3-D shape descriptors able to generate rich object representations. Although several 3-D descriptors have already been proposed in the literature, the search for discriminative and computationally efficient descriptors is still an open issue. In this paper, we propose a novel point cloud descriptor called spherical blurred shape model (SBSM) that successfully encodes the structure density and local variabilities of an object based on shape voxel distances and a neighborhood propagation strategy. The proposed SBSM is proven to be rotation and scale invariant, robust to noise and occlusions, highly discriminative for multiple categories of complex objects like the human hand, and computationally efficient since the SBSM complexity is linear in the number of object voxels. Experimental evaluation on public depth multiclass object data, 3-D facial expression data, and a novel hand pose data set shows significant performance improvements in relation to state-of-the-art approaches. Moreover, the effectiveness of the proposal is also proven for object spotting in 3-D scenes and for real-time automatic hand pose recognition in human computer interaction scenarios.
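
    To convey the flavour of distance-based shape encoding, the toy descriptor below histograms point density over concentric shells around the centroid after scale normalisation. It is deliberately much simpler than the SBSM (no angular bins, no neighbourhood propagation) and uses a random placeholder cloud.

      import numpy as np

      def radial_shell_descriptor(points, n_shells=8):
          """Histogram of point density over concentric shells (toy sketch)."""
          pts = np.asarray(points, dtype=float)
          centred = pts - pts.mean(axis=0)
          radii = np.linalg.norm(centred, axis=1)
          radii /= (radii.max() + 1e-9)            # scale invariance
          hist, _ = np.histogram(radii, bins=n_shells, range=(0.0, 1.0))
          return hist / hist.sum()                 # density per shell

      cloud = np.random.randn(2000, 3)             # placeholder point cloud
      print(radial_shell_descriptor(cloud))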

  5. Nonlinear Synchronization for Automatic Learning of 3D Pose Variability in Human Motion Sequences

    NASA Astrophysics Data System (ADS)

    Mozerov, M.; Rius, I.; Roca, X.; González, J.

    2009-12-01

    A dense matching algorithm that solves the problem of synchronizing prerecorded human motion sequences, which show different speeds and accelerations, is proposed. The approach is based on minimization of MRF energy and solves the problem by using Dynamic Programming. Additionally, an optimal sequence is automatically selected from the input dataset to be a time-scale pattern for all other sequences. The paper utilizes an action specific model which automatically learns the variability of 3D human postures observed in a set of training sequences. The model is trained using the public CMU motion capture dataset for the walking action, and a mean walking performance is automatically learnt. Additionally, statistics about the observed variability of the postures and motion direction are also computed at each time step. The synchronized motion sequences are used to learn a model of human motion for action recognition and full-body tracking purposes.

  6. Head Pose Estimation Without Manual Initialization

    DTIC Science & Technology

    2000-01-01

    Only reference fragments from this report are indexed: a citation to real-time face tracking and gesture recognition (Proc. International Joint Conference on Artificial Intelligence, vol. 2, pp. 1525-1530, August 1997) and a citation to illumination-insensitive head orientation estimation (Proc. Int'l Conf. on Face and Gesture Recognition, Grenoble, France, March 2000).

  7. A Model-Based 3D Template Matching Technique for Pose Acquisition of an Uncooperative Space Object

    PubMed Central

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2015-01-01

    This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is presented. PMID:25785309

  8. A model-based 3D template matching technique for pose acquisition of an uncooperative space object.

    PubMed

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2015-03-16

    This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is presented.

  9. The effect of pose variability and repeated reliability of segmental centres of mass acquisition when using 3D photonic scanning.

    PubMed

    Chiu, Chuang-Yuan; Pease, David L; Sanders, Ross H

    2016-12-01

    Three-dimensional (3D) photonic scanning is an emerging technique for acquiring accurate body segment parameter data. This study established the repeated reliability of segmental centres of mass when using 3D photonic scanning (3DPS). Seventeen male participants were scanned twice by a 3D whole-body laser scanner. The same operators conducted the reconstruction and segmentation processes to obtain segmental meshes for calculating the segmental centres of mass. The segmental centres of mass obtained from repeated 3DPS were compared by relative technical error of measurement (TEM). Hypothesis tests were conducted to determine the size of change required for each segment to be considered a true variation. The relative TEMs for all segments were less than 5%. Relative changes in centres of mass of ±1.5% can be detected for most segments (p < 0.05). The arm segments, which are difficult to keep in the same scanning pose, generated more error than the other segments. Practitioner Summary: Three-dimensional photonic scanning is an emerging technique for acquiring body segment parameter data. This study established the repeated reliability of segmental centres of mass when using 3D photonic scanning and emphasised that the error for arm segments needs to be considered when using this technique to acquire centres of mass.
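
    For readers unfamiliar with the metric, the relative technical error of measurement for two repeated trials is commonly computed as TEM = sqrt(sum(d_i^2) / (2n)) divided by the grand mean; the snippet below applies that standard two-trial formula to placeholder numbers (it is assumed, not verified, that the paper uses exactly this form).

      import numpy as np

      def relative_tem(trial_1, trial_2):
          """Relative TEM (%) for two repeated measurements of n items."""
          m1, m2 = np.asarray(trial_1, float), np.asarray(trial_2, float)
          d = m1 - m2
          tem = np.sqrt(np.sum(d**2) / (2 * len(d)))
          return 100.0 * tem / np.mean(np.concatenate([m1, m2]))

      # placeholder repeated centre-of-mass locations (cm) for one segment
      scan1 = [52.1, 48.7, 50.3, 49.9]
      scan2 = [51.8, 49.2, 50.6, 49.5]
      print(f"relative TEM: {relative_tem(scan1, scan2):.2f}%")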

  10. Human Pose Estimation from Monocular Images: A Comprehensive Survey

    PubMed Central

    Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi

    2016-01-01

    Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003

  11. Robust head pose estimation via supervised manifold learning.

    PubMed

    Wang, Chao; Song, Xubo

    2014-05-01

    Head poses can be automatically estimated using manifold learning algorithms, under the assumption that, with pose being the only variable, the face images should lie on a smooth and low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in the image, background clutter, facial expression, and illumination. To address the problem, we propose to incorporate supervised information (pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation and projection learning. For the first two stages, we redefine the inter-point distance for neighborhood construction, as well as the graph weight, by constraining them with the pose angle information. For the third stage, we present a supervised neighborhood-based linear feature transformation algorithm that keeps data points with similar pose angles close together and data points with dissimilar pose angles far apart. The experimental results show that our method has higher estimation accuracy than other state-of-the-art algorithms and is robust to identity and illumination variations.
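
    The first stage can be pictured as building a neighbourhood graph whose distances are inflated when two training images have very different pose angles, so that neighbours are both visually similar and pose-consistent. The sketch below illustrates that idea with an assumed multiplicative penalty; the paper's exact weighting is not reproduced.

      import numpy as np

      def supervised_neighbors(features, pose_angles, k=5, angle_scale=10.0):
          """k nearest neighbours under a pose-aware distance (illustrative)."""
          X = np.asarray(features, float)
          a = np.asarray(pose_angles, float)
          d_img = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
          d_pose = np.abs(a[:, None] - a[None, :])
          d = d_img * (1.0 + d_pose / angle_scale)     # penalise pose differences
          np.fill_diagonal(d, np.inf)
          return np.argsort(d, axis=1)[:, :k]

      feats = np.random.rand(20, 64)                   # placeholder appearance features
      angles = np.random.uniform(-90, 90, 20)          # yaw labels of training samples
      print(supervised_neighbors(feats, angles).shape) # (20, 5)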

  12. A pose estimation method for unmanned ground vehicles in GPS denied environments

    NASA Astrophysics Data System (ADS)

    Tamjidi, Amirhossein; Ye, Cang

    2012-06-01

    This paper presents a pose estimation method based on the 1-Point RANSAC EKF (Extended Kalman Filter) framework. The method fuses the depth data from a LIDAR and the visual data from a monocular camera to estimate the pose of an Unmanned Ground Vehicle (UGV) in a GPS-denied environment. Its estimation framework continuously updates the vehicle's 6D pose state and temporary estimates of the extracted visual features' 3D positions. In contrast to conventional EKF-SLAM (Simultaneous Localization And Mapping) frameworks, the proposed method discards feature estimates from the extended state vector once they are no longer observed for several steps. As a result, the extended state vector always maintains a reasonable size that is suitable for online calculation. The fusion of laser and visual data is performed both in the feature initialization part of the EKF-SLAM process and in the motion prediction stage. A RANSAC pose calculation procedure is devised to produce the pose estimate for the motion model. The proposed method has been successfully tested on the Ford Campus LIDAR-Vision dataset. The results are compared with the ground truth data of the dataset, and the estimation error is ~1.9% of the path length.
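
    The core of such a filter is the standard predict/update cycle; the skeleton below shows one generic EKF step (the paper's state additionally holds temporary 3D feature estimates, and the 1-point RANSAC inlier selection is not shown).

      import numpy as np

      def ekf_step(x, P, f, F, h, H, z, Q, R):
          """One generic EKF cycle. f, h: process and measurement functions;
          F, H: their Jacobians evaluated at the current estimate."""
          x_pred = f(x)
          P_pred = F @ P @ F.T + Q                      # predict
          y = z - h(x_pred)                             # innovation
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
          x_new = x_pred + K @ y                        # update
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # trivial 1-D sanity check with identity models
      x, P = np.array([0.0]), np.eye(1)
      x, P = ekf_step(x, P, lambda s: s, np.eye(1), lambda s: s, np.eye(1),
                      z=np.array([1.0]), Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
      print(x, P)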

  13. Unsupervised partial volume estimation using 3D and statistical priors

    NASA Astrophysics Data System (ADS)

    Tardif, Pierre M.

    2001-07-01

    Our main objective is to compute the volume of interest in images from magnetic resonance imaging (MRI). We suggest a method based on maximum a posteriori estimation. Using texture models, we propose a new partial volume determination. We model tissues using generalized Gaussian distributions fitted from a mixture of their gray levels and texture information. Texture information relies on estimation errors from multiresolution and multispectral autoregressive models. A uniform distribution handles large estimation errors when dealing with unknown tissues. An initial segmentation, needed by the multiresolution segmentation deterministic relaxation algorithm, is found using an anatomical atlas. To model the a priori information, we use a full 3-D extension of Markov random fields. Our 3-D extension is straightforward, easily implemented, and includes single-label probabilities. Using the initial segmentation map and initial tissue models, iterative updates are made to the segmentation map and tissue models. Updating the tissue models removes field inhomogeneities. Partial volumes are computed from the final segmentation map and tissue models. Preliminary results are encouraging.

  14. Onboard camera pose estimation in augmented reality space for direct visual navigation

    NASA Astrophysics Data System (ADS)

    Hu, Zhencheng; Uchimura, Keiichi

    2003-05-01

    This paper presents a dynamic solution of the registration problem for on-road navigation applications via a 3D-2D parameterized model matching algorithm. Traditional camera three-dimensional (3D) position and pose estimation algorithms typically employ fixed, known-structure models as well as depth information to obtain the 3D-2D correlations, which are, however, unavailable for on-road navigation applications since there are no fixed models in the general road scene. Using the constraints of road structure and on-road navigation features, this paper presents a road shape modeling algorithm based on a 2D digital road map. Dynamically generated multi-lane road shape models are matched against the real road scene to estimate the camera's 3D position and pose. Our algorithms successfully simplify the 3D-2D correlation problem to 2D-2D road model matching on the projective image. The algorithms proposed in this paper are validated with experimental results from real road tests under different conditions and road types.

  15. Vision based object pose estimation for mobile robots

    NASA Technical Reports Server (NTRS)

    Wu, Annie; Bidlack, Clint; Katkere, Arun; Feague, Roy; Weymouth, Terry

    1994-01-01

    Mobile robot navigation using visual sensors requires that a robot be able to detect landmarks and obtain pose information from a camera image. This paper presents a vision system for finding man-made markers of known size and calculating the pose of these markers. The algorithm detects and identifies the markers using a weighted pattern matching template. Geometric constraints are then used to calculate the position of the markers relative to the robot. The selection of geometric constraints comes from the typical pose of most man-made signs, such as the sign standing vertical and having dimensions of known size. This system has been tested successfully on a wide range of real images. Marker detection is reliable, even in cluttered environments; under certain marker orientations, estimation of the orientation has proven accurate to within 2 degrees, and distance estimation to within 0.3 meters.
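
    The distance part of such geometric constraints reduces, under a pinhole model, to range = focal_length * real_size / imaged_size; the one-liner below illustrates that relation with made-up numbers (orientation recovery, which the paper also performs, is not covered).

      def marker_distance(focal_px, real_height_m, pixel_height):
          """Pinhole-model range to a marker of known physical size."""
          return focal_px * real_height_m / pixel_height

      # e.g. 800 px focal length, a 0.30 m tall sign imaged 60 px tall -> 4 m away
      print(marker_distance(800.0, 0.30, 60.0))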

  16. UAV to UAV Target Detection and Pose Estimation

    DTIC Science & Technology

    2012-06-01

    Evaluated using real-life data sets. Keywords: UAV detection, pose estimation, computer vision, obstacle avoidance, edge detection, morphological filtering. (Only figure-list fragments of the report are indexed: Light Beacon Design; Basic Morphological Operations; Advanced Morphological Operations; Frame Grabbing.)

  17. Robust automatic measurement of 3D scanned models for the human body fat estimation.

    PubMed

    Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo

    2015-03-01

    In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans, independently of pose and robust against topological noise. It is based on an automatic segmentation of body parts exploiting curve skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate the body trunk and limbs, detect their directions, and compute parameters like volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, which do not require precise localization of anatomical landmarks, demonstrate a good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.

  18. Depth estimation from multiple coded apertures for 3D interaction

    NASA Astrophysics Data System (ADS)

    Suh, Sungjoo; Choi, Changkyu; Park, Dusik

    2013-09-01

    In this paper, we propose a novel depth estimation method from multiple coded apertures for 3D interaction. A flat panel display is transformed into lens-less multi-view cameras which consist of multiple coded apertures. The sensor panel behind the display captures the scene in front of the display through the imaging pattern of the modified uniformly redundant arrays (MURA) on the display panel. To estimate the depth of an object in the scene, we first generate a stack of synthetically refocused images at various distances by using the shifting and averaging approach for the captured coded images. And then, an initial depth map is obtained by applying a focus operator to a stack of the refocused images for each pixel. Finally, the depth is refined by fitting a parametric focus model to the response curves near the initial depth estimates. To demonstrate the effectiveness of the proposed algorithm, we construct an imaging system to capture the scene in front of the display. The system consists of a display screen and an x-ray detector without a scintillator layer so as to act as a visible sensor panel. Experimental results confirm that the proposed method accurately determines the depth of an object including a human hand in front of the display by capturing multiple MURA coded images, generating refocused images at different depth levels, and refining the initial depth estimates.
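
    The refocusing and focus-measure steps can be pictured as follows: for each candidate depth, every view is shifted by its depth-dependent disparity and the views are averaged, after which a sharpness score (here Laplacian energy) is evaluated; the depth with the sharpest refocused image wins. The sketch below assumes simple translational disparities and omits the MURA decoding and the per-pixel parametric refinement described in the paper.

      import numpy as np
      from scipy.ndimage import shift, laplace

      def depth_from_focus(views, offsets, depths):
          """views: 2-D arrays; offsets: per-view (dx, dy) refocusing baseline at
          unit depth; depths: candidate depth levels (arbitrary units)."""
          scores = []
          for d in depths:
              refocused = np.mean([shift(v, (oy / d, ox / d), order=1)
                                   for v, (ox, oy) in zip(views, offsets)], axis=0)
              scores.append(np.mean(laplace(refocused) ** 2))   # sharpness score
          return depths[int(np.argmax(scores))]

      # tiny synthetic check: a bright square displaced by +-4 px between two views
      base = np.zeros((64, 64)); base[28:36, 28:36] = 1.0
      views = [shift(base, (0, 4)), shift(base, (0, -4))]
      offsets = [(-4, 0), (4, 0)]         # undoes the displacement at depth 1
      print(depth_from_focus(views, offsets, depths=np.array([0.5, 1.0, 2.0])))  # ~1.0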

  19. Joint Head Pose/Soft Label Estimation for Human Recognition In-The-Wild.

    PubMed

    Proenca, Hugo; Neves, Joao C; Barra, Silvio; Marques, Tiago; Moreno, Juan C

    2016-12-01

    Soft biometrics have been emerging to complement other traits and are particularly useful for poor quality data. In this paper, we propose an efficient algorithm to estimate human head poses and to infer soft biometric labels based on the 3D morphology of the human head. Starting by considering a set of pose hypotheses, we use a learning set of head shapes synthesized from anthropometric surveys to derive a set of 3D head centroids that constitutes a metric space. Next, representing queries by sets of 2D head landmarks, we use projective geometry techniques to rank efficiently the joint 3D head centroids/pose hypotheses according to their likelihood of matching each query. The rationale is that the most likely hypotheses are sufficiently close to the query, so a good solution can be found by convex energy minimization techniques. Once a solution has been found, the 3D head centroid and the query are assumed to have similar morphology, yielding the soft label. Our experiments point toward the usefulness of the proposed solution, which can improve the effectiveness of face recognizers and can also be used as a privacy-preserving solution for biometric recognition in public environments.

  20. Joint Head Pose / Soft Label Estimation for Human Recognition In-The-Wild.

    PubMed

    Proenca, Hugo; Neves, Joao; Marques, Tiago; Barra, Silvio; Moreno, Juan

    2016-01-27

    Soft biometrics have been emerging to complement other traits and are particularly useful for poor quality data. In this paper, we propose an efficient algorithm to estimate human head poses and to infer soft biometric labels based on the 3D morphology of the human head. Starting by considering a set of pose hypotheses, we use a learning set of head shapes synthesized from anthropometric surveys to derive a set of 3D head centroids that constitutes a metric space. Next, representing queries by sets of 2D head landmarks, we use projective geometry techniques to rank efficiently the joint 3D head centroids / pose hypotheses according to their likelihood of matching each query. The rationale is that the most likely hypotheses are sufficiently close to the query, so a good solution can be found by convex energy minimization techniques. Once a solution has been found, the 3D head centroid and the query are assumed to have similar morphology, yielding the soft label. Our experiments point toward the usefulness of the proposed solution, which can improve the effectiveness of face recognizers and can also be used as a privacy-preserving solution for biometric recognition in public environments.

  1. 2-D-3-D frequency registration using a low-dose radiographic system for knee motion estimation.

    PubMed

    Jerbi, Taha; Burdin, Valerie; Leboucher, Julien; Stindel, Eric; Roux, Christian

    2013-03-01

    In this paper, a new method is presented to study the feasibility of estimating the pose and position of bone structures using a low-dose radiographic system, EOS (designed by EOS Imaging). The method is based on a 2-D-3-D registration of EOS bi-planar X-ray images with an EOS 3-D reconstruction. This technique is well suited to such an application thanks to the EOS system's ability to simultaneously acquire frontal and sagittal radiographs, and also to produce a 3-D surface reconstruction with its attached software. In this paper, the pose and position of a bone in the radiographs are estimated through the link between the 3-D and 2-D data. This relationship is established in the frequency domain using the Fourier central slice theorem. To estimate the pose and position of the bone, we define a distance between the 3-D data and the radiographs, and use an iterative optimization approach to converge toward the best estimate. In this paper, we give the mathematical details of the method. We also show the experimental protocol and the results, which validate our approach.
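
    The frequency-domain link the method relies on is the projection-slice theorem: the 1-D Fourier transform of a parallel projection equals a central slice of the higher-dimensional Fourier transform. A quick 2-D numerical check of that identity (summing along one axis and comparing with the ky = 0 line of the 2-D spectrum) is given below; the EOS geometry itself is not modelled here.

      import numpy as np

      rng = np.random.default_rng(0)
      img = rng.random((64, 64))

      projection = img.sum(axis=0)             # line integrals along y
      slice_from_projection = np.fft.fft(projection)
      central_slice = np.fft.fft2(img)[0, :]   # ky = 0 row of the 2-D spectrum

      print(np.allclose(slice_from_projection, central_slice))  # True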

  2. Shape recognition and pose estimation for mobile Augmented Reality.

    PubMed

    Hagbi, Nate; Bergig, Oriel; El-Sana, Jihad; Billinghurst, Mark

    2011-10-01

    Nestor is a real-time recognition and camera pose estimation system for planar shapes. The system allows shapes that carry contextual meanings for humans to be used as Augmented Reality (AR) tracking targets. The user can teach the system new shapes in real time. New shapes can be shown to the system frontally, or they can be automatically rectified according to previously learned shapes. Shapes can be automatically assigned virtual content by classification according to a shape class library. Nestor performs shape recognition by analyzing contour structures and generating projective-invariant signatures from their concavities. The concavities are further used to extract features for pose estimation and tracking. Pose refinement is carried out by minimizing the reprojection error between sample points on each image contour and its library counterpart. Sample points are matched by evolving an active contour in real time. Our experiments show that the system provides stable and accurate registration, and runs at interactive frame rates on a Nokia N95 mobile phone.

  3. 6-DOF Pose Estimation of a Robotic Navigation Aid by Tracking Visual and Geometric Features.

    PubMed

    Ye, Cang; Hong, Soonhac; Tamjidi, Amirhossein

    2015-10-01

    This paper presents a 6-DOF Pose Estimation (PE) method for a Robotic Navigation Aid (RNA) for the visually impaired. The RNA uses a single 3D camera for PE and object detection. The proposed method processes the camera's intensity and range data to estimate the camera's egomotion, which is then used by an Extended Kalman Filter (EKF) as the motion model to track a set of visual features for PE. A RANSAC process is employed in the EKF to identify inliers from the visual feature correspondences between two image frames. Only the inliers are used to update the EKF's state. The EKF integrates the egomotion into the camera's pose in the world coordinate system. To retain the EKF's consistency, the distance between the camera and the floor plane (extracted from the range data) is used by the EKF as the observation of the camera's z coordinate. Experimental results demonstrate that the proposed method produces accurate pose estimates for positioning the RNA in indoor environments. Based on the PE method, a wayfinding system is developed for localization of the RNA in a home environment. The system uses the estimated pose and the floorplan to locate the RNA user in the home environment and announces the points of interest and navigational commands to the user through a speech interface.

  4. 3D measurement and camera attitude estimation method based on trifocal tensor

    NASA Astrophysics Data System (ADS)

    Chen, Shengyi; Liu, Haibo; Yao, Linshen; Yu, Qifeng

    2016-11-01

    To simultaneously perform 3D measurement and camera attitude estimation, an efficient and robust method based on the trifocal tensor is proposed in this paper, which only employs the intrinsic parameters and positions of three cameras. The initial trifocal tensor is obtained using a heteroscedastic errors-in-variables (HEIV) estimator, and the initial relative poses of the three cameras are acquired by decomposing the tensor. Further, the initial attitude of the cameras is obtained with knowledge of the three cameras' positions. Then the camera attitude and the image positions of the points of interest are optimized according to the trifocal tensor constraint with the HEIV method. Finally, the spatial positions of the points are obtained by using an intersection measurement method. Both simulation and real-image experiment results suggest that the proposed method achieves the same precision as the Bundle Adjustment (BA) method but is more efficient.

  5. Surgical fiducial segmentation and tracking for pose estimation based on ultrasound B-mode images.

    PubMed

    Lei Chen; Kuo, Nathanael; Aalamifar, Fereshteh; Narrow, David; Coon, Devin; Prince, Jerry; Boctor, Emad M

    2016-08-01

    Doppler ultrasound is a non-invasive diagnostic tool for the quantitative measurement of blood flow. However, given that it provides velocity data that depend on the location and angle of measurement, repeat measurements to detect problems over time may require an expert to return to the same location. We therefore developed an image-guidance system based on ultrasound B-mode images that enables an inexperienced user to position the ultrasound probe at the same site repeatedly in order to acquire a comparable time series of Doppler readings. The system utilizes a bioresorbable fiducial and complementary software composed of fiducial detection, key point tracking, probe pose estimation, and graphical user interface (GUI) modules. The fiducial is an echogenic marker that is implanted at the surgical site and can be detected and tracked during ultrasound B-mode screening. The key points on the marker can next be used to determine the pose of the ultrasound probe with respect to the marker. The 3D representation of the probe with its position and orientation is then displayed in the GUI for user guidance. The fiducial detection has been tested on data sets collected from three animal studies. The pose estimation algorithm was validated on five data sets collected by a UR5 robot. We tested the system on a plastisol phantom and showed that it can detect and track the fiducial marker while displaying the probe pose in real time.

  6. A combined vision-inertial fusion approach for 6-DoF object pose estimation

    NASA Astrophysics Data System (ADS)

    Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.

    2015-02-01

    The estimation of the 3D position and orientation of moving objects (`pose' estimation) is a critical process for many applications in robotics, computer vision or mobile services. Although major research efforts have been carried out to design accurate, fast and robust indoor pose estimation systems, it remains as an open challenge to provide a low-cost, easy to deploy and reliable solution. Addressing this issue, this paper describes a hybrid approach for 6 degrees of freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located to have the object to be tracked visible most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy, while satisfactorily dealing with the real-time constraints.

  7. Pose Estimation from Line Correspondences: A Complete Analysis and A Series of Solutions.

    PubMed

    Xu, Chi; Zhang, Lilian; Cheng, Li; Koch, Reinhard

    2016-06-20

    In this paper we deal with the camera pose estimation problem from a set of 2D/3D line correspondences, which is also known as PnL (Perspective-n-Line) problem. We carry out our study by comparing PnL with the well-studied PnP (Perspective-n-Point) problem, and our contributions are threefold: (1) We provide a complete 3D configuration analysis for P3L, which includes the well-known P3P problem as well as several existing analyses as special cases. (2) By exploring the similarity between PnL and PnP, we propose a new subset-based PnL approach as well as a series of linear-formulation-based PnL approaches inspired by their PnP counterparts. (3) The proposed linear-formulation-based methods can be easily extended to deal with the line and point features simultaneously.

  8. Augmented saliency model using automatic 3D head pose detection and learned gaze following in natural scenes.

    PubMed

    Parks, Daniel; Borji, Ali; Itti, Laurent

    2015-11-01

    Previous studies have shown that gaze direction of actors in a scene influences eye movements of passive observers during free-viewing (Castelhano, Wieth, & Henderson, 2007; Borji, Parks, & Itti, 2014). However, no computational model has been proposed to combine bottom-up saliency with actor's head pose and gaze direction for predicting where observers look. Here, we first learn probability maps that predict fixations leaving head regions (gaze following fixations), as well as fixations on head regions (head fixations), both dependent on the actor's head size and pose angle. We then learn a combination of gaze following, head region, and bottom-up saliency maps with a Markov chain composed of head region and non-head region states. This simple structure allows us to inspect the model and make comments about the nature of eye movements originating from heads as opposed to other regions. Here, we assume perfect knowledge of actor head pose direction (from an oracle). The combined model, which we call the Dynamic Weighting of Cues model (DWOC), explains observers' fixations significantly better than each of the constituent components. Finally, in a fully automatic combined model, we replace the oracle head pose direction data with detections from a computer vision model of head pose. Using these (imperfect) automated detections, we again find that the combined model significantly outperforms its individual components. Our work extends the engineering and scientific applications of saliency models and helps better understand mechanisms of visual attention.

  9. Robust endoscopic pose estimation for intraoperative organ-mosaicking

    NASA Astrophysics Data System (ADS)

    Reichard, Daniel; Bodenstedt, Sebastian; Suwelack, Stefan; Wagner, Martin; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie

    2016-03-01

    The number of minimally invasive procedures is growing every year. These procedures are highly complex and very demanding for the surgeons. It is therefore important to provide intraoperative assistance to alleviate these difficulties. For most computer-assistance systems, like visualizing target structures with augmented reality, a registration step is required to map preoperative data (e.g. CT images) to the ongoing intraoperative scene. Without additional hardware, the (stereo-) endoscope is the prime intraoperative data source and with it, stereo reconstruction methods can be used to obtain 3D models from target structures. To link reconstructed parts from different frames (mosaicking), the endoscope movement has to be known. In this paper, we present a camera tracking method that uses dense depth and feature registration which are combined with a Kalman Filter scheme. It provides a robust position estimation that shows promising results in ex vivo and in silico experiments.

  10. Pose Estimation for Augmented Reality: A Hands-On Survey.

    PubMed

    Marchand, Eric; Uchiyama, Hideaki; Spindler, Fabien

    2016-12-01

    Augmented reality (AR) allows virtual objects to be seamlessly inserted into an image sequence. In order to accomplish this goal, it is important that synthetic elements are rendered and aligned in the scene in an accurate and visually acceptable way. The solution of this problem can be related to a pose estimation or, equivalently, a camera localization process. This paper aims at presenting a brief but almost self-contained introduction to the most important approaches dedicated to vision-based camera localization, along with a survey of several extensions proposed in recent years. For most of the presented approaches, we also provide links to the code of short examples. This should allow readers to easily bridge the gap between theoretical aspects and practical implementations.

  11. Fiducial Marker Detection and Pose Estimation From LIDAR Range Data

    DTIC Science & Technology

    2010-03-01

    Only text fragments from this report are indexed: a LidarSimulation renders a 3D graphical scene in the Delta3D game engine (accessed through its Python bindings), and a PrimitiveBuffer class provides several buffers for temporarily storing data; the simulation produces point cloud data sets according to the actual VLS laser firing parameters. Application areas mentioned include terrain, measurement of structures, documentation and reverse engineering of public infrastructure, and robotic and autonomous systems for navigation.

  12. Hand Pose Estimation by Fusion of Inertial and Magnetic Sensing Aided by a Permanent Magnet.

    PubMed

    Kortier, Henk G; Antonsson, Jacob; Schepers, H Martin; Gustafsson, Fredrik; Veltink, Peter H

    2015-09-01

    Tracking human body motions using inertial sensors has become a well-accepted method in ambulatory applications since the subject is not confined to a lab-bounded volume. However, a major drawback is the inability to estimate relative body positions over time, because inertial sensor information only allows position tracking through strapdown integration and does not provide any information about relative positions. In addition, strapdown integration inherently results in drift of the estimated position over time. We propose a novel method in which a permanent magnet combined with 3-D magnetometers and 3-D inertial sensors is used to estimate the global trunk orientation and the relative pose of the hand with respect to the trunk. An Extended Kalman Filter is presented to fuse estimates obtained from inertial sensors with magnetic updates such that the position and orientation between the human hand and trunk, as well as the global trunk orientation, can be estimated robustly. This has been demonstrated in multiple experiments in which various hand tasks were performed. The most complex task, in which simultaneous movements of both trunk and hand were performed, resulted in an average rms position difference with an optical reference system of 19.7 ± 2.2 mm, whereas the relative trunk-hand and global trunk orientation errors were 2.3 ± 0.9 and 8.6 ± 8.7 deg, respectively.

  13. 6-DOF Pose Estimation of a Robotic Navigation Aid by Tracking Visual and Geometric Features

    PubMed Central

    Ye, Cang; Hong, Soonhac; Tamjidi, Amirhossein

    2015-01-01

    This paper presents a 6-DOF Pose Estimation (PE) method for a Robotic Navigation Aid (RNA) for the visually impaired. The RNA uses a single 3D camera for PE and object detection. The proposed method processes the camera's intensity and range data to estimate the camera's egomotion, which is then used by an Extended Kalman Filter (EKF) as the motion model to track a set of visual features for PE. A RANSAC process is employed in the EKF to identify inliers from the visual feature correspondences between two image frames. Only the inliers are used to update the EKF's state. The EKF integrates the egomotion into the camera's pose in the world coordinate system. To retain the EKF's consistency, the distance between the camera and the floor plane (extracted from the range data) is used by the EKF as the observation of the camera's z coordinate. Experimental results demonstrate that the proposed method produces accurate pose estimates for positioning the RNA in indoor environments. Based on the PE method, a wayfinding system is developed for localization of the RNA in a home environment. The system uses the estimated pose and the floorplan to locate the RNA user in the home environment and announces the points of interest and navigational commands to the user through a speech interface. Note to Practitioners: This work was motivated by the limitations of the existing navigation technology for the visually impaired. Most of the existing methods use a point/line measurement sensor for indoor object detection. Therefore, they lack capability in detecting 3D objects and positioning a blind traveler. Stereovision has been used in recent research. However, it cannot provide reliable depth data for object detection. Also, it tends to produce lower localization accuracy because its depth measurement error increases quadratically with the true distance. This paper suggests a new approach for navigating a blind traveler. The method uses a single 3D time

  14. Eigenvalue Contributon Estimator for Sensitivity Calculations with TSUNAMI-3D

    SciTech Connect

    Rearden, Bradley T; Williams, Mark L

    2007-01-01

    Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.

  15. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially for error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structures (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  16. Pose-invariant face-head identification using a bank of neural networks and the 3D neck reference point

    NASA Astrophysics Data System (ADS)

    Hild, Michael; Yoshida, Kazunobu; Hashimoto, Motonobu

    2003-03-01

    A method for recognizing faces in relatively unconstrained environments, such as offices, is described. It can recognize faces occurring over an extended range of orientations and distances relative to the camera. As the pattern recognition mechanism, a bank of small neural networks of the multilayer perceptron type is used, where each perceptron has the task of recognizing only a single person's face. The perceptrons are trained with a set of nine face images representing the nine main facial orientations of the person to be identified, and a set of face images from various other persons. The center of the neck is determined as the reference point for face position unification. Geometric normalization and reference point determination utilize 3-D data point measurements obtained with a stereo camera. The system achieves a recognition rate of about 95%.

  17. Image-based aircraft pose estimation: a comparison of simulations and real-world data

    NASA Astrophysics Data System (ADS)

    Breuers, Marcel G. J.; de Reus, Nico

    2001-10-01

    The problem of estimating aircraft pose information from monocular image data is considered using a Fourier descriptor based algorithm. The dependence of pose estimation accuracy on image resolution and aspect angle is investigated through simulations using sets of synthetic aircraft images. Further evaluation shows that good pose estimation accuracy can be obtained on real-world image sequences.

  18. Accuracy and robustness of Kinect pose estimation in the context of coaching of elderly population.

    PubMed

    Obdrzálek, Stepán; Kurillo, Gregorij; Ofli, Ferda; Bajcsy, Ruzena; Seto, Edmund; Jimison, Holly; Pavel, Michael

    2012-01-01

    The Microsoft Kinect camera is becoming increasingly popular in many areas aside from entertainment, including human activity monitoring and rehabilitation. Many people, however, fail to consider the reliability and accuracy of the Kinect human pose estimation when they depend on it as a measuring system. In this paper we compare the Kinect pose estimation (skeletonization) with more established techniques for pose estimation from motion capture data, examining the accuracy of joint localization and robustness of pose estimation with respect to the orientation and occlusions. We have evaluated six physical exercises aimed at coaching of elderly population. Experimental results present pose estimation accuracy rates and corresponding error bounds for the Kinect system.

  19. Integration of a Generalised Building Model Into the Pose Estimation of Uas Images

    NASA Astrophysics Data System (ADS)

    Unger, J.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    A hybrid bundle adjustment is presented that allows for the integration of a generalised building model into the pose estimation of image sequences. These images are captured by an Unmanned Aerial System (UAS) equipped with a camera flying in between the buildings. The relation between the building model and the images is described by distances between the object coordinates of the tie points and building model planes. Relations are found by a simple 3D distance criterion and are modelled as fictitious observations in a Gauss-Markov adjustment. The coordinates of model vertices are part of the adjustment as directly observed unknowns which allows for changes in the model. Results of first experiments using a synthetic and a real image sequence demonstrate improvements of the image orientation in comparison to an adjustment without the building model, but also reveal limitations of the current state of the method.
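
    The fictitious observations can be thought of as extra residuals that pull tie-point coordinates toward the nearest building-model plane, stacked together with the ordinary observations in the adjustment. The sketch below illustrates only that residual structure with an assumed plane and weight; cameras, the 3D distance criterion for assigning points to planes, and the full Gauss-Markov model are omitted.

      import numpy as np
      from scipy.optimize import least_squares

      plane_n, plane_d = np.array([0.0, 0.0, 1.0]), 0.0         # stand-in facade plane z = 0
      measured = np.array([[1.0, 2.0, 0.3], [4.0, 1.0, -0.2]])  # noisy tie points

      def residuals(x):
          pts = x.reshape(-1, 3)
          r_meas = (pts - measured).ravel()                 # regular observations
          r_plane = pts @ plane_n + plane_d                 # fictitious plane distances
          return np.concatenate([r_meas, 0.5 * r_plane])    # 0.5 ~ relative weight

      sol = least_squares(residuals, measured.ravel())
      print(sol.x.reshape(-1, 3))   # points pulled toward the facade plane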

  20. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.

  1. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    DTIC Science & Technology

    2015-06-01

    Only text fragments from this Naval Postgraduate School master's thesis (approved for public release; distribution is unlimited) are indexed: the mock-up was printed using the Fortus 400mc 3D rapid-prototyping printer of the NPS Space Systems Academic Group, while the internal structure is made of aluminum.

  2. Impact of Building Heights on 3d Urban Density Estimation from Spaceborne Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, the estimated heights may include errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must therefore be determined both across and within a study area. In our research, accurate planar information from city authorities is used as reference data during 3D density estimation, to avoid the errors inherent to planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). At the local level, land use blocks with low FAR values often have small errors, because the height errors for the low buildings in those blocks are small, whereas blocks with high FAR values often have large errors, because the height errors for the high buildings in those blocks are large. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of the buildings in a scene; building heights must therefore be considered when spaceborne stereo imagery is used to estimate 3D density, in order to improve precision.
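
    As a concrete illustration of why height errors matter, the Floor Area Ratio of a block can be approximated as total floor area divided by block area, with floor counts derived from building heights; a 2 m underestimation of a tall building's height can already remove one storey from the count. The storey height and all numbers below are assumptions for illustration only.

      def floor_area_ratio(footprints_m2, heights_m, block_area_m2, storey_height_m=3.0):
          """FAR = total floor area / block area, floors derived from heights."""
          floors = [max(1, round(h / storey_height_m)) for h in heights_m]
          total_floor_area = sum(a * n for a, n in zip(footprints_m2, floors))
          return total_floor_area / block_area_m2

      print(floor_area_ratio([500, 800], [30.0, 12.0], 10000.0))   # true heights
      print(floor_area_ratio([500, 800], [28.0, 12.0], 10000.0))   # 2 m underestimate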

  3. Distributed observers for pose estimation in the presence of inertial sensory soft faults.

    PubMed

    Sadeghzadeh-Nokhodberiz, Nargess; Poshtan, Javad; Wagner, Achim; Nordheimer, Eugen; Badreddin, Essameddin

    2014-07-01

    Distributed Particle-Kalman Filter based observers are designed in this paper for estimating inertial sensor (gyroscope and accelerometer) soft faults (biases and drifts) and rigid body pose. The observers fuse inertial sensors with a photogrammetric camera. Linear and angular accelerations, as unknown inputs of the velocity and attitude rate dynamics, respectively, along with sensor biases and drifts, are modeled and augmented to the moving body's state parameters. To reduce the complexity of the high-dimensional and nonlinear model, the graph-theoretic tearing technique (structural decomposition) is employed to decompose the system into smaller observable subsystems. Separate interacting observers are designed for the subsystems, which interact through well-defined interfaces. Kalman Filters are employed for the linear subsystems, and a Modified Particle Filter is proposed for a nonlinear, non-Gaussian subsystem that includes imperfect attitude rate dynamics. The main idea behind the proposed Modified Particle Filtering approach is to engage both the system and measurement models in the particle generation process. Experimental results based on data from a 3D MEMS IMU and a 3D camera system demonstrate the efficiency of the method.

  4. Model-based 3D human shape estimation from silhouettes for virtual fitting

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Kouchi, Makiko; Mochimaru, Masaaki; Aoki, Yoshimitsu

    2014-03-01

    We propose a model-based system for reconstructing 3D human shape from two silhouettes. First, we synthesize a deformable body model from a 3D human shape database consisting of a hundred whole-body mesh models. The mesh models are homologous, so they share the same topology and the same number of vertices. We perform principal component analysis (PCA) on the database and synthesize an Active Shape Model (ASM), which allows the body type of the model to be changed with a few parameters. Pose changes are achieved by reconstructing the skeleton structure from joints implanted in the model. By applying the pose change after the body-type deformation, the model can represent various body types in any pose. We apply the model to the problem of reconstructing 3D human shape from front and side silhouettes. Our approach simply compares the contours of the model's silhouettes with those of the input silhouettes, using only the torso-part contour of the model to reconstruct the whole shape. We optimize the model parameters by minimizing the difference between corresponding silhouettes with CMA-ES, a stochastic, derivative-free nonlinear optimization method.
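    The PCA step of such a shape model can be sketched compactly. The code below is a minimal illustration rather than the authors' implementation; the database size, vertex count, and number of retained modes are arbitrary assumptions, and the CMA-ES silhouette fitting is not shown.

    ```python
    # Minimal PCA shape-model sketch: build modes from homologous meshes and
    # synthesize a body from a few shape parameters.
    import numpy as np

    def build_shape_model(meshes, n_components=10):
        """meshes: (n_subjects, n_vertices*3) array of homologous vertex coordinates."""
        mean = meshes.mean(axis=0)
        centered = meshes - mean
        # PCA via SVD of the centered data matrix
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:n_components]                      # principal shape modes
        stdev = s[:n_components] / np.sqrt(len(meshes) - 1)
        return mean, basis, stdev

    def synthesize(mean, basis, stdev, params):
        """params: shape coefficients in units of standard deviations."""
        return mean + (np.asarray(params) * stdev) @ basis

    # Hypothetical database: 100 meshes with 1000 vertices each
    db = np.random.rand(100, 3000)
    mean, basis, stdev = build_shape_model(db)
    body = synthesize(mean, basis, stdev, params=np.zeros(10))  # the mean body
    ```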

  5. 3-D, bluff body drag estimation using a Green's function/Gram-Charlier series approach.

    SciTech Connect

    Barone, Matthew Franklin; De Chant, Lawrence Justin

    2004-05-01

    In this study, we describe the extension of the 2-d preliminary design bluff body drag estimation tool developed by De Chant to apply to 3-d flows. As with the 2-d method, the 3-d extension uses a combined approximate Green's function/Gram-Charlier series approach to retain the body geometry information. Whereas the 2-d methodology relied solely upon small disturbance theory for the inviscid flow field associated with the body of interest to estimate the near-field initial conditions, e.g. the velocity defect, the 3-d methodology uses both analytical (where available) and numerical inviscid solutions. The defect solution is then used as an initial condition in an approximate 3-d Green's function solution. Finally, the Green's function solution is matched to the 3-d analog of the classical 2-d Gram-Charlier series and then integrated to yield the net form drag on the bluff body. Preliminary results indicate that the drag estimates computed are of accuracy equivalent to the 2-d method for flows with large separation, i.e. less than 20% relative error. Like the lower-dimensional method, the 3-d concept is intended to be a supplement to turbulent Navier-Stokes and experimental solutions for estimating drag coefficients over blunt bodies.

  6. 3-D, bluff body drag estimation using a Green's function/Gram-Charlier series approach.

    SciTech Connect

    Barone, Matthew Franklin; De Chant, Lawrence Justin

    2005-01-01

    In this study, we describe the extension of the 2-d preliminary design bluff body drag estimation tool developed by De Chant to apply to 3-d flows. As with the 2-d method, the 3-d extension uses a combined approximate Green's function/Gram-Charlier series approach to retain the body geometry information. Whereas the 2-d methodology relied solely upon small disturbance theory for the inviscid flow field associated with the body of interest to estimate the near-field initial conditions, e.g. the velocity defect, the 3-d methodology uses both analytical (where available) and numerical inviscid solutions. The defect solution is then used as an initial condition in an approximate 3-d Green's function solution. Finally, the Green's function solution is matched to the 3-d analog of the classical 2-d Gram-Charlier series and then integrated to yield the net form drag on the bluff body. Preliminary results indicate that the drag estimates computed are of accuracy equivalent to the 2-d method for flows with large separation, i.e. less than 20% relative error. Like the lower-dimensional method, the 3-d concept is intended to be a supplement to turbulent Navier-Stokes and experimental solutions for estimating drag coefficients over blunt bodies.

  7. Comparison of 3D-OP-OSEM and 3D-FBP reconstruction algorithms for High-Resolution Research Tomograph studies: effects of randoms estimation methods

    NASA Astrophysics Data System (ADS)

    van Velden, Floris H. P.; Kloet, Reina W.; van Berckel, Bart N. M.; Wolfensberger, Saskia P. A.; Lammertsma, Adriaan A.; Boellaard, Ronald

    2008-06-01

    The High-Resolution Research Tomograph (HRRT) is a dedicated human brain positron emission tomography (PET) scanner. Recently, a 3D filtered backprojection (3D-FBP) reconstruction method has been implemented to reduce bias in short duration frames, currently observed in 3D ordinary Poisson OSEM (3D-OP-OSEM) reconstructions. Further improvements might be expected using a new method of variance reduction on randoms (VRR) based on coincidence histograms instead of using the delayed window technique (DW) to estimate randoms. The goal of this study was to evaluate VRR in combination with 3D-OP-OSEM and 3D-FBP reconstruction techniques. To this end, several phantom studies and a human brain study were performed. For most phantom studies, 3D-OP-OSEM showed higher accuracy of observed activity concentrations with VRR than with DW. However, both positive and negative deviations in reconstructed activity concentrations and large biases of grey to white matter contrast ratio (up to 88%) were still observed as a function of scan statistics. Moreover, 3D-OP-OSEM+VRR also showed bias up to 64% in clinical data, i.e. in some pharmacokinetic parameters as compared with those obtained with 3D-FBP+VRR. In the case of 3D-FBP, VRR showed similar results as DW for both phantom and clinical data, except that VRR showed an improved standard deviation of 6-10%. Therefore, VRR should be used to correct for randoms in HRRT PET studies.

  8. Estimation of the degree of polarization in low-light 3D integral imaging

    NASA Astrophysics Data System (ADS)

    Carnicer, Artur; Javidi, Bahram

    2016-06-01

    The calculation of the Stokes Parameters and the Degree of Polarization in 3D integral images requires a careful manipulation of the polarimetric elemental images. This fact is particularly important if the scenes are taken in low-light conditions. In this paper, we show that the Degree of Polarization can be effectively estimated even when elemental images are recorded with few photons. The original idea was communicated in [A. Carnicer and B. Javidi, "Polarimetric 3D integral imaging in photon-starved conditions," Opt. Express 23, 6408-6417 (2015)]. First, we use the Maximum Likelihood Estimation approach for generating the 3D integral image. Nevertheless, this method produces very noisy images and thus, the degree of polarization cannot be calculated. We suggest using a Total Variation Denoising filter as a way to improve the quality of the generated 3D images. As a result, noise is suppressed but high frequency information is preserved. Finally, the degree of polarization is obtained successfully.
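    As a rough illustration of the processing chain (not the authors' implementation), the sketch below denoises four linear-polarizer channels with a total-variation filter (scikit-image's denoise_tv_chambolle is used here as a stand-in) and then forms the linear Degree of Polarization from the Stokes parameters; the circular component is assumed to be unmeasured.

    ```python
    # Sketch under stated assumptions: TV denoising of photon-starved polarimetric
    # channels followed by Degree of Polarization computation. Needs numpy, scikit-image.
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def degree_of_polarization(i0, i45, i90, i135, weight=0.1):
        """i0..i135: intensity images behind linear polarizers at 0/45/90/135 degrees."""
        # Denoise each polarimetric channel before forming the Stokes parameters
        i0, i45, i90, i135 = (denoise_tv_chambolle(im, weight=weight)
                              for im in (i0, i45, i90, i135))
        s0 = i0 + i90
        s1 = i0 - i90
        s2 = i45 - i135
        # Linear DoP only (no circular component measured in this setup)
        return np.sqrt(s1**2 + s2**2) / np.clip(s0, 1e-9, None)
    ```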

  9. Recursive estimation of 3D motion and surface structure from local affine flow parameters.

    PubMed

    Calway, Andrew

    2005-04-01

    A recursive structure from motion algorithm based on optical flow measurements taken from an image sequence is described. It provides estimates of surface normals in addition to 3D motion and depth. The measurements are affine motion parameters which approximate the local flow fields associated with near-planar surface patches in the scene. These are integrated over time to give estimates of the 3D parameters using an extended Kalman filter. This also estimates the camera focal length and, so, the 3D estimates are metric. The use of parametric measurements means that the algorithm is computationally less demanding than previous optical flow approaches and the recursive filter builds in a degree of noise robustness. Results of experiments on synthetic and real image sequences demonstrate that the algorithm performs well.

  10. Keyframe selection for robust pose estimation in laparoscopic videos

    NASA Astrophysics Data System (ADS)

    von Öhsen, Udo; Marcinczak, Jan Marek; Mármol Vélez, Andres F.; Grigat, Rolf-Rainer

    2012-02-01

    Motion estimation based on point correspondences in two views is a classic problem in computer vision. In the field of laparoscopic video sequences, a stable motion estimation cannot generally be guaranteed, even with state-of-the-art algorithms. Typically, a video from a laparoscopic surgery contains sequences in which the surgeon barely moves the endoscope. Such restricted movement causes a small ratio between baseline and distance, leading to unstable estimation results. Exploiting the fact that the entire sequence is known a priori, we propose an algorithm for keyframe selection in a sequence of images. The key idea can be expressed as follows: if all combinations of frames in a sequence are scored, the optimal selection can be described as a weighted directed graph problem. We adapt the widely known Dijkstra's algorithm to find the best selection of frames. The framework for keyframe selection can be used universally to find the best combination of frames for any reliable scoring function. For instance, forward motion ensures the most accurate camera position estimation, whereas sideward motion is preferred in the sense of reconstruction. Based on the distribution and the disparity of point correspondences, we propose a scoring function that is capable of detecting poorly conditioned pairs of frames. We demonstrate the potential of the algorithm focusing on accurate camera positions. A robot system provides ground truth data. The environment in laparoscopic videos is reflected by an industrial endoscope and a phantom.
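    The shortest-path formulation can be sketched in a few lines. In the illustration below (an assumption-laden sketch, not the paper's code), every frame is a node, every ordered pair (i, j) with i < j is an edge whose weight comes from a user-supplied scoring function pair_cost, and Dijkstra's algorithm returns the cheapest chain of keyframes from the first frame to the last.

    ```python
    # Keyframe selection as a shortest-path problem over frame pairs (Dijkstra).
    import heapq

    def select_keyframes(n_frames, pair_cost):
        """pair_cost(i, j) -> cost of using frames i < j as consecutive keyframes
        (e.g. derived from point-correspondence distribution and disparity)."""
        dist = {0: 0.0}
        prev = {}
        heap = [(0.0, 0)]
        while heap:
            d, i = heapq.heappop(heap)
            if i == n_frames - 1:
                break
            if d > dist.get(i, float("inf")):
                continue
            for j in range(i + 1, n_frames):
                nd = d + pair_cost(i, j)
                if nd < dist.get(j, float("inf")):
                    dist[j] = nd
                    prev[j] = i
                    heapq.heappush(heap, (nd, j))
        # Backtrack the selected keyframes
        path, node = [n_frames - 1], n_frames - 1
        while node in prev:
            node = prev[node]
            path.append(node)
        return path[::-1]
    ```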

  11. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    PubMed Central

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-01-01

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range, on the basis of its known model, by using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching that processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and to generate simulated sensor point cloud data. It additionally provides the true pose of the test target, so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments are performed; the results demonstrate the algorithm's capability of operating directly on point clouds and of handling large pose variations. A field testing experiment is also conducted, and the results show that the proposed method is effective. PMID:27271633

  12. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range.

    PubMed

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-06-04

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range, on the basis of its known model, by using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching that processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and to generate simulated sensor point cloud data. It additionally provides the true pose of the test target, so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments are performed; the results demonstrate the algorithm's capability of operating directly on point clouds and of handling large pose variations. A field testing experiment is also conducted, and the results show that the proposed method is effective.
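    For reference, the pose-tracking stage built on the Iterative Closest Point algorithm can be sketched as a standard point-to-point ICP loop. The code below is illustrative only (plain numpy/scipy, SVD-based alignment, nearest-neighbour correspondences) and is not the authors' implementation, which also includes the global initial-acquisition step.

    ```python
    # Minimal point-to-point ICP: track the pose of a known model point cloud
    # against an incoming LIDAR scan, starting from an initial pose estimate.
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(model, scan, R=np.eye(3), t=np.zeros(3), iters=30):
        """model, scan: (N,3) arrays; R, t: initial pose estimate (model -> scan)."""
        tree = cKDTree(scan)
        for _ in range(iters):
            moved = model @ R.T + t
            _, idx = tree.query(moved)            # nearest-neighbour correspondences
            target = scan[idx]
            mu_m, mu_t = moved.mean(0), target.mean(0)
            H = (moved - mu_m).T @ (target - mu_t)
            U, _, Vt = np.linalg.svd(H)
            dR = Vt.T @ U.T
            if np.linalg.det(dR) < 0:             # guard against reflections
                Vt[-1] *= -1
                dR = Vt.T @ U.T
            dt = mu_t - dR @ mu_m
            R, t = dR @ R, dR @ t + dt            # compose the incremental update
        return R, t
    ```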

  13. Event-Based 3D Motion Flow Estimation Using 4D Spatio Temporal Subspaces Properties.

    PubMed

    Ieng, Sio-Hoi; Carneiro, João; Benosman, Ryad B

    2016-01-01

    State-of-the-art scene flow estimation techniques are based on projections of the 3D motion onto the image, using luminance (sampled at the frame rate of the cameras) as the principal source of information. In this paper we introduce a purely time-based approach to estimating the flow from 3D point clouds primarily output by neuromorphic event-based stereo camera rigs, or by any existing 3D depth sensor, even if it does not provide or use luminance. The method formulates the scene flow problem by applying a local piecewise regularization of the scene flow. The formulation provides a unifying framework for estimating scene flow from synchronous and asynchronous 3D point clouds. It relies on the properties of 4D space-time, using a decomposition into its subspaces. The method naturally exploits the properties of neuromorphic asynchronous event-based vision sensors, which allow continuous-time 3D point cloud reconstruction. The approach can also handle the motion of deformable objects. Experiments using different 3D sensors are presented.

  14. Event-Based 3D Motion Flow Estimation Using 4D Spatio Temporal Subspaces Properties

    PubMed Central

    Ieng, Sio-Hoi; Carneiro, João; Benosman, Ryad B.

    2017-01-01

    State-of-the-art scene flow estimation techniques are based on projections of the 3D motion onto the image, using luminance (sampled at the frame rate of the cameras) as the principal source of information. In this paper we introduce a purely time-based approach to estimating the flow from 3D point clouds primarily output by neuromorphic event-based stereo camera rigs, or by any existing 3D depth sensor, even if it does not provide or use luminance. The method formulates the scene flow problem by applying a local piecewise regularization of the scene flow. The formulation provides a unifying framework for estimating scene flow from synchronous and asynchronous 3D point clouds. It relies on the properties of 4D space-time, using a decomposition into its subspaces. The method naturally exploits the properties of neuromorphic asynchronous event-based vision sensors, which allow continuous-time 3D point cloud reconstruction. The approach can also handle the motion of deformable objects. Experiments using different 3D sensors are presented. PMID:28220057

  15. Foot Pose Estimation Using an Inertial Sensor Unit and Two Distance Sensors

    PubMed Central

    Duong, Pham Duy; Suh, Young Soo

    2015-01-01

    There are many inertial sensor-based foot pose estimation algorithms. In this paper, we present a methodology to improve the accuracy of foot pose estimation using two low-cost distance sensors (VL6180) in addition to an inertial sensor unit. The distance sensor is a time-of-flight range finder and can measure distance up to 20 cm. A Kalman filter with 21 states is proposed to estimate both the calibration parameter (relative pose of distance sensors with respect to the inertial sensor unit) and foot pose. Once the calibration parameter is obtained, a Kalman filter with nine states can be used to estimate foot pose. Through four activities (walking, dancing step, ball kicking, jumping), it is shown that the proposed algorithm significantly improves the vertical position estimation. PMID:26151205

  16. Foot Pose Estimation Using an Inertial Sensor Unit and Two Distance Sensors.

    PubMed

    Duong, Pham Duy; Suh, Young Soo

    2015-07-03

    There are many inertial sensor-based foot pose estimation algorithms. In this paper, we present a methodology to improve the accuracy of foot pose estimation using two low-cost distance sensors (VL6180) in addition to an inertial sensor unit. The distance sensor is a time-of-flight range finder and can measure distance up to 20 cm. A Kalman filter with 21 states is proposed to estimate both the calibration parameter (relative pose of distance sensors with respect to the inertial sensor unit) and foot pose. Once the calibration parameter is obtained, a Kalman filter with nine states can be used to estimate foot pose. Through four activities (walking, dancing step, ball kicking, jumping), it is shown that the proposed algorithm significantly improves the vertical position estimation.
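    The filtering machinery behind such estimators is the standard Kalman predict/update cycle. The sketch below shows only that generic cycle (the paper's specific 21- and 9-state models for the IMU, distance sensors, and calibration parameters are not reproduced here).

    ```python
    # Generic Kalman filter predict/update step, as used for pose and calibration
    # state estimation; F, Q, H, R encode the (unspecified) system and sensor models.
    import numpy as np

    def kf_predict(x, P, F, Q):
        """Propagate state x and covariance P through the linear dynamics F with noise Q."""
        return F @ x, F @ P @ F.T + Q

    def kf_update(x, P, z, H, R):
        """Correct the prediction with measurement z, observation model H, noise R."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P
    ```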

  17. Latent-Class Hough Forests for 6 DoF Object Pose Estimation.

    PubMed

    Kouskouridas, Rigas; Tejani, Alykhan; Doumanoglou, Andreas; Tang, Danhang; Kim, Tae-Kyun

    2017-02-07

    In this paper we present Latent-Class Hough Forests, a method for object detection and 6 DoF pose estimation in heavily cluttered and occluded scenarios. We adapt a state-of-the-art template matching feature into a scale-invariant patch descriptor and integrate it into a regression forest using a novel template-based split function. We train with positive samples only, and we treat the class distributions at the leaf nodes as latent variables. During testing we infer by iteratively updating these distributions, providing accurate estimation of background clutter and foreground occlusions and, thus, a better detection rate. Furthermore, as a by-product, our Latent-Class Hough Forests can provide accurate occlusion-aware segmentation masks, even in the multi-instance scenario. In addition to an existing public dataset, which contains only single-instance sequences with large amounts of clutter, we have collected two more challenging datasets for multiple-instance detection, containing heavy 2D and 3D clutter as well as foreground occlusions. We provide extensive experiments on the various parameters of the framework, such as patch size, number of trees, and the number of iterations used to infer class distributions at test time. We also evaluate the Latent-Class Hough Forests on all datasets, where we outperform state-of-the-art methods.

  18. Pose Estimation with a Kinect for Ergonomic Studies: Evaluation of the Accuracy Using a Virtual Mannequin

    PubMed Central

    Plantard, Pierre; Auvinet, Edouard; Le Pierres, Anne-Sophie; Multon, Franck

    2015-01-01

    Analyzing human poses with a Kinect is a promising method to evaluate potential risks of musculoskeletal disorders at workstations. In ecological situations, complex 3D poses and constraints imposed by the environment make it difficult to obtain reliable kinematic information. Thus, being able to predict the potential accuracy of the measurement for such complex 3D poses and sensor placements is challenging in classical experimental setups. To tackle this problem, we propose a new evaluation method based on a virtual mannequin. In this study, we apply this method to the evaluation of joint positions (shoulder, elbow, and wrist), joint angles (shoulder and elbow), and the corresponding RULA (a popular ergonomics assessment grid) upper-limb score for a large set of poses and sensor placements. Thanks to this evaluation method, more than 500,000 configurations have been automatically tested, which would be almost impossible to evaluate with classical protocols. The results show that the kinematic information obtained by the Kinect software is generally accurate enough to fill in ergonomic assessment grids. However, inaccuracy strongly increases for some specific poses and sensor positions. Using this evaluation method enabled us to report configurations that could lead to these high inaccuracies. As a supplementary material, we provide a software tool to help designers evaluate the expected accuracy of this sensor for a set of upper-limb configurations. Results obtained with the virtual mannequin are in accordance with those obtained from a real subject for a limited set of poses and sensor placements. PMID:25599426

  19. Building Proteins in a Day: Efficient 3D Molecular Structure Estimation with Electron Cryomicroscopy.

    PubMed

    Punjani, Ali; Brubaker, Marcus A; Fleet, David J

    2017-04-01

    Discovering the 3D atomic-resolution structure of molecules such as proteins and viruses is one of the foremost research problems in biology and medicine. Electron Cryomicroscopy (cryo-EM) is a promising vision-based technique for structure estimation which attempts to reconstruct 3D atomic structures from a large set of 2D transmission electron microscope images. This paper presents a new Bayesian framework for cryo-EM structure estimation that builds on modern stochastic optimization techniques to allow one to scale to very large datasets. We also introduce a novel Monte-Carlo technique that reduces the cost of evaluating the objective function during optimization by over five orders of magnitude. The net result is an approach capable of estimating 3D molecular structure from large-scale datasets in about a day on a single CPU workstation.

  20. Human body 3D posture estimation using significant points and two cameras.

    PubMed

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras, without using any depth sensors. The 3D significant body points located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine (SVM)-based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also the included angle between pixel colors in the current frame and the background, in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations from the 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method performs better than other gray-level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures.
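    A per-pixel linear-SVM segmenter along these lines can be sketched briefly. The code below is an illustration under assumptions rather than the authors' implementation: the feature vector (normalized RGB difference plus the cosine of the angle between current and background color vectors) and the random training labels are placeholders.

    ```python
    # Per-pixel foreground/background classification with a linear SVM (scikit-learn).
    import numpy as np
    from sklearn.svm import LinearSVC

    def pixel_features(frame, background):
        """Features per pixel: normalized RGB difference and the angle between the
        current-frame and background color vectors (to reduce shadow influence)."""
        f = frame.reshape(-1, 3).astype(float)
        b = background.reshape(-1, 3).astype(float)
        diff = (f - b) / 255.0
        cos_angle = np.sum(f * b, axis=1) / (
            np.linalg.norm(f, axis=1) * np.linalg.norm(b, axis=1) + 1e-9)
        return np.column_stack([diff, cos_angle])

    # Hypothetical training data: labeled pixels (1 = person, 0 = background)
    X_train = np.random.rand(1000, 4)
    y_train = np.random.randint(0, 2, 1000)
    clf = LinearSVC().fit(X_train, y_train)

    # Segment a new frame against the stored background model
    frame = np.random.randint(0, 256, (120, 160, 3))
    background = np.random.randint(0, 256, (120, 160, 3))
    mask = clf.predict(pixel_features(frame, background)).reshape(120, 160)
    ```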

  1. Cervical vertebrae maturation index estimates on cone beam CT: 3D reconstructions vs sagittal sections

    PubMed Central

    Bonfim, Marco A E; Costa, André L F; Ximenez, Michel E L; Cotrim-Ferreira, Flávio A; Ferreira-Santos, Rívea I

    2016-01-01

    Objectives: The aim of this study was to evaluate the performance of CBCT three-dimensional (3D) reconstructions and sagittal sections for estimates of cervical vertebrae maturation index (CVMI). Methods: The sample consisted of 72 CBCT examinations from patients aged 8–16 years (45 females and 27 males) selected from the archives of two private clinics. Two calibrated observers (kappa scores: ≥0.901) interpreted the CBCT settings twice. Intra- and interobserver agreement for both imaging exhibition modes was analyzed by kappa statistics, which was also used to analyze the agreement between 3D reconstructions and sagittal sections. Correlations between cervical vertebrae maturation estimates and chronological age, as well as between the assessments by 3D reconstructions and sagittal sections, were analyzed using gamma Goodman–Kruskal coefficients (α = 0.05). Results: The kappa scores evidenced almost perfect agreement between the first and second assessments of the cervical vertebrae by 3D reconstructions (0.933–0.983) and sagittal sections (0.983–1.000). Similarly, the agreement between 3D reconstructions and sagittal sections was almost perfect (kappa index: 0.983). In most divergent cases, the difference between 3D reconstructions and sagittal sections was one stage of CVMI. Strongly positive correlations (>0.8, p < 0.001) were found not only between chronological age and CVMI but also between the estimates by 3D reconstructions and sagittal sections (p < 0.001). Conclusions: Although CBCT imaging must not be used exclusively for this purpose, it may be suitable for skeletal maturity assessments. PMID:26509559

  2. The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

    NASA Astrophysics Data System (ADS)

    Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard

    2010-04-01

    The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point cloud) in GPS-denied regions from a sequence of co-boresighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused, and 3D registration was performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for various spatial and geometric alignments of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range, and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point cloud). In this paper, we describe the development and application of the 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range, and intensity estimates of imagery collected during urban terrain mapping, using a co-boresighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point clouds for the drive-through data are also presented.

  3. On the Estimation of Forest Resources Using 3D Remote Sensing Techniques and Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Karjalainen, Mika; Karila, Kirsi; Liang, Xinlian; Yu, Xiaowei; Huang, Guoman; Lu, Lijun

    2016-08-01

    In recent years, 3D-capable remote sensing techniques have shown great potential in forest biomass estimation because of their ability to measure forest canopy structure, tree height, and density. The objective of the Dragon3 forest resources research project (ID 10667) and the supporting ESA young scientist project (ESA contract NO. 4000109483/13/I-BG) was to study the use of satellite-based 3D techniques in forest tree height estimation, and consequently in forest biomass and biomass change estimation, by combining satellite data with terrestrial measurements. Results from airborne 3D techniques were also used in the project. Even though forest tree height can be estimated from 3D satellite SAR data to some extent, there is a need for field reference plots. For this reason, we have also been developing automated field plot measurement techniques based on Terrestrial Laser Scanning (TLS) data, which can be used to train and calibrate satellite-based estimation models. In this paper, results of canopy height models created from TerraSAR-X stereo and TanDEM-X INSAR data are shown, as well as preliminary results from the TLS field plot measurement system. Results from the airborne CASMSAR system for measuring forest canopy height from P- and X-band INSAR are also presented.

  4. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-09-09

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively.

  5. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
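    The polar-grid pre-processing step can be illustrated with a short sketch. The code below is not the authors' implementation; the range and azimuth resolutions are arbitrary assumptions, and the per-cell Kalman filtering and data association are omitted.

    ```python
    # Projecting 3D LiDAR returns onto small-scale polar grids, the pre-processing
    # step before per-cell motion-state estimation.
    import numpy as np

    def polar_grid_indices(points, range_res=0.5, azimuth_res_deg=1.0):
        """points: (N,3) LiDAR returns in the sensor frame. Returns (range_bin, azimuth_bin)."""
        x, y = points[:, 0], points[:, 1]
        rng = np.hypot(x, y)
        azimuth = np.degrees(np.arctan2(y, x)) % 360.0
        return (rng / range_res).astype(int), (azimuth / azimuth_res_deg).astype(int)

    def grid_occupancy(points, **kw):
        """Accumulate points per polar cell; each occupied cell would later carry
        its own Kalman-filtered motion state."""
        r_bin, a_bin = polar_grid_indices(points, **kw)
        cells = {}
        for rb, ab, p in zip(r_bin, a_bin, points):
            cells.setdefault((rb, ab), []).append(p)
        return cells
    ```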

  6. Head Pose Estimation Using Multilinear Subspace Analysis for Robot Human Awareness

    NASA Technical Reports Server (NTRS)

    Ivanov, Tonislav; Matthies, Larry; Vasilescu, M. Alex O.

    2009-01-01

    Mobile robots operating in unconstrained indoor and outdoor environments would benefit in many ways from perception of the human awareness around them. Knowledge of people's head poses and gaze directions would enable the robot to deduce which people are aware of its presence, and to predict their future motions for better path planning. Making such inferences requires estimating head pose from facial images that are a combination of multiple varying factors, such as identity, appearance, head pose, and illumination. By applying multilinear algebra, the algebra of higher-order tensors, we can separate these factors and estimate head pose regardless of the subject's identity or the image conditions. Furthermore, we can automatically handle uncertainty in the size of the face and its location. We demonstrate a pipeline of on-the-move detection of pedestrians with a robot stereo vision system, segmentation of the head, and head pose estimation in cluttered urban street scenes.

  7. Head Pose Estimation on Eyeglasses Using Line Detection and Classification Approach

    NASA Astrophysics Data System (ADS)

    Setthawong, Pisal; Vannija, Vajirasak

    This paper proposes a unique approach for head pose estimation of subjects with eyeglasses by using a combination of line detection and classification approaches. Head pose estimation is considered as an important non-verbal form of communication and could also be used in the area of Human-Computer Interface. A major improvement of the proposed approach is that it allows estimation of head poses at a high yaw/pitch angle when compared with existing geometric approaches, does not require expensive data preparation and training, and is generally fast when compared with other approaches.

  8. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

    PubMed Central

    Airò Farulla, Giuseppe; Pianu, Daniele; Cempini, Marco; Cortese, Mario; Russo, Ludovico O.; Indaco, Marco; Nerino, Roberto; Chimienti, Antonio; Oddo, Calogero M.; Vitiello, Nicola

    2016-01-01

    Vision-based Pose Estimation (VPE) represents a non-invasive solution for allowing smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum because they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While rehabilitative exercises are performed, the master unit evaluates the 3D positions of a human operator’s hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates the hand movements, and an external grip sensor records the interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers’ hand movements. PMID:26861333

  9. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation.

    PubMed

    Airò Farulla, Giuseppe; Pianu, Daniele; Cempini, Marco; Cortese, Mario; Russo, Ludovico O; Indaco, Marco; Nerino, Roberto; Chimienti, Antonio; Oddo, Calogero M; Vitiello, Nicola

    2016-02-05

    Vision-based Pose Estimation (VPE) represents a non-invasive solution for allowing smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum because they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master-slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While rehabilitative exercises are performed, the master unit evaluates the 3D positions of a human operator's hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates the hand movements, and an external grip sensor records the interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers' hand movements.

  10. Robust and Accurate Vision-Based Pose Estimation Algorithm Based on Four Coplanar Feature Points

    PubMed Central

    Zhang, Zimiao; Zhang, Shihai; Li, Qiu

    2016-01-01

    Vision-based pose estimation is an important application of machine vision. Currently, analytical and iterative methods are used to solve for the object pose. The analytical solutions generally take less computation time; however, they are extremely susceptible to noise. The iterative solutions minimize the distance error between feature points based on 2D image pixel coordinates, but the nonlinear optimization needs a good initial estimate of the true solution, otherwise it is more time-consuming than the analytical solutions. Moreover, the image processing error grows rapidly as the measurement range increases, which leads to pose estimation errors. All of these factors cause accuracy to decrease. To solve this problem, a novel pose estimation method based on four coplanar points is proposed. First, the coordinates of the feature points are determined according to the linear constraints formed by the four points. The initial coordinates of the feature points acquired through the linear method are then optimized through an iterative method. Finally, the coordinate system of the object motion is established, and a method is introduced to solve for the object pose. Through this coordinate system, the pose estimation errors caused by the growing image processing error at larger measurement ranges can be decreased. The proposed method is compared with two other existing methods through experiments. Experimental results demonstrate that the proposed method works efficiently and stably. PMID:27999338
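    As a point of comparison (not the authors' solver), recovering a pose from four known coplanar points and their image projections can be done with OpenCV's solvePnP; the target geometry, image points, and camera intrinsics below are hypothetical.

    ```python
    # Pose from four coplanar feature points using OpenCV's PnP solver.
    import numpy as np
    import cv2

    # Hypothetical square target, 100 mm side, lying in the Z = 0 plane
    object_pts = np.array([[0, 0, 0], [100, 0, 0], [100, 100, 0], [0, 100, 0]],
                          dtype=np.float64)
    image_pts = np.array([[320, 240], [420, 245], [415, 345], [318, 340]],
                         dtype=np.float64)
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
    dist = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix and translation give the object pose
    ```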

  11. Volume estimation of tonsil phantoms using an oral camera with 3D imaging

    PubMed Central

    Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh

    2016-01-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, and these provide only a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a hand-held portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, in which the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and, from that, estimate the airway obstruction percentage and the volume of the tonsils in 3D-printed realistic models. Our results correlate well with Brodsky’s classification of tonsillar hypertrophy as well as with intraoperative volume estimations. PMID:27446667

  12. 2-D Versus 3-D Cross-Correlation-Based Radial and Circumferential Strain Estimation Using Multiplane 2-D Ultrafast Ultrasound in a 3-D Atherosclerotic Carotid Artery Model.

    PubMed

    Fekkes, Stein; Swillens, Abigail E S; Hansen, Hendrik H G; Saris, Anne E C M; Nillesen, Maartje M; Iannaccone, Francesco; Segers, Patrick; de Korte, Chris L

    2016-10-01

    Three-dimensional (3-D) strain estimation might improve the detection and localization of high strain regions in the carotid artery (CA) for identification of vulnerable plaques. This paper compares 2-D versus 3-D displacement estimation in terms of radial and circumferential strain using simulated ultrasound (US) images of a patient-specific 3-D atherosclerotic CA model at the bifurcation embedded in surrounding tissue generated with ABAQUS software. Global longitudinal motion was superimposed to the model based on the literature data. A Philips L11-3 linear array transducer was simulated, which transmitted plane waves at three alternating angles at a pulse repetition rate of 10 kHz. Interframe (IF) radio-frequency US data were simulated in Field II for 191 equally spaced longitudinal positions of the internal CA. Accumulated radial and circumferential displacements were estimated using tracking of the IF displacements estimated by a two-step normalized cross-correlation method and displacement compounding. Least-squares strain estimation was performed to determine accumulated radial and circumferential strain. The performance of the 2-D and 3-D methods was compared by calculating the root-mean-squared error of the estimated strains with respect to the reference strains obtained from the model. More accurate strain images were obtained using the 3-D displacement estimation for the entire cardiac cycle. The 3-D technique clearly outperformed the 2-D technique in phases with high IF longitudinal motion. In fact, the large IF longitudinal motion rendered it impossible to accurately track the tissue and cumulate strains over the entire cardiac cycle with the 2-D technique.
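    The displacement-estimation core of such methods is normalized cross-correlation between pre- and post-deformation RF segments. The sketch below shows only a 1-D, integer-lag version as an illustration (the paper uses a two-step 2-D/3-D scheme with displacement compounding and subsample interpolation, none of which is reproduced here).

    ```python
    # 1-D normalized cross-correlation displacement estimate between two RF lines.
    import numpy as np

    def ncc_displacement(pre, post, kernel_len, search):
        """Return the integer-sample lag maximizing normalized cross-correlation."""
        kernel = pre[search: search + kernel_len]
        kernel = (kernel - kernel.mean()) / (kernel.std() + 1e-12)
        best_lag, best_rho = 0, -np.inf
        for lag in range(-search, search + 1):
            seg = post[search + lag: search + lag + kernel_len]
            seg = (seg - seg.mean()) / (seg.std() + 1e-12)
            rho = np.mean(kernel * seg)
            if rho > best_rho:
                best_lag, best_rho = lag, rho
        return best_lag

    # Hypothetical RF lines: post is pre shifted by 3 samples
    pre = np.random.randn(200)
    post = np.roll(pre, 3)
    print(ncc_displacement(pre, post, kernel_len=64, search=10))  # -> 3
    ```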

  13. Peach Bottom 2 Turbine Trip Simulation Using TRAC-BF1/COS3D, a Best-Estimate Coupled 3-D Core and Thermal-Hydraulic Code System

    SciTech Connect

    Ui, Atsushi; Miyaji, Takamasa

    2004-10-15

    The best-estimate coupled three-dimensional (3-D) core and thermal-hydraulic code system TRAC-BF1/COS3D has been developed. COS3D, based on a modified one-group neutronic model, is a 3-D core simulator used for licensing analyses and core management of commercial boiling water reactor (BWR) plants in Japan. TRAC-BF1 is a plant simulator based on a two-fluid model. TRAC-BF1/COS3D is a coupled system of both codes, which are connected using a parallel computing tool. This code system was applied to the OECD/NRC BWR Turbine Trip Benchmark. Since the two-group cross-section tables are provided by the benchmark team, COS3D was modified to apply to this specification. Three best-estimate scenarios and four hypothetical scenarios were calculated using this code system. In the best-estimate scenario, the predicted core power with TRAC-BF1/COS3D is slightly underestimated compared with the measured data. The reason seems to be a slight difference in the core boundary conditions, that is, pressure changes and the core inlet flow distribution, because the peak in this analysis is sensitive to them. However, the results of this benchmark analysis show that TRAC-BF1/COS3D gives good precision for the prediction of the actual BWR transient behavior on the whole. Furthermore, the results with the modified one-group model and the two-group model were compared to verify the application of the modified one-group model to this benchmark. This comparison shows that the results of the modified one-group model are appropriate and sufficiently precise.

  14. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.

  15. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  16. Estimating aquatic hazards posed by prescription pharmaceutical residues from municipal wastewater

    EPA Science Inventory

    Risks posed by pharmaceuticals in the environment are hard to estimate due to limited monitoring capacity and difficulty interpreting monitoring results. In order to partially address these issues, we suggest a method for prioritizing pharmaceuticals for monitoring, and a framewo...

  17. Ultrasonic 3-D Vector Flow Method for Quantitative In Vivo Peak Velocity and Flow Rate Estimation.

    PubMed

    Holbek, Simon; Ewertsen, Caroline; Bouzari, Hamed; Pihl, Michael Johannes; Hansen, Kristoffer Lindskov; Stuart, Matthias Bo; Thomsen, Carsten; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2017-03-01

    Current clinical ultrasound (US) systems are limited to show blood flow movement in either 1-D or 2-D. In this paper, a method for estimating 3-D vector velocities in a plane using the transverse oscillation method, a 32×32 element matrix array, and the experimental US scanner SARUS is presented. The aim of this paper is to estimate precise flow rates and peak velocities derived from 3-D vector flow estimates. The emission sequence provides 3-D vector flow estimates at up to 1.145 frames/s in a plane, and was used to estimate 3-D vector flow in a cross-sectional image plane. The method is validated in two phantom studies, where flow rates are measured in a flow-rig, providing a constant parabolic flow, and in a straight-vessel phantom ( ∅=8 mm) connected to a flow pump capable of generating time varying waveforms. Flow rates are estimated to be 82.1 ± 2.8 L/min in the flow-rig compared with the expected 79.8 L/min, and to 2.68 ± 0.04 mL/stroke in the pulsating environment compared with the expected 2.57 ± 0.08 mL/stroke. Flow rates estimated in the common carotid artery of a healthy volunteer are compared with magnetic resonance imaging (MRI) measured flow rates using a 1-D through-plane velocity sequence. Mean flow rates were 333 ± 31 mL/min for the presented method and 346 ± 2 mL/min for the MRI measurements.
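    Once a through-plane velocity component is available on a cross-sectional grid, the volume flow rate follows from integrating velocity over the lumen area. The sketch below is illustrative only: the parabolic profile, vessel radius, and grid spacing are hypothetical, not the values measured in the paper.

    ```python
    # Volume flow rate from a cross-sectional through-plane velocity map.
    import numpy as np

    def flow_rate_ml_per_min(vz, lumen_mask, pixel_area_mm2):
        """vz: through-plane velocities (m/s) on the grid; lumen_mask: boolean lumen pixels."""
        # (m/s) * (mm^2) = 1e-6 m^3/s = 1 mL/s, hence the factor 60 for mL/min
        return 60.0 * np.sum(vz[lumen_mask]) * pixel_area_mm2

    # Hypothetical parabolic profile in an 8 mm vessel sampled on a 0.2 mm grid
    grid = np.linspace(-4e-3, 4e-3, 41)            # meters
    xx, yy = np.meshgrid(grid, grid)
    r2 = xx**2 + yy**2
    mask = r2 <= (4e-3) ** 2
    vz = 0.5 * (1 - r2 / (4e-3) ** 2)              # peak velocity 0.5 m/s
    print(flow_rate_ml_per_min(vz, mask, pixel_area_mm2=0.2 * 0.2))
    ```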

  18. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems.

    PubMed

    Liu, Tao; Guo, Yin; Yang, Shourui; Yin, Shibin; Zhu, Jigui

    2017-02-14

    Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which robots should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree of freedom (DOF) pose estimation technology to enable robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moved to grasp it. In order to estimate the part pose, a nonlinear optimization model based on the camera object space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. Measuring poses of the camera are optimized based on uncertainty analysis. Also, the principle of the robotic intelligent grasping system was developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the method described in this paper were compared with those produced by a laser tracker, and results show the RMS angle and position error are about 0.0228° and 0.4603 mm. Robotic intelligent grasping tests were also successfully performed in the experiments.

  19. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems

    PubMed Central

    Liu, Tao; Guo, Yin; Yang, Shourui; Yin, Shibin; Zhu, Jigui

    2017-01-01

    Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which robots should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree of freedom (DOF) pose estimation technology to enable robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moved to grasp it. In order to estimate the part pose, a nonlinear optimization model based on the camera object space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. Measuring poses of the camera are optimized based on uncertainty analysis. Also, the principle of the robotic intelligent grasping system was developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the method described in this paper were compared with those produced by a laser tracker, and results show the RMS angle and position error are about 0.0228° and 0.4603 mm. Robotic intelligent grasping tests were also successfully performed in the experiments. PMID:28216555
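    The nonlinear refinement step in this kind of pipeline can be sketched with a generic reprojection-error minimization. The code below is an illustration under assumptions, not the authors' object-space collinearity formulation: it parameterizes the pose as a rotation vector plus translation and refines it with scipy's least_squares from an initial guess.

    ```python
    # 6-DOF pose refinement by nonlinear least squares on image-plane reprojection error.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def residuals(pose, pts3d, pts2d, K):
        """pose: [rx, ry, rz, tx, ty, tz] with the rotation vector in radians."""
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        cam = pts3d @ R.T + pose[3:]
        proj = cam @ K.T
        proj = proj[:, :2] / proj[:, 2:3]          # perspective division
        return (proj - pts2d).ravel()

    def refine_pose(pose0, pts3d, pts2d, K):
        """pose0: initial pose estimate (e.g. from a linear/differential method)."""
        return least_squares(residuals, pose0, args=(pts3d, pts2d, K)).x
    ```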

  20. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has compared different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on the performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.

  1. Estimating 3D tilt from local image cues in natural scenes

    PubMed Central

    Burge, Johannes; McCann, Brian C.; Geisler, Wilson S.

    2016-01-01

    Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then analyzed the relationship between ground-truth tilt and image cue values. Our analysis is free of assumptions about the joint probability distributions and yields the Bayes optimal estimates of tilt, given the cue values. Rich results emerge: (a) typical tilt estimates are only moderately accurate and strongly influenced by the cardinal bias in the prior probability distribution; (b) when cue values are similar, or when slant is greater than 40°, estimates are substantially more accurate; (c) when luminance and texture cues agree, they often veto the disparity cue, and when they disagree, they have little effect; and (d) simplifying assumptions common in the cue combination literature are often justified for estimating tilt in natural scenes. The fact that tilt estimates are typically not very accurate is consistent with subjective impressions from viewing small patches of natural scene. The fact that estimates are substantially more accurate for a subset of image locations is also consistent with subjective impressions and with the hypothesis that perceived surface orientation, at more global scales, is achieved by interpolation or extrapolation from estimates at key locations. PMID:27738702
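    Of the three cues, the luminance-gradient cue is the easiest to sketch. The code below is a simple illustration (not the paper's analysis): it takes the tilt estimate from a patch to be the dominant orientation of the intensity gradient, computed from an averaged structure tensor.

    ```python
    # Dominant luminance-gradient orientation of an image patch as a crude tilt cue.
    import numpy as np

    def luminance_tilt_deg(patch):
        """patch: 2D array of pixel intensities. Returns an orientation in [0, 180)."""
        gy, gx = np.gradient(patch.astype(float))
        # Structure-tensor-style averaging avoids sign cancellation of the gradient
        jxx, jyy, jxy = np.mean(gx * gx), np.mean(gy * gy), np.mean(gx * gy)
        angle = 0.5 * np.degrees(np.arctan2(2 * jxy, jxx - jyy))
        return angle % 180.0
    ```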

  2. 2D versus 3D cross-correlation-based radial and circumferential strain estimation using multiplane 2D ultrafast ultrasound in a 3D atherosclerotic carotid artery model.

    PubMed

    Fekkes, Stein; Swillens, Abigail E S; Hansen, Hendrik H G; Saris, Anne E C M; Nillesen, Maartje M; Iannaccone, Francesco; Segers, Patrick; de Korte, Chris L

    2016-08-25

    Three-dimensional strain estimation might improve the detection and localization of high strain regions in the carotid artery for identification of vulnerable plaques. This study compares 2D vs. 3D displacement estimation in terms of radial and circumferential strain using simulated ultrasound images of a patient specific 3D atherosclerotic carotid artery model at the bifurcation embedded in surrounding tissue generated with ABAQUS software. Global longitudinal motion was superimposed to the model based on literature data. A Philips L11-3 linear array transducer was simulated which transmitted plane waves at 3 alternating angles at a pulse repetition rate of 10 kHz. Inter-frame radiofrequency ultrasound data were simulated in Field II for 191 equally spaced longitudinal positions of the internal carotid artery. Accumulated radial and circumferential displacements were estimated using tracking of the inter-frame displacements estimated by a two-step normalized cross-correlation method and displacement compounding. Least squares strain estimation was performed to determine accumulated radial and circumferential strain. The performance of the 2D and 3D method was compared by calculating the root-mean-squared error of the estimated strains with respect to the reference strains obtained from the model. More accurate strain images were obtained using the 3D displacement estimation for the entire cardiac cycle. The 3D technique clearly outperformed the 2D technique in phases with high inter-frame longitudinal motion. In fact the large inter-frame longitudinal motion rendered it impossible to accurately track the tissue and cumulate strains over the entire cardiac cycle with the 2D technique.
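
    The displacement tracking underlying this kind of strain estimation rests on normalized cross-correlation between a kernel from one frame and a search region in the next. The snippet below is a toy 1-D illustration of that principle only, not the paper's two-step, angle-compounded 3-D scheme: it returns the integer lag that maximizes the normalized cross-correlation between two simulated RF segments.

    ```python
    import numpy as np

    def ncc_displacement(pre, post, max_lag):
        """Integer lag (in samples) maximizing the normalized cross-correlation
        between a pre-deformation kernel and shifted windows of the post frame."""
        n = len(pre)
        pre_z = (pre - pre.mean()) / pre.std()
        best_lag, best_ncc = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            seg = post[max_lag + lag: max_lag + lag + n]
            seg_z = (seg - seg.mean()) / seg.std()
            ncc = np.mean(pre_z * seg_z)
            if ncc > best_ncc:
                best_lag, best_ncc = lag, ncc
        return best_lag

    # Synthetic RF line shifted by 3 samples; the post segment is padded so that
    # every lag in [-10, 10] has a full comparison window.
    rng = np.random.default_rng(0)
    line = rng.standard_normal(200)
    pre = line[50:110]
    post = line[50 - 10 - 3: 110 + 10 - 3]          # true shift of +3 samples
    print(ncc_displacement(pre, post, max_lag=10))  # -> 3
    ```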

  3. Gaze Estimation From Eye Appearance: A Head Pose-Free Method via Eye Image Synthesis.

    PubMed

    Lu, Feng; Sugano, Yusuke; Okabe, Takahiro; Sato, Yoichi

    2015-11-01

    In this paper, we address the problem of free head motion in appearance-based gaze estimation. This problem remains challenging because head motion changes eye appearance significantly, and thus, training images captured for an original head pose cannot handle test images captured for other head poses. To overcome this difficulty, we propose a novel gaze estimation method that handles free head motion via eye image synthesis based on a single camera. Compared with conventional fixed head pose methods with original training images, our method only captures four additional eye images under four reference head poses, and then, precisely synthesizes new training images for other unseen head poses in estimation. To this end, we propose a single-directional (SD) flow model to efficiently handle eye image variations due to head motion. We show how to estimate SD flows for reference head poses first, and then use them to produce new SD flows for training image synthesis. Finally, with synthetic training images, joint optimization is applied that simultaneously solves an eye image alignment and a gaze estimation. Evaluation of the method was conducted through experiments to assess its performance and demonstrate its effectiveness.

  4. Fine-Scale Population Estimation by 3D Reconstruction of Urban Residential Buildings

    PubMed Central

    Wang, Shixin; Tian, Ye; Zhou, Yi; Liu, Wenliang; Lin, Chenxi

    2016-01-01

    Fine-scale population estimation is essential in emergency response and epidemiological applications as well as urban planning and management. However, representing populations in heterogeneous urban regions with a finer resolution is a challenge. This study aims to obtain fine-scale population distribution based on 3D reconstruction of urban residential buildings with morphological operations using optical high-resolution (HR) images from the Chinese No. 3 Resources Satellite (ZY-3). Specifically, the research area was first divided into three categories when dasymetric mapping was taken into consideration. The results demonstrate that the morphological building index (MBI) yielded better results than built-up presence index (PanTex) in building detection, and the morphological shadow index (MSI) outperformed color invariant indices (CIIT) in shadow extraction and height retrieval. Building extraction and height retrieval were then combined to reconstruct 3D models and to estimate population. Final results show that this approach is effective in fine-scale population estimation, with a mean relative error of 16.46% and an overall Relative Total Absolute Error (RATE) of 0.158. This study gives significant insights into fine-scale population estimation in complicated urban landscapes, when detailed 3D information of buildings is unavailable. PMID:27775670
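
    Once residential buildings and their heights have been reconstructed, the fine-scale mapping step is essentially dasymetric redistribution: the census population of an administrative unit is allocated to its buildings in proportion to their estimated volume. The sketch below shows only that redistribution step, under the assumption that the unit population and per-building volumes are already known; it is not the paper's full ZY-3 processing chain.

    ```python
    def dasymetric_allocation(unit_population, building_volumes):
        """Allocate a census unit's population to buildings proportionally to volume.

        unit_population  -- total population of the administrative unit
        building_volumes -- dict mapping building id -> residential volume (m^3)
        """
        total_volume = sum(building_volumes.values())
        return {bid: unit_population * vol / total_volume
                for bid, vol in building_volumes.items()}

    # Hypothetical unit with 1,200 residents and three residential buildings.
    volumes = {"A": 30_000.0, "B": 15_000.0, "C": 5_000.0}
    print(dasymetric_allocation(1200, volumes))
    # {'A': 720.0, 'B': 360.0, 'C': 120.0}
    ```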

  5. Estimation of line dimensions in 3D direct laser writing lithography

    NASA Astrophysics Data System (ADS)

    Guney, M. G.; Fedder, G. K.

    2016-10-01

    Two photon polymerization (TPP) based 3D direct laser writing (3D-DLW) finds application in a wide range of research areas ranging from photonic and mechanical metamaterials to micro-devices. Most common structures are either single lines or formed by a set of interconnected lines as in the case of crystals. In order to increase the fidelity of these structures and reach the ultimate resolution, the laser power and scan speed used in the writing process should be chosen carefully. However, the optimization of these writing parameters is an iterative and time consuming process in the absence of a model for the estimation of line dimensions. To this end, we report a semi-empirical analytic model through simulations and fitting, and demonstrate that it can be used for estimating the line dimensions mostly within one standard deviation of the average values over a wide range of laser power and scan speed combinations. The model delimits the trend in onset of micro-explosions in the photoresist due to over-exposure and of low degree of conversion due to under-exposure. The model guides setting of high-fidelity and robust writing parameters of a photonic crystal structure without iteration and in close agreement with the estimated line dimensions. The proposed methodology is generalizable by adapting the model coefficients to any 3D-DLW setup and corresponding photoresist as a means to estimate the line dimensions for tuning the writing parameters.

  6. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing therefore the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patient with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with the ones estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to the reference methods, the relationship between the different patient groups was similar.

  7. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture.

    PubMed

    Xu, Gang; Xing, Mengdao; Xia, Xiang-Gen; Zhang, Lei; Chen, Qianqian; Bao, Zheng

    2016-05-01

    In the current scenario of high-resolution inverse synthetic aperture radar (ISAR) imaging, the non-cooperative targets may have strong maneuverability, which tends to cause time-variant Doppler modulation and imaging plane in the echoed data. Furthermore, it is still a challenge to realize ISAR imaging of maneuvering targets from sparse aperture (SA) data. In this paper, we focus on the problem of 3D geometry and motion estimations of maneuvering targets for interferometric ISAR (InISAR) with SA. For a target of uniformly accelerated rotation, the rotational modulation in echo is formulated as chirp sensing code under a chirp-Fourier dictionary to represent the maneuverability. In particular, a joint multi-channel imaging approach is developed to incorporate the multi-channel data and treat the multi-channel ISAR image formation as a joint-sparsity constraint optimization. Then, a modified orthogonal matching pursuit (OMP) algorithm is employed to solve the optimization problem to produce high-resolution range-Doppler (RD) images and chirp parameter estimation. The 3D target geometry and the motion estimations are followed by using the acquired RD images and chirp parameters. Herein, a joint estimation approach of 3D geometry and rotation motion is presented to realize outlier removing and error reduction. In comparison with independent single-channel processing, the proposed joint multi-channel imaging approach performs better in 2D imaging, 3D imaging, and motion estimation. Finally, experiments using both simulated and measured data are performed to confirm the effectiveness of the proposed algorithm.
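
    The sparse-aperture image formation is solved with a modified orthogonal matching pursuit (OMP). For reference, the snippet below sketches plain single-channel OMP for a generic sparse recovery problem y ≈ Ax with an assumed dictionary A and sparsity level k; the paper's joint multi-channel, chirp-Fourier variant builds on this same greedy loop.

    ```python
    import numpy as np

    def omp(A, y, k):
        """Plain orthogonal matching pursuit: recover a k-sparse x with y ~ A x."""
        residual = y.copy()
        support = []
        for _ in range(k):
            # Pick the atom most correlated with the current residual.
            idx = int(np.argmax(np.abs(A.T @ residual)))
            support.append(idx)
            # Re-fit the coefficients on the enlarged support by least squares.
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    # Synthetic test: 3-sparse vector, random Gaussian dictionary with unit-norm atoms.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((64, 256))
    A /= np.linalg.norm(A, axis=0)
    x_true = np.zeros(256)
    x_true[[10, 100, 200]] = [1.5, -2.0, 0.7]
    x_hat = omp(A, A @ x_true, k=3)
    print(np.nonzero(x_hat)[0])   # expected to recover indices 10, 100, 200
    ```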

  8. Strain estimation in 3D by fitting linear and planar data to the March model

    NASA Astrophysics Data System (ADS)

    Mulchrone, Kieran F.; Talbot, Christopher J.

    2016-08-01

    The probability density function associated with the March model is derived and used in a maximum likelihood method to estimate the best fit distribution and 3D strain parameters for a given set of linear or planar data. Typically it is assumed that in the initial state (pre-strain) linear or planar data are uniformly distributed on the sphere which means the number of strain parameters estimated needs to be reduced so that the numerical technique succeeds. Essentially this requires that the data are rotated into a suitable reference frame prior to analysis. The method has been applied to a suitable example from the Dalradian of SW Scotland and results obtained are consistent with those from an independent method of strain analysis. Despite March theory having been incorporated deep into the fabric of geological strain analysis, its full potential as a simple direct 3D strain analytical tool has not been achieved. The method developed here may help remedy this situation.

  9. Comparison of 2-D and 3-D estimates of placental volume in early pregnancy.

    PubMed

    Aye, Christina Y L; Stevenson, Gordon N; Impey, Lawrence; Collins, Sally L

    2015-03-01

    Ultrasound estimation of placental volume (PlaV) between 11 and 13 wk has been proposed as part of a screening test for small-for-gestational-age babies. A semi-automated 3-D technique, validated against the gold standard of manual delineation, has been found at this stage of gestation to predict small-for-gestational-age at term. Recently, when used in the third trimester, an estimate obtained using a 2-D technique was found to correlate with placental weight at delivery. Given its greater simplicity, the 2-D technique might be more useful as part of an early screening test. We investigated if the two techniques produced similar results when used in the first trimester. The correlation between PlaV values calculated by the two different techniques was assessed in 139 first-trimester placentas. The agreement on PlaV and derived "standardized placental volume," a dimensionless index correcting for gestational age, was explored with the Mann-Whitney test and Bland-Altman plots. Placentas were categorized into five different shape subtypes, and a subgroup analysis was performed. Agreement was poor for both PlaV and standardized PlaV (p < 0.001 and p < 0.001), with the 2-D technique yielding larger estimates for both indices compared with the 3-D method. The mean difference in standardized PlaV values between the two methods was 0.007 (95% confidence interval: 0.006-0.009). The best agreement was found for regular rectangle-shaped placentas (p = 0.438 and p = 0.408). The poor correlation between the 2-D and 3-D techniques may result from the heterogeneity of placental morphology at this stage of gestation. In early gestation, the simpler 2-D estimates of PlaV do not correlate strongly with those obtained with the validated 3-D technique.

  10. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0 °) and side pose (34 °). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98 ° for frontal pose and 2.87 ° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
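
    The gaze computation reduces to spherical geometry on a few 2D features. The sketch below is a simplified single-axis illustration of that idea, not the paper's exact model: it converts the midpupil offset from the eye-corner midpoint into a horizontal gaze angle using assumed anthropometric values for the eyeball radius and the corner-to-corner distance.

    ```python
    import numpy as np

    # Rough anthropometric constants, used only for illustration.
    EYE_RADIUS_MM = 12.0      # assumed eyeball radius
    CORNER_DIST_MM = 30.0     # assumed distance between the two eye corners

    def horizontal_gaze_angle(corner_left, corner_right, midpupil):
        """Estimate the horizontal gaze angle (degrees) from 2D eye features.

        corner_left, corner_right, midpupil -- (x, y) pixel coordinates.
        The pixel-to-mm scale is recovered from the assumed corner distance, and
        the pupil offset from the eye center is mapped onto the eyeball sphere.
        """
        corner_left, corner_right, midpupil = map(np.asarray, (corner_left, corner_right, midpupil))
        px_per_mm = np.linalg.norm(corner_right - corner_left) / CORNER_DIST_MM
        eye_center = 0.5 * (corner_left + corner_right)
        offset_mm = (midpupil[0] - eye_center[0]) / px_per_mm
        return np.degrees(np.arcsin(np.clip(offset_mm / EYE_RADIUS_MM, -1.0, 1.0)))

    # Hypothetical frontal-pose measurement: pupil shifted 4 px toward the right corner.
    print(horizontal_gaze_angle((100, 50), (160, 50), (134, 48)))   # ~9.6 degrees
    ```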

  11. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
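
    The ground segmentation starts from a height histogram of the reconstructed points, from which a ground height range is estimated before MRF refinement. The snippet below sketches only that histogram step under simplified assumptions (a single dominant ground level and a fixed tolerance); the Gibbs-MRF label refinement described in the abstract is omitted.

    ```python
    import numpy as np

    def ground_height_range(points_z, bin_size=0.1, tolerance=0.3):
        """Estimate a [low, high] ground height range from a height histogram.

        A minimal stand-in for histogram-based ground detection: the dominant
        height bin is taken as the ground level and widened by a tolerance.
        """
        z = np.asarray(points_z)
        edges = np.arange(z.min(), z.max() + bin_size, bin_size)
        hist, edges = np.histogram(z, bins=edges)
        peak = int(np.argmax(hist))
        ground = 0.5 * (edges[peak] + edges[peak + 1])
        return ground - tolerance, ground + tolerance

    # Synthetic scene: flat ground near z = 0 plus a few tall objects.
    rng = np.random.default_rng(2)
    ground_pts = rng.normal(0.0, 0.05, 5000)
    object_pts = rng.uniform(0.5, 6.0, 800)
    lo, hi = ground_height_range(np.concatenate([ground_pts, object_pts]))
    print(lo, hi)   # roughly -0.3 .. 0.3
    ```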

  12. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    NASA Astrophysics Data System (ADS)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can be only reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization based approach to propagate focus to non-edge image portions, for single image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques, are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low latency, line scanning based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends to faces, our solution solves the most distracting errors. A subjective assessment by paired comparison on a set of challenging low-depth-of-field images shows that the proposed approach achieves equal 3D image quality as optimization based approaches, and that facial blur compensation results in a significant improvement.

  13. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454

  14. Detecting and estimating errors in 3D restoration methods using analog models.

    NASA Astrophysics Data System (ADS)

    José Ramón, Ma; Pueyo, Emilio L.; Briz, José Luis

    2015-04-01

    Some geological scenarios may be important for a number of socio-economic reasons, such as water or energy resources, but the available underground information is often limited, scarce and heterogeneous. A truly 3D reconstruction, which is still necessary during the decision-making process, may have important social and economic implications. For this reason, restoration methods were developed. By honoring some geometric or mechanical laws, they help build a reliable image of the subsurface. Pioneering methods were first applied in 2D (balanced and restored cross-sections) during the sixties and seventies. Later on, owing to improvements in computational capabilities, they were extended to 3D. Currently, there are some academic and commercial restoration solutions: Unfold by the Université de Grenoble, Move by Midland Valley Exploration, Kine3D (on gOcad code) by Paradigm, Dynel3D by igeoss-Schlumberger. We have developed our own restoration method, Pmag3Drest (IGME-Universidad de Zaragoza), which is designed to tackle complex geometrical scenarios using paleomagnetic vectors as a pseudo-3D indicator of deformation. However, all these methods have limitations arising from the assumptions they need to make. For this reason, detecting and estimating uncertainty in 3D restoration methods is of key importance for trusting the reconstructions. Checking the reliability and the internal consistency of every method, as well as comparing the results among restoration tools, is a critical issue never tackled so far because of the impossibility of testing the results in nature. To overcome this problem, we have developed a technique using analog models. We built complex geometric models inspired by real cases of superposed and/or conical folding at laboratory scale. The stratigraphic volumes were modeled using EVA sheets (ethylene vinyl acetate). Their rheology (tensile and tear strength, elongation, density, etc.) and thickness can be chosen among a large number of values

  15. Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera

    NASA Technical Reports Server (NTRS)

    Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.

    1987-01-01

    A new approach is introduced for estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. Researchers introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and amount of processing required.

  16. Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
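
    The sampling probability P for a single core can be approximated by Monte Carlo: sample needle placements around the planned target from an error distribution matching the guidance system's RMS error and count how many fall inside the tumor. The sketch below does this for an idealized spherical tumor targeted at its center, with an isotropic Gaussian error model; the study itself uses contoured, irregular tumor surfaces and the core geometry rather than a point and a sphere.

    ```python
    import numpy as np

    def sampling_probability(radius_mm, rms_error_mm, n_trials=200_000, seed=0):
        """Monte Carlo estimate of the probability that one center-targeted biopsy
        point lands inside a spherical tumor of the given radius.

        The needle delivery error is modeled as an isotropic 3D Gaussian whose
        vector RMS equals rms_error_mm (per-axis sigma = rms / sqrt(3)).
        """
        sigma = rms_error_mm / np.sqrt(3.0)
        rng = np.random.default_rng(seed)
        hits = rng.normal(0.0, sigma, size=(n_trials, 3))
        return float(np.mean(np.linalg.norm(hits, axis=1) <= radius_mm))

    # With a 3.5 mm RMS guidance error, how does the single-core sampling
    # probability grow with (spherical) tumor radius?
    for r in (3.0, 5.0, 7.0):
        print(r, sampling_probability(r, 3.5))
    ```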

  17. Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation.

    PubMed

    Matei, Bogdan; Shan, Ying; Sawhney, Harpreet S; Tan, Yi; Kumar, Rakesh; Huber, Daniel; Hebert, Martial

    2006-07-01

    We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves a sublinear complexity on the number of models, maintaining at the same time a high degree of performance for real 3D sensed data that is acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations from the scene into the whole model database using the Locality Sensitive Hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and the errors of the LSH algorithm. The indexed models are selected based on the MAP rule using posterior probability of the models estimated in the joint 3D-signature space. Experiments with real 3D data employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query.
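
    The indexing step hashes surface descriptors with Locality Sensitive Hashing so that only descriptors colliding in some hash table are considered as candidate matches. The class below is a generic random-hyperplane LSH sketch for real-valued descriptors, assuming cosine-style similarity; it is illustrative only and far simpler than the million-feature database described in the abstract.

    ```python
    import numpy as np
    from collections import defaultdict

    class RandomHyperplaneLSH:
        """Minimal random-hyperplane LSH index for real-valued descriptors."""

        def __init__(self, dim, n_bits=16, n_tables=8, seed=0):
            rng = np.random.default_rng(seed)
            # One set of random hyperplanes per hash table.
            self.planes = rng.standard_normal((n_tables, n_bits, dim))
            self.tables = [defaultdict(list) for _ in range(n_tables)]
            self.data = []

        def _keys(self, x):
            # Each table's key is the sign pattern of the projections.
            return [tuple((p @ x > 0).astype(int)) for p in self.planes]

        def add(self, x, label):
            idx = len(self.data)
            self.data.append((np.asarray(x), label))
            for table, key in zip(self.tables, self._keys(x)):
                table[key].append(idx)

        def query(self, x):
            """Return candidate (label, distance) pairs from colliding buckets."""
            candidates = set()
            for table, key in zip(self.tables, self._keys(x)):
                candidates.update(table[key])
            x = np.asarray(x)
            return sorted(((self.data[i][1], float(np.linalg.norm(self.data[i][0] - x)))
                           for i in candidates), key=lambda c: c[1])

    # Toy usage: index random 32-D descriptors and query a noisy copy of one.
    rng = np.random.default_rng(3)
    index = RandomHyperplaneLSH(dim=32)
    descs = rng.standard_normal((1000, 32))
    for i, d in enumerate(descs):
        index.add(d, f"model_{i}")
    print(index.query(descs[42] + 0.01 * rng.standard_normal(32))[:3])
    ```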

  18. An improved 3-D Look-Locker imaging method for T(1) parameter estimation.

    PubMed

    Nkongchu, Ken; Santyr, Giles

    2005-09-01

    The 3-D Look-Locker (LL) imaging method has been shown to be a highly efficient and accurate method for the volumetric mapping of the spin lattice relaxation time T(1). However, conventional 3-D LL imaging schemes are typically limited to small tip angle RF pulses (<5 degrees), which limits the SNR and the accuracy of T(1) estimation. In this work, a more generalized form of the 3-D LL imaging method that incorporates an additional and variable delay time between recovery samples is described, which permits the use of larger tip angles (>5 degrees), thereby improving the SNR and the accuracy of the method. In phantom studies, a mean T(1) measurement accuracy of less than 2% (0.2-3.1%) using a tip angle of 10 degrees was obtained for a range of T(1) from approximately 300 to 1,700 ms with a measurement time increase of only 15%. This accuracy compares favorably with the conventional 3-D LL method that provided an accuracy between 2.2% and 7.3% using a 5-degree flip angle.

  19. Robust ego-motion estimation and 3-D model refinement using surface parallax.

    PubMed

    Agrawal, Amit; Chellappa, Rama

    2006-05-01

    We present an iterative algorithm for robustly estimating the ego-motion and refining and updating a coarse depth map using parametric surface parallax models and brightness derivatives extracted from an image pair. Given a coarse depth map acquired by a range-finder or extracted from a digital elevation map (DEM), ego-motion is estimated by combining a global ego-motion constraint and a local brightness constancy constraint. Using the estimated camera motion and the available depth estimate, motion of the three-dimensional (3-D) points is compensated. We utilize the fact that the resulting surface parallax field is an epipolar field, and knowing its direction from the previous motion estimates, estimate its magnitude and use it to refine the depth map estimate. The parallax magnitude is estimated using a constant parallax model (CPM) which assumes a smooth parallax field and a depth based parallax model (DBPM), which models the parallax magnitude using the given depth map. We obtain confidence measures for determining the accuracy of the estimated depth values which are used to remove regions with potentially incorrect depth estimates for robustly estimating ego-motion in subsequent iterations. Experimental results using both synthetic and real data (both indoor and outdoor sequences) illustrate the effectiveness of the proposed algorithm.

  20. Parametric estimation of 3D tubular structures for diffuse optical tomography

    PubMed Central

    Larusson, Fridrik; Anderson, Pamela G.; Rosenberg, Elizabeth; Kilmer, Misha E.; Sassaroli, Angelo; Fantini, Sergio; Miller, Eric L.

    2013-01-01

    We explore the use of diffuse optical tomography (DOT) for the recovery of 3D tubular shapes representing vascular structures in breast tissue. Using a parametric level set method (PaLS) our method incorporates the connectedness of vascular structures in breast tissue to reconstruct shape and absorption values from severely limited data sets. The approach is based on a decomposition of the unknown structure into a series of two dimensional slices. Using a simplified physical model that ignores 3D effects of the complete structure, we develop a novel inter-slice regularization strategy to obtain global regularity. We report on simulated and experimental reconstructions using realistic optical contrasts where our method provides a more accurate estimate compared to an unregularized approach and a pixel based reconstruction. PMID:23411913

  1. 3D position estimation using a single coil and two magnetic field sensors.

    PubMed

    Tadayon, P; Staude, G; Felderhoff, T

    2015-01-01

    This paper presents an algorithm which enables the estimation of relative 3D position of a sensor module with two magnetic sensors with respect to a magnetic field source using a single transmitting coil. Starting with the description of the ambiguity problem caused by using a single coil, a system concept comprising two sensors having a fixed spatial relation to each other is introduced which enables the unique determination of the sensors' position in 3D space. For this purpose, an iterative two-step algorithm is presented: In a first step, the data of one sensor is used to limit the number of possible position solutions. In a second step, the spatial relation between the sensors is used to determine the correct sensor position.

  2. Parametric estimation of 3D tubular structures for diffuse optical tomography.

    PubMed

    Larusson, Fridrik; Anderson, Pamela G; Rosenberg, Elizabeth; Kilmer, Misha E; Sassaroli, Angelo; Fantini, Sergio; Miller, Eric L

    2013-02-01

    We explore the use of diffuse optical tomography (DOT) for the recovery of 3D tubular shapes representing vascular structures in breast tissue. Using a parametric level set method (PaLS) our method incorporates the connectedness of vascular structures in breast tissue to reconstruct shape and absorption values from severely limited data sets. The approach is based on a decomposition of the unknown structure into a series of two dimensional slices. Using a simplified physical model that ignores 3D effects of the complete structure, we develop a novel inter-slice regularization strategy to obtain global regularity. We report on simulated and experimental reconstructions using realistic optical contrasts where our method provides a more accurate estimate compared to an unregularized approach and a pixel based reconstruction.

  3. Pose Estimation of Unmanned Aerial Vehicles Based on a Vision-Aided Multi-Sensor Fusion

    NASA Astrophysics Data System (ADS)

    Abdi, G.; Samadzadegan, F.; Kurz, F.

    2016-06-01

    GNSS/IMU navigation systems offer a low-cost and robust solution for navigating UAVs. Since redundant measurements greatly improve the reliability of navigation systems, extensive research has been conducted to enhance the efficiency and robustness of GNSS/IMU with additional sensors. This paper presents a method for integrating reference data, images taken from UAVs, barometric height data and GNSS/IMU data to estimate accurate and reliable pose parameters of UAVs. We provide improved pose estimates by integrating multi-sensor observations in an EKF algorithm with an IMU motion model. The implemented methodology has been demonstrated to be efficient and reliable for automatic pose estimation. The calculated position and attitude of the UAV, especially when the GNSS was removed from the working cycle, clearly indicate the capability of the proposed methodology.
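
    The fusion itself follows the standard EKF predict/update cycle: propagate the state with the IMU motion model, then correct it with GNSS, image-based, and barometric observations. The snippet below shows that cycle on a deliberately tiny 1-D constant-velocity state with a single position measurement (so the Jacobians reduce to constant matrices); it is a structural skeleton, not the paper's UAV state vector or measurement models.

    ```python
    import numpy as np

    def kf_step(x, P, z, dt, q=0.5, r=2.0):
        """One predict/update cycle of a 1-D constant-velocity Kalman filter.

        x : state [position, velocity], P : state covariance,
        z : position measurement (e.g. a GNSS or vision fix), dt : time step.
        """
        F = np.array([[1.0, dt], [0.0, 1.0]])            # motion model
        Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                          [dt**2 / 2, dt]])              # process noise
        H = np.array([[1.0, 0.0]])                       # we observe position only
        R = np.array([[r]])

        # Predict with the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the measurement.
        y = z - H @ x                                    # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        return x, P

    # Track a target moving at 1 m/s from noisy position fixes.
    rng = np.random.default_rng(4)
    x, P = np.array([0.0, 0.0]), np.eye(2)
    for k in range(1, 21):
        z = np.array([k * 1.0 + rng.normal(0, 1.0)])
        x, P = kf_step(x, P, z, dt=1.0)
    print(x)    # position ~ 20, velocity ~ 1
    ```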

  4. 3D Porosity Estimation of the Nankai Trough Sediments from Core-log-seismic Integration

    NASA Astrophysics Data System (ADS)

    Park, J. O.

    2015-12-01

    The Nankai Trough off southwest Japan is one of the best subduction zones in which to study megathrust earthquake faults. Historic, great megathrust earthquakes with a recurrence interval of 100-200 yr have generated strong motion and large tsunamis along the Nankai Trough subduction zone. At the Nankai Trough margin, the Philippine Sea Plate (PSP) is being subducted beneath the Eurasian Plate to the northwest at a convergence rate of ~4 cm/yr. The Shikoku Basin, the northern part of the PSP, is estimated to have opened between 25 and 15 Ma by backarc spreading of the Izu-Bonin arc. The >100-km-wide Nankai accretionary wedge, which has developed landward of the trench since the Miocene, mainly consists of offscraped and underplated materials from the trough-fill turbidites and the Shikoku Basin hemipelagic sediments. In particular, the physical properties of the incoming hemipelagic sediments may be critical for the seismogenic behavior of the megathrust fault. We have carried out core-log-seismic integration (CLSI) to estimate 3D acoustic impedance and porosity for the incoming sediments in the Nankai Trough. For the CLSI, we used 3D seismic reflection data, P-wave velocity and density data obtained during IODP (Integrated Ocean Drilling Program) Expeditions 322 and 333. We computed acoustic impedance depth profiles for the IODP drilling sites from P-wave velocity and density data. We constructed seismic convolution models with the acoustic impedance profiles and a source wavelet extracted from the seismic data, adjusting the seismic models to the observed seismic traces with an inversion method. As a result, we obtained a 3D acoustic impedance volume and then converted it to a 3D porosity volume. In general, the 3D porosities decrease with depth. We found a porosity anomaly zone with alternating high and low porosities seaward of the trough axis. In this talk, we will show detailed 3D porosity of the incoming sediments, and present implications of the porosity anomaly zone for the

  5. Estimability of thrusting trajectories in 3-D from a single passive sensor with unknown launch point

    NASA Astrophysics Data System (ADS)

    Yuan, Ting; Bar-Shalom, Yaakov; Willett, Peter; Ben-Dov, R.; Pollak, S.

    2013-09-01

    The problem of estimating the state of thrusting/ballistic endoatmospheric projectiles moving in 3-dimensional (3-D) space using 2-dimensional (2-D) measurements from a single passive sensor is investigated. The location of the projectile's launch point (LP) is unavailable, and this could significantly affect the performance of the estimation and the impact point prediction (IPP). The LP altitude is then an unknown target parameter. The estimability is analyzed based on the Fisher Information Matrix (FIM) of the target parameter vector, comprising the initial launch (azimuth and elevation) angles, drag coefficient, thrust and the LP altitude, which determine the trajectory according to a nonlinear motion equation. A full-rank FIM ensures that the target parameters are estimable. The corresponding Cramér-Rao lower bound (CRLB) quantifies the estimation performance of a statistically efficient estimator and can be used for IPP. In view of the inherent nonlinearity of the problem, the maximum likelihood (ML) estimate of the target parameter vector is found by using a mixed (partially grid-based) search approach. For a selected grid in the drag-coefficient-thrust-altitude subspace, the proposed parallelizable approach is shown to have reliable estimation performance and further leads to a final IPP of high accuracy.
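
    The estimability test amounts to checking the rank of the Fisher Information Matrix, and the CRLB is its inverse when the FIM is full rank. The sketch below computes FIM = Jᵀ R⁻¹ J with a finite-difference Jacobian for a generic angle-only toy model (a constant-velocity 2-D target observed from a known maneuvering sensor); the projectile dynamics, parameterization, and grid-based ML search of the paper are not reproduced here.

    ```python
    import numpy as np

    def numerical_jacobian(h, theta, eps=1e-6):
        """Finite-difference Jacobian of the measurement function h at theta."""
        z0 = h(theta)
        J = np.zeros((z0.size, theta.size))
        for i in range(theta.size):
            dt = np.zeros_like(theta)
            dt[i] = eps
            J[:, i] = (h(theta + dt) - z0) / eps
        return J

    def fim_and_crlb(h, theta, R):
        """Fisher information J^T R^-1 J and, if it is full rank, the CRLB."""
        J = numerical_jacobian(h, theta)
        fim = J.T @ np.linalg.inv(R) @ J
        estimable = np.linalg.matrix_rank(fim) == theta.size
        crlb = np.linalg.inv(fim) if estimable else None
        return fim, estimable, crlb

    # Toy measurement model: bearings to a constant-velocity 2D target,
    # theta = [x0, y0, vx, vy], seen from a known maneuvering sensor.
    def bearings(theta):
        t = np.arange(10.0)
        sx, sy = 2.0 * np.sin(0.5 * t), 0.5 * t          # known sensor trajectory
        x = theta[0] + theta[2] * t - sx
        y = theta[1] + theta[3] * t - sy
        return np.arctan2(y, x)

    theta = np.array([5.0, 3.0, 1.0, -0.5])
    R = (0.01 ** 2) * np.eye(10)                         # 10 mrad bearing noise
    fim, estimable, crlb = fim_and_crlb(bearings, theta, R)
    print(estimable, np.sqrt(np.diag(crlb)) if estimable else None)
    ```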

  6. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388

  7. Spatiotemporal non-rigid image registration for 3D ultrasound cardiac motion estimation

    NASA Astrophysics Data System (ADS)

    Loeckx, D.; Ector, J.; Maes, F.; D'hooge, J.; Vandermeulen, D.; Voigt, J.-U.; Heidbüchel, H.; Suetens, P.

    2007-03-01

    We present a new method to evaluate 4D (3D + time) cardiac ultrasound data sets by nonrigid spatio-temporal image registration. First, a frame-to-frame registration is performed that yields a dense deformation field. The deformation field is used to calculate local spatiotemporal properties of the myocardium, such as the velocity, strain and strain rate. The field is also used to propagate particular points and surfaces, representing, e.g., the endocardial surface, over the different frames. As such, the 4D paths of these points are obtained, which can be used to calculate the velocity by which the wall moves and the evolution of the local surface area over time. The wall velocity is not angle-dependent as in classical Doppler imaging, since the 4D data allow calculating the true 3D motion. Similarly, all 3D myocardial strain components can be estimated. Combined, they result in local surface area or volume changes, which can be color-coded as a measure of local contractility. A diagnostic method that strongly benefits from this technique is cardiac motion and deformation analysis, which is an important aid to quantify the mechanical properties of the myocardium.

  8. Twin-beam real-time position estimation of micro-objects in 3D

    NASA Astrophysics Data System (ADS)

    Gurtner, Martin; Zemánek, Jiří

    2016-12-01

    Various optical methods for measuring positions of micro-objects in 3D have been reported in the literature. Nevertheless, the majority of them are not suitable for real-time operation, which is needed, for example, for feedback position control. In this paper, we present a method for real-time estimation of the position of micro-objects in 3D; the method is based on twin-beam illumination and requires only a very simple hardware setup whose essential part is a standard image sensor without any lens. The performance of the proposed method is tested during a micro-manipulation task in which the estimated position served as feedback for the controller. The experiments show that the estimate is accurate to within ∼3 μm in the lateral position and ∼7 μm in the axial distance with the refresh rate of 10 Hz. Although the experiments are done using spherical objects, the presented method could be modified to handle non-spherical objects as well.

  9. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    USGS Publications Warehouse

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000x, thereby greatly improving the applicability of the method.
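
    The estimator builds on a kernel density estimate of the animal's utilization distribution in three dimensions. The snippet below shows a plain (non-movement-based) 3-D Gaussian KDE over synthetic GPS fixes with scipy, evaluated on a coarse grid; the movement-based weighting and the HPC optimizations discussed in the abstract are not included.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Synthetic GPS fixes (x, y in meters, z = altitude) for one animal.
    rng = np.random.default_rng(5)
    fixes = np.column_stack([
        rng.normal(0, 50, 500),      # easting
        rng.normal(0, 80, 500),      # northing
        rng.normal(20, 10, 500),     # altitude
    ])

    # Plain 3D Gaussian KDE of the utilization distribution (rows = dimensions).
    kde = gaussian_kde(fixes.T)

    # Evaluate on a coarse 3D grid; isopleths of this density (e.g. 95%) are a
    # common space-use summary.
    gx, gy, gz = np.meshgrid(np.linspace(-200, 200, 30),
                             np.linspace(-300, 300, 30),
                             np.linspace(-20, 60, 15), indexing="ij")
    density = kde(np.vstack([gx.ravel(), gy.ravel(), gz.ravel()])).reshape(gx.shape)
    print(density.shape, density.max())
    ```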

  10. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
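
    The subject-specific motion model is a principal component analysis of respiratory-correlated deformation vector fields: intra-fraction 3-D DVFs are reconstructed as the mean field plus a few PCA modes whose weights are fitted to the fast 2-D acquisitions. The sketch below reproduces only that PCA-plus-partial-observation idea on synthetic, flattened DVFs, with a subset of elements standing in for the voxels visible on a 2-D cine slice; sizes and signals are invented for illustration.

    ```python
    import numpy as np

    def build_motion_model(dvfs, n_modes=2):
        """PCA motion model from training DVFs (each flattened to one row)."""
        mean = dvfs.mean(axis=0)
        _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
        return mean, vt[:n_modes]                 # mean field + principal modes

    def fit_weights(mean, modes, observed_idx, observed_vals):
        """Least-squares fit of mode weights from a partial observation
        (the observed indices stand in for voxels seen on a fast 2D slice)."""
        A = modes[:, observed_idx].T              # (n_observed, n_modes)
        b = observed_vals - mean[observed_idx]
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return mean + w @ modes                   # reconstructed full 3D DVF

    # Synthetic training set: 10 respiratory phases of a 3000-element DVF driven
    # by two latent respiratory signals, plus noise.
    rng = np.random.default_rng(6)
    basis = rng.standard_normal((2, 3000))
    phases = np.linspace(0, 2 * np.pi, 10, endpoint=False)
    latents = np.column_stack([np.sin(phases), np.cos(phases)])
    train = latents @ basis + 0.01 * rng.standard_normal((10, 3000))

    mean, modes = build_motion_model(train, n_modes=2)
    true_full = np.array([np.sin(1.0), np.cos(1.0)]) @ basis      # unseen phase
    obs_idx = np.arange(0, 3000, 30)                              # "2D slice" voxels
    recon = fit_weights(mean, modes, obs_idx, true_full[obs_idx])
    print(np.abs(recon - true_full).max())                        # small residual
    ```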

  11. Fast myocardial strain estimation from 3D ultrasound through elastic image registration with analytic regularization

    NASA Astrophysics Data System (ADS)

    Chakraborty, Bidisha; Heyde, Brecht; Alessandrini, Martino; D'hooge, Jan

    2016-04-01

    Image registration techniques using free-form deformation models have shown promising results for 3D myocardial strain estimation from ultrasound. However, the use of this technique has mostly been limited to research institutes due to the high computational demand, which is primarily due to the computational load of the regularization term ensuring spatially smooth cardiac strain estimates. Indeed, this term typically requires evaluating derivatives of the transformation field numerically in each voxel of the image during every iteration of the optimization process. In this paper, we replace this time-consuming step with a closed-form solution directly associated with the transformation field resulting in a speed up factor of ~10-60,000, for a typical 3D B-mode image of 250³ and 500³ voxels, depending upon the size and the parametrization of the transformation field. The performance of the numeric and the analytic solutions was contrasted by computing tracking and strain accuracy on two realistic synthetic 3D cardiac ultrasound sequences, mimicking two ischemic motion patterns. Mean and standard deviation of the displacement errors over the cardiac cycle for the numeric and analytic solutions were 0.68+/-0.40 mm and 0.75+/-0.43 mm respectively. Correlations for the radial, longitudinal and circumferential strain components at end-systole were 0.89, 0.83 and 0.95 versus 0.90, 0.88 and 0.92 for the numeric and analytic regularization respectively. The analytic solution matched the performance of the numeric solution as no statistically significant differences (p>0.05) were found when expressed in terms of bias or limits-of-agreement.

  12. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    PubMed Central

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556

  13. Monocular Vision- and IMU-Based System for Prosthesis Pose Estimation During Total Hip Replacement Surgery.

    PubMed

    Su, Shaojie; Zhou, Yixin; Wang, Zhihua; Chen, Hong

    2017-03-30

    As the average age of the population increases worldwide, so does the number of total hip replacement surgeries. Total hip replacement, however, often involves a risk of dislocation and prosthetic impingement. To minimize the risk after surgery, we propose an instrumented hip prosthesis that estimates the relative pose between prostheses intraoperatively and ensures the placement of prostheses within a safe zone. We create a model of the hip prosthesis as a ball and socket joint, which has four degrees of freedom (DOFs), including 3-DOF rotation and 1-DOF translation. We mount a camera and an inertial measurement unit (IMU) inside the hollow ball, or "femoral head prosthesis," while printing customized patterns on the internal surface of the socket, or "acetabular cup." Since the sensors were rigidly fixed to the femoral head prosthesis, measuring its motions poses a sensor ego-motion estimation problem. By matching feature points in images of the reference patterns, we propose a monocular vision-based method with a relative error of less than 7% in the 3-DOF rotation and 8% in the 1-DOF translation. Further, to reduce system power consumption, we apply the IMU with its data fused by an extended Kalman filter to replace the camera in the 3-DOF rotation estimation, which yields a less than 4.8% relative error and a 21.6% decrease in power consumption. Experimental results show that the best approach to prosthesis pose estimation is a combination of monocular vision-based translation estimation and IMU-based rotation estimation, and we have verified the feasibility and validity of this system in prosthesis pose estimation.

  14. Integrated contour detection and pose estimation for fluoroscopic analysis of knee implants.

    PubMed

    Prins, A H; Kaptein, B L; Stoel, B C; Nelissen, R G H H; Reiber, J H C; Valstar, E R

    2011-08-01

    With fluoroscopic analysis of knee implant kinematics the implant contour must be detected in each image frame, followed by estimation of the implant pose. With a large number of possibly low-quality images, the contour detection is a time-consuming bottleneck. The present paper proposes an automated contour detection method, which is integrated in the pose estimation. In a phantom experiment the automated method was compared with a standard method, which uses manual selection of correct contour parts. Both methods demonstrated comparable precision, with a minor difference in the Y-position (0.08 mm versus 0.06 mm). The precision of each method was so small (below 0.2 mm and 0.3 degrees) that both are sufficiently accurate for clinical research purposes. The efficiency of both methods was assessed on six clinical datasets. With the automated method the observer spent 1.5 min per image, significantly less than 3.9 min with the standard method. A Bland-Altman analysis between the methods demonstrated no discernible trends in the relative femoral poses. The threefold increase in efficiency demonstrates that a pose estimation approach with integrated contour detection is more intuitive than a standard method. It eliminates most of the manual work in fluoroscopic analysis, with sufficient precision for clinical research purposes.

  15. Robust Head-Pose Estimation Based on Partially-Latent Mixture of Linear Regressions

    NASA Astrophysics Data System (ADS)

    Drouard, Vincent; Horaud, Radu; Deleforge, Antoine; Ba, Sileye; Evangelidis, Georgios

    2017-03-01

    Head-pose estimation has many applications, such as social event analysis, human-robot and human-computer interaction, driving assistance, and so forth. Head-pose estimation is challenging because it must cope with changing illumination conditions, variabilities in face orientation and in appearance, partial occlusions of facial landmarks, as well as bounding-box-to-face alignment errors. We propose to use a mixture of linear regressions with partially-latent output. This regression method learns to map high-dimensional feature vectors (extracted from bounding boxes of faces) onto the joint space of head-pose angles and bounding-box shifts, such that they are robustly predicted in the presence of unobservable phenomena. We describe in detail the mapping method that combines the merits of unsupervised manifold learning techniques and of mixtures of regressions. We validate our method with three publicly available datasets and we thoroughly benchmark four variants of the proposed algorithm with several state-of-the-art head-pose estimation methods.

  16. On-line 3D motion estimation using low resolution MRI

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2015-08-01

    Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the needed tissue signal for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimations. Potentially, spatially undersampled images yield comparable motion estimations. At the same time, their acquisition times would reduce greatly due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations are extracted consistently using the same motion estimation for each downsampled dataset. Errors between the original and the respectively downsampled version of the dataset are then evaluated. Compared to ground-truth, results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In a therapy guidance scenario, MRI imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.

  17. Estimation of foot pressure from human footprint depths using 3D scanner

    NASA Astrophysics Data System (ADS)

    Wibowo, Dwi Basuki; Haryadi, Gunawan Dwi; Priambodo, Agus

    2016-03-01

    The analysis of normal and pathological variation in human foot morphology is central to several biomedical disciplines, including orthopedics, orthotic design, sports sciences, and physical anthropology, and it is also important for efficient footwear design. A classic and frequently used approach to study foot morphology is analysis of the footprint shape and footprint depth. Footprints are relatively easy to produce and to measure, and they can be preserved naturally in different soils. In this study, we correlate footprint depth with the corresponding foot pressure of an individual using a 3D scanner. Several approaches are used for modeling and estimating footprint depths and foot pressures. The deepest footprint point is calculated as the maximum z coordinate minus the minimum z coordinate, and the average foot pressure is calculated as the ground reaction force (GRF) divided by the foot contact area, which corresponds to the average footprint depth. Footprint depth was evaluated by importing the 3D scanner file (dxf) into AutoCAD; the z-coordinates were then sorted from the highest to the lowest value in Microsoft Excel to render the footprint depth in different colors. This is only a qualitative study, because no foot pressure device was used as a comparator; the resulting maximum pressure on the calcaneus is 3.02 N/cm2, on the lateral arch 3.66 N/cm2, and on the metatarsals and hallux 3.68 N/cm2.
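
    The two quantities used in the abstract come down to simple arithmetic: the deepest footprint depth is the spread of the scanned z coordinates, and the average pressure is the ground reaction force divided by the contact area. A minimal sketch under assumed units and hypothetical numbers:

    ```python
    def deepest_footprint_depth(z_coords):
        """Deepest footprint depth as the spread of scanned z coordinates (same units)."""
        return max(z_coords) - min(z_coords)

    def average_pressure(ground_reaction_force_n, contact_area_cm2):
        """Average foot pressure in N/cm^2 from GRF and contact area."""
        return ground_reaction_force_n / contact_area_cm2

    # Hypothetical numbers: 75 kg subject standing on one foot, 230 cm^2 contact area.
    grf = 75 * 9.81                                        # ground reaction force in newtons
    print(average_pressure(grf, 230.0))                    # ~3.2 N/cm^2
    print(deepest_footprint_depth([10.2, 9.8, 7.4, 8.1]))  # 2.8
    ```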

  18. Multiview diffeomorphic registration: application to motion and strain estimation from 3D echocardiography.

    PubMed

    Piella, Gemma; De Craene, Mathieu; Butakoff, Constantine; Grau, Vicente; Yao, Cheng; Nedjati-Gilani, Shahrum; Penney, Graeme P; Frangi, Alejandro F

    2013-04-01

    This paper presents a new registration framework for quantifying myocardial motion and strain from the combination of multiple 3D ultrasound (US) sequences. The originality of our approach lies in the estimation of the transformation directly from the input multiple views rather than from a single view or a reconstructed compounded sequence. This allows us to exploit all spatiotemporal information available in the input views avoiding occlusions and image fusion errors that could lead to some inconsistencies in the motion quantification result. We propose a multiview diffeomorphic registration strategy that enforces smoothness and consistency in the spatiotemporal domain by modeling the 4D velocity field continuously in space and time. This 4D continuous representation considers 3D US sequences as a whole, therefore allowing to robustly cope with variations in heart rate resulting in different number of images acquired per cardiac cycle for different views. This contributes to the robustness gained by solving for a single transformation from all input sequences. The similarity metric takes into account the physics of US images and uses a weighting scheme to balance the contribution of the different views. It includes a comparison both between consecutive images and between a reference and each of the following images. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement fields. Registration and strain accuracy were evaluated on synthetic 3D US sequences with known ground truth. Experiments were also conducted on multiview 3D datasets of 8 volunteers and 1 patient treated by cardiac resynchronization therapy. Strain curves obtained from our multiview approach were compared to the single-view case, as well as with other multiview approaches. For healthy cases, the inclusion of several views improved the consistency of the strain curves and reduced the number of segments where a non-physiological strain pattern was

  19. Estimation of single cell volume from 3D confocal images using automatic data processing

    NASA Astrophysics Data System (ADS)

    Chorvatova, A.; Cagalinec, M.; Mateasik, A.; Chorvat, D., Jr.

    2012-06-01

    Cardiac cells are highly structured with a non-uniform morphology. Although precise estimation of their volume is essential for correct evaluation of hypertrophic changes of the heart, simple and unified techniques that allow determination of the single cardiomyocyte volume with sufficient precision are still limited. Here, we describe a novel approach to assess the cell volume from confocal microscopy 3D images of living cardiac myocytes. We propose a fast procedure based on segmentation using active deformable contours. This technique is independent of laser gain and/or pinhole settings and is also applicable to images of cells stained with low-fluorescence markers. The presented approach is a promising new tool to investigate changes in cell volume during normal as well as pathological growth, as we demonstrate for cell enlargement during hypertension in rats.

  20. 3D viscosity maps for Greenland and effect on GRACE mass balance estimates

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Xu, Zheng

    2016-04-01

    The GRACE satellite mission measures mass loss of the Greenland ice sheet. To correct for glacial isostatic adjustment (GIA), numerical models are used. Although generally found to be a small signal, the full range of possible GIA models has not been explored yet. In particular, low viscosities due to a wet mantle and high temperatures due to the nearby Iceland hotspot could have a significant effect on GIA gravity rates. The goal of this study is to present a range of possible viscosity maps and to investigate the effect on GRACE mass balance estimates. Viscosity is derived using flow laws for olivine. Mantle temperature is computed from global seismology models, based on temperature derivatives for different mantle compositions. An indication of grain sizes is obtained from xenolith findings at a few locations. We also investigate the weakening effect of the presence of melt. To calculate gravity rates, we use a finite-element GIA model with the 3D viscosity maps and the ICE-5G loading history. GRACE mass balances for mascons in Greenland are derived with a least-squares inversion, using separate constraints for the inland and coastal areas of Greenland. Biases in the least-squares inversion are corrected using scale factors estimated from a simulation based on a surface mass balance model (Xu et al., submitted to The Cryosphere). Model results show enhanced gravity rates in the west and south of Greenland with 3D viscosity maps, compared to GIA models with 1D viscosity. The effect on regional mass balance is up to 5 Gt/year. Regionally low viscosity can make present-day gravity rates sensitive to ice thickness changes over the last decades. Therefore, an improved ice loading history for these time scales is needed.

  1. Model-based Object Localization and Pose Estimation Method Robust Against the Stereo Miscorrespondence

    NASA Astrophysics Data System (ADS)

    Watanabe, Masaharu; Tomita, Fumiaki; Maruyama, Kenichi; Kawai, Yoshihiro; Fujimura, Kouta

    Miscorrespondence in stereo image analysis, caused by occlusion among images or by failures in edge detection, often occurs in real factory environments and seriously disturbs object localization and pose estimation. This work shows that, even under such conditions, the location and attitude of the target object can be measured precisely, based on three-baseline trinocular stereo image analysis, using a “model-based verification” method, i.e., a model-based object recognition method that includes a multi-modal optimization algorithm. This method is suitable for real applications that need object localization and pose estimation, such as bin picking of parts randomly placed in factory automation settings.

  2. Learning a Tracking and Estimation Integrated Graphical Model for Human Pose Tracking.

    PubMed

    Zhao, Lin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-12-01

    We investigate the tracking of 2-D human poses in a video stream to determine the spatial configuration of body parts in each frame; this is not a trivial task because people may wear different kinds of clothing and may move very quickly and unpredictably. Pose estimation is typically applied, but it ignores temporal context and cannot provide smooth, reliable tracking results. Therefore, we develop a tracking and estimation integrated model (TEIM) to fully exploit temporal information by integrating pose estimation with visual tracking. However, joint parsing of multiple articulated parts over time is difficult, because a full model with edges capturing all pairwise relationships within and between frames is loopy and intractable. Previous models usually resorted to approximate inference, which cannot guarantee good results and incurs a large computational cost. We overcome these problems by exploring the idea of divide and conquer, which decomposes the full model into two much simpler tractable submodels. In addition, a novel two-step iteration strategy is proposed to efficiently conquer the joint parsing problem. Algorithmically, we design TEIM very carefully so that: 1) it enables pose estimation and visual tracking to compensate for each other to achieve desirable tracking results; 2) it is able to deal with the problem of tracking loss; and 3) it only needs past information and is capable of tracking online. Experiments are conducted on two public data sets in the wild with ground truth layout annotations, and the experimental results indicate the effectiveness of the proposed TEIM framework.

  3. Real-Time Head Pose Estimation Using a WEBCAM: Monocular Adaptive View-Based Appearance Model

    DTIC Science & Technology

    2008-12-01

    Huang (2006). Graph embedded analysis for head pose estimation. In Proc. IEEE Intl. Conf. Automatic Face and Gesture Recognition, pp. 3–8. Fu, Y. and T...of human-computer and human-robot interaction. Possible applications include novel computer input devices (Fu and Huang, 2007), head gesture ... recognition, driver fatigue recognition systems (Baker et al., 2004), attention awareness for intelligent tutoring systems, and social interaction

  4. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
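
    A rough sketch of the keypoint-matching and frame-rejection steps using OpenCV ORB features; the thresholds and the cross-check matching strategy are assumptions for illustration, not the exact algorithm described above.

      import cv2
      import numpy as np

      def vertical_disparity_stats(left_gray, right_gray, min_matches=50):
          # Match ORB keypoints between the left and right frames and return the
          # mean absolute vertical disparity in pixels, or None if the frame pair
          # should be discarded (too few or unreliable matches).
          orb = cv2.ORB_create(nfeatures=1000)
          kp_l, des_l = orb.detectAndCompute(left_gray, None)
          kp_r, des_r = orb.detectAndCompute(right_gray, None)
          if des_l is None or des_r is None:
              return None
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = matcher.match(des_l, des_r)
          if len(matches) < min_matches:        # insufficiently rich constellation
              return None
          dy = np.array([kp_l[m.queryIdx].pt[1] - kp_r[m.trainIdx].pt[1]
                         for m in matches])
          if np.std(dy) > 5.0:                  # heuristic: likely erroneous matches
              return None
          return float(np.mean(np.abs(dy)))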

  5. Estimating Vehicle Pose Relative to Current Lane from Fisheye Camera System

    NASA Astrophysics Data System (ADS)

    Li, Shigang; Oshima, Hideki; Nakanishi, Isao

    A fisheye camera system is usually used for eliminating the blind spot around a vehicle. In this paper we propose a method of estimating vehicle pose relative to the current lane from the side fisheye cameras of such a system. A side fisheye camera with a hemispherical field of view can observe the side boundary of the vehicle and the lane markings simultaneously. An algorithm for estimating the distance and the relative orientation between the vehicle and the current lane is presented based on the side boundary of the vehicle and the nearest lane marking. Experimental results are also presented to show the effectiveness of the proposed method.

  6. Pose and Motion Estimation Using Dual Quaternion-Based Extended Kalman Filtering

    SciTech Connect

    Goddard, J.S.; Abidi, M.A.

    1998-06-01

    A solution to the remote three-dimensional (3-D) measurement problem is presented for a dynamic system given a sequence of two-dimensional (2-D) intensity images of a moving object. The 3-D transformation is modeled as a nonlinear stochastic system with the state estimate providing the six-degree-of-freedom motion and position values as well as structure. The stochastic model uses the iterated extended Kalman filter (IEKF) as a nonlinear estimator and a screw representation of the 3-D transformation based on dual quaternions. Dual quaternions, whose elements are dual numbers, provide a means to represent both rotation and translation in a unified notation. Linear object features, represented as dual vectors, are transformed using the dual quaternion transformation and are then projected to linear features in the image plane. The method has been implemented and tested with both simulated and actual experimental data. Simulation results are provided, along with comparisons to a point-based IEKF method using rotation and translation, to show the relative advantages of this method. Experimental results from testing using a camera mounted on the end effector of a robot arm are also given.
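
    To illustrate the unified rotation-plus-translation notation that dual quaternions provide, here is a small NumPy sketch that composes two rigid transforms by dual-quaternion multiplication; it is only a notation demo under standard conventions, not the paper's IEKF.

      import numpy as np

      def qmul(a, b):
          # Hamilton product of quaternions given as (w, x, y, z).
          aw, ax, ay, az = a
          bw, bx, by, bz = b
          return np.array([aw*bw - ax*bx - ay*by - az*bz,
                           aw*bx + ax*bw + ay*bz - az*by,
                           aw*by - ax*bz + ay*bw + az*bx,
                           aw*bz + ax*by - ay*bx + az*bw])

      def qconj(q):
          return np.array([q[0], -q[1], -q[2], -q[3]])

      def dq_from_rt(q_r, t):
          # Dual quaternion (real, dual) for a rotation q_r followed by translation t.
          t_q = np.array([0.0, t[0], t[1], t[2]])
          return q_r, 0.5 * qmul(t_q, q_r)

      def dq_mul(a, b):
          # Compose rigid transforms: b is applied first, then a.
          ar, ad = a
          br, bd = b
          return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

      def dq_to_rt(dq):
          # Recover the rotation quaternion and translation vector.
          qr, qd = dq
          t = 2.0 * qmul(qd, qconj(qr))
          return qr, t[1:]

      # Example: a 90-degree rotation about z plus a shift, composed with a pure shift.
      q90z = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
      A = dq_from_rt(q90z, [1.0, 0.0, 0.0])
      B = dq_from_rt(np.array([1.0, 0.0, 0.0, 0.0]), [0.0, 2.0, 0.0])
      q, t = dq_to_rt(dq_mul(A, B))
      print(q, t)   # t is approximately [-1, 0, 0]: rotate [0, 2, 0] by 90 deg, then add [1, 0, 0]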

  7. UAV based 3D digital surface model to estimate paleolandscape in high mountainous environment

    NASA Astrophysics Data System (ADS)

    Mészáros, János; Árvai, Mátyás; Kohán, Balázs; Deák, Márton; Nagy, Balázs

    2016-04-01

    reliable results and resolution. Based on the sediment layers of the peat bog together with the generated 3D surface model the paleoenvironment, the largest paleowater level can be reconstructed and we can estimate the dimension of the landslide which created the basin of the peat bog.

  8. Population Estimation Using a 3D City Model: A Multi-Scale Country-Wide Study in the Netherlands

    PubMed Central

    Arroyo Ohori, Ken; Ledoux, Hugo; Peters, Ravi; Stoter, Jantien

    2016-01-01

    The remote estimation of a region’s population has for decades been a key application of geographic information science in demography. Most studies have used 2D data (maps, satellite imagery) to estimate population while avoiding field surveys and questionnaires. As the availability of semantic 3D city models is constantly increasing, we investigate to what extent they can be used for the same purpose. Based on the assumption that housing space is a proxy for the number of its residents, we use two methods to estimate the population with 3D city models in two directions: (1) disaggregation (areal interpolation) to estimate the population of small administrative entities (e.g. neighbourhoods) from that of larger ones (e.g. municipalities); and (2) a statistical modelling approach to estimate the population of large entities from a sample composed of their smaller ones (e.g. one acquired by a government register). Starting from a complete Dutch census dataset at the neighbourhood level and a 3D model of all 9.9 million buildings in the Netherlands, we compare the population estimates obtained by both methods with the actual population as reported in the census, and use them to evaluate the quality that can be achieved by estimations at different administrative levels. We also analyse how the volume-based estimation enabled by 3D city models fares in comparison to 2D methods using building footprints and floor areas, as well as how it is affected by different levels of semantic detail in a 3D city model. We conclude that 3D city models are useful for estimations of large areas (e.g. for a country), and that the 3D approach has clear advantages over the 2D approach. PMID:27254151
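
    A minimal sketch of the volume-proportional disaggregation idea (housing space as a proxy for residents); the neighbourhood names and volumes are made up for illustration.

      def disaggregate_population(municipality_population, neighbourhood_volumes):
          # Distribute a municipality's population over its neighbourhoods in
          # proportion to their residential building volume (e.g. in m^3).
          total = sum(neighbourhood_volumes.values())
          return {name: municipality_population * vol / total
                  for name, vol in neighbourhood_volumes.items()}

      # Illustrative numbers only.
      volumes = {"A": 1.2e6, "B": 0.6e6, "C": 0.2e6}
      print(disaggregate_population(10000, volumes))
      # {'A': 6000.0, 'B': 3000.0, 'C': 1000.0}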

  9. Estimating 3D movements from 2D observations using a continuous model of helical swimming.

    PubMed

    Gurarie, Eliezer; Grünbaum, Daniel; Nishizaki, Michael T

    2011-06-01

    Helical swimming is among the most common movement behaviors in a wide range of microorganisms, and these movements have direct impacts on distributions, aggregations, encounter rates with prey, and many other fundamental ecological processes. Microscopy and video technology enable the automated acquisition of large amounts of tracking data; however, these data are typically two-dimensional. The difficulty of quantifying the third movement component complicates understanding of the biomechanical causes and ecological consequences of helical swimming. We present a versatile continuous stochastic model, the correlated velocity helical movement (CVHM) model, that characterizes helical swimming with intrinsic randomness and autocorrelation. The model separates an organism's instantaneous velocity into a slowly varying advective component and a perpendicularly oriented rotation, with velocities, magnitude of stochasticity, and autocorrelation scales defined for both components. All but one of the parameters of the 3D model can be estimated directly from a two-dimensional projection of helical movement with no numerical fitting, making it computationally very efficient. As a case study, we estimate swimming parameters from videotaped trajectories of a toxic unicellular alga, Heterosigma akashiwo (Raphidophyceae). The algae were reared from five strains originally collected from locations in the Atlantic and Pacific Oceans, where they have caused Harmful Algal Blooms (HABs). We use the CVHM model to quantify cell-level and strain-level differences in all movement parameters, demonstrating the utility of the model for identifying strains that are difficult to distinguish by other means.

  10. 3D Model Uncertainty in Estimating the Inner Edge of the Habitable Zone

    NASA Astrophysics Data System (ADS)

    Abbot, D. S.; Yang, J.; Wolf, E. T.; Leconte, J.; Merlis, T. M.; Koll, D. D. B.; Goldblatt, C.; Ding, F.; Forget, F.; Toon, B.

    2015-12-01

    Accurate estimates of the width of the habitable zone are critical for determining which exoplanets are potentially habitable and for estimating the frequency of Earth-like planets in the galaxy. Recently, the inner edge of the habitable zone has been calculated using 3D atmospheric general circulation models (GCMs) that include the effects of subsaturation and clouds, but different models obtain different results. We study potential sources of differences in five GCMs through a series of comparisons of radiative transfer, clouds, and dynamical cores for a rapidly rotating planet around the Sun and a synchronously rotating planet around an M star. We find that: (1) cloud parameterization leads to the largest differences among the models; (2) differences in water vapor longwave radiative transfer are moderate as long as the surface temperature is lower than 360 K; (3) differences in shortwave absorption influence the atmospheric humidity of the synchronously rotating planet through a positive feedback; (4) differences in the atmospheric dynamical core have a very small effect on the surface temperature; and (5) Rayleigh scattering leads to very small differences among models. These comparisons suggest that future model development should focus on clouds and water vapor radiative transfer.

  11. Spent Fuel Ratio Estimates from Numerical Models in ALE3D

    SciTech Connect

    Margraf, J. D.; Dunn, T. A.

    2016-08-02

    The potential threat of intentional sabotage of spent nuclear fuel storage facilities is of significant importance to national security. Paramount is the study of focused energy attacks on these materials and the potential release of aerosolized hazardous particulates into the environment. Depleted uranium oxide (DUO2) is often chosen as a surrogate material for testing due to the unreasonable cost and safety demands of conducting full-scale tests with real spent nuclear fuel. To account for differences in mechanical response that result in changes to the particle distribution, it is necessary to scale the DUO2 results to obtain a proper measure for spent fuel. This is accomplished with the spent fuel ratio (SFR), the ratio of respirable aerosol mass released under identical damage conditions between a spent fuel and a surrogate material such as depleted uranium oxide (DUO2). A very limited number of full-scale experiments have been carried out to capture this data, and the oft-questioned validity of the results typically leads to overly conservative risk estimates. In the present work, the ALE3D hydrocode is used to simulate DUO2 and spent nuclear fuel pellets impacted by metal jets. The results demonstrate an alternative approach to estimating the respirable release fraction of fragmented nuclear fuel.

  12. 3D pre- versus post-season comparisons of surface and relative pose of the corpus callosum in contact sport athletes

    NASA Astrophysics Data System (ADS)

    Lao, Yi; Gajawelli, Niharika; Haas, Lauren; Wilkins, Bryce; Hwang, Darryl; Tsao, Sinchai; Wang, Yalin; Law, Meng; Leporé, Natasha

    2014-03-01

    Mild traumatic brain injury (MTBI) or concussive injury affects 1.7 million Americans annually, of which 300,000 are due to recreational activities and contact sports, such as football, rugby, and boxing [1]. Finding the neuroanatomical correlates of brain TBI non-invasively and precisely is crucial for diagnosis and prognosis. Several studies have shown the influence of traumatic brain injury (TBI) on the integrity of brain white matter (WM) [2-4]. The vast majority of these works focus on athletes with diagnosed concussions. However, in contact sports, athletes are subjected to repeated hits to the head throughout the season, and we hypothesize that these have an influence on white matter integrity. In particular, the corpus callosum (CC), as a small structure connecting the brain hemispheres, may be particularly affected by torques generated by collisions, even in the absence of full-blown concussions. Here, we use combined surface-based morphometry and relative pose analyses, applied to the point distribution model (PDM) of the CC, to investigate TBI-related brain structural changes between 9 pre-season and 9 post-season contact sport athlete MRIs. All the data are fed into surface-based morphometry analysis and relative pose analysis. The former looks at surface area and thickness changes between the two groups, while the latter consists of detecting the relative translation, rotation, and scale between them.

  13. Object recognition and pose estimation of planar objects from range data

    NASA Technical Reports Server (NTRS)

    Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael

    1994-01-01

    The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on orientations of extracted planes and their intersection points. This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and

  14. Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction

    NASA Astrophysics Data System (ADS)

    Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.

    2016-04-01

    Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line holography) is reasonably immune to variations in the experimental environment, making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques, based on numerical light back-propagation, are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get around this issue, formulating hologram reconstruction as a parametric inverse problem has been shown to accurately estimate the 3D positions and the size of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it led to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σx × σy × σz = 0.15 × 0.15 × 1 pixels.

  15. A GPU-accelerated 3D Coupled Sub-sample Estimation Algorithm for Volumetric Breast Strain Elastography.

    PubMed

    Peng, Bo; Wang, Yuqi; Hall, Timothy J; Jiang, Jingfeng

    2017-01-31

    The primary objective of this work was to extend a previously published 2D coupled sub-sample tracking algorithm for 3D speckle tracking in the framework of ultrasound breast strain elastography. To overcome the heavy computational cost, we investigated the use of a graphics processing unit (GPU) to accelerate the 3D coupled sub-sample speckle tracking method. The performance of the proposed GPU implementation was tested using a tissue-mimicking (TM) phantom and in vivo breast ultrasound data. The performance of this 3D sub-sample tracking algorithm was compared with the conventional 3D quadratic sub-sample estimation algorithm. On the basis of these evaluations, we concluded that the GPU implementation of this 3D sub-sample estimation algorithm can provide high-quality strain data (i.e. high correlation between the pre- and the motion-compensated post-deformation RF echo data and high contrast-to-noise-ratio strain images), as compared to the conventional 3D quadratic sub-sample algorithm. Using the GPU implementation of the 3D speckle tracking algorithm, volumetric strain data can be obtained relatively fast (approximately 20 seconds per volume [2.5 cm × 2.5 cm × 2.5 cm]).

  16. Angle estimation of simultaneous orthogonal rotations from 3D gyroscope measurements.

    PubMed

    Stančin, Sara; Tomažič, Sašo

    2011-01-01

    A 3D gyroscope provides measurements of angular velocities around its three intrinsic orthogonal axes, enabling angular orientation estimation. Because the measured angular velocities represent simultaneous rotations, it is not appropriate to consider them sequentially. Rotations in general are not commutative, and each possible rotation sequence has a different resulting angular orientation. None of these angular orientations is the correct simultaneous rotation result. However, every angular orientation can be represented by a single rotation. This paper presents an analytic derivation of the axis and angle of the single rotation equivalent to three simultaneous rotations around orthogonal axes when the measured angular velocities or their proportions are approximately constant. Based on the resulting expressions, a vector called the simultaneous orthogonal rotations angle (SORA) is defined, with components equal to the angles of three simultaneous rotations around coordinate system axes. The orientation and magnitude of this vector are equal to the equivalent single rotation axis and angle, respectively. As long as the orientation of the actual rotation axis is constant, given the SORA, the angular orientation of a rigid body can be calculated in a single step, thus making it possible to avoid computing the iterative infinitesimal rotation approximation. The performed test measurements confirm the validity of the SORA concept. SORA is simple and well-suited for use in the real-time calculation of angular orientation based on angular velocity measurements derived using a gyroscope. Moreover, because of its demonstrated simplicity, SORA can also be used in general angular orientation notation.
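
    A short sketch of the SORA idea under the stated assumption of (approximately) constant angular velocities over the sampling interval: the SORA vector is the product of the measured rates and the interval, its direction gives the equivalent single rotation axis, and its magnitude gives the angle (applied here through Rodrigues' formula).

      import numpy as np

      def sora(omega, dt):
          # Simultaneous orthogonal rotations angle: the three rotation angles
          # measured about the gyroscope's intrinsic axes over one interval.
          return np.asarray(omega, dtype=float) * dt

      def rotation_matrix_from_sora(phi):
          # Equivalent single rotation: axis = phi / |phi|, angle = |phi|
          # (Rodrigues' rotation formula).
          angle = np.linalg.norm(phi)
          if angle < 1e-12:
              return np.eye(3)
          k = phi / angle
          K = np.array([[0.0, -k[2], k[1]],
                        [k[2], 0.0, -k[0]],
                        [-k[1], k[0], 0.0]])
          return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

      # Example: equal angular rates about all three axes over a 10 ms sample.
      phi = sora(omega=[1.0, 1.0, 1.0], dt=0.01)   # rad
      R = rotation_matrix_from_sora(phi)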

  17. 3D beam shape estimation based on distributed coaxial cable interferometric sensor

    NASA Astrophysics Data System (ADS)

    Cheng, Baokai; Zhu, Wenge; Liu, Jie; Yuan, Lei; Xiao, Hai

    2017-03-01

    We present a coaxial cable interferometer based distributed sensing system for 3D beam shape estimation. By making a series of reflectors on a coaxial cable, multiple Fabry–Perot cavities are created on it. Two cables are mounted on the beam at proper locations, and a vector network analyzer (VNA) is connected to them to obtain the complex reflection signal, which is used to calculate the strain distribution of the beam in the horizontal and vertical planes. With 6 GHz swept bandwidth on the VNA, the spatial resolution for distributed strain measurement is 0.1 m, and the sensitivity is 3.768 MHz/mε at the interferogram dip near 3.3 GHz. Using a displacement-strain transformation, the shape of the beam is reconstructed. With only two modified cables and a VNA, this system is easy to implement and manage. Compared to optical fiber based sensor systems, the coaxial cable sensors have the advantages of tolerating large strains and being robust, making this system suitable for structural health monitoring applications.

  18. A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators

    PubMed Central

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2015-01-01

    In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas the lowest correlation was found for one axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented. PMID:26703603
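
    One simple way such nuisance factors might be injected into an ideal gyroscope signal (a slowly drifting bias plus white noise); the noise levels are placeholders, not the values used in the framework described above.

      import numpy as np

      def simulate_gyro(true_rates, bias0=(0.0, 0.0, 0.0),
                        noise_std=0.01, drift_std=1e-4, seed=0):
          # Corrupt ideal angular-rate samples (N x 3, rad/s) with a slowly
          # drifting bias (random walk) and white measurement noise.
          rng = np.random.default_rng(seed)
          n = true_rates.shape[0]
          bias = np.asarray(bias0) + np.cumsum(rng.normal(0.0, drift_std, (n, 3)), axis=0)
          noise = rng.normal(0.0, noise_std, (n, 3))
          return true_rates + bias + noise

      t = np.arange(0.0, 5.0, 0.01)
      ideal = np.column_stack([np.sin(t), np.zeros_like(t), np.zeros_like(t)])
      measured = simulate_gyro(ideal)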

  19. Correlation techniques as applied to pose estimation in space station docking

    NASA Astrophysics Data System (ADS)

    Rollins, John M.; Juday, Richard D.; Monroe, Stanley E., Jr.

    2002-08-01

    The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not necessarily provide direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed to each connecting module at carefully surveyed positions. The appearance of a subset of spots must form a constellation of specific relative positions in the incoming image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. In some cases, the precision of centroid estimation is required to be as fine as 1/20th of a pixel. This paper presents an approach to spot centroid estimation using cross correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for compensating for shadows and lighting irregularity are discussed.
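
    An illustrative sketch of the general idea (correlate the image with a synthetic, precisely centered spot model and refine the correlation peak to sub-pixel precision by parabolic interpolation); this is not the flight implementation, and the Gaussian spot model is an assumption.

      import numpy as np
      from scipy.signal import correlate2d

      def synthetic_spot(size=15, sigma=2.5):
          # Gaussian spot model whose centroid is exactly at the patch centre.
          r = np.arange(size) - (size - 1) / 2.0
          x, y = np.meshgrid(r, r)
          return np.exp(-(x**2 + y**2) / (2.0 * sigma**2))

      def subpixel_peak(c):
          # Correlation maximum refined by a 1D parabola fit along each axis
          # (assumes the peak does not lie on the image border).
          iy, ix = np.unravel_index(np.argmax(c), c.shape)
          dx = 0.5 * (c[iy, ix - 1] - c[iy, ix + 1]) / (
              c[iy, ix - 1] - 2.0 * c[iy, ix] + c[iy, ix + 1])
          dy = 0.5 * (c[iy - 1, ix] - c[iy + 1, ix]) / (
              c[iy - 1, ix] - 2.0 * c[iy, ix] + c[iy + 1, ix])
          return ix + dx, iy + dy

      def spot_centroid(image, model):
          # Cross-correlate the image with the synthetic spot model and locate
          # the correlation peak to sub-pixel precision.
          c = correlate2d(image, model, mode='same')
          return subpixel_peak(c)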

  20. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    PubMed

    Bates, Karl T; Manning, Phillip L; Hodgetts, David; Sellers, William I

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  1. Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling

    PubMed Central

    Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  2. Comparison of different methods for gender estimation from face image of various poses

    NASA Astrophysics Data System (ADS)

    Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko

    2003-04-01

    Recently, gender estimation from face images has been studied for frontal facial images. However, it is difficult to obtain such facial images consistently in application systems for security, surveillance, and marketing research. In order to build such systems, a method is required to estimate gender from images of various facial poses. In this paper, three different classifiers are compared for appearance-based gender estimation using four directional features (FDF). The classifiers are linear discriminant analysis (LDA), Support Vector Machines (SVMs) and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, with the viewing direction varying by +/-45 degrees horizontally and +/-30 degrees vertically at 15 degree intervals. Although LDA showed the best performance for frontal facial images, SVM with a Gaussian kernel showed the best performance (86.0%) for the facial images from all 35 viewpoints. From these results, SVM with a Gaussian kernel is considered robust to changes in viewpoint when estimating gender. Furthermore, the estimation rate was quite close to the average estimation rate over the 35 viewpoints. These results suggest that the methods can reasonably estimate gender within the range of the tested viewpoints by learning face images from multiple directions as one class.

  3. An Inertial and Optical Sensor Fusion Approach for Six Degree-of-Freedom Pose Estimation.

    PubMed

    He, Changyu; Kazanzides, Peter; Sen, Hasan Tutkun; Kim, Sungmin; Liu, Yue

    2015-07-08

    Optical tracking provides relatively high accuracy over a large workspace but requires line-of-sight between the camera and the markers, which may be difficult to maintain in actual applications. In contrast, inertial sensing does not require line-of-sight but is subject to drift, which may cause large cumulative errors, especially during the measurement of position. To handle cases where some or all of the markers are occluded, this paper proposes an inertial and optical sensor fusion approach in which the bias of the inertial sensors is estimated when the optical tracker provides full six degree-of-freedom (6-DOF) pose information. As long as the position of at least one marker can be tracked by the optical system, the 3-DOF position can be combined with the orientation estimated from the inertial measurements to recover the full 6-DOF pose information. When all the markers are occluded, the position tracking relies on the inertial sensors that are bias-corrected by the optical tracking system. Experiments are performed with an augmented reality head-mounted display (ARHMD) that integrates an optical tracking system (OTS) and inertial measurement unit (IMU). Experimental results show that under partial occlusion conditions, the root mean square errors (RMSE) of orientation and position are 0.04° and 0.134 mm, and under total occlusion conditions for 1 s, the orientation and position RMSE are 0.022° and 0.22 mm, respectively. Thus, the proposed sensor fusion approach can provide reliable 6-DOF pose under long-term partial occlusion and short-term total occlusion conditions.

  4. Estimation of 3-D pore network coordination number of rocks from watershed segmentation of a single 2-D image

    NASA Astrophysics Data System (ADS)

    Rabbani, Arash; Ayatollahi, Shahab; Kharrat, Riyaz; Dashti, Nader

    2016-08-01

    In this study, we have utilized 3-D micro-tomography images of real and synthetic rocks to introduce two mathematical correlations which estimate the distribution parameters of the 3-D coordination number using a single 2-D cross-sectional image. By applying a watershed segmentation algorithm, it is found that the distribution of the 3-D coordination number is acceptably predictable by statistical analysis of the network extracted from 2-D images. We have utilized 25 volumetric images of rocks in order to propose two mathematical formulas. These formulas aim to approximate the average and standard deviation of the coordination number in 3-D pore networks. The formulas are then applied to five independent test samples to evaluate their reliability. Finally, pore network flow modeling is used to find the error of absolute permeability prediction using estimated and measured coordination numbers. Results show that the 2-D images are considerably informative about the 3-D network of the rocks and can be utilized to approximate the 3-D connectivity of the pore space with a coefficient of determination of about 0.85, which seems acceptable considering the variety of the studied samples.

  5. The estimation of 3D SAR distributions in the human head from mobile phone compliance testing data for epidemiological studies.

    PubMed

    Wake, Kanako; Varsier, Nadège; Watanabe, Soichi; Taki, Masao; Wiart, Joe; Mann, Simon; Deltour, Isabelle; Cardis, Elisabeth

    2009-10-07

    A worldwide epidemiological study called 'INTERPHONE' has been conducted to investigate the hypothesized relationship between brain tumors and mobile phone use. In this study, we proposed a method to estimate the 3D distribution of the specific absorption rate (SAR) in the human head due to mobile phone use, to provide the exposure gradient for epidemiological studies. 3D SAR distributions due to exposure to an electromagnetic field from mobile phones are estimated from mobile phone compliance testing data for actual devices. The data for compliance testing are measured only on the surface in the region near the device, and in a small 3D region around the maximum on the surface, in a homogeneous phantom with a specific shape. The method includes an interpolation/extrapolation and a head shape conversion. With the interpolation/extrapolation, SAR distributions in the whole head are estimated from the limited measured data. 3D SAR distributions in the numerical head models, where the tumor location is identified in the epidemiological studies, are obtained from measured SAR data with the head shape conversion by projection. Validation of the proposed method was performed experimentally and numerically. It was confirmed that the proposed method provides a good estimation of the 3D SAR distribution in the head, especially in the brain, which is the tissue of major interest in epidemiological studies. We conclude that it is possible to estimate 3D SAR distributions in a realistic head model from the data obtained by compliance testing measurements, providing a measure of the exposure gradient in specific locations of the brain for the purpose of exposure assessment in epidemiological studies. The proposed method has been used in several studies within INTERPHONE.

  6. Estimation of uncertainties in geological 3D raster layer models as integral part of modelling procedures

    NASA Astrophysics Data System (ADS)

    Maljers, Denise; den Dulk, Maryke; ten Veen, Johan; Hummelman, Jan; Gunnink, Jan; van Gessel, Serge

    2016-04-01

    The Geological Survey of the Netherlands (GSN) develops and maintains subsurface models with regional to national coverage. These models are paramount for petroleum exploration in conventional reservoirs, for understanding the distribution of unconventional reservoirs, for mapping geothermal aquifers, for the potential to store carbon, or for groundwater- or aggregate resources. Depending on the application domain these models differ in depth range, scale, data used, modelling software and modelling technique. Depth uncertainty information is available for the Geological Survey's 3D raster layer models DGM Deep and DGM Shallow. These models cover different depth intervals and are constructed using different data types and different modelling software. Quantifying the uncertainty of geological models that are constructed using multiple data types as well as geological expert-knowledge is not straightforward. Examples of geological expert-knowledge are trend surfaces displaying the regional thickness trends of basin fills or steering points that are used to guide the pinching out of geological formations or the modelling of the complex stratal geometries associated with saltdomes and saltridges. This added a-priori knowledge, combined with the assumptions underlying kriging (normality and second-order stationarity), makes the kriging standard error an incorrect measure of uncertainty for our geological models. Therefore the methods described below were developed. For the DGM Deep model a workflow has been developed to assess uncertainty by combining precision (giving information on the reproducibility of the model results) and accuracy (reflecting the proximity of estimates to the true value). This was achieved by centering the resulting standard deviations around well-tied depths surfaces. The standard deviations are subsequently modified by three other possible error sources: data error, structural complexity and velocity model error. The uncertainty workflow

  7. A new estimate of the global 3D geostrophic ocean circulation based on satellite data and in-situ measurements

    NASA Astrophysics Data System (ADS)

    Mulet, S.; Rio, M.-H.; Mignot, A.; Guinehut, S.; Morrow, R.

    2012-11-01

    A new estimate of the Global Ocean 3D geostrophic circulation from the surface down to 1500 m depth (Surcouf3D) has been computed for the 1993-2008 period using an observation-based approach that combines altimetry with temperature and salinity through the thermal wind equation. The validity of this simple approach was tested using a consistent dataset from a model reanalysis. Away from the boundary layers, errors are less than 10% in most places, which indicate that the thermal wind equation is a robust approximation to reconstruct the 3D oceanic circulation in the ocean interior. The Surcouf3D current field was validated in the Atlantic Ocean against in-situ observations. We considered the ANDRO current velocities deduced at 1000 m depth from Argo float displacements as well as velocity measurements at 26.5°N from the RAPID-MOCHA current meter array. The Surcouf3D currents show similar skill to the 3D velocities from the GLORYS Mercator Ocean reanalysis in reproducing the amplitude and variability of the ANDRO currents. In the upper 1000 m, high correlations are also found with in-situ velocities measured by the RAPID-MOCHA current meters. The Surcouf3D current field was then used to compute estimates of the Atlantic Meridional Overturning Circulation (AMOC) through the 25°N section, showing good comparisons with hydrographic sections from 1998 and 2004. Monthly averaged AMOC time series are also consistent with the RAPID-MOCHA array and with the GLORYS Mercator Ocean reanalysis over the April 2004-September 2007 period. Finally a 15 years long time series of monthly estimates of the AMOC was computed. The AMOC strength has a mean value of 16 Sv with an annual (resp. monthly) standard deviation of 2.4 Sv (resp. 7.1 Sv) over the 1993-2008 period. The time series, characterized by a strong variability, shows no significant trend.
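
    For reference, the thermal wind relations that link the vertical shear of the geostrophic velocity to horizontal density gradients can be written in their standard Boussinesq form (f is the Coriolis parameter, rho_0 a reference density, g gravity; the exact conventions used in the Surcouf3D processing may differ):

      \frac{\partial u_g}{\partial z} = \frac{g}{f \rho_0} \frac{\partial \rho}{\partial y},
      \qquad
      \frac{\partial v_g}{\partial z} = -\frac{g}{f \rho_0} \frac{\partial \rho}{\partial x}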

  8. Theoretical and experimental study of DOA estimation using AML algorithm for an isotropic and non-isotropic 3D array

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz; Ali, Andreas M.; Collier, Travis C.; Yao, Yuan; Hudson, Ralph E.; Yao, Kung; Taylor, Charles E.

    2007-09-01

    Most direction-of-arrival (DOA) estimation work has focused on a two-dimensional (2D) scenario in which only the azimuth angle needs to be estimated. In many practical situations, however, a three-dimensional scenario must be handled, and being able to estimate both azimuth and elevation angles with high accuracy and low complexity is of interest. We present the theoretical and practical issues of DOA estimation using the Approximate-Maximum-Likelihood (AML) algorithm in a 3D scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. Various numerical results are presented. We use two acoustic arrays, each consisting of 8 microphones, to perform field measurements. The processing of the measured data from the acoustic arrays for different azimuth and elevation angles confirms the effectiveness of the proposed methods.

  9. Improved 3D Look-Locker Acquisition Scheme and Angle Map Filtering Procedure for T1 Estimation

    PubMed Central

    Hui, CheukKai; Esparza-Coss, Emilio; Narayana, Ponnada A

    2013-01-01

    The 3D Look-Locker (LL) acquisition is a widely used fast and efficient T1 mapping method. However, the multi-shot approach of 3D LL acquisition can introduce reconstruction artifacts that result in intensity distortions. Traditional 3D LL acquisition generally utilizes a centric encoding scheme that is limited to a single phase-encoding direction in k-space. To optimize the k-space segmentation, an elliptical scheme with two phase-encoding directions is implemented for the LL acquisition. This elliptical segmentation can reduce the intensity errors in the reconstructed images and improve the final T1 estimation. One of the major sources of error in LL-based T1 estimation is the lack of accurate knowledge of the actual flip angle. A multi-parameter curve-fitting procedure can account for some of the variability in the flip angle. However, curve fitting can also introduce errors in the estimated flip angle that can result in incorrect T1 values. A filtering procedure based on goodness of fit (GOF) is proposed to reduce the effect of false flip angle estimates. Filtering based on the GOF weighting can remove likely incorrect angles that result in a bad curve fit. Simulation, phantom, and in-vivo studies have demonstrated that these techniques can improve the accuracy of 3D LL T1 estimation. PMID:23784967
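
    A rough sketch of goodness-of-fit filtering of per-voxel fits; the simplified three-parameter Look-Locker signal model, the R^2 weight, and the threshold are illustrative assumptions, not the paper's exact implementation.

      import numpy as np
      from scipy.optimize import curve_fit

      def ll_signal(ti, a, b, t1_star):
          # Simplified three-parameter Look-Locker inversion-recovery model.
          return a - b * np.exp(-ti / t1_star)

      def fit_with_gof(ti, signal):
          # Fit the recovery curve and return the parameters together with an
          # R^2 goodness-of-fit value that can be used as a filtering weight.
          p0 = (signal.max(), 2.0 * signal.max(), ti.mean())
          params, _ = curve_fit(ll_signal, ti, signal, p0=p0, maxfev=5000)
          residuals = signal - ll_signal(ti, *params)
          ss_res = np.sum(residuals**2)
          ss_tot = np.sum((signal - signal.mean())**2)
          return params, 1.0 - ss_res / ss_tot

      # Voxels whose fit quality falls below a threshold would be excluded from
      # (or down-weighted in) the flip-angle map before T1 estimation.
      GOF_THRESHOLD = 0.98   # illustrative value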

  10. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    USGS Publications Warehouse

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  11. Movement-Based Estimation and Visualization of Space Use in 3D for Wildlife Ecology and Conservation

    PubMed Central

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114

  12. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    PubMed

    Tracey, Jeff A; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R; Fisher, Robert N

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.
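
    As a stand-in for the movement-based estimators described above, the sketch below computes a plain fixed-bandwidth 3D kernel density estimate of space use from (x, y, z) telemetry fixes with SciPy; the movement-based variant additionally conditions the kernels on movement between fixes, which is not shown here.

      import numpy as np
      from scipy.stats import gaussian_kde

      # Stand-in telemetry fixes: rows are x, y (horizontal, m) and z (height or depth, m).
      rng = np.random.default_rng(1)
      locations = rng.normal(size=(3, 500))

      kde = gaussian_kde(locations)      # fixed-bandwidth 3D kernel density estimate

      # Evaluate the utilisation density on a coarse 3D grid.
      g = np.linspace(-3.0, 3.0, 25)
      gx, gy, gz = np.meshgrid(g, g, g, indexing="ij")
      density = kde(np.vstack([gx.ravel(), gy.ravel(), gz.ravel()])).reshape(gx.shape)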

  13. Rapid, Robust, Optimal Pose Estimation from a Single Affine Image (PREPRINT)

    DTIC Science & Technology

    2006-11-01

    the new method (0), and their reconstruction using full 3-D sensed data (pentagram). 3-D measurements. This method requires another sensor such as a...using (13). Since this method has access to the full 3-D data, its reconstructed image points (pentagrams) closely match the given image points (+). A

  14. Stereovision-based pose and inertia estimation of unknown and uncooperative space objects

    NASA Astrophysics Data System (ADS)

    Pesce, Vincenzo; Lavagna, Michèle; Bevilacqua, Riccardo

    2017-01-01

    Autonomous close proximity operations are an arduous and attractive problem in space mission design. In particular, the estimation of the pose, motion, and inertia properties of an uncooperative object is a challenging task because of the lack of available a priori information. This paper develops a novel method to estimate the relative position, velocity, angular velocity, attitude and the ratios of the components of the inertia matrix of an uncooperative space object using only stereo-vision measurements. The classical Extended Kalman Filter (EKF) and an Iterated Extended Kalman Filter (IEKF) are used and compared for the estimation procedure. In addition, in order to compute the inertia properties, the ratios of the inertia components are added to the state and a pseudo-measurement equation is considered in the observation model. The relative simplicity of the proposed algorithm makes it suitable for online implementation in real applications. The developed algorithm is validated by numerical simulations in MATLAB using different initial conditions and uncertainty levels. The goal of the simulations is to verify the accuracy and robustness of the proposed estimation algorithm. The obtained results show satisfactory convergence of the estimation errors for all the considered quantities, and in several simulations they show improvements over similar works in the literature that address the same problem. In addition, a video processing procedure is presented to reconstruct the geometrical properties of a body using cameras. This inertia reconstruction algorithm has been experimentally validated at the ADAMUS (ADvanced Autonomous MUltiple Spacecraft) Lab at the University of Florida. In the future, this method could be integrated with the inertia ratio estimator to provide a complete tool for mass property identification.

  15. Head Pose Estimation on Top of Haar-Like Face Detection: A Study Using the Kinect Sensor

    PubMed Central

    Saeed, Anwar; Al-Hamadi, Ayoub; Ghoneim, Ahmed

    2015-01-01

    Head pose estimation is a crucial initial task for human face analysis, which is employed in several computer vision systems, such as: facial expression recognition, head gesture recognition, yawn detection, etc. In this work, we propose a frame-based approach to estimate the head pose on top of the Viola and Jones (VJ) Haar-like face detector. Several appearance and depth-based feature types are employed for the pose estimation, where comparisons between them in terms of accuracy and speed are presented. It is clearly shown through this work that using the depth data, we improve the accuracy of the head pose estimation. Additionally, we can spot positive detections, faces in profile views detected by the frontal model, that are wrongly cropped due to background disturbances. We introduce a new depth-based feature descriptor that provides competitive estimation results with a lower computation time. Evaluation on a benchmark Kinect database shows that the histogram of oriented gradients and the developed depth-based features are more distinctive for the head pose estimation, where they compare favorably to the current state-of-the-art approaches. Using a concatenation of the aforementioned feature types, we achieved a head pose estimation with average errors not exceeding 5.1°, 4.6° and 4.2° for pitch, yaw and roll angles, respectively. PMID:26343651

  16. Head Pose Estimation on Top of Haar-Like Face Detection: A Study Using the Kinect Sensor.

    PubMed

    Saeed, Anwar; Al-Hamadi, Ayoub; Ghoneim, Ahmed

    2015-08-26

    Head pose estimation is a crucial initial task for human face analysis, which is employed in several computer vision systems, such as: facial expression recognition, head gesture recognition, yawn detection, etc. In this work, we propose a frame-based approach to estimate the head pose on top of the Viola and Jones (VJ) Haar-like face detector. Several appearance and depth-based feature types are employed for the pose estimation, where comparisons between them in terms of accuracy and speed are presented. It is clearly shown through this work that using the depth data, we improve the accuracy of the head pose estimation. Additionally, we can spot positive detections, faces in profile views detected by the frontal model, that are wrongly cropped due to background disturbances. We introduce a new depth-based feature descriptor that provides competitive estimation results with a lower computation time. Evaluation on a benchmark Kinect database shows that the histogram of oriented gradients and the developed depth-based features are more distinctive for the head pose estimation, where they compare favorably to the current state-of-the-art approaches. Using a concatenation of the aforementioned feature types, we achieved a head pose estimation with average errors not exceeding 5.1°, 4.6° and 4.2° for pitch, yaw and roll angles, respectively.

  17. Vegetation Height Estimation Near Power transmission poles Via satellite Stereo Images using 3D Depth Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Qayyum, A.; Malik, A. S.; Saad, M. N. M.; Iqbal, M.; Abdullah, F.; Rahseed, W.; Abdullah, T. A. R. B. T.; Ramli, A. Q.

    2015-04-01

    Monitoring vegetation encroachment under overhead high-voltage power lines is a challenging problem for electricity distribution companies. Absence of proper monitoring could result in damage to the power lines and consequently cause blackouts. This would affect the electric power supply to industries, businesses, and daily life. Therefore, to avoid blackouts, it is mandatory to monitor the vegetation/trees near power transmission lines. Unfortunately, the existing approaches are time-consuming and expensive. In this paper, we propose a novel approach to monitor the vegetation/trees near or under power transmission poles using satellite stereo images, which were acquired using Pleiades satellites. The 3D depth of vegetation has been measured near power transmission lines using stereo algorithms. The area of interest scanned by the Pleiades satellite sensors is 100 square kilometers. Our dataset covers power transmission poles in the state of Sabah in East Malaysia, encompassing a total of 52 poles in the 100 square kilometer area. We have compared the results on Pleiades satellite stereo images using dynamic programming and Graph-Cut algorithms, thereby comparing the satellite imaging sensors and depth-estimation algorithms. Our results show that the Graph-Cut algorithm performs better than dynamic programming (DP) in terms of accuracy and speed.
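
    The abstract compares dynamic-programming and Graph-Cut stereo matchers. As a hedged illustration of the general disparity-to-depth idea only, the sketch below uses OpenCV's semi-global block matcher as a stand-in for those algorithms; the file names, focal length and baseline are made-up values, not parameters of the Pleiades data.

```python
import cv2
import numpy as np

# Hypothetical rectified left/right image tiles (grayscale).
left = cv2.imread("left_tile.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_tile.png", cv2.IMREAD_GRAYSCALE)

# Semi-global matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7,
                                P1=8 * 7 * 7, P2=32 * 7 * 7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# With a calibrated focal length f (pixels) and baseline B (metres), both
# hypothetical here, depth follows Z = f * B / disparity for valid pixels.
f_pixels, baseline_m = 10000.0, 100.0
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_pixels * baseline_m / disparity[valid]
```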

  18. Piecewise-diffeomorphic image registration: application to the motion estimation between 3D CT lung images with sliding conditions.

    PubMed

    Risser, Laurent; Vialard, François-Xavier; Baluwala, Habib Y; Schnabel, Julia A

    2013-02-01

    In this paper, we propose a new strategy for modelling sliding conditions when registering 3D images in a piecewise-diffeomorphic framework. More specifically, our main contribution is the development of a mathematical formalism to perform Large Deformation Diffeomorphic Metric Mapping registration with sliding conditions. We also show how to adapt this formalism to the LogDemons diffeomorphic registration framework. We finally show how to apply this strategy to estimate the respiratory motion between 3D CT pulmonary images. Quantitative tests are performed on 2D and 3D synthetic images, as well as on real 3D lung images from the MICCAI EMPIRE10 challenge. Results show that our strategy estimates accurate mappings of entire 3D thoracic image volumes that exhibit a sliding motion, as opposed to conventional registration methods which are not capable of capturing discontinuous deformations at the thoracic cage boundary. They also show that although the deformations are not smooth across the location of sliding conditions, they are almost always invertible in the whole image domain. This would be helpful for radiotherapy planning and delivery.

  19. Computer Vision Tracking Using Particle Filters for 3D Position Estimation

    DTIC Science & Technology

    2014-03-27

    ... focus on particle filters. Photogrammetry is the process of determining 3-D coordinates through images. The mathematical underpinnings of photogrammetry are rooted in the 1480s with Leonardo da Vinci's study of perspectives [8, p. 1]. However, digital photogrammetry did not emerge ...

  20. 3D ultrasound estimation of the effective volume for popliteal block at the level of division.

    PubMed

    Sala-Blanch, X; Franco, J; Bergé, R; Marín, R; López, A M; Agustí, M

    2017-03-01

    Local anaesthetic injection between the tibial and common peroneal nerves within the connective tissue sheath results in predictable diffusion and allows for a reduction in the volume needed to achieve a consistent sciatic popliteal block. Using 3D ultrasound volumetric acquisition, we quantified the visible volume in contact with the nerve along a 5 cm segment.

  1. Estimating Fiber Orientation Distribution Functions in 3D-Polarized Light Imaging

    PubMed Central

    Axer, Markus; Strohmer, Sven; Gräßel, David; Bücker, Oliver; Dohmen, Melanie; Reckfort, Julia; Zilles, Karl; Amunts, Katrin

    2016-01-01

    Research of the human brain connectome requires multiscale approaches derived from independent imaging methods ideally applied to the same object. Hence, comprehensible strategies for data integration across modalities and across scales are essential. We have successfully established a concept to bridge the spatial scales from microscopic fiber orientation measurements based on 3D-Polarized Light Imaging (3D-PLI) to meso- or macroscopic dimensions. By creating orientation distribution functions (pliODFs) from high-resolution vector data via series expansion with spherical harmonics utilizing high performance computing and supercomputing technologies, data fusion with Diffusion Magnetic Resonance Imaging has become feasible, even for a large-scale dataset such as the human brain. Validation of our approach was done effectively by means of two types of datasets that were transferred from fiber orientation maps into pliODFs: simulated 3D-PLI data showing artificial, but clearly defined fiber patterns and real 3D-PLI data derived from sections through the human brain and the brain of a hooded seal. PMID:27147981

  2. SU-E-J-135: An Investigation of Ultrasound Imaging for 3D Intra-Fraction Prostate Motion Estimation

    SciTech Connect

    O'Shea, T; Harris, E; Bamber, J; Evans, P

    2014-06-01

    Purpose: This study investigates the use of a mechanically swept 3D ultrasound (US) probe to estimate intra-fraction motion of the prostate during radiation therapy using an US phantom and simulated transperineal imaging. Methods: A 3D motion platform was used to translate an US speckle phantom while simulating transperineal US imaging. Motion patterns for five representative types of prostate motion, generated from patient data previously acquired with a Calypso system, were used to move the phantom in 3D. The phantom was also implanted with fiducial markers and subsequently tracked using the CyberKnife kV x-ray system for comparison. A normalised cross correlation block matching algorithm was used to track speckle patterns in 3D and 2D US data. Motion estimation results were compared with known phantom translations. Results: Transperineal 3D US could track superior-inferior (axial) and anterior-posterior (lateral) motion to better than 0.8 mm root-mean-square error (RMSE) at a volume rate of 1.7 Hz (comparable with kV x-ray tracking RMSE). Motion estimation accuracy was poorest along the US probe's swept axis (right-left; RL; RMSE < 4.2 mm) but simple regularisation methods could be used to improve RMSE (< 2 mm). 2D US was found to be feasible for slowly varying motion (RMSE < 0.5 mm). 3D US could also allow accurate radiation beam gating with displacement thresholds of 2 mm and 5 mm exhibiting a RMSE of less than 0.5 mm. Conclusion: 2D and 3D US speckle tracking is feasible for prostate motion estimation during radiation delivery. Since RL prostate motion is small in magnitude and frequency, 2D or a hybrid (2D/3D) US imaging approach which also accounts for potential prostate rotations could be used. Regularisation methods could be used to ensure the accuracy of tracking data, making US a feasible approach for gating or tracking in standard or hypo-fractionated prostate treatments.
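
    The tracking step above relies on normalised cross-correlation (NCC) block matching of the speckle pattern. The toy 2D sketch below illustrates that idea with an exhaustive search over synthetic frames; it is not the study's implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_block(prev_frame, curr_frame, top_left, block=(16, 16), search=8):
    """Find the displacement of a speckle block between two frames."""
    r0, c0 = top_left
    br, bc = block
    template = prev_frame[r0:r0 + br, c0:c0 + bc]
    best_score, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + br > curr_frame.shape[0] or c + bc > curr_frame.shape[1]:
                continue
            score = ncc(template, curr_frame[r:r + br, c:c + bc])
            if score > best_score:
                best_score, best_shift = score, (dr, dc)
    return best_shift, best_score

# Toy check: a speckle field shifted by (2, -3) pixels between frames.
rng = np.random.default_rng(0)
frame1 = rng.random((128, 128))
frame2 = np.roll(frame1, shift=(2, -3), axis=(0, 1))
print(track_block(frame1, frame2, top_left=(40, 40)))   # -> ((2, -3), ~1.0)
```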

  3. Calibration Method for ML Estimation of 3D Interaction Position in a Thick Gamma-Ray Detector

    PubMed Central

    Hunter, William C. J.; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    High-energy (> 100 keV) photon detectors are often made thick relative to their lateral resolution in order to improve their photon-detection efficiency. To avoid issues of parallax and increased signal variance that result from random interaction depth, we must determine the 3D interaction position in the imaging detector. With this goal in mind, we examine a method of calibrating response statistics of a thick-detector gamma camera to produce a maximum-likelihood estimate of 3D interaction position. We parameterize the mean detector response as a function of 3D position, and we estimate these parameters by maximizing their likelihood given prior knowledge of the pathlength distribution and a complete list of camera signals for an ensemble of gamma-ray interactions. Furthermore, we describe an iterative method for removing multiple-interaction events from our calibration data and for refining our calibration of the mean detector response to single interactions. We demonstrate this calibration method with simulated gamma-camera data. We then show that the resulting calibration is accurate and can be used to produce unbiased estimates of 3D interaction position. PMID:20191099

  4. Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2014-05-01

    A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome, formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 shells of Crassostrea gryphoides, up to 80 cm long, cover a 400 m2 area. Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g., tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as an extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well-known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase shift measuring principle, which provides an accurate geometrical basis of < 3 mm. However, the situation is difficult in this multiple object scenario where more than 15,000 complete or fragmentary parts of an object with random orientation are found. The goal

  5. Estimation of the thermal conductivity of hemp based insulation material from 3D tomographic images

    NASA Astrophysics Data System (ADS)

    El-Sawalhi, R.; Lux, J.; Salagnac, P.

    2016-08-01

    In this work, we are interested in the structural and thermal characterization of natural fiber insulation materials. The thermal performance of these materials depends on the arrangement of fibers, which is the consequence of the manufacturing process. In order to optimize these materials, thermal conductivity models can be used to correlate some relevant structural parameters with the effective thermal conductivity. However, only a few models are able to take into account the anisotropy of such a material related to the fibers orientation, and these models still need realistic input data (fiber orientation distribution, porosity, etc.). The structural characteristics are here directly measured on a 3D tomographic image using advanced image analysis techniques. Critical structural parameters like porosity, pore and fiber size distribution as well as local fiber orientation distribution are measured. The results of the tested conductivity models are then compared with the conductivity tensor obtained by numerical simulation on the discretized 3D microstructure, as well as with available experimental measurements. We show that 1D analytical models are generally not suitable for assessing the thermal conductivity of such anisotropic media. Yet, a few anisotropic models can still be of interest to relate some structural parameters, like the fiber orientation distribution, to the thermal properties. Finally, our results emphasize that numerical simulation on a realistic 3D microstructure is a very interesting alternative to experimental measurements.

  6. Anthropological facial approximation in three dimensions (AFA3D): computer-assisted estimation of the facial morphology using geometric morphometrics.

    PubMed

    Guyomarc'h, Pierre; Dutailly, Bruno; Charton, Jérôme; Santos, Frédéric; Desbarats, Pascal; Coqueugniot, Hélène

    2014-11-01

    This study presents Anthropological Facial Approximation in Three Dimensions (AFA3D), a new computerized method for estimating face shape based on computed tomography (CT) scans of 500 French individuals. Facial soft tissue depths are estimated based on age, sex, corpulence, and craniometrics, and projected using reference planes to obtain the global facial appearance. Position and shape of the eyes, nose, mouth, and ears are inferred from cranial landmarks through geometric morphometrics. The 100 estimated cutaneous landmarks are then used to warp a generic face to the target facial approximation. A validation by re-sampling on a subsample demonstrated an average accuracy of c. 4 mm for the overall face. The resulting approximation is an objective probable facial shape, but is also synthetic (i.e., without texture), and therefore needs to be enhanced artistically prior to its use in forensic cases. AFA3D, integrated in the TIVMI software, is available freely for further testing.

  7. Atmospheric Nitrogen Trifluoride: Optimized emission estimates using 2-D and 3-D Chemical Transport Models from 1973-2008

    NASA Astrophysics Data System (ADS)

    Ivy, D. J.; Rigby, M. L.; Prinn, R. G.; Muhle, J.; Weiss, R. F.

    2009-12-01

    We present optimized annual global emissions from 1973-2008 of nitrogen trifluoride (NF3), a powerful greenhouse gas which is not currently regulated by the Kyoto Protocol. In the past few decades, NF3 production has dramatically increased due to its usage in the semiconductor industry. Emissions were estimated through the 'pulse-method' discrete Kalman filter using both a simple, flexible 2-D 12-box model used in the Advanced Global Atmospheric Gases Experiment (AGAGE) network and the Model for Ozone and Related Tracers (MOZART v4.5), a full 3-D atmospheric chemistry model. No official audited reports of industrial NF3 emissions are available, and with limited information on production, a priori emissions were estimated using both a bottom-up and top-down approach with two different spatial patterns based on semiconductor perfluorocarbon (PFC) emissions from the Emission Database for Global Atmospheric Research (EDGAR v3.2) and Semiconductor Industry Association sales information. Both spatial patterns used in the models gave consistent results, showing the robustness of the estimated global emissions. Differences between estimates using the 2-D and 3-D models can be attributed to transport rates and resolution differences. Additionally, new NF3 industry production and market information is presented. Emission estimates from both the 2-D and 3-D models suggest that either the assumed industry release rate of NF3 or industry production information is still underestimated.

  8. Estimating 3D gaze in physical environment: a geometric approach on consumer-level remote eye tracker

    NASA Astrophysics Data System (ADS)

    Wibirama, Sunu; Mahesa, Rizki R.; Nugroho, Hanung A.; Hamamoto, Kazuhiko

    2017-02-01

    Remote eye trackers at consumer prices have been used for various applications on flat computer screens. On the other hand, 3D gaze tracking in a physical environment has been useful for visualizing gaze behavior, controlling robots, and assistive technology. Instead of using affordable remote eye trackers, 3D gaze tracking in a physical environment has been performed using corporate-level head-mounted eye trackers, limiting its practical usage to niche users. In this research, we propose a novel method to estimate 3D gaze using a consumer-level remote eye tracker. We implement a geometric approach to obtain the 3D point of gaze from binocular lines of sight. Experimental results show that the proposed method yielded low errors of 3.47 ± 3.02 cm, 3.02 ± 1.34 cm, and 2.57 ± 1.85 cm in the X, Y, and Z dimensions, respectively. The proposed approach may be used as a starting point for designing interaction methods in a 3D physical environment.
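
    The geometric approach reduces to intersecting the two (generally skew) binocular lines of sight. A minimal sketch of the standard closest-point construction is given below; the eye positions and gaze directions are made-up numbers, not calibration data from the paper.

```python
import numpy as np

def gaze_point_3d(p_left, d_left, p_right, d_right):
    """Midpoint of the shortest segment between two lines of sight.

    p_* : 3D eye (line origin) positions
    d_* : 3D gaze direction vectors (need not be unit length)
    """
    d1 = d_left / np.linalg.norm(d_left)
    d2 = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                  # ~0 only if the lines are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((p_left + s * d1) + (p_right + t * d2))

# Hypothetical binocular geometry (metres): eyes 6.5 cm apart, target ~1 m ahead.
p_l, p_r = np.array([-0.0325, 0.0, 0.0]), np.array([0.0325, 0.0, 0.0])
target = np.array([0.10, 0.05, 1.00])
print(gaze_point_3d(p_l, target - p_l, p_r, target - p_r))   # ~= target
```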

  9. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    PubMed Central

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracies of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions were the highest, and the number of reconstructed fine-scale 3D surface details of leaf and stem was the largest. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of the plant parameters. They also showed that our system is well suited for capturing high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  10. Transient Hydraulic Tomography in the Field: 3-D K Estimation and Validation in a Highly Heterogeneous Unconfined Aquifer

    NASA Astrophysics Data System (ADS)

    Hochstetler, D. L.; Barrash, W.; Kitanidis, P. K.

    2014-12-01

    Characterizing subsurface hydraulic properties is essential for predicting flow and transport, and thus, for making informed decisions, such as selection and execution of a groundwater remediation strategy; however, obtaining accurate estimates at the necessary resolution with quantified uncertainty is an ongoing challenge. For over a decade, the development of hydraulic tomography (HT) - i.e., conducting a series of discrete interval hydraulic tests, observing distributed pressure signals, and analyzing the data through inversion of all tests together - has shown promise as a subsurface imaging method. Numerical and laboratory 3-D HT studies have enhanced and validated such methodologies, but there have been far fewer 3-D field characterization studies. We use 3-D transient hydraulic tomography (3-D THT) to characterize a highly heterogeneous unconfined alluvial aquifer at an active industrial site near Assemini, Italy. With 26 pumping tests conducted from 13 isolated vertical locations, and pressure responses measured at 63 spatial locations through five clusters of continuous multichannel tubing, we recorded over 800 drawdown curves during the field testing. Selected measurements from each curve were inverted in order to obtain an estimate of the distributed hydraulic conductivity field K(x) as well as uniform ("effective") values of specific storage Ss and specific yield Sy. The estimated K values varied across seven orders of magnitude, suggesting that this is one of the most heterogeneous sites at which HT has ever been conducted. Furthermore, these results are validated using drawdown observations from seven independent tests with pumping performed at multiple locations other than the main pumping well. The validation results are encouraging, especially given the uncertain nature of the problem. Overall, this research demonstrates the ability of 3-D THT to provide high-resolution estimates of structure and local K at a non-research site at the scale of a contaminant

  11. Quantitative, nondestructive estimates of coarse root biomass in a temperate pine forest using 3-D ground-penetrating radar (GPR)

    NASA Astrophysics Data System (ADS)

    Molon, Michelle; Boyce, Joseph I.; Arain, M. Altaf

    2017-01-01

    Coarse root biomass was estimated in a temperate pine forest using high-resolution (1 GHz) 3-D ground-penetrating radar (GPR). GPR survey grids were acquired across a 400 m2 area with varying line spacing (12.5 and 25 cm). Root volume and biomass were estimated directly from the 3-D radar volume by using isometric surfaces calculated with the marching cubes algorithm. Empirical relations between GPR reflection amplitude and root diameter were determined for 14 root segments (0.1-10 cm diameter) reburied in a 6 m2 experimental test plot and surveyed at 5-25 cm line spacing under dry and wet soil conditions. Reburied roots >1.4 cm diameter were detectable as continuous root structures with 5 cm line separation. Reflection amplitudes were strongly controlled by soil moisture and decreased by 40% with a twofold increase in soil moisture. GPR line intervals of 12.5 and 25 cm produced discontinuous mapping of roots, and GPR coarse root biomass estimates (0.92 kgC m-2) were lower than those obtained previously with a site-specific allometric equation due to nondetection of vertical roots and roots <1.5 cm diameter. The results show that coarse root volume and biomass can be estimated directly from interpolated 3-D GPR volumes by using a marching cubes approach, but mapping of roots as continuous structures requires high inline sampling and line density (<5 cm). The results demonstrate that 3-D GPR is a viable approach for estimating belowground carbon and for mapping tree root architecture. This methodology can be applied more broadly in other disciplines (e.g., archaeology and civil engineering) for imaging buried structures.

  12. Estimating 3D positions and velocities of projectiles from monocular views.

    PubMed

    Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P

    2009-05-01

    In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.

  13. Innovative LIDAR 3D Dynamic Measurement System to estimate fruit-tree leaf area.

    PubMed

    Sanz-Cortiella, Ricardo; Llorens-Calveras, Jordi; Escolà, Alexandre; Arnó-Satorra, Jaume; Ribes-Dasi, Manel; Masip-Vilalta, Joan; Camp, Ferran; Gràcia-Aguilá, Felip; Solanelles-Batlle, Francesc; Planas-DeMartí, Santiago; Pallejà-Cabré, Tomàs; Palacin-Roca, Jordi; Gregorio-Lopez, Eduard; Del-Moral-Martínez, Ignacio; Rosell-Polo, Joan R

    2011-01-01

    In this work, a LIDAR-based 3D Dynamic Measurement System is presented and evaluated for the geometric characterization of tree crops. Using this measurement system, trees were scanned from two opposing sides to obtain two three-dimensional point clouds. After registration of the point clouds, a simple and easily obtainable parameter is the number of impacts received by the scanned vegetation. The work in this study is based on the hypothesis of the existence of a linear relationship between the number of impacts of the LIDAR sensor laser beam on the vegetation and the tree leaf area. Tests performed under laboratory conditions using an ornamental tree and, subsequently, in a pear tree orchard demonstrate the correct operation of the measurement system presented in this paper. The results from both the laboratory and field tests confirm the initial hypothesis and the 3D Dynamic Measurement System is validated in field operation. This opens the door to new lines of research centred on the geometric characterization of tree crops in the field of agriculture and, more specifically, in precision fruit growing.

  14. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models, which have limited accuracy; geometric models, which require lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling was applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778

  15. Scatterer size and concentration estimation technique based on a 3D acoustic impedance map from histologic sections

    NASA Astrophysics Data System (ADS)

    Mamou, Jonathan; Oelze, Michael L.; O'Brien, William D.; Zachary, James F.

    2004-05-01

    Accurate estimates of scatterer parameters (size and acoustic concentration) are beneficial adjuncts to characterize disease from ultrasonic backscatter measurements. An estimation technique was developed to obtain parameter estimates from the Fourier transform of the spatial autocorrelation function (SAF). A 3D impedance map (3DZM) is used to obtain the SAF of tissue. 3DZMs are obtained by aligning digitized light microscope images from histologic preparations of tissue. Estimates were obtained for simulated 3DZMs containing randomly located spherical scatterers: relative errors were less than 3%. Estimates were also obtained from a rat fibroadenoma and a 4T1 mouse mammary tumor (MMT). Tissues were fixed (10% neutral-buffered formalin), embedded in paraffin, serially sectioned and stained with H&E. 3DZM results were compared to estimates obtained independently from ultrasonic backscatter measurements. For the fibroadenoma and MMT, average scatterer diameters were 91 and 31.5 μm, respectively. Ultrasonic measurements yielded average scatterer diameters of 105 and 30 μm, respectively. The 3DZM estimation scheme showed results similar to those obtained by the independent ultrasonic measurements. The 3D impedance maps show promise as a powerful tool to characterize ultrasonic scattering sites of tissue. [Work supported by the University of Illinois Research Board.]

  16. Real-time upper-body human pose estimation from depth data using Kalman filter for simulator

    NASA Astrophysics Data System (ADS)

    Lee, D.; Chi, S.; Park, C.; Yoon, H.; Kim, J.; Park, C. H.

    2014-08-01

    Recently, many studies have shown that indoor horse riding exercise has a positive effect on promoting health and diet. However, if a rider has an incorrect posture, it can cause back pain. In spite of this problem, there has been little research on analyzing the rider's posture. Therefore, the purpose of this study is to estimate a rider's pose from a depth image using the Asus Xtion sensor in real time. In the experiments, we show the performance of our pose estimation algorithm by comparing the results of our joint estimation algorithm with ground truth data.

  17. Autonomous proximity operations using machine vision for trajectory control and pose estimation

    NASA Technical Reports Server (NTRS)

    Cleghorn, Timothy F.; Sternberg, Stanley R.

    1991-01-01

    A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running upon an 80386 based personal computer, using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple, so that following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single camera images of the target vehicle, upon which radial transforms were performed. Selected points of the resulting radial signatures are fed through a decision tree, to determine whether the signature matches that of the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.

  18. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    SciTech Connect

    Glitzner, M; Lagendijk, J; Raaymakers, B; Crijns, S; Senneville, B Denis de

    2015-06-15

    Recent developments have made MRI-guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about changing anatomy by means of deformable image registration, for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data was sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of 2.5 mm³ for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data was downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. At kidney-liver boundaries and in the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on the synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the

  19. Intensity of joints associated with an extensional fault zone: an estimation by poly3d .

    NASA Astrophysics Data System (ADS)

    Minelli, G.

    2003-04-01

    The presence and frequency of joints in sedimentary rocks strongly affect the mechanical and fluid flow properties of the host layers. Joint intensity is evaluated by spacing, S, the distance between neighbouring fractures, or by density, D = 1/S. Joint spacing in layered rocks is often linearly related to layer thickness T, with typical values of 0.5 T < S < 2.0 T. On the other hand, some field cases display very tight joints with S << T and nonlinear relations between spacing and thickness; most of these cases involve joint systems “genetically” related to a nearby fault zone. The present study uses the code Poly3D (Rock Fracture Project at Stanford) to numerically explore the effect of the stress distribution in the neighbourhood of an extensional fault zone with respect to the mapped intensity of joints both in its hanging wall and in its footwall (WILLEMSE, E. J. M., 1997; MARTEL, S. J., AND BOGER, W. A., 1998). Poly3D is a C language computer program that calculates the displacements, strains and stresses induced in an elastic whole or half-space by planar, polygonal-shaped elements of displacement discontinuity (WILLEMSE, E. J. M., POLLARD, D. D., 2000). Dislocations of varying shapes may be combined to yield complex three-dimensional surfaces well suited for modeling fractures, faults, and cavities in the earth's crust. The algebraic expressions for the elastic fields around a polygonal element are derived by superposing the solution for an angular dislocation in an elastic half-space. The field data were collected in a quarry located close to the town of Noci (Puglia) using the scan-line methodology. In this quarry a platform limestone with regular bedding and very few shale or marly intercalations, displaced by a normal fault, is exposed. The comparison between the mapped joint intensity and the calculated stress around the fault displays a good agreement. Nevertheless the intrinsic limitations (isotropic medium and elastic behaviour

  20. Effect of GIA models with 3D composite mantle viscosity on GRACE mass balance estimates for Antarctica

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Whitehouse, Pippa L.; Schrama, Ernst J. O.

    2015-03-01

    Seismic data indicate that there are large viscosity variations in the mantle beneath Antarctica. Consideration of such variations would affect predictions of models of Glacial Isostatic Adjustment (GIA), which are used to correct satellite measurements of ice mass change. However, most GIA models used for that purpose have assumed the mantle to be uniformly stratified in terms of viscosity. The goal of this study is to estimate the effect of lateral variations in viscosity on Antarctic mass balance estimates derived from the Gravity Recovery and Climate Experiment (GRACE) data. To this end, recently-developed global GIA models based on lateral variations in mantle temperature are tuned to fit constraints in the northern hemisphere and then compared to GPS-derived uplift rates in Antarctica. We find that these models can provide a better fit to GPS uplift rates in Antarctica than existing GIA models with a radially-varying (1D) rheology. When 3D viscosity models in combination with specific ice loading histories are used to correct GRACE measurements, mass loss in Antarctica is smaller than previously found for the same ice loading histories and their preferred 1D viscosity profiles. The variation in mass balance estimates arising from using different plausible realizations of 3D viscosity amounts to 20 Gt/yr for the ICE-5G ice model and 16 Gt/yr for the W12a ice model; these values are larger than the GRACE measurement error, but smaller than the variation arising from unknown ice history. While there exist 1D Earth models that can reproduce the total mass balance estimates derived using 3D Earth models, the spatial pattern of gravity rates can be significantly affected by 3D viscosity in a way that cannot be reproduced by GIA models with 1D viscosity. As an example, models with 1D viscosity always predict maximum gravity rates in the Ross Sea for the ICE-5G ice model, however, for one of the three preferred 3D models the maximum (for the same ice model) is found

  1. Application of optical 3D measurement on thin film buckling to estimate interfacial toughness

    NASA Astrophysics Data System (ADS)

    Jia, H. K.; Wang, S. B.; Li, L. A.; Wang, Z. Y.; Goudeau, P.

    2014-03-01

    The shape-from-focus (SFF) method has been widely studied as a passive depth recovery and 3D reconstruction method for digital images. An important step in SFF is the calculation of the focus level for different points in an image by using a focus measure. In this work, an image entropy-based focus measure is introduced into the SFF method to measure the 3D buckling morphology of an aluminum film on a polymethylmethacrylate (PMMA) substrate at a micro scale. Spontaneous film wrinkles and telephone-cord wrinkles are investigated after the deposition of a 300 nm thick aluminum film onto the PMMA substrate. Spontaneous buckling is driven by the highly compressive stress generated in the Al film during the deposition process. The interfacial toughness between metal films and substrates is an important parameter for the reliability of the film/substrate system. The height profiles of different sections across the telephone-cord wrinkle can be considered a straight-sided model with uniform width and height or a pinned circular model that has a delamination region characterized by a sequence of connected sectors. Furthermore, the telephone-cord geometry of the thin film can be used to calculate interfacial toughness. The instability of the finite element model is introduced to fit the buckling morphology obtained by SFF. The interfacial toughness is determined to be 0.203 J/m2 at a 70.4° phase angle from the straight-sided model and 0.105 J/m2 at 76.9° from the pinned circular model.

  2. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    PubMed Central

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-01-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works, which used the retinal nerve fiber layer (RNFL) thickness measurements provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the “non-progressing” and “progressing” glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection. PMID:25606299

  3. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    NASA Astrophysics Data System (ADS)

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-03-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works, which used the retinal nerve fiber layer (RNFL) thickness measurements provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.

  4. CO2 mass estimation visible in time-lapse 3D seismic data from a saline aquifer and uncertainties

    NASA Astrophysics Data System (ADS)

    Ivanova, A.; Lueth, S.; Bergmann, P.; Ivandic, M.

    2014-12-01

    At Ketzin (Germany) the first European onshore pilot-scale project for geological storage of CO2 was initiated in 2004. This project is multidisciplinary and includes 3D time-lapse seismic monitoring. A 3D pre-injection seismic survey was acquired in 2005. CO2 injection into a sandstone saline aquifer then started at a depth of 650 m in 2008. A first 3D seismic repeat survey was acquired in 2009 after 22 kilotons had been injected; the imaged CO2 signature was concentrated around the injection well (200-300 m). A second 3D seismic repeat survey was acquired in 2012 after 61 kilotons had been injected; the imaged CO2 signature had extended further (100-200 m). The injection was terminated in 2013, with a total of 67 kilotons of CO2 injected. Time-lapse seismic processing, petrophysical data and geophysical logging of CO2 saturation have allowed for an estimate of the amount of CO2 visible in the seismic data. This estimate depends on the choice of a number of parameters and contains several uncertainties, the main ones being the following. The constant reservoir porosity and CO2 density used for the estimation are probably an over-simplification, since the reservoir is quite heterogeneous. Velocity dispersion may be present in the Ketzin reservoir rocks, but we do not consider it large enough to affect the estimated mass of CO2. There are only a small number of direct petrophysical observations, providing a weak statistical basis for relating seismic velocities to CO2 saturation, and we have assumed that the petrophysical experiments were carried out on samples that are representative of the average properties of the whole reservoir. Finally, most of the time-delay values in both 3D seismic repeat surveys within the amplitude anomaly are near the noise level of 1-2 ms; however, a change of 1 ms in the time delay significantly affects the mass estimate, so the choice of the time-delay cutoff is crucial. In spite

  5. Dynamics of errors in 3D motion estimation and implications for strain-tensor imaging in acoustic elastography

    NASA Astrophysics Data System (ADS)

    Bilgen, Mehmet

    2000-06-01

    For the purpose of quantifying the noise in acoustic elastography, a displacement covariance matrix is derived analytically for the cross-correlation based 3D motion estimator. Static deformation induced in tissue from an external mechanical source is represented by a second-order strain tensor. A generalized 3D model is introduced for the ultrasonic echo signals. The components of the covariance matrix are related to the variances of the displacement errors and the errors made in estimating the elements of the strain tensor. The results are combined to investigate the dependences of these errors on the experimental and signal-processing parameters as well as to determine the effects of one strain component on the estimation of the other. The expressions are evaluated for special cases of axial strain estimation in the presence of axial, axial-shear and lateral-shear type deformations in 2D. The signals are shown to decorrelate with any of these deformations, with strengths depending on the reorganization and interaction of tissue scatterers with the ultrasonic point spread function following the deformation. Conditions that favour the improvements in motion estimation performance are discussed, and advantages gained by signal companding and pulse compression are illustrated.

  6. Estimating the composition of hydrates from a 3D seismic dataset near Penghu Canyon on Chinese passive margin offshore Taiwan

    NASA Astrophysics Data System (ADS)

    Chi, Wu-Cheng

    2016-04-01

    A bottom-simulating reflector (BSR), representing the base of the gas hydrate stability zone, can be used to estimate geothermal gradients under the seafloor. However, to derive temperature estimates at the BSR, the correct hydrate composition is needed to calculate the phase boundary. Here we applied the method of Minshull and Keddie to constrain the hydrate composition and the pore fluid salinity. We used a 3D seismic dataset offshore SW Taiwan to test the method. In contrast to previous studies, we have considered 3D topographic effects using finite element modelling, as well as depth-dependent thermal conductivity. Using a pore water salinity of 2% at the BSR depth, as found in the nearby core samples, we successfully used a 99% methane and 1% ethane gas hydrate phase boundary to derive a sub-bottom depth vs. temperature plot which is consistent with the seafloor temperature from in-situ measurements. The results are also consistent with geochemical analyses of the pore fluids. The derived regional geothermal gradient is 40.1 °C/km, which is similar to the 40 °C/km used in the 3D finite element modelling in this study. This study is among the first documented successful uses of Minshull and Keddie's method to constrain seafloor gas hydrate composition.

  7. Experimental evaluation of the accuracy at the C-arm pose estimation with x-ray images.

    PubMed

    Thurauf, Sabine; Vogt, Florian; Hornung, Oliver; Korner, Mario; Nasseri, M Ali; Knoll, Alois

    2016-08-01

    C-arm X-ray systems need a high spatial accuracy for applications like cone beam computed tomography and 2D/3D overlay. One way to achieve the needed precision is a model-based calibration of the C-arm system. For such a calibration a kinematic and dynamic model of the system is constructed whose parameters are computed by pose measurements of the C-arm. Instead of common measurement systems used for a model-based calibration for robots like laser trackers, we use X-ray images of a calibration phantom to measure the C-arm pose. By the direct use of the imaging system, we overcome registration errors between the measurement device and the C-arm system. The C-arm pose measurement by X-ray imaging, the new measurement technique, has to be evaluated to check if the measurement accuracy is sufficient for the model-based calibration regarding the two mentioned applications. The scope of this work is a real world evaluation of the C-arm pose measurement accuracy with X-ray images of a calibration phantom using relative phantom movements and a laser tracker as ground truth.

  8. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    SciTech Connect

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-06-01

    Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1), the tumor localization errors and the intensity differences across the ten patients were smaller using the 4DCT-based motion models, possibly due to superior image quality. In case 2), the tumor localization error and intensity differences were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT
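
    The motion model above is built by principal component analysis of displacement vector fields (DVFs) from deformable registration between 4DCBCT phases. A minimal sketch of that PCA step is shown below; the number of phases, the voxel count and the random DVFs are placeholders, not the study's data.

```python
import numpy as np

# Hypothetical stack of displacement vector fields: one flattened DVF per
# respiratory phase, each row holding 3 * n_voxels displacement components.
rng = np.random.default_rng(1)
n_phases, n_voxels = 10, 5000
dvfs = rng.normal(size=(n_phases, 3 * n_voxels))

# PCA via SVD of the mean-centred DVF matrix.
mean_dvf = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
n_components = 3
basis = Vt[:n_components]                   # principal motion modes
weights = (dvfs - mean_dvf) @ basis.T       # per-phase PCA coefficients

# Any deformation state is parameterised by a small coefficient vector w;
# optimising w against measured cone-beam projections (as in the abstract)
# would yield the estimated 3D "fluoroscopic" anatomy.
w = weights[0]
reconstructed_dvf = mean_dvf + w @ basis
```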

  9. Sparse array 3-D ISAR imaging based on maximum likelihood estimation and CLEAN technique.

    PubMed

    Ma, Changzheng; Yeo, Tat Soon; Tan, Chee Seng; Tan, Hwee Siang

    2010-08-01

    Large 2-D sparse arrays provide high angular resolution microwave images, but artifacts are also induced by the high sidelobes of the beam pattern, thus limiting the dynamic range. The CLEAN technique has been used in the literature to extract strong scatterers for use in subsequent signal cancelation (artifact removal). However, the performance of the DFT-based parameter estimation used in the CLEAN algorithm to estimate signal amplitudes is known to be poor, and this affects the signal cancelation. In this paper, DFT is used only to provide the initial estimates, and a maximum likelihood parameter estimation method with a steepest descent implementation is then used to improve the precision of the calculated scatterer positions and amplitudes. Time domain information is also used to reduce the sidelobe levels. As a result, clear, artifact-free images can be obtained. The effects of multiple reflections and rotation speed estimation error are also discussed. The proposed method has been verified using numerical simulations and shown to be effective.
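
    CLEAN iteratively extracts the strongest scatterer and subtracts its beam-pattern response from the image. The toy 1D sketch below illustrates that loop with a made-up Gaussian point-spread function standing in for the sparse-array beam pattern; it does not include the maximum likelihood refinement described in the abstract.

```python
import numpy as np

def clean_1d(dirty, psf, gain=0.5, n_iter=500, threshold=1e-3):
    """Toy 1D CLEAN loop: repeatedly peel off scaled, shifted copies of the PSF."""
    residual = dirty.astype(float).copy()
    components = np.zeros_like(residual)
    centre = int(np.argmax(psf))
    for _ in range(n_iter):
        peak = int(np.argmax(np.abs(residual)))
        amp = residual[peak]
        if abs(amp) < threshold:
            break
        residual -= gain * amp * np.roll(psf, peak - centre)
        components[peak] += gain * amp
    return components, residual

# Two point scatterers blurred by a Gaussian "beam pattern"; circular shifts
# keep this toy forward model consistent with the np.roll used in the loop.
n = 256
x = np.arange(n)
psf = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)
dirty = 1.0 * np.roll(psf, 100 - n // 2) + 0.6 * np.roll(psf, 140 - n // 2)
components, residual = clean_1d(dirty, psf)   # peaks recovered near 100 and 140
```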

  10. A statistical approach to estimate the 3D size distribution of spheres from 2D size distributions

    USGS Publications Warehouse

    Kong, M.; Bhattacharya, R.N.; James, C.; Basu, A.

    2005-01-01

    Size distribution of rigidly embedded spheres in a groundmass is usually determined from measurements of the radii of the two-dimensional (2D) circular cross sections of the spheres in random flat planes of a sample, such as in thin sections or polished slabs. Several methods have been devised to find a simple factor to convert the mean of such 2D size distributions to the actual 3D mean size of the spheres, without a consensus. We derive an entirely theoretical solution based on well-established probability laws and not constrained by limitations of absolute size, which indicates that the ratio of the means of the measured 2D and estimated 3D grain size distributions should be π/4 (≈0.785). The actual 2D size distribution of the radii of submicron-sized, pure Fe0 globules in lunar agglutinitic glass, determined from backscattered electron images, is tested to fit the gamma size distribution model better than the log-normal model. Numerical analysis of 2D size distributions of Fe0 globules in 9 lunar soils shows that the average 2D/3D ratio of means is 0.84, which is very close to the theoretical value. These results converge with the ratio 0.8 that Hughes (1978) determined for millimeter-sized chondrules from empirical measurements. We recommend that a factor of 1.273 (the reciprocal of 0.785) be used to convert the determined 2D mean size (radius or diameter) of a population of spheres to estimate their actual 3D size. © 2005 Geological Society of America.
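
    As a quick numerical check of the π/4 factor discussed above, the Monte Carlo sketch below slices a single sphere with uniformly distributed random planes; it is an illustration of the geometric argument, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(42)

# A sphere of radius R cut by a random plane gives a circular section of
# radius sqrt(R^2 - h^2), with h uniform over (-R, R) for planes that hit it.
R = 1.0
h = rng.uniform(-R, R, size=1_000_000)
r2d = np.sqrt(R**2 - h**2)

ratio = r2d.mean() / R
print(ratio, np.pi / 4)      # both ~= 0.785
print(1.0 / ratio)           # ~= 1.273, the 2D -> 3D correction factor
```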

  11. Suspect Height Estimation Using the Faro Focus(3D) Laser Scanner.

    PubMed

    Johnson, Monique; Liscio, Eugene

    2015-11-01

    At present, very little research has been devoted to investigating the ability of laser scanning technology to accurately measure height from surveillance video. The goal of this study was to test the accuracy of one particular laser scanner to estimate suspect height from video footage. The known heights of 10 individuals were measured using an anthropometer. The individuals were then recorded on video walking along a predetermined path in a simulated crime scene environment both with and without headwear. The difference between the known heights and the estimated heights obtained from the laser scanner software were compared using a one-way t-test. The height estimates obtained from the software were not significantly different from the known heights whether individuals were wearing headwear (p = 0.186) or not (p = 0.707). Thus, laser scanning is one technique that could potentially be used by investigators to determine suspect height from video footage.

  12. Rigid and non-rigid geometrical transformations of a marker-cluster and their impact on bone-pose estimation.

    PubMed

    Bonci, T; Camomilla, V; Dumas, R; Chèze, L; Cappozzo, A

    2015-11-26

    When stereophotogrammetry and skin-markers are used, bone-pose estimation is jeopardised by the soft tissue artefact (STA). At marker-cluster level, this can be represented using a modal series of rigid (RT; translation and rotation) and non-rigid (NRT; homothety and scaling) geometrical transformations. The NRT has been found to be smaller than the RT and has been claimed to have a limited impact on bone-pose estimation. This study aims to investigate this matter by comparatively assessing the propagation of both STA components to the bone-pose estimate, using different numbers of markers. Twelve skin-markers distributed over the anterior aspect of a thigh were considered, and STA time functions were generated for each of them, as plausibly occur during walking, using an ad hoc model, and represented through the geometrical transformations. Using marker-clusters made of four to 12 markers affected by these STAs, and a Procrustes superimposition approach, the bone pose and the relevant accuracy were estimated. This was also done for a selected four-marker cluster affected by STAs randomly simulated by modifying the original STA NRT component, so that its energy fell in the range 30-90% of the total STA energy. The pose error, which decreased slightly while increasing the number of markers in the marker-cluster, was independent of the NRT amplitude, and was always null when the RT component was removed. It was thus demonstrated that only the RT component impacts pose estimation accuracy and should thus be accounted for when designing algorithms aimed at compensating for STA.
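
    Bone pose is recovered here with a Procrustes (least-squares rigid) superimposition of the marker cluster. A minimal SVD-based sketch of that step, using made-up marker coordinates, is shown below.

```python
import numpy as np

def procrustes_rigid(model_pts, measured_pts):
    """Least-squares rotation R and translation t mapping model -> measured.

    Both inputs are (n_markers, 3) arrays of corresponding marker positions.
    """
    mu_m = model_pts.mean(axis=0)
    mu_d = measured_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (measured_pts - mu_d)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_m
    return R, t

# Toy check: recover a known rotation/translation applied to a 4-marker cluster.
model = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
measured = model @ R_true.T + np.array([0.05, -0.02, 0.30])
R_est, t_est = procrustes_rigid(model, measured)   # R_est ~= R_true
```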

  13. Growth trajectories of the human fetal brain tissues estimated from 3D reconstructed in utero MRI

    PubMed Central

    Scott, Julia A.; Habas, Piotr A.; Kim, Kio; Rajagopalan, Vidya; Hamzelou, Kia S.; Corbett-Detig, James M.; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin

    2012-01-01

    In the latter half of gestation (20 to 40 gestational weeks), human brain growth accelerates in conjunction with cortical folding and the deceleration of ventricular zone progenitor cell proliferation. These processes are reflected in changes in the volume of respective fetal tissue zones. Thus far, growth trajectories of the fetal tissue zones have been extracted primarily from 2D measurements on histological sections and magnetic resonance imaging (MRI). In this study, the volumes of major fetal zones—cortical plate (CP), subplate and intermediate zone (SP+IZ), germinal matrix (GMAT), deep gray nuclei (DG), and ventricles (VENT)—are calculated from automatic segmentation of motion-corrected, 3D reconstructed MRI. We analyzed 48 T2-weighted MRI scans from 39 normally developing fetuses in utero between 20.57 and 31.14 gestational weeks (GW). The supratentorial volume (STV) increased linearly at a rate of 15.22% per week. The SP+IZ (14.75% per week) and DG (15.56% per week) volumes increased at similar rates. The CP increased at a greater relative rate (18.00% per week), while the VENT (9.18% per week) changed more slowly. Therefore, CP increased as a fraction of STV and the VENT fraction declined. The total GMAT volume slightly increased then decreased after 25 GW. We did not detect volumetric sexual dimorphisms or total hemispheric volume asymmetries, which may emerge later in gestation. Further application of the automated fetal brain segmentation to later gestational ages will bridge the gap between volumetric studies of premature brain development and normal brain development in utero. PMID:21530634

  14. Growth trajectories of the human fetal brain tissues estimated from 3D reconstructed in utero MRI.

    PubMed

    Scott, Julia A; Habas, Piotr A; Kim, Kio; Rajagopalan, Vidya; Hamzelou, Kia S; Corbett-Detig, James M; Barkovich, A James; Glenn, Orit A; Studholme, Colin

    2011-08-01

    In the latter half of gestation (20-40 gestational weeks), human brain growth accelerates in conjunction with cortical folding and the deceleration of ventricular zone progenitor cell proliferation. These processes are reflected in changes in the volume of respective fetal tissue zones. Thus far, growth trajectories of the fetal tissue zones have been extracted primarily from 2D measurements on histological sections and magnetic resonance imaging (MRI). In this study, the volumes of major fetal zones-cortical plate (CP), subplate and intermediate zone (SP+IZ), germinal matrix (GMAT), deep gray nuclei (DG), and ventricles (VENT)--are calculated from automatic segmentation of motion-corrected, 3D reconstructed MRI. We analyzed 48 T2-weighted MRI scans from 39 normally developing fetuses in utero between 20.57 and 31.14 gestational weeks (GW). The supratentorial volume (STV) increased linearly at a rate of 15.22% per week. The SP+IZ (14.75% per week) and DG (15.56% per week) volumes increased at similar rates. The CP increased at a greater relative rate (18.00% per week), while the VENT (9.18% per week) changed more slowly. Therefore, CP increased as a fraction of STV and the VENT fraction declined. The total GMAT volume slightly increased then decreased after 25 GW. We did not detect volumetric sexual dimorphisms or total hemispheric volume asymmetries, which may emerge later in gestation. Further application of the automated fetal brain segmentation to later gestational ages will bridge the gap between volumetric studies of premature brain development and normal brain development in utero.

  15. Novel methods for estimating 3D distributions of radioactive isotopes in materials

    NASA Astrophysics Data System (ADS)

    Iwamoto, Y.; Kataoka, J.; Kishimoto, A.; Nishiyama, T.; Taya, T.; Okochi, H.; Ogata, H.; Yamamoto, S.

    2016-09-01

    In recent years, various gamma-ray visualization techniques, or gamma cameras, have been proposed. These techniques are extremely effective for identifying "hot spots" or regions where radioactive isotopes are accumulated. Examples of such would be nuclear-disaster-affected areas such as Fukushima or the vicinity of nuclear reactors. However, the images acquired with a gamma camera do not include distance information between radioactive isotopes and the camera, and hence are "degenerated" in the direction of the isotopes. Moreover, depth information in the images is lost when the isotopes are embedded in materials, such as water, sand, and concrete. Here, we propose two methods of obtaining depth information of radioactive isotopes embedded in materials by comparing (1) their spectra and (2) images of incident gamma rays scattered by the materials and direct gamma rays. In the first method, the spectra of radioactive isotopes and the ratios of scattered to direct gamma rays are obtained. We verify experimentally that the ratio increases with increasing depth, as predicted by simulations. Although the method using energy spectra has been studied for a long time, an advantage of our method is the use of low-energy (50-150 keV) photons as scattered gamma rays. In the second method, the spatial extent of images obtained for direct and scattered gamma rays is compared. By performing detailed Monte Carlo simulations using Geant4, we verify that the spatial extent of the position where gamma rays are scattered increases with increasing depth. To demonstrate this, we are developing various gamma cameras to compare low-energy (scattered) gamma-ray images with fully photo-absorbed gamma-ray images. We also demonstrate that the 3D reconstruction of isotopes/hotspots is possible with our proposed methods. These methods have potential applications in the medical fields, and in severe environments such as the nuclear-disaster-affected areas in Fukushima.

  16. Leaf Area Index Estimation in Vineyards from Uav Hyperspectral Data, 2d Image Mosaics and 3d Canopy Surface Models

    NASA Astrophysics Data System (ADS)

    Kalisperakis, I.; Stentoumis, Ch.; Grammatikopoulos, L.; Karantzalos, K.

    2015-08-01

    The indirect estimation of leaf area index (LAI) at large spatial scales is crucial for several environmental and agricultural applications. To this end, in this paper, we compare and evaluate LAI estimation in vineyards from different UAV imaging datasets. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthophotomosaics and (iii) 3D crop surface models. The computed canopy levels have been used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (r2 > 73%) with the in-situ, ground truth LAI measurements. As expected, the lowest correlations were derived from the calculated greenness levels of the 2D RGB orthomosaics. The highest correlation rates were established with the hyperspectral canopy greenness and the 3D canopy surface models. For the latter, accurate detection of canopy, soil and other materials in between the vine rows is required. All approaches tend to overestimate LAI in cases with sparse, weak, unhealthy plants and canopy.

  17. Guided wave-based J-integral estimation for dynamic stress intensity factors using 3D scanning laser Doppler vibrometry

    NASA Astrophysics Data System (ADS)

    Ayers, J.; Owens, C. T.; Liu, K. C.; Swenson, E.; Ghoshal, A.; Weiss, V.

    2013-01-01

    The application of guided waves to interrogate remote areas of structural components has been researched extensively in characterizing damage. However, there exists a sparsity of work in using piezoelectric transducer-generated guided waves as a method of assessing stress intensity factors (SIF). This quantitative information enables accurate estimation of the remaining life of metallic structures exhibiting cracks, such as military and commercial transport vehicles. The proposed full wavefield approach, based on 3D laser vibrometry and piezoelectric transducer-generated guided waves, provides a practical means for estimation of dynamic stress intensity factors (DSIF) through local strain energy mapping via the J-integral. Strain energies and traction vectors can be conveniently estimated from wavefield data recorded using 3D laser vibrometry, through interpolation and subsequent spatial differentiation of the response field. Upon estimation of the J-integral, it is possible to obtain the corresponding DSIF terms. For this study, the experimental test matrix consists of aluminum plates with manufactured defects representing canonical elliptical crack geometries under uniaxial tension that are excited by surface-mounted piezoelectric actuators. The defects' major to minor axes ratios vary from unity to approximately 133. Finite element simulations are compared to experimental results and the relative magnitudes of the J-integrals are examined.

  18. Evaluation of Scalar Value Estimation Techniques for 3D Medical Imaging

    DTIC Science & Technology

    1991-12-01

    Figure 5.3: Cell subdivision, factor 2, with tricubic interpolation estimating minor-voxel values, and marching cubes extraction of the hyperboloid surface.

  19. Estimation of regeneration coverage in a temperate forest by 3D segmentation using airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Amiri, Nina; Yao, Wei; Heurich, Marco; Krzystek, Peter; Skidmore, Andrew K.

    2016-10-01

    Forest understory and regeneration are important factors in sustainable forest management. However, understanding their spatial distribution in multilayered forests requires accurate and continuously updated field data, which are difficult and time-consuming to obtain. Therefore, cost-efficient inventory methods are required, and airborne laser scanning (ALS) is a promising tool for obtaining such information. In this study, we examine a clustering-based 3D segmentation in combination with ALS data for regeneration coverage estimation in a multilayered temperate forest. The core of our method is a two-tiered segmentation of the 3D point clouds into segments associated with regeneration trees. First, small parts of trees (super-voxels) are constructed through mean shift clustering, a nonparametric procedure for finding the local maxima of a density function. In the second step, we form a graph based on the mean shift clusters and merge them into larger segments using the normalized cut algorithm. These segments are used to obtain the regeneration coverage of the target plot. Results show that, based on validation data from field inventory and terrestrial laser scanning (TLS), our approach correctly estimates up to 70% of regeneration coverage across plots with different properties, such as tree height and tree species. The proposed method is negatively impacted by the density of the overstory because of decreasing ground point density. In addition, the estimated coverage has a strong relationship with the overstory tree species composition.
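
    A minimal sketch of the first tier (super-voxel construction by mean shift clustering) is given below, using scikit-learn on invented ALS points; the bandwidth value is an assumed tuning parameter, and the subsequent graph-based normalized-cut merging is only noted in a comment.

```python
import numpy as np
from sklearn.cluster import MeanShift

# Hypothetical ALS returns below the regeneration height threshold (x, y, z in metres).
rng = np.random.default_rng(0)
points = rng.random((500, 3)) * [20.0, 20.0, 4.0]

# First tier: group points into small "super-voxels" with mean shift clustering.
ms = MeanShift(bandwidth=1.5)            # bandwidth is an assumed tuning value
labels = ms.fit_predict(points)
centroids = ms.cluster_centers_

# A second tier (not shown) would build a graph over these centroids and merge
# them into tree segments, e.g. with a normalized-cut algorithm.
print(len(centroids), "super-voxels")
```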

  20. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for the purpose of extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.

  1. A Monocular SLAM Method to Estimate Relative Pose During Satellite Proximity Operations

    DTIC Science & Technology

    2015-03-26

    Research Projects Agency (DARPA), SpaceX, and Orbital Sciences Corporation aim to advance prox-ops technology and demonstrate capability to rendezvous ... relative pose problem [11]. The SpaceX Dragon capsule and Cygnus by Orbital Sciences Corporation both send unmanned resupply capsules to the ...

  2. Real-Time Estimation of 3-D Needle Shape and Deflection for MRI-Guided Interventions

    PubMed Central

    Park, Yong-Lae; Elayaperumal, Santhi; Daniel, Bruce; Ryu, Seok Chang; Shin, Mihye; Savall, Joan; Black, Richard J.; Moslehi, Behzad; Cutkosky, Mark R.

    2015-01-01

    We describe an MRI-compatible biopsy needle instrumented with optical fiber Bragg gratings for measuring bending deflections of the needle as it is inserted into tissues. During procedures, such as diagnostic biopsies and localized treatments, it is useful to track any tool deviation from the planned trajectory to minimize positioning errors and procedural complications. The goal is to display tool deflections in real time, with greater bandwidth and accuracy than when viewing the tool in MR images. A standard 18 ga × 15 cm inner needle is prepared using a fixture, and 350-μm-deep grooves are created along its length. Optical fibers are embedded in the grooves. Two sets of sensors, located at different points along the needle, provide an estimate of the bent profile, as well as temperature compensation. Tests of the needle in a water bath showed that it produced no adverse imaging artifacts when used with the MR scanner.

  3. Instantaneous helical axis estimation from 3-D video data in neck kinematics for whiplash diagnostics.

    PubMed

    Woltring, H J; Long, K; Osterbauer, P J; Fuhr, A W

    1994-12-01

    To date, the diagnosis of whiplash injuries has been very difficult and largely based on subjective, clinical assessment. The work by Winters and Peles (Multiple Muscle Systems: Biomechanics and Movement Organization, Springer, New York, 1990) suggests that the use of finite helical axes (FHAs) in the neck may provide an objective assessment tool for neck mobility. Thus, the position of the FHA describing head-trunk motion may allow discrimination between normal and pathological cases such as decreased mobility in particular cervical joints. For noisy, unsmoothed data, the FHAs must be taken over rather large angular intervals if the FHAs are to be reconstructed with sufficient accuracy; in the Winters and Peles study, these intervals were approximately 10 degrees. In order to study the movements' microstructure, the present investigation uses instantaneous helical axes (IHAs) estimated from low-pass smoothed video data. Here, the small-step noise sensitivity of the FHA no longer applies, and proper low-pass filtering allows estimation of the IHA even for small rotation velocity omega of the moving neck. For marker clusters mounted on the head and trunk, technical system validation showed that the IHA direction dispersions were on the order of one degree, while their position dispersions were on the order of 1 mm, for low-pass cut-off frequencies of a few Hz (the dispersions were calculated from omega-weighted errors, in order to account for the adverse effects of vanishing omega). Various simple, planar models relating the instantaneous, 2-D centre of rotation with the geometry and kinematics of a multi-joint neck model are derived, in order to gauge the utility of the FHA and IHA approaches. Some preliminary results on asymptomatic and pathological subjects are provided, in terms of the 'ruled surface' formed by sampled IHAs and of their piercing points through the mid-sagittal plane during a prescribed flexion-extension movement of the neck.
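
    For illustration only, the following sketch computes an instantaneous helical axis from an angular velocity vector and the velocity of a tracked point, using the standard rigid-body relation s = p + (ω × v_p)/|ω|²; the numeric values are invented and this is not the authors' processing pipeline.

```python
import numpy as np

def instantaneous_helical_axis(omega, p, v_p, eps=1e-8):
    """Instantaneous helical axis of a rigid body from its angular velocity
    'omega' and the velocity 'v_p' of a tracked point 'p' (all 3-vectors).

    Returns the unit axis direction n and the point s on the axis closest to p."""
    w = np.linalg.norm(omega)
    if w < eps:                        # near-zero rotation velocity: axis ill-defined
        return None, None
    n = omega / w
    s = p + np.cross(omega, v_p) / w**2
    return n, s

# Hypothetical head-segment kinematics at one smoothed video sample:
omega = np.array([0.0, 0.0, 0.8])      # rad/s, rotation about the vertical axis
p = np.array([0.0, 0.05, 1.60])        # head marker position (m)
v_p = np.array([0.04, 0.0, 0.0])       # its linear velocity (m/s)
n, s = instantaneous_helical_axis(omega, p, v_p)
```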

  4. Assessment of intraoperative 3D imaging alternatives for IOERT dose estimation.

    PubMed

    García-Vázquez, Verónica; Marinetto, Eugenio; Guerra, Pedro; Valdivieso-Casique, Manlio Fabio; Calvo, Felipe Ángel; Alvarado-Vásquez, Eduardo; Sole, Claudio Vicente; Vosburgh, Kirby Gannett; Desco, Manuel; Pascau, Javier

    2016-08-23

    Intraoperative electron radiation therapy (IOERT) involves irradiation of an unresected tumour or a post-resection tumour bed. The dose distribution is calculated from a preoperative computed tomography (CT) study acquired using a CT simulator. However, differences between the actual IOERT field and that calculated from the preoperative study arise as a result of patient position, surgical access, tumour resection and the IOERT set-up. Intraoperative CT imaging may then enable a more accurate estimation of dose distribution. In this study, we evaluated three kilovoltage (kV) CT scanners with the ability to acquire intraoperative images. Our findings indicate that current IOERT plans may be improved using data based on actual anatomical conditions during radiation. The systems studied were two portable systems ("O-arm", a cone-beam CT [CBCT] system, and "BodyTom", a multislice CT [MSCT] system) and one CBCT integrated in a conventional linear accelerator (LINAC) ("TrueBeam"). TrueBeam and BodyTom showed good results, as the gamma pass rates of their dose distributions compared to the gold standard (dose distributions calculated from images acquired with a CT simulator) were above 97% in most cases. The O-arm yielded a lower percentage of voxels fulfilling gamma criteria owing to its reduced field of view (which left it prone to truncation artefacts). Our results show that the images acquired using a portable CT or even a LINAC with on-board kV CBCT could be used to estimate the IOERT dose and improve the ability to evaluate and register the treatment administered to the patient.

  5. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most

  6. Reproducing electric field observations during magnetic storms by means of rigorous 3-D modelling and distortion matrix co-estimation

    NASA Astrophysics Data System (ADS)

    Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey

    2014-12-01

    Electric fields induced in the conducting Earth by geomagnetic disturbances drive currents in power transmission grids, telecommunication lines or buried pipelines, which can cause service disruptions. A key step in the prediction of the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we revisit a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a magnetospheric source model described by low-degree spherical harmonics from observatory magnetic data. The actual electric field, however, is known to be perturbed by galvanic effects, arising from very local near-surface heterogeneities or topography, which cannot be included in the model. Galvanic effects are commonly accounted for with a real-valued time-independent distortion matrix, which linearly relates measured and modelled electric fields. Using data of six magnetic storms that occurred between 2000 and 2003, we estimate distortion matrices for observatory sites onshore and on the ocean bottom. Reliable estimates are obtained, and the modellings are found to explain up to 90% of the measurements. We further find that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of the shape of electric field time series during magnetic storms. Since the method relies on precomputed responses of a 3-D Earth to geomagnetic disturbances, which can be recycled for each storm, the required computational resources are negligible. Our approach is thus suitable for real-time prediction of geomagnetically induced currents by combining it with reliable forecasts of the source field.
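
    A minimal sketch of the distortion-matrix co-estimation idea, reduced to an ordinary least-squares fit of a real 2 × 2 matrix relating modelled and measured horizontal electric-field components, is shown below; the storm-time series and the "true" matrix are synthetic and invented for illustration.

```python
import numpy as np

def estimate_distortion_matrix(e_modelled, e_measured):
    """Least-squares estimate of the real-valued 2x2 distortion matrix D such that
    e_measured(t) ~ D @ e_modelled(t), using the two horizontal electric-field
    components stacked over all storm-time samples (arrays of shape N x 2)."""
    D_T, residuals, rank, sv = np.linalg.lstsq(e_modelled, e_measured, rcond=None)
    return D_T.T   # so that e_measured ~ (D @ e_modelled.T).T

# Synthetic storm-time series (mV/km), N samples x 2 components (Ex, Ey).
rng = np.random.default_rng(0)
e_mod = rng.standard_normal((5000, 2))
D_true = np.array([[1.3, 0.2], [-0.1, 0.8]])
e_meas = e_mod @ D_true.T + 0.05 * rng.standard_normal((5000, 2))
print(estimate_distortion_matrix(e_mod, e_meas))   # should recover D_true approximately
```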

  7. A computational model for estimating tumor margins in complementary tactile and 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Shamsil, Arefin; Escoto, Abelardo; Naish, Michael D.; Patel, Rajni V.

    2016-03-01

    Conventional surgical methods are effective for treating lung tumors; however, they impose high trauma and pain to patients. Minimally invasive surgery is a safer alternative as smaller incisions are required to reach the lung; however, it is challenging due to inadequate intraoperative tumor localization. To address this issue, a mechatronic palpation device was developed that incorporates tactile and ultrasound sensors capable of acquiring surface and cross-sectional images of palpated tissue. Initial work focused on tactile image segmentation and fusion of position-tracked tactile images, resulting in a reconstruction of the palpated surface to compute the spatial locations of underlying tumors. This paper presents a computational model capable of analyzing orthogonally-paired tactile and ultrasound images to compute the surface circumference and depth margins of a tumor. The framework also integrates an error compensation technique and an algebraic model to align all of the image pairs and to estimate the tumor depths within the tracked thickness of a palpated tissue. For validation, an ex vivo experimental study was conducted involving the complete palpation of 11 porcine liver tissues injected with iodine-agar tumors of varying sizes and shapes. The resulting tactile and ultrasound images were then processed using the proposed model to compute the tumor margins and compare them to fluoroscopy based physical measurements. The results show a good negative correlation (r = -0.783, p = 0.004) between the tumor surface margins and a good positive correlation (r = 0.743, p = 0.009) between the tumor depth margins.

  8. Error Estimation And Accurate Mapping Based ALE Formulation For 3D Simulation Of Friction Stir Welding

    NASA Astrophysics Data System (ADS)

    Guerdoux, Simon; Fourment, Lionel

    2007-05-01

    An Arbitrary Lagrangian Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process with the FORGE3® F.E. software. A splitting method is utilized: a) the material velocity/pressure and temperature fields are calculated, b) the mesh velocity is derived from the domain boundary evolution and an adaptive refinement criterion provided by error estimation, c) P1 and P0 variables are remapped. Different velocity computation and remap techniques have been investigated, providing significant improvement with respect to more standard approaches. The proposed ALE formulation is applied to FSW simulation. Steady state welding, but also transient phases are simulated, showing good robustness and accuracy of the developed formulation. Friction parameters are identified for an Eulerian steady state simulation by comparison with experimental results. Void formation can be simulated. Simulations of the transient plunge and welding phases help to better understand the deposition process that occurs at the trailing edge of the probe. Flexibility and robustness of the model finally allows investigating the influence of new tooling designs on the deposition process.

  9. Estimation of aortic valve leaflets from 3D CT images using local shape dictionaries and linear coding

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James

    2016-03-01

    Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images through many hours of manual annotation, and therefore an automatic valve shape reconstruction method is desired. In this paper, we present a method for estimating the aortic valve shape from 3D cardiac CT images, which is represented by triangle meshes. We propose a pipeline for aortic valve shape estimation which includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on a real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.

  10. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    NASA Astrophysics Data System (ADS)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of dental implant-supported prostheses is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67 ± 34 μm and 108 μm, and angular misfits of 0.15 ± 0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implants' pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.

  11. Far and proximity maneuvers of a constellation of service satellites and autonomous pose estimation of customer satellite using machine vision

    NASA Astrophysics Data System (ADS)

    Arantes, Gilberto, Jr.; Marconi Rocco, Evandro; da Fonseca, Ijar M.; Theil, Stephan

    2010-05-01

    Space robotics has a substantial interest in achieving on-orbit satellite servicing operations autonomously, e.g. rendezvous and docking/berthing (RVD) with customer and malfunctioning satellites. An on-orbit servicing vehicle requires the ability to estimate the position and attitude whenever the target is uncooperative. Such a situation arises when the target is damaged. In this context, this work presents a robust autonomous pose system applied to RVD missions. Our approach is based on computer vision, using a single camera and some previous knowledge of the target, i.e. the customer spacecraft. A rendezvous mission analysis tool for the autonomous service satellite has been developed and presented, for far maneuvers, e.g. distances above 1 km from the target, and close maneuvers. The far operations consist of orbit transfer using the Lambert formulation. The close operations include the inspection phase (during which the pose estimation is computed) and the final approach phase. Our approach is based on the Lambert problem for far maneuvers, and the Hill equations are used to simulate and analyze the approaching and final trajectory between the target and the chaser during the last phase of the rendezvous operation. A method for optimally estimating the relative orientation and position between the camera system and the target is presented in detail. The target is modelled as an assembly of points. The pose of the target is represented by a dual quaternion in order to develop a simple quadratic error function in such a way that the pose estimation task becomes a least-squares minimization problem. The problem of pose is solved and some methods of non-linear least-squares optimization (Newton, Gauss-Newton, and Levenberg-Marquardt) are compared and discussed in terms of accuracy and computational cost.
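
    The least-squares pose refinement can be illustrated with a simplified sketch that parameterizes the rotation with a rotation vector instead of the dual quaternion used in the record, and minimizes pinhole reprojection residuals with SciPy's Levenberg-Marquardt solver; the target geometry, focal length and poses are invented.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, rvec, tvec, f=800.0):
    """Pinhole projection of target model points into the chaser camera."""
    cam = Rotation.from_rotvec(rvec).apply(points_3d) + tvec
    return f * cam[:, :2] / cam[:, 2:3]

def reprojection_residuals(x, model_pts, observed_uv):
    return (project(model_pts, x[:3], x[3:]) - observed_uv).ravel()

# Hypothetical target model (corner points of the customer satellite, metres).
model = np.array([[0.5, 0.5, 0], [-0.5, 0.5, 0], [-0.5, -0.5, 0],
                  [0.5, -0.5, 0], [0.0, 0.0, 0.3]])
true_pose = np.concatenate([[0.05, -0.1, 0.02], [0.1, -0.05, 8.0]])
observed = project(model, true_pose[:3], true_pose[3:])

# Levenberg-Marquardt refinement from a rough initial pose guess.
x0 = np.array([0, 0, 0, 0, 0, 10.0])
sol = least_squares(reprojection_residuals, x0, args=(model, observed), method='lm')
print(sol.x)   # should approach true_pose
```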

  12. Estimation of Pulmonary Motion in Healthy Subjects and Patients with Intrathoracic Tumors Using 3D-Dynamic MRI: Initial Results

    PubMed Central

    Schoebinger, Max; Herth, Felix; Tuengerthal, Siegfried; Meinzer, Heinz-Peter; Kauczor, Hans-Ulrich

    2009-01-01

    Objective To evaluate a new technique for quantifying regional lung motion using 3D-MRI in healthy volunteers and to apply the technique in patients with intra- or extrapulmonary tumors. Materials and Methods Intraparenchymal lung motion during a whole breathing cycle was quantified in 30 healthy volunteers using 3D-dynamic MRI (FLASH [fast low angle shot] 3D, TRICKS [time-resolved interpolated contrast kinetics]). Qualitative and quantitative vector color maps and cumulative histograms were generated using an introduced semiautomatic algorithm. An analysis of lung motion was performed and correlated with an established 2D-MRI technique for verification. As a proof of concept, the technique was applied in five patients with non-small cell lung cancer (NSCLC) and 5 patients with malignant pleural mesothelioma (MPM). Results The correlation between intraparenchymal lung motion of the basal lung parts and the 2D-MRI technique was significant (r = 0.89, p < 0.05). Also, the vector color maps quantitatively illustrated regional lung motion in all healthy volunteers. No differences were observed between both hemithoraces, which was verified by cumulative histograms. The patients with NSCLC showed a local lack of lung motion in the area of the tumor. In the patients with MPM, there was globally diminished motion of the tumor-bearing hemithorax, which improved significantly after chemotherapy (CHT) (assessed by the 2D- and 3D-techniques) (p < 0.01). Using global spirometry, an improvement could also be shown (vital capacity 2.9 ± 0.5 versus 3.4 ± 0.6 L, FEV1 0.9 ± 0.2 versus 1.4 ± 0.2 L) after CHT, but this improvement was not significant. Conclusion 3D-dynamic MRI is able to quantify intraparenchymal lung motion. Local and global parenchymal pathologies can be precisely located and might be a new tool used to quantify even slight changes in lung motion (e.g. in therapy monitoring, follow-up studies or even benign lung diseases). PMID:19885311

  13. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiduciary markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phones' displays, as if the display was a window into this space. Besides lines, our prototype application also supports 3D geometry creation, geometry transformation operations, and it shows the location of the other user's phone.
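
    A hedged sketch of the marker-based phone pose estimation follows, assuming the fiduciary-marker corners have already been detected in the image; the marker size, pixel coordinates and camera intrinsics are invented, and OpenCV's solvePnP stands in for whatever pose solver the prototype actually uses.

```python
import numpy as np
import cv2

# Known 3D corner positions of one fiduciary marker (metres; the marker defines the
# room origin here) and their detected pixel coordinates in the phone camera image.
object_pts = np.array([[0, 0, 0], [0.08, 0, 0], [0.08, 0.08, 0], [0, 0.08, 0]],
                      dtype=np.float32)
image_pts = np.array([[412, 303], [498, 300], [501, 388], [409, 391]], dtype=np.float32)

# Assumed intrinsics of the phone camera (from a prior calibration), no distortion.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
dist = np.zeros(5)

# Camera pose relative to the marker: rotation (Rodrigues vector) and translation.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
phone_position = (-R.T @ tvec).ravel()   # invert to get the phone's position in the marker frame
```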

  14. On the Estimation Accuracy of the 3D Body Center of Mass Trajectory during Human Locomotion: Inverse vs. Forward Dynamics

    PubMed Central

    Pavei, Gaspare; Seminati, Elena; Cazzola, Dario; Minetti, Alberto E.

    2017-01-01

    The dynamics of body center of mass (BCoM) 3D trajectory during locomotion is crucial to the mechanical understanding of the different gaits. Forward Dynamics (FD) obtains BCoM motion from ground reaction forces while Inverse Dynamics (ID) estimates BCoM position and speed from motion capture of body segments. These two techniques are widely used by the literature on the estimation of BCoM. Despite the specific pros and cons of both methods, FD is less biased and considered as the golden standard, while ID estimates strongly depend on the segmental model adopted to schematically represent the moving body. In these experiments a single subject walked, ran, (uni- and bi-laterally) skipped, and race-walked at a wide range of speeds on a treadmill with force sensors underneath. In all conditions a simultaneous motion capture (8 cameras, 36 markers) took place. 3D BCoM trajectories computed according to five marker set models of ID have been compared to the one obtained by FD on the same (about 2,700) strides. Such a comparison aims to check the validity of the investigated models to capture the “true” dynamics of gaits in terms of distance between paths, mechanical external work and energy recovery. Results allow to conclude that: (1) among gaits, race walking is the most critical in being described by ID, (2) among the investigated segmental models, those capturing the motion of four limbs and trunk more closely reproduce the subtle temporal and spatial changes of BCoM trajectory within the strides of most gaits, (3) FD-ID discrepancy in external work is speed dependent within a gait in the most unsuccessful models, and (4) the internal work is not affected by the difference in BCoM estimates. PMID:28337148
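
    A toy version of the forward-dynamics (FD) estimate is sketched below: the ground reaction force is converted to BCoM acceleration and integrated twice. The force signal, body mass and integration constants are invented; in real analyses the integration constants are constrained per stride (e.g. from the average treadmill speed).

```python
import numpy as np

def bcom_trajectory_fd(grf, mass, dt, g=9.81, v0=np.zeros(3), p0=np.zeros(3)):
    """Forward-dynamics BCoM estimate: subtract body weight from the 3D ground
    reaction force, divide by mass to get acceleration, then integrate twice
    (simple cumulative sums) to obtain velocity and position."""
    acc = grf / mass - np.array([0.0, 0.0, g])        # weight acts along z
    vel = v0 + np.cumsum(acc, axis=0) * dt
    pos = p0 + np.cumsum(vel, axis=0) * dt
    return pos

# Hypothetical treadmill force data: N samples x 3 components (Fx, Fy, Fz) in newtons.
mass, dt = 70.0, 1.0 / 1000.0
t = np.arange(0.0, 1.0, dt)
grf = np.column_stack([20 * np.sin(4 * np.pi * t),
                       np.zeros_like(t),
                       mass * 9.81 + 150 * np.sin(2 * np.pi * 1.4 * t)])
bcom = bcom_trajectory_fd(grf, mass, dt)
```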

  15. 3D Wind Reconstruction and Turbulence Estimation in the Boundary Layer from Doppler Lidar Measurements using Particle Method

    NASA Astrophysics Data System (ADS)

    Rottner, L.; Baehr, C.

    2014-12-01

    Turbulent phenomena in the atmospheric boundary layer (ABL) are characterized by small spatial and temporal scales, which make them difficult to observe and to model. New remote sensing instruments, like Doppler Lidar, give access to fine and high-frequency observations of wind in the ABL. This study uses a nonlinear estimation method based on these observations to reconstruct 3D wind in a hemispheric volume and to estimate atmospheric turbulence parameters. The wind observations are associated with particle systems which are driven by a local turbulence model. The particles have both fluid and stochastic properties. Therefore, spatial averages and covariances may be deduced from the particles. Among the innovative aspects, we point out the absence of the common hypothesis of stationary-ergodic turbulence and the absence of a particle-model closure hypothesis. Every time observations are available, the 3D wind is reconstructed and turbulence parameters such as turbulent kinetic energy, dissipation rate, and Turbulent Intensity (TI) are provided. This study presents some results obtained using real wind measurements provided by a five-line-of-sight Lidar. Compared with classical methods (e.g. eddy covariance), our technique yields equivalent long-term results. Moreover, it provides finer, real-time turbulence estimates. To assess this new method, TI is computed independently using different observation types. First, anemometer data are used to provide a TI reference. Then raw and filtered Lidar observations are compared. The TI obtained from raw data is significantly higher than the reference one, whereas the TI estimated with the new algorithm is of the same order. In this study we have presented a new class of algorithm to reconstruct local random media. It offers a new way to understand turbulence in the ABL, in both stable and convective conditions. Later, it could be used to refine the turbulence parametrization in meteorological meso-scale models.

  16. On the Estimation Accuracy of the 3D Body Center of Mass Trajectory during Human Locomotion: Inverse vs. Forward Dynamics.

    PubMed

    Pavei, Gaspare; Seminati, Elena; Cazzola, Dario; Minetti, Alberto E

    2017-01-01

    The dynamics of body center of mass (BCoM) 3D trajectory during locomotion is crucial to the mechanical understanding of the different gaits. Forward Dynamics (FD) obtains BCoM motion from ground reaction forces while Inverse Dynamics (ID) estimates BCoM position and speed from motion capture of body segments. These two techniques are widely used by the literature on the estimation of BCoM. Despite the specific pros and cons of both methods, FD is less biased and considered as the golden standard, while ID estimates strongly depend on the segmental model adopted to schematically represent the moving body. In these experiments a single subject walked, ran, (uni- and bi-laterally) skipped, and race-walked at a wide range of speeds on a treadmill with force sensors underneath. In all conditions a simultaneous motion capture (8 cameras, 36 markers) took place. 3D BCoM trajectories computed according to five marker set models of ID have been compared to the one obtained by FD on the same (about 2,700) strides. Such a comparison aims to check the validity of the investigated models to capture the "true" dynamics of gaits in terms of distance between paths, mechanical external work and energy recovery. Results allow to conclude that: (1) among gaits, race walking is the most critical in being described by ID, (2) among the investigated segmental models, those capturing the motion of four limbs and trunk more closely reproduce the subtle temporal and spatial changes of BCoM trajectory within the strides of most gaits, (3) FD-ID discrepancy in external work is speed dependent within a gait in the most unsuccessful models, and (4) the internal work is not affected by the difference in BCoM estimates.

  17. Performance Evaluation of a Pose Estimation Method based on the SwissRanger SR4000

    DTIC Science & Technology

    2012-08-01

    ... however, not suitable for navigating a small robot. Commercially available Flash LIDAR now has sufficient accuracy for robotic applications. ... In theory, the pose change between two image frames can be computed from three matched data points. This scheme only works well with noise-free range ...

  18. Influence of the Alveolar Cleft Type on Preoperative Estimation Using 3D CT Assessment for Alveolar Cleft

    PubMed Central

    Choi, Hang Suk; Choi, Hyun Gon; Kim, Soon Heum; Park, Hyung Jun; Shin, Dong Hyeok; Jo, Dong In; Kim, Cheol Keun

    2012-01-01

    Background The bone graft for the alveolar cleft has been accepted as one of the essential treatments for cleft lip patients. Precise preoperative measurement of the architecture and size of the bone defect in alveolar cleft has been considered helpful for increasing the success rate of bone grafting because those features may vary with the cleft type. Recently, some studies have reported on the usefulness of three-dimensional (3D) computed tomography (CT) assessment of alveolar bone defect; however, no study on the possible implication of the cleft type on the difference between the presumed and actual value has been conducted yet. We aimed to evaluate the clinical predictability of such measurement using 3D CT assessment according to the cleft type. Methods The study consisted of 47 pediatric patients. The subjects were divided according to the cleft type. CT was performed before the graft operation and assessed using image analysis software. The statistical significance of the difference between the preoperative estimation and intraoperative measurement was analyzed. Results The difference between the preoperative and intraoperative values were -0.1±0.3 cm3 (P=0.084). There was no significant intergroup difference, but the groups with a cleft palate showed a significant difference of -0.2±0.3 cm3 (P<0.05). Conclusions Assessment of the alveolar cleft volume using 3D CT scan data and image analysis software can help in selecting the optimal graft procedure and extracting the correct volume of cancellous bone for grafting. Considering the cleft type, it would be helpful to extract an additional volume of 0.2 cm3 in the presence of a cleft palate. PMID:23094242

  19. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s-1, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark—the control of the video arcade game ‘Pong’.

  20. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces.

    PubMed

    Abbott, W W; Faisal, A A

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s(-1), more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark--the control of the video arcade game 'Pong'.

  1. Comparison of different approaches of estimating effective dose from reported exposure data in 3D imaging with interventional fluoroscopy systems

    NASA Astrophysics Data System (ADS)

    Svalkvist, Angelica; Hansson, Jonny; Bâth, Magnus

    2014-03-01

    Three-dimensional (3D) imaging with interventional fluoroscopy systems is today a common examination. The examination includes acquisition of two-dimensional projection images, used to reconstruct section images of the patient. The aim of the present study was to investigate the difference in resulting effective dose obtained using different levels of complexity in calculations of effective doses from these examinations. In the study the Siemens Artis Zeego interventional fluoroscopy system (Siemens Medical Solutions, Erlangen, Germany) was used. Images of anthropomorphic chest and pelvis phantoms were acquired. The exposure values obtained were used to calculate the resulting effective doses from the examinations, using the computer software PCXMC (STUK, Helsinki, Finland). The dose calculations were performed using three different methods: 1. using individual exposure values for each projection image, 2. using the mean tube voltage and the total DAP value, evenly distributed over the projection images, and 3. using the mean tube voltage and the total DAP value, evenly distributed over a smaller selection of projection images. The results revealed that the difference in resulting effective dose between the first two methods was smaller than 5%. When only a selection of projection images was used in the dose calculations, the difference increased to over 10%. Given the uncertainties associated with the effective dose concept, the results indicate that dose calculations based on average exposure values distributed over a smaller selection of projection angles can provide reasonably accurate estimations of the radiation doses from 3D imaging using interventional fluoroscopy systems.

  2. Effects of computing parameters and measurement locations on the estimation of 3D NPS in non-stationary MDCT images.

    PubMed

    Miéville, Frédéric A; Bolard, Gregory; Bulling, Shelley; Gudinchet, François; Bochud, François O; Verdun, François R

    2013-11-01

    The goal of this study was to investigate the impact of computing parameters and the location of volumes of interest (VOI) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement (the sampling distances bx,y,z, the VOI lengths Lx,y,z, the number of VOIs NVOI, and the structured noise) was investigated in order to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more in the r-direction (phantom radius) than in the z-direction. A 25 × 25 × 40 mm³ VOI associated with DFOV = 200 mm (Lx,y,z = 64, bx,y = 0.391 mm with 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs had a directional dependency, contrary to NPS obtained from large VOIs located in the center of the volume or from small VOIs located on a concentric circle. This showed that the VOI size and location play a major role in the determination of NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
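
    The ensemble NPS computation itself can be sketched as below; this is a generic FFT-based estimator on synthetic noise VOIs, with simple mean removal standing in for the first-order detrending mentioned above, not the authors' exact code.

```python
import numpy as np

def nps_3d(vois, voxel_size):
    """Ensemble-averaged 3D noise power spectrum from a set of noise-only VOIs.

    vois       : array of shape (n_voi, Nx, Ny, Nz), HU values
    voxel_size : (bx, by, bz) sampling distances in mm
    Each VOI has its mean removed (a crude stand-in for polynomial detrending)
    before the squared FFT magnitudes are averaged over the ensemble."""
    n_voi, nx, ny, nz = vois.shape
    bx, by, bz = voxel_size
    detrended = vois - vois.mean(axis=(1, 2, 3), keepdims=True)
    spectra = np.abs(np.fft.fftn(detrended, axes=(1, 2, 3))) ** 2
    return (bx * by * bz) / (nx * ny * nz) * spectra.mean(axis=0)

# Synthetic white-noise VOIs: 64 x 64 x 64 voxels of 0.391 x 0.391 x 0.625 mm (invented).
vois = np.random.normal(0.0, 10.0, size=(20, 64, 64, 64))
nps = nps_3d(vois, (0.391, 0.391, 0.625))
```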

  3. Reproducing Electric Field Observations during Magnetic Storms by means of Rigorous 3-D Modelling and Distortion Matrix Co-estimation

    NASA Astrophysics Data System (ADS)

    Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey

    2015-04-01

    Electric fields induced in the conducting Earth during magnetic storms drive currents in power transmission grids, telecommunication lines or buried pipelines. These geomagnetically induced currents (GIC) can cause severe service disruptions. The prediction of GIC is thus of great importance for public and industry. A key step in the prediction of the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we developed a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a model of the magnetospheric source. The latter is described by low-degree spherical harmonics; its temporal evolution is derived from observatory magnetic data. Time series of the electric field can be computed for every location on Earth's surface. The actual electric field however is known to be perturbed by galvanic effects, arising from very local near-surface heterogeneities or topography, which cannot be included in the conductivity model. Galvanic effects are commonly accounted for with a real-valued time-independent distortion matrix, which linearly relates measured and computed electric fields. Using data of various magnetic storms that occurred between 2000 and 2003, we estimated distortion matrices for observatory sites onshore and on the ocean bottom. Strong correlations between modellings and measurements validate our method. The distortion matrix estimates prove to be reliable, as they are accurately reproduced for different magnetic storms. We further show that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of electric field time series during magnetic storms. Since the required computational resources are negligible, our approach is suitable for a real-time prediction of GIC. For this purpose, a reliable forecast of the source field, e.g. based on data from satellites

  4. Baseline Face Detection, Head Pose Estimation, and Coarse Direction Detection for Facial Data in the SHRP2 Naturalistic Driving Study

    SciTech Connect

    Paone, Jeffrey R; Bolme, David S; Ferrell, Regina Kay; Aykac, Deniz; Karnowski, Thomas Paul

    2015-01-01

    Keeping a driver focused on the road is one of the most critical steps in ensuring the safe operation of a vehicle. The Strategic Highway Research Program 2 (SHRP2) has over 3,100 recorded videos of volunteer drivers during a period of 2 years. This extensive naturalistic driving study (NDS) contains over one million hours of video and associated data that could aid safety researchers in understanding where the driver's attention is focused. Manual analysis of this data is infeasible; therefore, efforts are underway to develop automated feature extraction algorithms to process and characterize the data. The real-world nature, volume, and acquisition conditions are unmatched in the transportation community, but there are also challenges because the data has relatively low resolution, high compression rates, and differing illumination conditions. A smaller dataset, the head pose validation study, is available which used the same recording equipment as SHRP2 but is more easily accessible with fewer privacy constraints. In this work we report initial head pose accuracy using commercial and open source face pose estimation algorithms on the head pose validation data set.

  5. Real-time human pose estimation and gesture recognition from depth images using superpixels and SVM classifier.

    PubMed

    Kim, Hanguen; Lee, Sangwon; Lee, Dongsung; Choi, Soonmin; Ju, Jinsun; Myung, Hyun

    2015-05-26

    In this paper, we present human pose estimation and gesture recognition algorithms that use only depth information. The proposed methods are designed to be operated with only a CPU (central processing unit), so that the algorithm can be operated on a low-cost platform, such as an embedded board. The human pose estimation method is based on an SVM (support vector machine) and superpixels without prior knowledge of a human body model. In the gesture recognition method, gestures are recognized from the pose information of a human body. To recognize gestures regardless of motion speed, the proposed method utilizes the keyframe extraction method. Gesture recognition is performed by comparing input keyframes with keyframes in registered gestures. The gesture yielding the smallest comparison error is chosen as a recognized gesture. To prevent recognition of gestures when a person performs a gesture that is not registered, we derive the maximum allowable comparison errors by comparing each registered gesture with the other gestures. We evaluated our method using a dataset that we generated. The experiment results show that our method performs fairly well and is applicable in real environments.
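
    The keyframe-comparison step of the gesture recognizer can be illustrated with the toy sketch below; the keyframe counts, joint counts and error thresholds are invented, and in practice the thresholds would be derived by comparing the registered gestures against each other, as described above.

```python
import numpy as np

def gesture_error(input_keyframes, template_keyframes):
    """Mean per-joint position error between an input gesture and a registered
    gesture, both already reduced to matching keyframes
    (arrays of shape n_keyframes x n_joints x 3)."""
    return np.mean(np.linalg.norm(input_keyframes - template_keyframes, axis=-1))

def recognize(input_keyframes, templates, max_allowed_error):
    """Choose the registered gesture with the smallest comparison error and
    reject the input (return None) if even that error exceeds its threshold."""
    errors = {name: gesture_error(input_keyframes, kf) for name, kf in templates.items()}
    best = min(errors, key=errors.get)
    return best if errors[best] <= max_allowed_error[best] else None

# Invented example: two registered gestures, 5 keyframes of 15 joints each.
rng = np.random.default_rng(1)
templates = {"wave": rng.random((5, 15, 3)), "swipe": rng.random((5, 15, 3))}
max_allowed = {"wave": 0.1, "swipe": 0.1}
observed = templates["wave"] + 0.02 * rng.random((5, 15, 3))
print(recognize(observed, templates, max_allowed))   # expected: "wave"
```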

  6. A hybrid 3D-Var data assimilation scheme for joint state and parameter estimation: application to morphodynamic modelling

    NASA Astrophysics Data System (ADS)

    Smith, P.; Nichols, N. K.; Dance, S.

    2011-12-01

    Data assimilation is typically used to provide initial conditions for state estimation; combining model predictions with observational data to produce an updated model state that most accurately characterises the true system state whilst keeping the model parameters fixed. This updated model state is then used to initiate the next model forecast. However, even with perfect initial data, inaccurate representation of model parameters will lead to the growth of model error and therefore affect the ability of our model to accurately predict the true system state. A key question in model development is how to estimate parameters a priori. In most cases, parameter estimation is addressed as a separate issue to state estimation and model calibration is performed offline in a separate calculation. Here we demonstrate how, by employing the technique of state augmentation, it is possible to use data assimilation to estimate uncertain model parameters concurrently with the model state as part of the assimilation process. We present a novel hybrid data assimilation algorithm developed for application to parameter estimation in morphodynamic models. The new approach is based on a computationally inexpensive 3D-Var scheme, where the specification of the covariance matrices is crucial for success. For combined state-parameter estimation, it is particularly important that the cross-covariances between the parameters and the state are given a good a priori specification. Early experiments indicated that in order to yield reliable estimates of the true parameters, a flow dependent representation of the state-parameter cross covariances is required. By combining ideas from 3D-Var and the extended Kalman filter we have developed a novel hybrid assimilation scheme that captures the flow dependent nature of the state-parameter cross covariances without the computational expense of explicitly propagating the full system covariance matrix. We will give details of the formulation of this
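
    State augmentation can be illustrated with a toy, single-step analysis in which one uncertain parameter is appended to a two-element state and corrected through assumed state-parameter cross-covariances; all numbers are invented, and this static-covariance example is not the hybrid flow-dependent scheme described in the record.

```python
import numpy as np

# Augmented background: a 2-element state (e.g. bed heights at two points) plus
# one uncertain model parameter, updated together in a single analysis step.
xb = np.array([1.0, 0.8])          # background state
pb = np.array([0.30])              # background guess of the uncertain parameter
zb = np.concatenate([xb, pb])      # augmented background vector

# Augmented background-error covariance; the off-diagonal state-parameter blocks
# are what allow observations of the state to correct the parameter.
B = np.array([[0.10, 0.02, 0.04],
              [0.02, 0.10, 0.04],
              [0.04, 0.04, 0.05]])

H = np.array([[1.0, 0.0, 0.0],     # only the state components are observed
              [0.0, 1.0, 0.0]])
R = 0.05 * np.eye(2)               # observation-error covariance
y = np.array([1.15, 0.95])         # observations

# Analysis: z_a = z_b + K (y - H z_b), with K = B H^T (H B H^T + R)^-1
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
za = zb + K @ (y - H @ zb)
x_analysis, p_analysis = za[:2], za[2]
```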

  7. Towards patient-specific modeling of mitral valve repair: 3D transesophageal echocardiography-derived parameter estimation.

    PubMed

    Zhang, Fan; Kanik, Jingjing; Mansi, Tommaso; Voigt, Ingmar; Sharma, Puneet; Ionasec, Razvan Ioan; Subrahmanyan, Lakshman; Lin, Ben A; Sugeng, Lissa; Yuh, David; Comaniciu, Dorin; Duncan, James

    2017-01-01

    Transesophageal echocardiography (TEE) is routinely used to provide important qualitative and quantitative information regarding mitral regurgitation. Contemporary planning of surgical mitral valve repair, however, still relies heavily upon subjective predictions based on experience and intuition. While patient-specific mitral valve modeling holds promise, its effectiveness is limited by assumptions that must be made about constitutive material properties. In this paper, we propose and develop a semi-automated framework that combines machine learning image analysis with geometrical and biomechanical models to build a patient-specific mitral valve representation that incorporates image-derived material properties. We use our computational framework, along with 3D TEE images of the open and closed mitral valve, to estimate values for chordae rest lengths and leaflet material properties. These parameters are initialized using generic values and optimized to match the visualized deformation of mitral valve geometry between the open and closed states. Optimization is achieved by minimizing the summed Euclidean distances between the estimated and image-derived closed mitral valve geometry. The spatially varying material parameters of the mitral leaflets are estimated using an extended Kalman filter to take advantage of the temporal information available from TEE. This semi-automated and patient-specific modeling framework was tested on 15 TEE image acquisitions from 14 patients. Simulated mitral valve closures yielded average errors (measured by point-to-point Euclidean distances) of 1.86 ± 1.24 mm. The estimated material parameters suggest that the anterior leaflet is stiffer than the posterior leaflet and that these properties vary between individuals, consistent with experimental observations described in the literature.

  8. Extension of the Optimized Virtual Fields Method to estimate viscoelastic material parameters from 3D dynamic displacement fields

    PubMed Central

    Connesson, N.; Clayton, E.H.; Bayly, P.V.; Pierron, F.

    2015-01-01

    In-vivo measurement of the mechanical properties of soft tissues is essential to provide necessary data in biomechanics and medicine (early cancer diagnosis, study of traumatic brain injuries, etc.). Imaging techniques such as Magnetic Resonance Elastography (MRE) can provide 3D displacement maps in the bulk and in vivo, from which, using inverse methods, it is then possible to identify some mechanical parameters of the tissues (stiffness, damping, etc.). The main difficulties in these inverse identification procedures consist in dealing with the pressure waves contained in the data and with the experimental noise perturbing the spatial derivatives required during the processing. The Optimized Virtual Fields Method (OVFM) [1], designed to be robust to noise, presents a natural and rigorous solution to these problems. The OVFM has been adapted to identify material parameter maps from MRE data consisting of 3-dimensional displacement fields in harmonically loaded soft materials. In this work, the method has been developed to identify elastic and viscoelastic models. The OVFM sensitivity to spatial resolution and to noise has been studied by analyzing 3D analytically simulated displacement data. This study evaluates and describes the identification performance of the OVFM: the spatial resolution and the experimental noise each induce distinct biases in the identified parameters. The well-known identification problems in the case of quasi-incompressible materials also find a natural solution in the OVFM. Moreover, an a posteriori criterion to estimate the local identification quality is proposed. The identification results obtained on actual experiments are briefly presented. PMID:26146416

  9. Dosimetry in radiotherapy using a-Si EPIDs: Systems, methods, and applications focusing on 3D patient dose estimation

    NASA Astrophysics Data System (ADS)

    McCurdy, B. M. C.

    2013-06-01

    An overview is provided of the use of amorphous silicon electronic portal imaging devices (EPIDs) for dosimetric purposes in radiation therapy, focusing on 3D patient dose estimation. EPIDs were originally developed to provide on-treatment radiological imaging to assist with patient setup, but there has also been a natural interest in using them as dosimeters since they use the megavoltage therapy beam to form images. The current generation of clinically available EPID technology, amorphous-silicon (a-Si) flat panel imagers, possesses many characteristics that make it much better suited to dosimetric applications than earlier EPID technologies. Features such as linearity with dose/dose rate, high spatial resolution, real-time capability, minimal optical glare, and digital operation combine with the convenience of a compact, retractable detector system directly mounted on the linear accelerator to provide a system that is well-suited to dosimetric applications. This review will discuss clinically available a-Si EPID systems, highlighting dosimetric characteristics and remaining limitations. Methods for using EPIDs in dosimetry applications will be discussed. Dosimetric applications using a-Si EPIDs to estimate three-dimensional dose in the patient during treatment will be reviewed. Clinics throughout the world are implementing increasingly complex treatments such as dynamic intensity modulated radiation therapy and volumetric modulated arc therapy, as well as specialized treatment techniques using large doses per fraction and short treatment courses (i.e., hypofractionation and stereotactic radiosurgery). These factors drive the continued strong interest in using EPIDs as dosimeters for patient treatment verification.

  10. Estimation of the environmental risk posed by landfills using chemical, microbiological and ecotoxicological testing of leachates.

    PubMed

    Matejczyk, Marek; Płaza, Grażyna A; Nałęcz-Jawecki, Grzegorz; Ulfig, Krzysztof; Markowska-Szczupak, Agata

    2011-02-01

    parameters of the landfill leachates should be analyzed together to assess the environmental risk posed by landfill emissions.

  11. AN INFORMATIC APPROACH TO ESTIMATING ECOLOGICAL RISKS POSED BY PHARMACEUTICAL USE

    EPA Science Inventory

    A new method for estimating risks of human prescription pharmaceuticals based on information found in regulatory filings as well as scientific and trade literature is described in a presentation at the Pharmaceuticals in the Environment Workshop in Las Vegas, NV, August 23-25, 20...

  12. Real-time pose estimation of devices from x-ray images: Application to x-ray/echo registration for cardiac interventions.

    PubMed

    Hatt, Charles R; Speidel, Michael A; Raval, Amish N

    2016-12-01

    In recent years, registration between x-ray fluoroscopy (XRF) and transesophageal echocardiography (TEE) has been rapidly developed, validated, and translated to the clinic as a tool for advanced image guidance of structural heart interventions. This technology relies on accurate pose-estimation of the TEE probe via standard 2D/3D registration methods. It has been shown that latencies caused by slow registrations can result in errors during untracked frames, and a real-time (>15 Hz) tracking algorithm is needed to minimize these errors. This paper presents two novel similarity metrics designed for accurate, robust, and extremely fast pose-estimation of devices from XRF images: Direct Splat Correlation (DSC) and Patch Gradient Correlation (PGC). Both metrics were implemented in CUDA C, and validated on simulated and clinical datasets against prior methods presented in the literature. It was shown that by combining DSC and PGC in a hybrid method (HYB), target registration errors comparable to previously reported methods were achieved, but at much higher speeds and lower failure rates. In simulated datasets, the proposed HYB method achieved a median projected target registration error (pTRE) of 0.33 mm and a mean registration frame-rate of 12.1 Hz, while previously published methods produced median pTREs greater than 1.5 mm and mean registration frame-rates less than 4 Hz. In clinical datasets, the HYB method achieved a median pTRE of 1.1 mm and a mean registration frame-rate of 20.5 Hz, while previously published methods produced median pTREs greater than 1.3 mm and mean registration frame-rates less than 12 Hz. The proposed hybrid method also had much lower failure rates than previously published methods.

  13. Real-time pose estimation of devices from x-ray images: Application to x-ray/echo registration for cardiac interventions

    PubMed Central

    Hatt, Charles R.; Speidel, Michael A.; Raval, Amish N.

    2016-01-01

    In recent years, registration between x-ray fluoroscopy (XRF) and transesophageal echocardiography (TEE) has been rapidly developed, validated, and translated to the clinic as a tool for advanced image guidance of structural heart interventions. This technology relies on accurate pose-estimation of the TEE probe via standard 2D/3D registration methods. It has been shown that latencies caused by slow registrations can result in errors during untracked frames, and a real-time (>15 Hz) tracking algorithm is needed to minimize these errors. This paper presents two novel similarity metrics designed for accurate, robust, and extremely fast pose-estimation of devices from XRF images: Direct Splat Correlation (DSC) and Patch Gradient Correlation (PGC). Both metrics were implemented in CUDA C, and validated on simulated and clinical datasets against prior methods presented in the literature. It was shown that by combining DSC and PGC in a hybrid method (HYB), target registration errors comparable to previously reported methods were achieved, but at much higher speeds and lower failure rates. In simulated datasets, the proposed HYB method achieved a median projected target registration error (pTRE) of 0.33 mm and a mean registration frame-rate of 12.1 Hz, while previously published methods produced median pTREs greater than 1.5 mm and mean registration frame-rates less than 4 Hz. In clinical datasets, the HYB method achieved a median pTRE of 1.1 mm and a mean registration frame-rate of 20.5 Hz, while previously published methods produced median pTREs greater than 1.3 mm and mean registration frame-rates less than 12 Hz. The proposed hybrid method also had much lower failure rates than previously published methods. PMID:27179366

  14. Robust Vision-Based Pose Estimation Algorithm for AN Uav with Known Gravity Vector

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2016-06-01

    Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight. Such applications require real-time performance and robustness of the external orientation estimation algorithm. The accuracy of the solution is strongly dependent on the number of reference points visible in the given image. The problem has an analytical solution only if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In such cases the solution can be found if the gravity vector direction in the camera coordinate system is known. A number of algorithms for external orientation estimation for the case of 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation, which is subject to large errors for complex reference point configurations. This paper is focused on the development of a new computationally efficient and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm's implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. The experimental evaluation of the algorithm proved its computational efficiency and robustness against errors in reference point positions and complex configurations.

  15. Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?

    PubMed Central

    Wouda, Frank J.; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H.

    2016-01-01

    Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups by using data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible, with an average joint position error of approximately 7 cm and an average joint angle error of 7°. Additionally, the effect of magnetic disturbances, which are typical in orientation tracking, on the estimation of full-body poses was also investigated; nearest neighbor search showed better performance under such disturbances. PMID:27983676
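
    The lazy-learning variant described above amounts to a nearest-neighbour lookup from sparse orientation features to full-body poses. A minimal sketch is given below; the file names, feature dimensions, the choice of k = 5 and the naive averaging of neighbouring poses (which ignores proper orientation averaging) are all assumptions for illustration.

      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      # Hypothetical training set: sparse features from five IMUs paired with
      # full-body pose vectors recorded by a complete motion capture setup.
      features = np.load("sparse_imu_features.npy")   # shape (N, d_sparse), assumed file
      poses = np.load("full_body_poses.npy")          # shape (N, d_full), assumed file

      nn = NearestNeighbors(n_neighbors=5).fit(features)

      def estimate_full_body_pose(sparse_feature):
          # Find the k most similar training samples in sparse-feature space and
          # return the average of their full-body poses.
          _, idx = nn.kneighbors(np.asarray(sparse_feature).reshape(1, -1))
          return poses[idx[0]].mean(axis=0)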

  16. Intracranial haemodynamics during vasomotor stress test in unilateral internal carotid artery occlusion estimated by 3-D transcranial Doppler scanner.

    PubMed

    Zbornikova, V; Lassvik, C; Hillman, J

    1995-04-01

    Seventeen patients, 14 males and 3 females, mean age 64 years (range 45-77 years), with longstanding unilateral occlusion of the internal carotid artery and minimal neurological deficit were evaluated in order to find criteria for potential benefit from extracranial-intracranial bypass surgery. 3-D transcranial Doppler was used to estimate mean velocities and the pulsatility index in the middle cerebral artery, anterior cerebral artery and posterior cerebral artery before and after IV injection of 1 g acetazolamide. The anterior cerebral artery was the supplying vessel to the occluded side in 16 patients, and mean velocities were significantly (p < 0.001) faster on the occluded (59.3 ± 14.5 cm/s) and nonoccluded (91.6 ± 29.6 cm/s, p < 0.05) sides than those found in the middle cerebral artery (39.2 ± 13.7 and 50.9 ± 8.5 cm/s). In two patients a decrease of mean velocity after acetazolamide was noted in the middle cerebral artery, indicating a 'steal' effect. In another 4 patients, a poor vasomotor response was seen, with a mean velocity increase of less than 11% in the middle cerebral artery. Differences between the posterior cerebral artery on the occluded and nonoccluded sides were insignificant, as were those between the middle and posterior cerebral arteries on the occluded side. Resting values of the pulsatility index differed significantly (p < 0.01) only between the anterior and posterior cerebral arteries on the nonoccluded side. (ABSTRACT TRUNCATED AT 250 WORDS)

  17. Psychophysical estimation of 3D virtual depth of united, synthesized and mixed type stereograms by means of simultaneous observation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    2007-02-01

    Recently, many types of computer-generated stereograms (CGSs), i.e., works of art produced using a computer, have been published for hobby and entertainment purposes. Activation of the brain, improvement of eyesight, reduction of mental stress, healing effects, etc. are said to be expected when a CGS is properly appreciated as a stereoscopic view. A great deal of information is available on the internet concerning all aspects of stereogram history, science, social organization, the various types of stereograms, and free software for generating CGSs. Generally, the CGS is classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each type has advantages and disadvantages when viewed directly with two eyes, which requires training and a little patience. In this study, the characteristics of united, synthesized and mixed type stereograms, the role and composition of the depth map image (DMI), also called the hidden image or picture, and the effect of irregular shifts of the texture pattern image, also called the wallpaper, are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and of the visual quality of the virtual image by means of simultaneous observation with the parallel viewing method.

  18. Exposures from indoor spraying of chlorpyrifos pose greater health risks to children than currently estimated.

    PubMed Central

    Davis, D L; Ahmed, A K

    1998-01-01

    Recent findings from indoor exposure studies of chlorpyrifos indicate that young children are at higher risk from the semivolatile pesticide than had previously been estimated [Gurunathan et al., Environ Health Perspect 106:9-16 (1998)]. The study showed that after a single broadcast use of the pesticide by certified applicators in apartment rooms, chlorpyrifos continued to accumulate on children's toys and hard surfaces 2 weeks after spraying. Based on the findings of this and other research studies, the estimated chlorpyrifos exposure levels for children from indoor spraying are approximately 21-119 times the current recommended reference dose of 3 μg/kg/day from all sources. A joint agreement reached between the U.S. Environmental Protection Agency and the registrants of chlorpyrifos-based products will phase out a number of indoor uses of the pesticide, including broadcast spraying and direct use on pets. While crack and crevice treatment of insects (such as cockroaches and termites) by chlorpyrifos will still continue, it appears prudent to explore other insect control options, including the use of baits, traps, and insect sterilants and growth regulators. To ensure global protection, adequate dissemination of appropriate safety and regulatory information to developing regions of the world, where importation and local production of chlorpyrifos-based products for indoor use may be significant, is critical. PMID:9618343

  19. Rapid review: Estimates of incremental breast cancer detection from tomosynthesis (3D-mammography) screening in women with dense breasts.

    PubMed

    Houssami, Nehmat; Turner, Robin M

    2016-12-01

    High breast tissue density increases breast cancer (BC) risk, and the risk of an interval BC in mammography screening. Density-tailored screening has mostly used adjunct imaging to screen women with dense breasts, however, the emergence of tomosynthesis (3D-mammography) provides an opportunity to steer density-tailored screening in new directions potentially obviating the need for adjunct imaging. A rapid review (a streamlined evidence synthesis) was performed to summarise data on tomosynthesis screening in women with heterogeneously dense or extremely dense breasts, with the aim of estimating incremental (additional) BC detection attributed to tomosynthesis in comparison with standard 2D-mammography. Meta-analysed data from prospective trials comparing these mammography modalities in the same women (N = 10,188) in predominantly biennial screening showed significant incremental BC detection of 3.9/1000 screens attributable to tomosynthesis (P < 0.001). Studies comparing different groups of women screened with tomosynthesis (N = 103,230) or with 2D-mammography (N = 177,814) yielded a pooled difference in BC detection of 1.4/1000 screens representing significantly higher BC detection in tomosynthesis-screened women (P < 0.001), and a pooled difference for recall of -23.3/1000 screens representing significantly lower recall in tomosynthesis-screened groups (P < 0.001), than for 2D-mammography. These estimates can inform planning of future trials of density-tailored screening and may guide discussion of screening women with dense breasts.

  20. A variational Data Assimilation algorithm to better estimate the salinity for the Berre lagoon with Telemac3D

    NASA Astrophysics Data System (ADS)

    Ricci, S. M.; Piacentini, A.; Riadh, A.; Goutal, N.; Razafindrakoto, E.; Zaoui, F.; Gant, M.; Morel, T.; Duchaine, F.; Thual, O.

    2012-12-01

    The Berre lagoon is a receptacle of 1000 Mm3 where salty sea water meets fresh water discharged by the hydroelectric plant at Saint-Chamas and by natural tributaries (Arc and Touloubre rivers). By improving the quality of the simulation of the hydrodynamics of the lagoon with TELEMAC 3D, EDF R&D at LNHE aims at optimizing the operation of the hydroelectric production while preserving the lagoon ecosystem. To do so, and in a collaborative framework with CERFACS, a data assimilation (DA) algorithm is being implemented, using the Open-Palm coupler, to make the most of continuous (every 15 min) in-situ salinity measurements at 4 locations in the lagoon. Preliminary studies were carried out to quantify the difference between a reference simulation and the observations over a test period. It was shown that the model is able to represent relatively well the evolution of the salinity field at the observation stations, given some adjustments of the forcing near Caronte. Still, discrepancies of up to several g/l remain and could be corrected with the DA algorithm. Additionally, some numerical features should be fixed to ensure the robustness of the code with respect to compilation platforms and parallel computing. Similarly to meteorological and oceanographic approaches, the observations are used sequentially to update the hydrodynamic state. More specifically, a 3D-FGAT algorithm is used to correct the salinity state at the beginning of an assimilation window. This variational algorithm relies on the hypothesis that the tangent linear physics can be approximated by a persistence model over a chosen time window. Sensitivity tests on a reference run showed that, in order to cope with this constraint, the analysis time window should be at most 3 h. For instance, it was shown that a local positive salinity increment of 0.5 g/l introduced at -5 m is dissipated by the numerical model over 1 day (mostly physical and numerical diffusion) (Figure a). Using an average estimate of the

  1. Driver head pose tracking with thermal camera

    NASA Astrophysics Data System (ADS)

    Bole, S.; Fournier, C.; Lavergne, C.; Druart, G.; Lépine, T.

    2016-09-01

    Head pose can be seen as a coarse estimation of gaze direction. In the automotive industry, knowledge about gaze direction could optimize Human-Machine Interfaces (HMI) and Advanced Driver Assistance Systems (ADAS). Pose estimation systems are often camera-based when applications have to be contactless. In this paper, we explore uncooled thermal imagery (8-14 μm) for its intrinsic night vision capabilities and its invariance to lighting variations. Two methods are implemented and compared, both aided by a 3D model of the head. The 3D model, mapped with a thermal texture, allows a base of 2D projected models to be synthesized, each differently oriented and labeled in yaw and pitch. The first method is based on keypoints. Keypoints of the models are matched with those of the query image. These sets of matches, aided by the 3D shape of the model, allow the 3D pose to be estimated. The second method is a global appearance approach. Among all 2D models in the base, the algorithm searches for the one closest to the query image using a weighted least-squares difference.
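
    The global appearance approach described above reduces to scoring every projected 2D model against the thermal query image and returning the pose label of the best match. The following sketch uses a weighted least-squares image difference as the score; the data layout (a list of model images with (yaw, pitch) labels) and the weight map are assumptions, not details from the paper.

      import numpy as np

      def closest_model(query, model_images, model_labels, weights):
          # query, each model image and weights are 2D arrays of the same shape;
          # model_labels[i] is the (yaw, pitch) label of model_images[i].
          errors = [np.sum(weights * (query - m) ** 2) for m in model_images]
          best = int(np.argmin(errors))
          return model_labels[best], errors[best]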

  2. Advances in 3D soil mapping and water content estimation using multi-channel ground-penetrating radar

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.

    2011-12-01

    Multi-channel ground-penetrating radar systems have recently become widely available, thereby opening new possibilities for shallow imaging of the subsurface. One advantage of these systems is that they can significantly reduce survey times by simultaneously collecting multiple lines of GPR reflection data. As a result, it is becoming more practical to complete 3D surveys, particularly in situations where the subsurface undergoes rapid changes, e.g., when monitoring infiltration and redistribution of water in soils. While 3D and 4D surveys can provide a degree of clarity that significantly improves interpretation of the subsurface, an even more powerful feature of the new multi-channel systems for hydrologists is their ability to collect data using multiple antenna offsets. Common mid-point (CMP) surveys have been widely used to estimate radar wave velocities, which can be related to water contents, by sequentially increasing the distance, i.e., offset, between the source and receiver antennas. This process is highly labor intensive with single-channel systems, and therefore such surveys are often performed at only a few locations at any given site. In contrast, with multi-channel GPR systems it is possible to physically arrange an array of antennas at different offsets, such that a CMP-style survey is performed at every point along a radar transect. It is then possible to process these data to obtain detailed maps of wave velocity with a horizontal resolution on the order of centimeters. In this talk I review concepts underlying multi-channel GPR imaging with an emphasis on multi-offset profiling for water content estimation. Numerical simulations are used to provide examples that illustrate situations where multi-offset GPR profiling is likely to be successful, with an emphasis on considering how issues like noise, soil heterogeneity, vertical variations in water content and weak reflection returns affect algorithms for automated analysis of the data. Overall

  3. Estimation of environmental capacity of phosphorus in Gorgan Bay, Iran, via a 3D ecological-hydrodynamic model.

    PubMed

    Ranjbar, Mohammad Hassan; Hadjizadeh Zaker, Nasser

    2016-11-01

    Gorgan Bay is a semi-enclosed basin located in the southeast of the Caspian Sea in Iran and is an important marine habitat for fish and seabirds. In the present study, the environmental capacity of phosphorus in Gorgan Bay was estimated using a 3D ecological-hydrodynamic numerical model and a linear programming model. The distribution of phosphorus, simulated by the numerical model, was used as an index for the occurrence of eutrophication and to determine the water quality response field of each of the pollution sources. The linear programming model was used to calculate and allocate the total maximum allowable loads of phosphorus to each of the pollution sources in a way that eutrophication be prevented and at the same time maximum environmental capacity be achieved. In addition, the effect of an artificial inlet on the environmental capacity of the bay was investigated. Observations of surface currents in Gorgan Bay were made by GPS-tracked surface drifters to provide data for calibration and verification of numerical modeling. Drifters were deployed at five different points across the bay over a period of 5 days. The results indicated that the annual environmental capacity of phosphorus is approximately 141 t if a concentration of 0.0477 mg/l for phosphorus is set as the water quality criterion. Creating an artificial inlet with a width of 1 km in the western part of the bay would result in a threefold increase in the environmental capacity of the study area.
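
    The load-allocation step described above can be posed as a linear program: maximise the total phosphorus load subject to the simulated concentration at every control point staying below the water-quality criterion. The sketch below assumes a precomputed unit-response matrix from the hydrodynamic model; the file name and units are illustrative, not taken from the paper.

      import numpy as np
      from scipy.optimize import linprog

      # response[i, j]: simulated phosphorus concentration (mg/l) at control point i
      # per unit annual load (t/yr) discharged from source j (assumed precomputed).
      response = np.load("unit_response_matrix.npy")
      c_max = 0.0477                      # mg/l, criterion quoted in the abstract
      n_points, n_sources = response.shape

      # linprog minimises, so maximise the total load with a negative objective.
      result = linprog(c=-np.ones(n_sources),
                       A_ub=response,
                       b_ub=np.full(n_points, c_max),
                       bounds=[(0, None)] * n_sources)
      allowable_loads = result.x          # t/yr allocated to each source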

  4. Estimation of bisphenol A-Human toxicity by 3D cell culture arrays, high throughput alternatives to animal tests.

    PubMed

    Lee, Dong Woo; Oh, Woo-Yeon; Yi, Sang Hyun; Ku, Bosung; Lee, Moo-Yeal; Cho, Yoon Hee; Yang, Mihi

    2016-09-30

    Bisphenol A (BPA) has been widely used for manufacturing polycarbonate plastics and epoxy resins and has been extensively tested in animals to predict human toxicity. In order to reduce the use of animals for toxicity assessment and provide more accurate information on BPA toxicity in humans, we encapsulated Hep3B human hepatoma cells in alginate and cultured them in three dimensions (3D) on a micropillar chip coupled to a panel of metabolic enzymes on a microwell chip. As a result, we were able to assess the toxicity of BPA under various metabolic enzyme conditions using a high-throughput microscale assay; sample volumes were nearly 2,000 times smaller than those required for a 96-well plate. We applied a total of 28 different enzymes to each chip, including 10 cytochrome P450s (CYP450s), 10 UDP-glycosyltransferases (UGTs), 3 sulfotransferases (SULTs), alcohol dehydrogenase (ADH), and aldehyde dehydrogenase 2 (ALDH2). Phase I enzyme mixtures, phase II enzyme mixtures, and a combination of phase I and phase II enzymes were also applied to the chip. BPA toxicity was higher in samples containing CYP2E1 than in controls, which contained no enzymes (IC50, 184±16 μM and 270±25.8 μM, respectively, p<0.01). However, BPA-induced toxicity was alleviated in the presence of ADH (IC50, 337±17.9 μM), ALDH2 (335±13.9 μM), and SULT1E1 (318±17.7 μM) (p<0.05). CYP2E1-mediated cytotoxicity was confirmed by quantifying unmetabolized BPA using HPLC/FD. Therefore, we suggest the present micropillar/microwell chip platform as an effective alternative to animal testing for estimating BPA toxicity via human metabolic systems.

  5. Potential Geophysical Field Transformations and Combined 3D Modelling for Estimation the Seismic Site Effects on Example of Israel

    NASA Astrophysics Data System (ADS)

    Eppelbaum, Lev; Meirova, Tatiana

    2015-04-01


  6. Comparison of 2D and 3D modeled tumor motion estimation/prediction for dynamic tumor tracking during arc radiotherapy.

    PubMed

    Liu, Wu; Ma, Xiangyu; Yan, Huagang; Chen, Zhe; Nath, Ravinder; Li, Haiyun

    2017-03-06

    Many real-time imaging techniques have been developed to localize the target in 3D space or in the 2D beam's eye view (BEV) plane for intrafraction motion tracking in radiation therapy. With tracking system latency, the 3D-modeled method is expected to be more accurate, even in terms of 2D BEV tracking error. No quantitative analysis, however, has been reported. In this study, we simulated co-planar arc deliveries using respiratory motion data acquired from 42 patients to quantitatively compare the accuracy between 2D BEV and 3D-modeled tracking in arc therapy and determine whether 3D information is needed for motion tracking. We used our previously developed low-kV-dose adaptive MV-kV imaging and motion compensation framework as a representative of 3D-modeled methods. It optimizes the balance between additional kV imaging dose and 3D tracking accuracy and solves the MLC blockage issue. With simulated Gaussian marker detection errors (zero mean and 0.39 mm standard deviation) and ~155/310/460 ms tracking system latencies, the mean percentages of time that the target moved >2 mm from the predicted 2D BEV position are 1.1%/4.0%/7.8% and 1.3%/5.8%/11.6% for 3D-modeled and 2D-only tracking, respectively. The corresponding average BEV RMS errors are 0.67/0.90/1.13 mm and 0.79/1.10/1.37 mm. Compared to the 2D method, the 3D method reduced the average RMS unresolved motion along the beam direction from ~3 mm to ~1 mm, resulting on average in only a <1% dosimetric advantage in the depth direction. Only for a small fraction of the patients, when the tracking latency is long, did the 3D-modeled method show a significant improvement in BEV tracking accuracy, indicating a potential dosimetric advantage. However, if the tracking latency is short (~150 ms or less), those improvements are limited. Therefore, 2D BEV tracking has sufficient targeting accuracy for most clinical cases. The 3D technique is, however, still important in solving the MLC blockage problem during 2D BEV tracking.
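
    The latency-related error statistics reported above can be reproduced for a simple baseline tracker that merely reports the position observed one latency earlier (the paper's MV-kV prediction framework is more sophisticated). The sketch below computes the percentage of time the target is more than 2 mm from the reported BEV position and the BEV RMS error; the function name and the naive tracker are assumptions for illustration.

      import numpy as np

      def bev_tracking_stats(traj_bev, dt, latency, threshold=2.0):
          # traj_bev: (N, 2) target positions in the beam's-eye-view plane [mm],
          # sampled every dt seconds; latency given in seconds.
          lag = int(round(latency / dt))
          err = np.linalg.norm(traj_bev[lag:] - traj_bev[:-lag or None], axis=1)
          return {"pct_time_gt_2mm": 100.0 * float(np.mean(err > threshold)),
                  "rms_error_mm": float(np.sqrt(np.mean(err ** 2)))}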

  7. Estimating the risk of cattle exposure to tuberculosis posed by wild deer relative to badgers in England and Wales.

    PubMed

    Ward, Alastair I; Smith, Graham C; Etherington, Thomas R; Delahay, Richard J

    2009-10-01

    Wild deer populations in Great Britain are expanding in range and probably in numbers, and relatively high prevalence of bovine tuberculosis (bTB, caused by infection with Mycobacterium bovis) in deer occurs locally in parts of southwest England. To evaluate the M. bovis exposure risk posed to cattle by wild deer relative to badgers in England and Wales, we constructed and parameterized a quantitative risk model with the use of information from the literature (on deer densities, activity patterns, bTB epidemiology, and pathology) and contemporary data on deer, cattle, and badger (Meles meles) distribution and abundance. The median relative risk score for each of the four deer species studied--red (Cervus elaphus), fallow (Dama dama), and roe (Capreolus capreolus) deer, and muntjac (Muntiacus reevesi)--was lower than unity (the relative risk set for badgers, the putative main wildlife reservoir of M. bovis in England and Wales). However, the 95th percentiles associated with the risk estimates were large, and the upper limits for all four deer species exceeded unity. Although M. bovis exposure risks to cattle from deer at pasture are likely to be lower than those from badgers across most areas of England and Wales where cattle are affected by bTB, because these areas coincide with high-density badger populations but not high-density deer populations, we predict the presence of localized areas where the relative risks posed by deer may be considerable. Moreover, wherever deer are infected, risks to cattle may be additive to those posed by badgers. There are considerable knowledge gaps associated with bTB in deer, badgers, and cattle, and the data available for model parameterization were generally of low quality and high variability; consequently, model outputs were subject to some uncertainty. Improved estimates of the proportion of time that deer of each species spend at pasture, the likelihood and magnitude of M. bovis excretion, and local badger and deer densities appear

  8. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  9. Estimating 3D variation in active-layer thickness beneath arctic streams using ground-penetrating radar

    USGS Publications Warehouse

    Brosten, T.R.; Bradford, J.H.; McNamara, J.P.; Gooseff, M.N.; Zarnetske, J.P.; Bowden, W.B.; Johnston, M.E.

    2009-01-01

    We acquired three-dimensional (3D) ground-penetrating radar (GPR) data across three stream sites on the North Slope, AK, in August 2005, to investigate the dependence of thaw depth on channel morphology. Data were migrated with mean velocities derived from multi-offset GPR profiles collected across a stream section within each of the 3D survey areas. GPR data interpretations from the alluvial-lined stream site illustrate greater thaw depths beneath riffle and gravel bar features relative to neighboring pool features. The peat-lined stream sites indicate the opposite; greater thaw depths beneath pools and shallower thaw beneath the connecting runs. Results provide detailed 3D geometry of active-layer thaw depths that can support hydrological studies seeking to quantify transport and biogeochemical processes that occur within the hyporheic zone.

  10. Extended Kalman filter-based methods for pose estimation using visual, inertial and magnetic sensors: comparative analysis and performance evaluation.

    PubMed

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2013-02-04

    In this paper, measurements from a monocular vision system are fused with inertial/magnetic measurements from an Inertial Measurement Unit (IMU) rigidly connected to the camera. Two Extended Kalman filters (EKFs) were developed to estimate the pose of the IMU/camera sensor moving relative to a rigid scene (ego-motion), based on a set of fiducials. The two filters were identical in terms of the state equation and the measurement equations of the inertial/magnetic sensors. The DLT-based EKF exploited visual estimates of the ego-motion obtained with a variant of the Direct Linear Transformation (DLT) method; the error-driven EKF exploited pseudo-measurements based on the projection errors from measured two-dimensional point features to the corresponding three-dimensional fiducials. The two filters were analyzed off-line under different experimental conditions and compared to a purely IMU-based EKF used for estimating the orientation of the IMU/camera sensor. The DLT-based EKF was more accurate than the error-driven EKF, less robust against loss of visual features, and equivalent in terms of computational complexity. Orientation root mean square errors (RMSEs) of 1° (1.5°) and position RMSEs of 3.5 mm (10 mm) were achieved in our experiments by the DLT-based EKF (error-driven EKF); by contrast, orientation RMSEs of 1.6° were achieved by the purely IMU-based EKF.
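
    The projection-error pseudo-measurements used by the error-driven filter described above can be sketched as follows for a pinhole camera: the 3D fiducials are projected with the current pose estimate and the residuals with respect to the detected 2D features are fed to the filter, which drives them towards zero. The function below is a minimal illustration with assumed names and a simple intrinsics matrix K, not the authors' implementation.

      import numpy as np

      def projection_errors(R, t, K, fiducials_3d, features_2d):
          # R (3x3), t (3,): current camera pose estimate (world -> camera frame).
          cam = fiducials_3d @ R.T + t        # fiducials expressed in the camera frame
          proj = cam @ K.T                    # pinhole projection (homogeneous pixels)
          proj = proj[:, :2] / proj[:, 2:3]   # perspective divide -> pixel coordinates
          # Residuals between detected 2D features and reprojected fiducials,
          # stacked into a single pseudo-measurement vector.
          return (np.asarray(features_2d) - proj).ravel()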

  11. Extended Kalman Filter-Based Methods for Pose Estimation Using Visual, Inertial and Magnetic Sensors: Comparative Analysis and Performance Evaluation

    PubMed Central

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2013-01-01

    In this paper, measurements from a monocular vision system are fused with inertial/magnetic measurements from an Inertial Measurement Unit (IMU) rigidly connected to the camera. Two Extended Kalman filters (EKFs) were developed to estimate the pose of the IMU/camera sensor moving relative to a rigid scene (ego-motion), based on a set of fiducials. The two filters were identical in terms of the state equation and the measurement equations of the inertial/magnetic sensors. The DLT-based EKF exploited visual estimates of the ego-motion obtained with a variant of the Direct Linear Transformation (DLT) method; the error-driven EKF exploited pseudo-measurements based on the projection errors from measured two-dimensional point features to the corresponding three-dimensional fiducials. The two filters were analyzed off-line under different experimental conditions and compared to a purely IMU-based EKF used for estimating the orientation of the IMU/camera sensor. The DLT-based EKF was more accurate than the error-driven EKF, less robust against loss of visual features, and equivalent in terms of computational complexity. Orientation root mean square errors (RMSEs) of 1° (1.5°) and position RMSEs of 3.5 mm (10 mm) were achieved in our experiments by the DLT-based EKF (error-driven EKF); by contrast, orientation RMSEs of 1.6° were achieved by the purely IMU-based EKF. PMID:23385409

  12. Real-Time, Multiple, Pan/Tilt/Zoom, Computer Vision Tracking, and 3D Position Estimating System for Unmanned Aerial System Metrology

    DTIC Science & Technology

    2013-10-18

    area of 3D point estimation of flapping-wing UASs. The benefits of designing and developing such a system are instrumental in researching various...are many benefits to using SIFT in tracking. It detects features that are invariant to image scale and rotation, and are shown to provide robust...provided to estimate background motion for optical flow background subtraction. The experiments with the static background showed minute benefit in

  13. Large-scale three-dimensional measurement via combining 3D scanner and laser rangefinder.

    PubMed

    Shi, Jinlong; Sun, Zhengxing; Bai, Suqin

    2015-04-01

    This paper presents a three-dimensional (3D) measurement method for large-scale objects that integrates a 3D scanner and a laser rangefinder. The 3D scanner, used to perform partial section measurements, is fixed on a robotic arm that can slide on a guide rail. The laser rangefinder, used to compute the poses of the 3D scanner, is rigidly connected to the 3D scanner. During large-scale measurement, after measuring a partial section, the 3D scanner is moved straight forward along the guide rail to measure another section. Meanwhile, the poses of the 3D scanner are estimated according to its moved distance so that the different partial sections can be aligned. The performance and effectiveness are evaluated by experiments.
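
    Because the scanner only translates along the guide rail between sections, its pose update reduces to a pure translation by the rangefinder-measured distance, after which each partial scan can be mapped into a common frame. A minimal sketch under those assumptions (homogeneous 4x4 poses, a known rail direction) is given below; the function names are illustrative.

      import numpy as np

      def scanner_pose(base_pose, moved_distance, rail_direction):
          # base_pose: 4x4 pose of the scanner at the previous section; the
          # orientation is unchanged, only the translation advances along the rail.
          pose = base_pose.copy()
          d = np.asarray(rail_direction, dtype=float)
          pose[:3, 3] += moved_distance * d / np.linalg.norm(d)
          return pose

      def align_section(points, pose):
          # Transform a partial scan (N x 3) into the common world frame.
          return points @ pose[:3, :3].T + pose[:3, 3]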

  14. Mechanistic and quantitative studies of bystander response in 3D tissues for low-dose radiation risk estimations

    SciTech Connect

    Amundson, Sally A.

    2013-06-12

    We have used the MatTek 3-dimensional human skin model to study the gene expression response of a 3D model to low- and high-dose low-LET radiation, and to study the radiation bystander effect as a function of distance from the site of irradiation with either alpha particles or low-LET protons. We have found response pathways that appear to be specific to low-dose exposures and that could not have been predicted from high-dose studies. We also report the time- and distance-dependent expression of a large number of genes in bystander tissue. The bystander response in 3D tissues showed many similarities to that described previously in 2D cultured cells, but also showed some differences.

  15. Detailed 3D representations for object recognition and modeling.

    PubMed

    Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad

    2013-11-01

    Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.

  16. An investigation into multi-dimensional prediction models to estimate the pose error of a quadcopter in a CSP plant setting

    NASA Astrophysics Data System (ADS)

    Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann

    2016-05-01

    The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.

  17. Monocular 3-D gait tracking in surveillance scenes.

    PubMed

    Rogez, Grégory; Rihan, Jonathan; Guerrero, Jose J; Orrite, Carlos

    2014-06-01

    Gait recognition can potentially provide a noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems will suffer in real surveillance scenarios with multiple interacting individuals and where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene's ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and is matched in the image to estimate its likelihood. Our framework is able to successfully track 3-D human walking poses in a 3-D environment while exploring only a 4-D state space. In our experimental evaluation, we demonstrate the significant improvements of the homographic alignment over a commonly used similarity transformation and provide quantitative pose tracking results for the monocular sequences with a high perspective effect from the CAVIAR dataset.

  18. Evaluation of 1D, 2D and 3D nodule size estimation by radiologists for spherical and non-spherical nodules through CT thoracic phantom imaging

    NASA Astrophysics Data System (ADS)

    Petrick, Nicholas; Kim, Hyun J. Grace; Clunie, David; Borradaile, Kristin; Ford, Robert; Zeng, Rongping; Gavrielides, Marios A.; McNitt-Gray, Michael F.; Fenimore, Charles; Lu, Z. Q. John; Zhao, Binsheng; Buckler, Andrew J.

    2011-03-01

    The purpose of this work was to estimate the bias in measuring the size of spherical and non-spherical lesions by radiologists using three sizing techniques under a variety of simulated lesion and reconstruction slice thickness conditions. We designed a reader study in which six radiologists estimated the size of 10 synthetic nodules of various sizes, shapes and densities embedded within a realistic anthropomorphic thorax phantom from CT scan data. In this manuscript we report preliminary results for the first four readers (Readers 1-4). Two repeat CT scans of the phantom containing each nodule were acquired using a Philips 16-slice scanner at 0.8 and 5 mm slice thicknesses. The readers measured the sizes of all nodules in each of the 40 resulting scans (10 nodules × 2 slice thicknesses × 2 repeat scans) using three sizing techniques (1D: longest in-slice dimension; 2D: area from the longest in-slice dimension and the corresponding longest perpendicular dimension; 3D: semi-automated volume) in each of 2 reading sessions. The normalized size was estimated for each sizing method and an inter-comparison of bias among methods was performed. The overall relative biases (standard deviations) of the 1D, 2D and 3D methods for the four-reader subset (Readers 1-4) were -13.4 (20.3), -15.3 (28.4) and 4.8 (21.2) percentage points, respectively. The relative bias of the 3D volume sizing method was statistically lower than that of either the 1D or 2D method (p<0.001 for 1D vs. 3D and 2D vs. 3D).
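
    The relative bias figures quoted above are percentage errors of the normalized size with respect to the known nodule size; a plausible form of that computation (the study's exact normalization may differ) is sketched below.

      import numpy as np

      def relative_bias_percent(measured_sizes, true_size):
          # Percentage-point error of each measurement relative to the true size,
          # summarized by its mean (bias) and standard deviation.
          errors = 100.0 * (np.asarray(measured_sizes, dtype=float) - true_size) / true_size
          return errors.mean(), errors.std(ddof=1)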

  19. The 2011 Eco3D Flight Campaign: Vegetation Structure and Biomass Estimation from Simultaneous SAR, Lidar and Radiometer Measurements

    NASA Technical Reports Server (NTRS)

    Fatoyinbo, Temilola; Rincon, Rafael; Harding, David; Gatebe, Charles; Ranson, Kenneth Jon; Sun, Guoqing; Dabney, Phillip; Roman, Miguel

    2012-01-01

    The Eco3D campaign was conducted in the Summer of 2011. As part of the campaign three unique and innovative NASA Goddard Space Flight Center airborne sensors were flown simultaneously: The Digital Beamforming Synthetic Aperture Radar (DBSAR), the Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) and the Cloud Absorption Radiometer (CAR). The campaign covered sites from Quebec to Southern Florida and thereby acquired data over forests ranging from Boreal to tropical wetlands. This paper describes the instruments and sites covered and presents the first images resulting from the campaign.

  20. Alignment of continuous video onto 3D point clouds.

    PubMed

    Zhao, Wenyi; Nister, David; Hsu, Steve

    2005-08-01

    We propose a general framework for aligning continuous (oblique) video onto 3D sensor data. We align a point cloud computed from the video onto the point cloud directly obtained from a 3D sensor. This is in contrast to existing techniques where the 2D images are aligned to a 3D model derived from the 3D sensor data. Using point clouds enables the alignment for scenes full of objects that are difficult to model; for example, trees. To compute 3D point clouds from video, motion stereo is used along with a state-of-the-art algorithm for camera pose estimation. Our experiments with real data demonstrate the advantages of the proposed registration algorithm for texturing models in large-scale semiurban environments. The capability to align video before a 3D model is built from the 3D sensor data offers new practical opportunities for 3D modeling. We introduce a novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video. Initial experiments with real data illustrate the potential of the proposed approach.

  1. Structure-From-Motion in 3D Space Using 2D Lidars

    PubMed Central

    Choi, Dong-Geol; Bok, Yunsu; Kim, Jun-Sik; Shim, Inwook; Kweon, In So

    2017-01-01

    This paper presents a novel structure-from-motion methodology using 2D lidars (Light Detection And Ranging). In 3D space, 2D lidars do not provide sufficient information for pose estimation. For this reason, additional sensors have been used along with the lidar measurement. In this paper, we use a sensor system that consists of only 2D lidars, without any additional sensors. We propose a new method of estimating both the 6D pose of the system and the surrounding 3D structures. We compute the pose of the system using line segments of scan data and their corresponding planes. After discarding the outliers, both the pose and the 3D structures are refined via nonlinear optimization. Experiments with both synthetic and real data show the accuracy and robustness of the proposed method. PMID:28165372

  2. Reconstruction of the ionospheric 3D electron density distribution by assimilation of ionosonde measurements and operational TEC estimations

    NASA Astrophysics Data System (ADS)

    Gerzen, Tatjana; Wilken, Volker; Jakowski, Norbert; Hoque, Mainul M.

    2013-04-01

    New methods to generate maps of the F2 layer peak electron density of the ionosphere (NmF2) and to reconstruct the ionospheric 3D electron density distribution will be presented. For validation, reconstructed NmF2 maps will be compared with peak electron density measurements from independent ionosonde stations. The ionosphere is the ionized part of the upper Earth's atmosphere lying between about 50 km and 1000 km above the Earth's surface. From the applications perspective, the electron density, Ne, is certainly one of the most important parameters of the ionosphere because of its strong impact on radio signal propagation. Especially the critical frequency, foF2, which is related to the F2 layer peak electron density, NmF2, according to the equation NmF2 [m^-3] = 1.24 × 10^10 (foF2 [MHz])^2, and which sets the lower limit for the maximum usable frequency (MUF), is of particular interest with regard to HF radio communication applications. In a first-order approximation, the ionospheric delay of transionospheric radio waves of frequency f is proportional to 1/f^2 and to the integral of the electron density (total electron content - TEC) along the ray path. Thus, information about the total electron content along the receiver-to-satellite ray path can be obtained from the dual-frequency measurements permanently transmitted by GNSS satellites. As the database for our reconstruction approaches, we use the vertical sounding measurements of the ionosonde stations providing foF2 and the TEC maps routinely generated in SWACI (http://swaciweb.dlr.de) at DLR Neustrelitz. The basic concept of our approach is the following: To reconstruct NmF2 maps we assimilate the ionosonde data into the global Neustrelitz F2 layer Peak electron Density Model (NPDM) by means of a successive corrections method. The TEC maps are produced by assimilating actual ground-based GPS measurements providing TEC into an operational version of the Neustrelitz TEC Model (NTCM). Finally, the derived NmF2 and TEC maps in
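
    The foF2-NmF2 relation quoted above is easy to apply in both directions; a small helper under that formula (frequencies in MHz, densities in m^-3) is shown below.

      def nmf2_from_fof2(fof2_mhz):
          # Peak electron density [m^-3] from the critical frequency foF2 [MHz].
          return 1.24e10 * fof2_mhz ** 2

      def fof2_from_nmf2(nmf2):
          # Critical frequency [MHz] from the peak electron density [m^-3].
          return (nmf2 / 1.24e10) ** 0.5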

  3. Simultaneous estimation of size, radial and angular locations of a malignant tumor in a 3-D human breast - A numerical study.

    PubMed

    Das, Koushik; Mishra, Subhash C

    2015-08-01

    This article reports a numerical study pertaining to simultaneous estimation of the size, radial location and angular location of a malignant tumor in a 3-D human breast. The breast skin surface temperature profile is specific to a tumor of a specific size and location. The temperature profiles are always Gaussian, though their peak magnitudes and areas differ according to the size and location of the tumor. The temperature profiles are obtained by solving the Pennes bioheat equation using the finite-element-based solver COMSOL 4.3a. With the temperature profiles known, simultaneous estimation of the size, radial location and angular location of the tumor is done using the curve fitting method. The effect of measurement errors is also included in the study. The estimates are accurate, and since the curve-fitting method does not require solution of the governing bioheat equation in the inverse analysis, the estimation is very fast.
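
    Since the skin-surface temperature profiles are Gaussian, the forward step of the curve-fitting approach amounts to fitting a Gaussian to the measured profile and relating its parameters to the tumor size and location. The sketch below fits such a profile with scipy; the synthetic data, parameter values and the final lookup from fitted parameters to tumor geometry are assumptions for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(x, peak, center, width):
          # Temperature elevation above the baseline skin temperature along one line.
          return peak * np.exp(-(x - center) ** 2 / (2.0 * width ** 2))

      # Hypothetical measured profile: position x in mm, delta_t in degrees C.
      x = np.linspace(-40.0, 40.0, 81)
      delta_t = gaussian(x, 0.35, 5.0, 12.0) + 0.01 * np.random.randn(x.size)

      (peak, center, width), _ = curve_fit(gaussian, x, delta_t, p0=(0.1, 0.0, 10.0))
      # peak, width and center would then be mapped to the tumor size, radial and
      # angular location via forward bioheat simulations (not shown here).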

  4. The 2D versus 3D imaging trade-off: The impact of over- or under-estimating small throats for simulating permeability in porous media

    NASA Astrophysics Data System (ADS)

    Peters, C. A.; Crandell, L. E.; Um, W.; Jones, K. W.; Lindquist, W. B.

    2011-12-01

    Geochemical reactions in the subsurface can alter the porosity and permeability of a porous medium through mineral precipitation and dissolution. While effects on porosity are relatively well understood, changes in permeability are more difficult to estimate. In this work, pore-network modeling is used to estimate the permeability of a porous medium using pore and throat size distributions. These distributions can be determined from 2D Scanning Electron Microscopy (SEM) images of thin sections or from 3D X-ray Computed Tomography (CT) images of small cores. Each method has unique advantages as well as unique sources of error. 3D CT imaging has the advantage of reconstructing a 3D pore network without the inherent geometry-based biases of 2D images but is limited by resolutions around 1 μm. 2D SEM imaging has the advantage of higher resolution, and the ability to examine sub-grain scale variations in porosity and mineralogy, but is limited by the small size of the sample of pores that are quantified. A pore network model was created to estimate flow permeability in a sand-packed experimental column investigating reaction of sediments with caustic radioactive tank wastes in the context of the Hanford, WA site. Before, periodically during, and after reaction, 3D images of the porous medium in the column were produced using the X2B beam line facility at the National Synchrotron Light Source (NSLS) at Brookhaven National Lab. These images were interpreted using 3DMA-Rock to characterize the pore and throat size distributions. After completion of the experiment, the column was sectioned and imaged using 2D SEM in backscattered electron mode. The 2D images were interpreted using erosion-dilation to estimate the pore and throat size distributions. A bias correction was determined by comparison with the 3D image data. A special image processing method was developed to infer the pore space before reaction by digitally removing the precipitate. The different sets of pore

  5. Estimation of local stresses and elastic properties of a mortar sample by FFT computation of fields on a 3D image

    SciTech Connect

    Escoda, J.; Willot, F.; Jeulin, D.; Sanahuja, J.; Toulemonde, C.

    2011-05-15

This study concerns the prediction of the elastic properties of a 3D mortar image, obtained by micro-tomography, using a combined image segmentation and numerical homogenization approach. The microstructure is obtained by segmentation of the 3D image into aggregates, voids and cement paste. Full-field computations of the elastic response of mortar are undertaken using the Fast Fourier Transform method. Emphasis is placed on highly contrasted properties between aggregates and matrix, to anticipate needs for creep or damage computation. The representative volume element, i.e. the volume size necessary to compute the effective properties with a prescribed accuracy, is given. Overall, the volumes used in this work were sufficient to estimate the effective response of mortar with a precision of 5%, 6% and 10% for contrast ratios of 100, 1000 and 10,000, respectively. Finally, a statistical and local characterization of the component of the stress field parallel to the applied loading is carried out.

  6. Quantitative estimation of 3-D fiber course in gross histological sections of the human brain using polarized light.

    PubMed

    Axer, H; Axer, M; Krings, T; Keyserlingk, D G

    2001-02-15

Series of polarized light images can be used to achieve quantitative estimates of the angles of inclination (z-direction) and direction (in xy-plane) of central nervous fibers in histological sections of the human brain. (1) The corpus callosum of a formalin-fixed human brain was sectioned at different angles of inclination of nerve fibers and at different thicknesses of the samples. The minimum and maximum intensities, and their differences, revealed a linear relationship to the angle of inclination of fibers. It was demonstrated that sections with a thickness of 80-120 μm are best suited for estimating the angle of inclination. (2) Afterwards, the optic tracts of eight formalin-fixed human brains were sliced at different angles of fiber inclination at 100 μm. Measurements of intensity in 30 pixels in each section were used to calculate a linear calibration function. The maximum intensities and the differences between maximum and minimum values measured with two polars only were best suited for estimation of fiber inclination. (3) Gross histological brain slices of formalin-fixed human brains were digitized under azimuths from 0 to 80 degrees using two polars only. These sequences were used to estimate the inclination of fibers (in z-direction). The same slices were digitized under azimuths from 0 to 160 degrees in steps of 20 degrees, additionally using a quarter-wave plate. These sequences were used to estimate the direction of the fibers in the xy-direction. The method can be used to produce maps of fiber orientation in gross histological sections of the human brain similar to the fiber orientation maps derived by diffusion-weighted magnetic resonance imaging.
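    An illustrative sketch of the calibration step mentioned above: the abstract reports a linear relationship between the measured polarized-light intensity values and the fiber inclination angle, so a linear calibration function can be fitted by least squares. The numerical values below are invented for demonstration.

```python
import numpy as np

inclination_deg = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)
max_intensity = np.array([210, 198, 172, 140, 104, 78, 62], dtype=float)  # toy data

slope, intercept = np.polyfit(max_intensity, inclination_deg, deg=1)

def estimate_inclination(intensity: float) -> float:
    """Map a measured maximum intensity to an estimated inclination angle."""
    return slope * intensity + intercept

print(estimate_inclination(150.0))   # interpolated inclination for a new pixel
```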

  7. 3d morphometric analysis of lunar impact craters: a tool for degradation estimates and interpretation of maria stratigraphy

    NASA Astrophysics Data System (ADS)

    Vivaldi, Valerio; Massironi, Matteo; Ninfo, Andrea; Cremonese, Gabriele

    2015-04-01

In this study we have applied 3D morphometric analysis to impact craters on the Moon by means of high-resolution DTMs derived from LROC (Lunar Reconnaissance Orbiter Camera) NAC (Narrow Angle Camera) (0.5 to 1.5 m/pixel). The objective is twofold: i) evaluating crater degradation and ii) exploring the potential of this approach for Maria stratigraphic interpretation. In relation to the first objective we have considered several craters with different diameters representative of the four classes of degradation, with C1 being the freshest and C4 the most degraded (Arthur et al., 1963; Wilhelms, 1987). DTMs of these craters were elaborated according to a multiscalar approach (Wood, 1996) by testing different ranges of kernel sizes (e.g. 15-35-50-75-100), in order to retrieve morphometric variables such as slope, curvatures and openness. In particular, curvatures were calculated along different planes (e.g. profile curvature and plan curvature) and used to characterize the different sectors of a crater (rim crest, floor, internal slope and related boundaries), enabling us to evaluate its degradation. The gradient of the internal slope of different craters representative of the four classes shows a decrease of the slope mean value from C1 to C4 in relation to crater age and diameter. Indeed, degradation is influenced by gravitational processes (landslides, dry flows), as well as space weathering that induces both smoothing effects on the morphologies and infilling processes within the crater, with the main results being a lowered and enlarged rim crest and a shallower crater. As far as the stratigraphic application is concerned, morphometric analysis was applied to recognize morphologic features within some simple craters, in order to understand the stratigraphic relationships among different lava layers within Mare Serenitatis. A clear-cut rheological boundary at a depth of 200 m within the small fresh Linnè crater (diameter: 2.22 km), firstly hypothesized

  8. Dental wear estimation using a digital intra-oral optical scanner and an automated 3D computer vision method.

    PubMed

    Meireles, Agnes Batista; Vieira, Antonio Wilson; Corpas, Livia; Vandenberghe, Bart; Bastos, Flavia Souza; Lambrechts, Paul; Campos, Mario Montenegro; Las Casas, Estevam Barbosa de

    2016-01-01

    The objective of this work was to propose an automated and direct process to grade tooth wear intra-orally. Eight extracted teeth were etched with acid for different times to produce wear and scanned with an intra-oral optical scanner. Computer vision algorithms were used for alignment and comparison among models. Wear volume was estimated and visual scoring was achieved to determine reliability. Results demonstrated that it is possible to directly detect submillimeter differences in teeth surfaces with an automated method with results similar to those obtained by direct visual inspection. The investigated method proved to be reliable for comparison of measurements over time.

  9. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system

    SciTech Connect

Lee, J.; Yun, G. S.; Lee, J. E.; Kim, M.; Choi, M. J.; Lee, W.; Park, H. K.; Domier, C. W.; Luhmann, N. C.; Sabbagh, S. A.; Park, Y. S.; Lee, S. G.; Bak, J. G.

    2014-06-15

A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.

  10. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system

    NASA Astrophysics Data System (ADS)

    Lee, J.; Yun, G. S.; Lee, J. E.; Kim, M.; Choi, M. J.; Lee, W.; Park, H. K.; Domier, C. W.; Luhmann, N. C.; Sabbagh, S. A.; Park, Y. S.; Lee, S. G.; Bak, J. G.

    2014-06-01

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.

  11. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system.

    PubMed

    Lee, J; Yun, G S; Lee, J E; Kim, M; Choi, M J; Lee, W; Park, H K; Domier, C W; Luhmann, N C; Sabbagh, S A; Park, Y S; Lee, S G; Bak, J G

    2014-06-01

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.

  12. Virtual forensic entomology: improving estimates of minimum post-mortem interval with 3D micro-computed tomography.

    PubMed

    Richards, Cameron S; Simonsen, Thomas J; Abel, Richard L; Hall, Martin J R; Schwyn, Daniel A; Wicklein, Martina

    2012-07-10

We demonstrate how micro-computed tomography (micro-CT) can be a powerful tool for describing internal and external morphological changes in Calliphora vicina (Diptera: Calliphoridae) during metamorphosis. Pupae were sampled during the 1st, 2nd, 3rd and 4th quarter of development after the onset of pupariation at 23 °C, and placed directly into 80% ethanol for preservation. In order to find the optimal contrast, four batches of pupae were treated differently: batch one was stained in 0.5M aqueous iodine for 1 day; two for 7 days; three was tagged with a radiopaque dye; four was left unstained (control). Pupae stained for 7 days in iodine yielded the best-contrast micro-CT scans. The scans were of sufficiently high spatial resolution (17.2 μm) to visualise the internal morphology of developing pharate adults at all four ages. A combination of external and internal morphological characters was shown to have the potential to estimate the age of blowfly pupae with a higher degree of accuracy and precision than using external morphological characters alone. Age-specific developmental characters are described. The technique could be used as a measure to estimate a minimum post-mortem interval in cases of suspicious death where pupae are the oldest stages of insect evidence collected.

  13. 3D dosimetry estimation for selective internal radiation therapy (SIRT) using SPECT/CT images: a phantom study

    NASA Astrophysics Data System (ADS)

    Debebe, Senait A.; Franquiz, Juan; McGoron, Anthony J.

    2015-03-01

Selective Internal Radiation Therapy (SIRT) is a common way to treat liver cancer that cannot be treated surgically. SIRT involves administration of Yttrium-90 (90Y) microspheres via the hepatic artery after a diagnostic procedure using 99mTechnetium (99mTc)-macroaggregated albumin (MAA) to detect extrahepatic shunting to the lung or the gastrointestinal tract. Accurate quantification of the radionuclide administered to patients and the radiation dose absorbed by different organs is of importance in SIRT. Accurate dosimetry for SIRT allows optimization of dose delivery to the target tumor and may allow assessment of the efficacy of the treatment. In this study, we proposed a method that can efficiently estimate radiation absorbed dose from 90Y bremsstrahlung SPECT/CT images of the liver and the surrounding organs. Bremsstrahlung radiation from 90Y was simulated using the Compton window of 99mTc (78 keV at 57%). 99mTc images acquired at the photopeak energy window were used as a standard to examine the accuracy of dosimetry prediction by the simulated bremsstrahlung images. A Liqui-Phil abdominal phantom with liver, stomach and two tumor inserts was imaged using a Philips SPECT/CT scanner. The Dose Point Kernel convolution method was used to find the radiation absorbed dose at a voxel level for a three-dimensional dose distribution. This method will allow for a complete estimate of the distribution of radiation absorbed dose by tumors, liver, stomach and other surrounding organs at the voxel level. The approach provides a quantitative predictive method for SIRT treatment outcome and administered dose response for patients who undergo the treatment.
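    A minimal sketch of the Dose Point Kernel (DPK) convolution mentioned above: a 3D activity map (as obtained from SPECT/CT) is convolved with a dose kernel to yield a voxel-wise absorbed-dose map. The isotropic Gaussian kernel used here is only a placeholder for a true 90Y DPK, and all sizes and units are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

# toy cumulated-activity map (arbitrary units) with a hot "tumor" region
activity = np.zeros((64, 64, 64))
activity[28:36, 28:36, 30:34] = 1.0

# placeholder dose kernel: normalized 3D Gaussian on a small grid
r = np.arange(-5, 6)
zz, yy, xx = np.meshgrid(r, r, r, indexing='ij')
kernel = np.exp(-(xx**2 + yy**2 + zz**2) / (2.0 * 1.5**2))
kernel /= kernel.sum()

dose = fftconvolve(activity, kernel, mode='same')   # voxel-level dose estimate
print("peak voxel dose (a.u.):", dose.max())
```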

  14. In vitro quantification of the performance of model-based mono-planar and bi-planar fluoroscopy for 3D joint kinematics estimation.

    PubMed

    Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita

    2013-03-01

Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact joint kinematics with different performance/cost trade-offs. The aim of this study was to compare the performances of mono- and bi-planar setups to a marker-based gold standard during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups proved critical in quantifying the out-of-plane translation (error < 6.5 mm), and bi-planar setups in quantifying the rotation about the bone longitudinal axis (error < 1.3°). These errors propagated to joint angles and translations differently depending on the alignment of the anatomical axes and the fluoroscopic reference frames. Internal-external rotation was the least accurate angle both with mono- (error < 4.4°) and bi-planar (error < 1.7°) setups, due to bone longitudinal symmetries. Results highlighted that accuracy for mono-planar in-plane pose parameters is comparable to bi-planar, but with halved computational costs, halved segmentation time and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty that is differently propagated to relative kinematics depending on the setup. To reap its full benefits, the motion task to be investigated should be designed to keep the joint inside the visible volume, which introduces constraints with respect to mono-planar analysis.

  15. Estimating 3D L5/S1 moments and ground reaction forces during trunk bending using a full-body ambulatory inertial motion capture system.

    PubMed

    Faber, G S; Chang, C C; Kingma, I; Dennerlein, J T; van Dieën, J H

    2016-04-11

Inertial motion capture (IMC) systems have become increasingly popular for ambulatory movement analysis. However, few studies have attempted to use these measurement techniques to estimate kinetic variables, such as joint moments and ground reaction forces (GRFs). Therefore, we investigated the performance of a full-body ambulatory IMC system in estimating 3D L5/S1 moments and GRFs during symmetric, asymmetric and fast trunk bending, performed by nine male participants. Using an ambulatory IMC system (Xsens/MVN), L5/S1 moments were estimated based on the upper-body segment kinematics using a top-down inverse dynamics analysis, and GRFs were estimated based on full-body segment accelerations. As a reference, a laboratory measurement system was utilized: GRFs were measured with Kistler force plates (FPs), and L5/S1 moments were calculated using a bottom-up inverse dynamics model based on FP data and lower-body kinematics measured with an optical motion capture system (OMC). Correspondence between the OMC+FP and IMC systems was quantified by calculating root-mean-square errors (RMSerrors) of moment/force time series and the intraclass correlation (ICC) of the absolute peak moments/forces. Averaged over subjects, L5/S1 moment RMSerrors remained below 10Nm (about 5% of the peak extension moment) and 3D GRF RMSerrors remained below 20N (about 2% of the peak vertical force). ICCs were high for the peak L5/S1 extension moment (0.971) and vertical GRF (0.998). Due to lower amplitudes, smaller ICCs were found for the peak asymmetric L5/S1 moments (0.690-0.781) and horizontal GRFs (0.559-0.948). In conclusion, close correspondence was found between the ambulatory IMC-based and laboratory-based estimates of back load.
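    A hedged sketch of the correspondence metrics used above: the RMS error between laboratory (OMC+FP) and ambulatory (IMC) time series, and a simple ICC estimate for the paired peak values. The two-way random, single-measure ICC formula below is a common textbook form and an assumption about which ICC variant was used; the toy data are invented.

```python
import numpy as np

def rms_error(reference: np.ndarray, estimate: np.ndarray) -> float:
    return float(np.sqrt(np.mean((reference - estimate) ** 2)))

def icc_2_1(x: np.ndarray, y: np.ndarray) -> float:
    """Two-way random, single-measure ICC for paired peak values."""
    data = np.column_stack([x, y])
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)
    ss_err = np.sum((data - data.mean(axis=1, keepdims=True)
                     - data.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(2)
peaks_lab = rng.normal(200.0, 30.0, 9)           # e.g. peak extension moments (Nm)
peaks_imc = peaks_lab + rng.normal(0.0, 5.0, 9)  # ambulatory estimates
print(rms_error(peaks_lab, peaks_imc), icc_2_1(peaks_lab, peaks_imc))
```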

  16. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex task and rather time-consuming, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is the depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in a three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, optimizing the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel and advanced technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar arrangement of four infrared LEDs with known, exact positions, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-value of a 3D volume, or computes the depth information from a 2D image, to provide a real 3D experience without the need for special glasses. Within this paper we present a new framework for manipulation and annotation of medical landmarks directly in a three-dimensional volume.
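    A minimal sketch, under assumed values, of the pose-estimation step described above: four non-coplanar IR LEDs with known 3D positions are detected as 2D points by the tracking camera, and a standard PnP solver recovers the device pose. The LED coordinates, camera intrinsics, 2D detections, and the use of OpenCV's EPnP solver are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
import cv2

# known non-coplanar LED positions in the device frame (millimetres)
led_3d = np.array([[0, 0, 0], [80, 0, 0], [0, 60, 0], [40, 30, 25]], dtype=np.float64)

# hypothetical 2D detections reported by the IR camera (pixels)
led_2d = np.array([[512, 384], [620, 380], [515, 470], [570, 430]], dtype=np.float64)

# assumed pinhole intrinsics of the tracking camera
camera_matrix = np.array([[1300.0, 0.0, 512.0],
                          [0.0, 1300.0, 384.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# EPnP handles the minimal 4-point case used here
ok, rvec, tvec = cv2.solvePnP(led_3d, led_2d, camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_EPNP)
print("rotation vector:", rvec.ravel(), "translation (mm):", tvec.ravel())
```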

  17. High-resolution 3D seismic reflection imaging across active faults and its impact on seismic hazard estimation in the Tokyo metropolitan area

    NASA Astrophysics Data System (ADS)

    Ishiyama, Tatsuya; Sato, Hiroshi; Abe, Susumu; Kawasaki, Shinji; Kato, Naoko

    2016-10-01

We collected and interpreted high-resolution 3D seismic reflection data across a hypothesized fault scarp, along the largest active fault that could generate hazardous earthquakes in the Tokyo metropolitan area. The processed and interpreted 3D seismic cube, linked with nearby borehole stratigraphy, suggests that a monocline that deforms lower Pleistocene units is unconformably overlain by middle Pleistocene conglomerates. Judging from structural patterns and vertical separation on the lower-middle Pleistocene units and the ground surface, the hypothesized scarp was interpreted as a terrace riser rather than as a manifestation of late Pleistocene structural growth resulting from repeated fault activity. Devastating earthquake scenarios had been predicted along the fault in question based on its proximity to the metropolitan area; however, our new results lead to a significant decrease in estimated fault length and consequently in the estimated magnitude of future earthquakes associated with reactivation. This suggests a greatly reduced seismic hazard in the Tokyo metropolitan area from earthquakes generated by active intraplate crustal faults.

  18. Estimation of gas-hydrate distribution from 3-D seismic data in a small area of the Ulleung Basin, East Sea

    NASA Astrophysics Data System (ADS)

    Yi, Bo-Yeon; Kang, Nyeon-Keon; Yoo, Dong-Geun; Lee, Gwang-Hoon

    2014-05-01

    We estimated the gas-hydrate resource in a small (5 km x 5 km) area of the Ulleung Basin, East Sea from 3-D seismic and well-log data together with core measurement data, using seismic inversion and multi-attribute transform techniques. Multi-attribute transform technique finds the relationship between measured logs and the combination of the seismic attributes and various post-stack and pre-stack attributes computed from inversion. First, the gas-hydrate saturation and S-wave velocity at the wells were estimated from the simplified three-phase Biot-type equation (STPBE). The core X-ray diffraction data were used to compute the elastic properties of solid components of sediment, which are the key input parameters to the STPBE. Next, simultaneous pre-stack inversion was carried out to obtain P-wave impedance, S-wave impedance, density and lambda-mu-rho attributes. Then, the porosity and gas-hydrate saturation of 3-D seismic volume were predicted from multi-attribute transform. Finally, the gas-hydrate resource was computed by the multiplication of the porosity and gas-hydrate saturation volumes.

  19. 3D Hand Pose Reconstruction Using Specialized Mappings

    DTIC Science & Technology

    2001-04-01

specialized functions. Our algorithm could be used as a front end in several gesture recognition applications that take the hand configuration as... interpretation of real-time optical flow for gesture recognition. In Face and Gesture Recognition, pages 416–421, 1998. [4] T.J. Darrell, I.A. Essa, and... regression splines. The Annals of Statistics, 19, 1-141, 1991. [7] M. Fröhlich and I. Wachsmuth. Gesture recognition of the upper limbs: From signal

  20. 3-D Object Pose Determination Using Complex EGI

    DTIC Science & Technology

    1990-10-01

...dodecahedron (tesselated pentakis dodecahedron) as shown in Fig. 4.1. The normal direction space is discretized into 240 cells as well. The CEGI weights are... deviation of the error distribution. (Figure 4.1: Tesselated pentakis dodecahedron; Figure 4.2: First composite object used for testing)

  1. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework could track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  2. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

In our paper we demonstrate the development of an Android Application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices, and can thereafter directly be calibrated on that device using standard calibration algorithms of photogrammetry and computer vision. Due to still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  3. Estimation of Effective Transmission Loss Due to Subtropical Hydrometeor Scatters using a 3D Rain Cell Model for Centimeter and Millimeter Wave Applications

    NASA Astrophysics Data System (ADS)

    Ojo, J. S.; Owolawi, P. A.

    2014-12-01

The problem of hydrometeor scattering on microwave radio communication downlinks continues to be of interest as the number of ground and earth-space terminals continually grows. The interference resulting from hydrometeor scattering usually leads to a reduction in the signal-to-noise ratio (SNR) at the affected terminal and at worst can end in total link outage. In this paper, an attempt has been made to compute the effective transmission loss due to subtropical hydrometeors on vertically polarized signals in Earth-satellite propagation paths at the Ku, Ka and V band frequencies based on the modified Capsoni 3D rain cell model. The 3D rain cell model has been adopted and modified using the subtropical log-normal distributions of raindrop sizes and introducing the equivalent path length through rain in the estimation of the attenuation, instead of the usual specific attenuation, in order to account for the attenuation of both wanted and unwanted paths to the receiver. Co-channel interference at the same frequency is very prone to a higher amount of unwanted signal at the elevations considered. The importance of joint transmission is also considered.

  4. Three-dimensional (3D) coseismic deformation map produced by the 2014 South Napa Earthquake estimated and modeled by SAR and GPS data integration

    NASA Astrophysics Data System (ADS)

    Polcari, Marco; Albano, Matteo; Fernández, José; Palano, Mimmo; Samsonov, Sergey; Stramondo, Salvatore; Zerbini, Susanna

    2016-04-01

In this work we present a 3D map of coseismic displacements due to the 2014 Mw 6.0 South Napa earthquake, California, obtained by integrating displacement information from SAR Interferometry (InSAR), Multiple Aperture Interferometry (MAI), Pixel Offset Tracking (POT) and GPS data acquired by both permanent stations and campaign sites. This seismic event produced significant surface deformation along the 3D components, causing damage to vineyards, roads and houses. The remote sensing results, i.e. InSAR, MAI and POT, were obtained from the pair of SAR images provided by the Sentinel-1 satellite, launched on April 3rd, 2014. They were acquired on August 7th and 31st along descending orbits with an incidence angle of about 23°. The GPS dataset includes measurements from 32 stations belonging to the Bay Area Regional Deformation Network (BARDN), 301 continuous stations available from the UNAVCO and the CDDIS archives, and 13 additional campaign sites from Barnhart et al., 2014 [1]. These data constrain the horizontal and vertical displacement components, proving to be helpful for the adopted integration method. We exploit Bayesian theory to search for the 3D coseismic displacement components. In particular, for each point, we construct an energy function and solve the problem to find a global minimum. Experimental results are consistent with a strike-slip fault mechanism with an approximately NW-SE fault plane. Indeed, the 3D displacement map shows a strong North-South (NS) component, peaking at about 15 cm, a few kilometers from the epicenter. The East-West (EW) displacement component reaches its maximum (~10 cm) south of the city of Napa, whereas the vertical one (UP) is smaller, although a subsidence on the order of 8 cm on the east side of the fault can be observed. Source modelling was performed by inverting the estimated displacement components. The best-fitting model is given by a ~N330°E-oriented and ~70° dipping fault with a prevailing

  5. Fully automated 2D-3D registration and verification.

    PubMed

    Varnavas, Andreas; Carrell, Tom; Penney, Graeme

    2015-12-01

Clinical application of 2D-3D registration technology often requires a significant amount of human interaction during initialisation and result verification. This is one of the main barriers to more widespread clinical use of this technology. We propose novel techniques for automated initial pose estimation of the 3D data and verification of the registration result, and show how these techniques can be combined to enable fully automated 2D-3D registration, particularly in the case of a vertebra-based system. The initialisation method is based on preoperative computation of 2D templates over a wide range of 3D poses. These templates are used to apply the Generalised Hough Transform to the intraoperative 2D image, and the sought 3D pose is selected with the combined use of the generated accumulator arrays and a Gradient Difference Similarity Measure. On the verification side, two algorithms are proposed: one using normalised features based on the similarity value and the other based on the pose agreement between multiple vertebra-based registrations. The proposed methods are employed here for CT to fluoroscopy registration and are trained and tested with data from 31 clinical procedures with 417 low-dose (i.e. low-quality, high-noise) interventional fluoroscopy images. When similarity-value-based verification is used, the fully automated system achieves a 95.73% correct registration rate, whereas a "no registration" result is produced for the remaining 4.27% of cases (i.e. the incorrect registration rate is 0%). The system also automatically detects input images outside its operating range.

  6. Robust extrapolation scheme for fast estimation of 3D ising field partition functions: application to within-subject fMRI data analysis.

    PubMed

    Risser, Laurent; Vincent, Thomas; Ciuciu, Philippe; Idier, Jérôme

    2009-01-01

    In this paper, we present a fast numerical scheme to estimate Partition Functions (PF) of 3D Ising fields. Our strategy is applied to the context of the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated regions and estimate region-dependent hemodynamic filters. For any region, a specific binary Markov random field may embody spatial correlation over the hidden states of the voxels by modeling whether they are activated or not. To make this spatial regularization fully adaptive, our approach is first based upon a classical path-sampling method to approximate a small subset of reference PFs corresponding to prespecified regions. Then, the proposed extrapolation method allows us to approximate the PFs associated with the Ising fields defined over the remaining brain regions. In comparison with preexisting approaches, our method is robust to topological inhomogeneities in the definition of the reference regions. As a result, it strongly alleviates the computational burden and makes spatially adaptive regularization of whole brain fMRI datasets feasible.

  7. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  8. Volume estimation of rift-related magmatic features using seismic interpretation and 3D inversion of gravity data on the Guinea Plateau, West Africa

    NASA Astrophysics Data System (ADS)

    Kardell, Dominik A.

The two end-member concept of mantle plume-driven versus far-field stress-driven continental rifting anticipates high volumes of magma emplaced close to the rift-initiating plume, whereas relatively low magmatic volumes are predicted at large distances from the plume where the rifting is thought to be driven by far-field stresses. We test this concept at the Guinea Plateau, which represents the last area of separation between Africa and South America, by investigating rift-related volumes of magmatism using borehole, 3D seismic, and gravity data to run structural 3D inversions in two different data areas. Despite our interpretation of igneous rocks spanning large areas of continental shelf covered by the available seismic surveys, the calculated volumes in the Guinea Plateau barely match the magmatic volumes of other magma-poor margins and thus endorse the aforementioned concept. While the volcanic units on the shelf seem to be characterized more dominantly by horizontally deposited extrusive volcanic flows distributed over larger areas, numerous paleo-seamounts pierce complexly deformed pre- and syn-rift sedimentary units on the slope. As non-uniqueness is an omnipresent issue when using potential field data to model geologic features, our method faced some challenges in the areas exhibiting complicated geology. In this situation, less rigid constraints were applied in the modeling process. The misfit issues were successfully addressed by filtering the frequency content of the gravity data according to the depth of the investigated geology. In this work, we classify and compare our volume estimates for rift-related magmatism between the Guinea Fracture Zone (FZ) and the Saint Paul's FZ while presenting the refinements applied to our modeling technique.

  9. Application of the H/V and SPAC Method to Estimate a 3D Shear Wave Velocity Model, in the City of Coatzacoalcos, Veracruz.

    NASA Astrophysics Data System (ADS)

    Morales, L. E. A. P.; Aguirre, J.; Vazquez Rosas, R.; Suarez, G.; Contreras Ruiz-Esparza, M. G.; Farraz, I.

    2014-12-01

Methods that use seismic noise or microtremors have become very useful tools worldwide due to their low cost, the relative simplicity of collecting data, the fact that they are non-invasive (so there is no need to alter or even perforate the study site), and their relatively simple analysis procedures. Nevertheless, the geological structures estimated by these methods are assumed to be parallel, isotropic and homogeneous layers. Consequently, the precision of the estimated structure is lower than that of conventional seismic methods. In light of these facts, this study aimed at finding a new way to interpret the results obtained from seismic noise methods. In this study, seven triangular SPAC (Aki, 1957) arrays were deployed in the city of Coatzacoalcos, Veracruz, varying in size from 10 to 100 meters. From the autocorrelation between the stations of each array, a Rayleigh wave phase velocity dispersion curve was calculated. This dispersion curve was used to obtain an S-wave velocity (VS) structure of parallel layers for the study site. Subsequently, the horizontal-to-vertical ratio of the spectrum of microtremors, H/V (Nogoshi and Igarashi, 1971; Nakamura, 1989, 2000), was calculated for each vertex of the SPAC triangular arrays, and from the H/V spectrum the fundamental frequency was estimated for each vertex. By using the H/V spectral ratio curves, interpreted as a proxy for the Rayleigh wave ellipticity curve, a series of VS structures was inverted for each vertex of the SPAC array. Lastly, each VS structure was employed to calculate a 3D velocity model, in which the exploration depth was approximately 100 meters and velocities ranged between 206 m/s and 920 m/s. The 3D model revealed a thinning of the low-velocity layers. This proved to be in good agreement with the variation of the fundamental frequencies observed at each vertex. With the previous kind of analysis a preliminary model can be obtained as a first

  10. 3D vision assisted flexible robotic assembly of machine components

    NASA Astrophysics Data System (ADS)

    Ogun, Philips S.; Usman, Zahid; Dharmaraj, Karthick; Jackson, Michael R.

    2015-12-01

    Robotic assembly systems either make use of expensive fixtures to hold components in predefined locations, or the poses of the components are determined using various machine vision techniques. Vision-guided assembly robots can handle subtle variations in geometries and poses of parts. Therefore, they provide greater flexibility than the use of fixtures. However, the currently established vision-guided assembly systems use 2D vision, which is limited to three degrees of freedom. The work reported in this paper is focused on flexible automated assembly of clearance fit machine components using 3D vision. The recognition and the estimation of the poses of the components are achieved by matching their CAD models with the acquired point cloud data of the scene. Experimental results obtained from a robot demonstrating the assembly of a set of rings on a shaft show that the developed system is not only reliable and accurate, but also fast enough for industrial deployment.
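    A hedged sketch of the recognition-by-registration idea in the abstract: a point cloud sampled from a component's CAD model is aligned to the acquired scene cloud with ICP, and the resulting transform gives the component's estimated pose. The synthetic ring-shaped cloud, noise level, parameter values, and the use of Open3D are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
import open3d as o3d

# points on a torus, standing in for a cloud sampled from the ring's CAD model
uu, vv = np.meshgrid(np.linspace(0, 2*np.pi, 80), np.linspace(0, 2*np.pi, 20))
R0, r0 = 30.0, 5.0   # ring radii in millimetres
model_pts = np.column_stack([
    ((R0 + r0*np.cos(vv)) * np.cos(uu)).ravel(),
    ((R0 + r0*np.cos(vv)) * np.sin(uu)).ravel(),
    (r0 * np.sin(vv)).ravel()])

# "scene" cloud: the same ring observed under an unknown pose, with sensor noise
true_pose = np.eye(4)
true_pose[:3, :3] = o3d.geometry.get_rotation_matrix_from_xyz((0.1, 0.2, 0.3))
true_pose[:3, 3] = [15.0, -10.0, 40.0]
rng = np.random.default_rng(0)
scene_pts = (model_pts @ true_pose[:3, :3].T) + true_pose[:3, 3]
scene_pts += rng.normal(0.0, 0.2, scene_pts.shape)

model = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_pts))
scene = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scene_pts))

# ICP from a rough initial guess (identity here); in practice a global matcher
# between CAD features and the scan would supply this initial pose
result = o3d.pipelines.registration.registration_icp(
    model, scene, 50.0, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("estimated component pose (4x4):\n", result.transformation)
```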

  11. Robust 3D reconstruction with an RGB-D camera.

    PubMed

    Wang, Kangkan; Zhang, Guofeng; Bao, Hujun

    2014-11-01

We present a novel 3D reconstruction approach using a low-cost RGB-D camera such as the Microsoft Kinect. Compared with previous methods, our scanning system can work well in challenging cases where there are large repeated textures and significant missing-depth problems. For robust registration, we propose to utilize both visual and geometric features and to combine them with the SfM technique to enhance the robustness of feature matching and camera pose estimation. In addition, a novel prior-based multi-candidate RANSAC is introduced to efficiently estimate the model parameters and significantly speed up the camera pose estimation under multiple correspondence candidates. Even when serious depth missing occurs, our method can still successfully register all frames together. Loop closures can also be robustly detected and handled to eliminate the drift problem. The missing geometry can be completed by combining multiview stereo and mesh deformation techniques. A variety of challenging examples demonstrate the effectiveness of the proposed approach.

  12. A 3D lower limb musculoskeletal model for simultaneous estimation of musculo-tendon, joint contact, ligament and bone forces during gait.

    PubMed

    Moissenet, Florent; Chèze, Laurence; Dumas, Raphaël

    2014-01-03

Musculo-tendon forces and joint reaction forces are typically estimated using a two-step method, computing first the musculo-tendon forces by a static optimization procedure and then deducing the joint reaction forces from the force equilibrium. However, this method does not allow studying the interactions between musculo-tendon forces and joint reaction forces in establishing this equilibrium, and the joint reaction forces are usually overestimated. This study introduces a new 3D lower limb musculoskeletal model based on a one-step static optimization procedure allowing simultaneous estimation of musculo-tendon, joint contact, ligament and bone forces during gait. It is postulated that this approach, by giving access to the forces transmitted by these musculoskeletal structures at the hip, tibiofemoral, patellofemoral and ankle joints, modeled using anatomically consistent kinematic models, should ease the validation of the model using joint contact forces measured with instrumented prostheses. A blinded validation based on four datasets was performed under two different minimization conditions (i.e., C1 - only musculo-tendon forces are minimized, and C2 - musculo-tendon, joint contact, ligament and bone forces are minimized while focusing more specifically on tibiofemoral joint contacts). The results show that, in most cases, the model is able to estimate the correct timing of musculo-tendon forces during normal gait (i.e., the mean coefficient of active/inactive state concordance between estimated musculo-tendon forces and measured EMG envelopes was C1: 65.87% and C2: 60.46%). The results also showed that the model is potentially able to estimate joint contact, ligament and bone forces well, and more specifically the medial (i.e., the mean RMSE between estimated joint contact force and in vivo measurement was C1: 1.14BW and C2: 0.39BW) and lateral (i.e., C1: 0.65BW and C2: 0.28BW) tibiofemoral contact forces during normal gait. However, the results remain highly influenced by the
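    A minimal illustration of the static-optimization idea discussed above (shown here in its classical two-step variant): a known net joint moment from inverse dynamics is distributed over a few musculo-tendon actuators by minimizing the sum of squared activations subject to moment equilibrium. The moment arms and maximal forces are invented numbers; the authors' one-step model additionally includes joint contact, ligament and bone forces in the optimization.

```python
import numpy as np
from scipy.optimize import minimize

moment_arms = np.array([0.04, 0.035, 0.02])      # m, three knee extensors (toy values)
max_forces = np.array([3000.0, 2500.0, 1500.0])  # N, maximal isometric forces
net_moment = 120.0                               # Nm, from inverse dynamics

def cost(activations):
    # classical static-optimization criterion: sum of squared activations
    return np.sum(activations ** 2)

constraints = {'type': 'eq',
               'fun': lambda a: np.dot(moment_arms * max_forces, a) - net_moment}
bounds = [(0.0, 1.0)] * 3

res = minimize(cost, x0=np.full(3, 0.3), bounds=bounds, constraints=constraints)
muscle_forces = res.x * max_forces
print("activations:", res.x, "muscle forces (N):", muscle_forces)
```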

  13. 3D Transient Hydraulic Tomography (3DTHT): An Efficient Field and Modeling Method for High-Resolution Estimation of Aquifer Heterogeneity

    NASA Astrophysics Data System (ADS)

    Barrash, W.; Cardiff, M. A.; Kitanidis, P. K.

    2012-12-01

    The distribution of hydraulic conductivity (K) is a major control on groundwater flow and contaminant transport. Our limited ability to determine 3D heterogeneous distributions of K is a major reason for increased costs and uncertainties associated with virtually all aspects of groundwater contamination management (e.g., site investigations, risk assessments, remediation method selection/design/operation, monitoring system design/operation). Hydraulic tomography (HT) is an emerging method for directly estimating the spatially variable distribution of K - in a similar fashion to medical or geophysical imaging. Here we present results from 3D transient field-scale experiments (3DTHT) which capture the heterogeneous K distribution in a permeable, moderately heterogeneous, coarse fluvial unconfined aquifer at the Boise Hydrogeophysical Research Site (BHRS). The results are verified against high-resolution K profiles from multi-level slug tests at BHRS wells. The 3DTHT field system for well instrumentation and data acquisition/feedback is fully modular and portable, and the in-well packer-and-port system is easily assembled and disassembled without expensive support equipment or need for gas pressurization. Tests are run for 15-20 min and the aquifer is allowed to recover while the pumping equipment is repositioned between tests. The tomographic modeling software developed uses as input observations of temporal drawdown behavior from each of numerous zones isolated in numerous observation wells during a series of pumping tests conducted from numerous isolated intervals in one or more pumping wells. The software solves for distributed K (as well as storage parameters Ss and Sy, if desired) and estimates parameter uncertainties using: a transient 3D unconfined forward model in MODFLOW, the adjoint state method for calculating sensitivities (Clemo 2007), and the quasi-linear geostatistical inverse method (Kitanidis 1995) for the inversion. We solve for K at >100,000 sub-m3

  14. 3d-3d correspondence revisited

    DOE PAGES

Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; ...

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  15. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  16. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

In this work, we propose a feasible 3D video generation method to enable high-quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a two-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting. It is followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position, real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation, the extracted foreground is placed in front of the background image that was captured at the initial position. Thus the constructed full view from the initial position, combined with the view from the secondary (current) position, forms the complete binocular pair during real-time video shooting. The subjective evaluation results demonstrate competent depth perception quality with the proposed system.

  17. Different scenarios for inverse estimation of soil hydraulic parameters from double-ring infiltrometer data using HYDRUS-2D/3D

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Parisa; Ghorbani-Dashtaki, Shoja; Mosaddeghi, Mohammad Reza; Shirani, Hossein; Nodoushan, Ali Reza Mohammadi

    2016-04-01

In this study, HYDRUS-2D/3D was used to simulate ponded infiltration through double-ring infiltrometers into a hypothetical loamy soil profile. Twelve scenarios of inverse modelling (divided into three groups) were considered for estimation of Mualem-van Genuchten hydraulic parameters. In the first group, simulation was carried out solely using cumulative infiltration data. In the second group, cumulative infiltration data plus water content at h = -330 cm (field capacity) were used as inputs. In the third group, cumulative infiltration data plus water contents at h = -330 cm (field capacity) and h = -15 000 cm (permanent wilting point) were used simultaneously as predictors. The results showed that numerical inverse modelling of the double-ring infiltrometer data provided a reliable alternative method for determining soil hydraulic parameters. The results also indicated that by reducing the number of hydraulic parameters involved in the optimization process, the simulation error is reduced. The best infiltration simulation was the one in which the parameters α, n, and Ks were optimized using the infiltration data and field capacity as inputs. Including field capacity as additional data was important for better optimization/definition of soil hydraulic functions, but using field capacity and permanent wilting point simultaneously as additional data increased the simulation error.

  18. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The different proposed PET segmentation strategies were validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
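    A hedged sketch of the segmentation strategy outlined above: estimate the local background with k-means clustering of voxel intensities, then define the metabolic tumor volume with an adaptive threshold between the background and the lesion maximum. The 40% weighting, the synthetic volume, and the voxel size are assumptions for illustration, not the calibrated parameters of the published method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
pet = rng.normal(1.0, 0.1, size=(40, 40, 40))                   # background uptake
pet[15:25, 15:25, 18:24] += rng.normal(6.0, 0.5, (10, 10, 6))   # synthetic lesion

# background level from a 2-class k-means over the voxel intensities
voxels = pet.reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(voxels)
means = [voxels[labels == k].mean() for k in (0, 1)]
background = min(means)

# adaptive threshold between background and lesion maximum (weight is assumed)
threshold = background + 0.40 * (pet.max() - background)
mtv_voxels = np.count_nonzero(pet > threshold)
voxel_volume_ml = (4.0 ** 3) / 1000.0            # assumed 4 mm isotropic voxels
print("MTV ~ %.1f ml" % (mtv_voxels * voxel_volume_ml))
```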

  19. An automated image-based method of 3D subject-specific body segment parameter estimation for kinetic analyses of rapid movements.

    PubMed

    Sheets, Alison L; Corazza, Stefano; Andriacchi, Thomas P

    2010-01-01

Accurate subject-specific body segment parameters (BSPs) are necessary to perform kinetic analyses of human movements with large accelerations, or no external contact forces or moments. A new automated topographical image-based method of estimating segment mass, center of mass (CM) position, and moments of inertia is presented. Body geometry and volume were measured using a laser scanner, then an automated pose and shape registration algorithm segmented the scanned body surface, and identified joint center (JC) positions. Assuming the constant segment densities of Dempster, thigh and shank masses, CM locations, and moments of inertia were estimated for four male subjects with body mass indexes (BMIs) of 19.7-38.2. The subject-specific BSP were compared with those determined using Dempster and Clauser regression equations. The influence of BSP and BMI differences on knee and hip net forces and moments during a running swing phase was quantified for the subjects with the smallest and largest BMIs. Subject-specific BSP for 15 body segments were quickly calculated using the image-based method, and total subject masses were overestimated by 1.7-2.9%. When compared with the Dempster and Clauser methods, image-based and regression estimated thigh BSP varied more than the shank parameters. Thigh masses and hip JC to thigh CM distances were consistently larger, and each transverse moment of inertia was smaller using the image-based method. Because the shank had larger linear and angular accelerations than the thigh during the running swing phase, shank BSP differences had a larger effect on calculated intersegmental forces and moments at the knee joint than thigh BSP differences did at the hip. It was the net knee kinetic differences caused by the shank BSP differences that were the largest contributors to the hip variations. Finally, BSP differences produced larger kinetic differences for the subject with larger segment masses, suggesting that parameter accuracy is more

  20. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policies, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is very much reduced compared to that of the 1990s, the holographic concept is spreading into all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject matter? For whom?

  1. The yearly rate of Relative Thalamic Atrophy (yrRTA): a simple 2D/3D method for estimating deep gray matter atrophy in Multiple Sclerosis

    PubMed Central

    Menéndez-González, Manuel; Salas-Pacheco, José M.; Arias-Carrión, Oscar

    2014-01-01

    Despite a strong correlation with outcome, the measurement of gray matter (GM) atrophy is not used in daily clinical practice as a prognostic factor or to monitor the effect of treatments in Multiple Sclerosis (MS). This is mainly because the volumetric methods available to date are sophisticated and difficult to implement for routine use in most hospitals. In addition, the meaning of raw results from volumetric studies on regions of interest is not always easy to understand. Thus, there is a strong need for a methodology that can be applied in daily clinical practice to estimate GM atrophy in a convenient and comprehensible way. Given that the thalamus is the brain structure most consistently implicated in MS, both in terms of extent of atrophy and in terms of prognostic value, we propose a solution based on this structure. In particular, we propose to compare the extent of thalamic atrophy with the extent of unspecific, global brain atrophy, represented by ventricular enlargement. We name this ratio the “yearly rate of Relative Thalamic Atrophy” (yrRTA). In this report we describe the concept of yrRTA and the guidelines for computing it under 2D and 3D approaches, and explain the rationale behind this method. We have also conducted a very short cross-sectional retrospective study as a proof of concept for yrRTA. However, we do not seek to establish the validity of this parameter here, since that research is currently being conducted and results will be addressed in future publications. PMID:25206331
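
    As a rough numerical illustration of the ratio idea only (a sketch of the concept, not the authors' published protocol; all names and inputs are hypothetical), the yearly relative thalamic volume loss can be normalized by the yearly relative ventricular enlargement measured between two scans:

    ```python
    def yearly_relative_change(volume_baseline, volume_followup, years_between_scans):
        """Relative volume change per year between two time points."""
        return (volume_followup - volume_baseline) / (volume_baseline * years_between_scans)

    def yrRTA(thalamus_t0, thalamus_t1, ventricles_t0, ventricles_t1, years_between_scans):
        """Hypothetical yearly rate of Relative Thalamic Atrophy: thalamic volume loss
        expressed relative to global atrophy, represented by ventricular enlargement."""
        thalamic_loss = -yearly_relative_change(thalamus_t0, thalamus_t1, years_between_scans)
        ventricular_gain = yearly_relative_change(ventricles_t0, ventricles_t1, years_between_scans)
        return thalamic_loss / ventricular_gain
    ```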

  2. Accurate three-dimensional pose recognition from monocular images using template matched filtering

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Diaz-Ramirez, Victor H.; Kober, Vitaly; Montemayor, Antonio S.; Pantrigo, Juan J.

    2016-06-01

    An accurate algorithm for three-dimensional (3-D) pose recognition of a rigid object is presented. The algorithm is based on adaptive template matched filtering and local search optimization. When a scene image is captured, a bank of correlation filters is constructed to find the best correspondence between the current view of the target in the scene and a target image synthesized by means of computer graphics. The synthetic image is created using a known 3-D model of the target and an iterative procedure based on local search. Computer simulation results obtained with the proposed algorithm in synthetic and real-life scenes are presented and discussed in terms of accuracy of pose recognition in the presence of noise, cluttered background, and occlusion. Experimental results show that our proposal presents high accuracy for 3-D pose estimation using monocular images.
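
    As a rough illustration of the matched-filter search described above (a generic sketch, not the authors' filter bank; the renderer `render_view` and the candidate pose grid are hypothetical placeholders), each candidate pose is rendered from the known 3-D model, correlated against the scene image, and the best-scoring pose is kept:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def correlation_score(scene, template):
        """Peak of the zero-mean cross-correlation between scene and template (crude global normalization)."""
        t = template - template.mean()
        s = scene - scene.mean()
        corr = fftconvolve(s, t[::-1, ::-1], mode="valid")   # correlation via a flipped-kernel convolution
        return corr.max() / (np.linalg.norm(t) * np.linalg.norm(s) + 1e-12)

    def estimate_pose(scene, render_view, candidate_poses):
        """Return the candidate pose whose rendered template correlates best with the scene image."""
        best_pose, best_score = None, -np.inf
        for pose in candidate_poses:
            template = render_view(pose)          # hypothetical renderer of the known 3-D model
            score = correlation_score(scene, template)
            if score > best_score:
                best_pose, best_score = pose, score
        return best_pose, best_score
    ```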

  3. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  4. Pose-Invariant Face Recognition via RGB-D Images

    PubMed Central

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle large pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions. PMID:26819581

  5. AE3D

    SciTech Connect

    Spong, Donald A

    2016-06-20

    AE3D solves for the shear Alfven eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g. tokamak, reversed field pinch) or 3D (e.g. stellarator, helical reversed field pinch, tokamak with ripple). The equations solved are based on a reduced MHD model and sound wave coupling effects are not currently included.

  6. Estimating a structural bottle neck for eye-brain transfer of visual information from 3D-volumes of the optic nerve head from a commercial OCT device

    NASA Astrophysics Data System (ADS)

    Malmberg, Filip; Sandberg-Melin, Camilla; Söderberg, Per G.

    2016-03-01

    The aim of this project was to investigate the possibility of using OCT optic nerve head 3D information captured with a Topcon OCT 2000 device for detection of the shortest distance between the inner limit of the retina and the central limit of the pigment epithelium around the circumference of the optic nerve head. The shortest distance between these boundaries reflects the nerve fiber layer thickness and measurement of this distance is interesting for follow-up of glaucoma.

  7. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.

  8. Spatial distribution of hydrocarbon reservoirs in the West Korea Bay Basin in the northern part of the Yellow Sea, estimated by 3-D gravity forward modelling

    NASA Astrophysics Data System (ADS)

    Choi, Sungchan; Ryu, In-Chang; Götze, H.-J.; Chae, Y.

    2017-01-01

    Although an amount of hydrocarbon has been discovered in the West Korea Bay Basin (WKBB), located in the North Korean offshore area, geophysical investigations associated with these hydrocarbon reservoirs are not permitted because of the current geopolitical situation. Interpretation of satellite-derived potential field data can be alternatively used to image the 3-D density distribution in the sedimentary basin associated with hydrocarbon deposits. We interpreted the TRIDENT satellite-derived gravity field data to provide detailed insights into the spatial distribution of sedimentary density structures in the WKBB. We used 3-D forward density modelling for the interpretation that incorporated constraints from existing geological and geophysical information. The gravity data interpretation and the 3-D forward modelling showed that there are two modelled areas in the central subbasin that are characterized by very low density structures, with a maximum density of about 2000 kg m-3, indicating some type of hydrocarbon reservoir. One of the anticipated hydrocarbon reservoirs is located in the southern part of the central subbasin with a volume of about 250 km3 at a depth of about 3000 m in the Cretaceous/Jurassic layer. The other hydrocarbon reservoir should exist in the northern part of the central subbasin, with an average volume of about 300 km3 at a depth of about 2500 m.

  9. Spatial distribution of Hydrocarbon Reservoirs in the West Korea Bay Basin in the northern part of the Yellow Sea, estimated by 3D gravity forward modeling

    NASA Astrophysics Data System (ADS)

    Choi, Sungchan; Ryu, In-Chang; Götze, H.-J.; Chae, Y.

    2016-10-01

    Although an amount of hydrocarbon has been discovered in the West Korea Bay Basin (WKBB), located in the North Korean offshore area, geophysical investigations associated with these hydrocarbon reservoirs are not permitted because of the current geopolitical situation. Interpretation of satellite-derived potential field data can be alternatively used to image the three-dimensional (3D) density distribution in the sedimentary basin associated with hydrocarbon deposits. We interpreted the TRIDENT satellite-derived gravity field data to provide detailed insights into the spatial distribution of sedimentary density structures in the WKBB. We used 3D forward density modeling for the interpretation that incorporated constraints from existing geological and geophysical information. The gravity data interpretation and the 3D forward modeling showed that there are two modeled areas in the central subbasin that are characterized by very low density structures, with a maximum density of about 2000 kg/m3, indicating some type of hydrocarbon reservoir. One of the anticipated hydrocarbon reservoirs is located in the southern part of the central subbasin with a volume of about 250 km3 at a depth of about 3000 m in the Cretaceous/Jurassic layer. The other hydrocarbon reservoir should exist in the northern part of the central subbasin, with an average volume of about 300 km3 at a depth of about 2500 m.

  10. An estimate of the PH3, CH3D, and GeH4 abundances on Jupiter from the Voyager IRIS data at 4.5 microns

    NASA Technical Reports Server (NTRS)

    Drossart, P.; Encrenaz, T.; Combes, M.; Kunde, V.; Hanel, R.

    1982-01-01

    No evidence is found for large scale phosphine abundance variations over Jovian latitudes between -30 and +30 deg, in PH3, CH3D, and GeH4 abundances derived from the 2100-2250 cm^-1 region of the Voyager 1 IRIS spectra. The PH3/H2 value of (4.5 ± 1.5) × 10^-7 derived from atmospheric regions corresponding to 170-200 K is 0.75 ± 0.25 times the solar value, and suggests that the PH3/H2 ratio on Jupiter decreases with atmospheric pressure upon comparison with other PH3 determinations at 10 microns. In the 200-250 K region, CH3D/H2 and GeH4/H2 ratios of 2.0 × 10^-7 and 1.0 × 10^-9, respectively, are derived within a factor of 2.0. Assuming a C/H value of 0.001, as derived from Voyager, the CH3D/H2 ratio obtained in this study implies a D/H ratio of 0.000018. This is in agreement with the interstellar medium value.

  11. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics are presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  12. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry include greater tissue equivalence (radiologically) and the lack of requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  13. RGB-D Indoor Plane-based 3D-Modeling using Autonomous Robot

    NASA Astrophysics Data System (ADS)

    Mostofi, N.; Moussa, A.; Elhabiby, M.; El-Sheimy, N.

    2014-11-01

    3D models of indoor environments provide rich information that can facilitate the disambiguation of different places and ease the familiarization of remote users with any indoor environment. In this research work, we describe a system for visual odometry and 3D modeling using information from an RGB-D sensor (camera). The visual odometry method estimates the relative pose of consecutive RGB-D frames through feature extraction and matching techniques. The pose estimated by the visual odometry algorithm is then refined with the iterative closest point (ICP) method. Switching between ICP and visual odometry when no features are visible suppresses inconsistency in the final map. Finally, we add loop closure to remove the deviation between the first and last frames. In order to extract semantic meaning from the 3D models, planar patches are segmented from the RGB-D point cloud data using a region growing technique, followed by a convex hull method to assign boundaries to the extracted patches. In order to build the final semantic 3D model, the segmented patches are merged using the relative pose information obtained from the first step.
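
    The ICP refinement step mentioned above can be illustrated with a minimal, generic point-to-point ICP using NumPy and SciPy (a textbook sketch, not the authors' implementation):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, c_dst - R @ c_src

    def icp(source, target, iterations=30):
        """Refine the rigid pose of `source` (N x 3) against `target` (M x 3) point clouds."""
        tree = cKDTree(target)
        R_total, t_total = np.eye(3), np.zeros(3)
        current = source.copy()
        for _ in range(iterations):
            _, idx = tree.query(current)                     # nearest-neighbour correspondences
            R, t = best_rigid_transform(current, target[idx])
            current = current @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t  # compose with the accumulated pose
        return R_total, t_total
    ```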

  14. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; ...

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  15. Bootstrapping 3D fermions

    SciTech Connect

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  16. 3D model-based detection and tracking for space autonomous and uncooperative rendezvous

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Zhang, Yueqiang; Liu, Haibo

    2015-10-01

    In order to navigate fully using a vision sensor, a 3D edge-model-based detection and tracking technique was developed. Firstly, we proposed a target detection strategy over a sequence of several images from the 3D model to initialize the tracking. The overall purpose of this approach is to robustly match each image with the model views of the target. Thus we designed a line segment detection and matching method based on multi-scale space techniques. Experiments on real images showed that our method is highly robust under various image changes. Secondly, we proposed a method based on a 3D particle filter (PF) coupled with M-estimation to track and estimate the pose of the target efficiently. In the proposed approach, a similarity observation model was designed according to a new distance function for line segments. Then, based on the tracking results of the PF, the pose was optimized using M-estimation. Experiments indicated that the proposed method can effectively track and accurately estimate the pose of a freely moving target in an unconstrained environment.
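
    A bare-bones particle-filter cycle over pose parameters, with a user-supplied likelihood standing in for the paper's line-segment similarity observation model, might look like the following sketch (generic, not the published algorithm):

    ```python
    import numpy as np

    def particle_filter_step(particles, measure_likelihood, motion_noise=0.01, rng=None):
        """One predict-update-resample cycle; each row of `particles` is a pose vector.

        `measure_likelihood(pose)` is a stand-in for an observation model such as the
        line-segment similarity used in the paper.
        """
        rng = rng or np.random.default_rng()
        # Predict: diffuse the particles with a random-walk motion model.
        particles = particles + rng.normal(scale=motion_noise, size=particles.shape)
        # Update: weight each particle by its observation likelihood.
        weights = np.array([measure_likelihood(p) for p in particles]) + 1e-12
        weights /= weights.sum()
        # Resample: draw particles in proportion to their weights; afterwards all weights are equal.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))
    ```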

  17. Towards real-time change detection in videos based on existing 3D models

    NASA Astrophysics Data System (ADS)

    Ruf, Boitumelo; Schuchert, Tobias

    2016-10-01

    Image-based change detection is of great importance for security applications, such as surveillance and reconnaissance, in order to find new, modified or removed objects. Such change detection can generally be performed by co-registration and comparison of two or more images. However, existing 3D objects, such as buildings, may lead to parallax artifacts in the case of inaccurate or missing 3D information, which may distort the results in the image comparison process, especially when the images are acquired from aerial platforms like small unmanned aerial vehicles (UAVs). Furthermore, considering only intensity information may lead to failures in detecting changes in the 3D structure of objects. To overcome this problem, we present an approach that uses Structure-from-Motion (SfM) to compute depth information, with which a 3D change detection can be performed against an existing 3D model. Our approach is capable of performing the change detection in real time. We use the input frames with the corresponding camera poses to compute dense depth maps with an image-based depth estimation algorithm. Additionally, we synthesize a second set of depth maps by rendering the existing 3D model from the same camera poses as those of the image-based depth maps. The actual change detection is performed by comparing the two sets of depth maps with each other. Our method is evaluated on synthetic test data with corresponding ground truth as well as on real image test data.
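
    Once the image-based and model-rendered depth maps are available, the core comparison reduces to a per-pixel depth difference with a tolerance; a minimal sketch (the threshold and inputs are placeholders, not values from the paper):

    ```python
    import numpy as np

    def depth_change_mask(depth_from_images, depth_from_model, tolerance=0.5):
        """Binary change mask: pixels where the observed depth deviates from the depth
        rendered out of the existing 3D model by more than `tolerance` (same units)."""
        valid = np.isfinite(depth_from_images) & np.isfinite(depth_from_model)
        change = np.zeros(depth_from_images.shape, dtype=bool)
        change[valid] = np.abs(depth_from_images[valid] - depth_from_model[valid]) > tolerance
        return change
    ```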

  18. Venus in 3D

    NASA Technical Reports Server (NTRS)

    Plaut, Jeffrey J.

    1993-01-01

    Stereographic images of the surface of Venus, which enable geologists to reconstruct the details of the planet's evolution, are discussed. The 120-meter resolution of these 3D images makes it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  19. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
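
    The singular-value analysis of a discretized imaging operator can be sketched in a few lines; the system matrix and the noise-floor threshold below are placeholders, not values from the study:

    ```python
    import numpy as np

    def count_measurable_components(system_matrix, noise_floor=1e-3):
        """Number of singular vectors of the imaging operator rising above a relative noise floor,
        a rough proxy for the complexity of objects the detection scheme can reconstruct."""
        singular_values = np.linalg.svd(system_matrix, compute_uv=False)
        return int(np.sum(singular_values / singular_values.max() > noise_floor))
    ```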

  20. Recovering 3D human body configurations using shape contexts.

    PubMed

    Mori, Greg; Malik, Jitendra

    2006-07-01

    The problem we consider in this paper is to take a single two-dimensional image containing a human figure, locate the joint positions, and use these to estimate the body configuration and pose in three-dimensional space. The basic approach is to store a number of exemplar 2D views of the human body in a variety of different configurations and viewpoints with respect to the camera. On each of these stored views, the locations of the body joints (left elbow, right knee, etc.) are manually marked and labeled for future use. The input image is then matched to each stored view, using the technique of shape context matching in conjunction with a kinematic chain-based deformation model. Assuming that there is a stored view sufficiently similar in configuration and pose, the correspondence process will succeed. The locations of the body joints are then transferred from the exemplar view to the test shape. Given the 2D joint locations, the 3D body configuration and pose are then estimated using an existing algorithm. We can apply this technique to video by treating each frame independently--tracking just becomes repeated recognition. We present results on a variety of data sets.

  1. Estimation of three-dimensional knee joint movement using bi-plane x-ray fluoroscopy and 3D-CT

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Fujita, Satoshi; Kohno, Takahiro; Suzuki, Masahiko; Miyagi, Jin; Moriya, Hideshige

    2005-04-01

    Acquisition of exact information about three-dimensional knee joint movement is desired in plastic surgery. Conventional X-ray fluoroscopy provides dynamic but only two-dimensional projected images. On the other hand, three-dimensional CT provides three-dimensional but only static images. In this paper, a method for acquiring three-dimensional knee joint movement using both bi-plane, dynamic X-ray fluoroscopy and static three-dimensional CT is proposed. The basic idea is the use of 2D/3D registration based on digitally reconstructed radiographs (DRRs), or virtual projections of the CT data. The idea itself is not new, but the application of bi-plane fluoroscopy to the natural bones of the knee is reported for the first time. The technique was applied to two volunteers and successful results were obtained. Accuracy evaluations through computer simulation and a phantom experiment with a knee joint of a pig were also conducted.

  2. Posing Einstein's Question: Questioning Einstein's Pose.

    ERIC Educational Resources Information Center

    Topper, David; Vincent, Dwight E.

    2000-01-01

    Discusses the events surrounding a famous picture of Albert Einstein in which he poses near a blackboard containing a tensor form of his 10 field equations for pure gravity with a question mark after it. Speculates as to the content of Einstein's lecture and the questions he might have had about the equation. (Contains over 30 references.) (WRM)

  3. Interior Reconstruction Using the 3d Hough Transform

    NASA Astrophysics Data System (ADS)

    Dumitru, R.-C.; Borrmann, D.; Nüchter, A.

    2013-02-01

    Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need to characterize and quantify complex environments in an automatic fashion arises, posing challenges for data analysis. This paper presents a system for 3D modeling by detecting planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
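
    A generic 3D Hough transform for plane detection, with planes parameterized by a unit normal (given by angles theta, phi) and a distance rho, can be sketched as follows (bin sizes and the accumulator layout are illustrative choices, not the paper's implementation):

    ```python
    import numpy as np

    def hough_planes(points, n_theta=45, n_phi=90, rho_step=0.05):
        """Vote for planes rho = x*sin(theta)*cos(phi) + y*sin(theta)*sin(phi) + z*cos(theta).

        points: (N, 3) array. Returns the accumulator plus the bin definitions so that
        peaks can be read off as dominant planes.
        """
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
        # Unit normals for every (theta, phi) pair, shape (n_theta, n_phi, 3).
        normals = np.stack([
            np.outer(np.sin(thetas), np.cos(phis)),
            np.outer(np.sin(thetas), np.sin(phis)),
            np.outer(np.cos(thetas), np.ones_like(phis)),
        ], axis=-1)
        rho = points @ normals.reshape(-1, 3).T              # signed distances, shape (N, n_theta*n_phi)
        rho_min = rho.min()
        n_rho = max(1, int(np.ceil((rho.max() - rho_min) / rho_step)))
        accumulator = np.zeros((n_theta * n_phi, n_rho), dtype=np.int64)
        rho_idx = np.clip(((rho - rho_min) / rho_step).astype(int), 0, n_rho - 1)
        for j in range(n_theta * n_phi):                     # accumulate votes per normal direction
            np.add.at(accumulator[j], rho_idx[:, j], 1)
        return accumulator.reshape(n_theta, n_phi, n_rho), thetas, phis, (rho_min, rho_step)
    ```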

  4. Sensing and compressing 3-D models

    SciTech Connect

    Krumm, J.

    1998-02-01

    The goal of this research project was to create a passive and robust computer vision system for producing 3-D computer models of arbitrary scenes. Although the authors were unsuccessful in achieving the overall goal, several components of this research have shown significant potential. Of particular interest is the application of parametric eigenspace methods for planar pose measurement of partially occluded objects in gray-level images. The techniques presented provide a simple, accurate, and robust solution to the planar pose measurement problem. In addition, the representational efficiency of eigenspace methods used with gray-level features was successfully extended to binary features, which are less sensitive to illumination changes. The results of this research are presented in two papers that were written during the course of this project. The papers are included in sections 2 and 3. The first section of this report summarizes the 3-D modeling efforts.
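
    The parametric-eigenspace idea can be sketched generically: training images of the object at known planar poses are projected into a low-dimensional eigenspace, and a test image is assigned the pose of its nearest projected neighbour (a simplified sketch, not the report's occlusion-robust method):

    ```python
    import numpy as np

    def build_eigenspace(train_images, n_components=8):
        """Flatten and centre the training images, then keep the top principal directions."""
        X = np.stack([im.ravel() for im in train_images]).astype(float)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = Vt[:n_components]                  # principal directions in image space
        coords = (X - mean) @ basis.T              # training images projected into the eigenspace
        return mean, basis, coords

    def estimate_planar_pose(test_image, mean, basis, coords, train_poses):
        """Project the test image and return the pose of the nearest training sample."""
        q = (test_image.ravel().astype(float) - mean) @ basis.T
        nearest = int(np.argmin(np.linalg.norm(coords - q, axis=1)))
        return train_poses[nearest]
    ```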

  5. Geodesy-based estimates of loading rates on faults beneath the Los Angeles basin with a new, computationally efficient method to model dislocations in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Rollins, C.; Argus, D. F.; Avouac, J. P.; Landry, W.; Barbot, S.

    2015-12-01

    North-south compression across the Los Angeles basin is accommodated by slip on thrust faults beneath the basin that may present significant seismic hazard to Los Angeles. Previous geodesy-based efforts to constrain the distributions and rates of elastic strain accumulation on these faults [Argus et al 2005, 2012] have found that the elastic model used has a first-order impact on the inferred distribution of locking and creep, underlining the need to accurately incorporate the laterally heterogeneous elastic structure and complex fault geometries of the Los Angeles basin into this analysis. We are using Gamra [Landry and Barbot, in prep.], a newly developed adaptive-meshing finite-difference solver, to compute elastostatic Green's functions that incorporate the full 3D regional elastic structure provided by the SCEC Community Velocity Model. Among preliminary results from benchmarks, forward models and inversions, we find that: 1) for a modeled creep source on the edge dislocation geometry from Argus et al [2005], the use of the SCEC CVM material model produces surface velocities in the hanging wall that are up to ~50% faster than those predicted in an elastic halfspace model; 2) in sensitivity-modulated inversions of the Argus et al [2005] GPS velocity field for slip on the same dislocation source, the use of the CVM deepens the inferred locking depth by ≥3 km compared to an elastic halfspace model; 3) when using finite-difference or finite-element models with Dirichlet boundary conditions (except for the free surface) for problems of this scale, it is necessary to set the boundaries at least ~100 km away from any slip source or data point to guarantee convergence within 5% of analytical solutions (a result which may be applicable to other static dislocation modeling problems and which may scale with the size of the area of interest). Here we will present finalized results from inversions of an updated GPS velocity field [Argus et al, AGU 2015] for the inferred

  6. Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The two hills in the distance, approximately one to two kilometers away, have been dubbed the 'Twin Peaks' and are of great interest to Pathfinder scientists as objects of future study. 3D glasses are necessary to identify surface detail. The white areas on the left hill, called the 'Ski Run' by scientists, may have been formed by hydrologic processes.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye'.


  7. 3D and beyond

    NASA Astrophysics Data System (ADS)

    Fung, Y. C.

    1995-05-01

    This conference on physiology and function covers a wide range of subjects, including the vasculature and blood flow, the flow of gas, water, and blood in the lung, the neurological structure and function, the modeling, and the motion and mechanics of organs. Many technologies are discussed. I believe that the list would include a robotic photographer, to hold the optical equipment in a precisely controlled way to obtain the images for the user. Why are 3D images needed? They are to achieve certain objectives through measurements of some objects. For example, in order to improve performance in sports or beauty of a person, we measure the form, dimensions, appearance, and movements.

  8. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  9. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
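
    The point-to-line-segment distance underlying such a similarity metric can be written compactly; this is the standard geometric formula, not the paper's exact evaluation function:

    ```python
    import numpy as np

    def point_to_segment_distance(p, a, b):
        """Euclidean distance from point p to the segment with endpoints a and b (2-D or 3-D)."""
        p, a, b = map(np.asarray, (p, a, b))
        ab = b - a
        denom = float(ab @ ab)
        if denom == 0.0:                                  # degenerate segment: a single point
            return float(np.linalg.norm(p - a))
        t = np.clip((p - a) @ ab / denom, 0.0, 1.0)       # projection parameter clamped to the segment
        return float(np.linalg.norm(p - (a + t * ab)))
    ```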

  10. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points, the 3D motions of which are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field-of-view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between the consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window and the best correlation results indicate the appropriate match. The program could be a core for building application programs for systems
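
    Two of the geometric steps described above, stereo triangulation of the selected target and the pan/tilt angles that re-centre it, can be sketched for an idealized rectified stereo pair and a simplified mast frame (x forward, y left, z up); both the camera model and the frame convention are assumptions, not the flight software:

    ```python
    import numpy as np

    def triangulate_rectified(u_left, u_right, v, focal_px, baseline_m, cx, cy):
        """3-D point in the left-camera frame from a rectified stereo correspondence."""
        disparity = float(u_left - u_right)
        z = focal_px * baseline_m / disparity
        x = (u_left - cx) * z / focal_px
        y = (v - cy) * z / focal_px
        return np.array([x, y, z])

    def pan_tilt_to_target(target_in_mast_frame):
        """Pan and tilt angles (radians) that point the mast cameras at a 3-D target."""
        x, y, z = target_in_mast_frame
        pan = np.arctan2(y, x)
        tilt = np.arctan2(z, np.hypot(x, y))
        return pan, tilt
    ```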

  11. Stochastic estimation of biogeochemical parameters from Globcolour ocean colour satellite data in a North Atlantic 3D ocean coupled physical-biogeochemical model

    NASA Astrophysics Data System (ADS)

    Doron, Maéva; Brasseur, Pierre; Brankart, Jean-Michel; Losa, Svetlana N.; Melet, Angélique

    2013-05-01

    Biogeochemical parameters remain a major source of uncertainty in coupled physical-biogeochemical models of the ocean. In a previous study (Doron et al., 2011), a stochastic estimation method was developed to estimate a subset of biogeochemical model parameters from surface phytoplankton observations. The concept was tested in the context of idealised twin experiments performed with a 1/4° resolution model of the North Atlantic ocean. The method was based on ensemble simulations describing the model response to parameter uncertainty. The statistical estimation process relies on nonlinear transformations of the estimated space to cope with the non-Gaussian behaviour of the resulting joint probability distribution of the model state variables and parameters. In the present study, the same method is applied to real ocean colour observations, as delivered by the sensors SeaWiFS, MERIS and MODIS embarked on the satellites OrbView-2, Envisat and Aqua respectively. The main outcome of the present experiments is a set of regionalised biogeochemical parameters. The benefit is quantitatively assessed with an objective norm of the misfits, which automatically adapts to the different ecological regions. The chlorophyll concentration simulated by the model with this set of optimally derived parameters is closer to the observations than the reference simulation using uniform values of the parameters. In addition, the interannual and seasonal robustness of the estimated parameters is tested by repeating the same analysis using ocean colour observations from several months and several years. The results show the overall consistency of the ensemble of estimated parameters, which are also compared to the results of an independent study.

  12. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  13. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    An area of rocky terrain near the landing site of the Sagan Memorial Station can be seen in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  14. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for 99mTc-hynic-Tyr3-octreotide Imaging

    PubMed Central

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of 99mTc-hydrazinonicotinamide (hynic)-Tyr3-octreotide as a SPECT radiotracer. 99mTc patient-specific S values and the absorbed doses were calculated with GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of 99mhynic-Tyr3-octreotide. The patient-specific S values calculated by GATE Monte Carlo code and the corresponding S values obtained by MIRDOSE program differed within 4.3% on average for self-irradiation, and differed within 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by GATE code and MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used since it provides more reliable dosimetric results. PMID:27134562

  15. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for (99m)Tc-hynic-Tyr(3)-octreotide Imaging.

    PubMed

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of (99m)Tc-hydrazinonicotinamide (hynic)-Tyr(3)-octreotide as a SPECT radiotracer. (99m)Tc patient-specific S values and the absorbed doses were calculated with GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of (99m)hynic-Tyr(3)-octreotide. The patient-specific S values calculated by GATE Monte Carlo code and the corresponding S values obtained by MIRDOSE program differed within 4.3% on average for self-irradiation, and differed within 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by GATE code and MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used since it provides more reliable dosimetric results.

  16. Rotation invariance principles in 2D/3D registration

    NASA Astrophysics Data System (ADS)

    Birkfellner, Wolfgang; Wirth, Joachim; Burgstaller, Wolfgang; Baumann, Bernard; Staedele, Harald; Hammer, Beat; Gellrich, Niels C.; Jacob, Augustinus L.; Regazzoni, Pietro; Messmer, Peter

    2003-05-01

    2D/3D patient-to-computed tomography (CT) registration is a method to determine a transformation that maps two coordinate systems by comparing a projection image rendered from CT to a real projection image. Applications include exact patient positioning in radiation therapy, calibration of surgical robots, and pose estimation in computer-aided surgery. One of the problems associated with 2D/3D registration is the fact that finding a registration involves solving a minimization problem in six degrees of freedom of motion. This results in considerable time expense, since for each iteration step at least one volume rendering has to be computed. We show that by choosing an appropriate world coordinate system and by applying a 2D/2D registration method in each iteration step, the number of iterations can be greatly reduced from n^6 to n^5. Here, n is the number of discrete variations around a given coordinate. Depending on the configuration of the optimization algorithm, this reduces the total number of iterations necessary to at least 1/3 of its original value. The method was implemented and extensively tested on simulated x-ray images of a pelvis. We conclude that this hardware-independent optimization of 2D/3D registration is a step towards increasing the acceptance of this promising method for a wide range of clinical applications.

  17. A 3D interactive method for estimating body segmental parameters in animals: application to the turning and running performance of Tyrannosaurus rex.

    PubMed

    Hutchinson, John R; Ng-Thow-Hing, Victor; Anderson, Frank C

    2007-06-21

    We developed a method based on interactive B-spline solids for estimating and visualizing biomechanically important parameters for animal body segments. Although the method is most useful for assessing the importance of unknowns in extinct animals, such as body contours, muscle bulk, or inertial parameters, it is also useful for non-invasive measurement of segmental dimensions in extant animals. Points measured directly from bodies or skeletons are digitized and visualized on a computer, and then a B-spline solid is fitted to enclose these points, allowing quantification of segment dimensions. The method is computationally fast enough so that software implementations can interactively deform the shape of body segments (by warping the solid) or adjust the shape quantitatively (e.g., expanding the solid boundary by some percentage or a specific distance beyond measured skeletal coordinates). As the shape changes, the resulting changes in segment mass, center of mass (CM), and moments of inertia can be recomputed immediately. Volumes of reduced or increased density can be embedded to represent lungs, bones, or other structures within the body. The method was validated by reconstructing an ostrich body from a fleshed and defleshed carcass and comparing the estimated dimensions to empirically measured values from the original carcass. We then used the method to calculate the segmental masses, centers of mass, and moments of inertia for an adult Tyrannosaurus rex, with measurements taken directly from a complete skeleton. We compare these results to other estimates, using the model to compute the sensitivities of unknown parameter values based upon 30 different combinations of trunk, lung and air sac, and hindlimb dimensions. The conclusion that T. rex was not an exceptionally fast runner remains strongly supported by our models--the main area of ambiguity for estimating running ability seems to be estimating fascicle lengths, not body dimensions. Additionally, the
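
    A much simpler, voxel-based stand-in for the mass-property computation (uniform density, point-mass approximation per voxel; not the B-spline solid method of the paper) shows how segment mass, centre of mass, and the inertia tensor follow from an occupancy grid:

    ```python
    import numpy as np

    def mass_properties_from_voxels(occupancy, voxel_size, density=1000.0):
        """Mass (kg), centre of mass (m) and inertia tensor (kg m^2) of a uniform-density segment.

        occupancy: boolean 3-D array of filled voxels; voxel_size: edge length in metres;
        density: kg/m^3 (e.g., a constant segment density).
        """
        coords = np.argwhere(occupancy) * voxel_size              # approximate voxel-centre positions
        voxel_mass = density * voxel_size ** 3
        mass = voxel_mass * len(coords)
        cm = coords.mean(axis=0)
        r = coords - cm
        r2 = (r ** 2).sum(axis=1)
        inertia = voxel_mass * (np.eye(3) * r2.sum() - r.T @ r)   # sum of m*(|r|^2 I - r r^T)
        return mass, cm, inertia
    ```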

  18. A Lower Bound on Blowup Rates for the 3D Incompressible Euler Equation and a Single Exponential Beale-Kato-Majda Type Estimate

    NASA Astrophysics Data System (ADS)

    Chen, Thomas; Pavlović, Nataša

    2012-08-01

    We prove a Beale-Kato-Majda type criterion for the loss of regularity for solutions of the incompressible Euler equations in H^s(R^3), for s > 5/2. Instead of double exponential estimates of Beale-Kato-Majda type, we obtain a single exponential bound on ||u(t)||_{H^s} involving the length parameter introduced by Constantin in (SIAM Rev. 36(1):73-98, 1994). In particular, we derive lower bounds on the blowup rate of such solutions.

  19. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r, θ, z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  20. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) images is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialization values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial for registering disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and for estimating standardized uptake values (SUVs) of regions of interest (ROIs) in PET images. Therefore, simulation studies are conducted using spherical targets to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex obtained by the K-Means method and by the proposed method via volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than those by the K-Means method.
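
    The mixture-model half of such a hybrid can be illustrated with a generic Gaussian-mixture segmentation of voxel intensities using scikit-learn (the kernel-density component and the microPET-specific processing are not reproduced here):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_segment(volume, n_classes=3, random_state=0):
        """Label every voxel of a 3D volume with the Gaussian-mixture class fitted to its intensity."""
        intensities = volume.reshape(-1, 1).astype(float)
        gmm = GaussianMixture(n_components=n_classes, random_state=random_state)
        labels = gmm.fit(intensities).predict(intensities)
        return labels.reshape(volume.shape)
    ```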

  1. A Computational Framework for Age-at-Death Estimation from the Skeleton: Surface and Outline Analysis of 3D Laser Scans of the Adult Pubic Symphysis.

    PubMed

    Stoyanova, Detelina K; Algee-Hewitt, Bridget F B; Kim, Jieun; Slice, Dennis E

    2017-02-28

    In forensic anthropology, age-at-death estimation typically requires the macroscopic assessment of the skeletal indicator and its association with a phase or score. High subjectivity and error are the recognized disadvantages of this approach, creating a need for alternative tools that enable the objective and mathematically robust assessment of true chronological age. We describe, here, three fully computational, quantitative shape analysis methods and a combinatory approach that make use of three-dimensional laser scans of the pubic symphysis. We report a novel age-related shape measure, focusing on the changes observed in the ventral margin curvature, and refine two former methods, whose measures capture the flatness of the symphyseal surface. We show how we can decrease age-estimation error and improve prior results by combining these outline and surface measures in two multivariate regression models. The presented models produce objective age-estimates that are comparable to current practices with root-mean-square-errors between 13.7 and 16.5 years.

  2. Automatic pose correction for image-guided nonhuman primate brain surgery planning

    NASA Astrophysics Data System (ADS)

    Ghafurian, Soheil; Chen, Antong; Hines, Catherine; Dogdas, Belma; Bone, Ashleigh; Lodge, Kenneth; O'Malley, Stacey; Winkelmann, Christopher T.; Bagchi, Ansuman; Lubbers, Laura S.; Uslaner, Jason M.; Johnson, Colena; Renger, John; Zariwala, Hatim A.

    2016-03-01

    Intracranial delivery of recombinant DNA and neurochemical analysis in nonhuman primate (NHP) requires precise targeting of various brain structures via imaging-derived coordinates in stereotactic surgeries. To attain targeting precision, the surgical planning needs to be done on preoperative three dimensional (3D) CT and/or MR images, in which the animal's head is fixed in a pose identical to the pose during the stereotactic surgery. The matching of the image to the pose in the stereotactic frame can be done manually by detecting key anatomical landmarks on the 3D MR and CT images such as ear canal and ear bar zero position. This is not only time-intensive but also prone to error due to the varying initial poses in the images, which affect both the landmark detection and rotation estimation. We have introduced a fast, reproducible, and semi-automatic method to detect the stereotactic coordinate system in the image and correct the pose. The method begins with a rigid registration of the subject images to an atlas and proceeds to detect the anatomical landmarks through a sequence of optimization, deformable and multimodal registration algorithms. The results showed similar precision (maximum difference of 1.71 in average in-plane rotation) to a manual pose correction.

  3. Regoliths in 3-D

    NASA Technical Reports Server (NTRS)

    Grant, John; Cheng, Andrew; Delamere, Allen; Gorevan, Steven; Korotev, Randy; McKay, David; Schmitt, Harrison; Zarnecki, John

    1996-01-01

    A planetary regolith is any layer of fragmental, unconsolidated material that may or may not be texturally or compositionally altered relative to the underlying substrate and that occurs on the outer surface of a solar system body. This includes fragmented material from volcanic, sedimentary, and meteoritic infall sources, derived by any process (e.g., impact and all other endogenic or exogenic processes). Many measurements that can be made from orbit or from Earth-based observations provide information only about the uppermost portions of a regolith and not the underlying substrate(s). Thus an understanding of the formation processes, physical properties, composition, and evolution of planetary regoliths is essential in answering scientific questions posed by the Committee on Planetary and Lunar Exploration (COMPLEX). This paper provides examples of measurements required to answer these critical science questions.

  4. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    SciTech Connect

    Mishra, Pankaj; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.; Li, Ruijiang

    2014-08-15

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) Eight digital eXtended CArdiac-Torso phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is (0.73 ± 0.63 mm) for the XCAT data and (0.90 ± 0.65 mm) for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model
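
    The PCA motion model at the core of the method can be sketched generically: displacement vector fields from the 4DCT phases are flattened, decomposed with PCA, and a new field is synthesized from a small set of eigen-coefficients (the EPID-driven optimization of those coefficients is omitted; the array shapes are assumptions):

    ```python
    import numpy as np

    def build_dvf_pca_model(dvfs, n_modes=3):
        """dvfs: (n_phases, nx, ny, nz, 3) displacement fields from deformable registration of 4DCT phases."""
        X = dvfs.reshape(dvfs.shape[0], -1).astype(float)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        eigenvectors = Vt[:n_modes]                    # spatial motion modes (eigenvectors)
        return mean, eigenvectors, dvfs.shape[1:]

    def synthesize_dvf(mean, eigenvectors, coefficients, field_shape):
        """New displacement field from eigen-coefficients (the quantities tuned against a cine EPID image)."""
        flat = mean + np.asarray(coefficients) @ eigenvectors
        return flat.reshape(field_shape)
    ```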

  5. FUN3D Manual: 12.5

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  6. FUN3D Manual: 12.4

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  7. FUN3D Manual: 12.6

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  8. FUN3D Manual: 12.9

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  9. FUN3D Manual: 13.1

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2017-01-01

    This manual describes the installation and execution of FUN3D version 13.1, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  10. FUN3D Manual: 12.7

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  11. FUN3D Manual: 13.0

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  12. FUN3D Manual: 12.8

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  13. Estimation of the maximum allowable loading amount of COD in Luoyuan Bay by a 3-D COD transport and transformation model

    NASA Astrophysics Data System (ADS)

    Wu, Jialin; Li, Keqiang; Shi, Xiaoyong; Liang, Shengkang; Han, Xiurong; Ma, Qimin; Wang, Xiulin

    2014-08-01

    The rapid economic and social developments in the Luoyuan and Lianjiang counties of Fujian Province, China, raise environmental and ecosystem issues. Unusual phytoplankton blooms and eutrophication, for example, have increased in severity in Luoyuan Bay (LB). The constant increase of nutrient loads has largely caused the environmental degradation in LB. Several countermeasures have been implemented to solve these environmental problems. The most effective of these strategies is the reduction of pollutant loadings into the sea in accordance with total pollutant load control (TPLC) plans. A combined three-dimensional hydrodynamic transport-transformation model was constructed to estimate the marine environmental capacity of chemical oxygen demand (COD). The allowed maximum loadings for each discharge unit in LB were calculated with applicable simulation results. The simulation results indicated that the environmental capacity of COD is approximately 11×10^4 t year^-1 when the water quality complies with the marine functional zoning standards for LB. A pollutant reduction scheme to diminish the present levels of mariculture- and domestic-based COD loadings is based on the estimated marine COD environmental capacity. The obtained values imply that the LB waters could comply with the targeted water quality criteria. To meet the revised marine functional zoning standards, discharge loadings from discharge units 1 and 11 should be reduced to 996 and 3236 t year^-1, respectively.

  14. Effect of segmentation errors on 3D-to-2D registration of implant models in X-ray images.

    PubMed

    Mahfouz, Mohamed R; Hoff, William A; Komistek, Richard D; Dennis, Douglas A

    2005-02-01

    In many biomedical applications, it is desirable to estimate the three-dimensional (3D) position and orientation (pose) of a metallic rigid object (such as a knee or hip implant) from its projection in a two-dimensional (2D) X-ray image. If the geometry of the object is known, as well as the details of the image formation process, then the pose of the object with respect to the sensor can be determined. A common method for 3D-to-2D registration is to first segment the silhouette contour from the X-ray image; that is, identify all points in the image that belong to the 2D silhouette and not to the background. This segmentation step is then followed by a search for the 3D pose that will best match the observed contour with a predicted contour. Although the silhouette of a metallic object is often clearly visible in an X-ray image, adjacent tissue and occlusions can make the exact location of the silhouette contour difficult to determine in places. Occlusion can occur when another object (such as another implant component) partially blocks the view of the object of interest. In this paper, we argue that common methods for segmentation can produce errors in the location of the 2D contour, and hence errors in the resulting 3D estimate of the pose. We show, on a typical fluoroscopy image of a knee implant component, that interactive and automatic methods for segmentation result in segmented contours that vary significantly. We show how the variability in the 2D contours (quantified by two different metrics) corresponds to variability in the 3D poses. Finally, we illustrate how traditional segmentation methods can fail completely in the (not uncommon) cases of images with occlusion.
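
    A minimal sketch of the underlying 3D-to-2D registration idea, assuming a simple pinhole projection and synthetic geometry: candidate poses are scored by a symmetric nearest-neighbour (chamfer-like) distance between projected model points and the segmented contour, and the best-scoring pose is kept. A real implementation would extract a true predicted silhouette and search all six degrees of freedom.

      import numpy as np

      def rot_z(theta):
          c, s = np.cos(theta), np.sin(theta)
          return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

      def project(points_3d, R, t, focal=1000.0):
          """Pinhole projection of 3D model points under a candidate pose."""
          cam = points_3d @ R.T + t
          return focal * cam[:, :2] / cam[:, 2:3]

      def chamfer(a, b):
          """Symmetric mean nearest-neighbour distance between two 2D point sets."""
          d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
          return d.min(axis=1).mean() + d.min(axis=0).mean()

      # Hypothetical implant point model and "segmented" contour (here simply the
      # model projected under the true pose, standing in for the X-ray silhouette).
      rng = np.random.default_rng(1)
      model = rng.normal(size=(200, 3)) * [30.0, 20.0, 10.0]
      true_t = np.array([0.0, 0.0, 800.0])
      observed = project(model, rot_z(0.20), true_t)

      # Coarse 1-D search over in-plane rotation (a full search would cover 6 DOF).
      angles = np.linspace(-0.5, 0.5, 101)
      scores = [chamfer(project(model, rot_z(a), true_t), observed) for a in angles]
      best_angle = angles[int(np.argmin(scores))]      # lands near the true 0.20 rad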

  15. Real-Time Large Scale 3D Reconstruction by Fusing Kinect and IMU Data

    NASA Astrophysics Data System (ADS)

    Huai, J.; Zhang, Y.; Yilmaz, A.

    2015-08-01

    Kinect-style RGB-D cameras have been used to build large scale dense 3D maps for indoor environments. These maps can serve many purposes such as robot navigation, and augmented reality. However, to generate dense 3D maps of large scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) Large scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) The coarse-to-fine iterative closest point (ICP) algorithm, the SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides the incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which can reduce long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images into the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures, and has smaller drift than the state-of-the-art systems in large scale reconstruction.
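
    A minimal sketch of the fallback logic described above; the individual estimators and validity checks are placeholders (the real system runs coarse-to-fine ICP, SIFT odometry, and IMU integration), so only the cascade structure is illustrated.

      import numpy as np

      def icp_increment(depth_prev, depth_curr):
          """Placeholder for coarse-to-fine ICP; returns (ok, 4x4 pose increment)."""
          overlap = np.mean(np.isfinite(depth_prev) & np.isfinite(depth_curr))
          return overlap > 0.6, np.eye(4)

      def sift_increment(rgb_prev, rgb_curr):
          """Placeholder for SIFT-based visual odometry."""
          n_matches = 120                      # pretend feature matching succeeded
          return n_matches > 50, np.eye(4)

      def imu_increment(accel, dt):
          """Placeholder for IMU dead reckoning over one frame interval."""
          T = np.eye(4)
          T[:3, 3] = 0.5 * accel * dt ** 2     # crude constant-acceleration translation
          return T

      def track(pose, prev, curr, imu_accel, dt=1.0 / 30.0):
          """Fallback cascade: ICP first, then SIFT odometry, then IMU as last resort."""
          ok, T = icp_increment(prev["depth"], curr["depth"])
          if not ok:
              ok, T = sift_increment(prev["rgb"], curr["rgb"])
          if not ok:
              T = imu_increment(imu_accel, dt)
          return pose @ T

      frame = {"depth": np.ones((4, 4)), "rgb": np.zeros((4, 4, 3))}
      pose = track(np.eye(4), frame, frame, np.zeros(3))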

  16. Smoothing 3-D Data for Torpedo Paths

    DTIC Science & Technology

    1978-05-01

    parametric estimation, consider data collected at 200 sequential observation times (e.g., 800 to 1,000 for the 3-D data used in this section). Samples...sample when the fitted curve is a straight line (refer to Appendix B). (e) Parametric estimation could also be modified to delete some samples (e.g

  17. Variation in the measurement of cranial volume and surface area using 3D laser scanning technology.

    PubMed

    Sholts, Sabrina B; Wärmländer, Sebastian K T S; Flores, Louise M; Miller, Kevin W P; Walker, Phillip L

    2010-07-01

    Three-dimensional (3D) laser scanner models of human crania can be used for forensic facial reconstruction, and for obtaining craniometric data useful for estimating age, sex, and population affinity of unidentified human remains. However, the use of computer-generated measurements in a casework setting requires the measurement precision to be known. Here, we assess the repeatability and precision of cranial volume and surface area measurements using 3D laser scanner models created by different operators using different protocols for collecting and processing data. We report intraobserver measurement errors of 0.2% and interobserver errors of 2% of the total area and volume values, suggesting that observer-related errors do not pose major obstacles for sharing, combining, or comparing such measurements. Nevertheless, as no standardized procedure exists for area or volume measurements from 3D models, it is imperative to report the scanning and postscanning protocols employed when such measurements are conducted in a forensic setting.
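
    A minimal numpy sketch of how surface area and enclosed volume can be computed from a closed triangle mesh (triangle areas and signed tetrahedron volumes), and how an observer error can be reported as a percentage of the mean; the tiny tetrahedron mesh and the two operator measurements are hypothetical stand-ins, not the study's protocol.

      import numpy as np

      def mesh_area_volume(vertices, faces):
          """Surface area and enclosed volume of a closed, consistently oriented triangle mesh."""
          v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
          cross = np.cross(v1 - v0, v2 - v0)
          area = 0.5 * np.linalg.norm(cross, axis=1).sum()
          volume = np.abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0
          return area, volume

      def observer_error_percent(measurements):
          """Spread of repeated measurements expressed as a percentage of their mean."""
          m = np.asarray(measurements, dtype=float)
          return 100.0 * (m.max() - m.min()) / m.mean()

      # Hypothetical closed mesh: a unit tetrahedron (area ~2.37, volume ~0.167).
      verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
      faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
      area, volume = mesh_area_volume(verts, faces)

      # E.g. two operators' volume estimates (cm^3) of the same cranial model:
      interobserver = observer_error_percent([1420.0, 1448.0])   # about 2%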

  18. 3D Surface Reconstruction of Plant Seeds by Volume Carving: Performance and Accuracies

    PubMed Central

    Roussel, Johanna; Geiger, Felix; Fischbach, Andreas; Jahnke, Siegfried; Scharr, Hanno

    2016-01-01

    We describe a method for 3D reconstruction of plant seed surfaces, focusing on small seeds with diameters as small as 200 μm. The method considers robotized systems allowing single seed handling in order to rotate a single seed in front of a camera. Even though such systems feature high position repeatability, at sub-millimeter object scales, camera pose variations have to be compensated. We do this by robustly estimating the tool center point from each acquired image. 3D reconstruction can then be performed by a simple shape-from-silhouette approach. In experiments we investigate runtimes, theoretically achievable accuracy, experimentally achieved accuracy, and show as a proof of principle that the proposed method is well sufficient for 3D seed phenotyping purposes. PMID:27375628
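
    A minimal numpy sketch of the shape-from-silhouette idea referred to above: voxel centres are projected into each calibrated view, and any voxel whose projection falls outside a silhouette mask is carved away. The pinhole camera, single view, and circular mask are hypothetical; the paper additionally estimates the tool centre point per image to compensate for pose variations.

      import numpy as np

      def carve(voxel_centres, views):
          """Keep voxels whose projection lies inside every silhouette mask.
          views: list of (R, t, focal, mask) with a simple pinhole camera model."""
          keep = np.ones(len(voxel_centres), dtype=bool)
          for R, t, focal, mask in views:
              cam = voxel_centres @ R.T + t
              uv = focal * cam[:, :2] / cam[:, 2:3]
              px = np.round(uv + np.array(mask.shape)[::-1] / 2).astype(int)
              inside = (px[:, 0] >= 0) & (px[:, 0] < mask.shape[1]) & \
                       (px[:, 1] >= 0) & (px[:, 1] < mask.shape[0])
              occupied = np.zeros(len(voxel_centres), dtype=bool)
              occupied[inside] = mask[px[inside, 1], px[inside, 0]]
              keep &= occupied
          return keep

      # Hypothetical 64^3 voxel grid (0.02 mm = 20 um voxels) around the seed position.
      n = 64
      axis = (np.arange(n) - n / 2) * 0.02
      X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
      centres = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

      # One hypothetical view along +z whose silhouette is a small disc.
      yy, xx = np.mgrid[:128, :128]
      mask = (xx - 64) ** 2 + (yy - 64) ** 2 < 10 ** 2
      views = [(np.eye(3), np.array([0.0, 0.0, 5.0]), 100.0, mask)]
      occupancy = carve(centres, views).reshape(n, n, n)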

  19. Intraoral 3D scanner

    NASA Astrophysics Data System (ADS)

    Kühmstedt, Peter; Bräuer-Burchardt, Christian; Munkelt, Christoph; Heinze, Matthias; Palme, Martin; Schmidt, Ingo; Hintersehr, Josef; Notni, Gunther

    2007-09-01

    Here a new set-up of a 3D-scanning system for CAD/CAM in the dental industry is proposed. The system is designed for direct scanning of the dental preparations within the mouth. The measuring process is based on a phase correlation technique in combination with fast fringe projection in a stereo arrangement. The novelty in the approach is characterized by the following features: A phase correlation between the phase values of the images of two cameras is used for the coordinate calculation. This works contrary to the usage of only phase values (phasogrammetry) or classical triangulation (phase values and camera image coordinate values) for the determination of the coordinates. The main advantage of the method is that the absolute value of the phase at each point does not directly determine the coordinate; thus errors in the determination of the coordinates are prevented. Furthermore, using the epipolar geometry of the stereo-like arrangement, the phase unwrapping problem of fringe analysis can be solved. The endoscope-like measurement system contains one projection and two camera channels for illumination and observation of the object, respectively. The new system has a measurement field of nearly 25 mm × 15 mm. The user can measure two or three teeth at one time, so the system can be used for scanning single teeth up to bridge preparations. In the paper the first realization of the intraoral scanner is described.

  20. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  1. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  2. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera

    PubMed Central

    Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio

    2016-01-01

    The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimation of the unknown geometric transformation is obtained by a registration of the two 3D point clouds, but it ends up being strongly affected by noise and data dispersion. A robust and optimal estimation is obtained by a statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera. PMID:27089344
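
    A minimal numpy sketch of the two ingredients described above, under simplifying assumptions: a least-squares rigid transform between corresponding 3D points (Kabsch/Procrustes) gives one estimate per rig pose, and a crude median-based combination across poses stands in for the paper's statistical processing; the correspondences and noise are synthetic.

      import numpy as np

      def rigid_transform(src, dst):
          """Least-squares rotation R and translation t with dst ≈ src @ R.T + t."""
          cs, cd = src.mean(axis=0), dst.mean(axis=0)
          H = (src - cs).T @ (dst - cd)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T
          return R, cd - R @ cs

      # Synthetic "rig": the same target points seen by both sensors in 12 poses,
      # each pose yielding one noisy estimate of the fixed stereo-to-acoustic transform.
      rng = np.random.default_rng(2)
      true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
      if np.linalg.det(true_R) < 0:
          true_R[:, 0] *= -1.0                          # force a proper rotation
      true_t = np.array([0.10, -0.05, 0.30])

      estimates = []
      for _ in range(12):
          pts = rng.normal(size=(40, 3))
          noisy = pts @ true_R.T + true_t + 0.01 * rng.normal(size=(40, 3))
          estimates.append(rigid_transform(pts, noisy))

      # Crude robust combination across poses: median translation, and the rotation
      # nearest to the element-wise median re-projected onto SO(3) via SVD.
      t_est = np.median(np.array([t for _, t in estimates]), axis=0)
      U, _, Vt = np.linalg.svd(np.median(np.array([R for R, _ in estimates]), axis=0))
      R_est = U @ Vt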

  3. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

    In recent years, 3D shape has emerged in face recognition because of its robustness to pose and illumination changes. These attractive benefits do not remove all the challenges to achieving a satisfactory recognition rate. Other challenges, such as facial expressions and the computing time of matching algorithms, remain to be explored. In this context, we propose our 3D face recognition approach using 3D wavelet networks. Our approach contains two stages: a learning stage and a recognition stage. For the training we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x,y,z), we proceed to voxelization to get a 3D volume, which is decomposed by the 3D fast wavelet transform and then modeled with a wavelet network; the associated weights are taken as feature vectors representing each training face. For the recognition stage, an unknown identity face is projected onto all the training wavelet networks to obtain a new feature vector after every projection. A similarity score is computed between the stored and the obtained feature vectors. To show the efficiency of our approach, experiments were performed on the full FRGC v.2 benchmark.

  4. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  5. Estimates of mercury flux into the United States from non-local and global sources: results from a 3-D CTM simulation

    NASA Astrophysics Data System (ADS)

    Drewniak, B. A.; Kotamarthi, V. R.; Streets, D.; Kim, M.; Crist, K.

    2008-11-01

    The sensitivity of Hg concentration and deposition in the United States to emissions in China was investigated by using a global chemical transport model: Model for Ozone and Related Chemical Tracers (MOZART). Two forms of gaseous Hg were included in the model: elemental Hg (HG(0)) and oxidized or reactive Hg (HGO). We simulated three different emission scenarios to evaluate the model's sensitivity. One scenario included no emissions from China, while the others were based on different estimates of Hg emissions in China. The results indicated, in general, that when Hg emissions were included, HG(0) concentrations increased both locally and globally. Increases in Hg concentrations in the United States were greatest during spring and summer, by as much as 7%. Ratios of calculated concentrations of Hg and CO near the source region in eastern Asia agreed well with ratios based on measurements. Increases similar to those observed for HG(0) were also calculated for deposition of HGO. Calculated increases in wet and dry deposition in the United States were 5-7% and 5-9%, respectively. The results indicate that long-range transcontinental transport of Hg has a non-negligible impact on Hg deposition levels in the United States.

  6. Predicting binding poses and affinities for protein - ligand complexes in the 2015 D3R Grand Challenge using a physical model with a statistical parameter estimation

    NASA Astrophysics Data System (ADS)

    Grudinin, Sergei; Kadukova, Maria; Eisenbarth, Andreas; Marillet, Simon; Cazals, Frédéric

    2016-09-01

    The 2015 D3R Grand Challenge provided an opportunity to test our new model for the binding free energy of small molecules, as well as to assess our protocol to predict binding poses for protein-ligand complexes. Our pose predictions were ranked 3-9 for the HSP90 dataset, depending on the assessment metric. For the MAP4K dataset the ranks are very dispersed and equal to 2-35, depending on the assessment metric, which does not provide any insight into the accuracy of the method. The main success of our pose prediction protocol was the re-scoring stage using the recently developed Convex-PL potential. We make a thorough analysis of our docking predictions made with AutoDock Vina and discuss the effect of the choice of rigid receptor templates, the number of flexible residues in the binding pocket, the binding pocket size, and the benefits of re-scoring. However, the main challenge was to predict experimentally determined binding affinities for two blind test sets. Our affinity prediction model consisted of two terms, a pairwise-additive enthalpy and a non-pairwise-additive entropy. We trained the free parameters of the model with a regularized regression using affinity and structural data from the PDBBind database. Our model performed very well on the training set but failed on the two test sets. We explain the drawbacks and pitfalls of our model, in particular in terms of the relative coverage of the test set by the training set and the dynamical properties missed by crystal structures, and discuss different routes to improve it.
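
    A minimal numpy sketch of the parameter-estimation step described above, with synthetic data: affinities are modelled as a linear combination of pairwise enthalpy-like features plus an entropy-like descriptor, and the weights are fitted by ridge (L2-regularized) regression. The feature construction is hypothetical and does not reproduce the paper's descriptors or the PDBBind training set.

      import numpy as np

      def ridge_fit(X, y, lam=1.0):
          """Closed-form L2-regularized least squares: w = (X^T X + lam*I)^-1 X^T y."""
          return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

      rng = np.random.default_rng(3)
      n_complexes, n_pair_features = 500, 20

      # Hypothetical features: pairwise contact counts (enthalpy-like part) plus one
      # ligand-flexibility descriptor (entropy-like part).
      pair_feats = rng.poisson(3.0, size=(n_complexes, n_pair_features)).astype(float)
      entropy_feat = rng.normal(size=(n_complexes, 1))
      X = np.hstack([pair_feats, entropy_feat])
      true_w = rng.normal(size=X.shape[1])
      y = X @ true_w + 0.5 * rng.normal(size=n_complexes)    # "experimental" affinities

      # Fit on a training split and check error on a held-out split (the blind-test analogue).
      w = ridge_fit(X[:400], y[:400], lam=10.0)
      rmse = np.sqrt(np.mean((X[400:] @ w - y[400:]) ** 2))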

  7. A joint data assimilation system (Tan-Tracker) to simultaneously estimate surface CO2 fluxes and 3-D atmospheric CO2 concentrations from observations

    NASA Astrophysics Data System (ADS)

    Tian, X.; Xie, Z.; Liu, Y.; Cai, Z.; Fu, Y.; Zhang, H.; Feng, L.

    2013-09-01

    To quantitatively estimate CO2 surface fluxes (CFs) from atmospheric observations, a joint data assimilation system ("Tan-Tracker") is developed by incorporating a joint data assimilation framework into the GEOS-Chem atmospheric transport model. In Tan-Tracker, we choose an identity operator as the CF dynamical model to describe the CFs' evolution, which constitutes an augmented dynamical model together with the GEOS-Chem atmospheric transport model. In this case, the large-scale vector made up of CFs and CO2 concentrations is taken as the prognostic variable for the augmented dynamical model. And thus both CO2 concentrations and CFs are jointly assimilated by using the atmospheric observations (e.g., the in-situ observations or satellite measurements). In contrast, in the traditional joint data assimilation frameworks, CFs are usually treated as the model parameters and form a state-parameter augmented vector jointly with CO2 concentrations. The absence of a CF dynamical model will certainly result in a large waste of observed information since any useful information for CFs' improvement achieved by the current data assimilation procedure could not be used in the next assimilation cycle. Observing system simulation experiments (OSSEs) are carefully designed to evaluate the Tan-Tracker system in comparison to its simplified version (referred to as TT-S) with only CFs taken as the prognostic variables. It is found that our Tan-Tracker system is capable of outperforming TT-S with higher assimilation precision for both CO2 concentrations and CO2 fluxes, mainly due to the simultaneous assimilation of CO2 concentrations and CFs in our Tan-Tracker data assimilation system.

  8. Estimating the relationship between urban 3D morphology and land surface temperature using airborne LiDAR and Landsat-8 Thermal Infrared Sensor data

    NASA Astrophysics Data System (ADS)

    Lee, J. H.

    2015-12-01

    Urban forests are known for mitigating the urban heat island effect and heat-related health issues by reducing air and surface temperature. Beyond the amount of canopy area, however, little is known about what kinds of spatial patterns and structures of urban forests best contribute to reducing temperatures and mitigating urban heat effects. Previous studies attempted to find the relationship between land surface temperature and various indicators of vegetation abundance using remotely sensed data, but the majority of those studies relied on two-dimensional, area-based metrics such as tree canopy cover, impervious surface area, and the Normalized Difference Vegetation Index. This study investigates the relationship between the three-dimensional spatial structure of urban forests and urban surface temperature, focusing on vertical variance. We use a Landsat-8 Thermal Infrared Sensor image (acquired on July 24, 2014) to estimate the land surface temperature of the City of Sacramento, CA. We extract the height and volume of urban features (both vegetation and non-vegetation) using airborne LiDAR (Light Detection and Ranging) and high spatial resolution aerial imagery. Using regression analysis, we apply an empirical approach to find the relationship between the land surface temperature and different sets of variables, which describe spatial patterns and structures of various urban features including trees. Our analysis demonstrates that incorporating vertical variance parameters improves the accuracy of the model. The results of the study suggest urban tree planting is an effective and viable solution to mitigate urban heat by increasing the variance of the urban surface as well as the evaporative cooling effect.

  9. Learning 3D Object Templates by Quantizing Geometry and Appearance Spaces.

    PubMed

    Hu, Wenze; Zhu, Song-Chun

    2015-06-01

    While 3D object-centered shape-based models are appealing in comparison with 2D viewer-centered appearance-based models for their lower model complexities and potentially better view generalizabilities, the learning and inference of 3D models has been much less studied in the recent literature due to two factors: i) the enormous complexities of 3D shapes in geometric space; and ii) the gap between 3D shapes and their appearances in images. This paper aims at tackling the two problems by studying an And-Or Tree (AoT) representation that consists of two parts: i) a geometry-AoT quantizing the geometry space, i.e. the possible compositions of 3D volumetric parts and 2D surfaces within the volumes; and ii) an appearance-AoT quantizing the appearance space, i.e. the appearance variations of those shapes in different views. In this AoT, an And-node decomposes an entity into constituent parts, and an Or-node represents alternative ways of decomposition. Thus it can express a combinatorial number of geometry and appearance configurations through small dictionaries of 3D shape primitives and 2D image primitives. In the quantized space, the problem of learning a 3D object template is transformed to a structure search problem which can be efficiently solved by a dynamic programming algorithm that maximizes the information gain. We focus on learning 3D car templates from the AoT and collect a new car dataset featuring more diverse views. The learned car templates integrate both the shape-based model and the appearance-based model to combine the benefits of both. In experiments, we show three aspects: 1) the AoT is more efficient than the frequently used octree method in space representation; 2) the learned 3D car template matches state-of-the-art performance on car detection and pose estimation in a public multi-view car dataset; and 3) in our new dataset, the learned 3D template solves the joint task of simultaneous object detection, pose/view estimation, and part

  10. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore the 3D compression space from a different angle, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
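
    A minimal numpy sketch of the depth-channel part of the second scheme: run-length encoding of a piecewise-constant depth row, whose output would then feed a Huffman coder (omitted here); the depth values are hypothetical.

      import numpy as np

      def rle_encode(row):
          """Run-length encode a 1-D array into (value, run_length) pairs."""
          row = np.asarray(row)
          starts = np.concatenate(([0], np.flatnonzero(np.diff(row)) + 1))
          lengths = np.diff(np.concatenate((starts, [row.size])))
          return list(zip(row[starts].tolist(), lengths.tolist()))

      def rle_decode(pairs):
          return np.concatenate([np.full(n, v) for v, n in pairs])

      # Hypothetical 16-bit depth row (mm): large flat runs compress well.
      depth_row = np.array([1200] * 50 + [1450] * 30 + [0] * 20, dtype=np.uint16)
      encoded = rle_encode(depth_row)                  # [(1200, 50), (1450, 30), (0, 20)]
      assert np.array_equal(rle_decode(encoded), depth_row)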

  11. Combining depth and gray images for fast 3D object recognition

    NASA Astrophysics Data System (ADS)

    Pan, Wang; Zhu, Feng; Hao, Yingming

    2016-10-01

    Reliable and stable visual perception systems are needed for humanoid robotic assistants to perform complex grasping and manipulation tasks. The recognition of the object and its precise 6D pose are required. This paper addresses the challenge of detecting and positioning a textureless known object by estimating its complete 6D pose in cluttered scenes. A 3D perception system is proposed in this paper, which can robustly recognize CAD models in cluttered scenes for the purpose of grasping with a mobile manipulator. Our approach uses a powerful combination of two different camera technologies, Time-Of-Flight (TOF) and RGB, to segment the scene and extract objects. The depth image and gray image are combined to recognize instances of a 3D object in the world and estimate their 3D poses. The full pose estimation process is based on depth image segmentation and an efficient shape-based matching. At first, the depth image is used to separate the supporting plane of the objects from the cluttered background; thus, cluttered backgrounds are circumvented and the search space is extremely reduced. A hierarchical model based on the geometry of an a priori CAD model of the object is generated in the offline stage. Then, using the hierarchical model, we perform a shape-based matching in 2D gray images. Finally, we validate the proposed method in a number of experiments. The results show that utilizing depth and gray images together can meet the demands of a time-critical application and reduce the error rate of object recognition significantly.

  12. Auto convergence for stereoscopic 3D cameras

    NASA Astrophysics Data System (ADS)

    Zhang, Buyue; Kothandaraman, Sreenivas; Batur, Aziz Umit

    2012-03-01

    Viewing comfort is an important concern for 3-D capable consumer electronics such as 3-D cameras and TVs. Consumer generated content is typically viewed at a close distance which makes the vergence-accommodation conflict particularly pronounced, causing discomfort and eye fatigue. In this paper, we present a Stereo Auto Convergence (SAC) algorithm for consumer 3-D cameras that reduces the vergence-accommodation conflict on the 3-D display by adjusting the depth of the scene automatically. Our algorithm processes stereo video in real time and shifts each stereo frame horizontally by an appropriate amount to converge on the chosen object in that frame. The algorithm starts by estimating disparities between the left and right image pairs using correlations of the vertical projections of the image data. The estimated disparities are then analyzed by the algorithm to select a point of convergence. The current and target disparities of the chosen convergence point determine how much horizontal shift is needed. A disparity safety check is then performed to determine whether or not the maximum and minimum disparity limits would be exceeded after auto convergence. If the limits would be exceeded, further adjustments are made to satisfy the safety limits. Finally, desired convergence is achieved by shifting the left and the right frames accordingly. Our algorithm runs in real time at 30 fps on a TI OMAP4 processor. It is tested using an OMAP4 embedded prototype stereo 3-D camera. It significantly improves 3-D viewing comfort.
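
    A minimal numpy sketch of the disparity-estimation step described above: each image is collapsed into a vertical projection (one value per column) and the horizontal shift that best correlates the two projections is taken as the dominant disparity; convergence-point selection and the safety checks are omitted, and the stereo pair is synthetic.

      import numpy as np

      def vertical_projection(img):
          """Collapse an image to one value per column."""
          return img.sum(axis=0).astype(float)

      def estimate_disparity(left, right, max_shift=40):
          """Horizontal shift of the right projection that best matches the left one."""
          pl, pr = vertical_projection(left), vertical_projection(right)
          pl -= pl.mean()
          pr -= pr.mean()
          best, best_score = 0, -np.inf
          for d in range(-max_shift, max_shift + 1):
              score = np.sum(pl[max_shift:-max_shift] * np.roll(pr, d)[max_shift:-max_shift])
              if score > best_score:
                  best, best_score = d, score
          return best

      # Hypothetical stereo pair: the right view is the left view shifted by 7 columns.
      rng = np.random.default_rng(4)
      left = rng.random((120, 160))
      right = np.roll(left, 7, axis=1)
      disparity = estimate_disparity(left, right)      # recovers the 7-pixel shift (as -7 here)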

  13. Design of monocular multiview stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2001-06-01

    A 3D head mounted display (HMD) system is useful for constructing a virtual space. The authors have developed a 3D HMD system using a monocular stereoscopic display. This paper shows that the 3D vision system, using the monocular stereoscopic display and a capturing camera, builds a 3D virtual space for tele-manipulation using a captured real 3D image. In this paper, we propose the monocular stereoscopic 3D display and capturing camera for a tele-manipulation system. In addition, we describe the result of depth estimation using the multi-focus retinal images.

  14. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, computation efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced a 3-mm distance error and a 2.5-degree orientation error on average. The sources of these errors come from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.

  15. Rubber Impact on 3D Textile Composites

    NASA Astrophysics Data System (ADS)

    Heimbs, Sebastian; Van Den Broucke, Björn; Duplessis Kergomard, Yann; Dau, Frederic; Malherbe, Benoit

    2012-06-01

    A low velocity impact study of aircraft tire rubber on 3D textile-reinforced composite plates was performed experimentally and numerically. In contrast to regular unidirectional composite laminates, no delaminations occur in such a 3D textile composite. Yarn decohesions, matrix cracks and yarn ruptures have been identified as the major damage mechanisms under impact load. An increase in the number of 3D warp yarns is proposed to improve the impact damage resistance. The characteristic of a rubber impact is the high amount of elastic energy stored in the impactor during impact, which was more than 90% of the initial kinetic energy. This large geometrical deformation of the rubber during impact leads to a less localised loading of the target structure and poses great challenges for the numerical modelling. A hyperelastic Mooney-Rivlin constitutive law was used in Abaqus/Explicit based on a step-by-step validation with static rubber compression tests and low velocity impact tests on aluminium plates. Simulation models of the textile weave were developed on the meso- and macro-scale. The final correlation between impact simulation results on 3D textile-reinforced composite plates and impact test data was promising, highlighting the potential of such numerical simulation tools.

  16. Accurate 3D kinematic measurement of temporomandibular joint using X-ray fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Yamazaki, Takaharu; Matsumoto, Akiko; Sugamoto, Kazuomi; Matsumoto, Ken; Kakimoto, Naoya; Yura, Yoshiaki

    2014-04-01

    Accurate measurement and analysis of the 3D kinematics of the temporomandibular joint (TMJ) is very important for assisting clinical diagnosis and treatment in prosthodontics, orthodontics, and oral surgery. This study presents a new 3D kinematic measurement technique for the TMJ using X-ray fluoroscopic images, which can easily obtain TMJ kinematic data in natural motion. In vivo kinematics of the TMJ (maxilla and mandibular bone) is determined using a feature-based 2D/3D registration, which uses bead silhouettes on fluoroscopic images and 3D surface bone models with beads. The 3D surface models of the maxilla and mandibular bone with beads were created from CT scan data of the subject using a mouthpiece with seven strategically placed beads. In order to validate the accuracy of pose estimation for the maxilla and mandibular bone, a computer simulation test was performed using five patterns of synthetic tantalum bead silhouette images. In the clinical application, dynamic movement during jaw opening and closing was recorded, and the relative pose of the mandibular bone with respect to the maxilla bone was determined. The results of the computer simulation test showed that the root mean square errors were sufficiently smaller than 1.0 mm and 1.0 degree. In the clinical application, during jaw opening from 0.0 to 36.8 degrees of rotation, the mandibular condyle exhibited 19.8 mm of anterior sliding relative to the maxillary articular fossa, and these measurement values were clinically similar to previous reports. Consequently, the present technique is thought to be suitable for 3D TMJ kinematic analysis.

  17. SB3D User Manual, Santa Barbara 3D Radiative Transfer Model

    SciTech Connect

    O'Hirok, William

    1999-01-01

    SB3D is a three-dimensional atmospheric and oceanic radiative transfer model for the Solar spectrum. The microphysics employed in the model are the same as used in the model SBDART. It is assumed that the user of SB3D is familiar with SBDART and IDL. SB3D differs from SBDART in that computations are conducted on media in three-dimensions rather than a single column (i.e. plane-parallel), and a stochastic method (Monte Carlo) is employed instead of a numerical approach (Discrete Ordinates) for estimating a solution to the radiative transfer equation. Because of these two differences between SB3D and SBDART, the input and running of SB3D is more unwieldy and requires compromises between model performance and computational expense. Hence, there is no one correct method for running the model and the user must develop a sense to the proper input and configuration of the model.

  18. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  19. On detailed 3D reconstruction of large indoor environments

    NASA Astrophysics Data System (ADS)

    Bondarev, Egor

    2015-03-01

    In this paper we present techniques for highly detailed 3D reconstruction of extra large indoor environments. We discuss the benefits and drawbacks of low-range, far-range and hybrid sensing and reconstruction approaches. The proposed techniques for low-range and hybrid reconstruction, enabling a reconstruction density of 125 points/cm^3 on large 100,000 m^3 models, are presented in detail. The techniques tackle the core challenges for the above requirements, such as multi-modal data fusion (fusion of LIDAR data with Kinect data), accurate sensor pose estimation, high-density scanning and depth data noise filtering. Other important aspects for extra large 3D indoor reconstruction are point cloud decimation and real-time rendering. In this paper, we present a method for planar-based point cloud decimation, allowing for reduction of point cloud size by 80-95%. Besides this, we introduce a method for online rendering of extra large point clouds enabling real-time visualization of huge cloud spaces in conventional web browsers.
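
    A minimal numpy sketch of the planar-based decimation idea mentioned above, under simplifying assumptions: a single plane is fitted to the cloud by SVD, and dense in-plane points are subsampled while off-plane detail points are kept; a real implementation would segment multiple planar regions and preserve their boundaries.

      import numpy as np

      def fit_plane(points):
          """Least-squares plane through points: returns (centroid, unit normal)."""
          c = points.mean(axis=0)
          _, _, vt = np.linalg.svd(points - c, full_matrices=False)
          return c, vt[-1]                     # direction of smallest variance

      def decimate_planar(points, dist_tol=0.01, keep_ratio=0.05, seed=0):
          """Keep all off-plane points, but only a sparse sample of plane inliers."""
          c, n = fit_plane(points)
          inlier = np.abs((points - c) @ n) < dist_tol
          keep = ~inlier
          idx = np.flatnonzero(inlier)
          rng = np.random.default_rng(seed)
          keep[rng.choice(idx, size=max(1, int(keep_ratio * idx.size)), replace=False)] = True
          return points[keep]

      # Hypothetical scan: a dense planar wall plus some clutter in front of it (metres).
      rng = np.random.default_rng(5)
      wall = np.column_stack([rng.uniform(0, 5, 20000),
                              rng.uniform(0, 3, 20000),
                              0.002 * rng.normal(size=20000)])
      clutter = rng.uniform([1, 1, 0.2], [2, 2, 1.0], size=(500, 3))
      cloud = np.vstack([wall, clutter])
      slim = decimate_planar(cloud)            # roughly 90% fewer points for this cloud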

  20. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  1. Spherical 3D isotropic wavelets

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2012-04-01

    Context. Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the SFB decomposition of a 3D field, and to accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and, as a toy application, apply a denoising algorithm in wavelet space to the Virgo large box cosmological simulations, finding that we can successfully remove noise without much loss to the large scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html

  2. Action and gait recognition from recovered 3-D human joints.

    PubMed

    Gu, Junxia; Ding, Xiaoqing; Wang, Shengjin; Wu, Youshou

    2010-08-01

    A common viewpoint-free framework that fuses pose recovery and classification for action and gait recognition is presented in this paper. First, a markerless pose recovery method is adopted to automatically capture the 3-D human joint and pose parameter sequences from volume data. Second, multiple configuration features (combination of joints) and movement features (position, orientation, and height of the body) are extracted from the recovered 3-D human joint and pose parameter sequences. A hidden Markov model (HMM) and an exemplar-based HMM are then used to model the movement features and configuration features, respectively. Finally, actions are classified by a hierarchical classifier that fuses the movement features and the configuration features, and persons are recognized from their gait sequences with the configuration features. The effectiveness of the proposed approach is demonstrated with experiments on the Institut National de Recherche en Informatique et Automatique Xmas Motion Acquisition Sequences data set.
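
    A minimal numpy sketch of the classification step, assuming discrete (quantized) configuration features: each action class has its own hidden Markov model, a test sequence is scored with the scaled forward algorithm, and the class with the highest log-likelihood wins. The two tiny hand-set models are hypothetical stand-ins for models learned from recovered 3-D joints.

      import numpy as np

      def forward_loglik(obs, pi, A, B):
          """Scaled forward algorithm: log P(obs) under a discrete-emission HMM
          (pi: initial state probs, A: transition matrix, B: emission probs per state)."""
          alpha = pi * B[:, obs[0]]
          loglik = np.log(alpha.sum())
          alpha /= alpha.sum()
          for o in obs[1:]:
              alpha = (alpha @ A) * B[:, o]
              loglik += np.log(alpha.sum())
              alpha /= alpha.sum()
          return loglik

      # Two hypothetical 2-state HMMs over 3 quantized pose symbols ("walk" vs "wave").
      pi = np.array([0.6, 0.4])
      A = np.array([[0.8, 0.2],
                    [0.3, 0.7]])
      B_walk = np.array([[0.7, 0.2, 0.1],
                         [0.1, 0.2, 0.7]])
      B_wave = np.array([[0.1, 0.8, 0.1],
                         [0.2, 0.6, 0.2]])
      models = {"walk": (pi, A, B_walk), "wave": (pi, A, B_wave)}

      test_sequence = [0, 0, 2, 2, 0, 2, 0, 0]        # quantized configuration features
      scores = {name: forward_loglik(test_sequence, *m) for name, m in models.items()}
      predicted_action = max(scores, key=scores.get)   # "walk" for this sequence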

  3. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  4. Segmentation of densely populated cell nuclei from confocal image stacks using 3D non-parametric shape priors.

    PubMed

    Ong, Lee-Ling S; Wang, Mengmeng; Dauwels, Justin; Asada, H Harry

    2014-01-01

    An approach to jointly estimate 3D shapes and poses of stained nuclei from confocal microscopy images, using statistical prior information, is presented. Extracting nuclei boundaries from our experimental images of cell migration is challenging due to clustered nuclei and variations in their shapes. This issue is formulated as a maximum a posteriori estimation problem. By incorporating statistical prior models of 3D nuclei shapes into level set functions, the active contour evolution applied to the images is constrained. A 3D alignment algorithm is developed to build the training databases and to match contours obtained from the images to them. To address the issue of aligning the model over multiple clustered nuclei, a watershed-like technique is used to detect and separate clustered regions prior to active contour evolution. Our method is tested on confocal images of endothelial cells in microfluidic devices and compared with existing approaches.

  5. A Bayesian framework for human body pose tracking from depth image sequences.

    PubMed

    Zhu, Youding; Fujimura, Kikuo

    2010-01-01

    This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty to recover from tracking failure. Human body poses could be estimated through model fitting using dense correspondences between depth data and an articulated human model (local optimization method). Although it usually achieves a high accuracy due to dense correspondences, it may fail to recover from tracking failure. Alternately, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this method (key-point based method) is robust and recovers from tracking failure, its pose estimation accuracy depends solely on image-based localization accuracy of key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by methods based on key-points and local optimization. Experimental results are shown and performance comparison is presented to demonstrate the effectiveness of the proposed approach.
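
    A minimal numpy sketch of the fusion idea under a Gaussian assumption: the key-point-based estimate and the local-optimization estimate of a joint position are combined by weighting each with its inverse covariance (precision); the paper's framework is richer (temporal tracking and failure recovery), and the numbers below are hypothetical.

      import numpy as np

      def fuse_gaussian(mu_a, cov_a, mu_b, cov_b):
          """Precision-weighted fusion of two Gaussian estimates of the same quantity."""
          prec_a, prec_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
          cov = np.linalg.inv(prec_a + prec_b)
          mu = cov @ (prec_a @ mu_a + prec_b @ mu_b)
          return mu, cov

      # Hypothetical 3D position (metres) of one joint, e.g. the left wrist:
      kp_mu = np.array([0.42, 1.10, 2.05])             # key-point based detector
      kp_cov = np.diag([0.02, 0.02, 0.05])
      lo_mu = np.array([0.45, 1.07, 2.00])             # local optimization (model fitting)
      lo_cov = np.diag([0.005, 0.005, 0.01])
      fused_mu, fused_cov = fuse_gaussian(kp_mu, kp_cov, lo_mu, lo_cov)
      # The fused estimate sits closer to the local-optimization result, which is more certain.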

  6. SACR ADVance 3-D Cartesian Cloud Cover (SACR-ADV-3D3C) product

    DOE Data Explorer

    Meng Wang, Tami Toto, Eugene Clothiaux, Katia Lamer, Mariko Oue

    2017-03-08

    SACR-ADV-3D3C remaps the outputs of SACRCORR for cross-wind range-height indicator (CW-RHI) scans to a Cartesian grid and reports reflectivity CFAD and best estimate domain averaged cloud fraction. The final output is a single NetCDF file containing all aforementioned corrected radar moments remapped on a 3-D Cartesian grid, the SACR reflectivity CFAD, a profile of best estimate cloud fraction, a profile of maximum observable x-domain size (xmax), a profile time to horizontal distance estimate and a profile of minimum observable reflectivity (dBZmin).

  7. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  8. 3D Buckligami: Digital Matter

    NASA Astrophysics Data System (ADS)

    van Hecke, Martin; de Reus, Koen; Florijn, Bastiaan; Coulais, Corentin

    2014-03-01

    We present a class of elastic structures which exhibit collective buckling in 3D, and we create these by a 3D printing/moulding technique. Our structures consist of a cubic lattice of anisotropic unit cells, and we show that their mechanical properties are programmable via the orientation of these unit cells.

  9. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  10. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  11. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  12. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  13. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  14. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  15. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  16. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time remaining robustly matchable to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]).
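
    The role of the added viewpoint variable can be illustrated with a toy linear detector that scores every location under each discrete viewpoint bin and keeps the best joint hypothesis. This is only a schematic sketch with assumed per-viewpoint templates and a dense feature map, not the actual deformable part model.

    # Toy sketch: joint object localization and viewpoint estimation with one
    # linear template per discrete viewpoint bin (illustrative only).
    import numpy as np

    def score_with_viewpoint(features, templates):
        """features: (H, W, D) feature map; templates: (V, D) per-viewpoint weights."""
        scores = np.einsum('hwd,vd->hwv', features, templates)
        best = np.unravel_index(np.argmax(scores), scores.shape)
        return scores[best], (best[0], best[1]), best[2]   # score, location, viewpoint bin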

  17. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
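
    PLOT3D computes its 74 functions internally, but the standard compressible-flow relation behind several of them follows directly from the five stored variables. The sketch below derives static pressure for an ideal gas from those variables; the gamma value and function name are assumptions for illustration and are not part of PLOT3D.

    # Sketch: static pressure from the conserved variables stored in a PLOT3D
    # solution (q) file per grid point: density, x/y/z momentum and total
    # (stagnation) energy, assuming an ideal gas with ratio of specific heats gamma.
    import numpy as np

    def pressure_from_q(rho, rho_u, rho_v, rho_w, energy, gamma=1.4):
        kinetic = 0.5 * (rho_u**2 + rho_v**2 + rho_w**2) / rho
        return (gamma - 1.0) * (energy - kinetic)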

  18. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  19. Tracking earthquake source evolution in 3-D

    NASA Astrophysics Data System (ADS)

    Kennett, B. L. N.; Gorbatov, A.; Spiliopoulos, S.

    2014-08-01

    Starting from the hypocentre, the point of initiation of seismic energy, we seek to estimate the subsequent trajectory of the points of emission of high-frequency energy in 3-D, which we term the 'evocentres'. We track these evocentres as a function of time by energy stacking for putative points on a 3-D grid around the hypocentre that is expanded as time progresses, selecting the location of maximum energy release as a function of time. The spatial resolution in the neighbourhood of a target point can be simply estimated by spatial mapping using the properties of isochrons from the stations. The mapping of a seismogram segment to space is by inverse slowness, and thus more distant stations have a broader spatial contribution. As in hypocentral estimation, the inclusion of a wide azimuthal distribution of stations significantly enhances 3-D capability. We illustrate this approach to tracking source evolution in 3-D by considering two major earthquakes, the 2007 Mw 8.1 Solomon Islands event that ruptured across a plate boundary and the 2013 Mw 8.3 event 610 km beneath the Sea of Okhotsk. In each case we are able to provide estimates of the evolution of high-frequency energy that tally well with alternative schemes, but also to provide information on the 3-D characteristics that is not available from backprojection from distant networks. We are able to demonstrate that the major characteristics of event rupture can be captured using just a few azimuthally distributed stations, which opens the opportunity for the approach to be used in a rapid mode immediately after a major event to provide guidance for, for example, tsunami warning for megathrust events.
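
    The energy-stacking idea can be sketched as follows: for a candidate origin time, each grid point accumulates station envelope energy read off at its predicted arrival times, and the point with the largest stack is taken as the evocentre at that time. The travel-time model and variable names below are assumptions made purely for illustration.

    # Conceptual sketch of evocentre tracking by energy stacking (illustrative).
    import numpy as np

    def stack_energy(envelopes, dt, travel_times, origin_time):
        """envelopes: (n_sta, n_samples) envelope traces sampled at dt.
        travel_times: (n_sta, n_grid) predicted travel times to each grid point.
        Returns (best grid index, stacked energy) for this origin time."""
        n_sta, n_samp = envelopes.shape
        idx = np.round((origin_time + travel_times) / dt).astype(int)
        idx = np.clip(idx, 0, n_samp - 1)
        stack = envelopes[np.arange(n_sta)[:, None], idx].sum(axis=0)
        return int(np.argmax(stack)), float(stack.max())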

  20. Quasi 3D dosimetry (EPID, conventional 2D/3D detector matrices)

    NASA Astrophysics Data System (ADS)

    Bäck, A.

    2015-01-01

    Patient-specific pretreatment measurement for IMRT and VMAT QA should preferably give information with a high resolution in 3D. The ability to distinguish complex treatment plans, i.e. treatment plans with a difference between measured and calculated dose distributions that exceeds a specified tolerance, puts high demands on the dosimetry system used for the pretreatment measurements, and the results of the measurement evaluation need a clinical interpretation. There are a number of commercial dosimetry systems designed for pretreatment IMRT QA measurements. 2D arrays such as MapCHECK® (Sun Nuclear), MatriXXEvolution (IBA Dosimetry) and OCTAVIUS® 1500 (PTW), 3D phantoms such as OCTAVIUS® 4D (PTW), ArcCHECK® (Sun Nuclear) and Delta4 (ScandiDos) and software for EPID dosimetry and 3D reconstruction of the dose in the patient geometry such as EPIDoseTM (Sun Nuclear) and Dosimetry CheckTM (Math Resolutions) are available. None of those dosimetry systems can measure the 3D dose distribution with a high resolution (full 3D dose distribution). Those systems can be called quasi 3D dosimetry systems. To be able to estimate the delivered dose in full 3D the user is dependent on a calculation algorithm in the software of the dosimetry system. All the vendors of the dosimetry systems mentioned above provide calculation algorithms to reconstruct a full 3D dose in the patient geometry. This enables analyses of the differences between measured and calculated dose distributions in DVHs of the structures of clinical interest, which facilitates the clinical interpretation and is a promising tool to be used for pretreatment IMRT QA measurements. However, independent validation studies on the accuracy of those algorithms are scarce. Pretreatment IMRT QA using the quasi 3D dosimetry systems mentioned above relies on both the measurement uncertainty and the accuracy of the calculation algorithms. In this article, these quasi 3D dosimetry systems and their use in patient specific pretreatment IMRT
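
    The DVH-based comparison mentioned above reduces to a simple computation once a 3D dose grid and a structure mask are available. The sketch below computes a cumulative dose-volume histogram; array names are assumptions for illustration and this is not part of any of the commercial systems listed.

    # Sketch: cumulative DVH for one structure from a 3D dose grid and a boolean
    # structure mask; each value is the fraction of the structure volume that
    # receives at least the corresponding dose level (illustrative only).
    import numpy as np

    def cumulative_dvh(dose, mask, dose_levels):
        d = dose[mask]
        return np.array([(d >= level).mean() for level in dose_levels])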

  1. Student-Posed Problems

    NASA Astrophysics Data System (ADS)

    Harper, Kathleen A.; Etkina, Eugenia

    2002-10-01

    As part of weekly reports [1], structured journals in which students answer three standard questions each week, they respond to the prompt, "If I were the instructor, what questions would I ask or problems assign to determine if my students understood the material?" An initial analysis of the results shows that some student-generated problems indicate fundamental misunderstandings of basic physical concepts. A further investigation explores the relevance of the problems to the week's material, whether the problems are solvable, and the type of problems (conceptual or calculation-based) written. Also, possible links between various characteristics of the problems and conceptual achievement are being explored. The results of this study spark many more questions for further work. A summary of current findings will be presented, along with its relationship to previous work concerning problem posing [2]. [1] Etkina, E., "Weekly Reports: A Two-Way Feedback Tool," Science Education, 84, 594-605 (2000). [2] Mestre, J.P., "Probing Adults' Conceptual Understanding and Transfer of Learning Via Problem Posing," Journal of Applied Developmental Psychology, 23, 9-50 (2002).

  2. 3D Scan Systems Integration

    DTIC Science & Technology

    2007-11-02

    Only report-documentation fragments are available for this record. The recoverable details are: report date 5 Feb 98; final report for the US Defense Logistics Agency on DDFG-T2/P3: 3D Scan Systems Integration; contract number SPO100-95-D-1014; contractor Ohio University; delivery order #0001, delivery order title "3D Scan Systems" (truncated in the record).

  3. 3D polymer scaffold arrays.

    PubMed

    Simon, Carl G; Yang, Yanyin; Dorsey, Shauna M; Ramalingam, Murugan; Chatterjee, Kaushik

    2011-01-01

    We have developed a combinatorial platform for fabricating tissue scaffold arrays that can be used for screening cell-material interactions. Traditional research involves preparing samples one at a time for characterization and testing. Combinatorial and high-throughput (CHT) methods lower the cost of research by reducing the amount of time and material required for experiments by combining many samples into miniaturized specimens. In order to help accelerate biomaterials research, many new CHT methods have been developed for screening cell-material interactions where materials are presented to cells as a 2D film or surface. However, biomaterials are frequently used to fabricate 3D scaffolds, cells exist in vivo in a 3D environment and cells cultured in a 3D environment in vitro typically behave more physiologically than those cultured on a 2D surface. Thus, we have developed a platform for fabricating tissue scaffold libraries where biomaterials can be presented to cells in a 3D format.

  4. Measurement error analysis of the 3D four-wheel aligner

    NASA Astrophysics Data System (ADS)

    Zhao, Qiancheng; Yang, Tianlong; Huang, Dongzhao; Ding, Xun

    2013-10-01

    Positioning parameters of the four wheels have significant effects on the maneuverability, safety and energy efficiency of automobiles. Addressing this issue, the error factors of the 3D four-wheel aligner that arise in extracting image feature points, calibrating the internal and external parameters of the cameras, calculating positional parameters and measuring target pose are analyzed, based on an elaboration of the structure and measurement principle of the 3D four-wheel aligner as well as of the toe-in and camber of the four wheels, kingpin inclination, caster and other major positional parameters. Technical solutions are then proposed for reducing these error factors, and on this basis a new type of aligner has been developed and marketed; it is highly regarded by customers because its technical indicators meet requirements well.

  5. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3d-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  6. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing that allows one to obtain a solid object from a 3D model realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes that would be quite difficult to produce with dedicated conventional facilities. Because the print is built up by superposing one layer on another, no particular workflow is needed; it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed afterwards. A further issue is the resolution of the printed material, which is expressed in terms of layers

  7. A monolithic 3D-0D coupled closed-loop model of the heart and the vascular system: Experiment-based parameter estimation for patient-specific cardiac mechanics.

    PubMed

    Hirschvogel, Marc; Bassilious, Marina; Jagschies, Lasse; Wildhirt, Stephen M; Gee, Michael W

    2016-10-15

    A model for patient-specific cardiac mechanics simulation is introduced, incorporating a 3-dimensional finite element model of the ventricular part of the heart, which is coupled to a reduced-order 0-dimensional closed-loop vascular system, heart valve, and atrial chamber model. The ventricles are modeled by a nonlinear orthotropic passive material law. The electrical activation is mimicked by a prescribed parameterized active stress acting along a generic muscle fiber orientation. Our activation function is constructed such that the start of ventricular contraction and relaxation as well as the active stress curve's slope are parameterized. The imaging-based patient-specific ventricular model is prestressed to low end-diastolic pressure to account for the imaged, stressed configuration. Visco-elastic Robin boundary conditions are applied to the heart base and the epicardium to account for the embedding surrounding. We treat the 3D solid-0D fluid interaction as a strongly coupled monolithic problem, which is consistently linearized with respect to 3D solid and 0D fluid model variables to allow for a Newton-type solution procedure. The resulting coupled linear system of equations is solved iteratively in every Newton step using 2  ×  2 physics-based block preconditioning. Furthermore, we present novel efficient strategies for calibrating active contractile and vascular resistance parameters to experimental left ventricular pressure and stroke volume data gained in porcine experiments. Two exemplary states of cardiovascular condition are considered, namely, after application of vasodilatory beta blockers (BETA) and after injection of vasoconstrictive phenylephrine (PHEN). The parameter calibration to the specific individual and cardiovascular state at hand is performed using a 2-stage nonlinear multilevel method that uses a low-fidelity heart model to compute a parameter correction for the high-fidelity model optimization problem. We discuss 2 different low
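
    The reduced-order side of such a model can be illustrated, in a drastically simplified form, by a lumped time-varying-elastance ventricle ejecting through an ideal valve into a two-element windkessel. The sketch below is not the paper's monolithic 3D-0D scheme, and every parameter value and name is an illustrative assumption.

    # Minimal 0D closed-loop sketch: time-varying-elastance ventricle, ideal
    # valves, two-element windkessel; forward-Euler time stepping (illustrative).
    import numpy as np

    def simulate_0d(T=0.8, n_steps=8000, E_max=2.0, E_min=0.06, V0=10.0,
                    R_art=1.0, C_art=1.5, R_valve=0.01, R_fill=1.0, p_fill=8.0):
        dt = T / n_steps
        V, p_art = 120.0, 70.0  # ventricular volume (ml), arterial pressure (mmHg)
        for k in range(n_steps):
            t = k * dt
            act = np.sin(np.pi * min(t / (0.3 * T), 1.0)) ** 2  # simple systolic activation pulse
            E = E_min + (E_max - E_min) * act                   # elastance (mmHg/ml)
            p_v = E * (V - V0)                                  # ventricular pressure
            q_out = max(p_v - p_art, 0.0) / R_valve             # aortic valve as an ideal diode
            q_in = max(p_fill - p_v, 0.0) / R_fill              # mitral inflow from a fixed preload
            V += dt * (q_in - q_out)
            p_art += dt * (q_out - p_art / R_art) / C_art       # windkessel arterial compartment
        return V, p_art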

  8. Face recognition based on matching of local features on 3D dynamic range sequences

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, B. A.; Kober, Vitaly

    2016-09-01

    3D face recognition has attracted attention in the last decade due to improvements in 3D image acquisition technology and its wide range of applications, such as access control, surveillance, human-computer interaction and biometric identification systems. Most research on 3D face recognition has focused on the analysis of 3D still data. In this work, a new method for face recognition using dynamic 3D range sequences is proposed. Experimental results are presented and discussed using 3D sequences in the presence of pose variation. The performance of the proposed method is compared with that of conventional descriptor-based face recognition algorithms.

  9. Exemplar-based human action pose correction.

    PubMed

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen

    2014-07-01

    The launch of the Xbox Kinect created a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. Accurate estimation of human pose from the depth image is a critical step in virtually all of them. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method that learns to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
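
    A stripped-down version of the exemplar idea is a nearest-neighbour lookup: find the stored exemplar whose initially estimated pose is closest to the current estimate and apply that exemplar's correction offset. This sketch ignores the learned inhomogeneous bias and pose tags of the paper; all names are illustrative assumptions.

    # Illustrative nearest-neighbour exemplar correction (not the paper's model).
    import numpy as np

    def correct_pose(estimated, exemplar_estimates, exemplar_offsets):
        """estimated: (D,) flattened joints; exemplar_estimates: (N, D);
        exemplar_offsets: (N, D) ground truth minus estimate, stored per exemplar."""
        dists = np.linalg.norm(exemplar_estimates - estimated, axis=1)
        nearest = int(np.argmin(dists))
        return estimated + exemplar_offsets[nearest]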

  10. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  11. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer® have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D® imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye® video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  12. Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking.

    PubMed

    Erdem, Arif Tanju; Ercan, Ali Özer

    2015-02-01

    In a setup where camera measurements are used to estimate 3D egomotion in an extended Kalman filter (EKF) framework, it is well-known that inertial sensors (i.e., accelerometers and gyroscopes) are especially useful when the camera undergoes fast motion. Inertial sensor data can be fused at the EKF with the camera measurements in either the correction stage (as measurement inputs) or the prediction stage (as control inputs). In general, only one type of inertial sensor is employed in the EKF in the literature, or when both are employed they are both fused in the same stage. In this paper, we provide an extensive performance comparison of every possible combination of fusing accelerometer and gyroscope data as control or measurement inputs using the same data set collected at different motion speeds. In particular, we compare the performances of different approaches based on 3D pose errors, in addition to camera reprojection errors commonly found in the literature, which provides further insight into the strengths and weaknesses of different approaches. We show using both simulated and real data that it is always better to fuse both sensors in the measurement stage and that in particular, accelerometer helps more with the 3D position tracking accuracy, whereas gyroscope helps more with the 3D orientation tracking accuracy. We also propose a simulated data generation method, which is beneficial for the design and validation of tracking algorithms involving both camera and inertial measurement unit measurements in general.
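
    The two fusion options discussed above correspond to where an inertial sample enters a standard EKF cycle. The skeleton below shows gyroscope data as a control input in the prediction step and a camera/accelerometer vector as a measurement in the update step; the state transition f, measurement model h, their Jacobians F and H, and the noise covariances Q and R are assumed given and application-specific.

    # Generic EKF skeleton (not the paper's exact filter): inertial data as a
    # control input in prediction, camera data as a measurement in the update.
    import numpy as np

    def ekf_predict(x, P, u_gyro, f, F, Q):
        x_pred = f(x, u_gyro)              # propagate the state with the gyro sample
        F_k = F(x, u_gyro)                 # Jacobian of f w.r.t. the state
        return x_pred, F_k @ P @ F_k.T + Q

    def ekf_update(x, P, z, h, H, R):
        H_k = H(x)
        S = H_k @ P @ H_k.T + R
        K = P @ H_k.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x + K @ (z - h(x))
        P_new = (np.eye(len(x)) - K @ H_k) @ P
        return x_new, P_new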

  13. Macrophage podosomes go 3D.

    PubMed

    Van Goethem, Emeline; Guiet, Romain; Balor, Stéphanie; Charrière, Guillaume M; Poincloux, Renaud; Labrousse, Arnaud; Maridonneau-Parini, Isabelle; Le Cabec, Véronique

    2011-01-01

    Macrophage tissue infiltration is a critical step in the immune response against microorganisms and is also associated with disease progression in chronic inflammation and cancer. Macrophages are constitutively equipped with specialized structures called podosomes dedicated to extracellular matrix (ECM) degradation. We recently reported that these structures play a critical role in trans-matrix mesenchymal migration mode, a protease-dependent mechanism. Podosome molecular components and their ECM-degrading activity have been extensively studied in two dimensions (2D), but yet very little is known about their fate in three-dimensional (3D) environments. Therefore, localization of podosome markers and proteolytic activity were carefully examined in human macrophages performing mesenchymal migration. Using our gelled collagen I 3D matrix model to obligate human macrophages to perform mesenchymal migration, classical podosome markers including talin, paxillin, vinculin, gelsolin, cortactin were found to accumulate at the tip of F-actin-rich cell protrusions together with β1 integrin and CD44 but not β2 integrin. Macrophage proteolytic activity was observed at podosome-like protrusion sites using confocal fluorescence microscopy and electron microscopy. The formation of migration tunnels by macrophages inside the matrix was accomplished by degradation, engulfment and mechanic compaction of the matrix. In addition, videomicroscopy revealed that 3D F-actin-rich protrusions of migrating macrophages were as dynamic as their 2D counterparts. Overall, the specifications of 3D podosomes resembled those of 2D podosome rosettes rather than those of individual podosomes. This observation was further supported by the aspect of 3D podosomes in fibroblasts expressing Hck, a master regulator of podosome rosettes in macrophages. In conclusion, human macrophage podosomes go 3D and take the shape of spherical podosome rosettes when the cells perform mesenchymal migration. This work

  14. 3D Printed Bionic Nanodevices.

    PubMed

    Kong, Yong Lin; Gupta, Maneesh K; Johnson, Blake N; McAlpine, Michael C

    2016-06-01

    The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and 'living' platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with the

  15. Robust pose determination for autonomous docking

    SciTech Connect

    Goddard, J.S.; Jatko, W.B.; Ferrell, R.K.; Gleason, S.S.

    1995-12-31

    This paper describes current work at the Oak Ridge National Laboratory to develop a robotic vision system capable of recognizing designated objects by their intrinsic geometry. This method, based on single camera vision, combines point features and a model-based technique using geometric feature matching for the pose calculation. In this approach, 2-D point features are connected into higher-order shapes and then matched with corresponding features of the model. Pose estimates are made using a closed-form point solution based on model features of four coplanar points. Rotations are represented by quaternions that simplify the calculations in determining the least squares solution for the coordinate transformation. This pose determination method including image acquisition, feature extraction, feature correspondence, and pose calculation has been implemented on a real-time system using a standard camera and image processing hardware. Experimental results are given for relative error measurements.
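
    The paper's closed-form solution recovers pose from the 2D projections of four coplanar model points using quaternions. As a loosely related illustration of a least-squares rigid transformation, the sketch below estimates R and t from corresponding 3D point sets via SVD (the Kabsch solution, which is equivalent to the quaternion closed form for this 3D-3D case); variable names are assumptions for illustration.

    # Least-squares rigid transform (R, t) mapping model points onto observed
    # points via SVD; illustrative, not the paper's single-camera solution.
    import numpy as np

    def rigid_transform(model_pts, observed_pts):
        """model_pts, observed_pts: (N, 3) corresponding 3D points."""
        mu_m, mu_o = model_pts.mean(axis=0), observed_pts.mean(axis=0)
        H = (model_pts - mu_m).T @ (observed_pts - mu_o)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
        R = Vt.T @ D @ U.T
        t = mu_o - R @ mu_m
        return R, t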

  16. Pose and Wind Estimation for Autonomous Parafoils

    DTIC Science & Technology

    2014-09-01


  17. Petal, terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at the lower area of this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. The metallic object at lower right is part of the lander's low-gain antenna. This image is part of a 3D 'monster


  18. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3