Science.gov

Sample records for 3d gaze estimation

  1. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as a control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human-machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with performance comparable to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits/s, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs, our system yields effective real-time closed-loop control of devices (10 ms latency) after just ten minutes of training, which we demonstrate through a novel BMI benchmark: the control of the video arcade game 'Pong'.
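
    A common way to turn the two eyes' directions into a 3D point of gaze, as such binocular trackers do, is to triangulate the two gaze rays and take the midpoint of their closest approach. A minimal sketch of that triangulation, with illustrative eye positions and target values that are not taken from the paper:

```python
import numpy as np

def triangulate_gaze(origin_l, dir_l, origin_r, dir_r):
    """Midpoint of the shortest segment between the two gaze rays.

    Each ray is given by an eye-centre origin and a gaze direction;
    the midpoint is returned as the estimated 3D point of gaze.
    """
    d1 = dir_l / np.linalg.norm(dir_l)
    d2 = dir_r / np.linalg.norm(dir_r)
    w0 = origin_l - origin_r
    b = d1 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = 1.0 - b * b                 # ~0 when the rays are (nearly) parallel
    if abs(denom) < 1e-9:
        t1, t2 = 0.0, e                 # parallel: project one origin onto the other ray
    else:
        t1 = (b * e - d) / denom
        t2 = (e - b * d) / denom
    p1 = origin_l + t1 * d1             # closest point on the left-eye ray
    p2 = origin_r + t2 * d2             # closest point on the right-eye ray
    return 0.5 * (p1 + p2)

# Illustrative example: eyes 6.5 cm apart, both aimed at a point 50 cm ahead.
left, right = np.array([-3.25, 0.0, 0.0]), np.array([3.25, 0.0, 0.0])
target = np.array([0.0, 0.0, 50.0])
print(triangulate_gaze(left, target - left, right, target - right))   # ~[0, 0, 50]
```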

  2. Research on gaze-based interaction to 3D display system

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Moo; Jeon, Kyeong-Won; Kim, Sung-Kyu

    2006-10-01

    Several studies have reported gaze tracking techniques using monocular or stereo cameras. The most widely used gaze estimation techniques are based on PCCR (Pupil Center and Corneal Reflection) and are designed for gaze tracking on 2D screens or images. In this paper, we address gaze-based 3D interaction with stereo images in 3D virtual space. To the best of our knowledge, this paper is the first to address 3D gaze interaction techniques for a 3D display system. Our research goal is the estimation of both gaze direction and gaze depth. Until now, most research has focused only on gaze direction for application to 2D display systems. It should be noted that both gaze direction and gaze depth must be estimated for gaze-based interaction in 3D virtual space. In this paper, we address gaze-based 3D interaction techniques with a glasses-free stereo display. The estimation of gaze direction and gaze depth from both eyes is an important new research topic for gaze-based 3D interaction. We present our approach for the estimation of gaze direction and gaze depth and show experimental results.
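
    Gaze depth in such a binocular setup is commonly recovered from vergence, the angle between the two eyes' lines of sight. A small sketch of the symmetric-vergence case, assuming a known interpupillary distance; this is a textbook simplification, not the authors' algorithm:

```python
import math

def gaze_depth_from_vergence(ipd_cm, angle_left_deg, angle_right_deg):
    """Estimate fixation depth (cm) from each eye's horizontal gaze angle.

    Angles are measured from straight ahead, positive toward the nose.
    Assumes the fixation point lies on the median plane (symmetric vergence).
    """
    vergence = math.radians(angle_left_deg + angle_right_deg)   # total convergence angle
    if vergence <= 0.0:
        return float("inf")            # parallel or diverging lines of sight
    return (ipd_cm / 2.0) / math.tan(vergence / 2.0)

# Example: 6.5 cm interpupillary distance, each eye rotated 1.86 deg inward
print(round(gaze_depth_from_vergence(6.5, 1.86, 1.86), 1))   # ~100 cm fixation depth
```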

  3. 3D recovery of human gaze in natural environments

    NASA Astrophysics Data System (ADS)

    Paletta, Lucas; Santner, Katrin; Fritz, Gerald; Mayer, Heinz

    2013-01-01

    The estimation of human attention has recently been addressed in the context of human-robot interaction. Today, joint work spaces already exist and challenge cooperating systems to jointly focus on common objects, scenes and work niches. With the advent of Google Glass and increasingly affordable wearable eye-tracking, monitoring of human attention will soon become ubiquitous. The presented work describes for the first time a method for the estimation of human fixations in 3D environments that does not require any artificial landmarks in the field of view and enables attention mapping in 3D models. It enables full 3D recovery of the human view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. The study on the precision of this method reports a mean projection error of ≈1.1 cm and a mean angle error of ≈0.6° within the chosen 3D model; the precision is bounded by that of the eye-tracking instrument itself (≈1°). This innovative methodology will open new opportunities for joint attention studies as well as for bringing new potential into automated processing for human factors technologies.
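
    The mean angle error quoted above is the kind of statistic obtained by comparing estimated gaze directions against reference directions fixation by fixation; a minimal sketch of that comparison (the vectors below are illustrative):

```python
import numpy as np

def angular_errors_deg(estimated, reference):
    """Per-fixation angular error (degrees) between gaze direction vectors."""
    est = estimated / np.linalg.norm(estimated, axis=1, keepdims=True)
    ref = reference / np.linalg.norm(reference, axis=1, keepdims=True)
    cosines = np.clip(np.sum(est * ref, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosines))

# Illustrative gaze directions (camera coordinates): small offsets from the reference
est = np.array([[0.010, 0.000, 1.0], [0.000, 0.015, 1.0]])
ref = np.array([[0.000, 0.000, 1.0], [0.000, 0.000, 1.0]])
print(angular_errors_deg(est, ref).mean())   # mean angular error in degrees
```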

  4. Estimating the gaze of a virtuality human.

    PubMed

    Roberts, David J; Rae, John; Duckworth, Tobias W; Moore, Carl M; Aspin, Rob

    2013-04-01

    The aim of our experiment is to determine whether eye gaze can be estimated from a virtuality human: to within the accuracies that underpin social interaction, and reliably across the gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. Within the experiment, n=22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. Independent variables were 1) relative orientations of eye, head and body of the captured subject; and 2) the subset of cameras used to texture the form. Analysis looked for statistical and practical significance and qualitative corroborating evidence. The analysed results tell us much about the importance and detail of the relationship between gaze pose, method of video-based reconstruction, and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but with the adopted method of Video Based Reconstruction (VBR) this is highly dependent on the combination of gaze pose and camera arrangement. This suggests changes in the VBR approach in order to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, and particularly those using or building Immersive Virtuality Telepresence to accomplish this. It is also of relevance to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in films and TV. PMID:23428453

  5. Parameters of the human 3D gaze while observing portable autostereoscopic display: a model and measurement results

    NASA Astrophysics Data System (ADS)

    Boev, Atanas; Hanhela, Marianne; Gotchev, Atanas; Utirainen, Timo; Jumisko-Pyykkö, Satu; Hannuksela, Miska

    2012-02-01

    We present an approach to measure and model the parameters of human point-of-gaze (PoG) in 3D space. Our model considers the following three parameters: the position of the gaze in 3D space, the volume encompassed by the gaze and the time for the gaze to arrive on the desired target. Extracting the 3D gaze position from binocular gaze data is hindered by three problems. The first problem is the lack of convergence: due to micro-saccadic movements the optical lines of both eyes rarely intersect at a point in space. The second problem is resolution: the combination of short observation distance and the limited comfort disparity zone typical for a mobile 3D display does not allow the depth of the gaze position to be reliably extracted. The third problem is measurement noise: due to the limited display size, the noise range is close to the range of properly measured data. We have developed a methodology which allows us to suppress most of the measurement noise. This allows us to estimate the typical time which is needed for the point-of-gaze to travel in the x, y or z direction. We identify three temporal properties of the binocular PoG. The first is the reaction time, the minimum time it takes vision to react to a stimulus position change, measured as the time between the event and the time the PoG leaves the proximity of the old stimulus position. The second is the travel time of the PoG between the old and new stimulus positions. The third is the time-to-arrive, which combines the reaction time, the travel time, and the time required for the PoG to settle at the new position. We present methods for filtering PoG outliers, for deriving the PoG center from binocular eye-tracking data and for calculating the gaze volume as a function of the distance between the PoG and the observer. As an outcome of our experiments we present binocular heat maps aggregated over all observers who participated in a viewing test. We also show the mean values for all temporal
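
    The three temporal measures described above can be read off a point-of-gaze trace around a stimulus jump. A rough sketch, assuming time-stamped PoG samples aligned to the jump and simple proximity thresholds (function name, thresholds and the settling criterion are illustrative simplifications):

```python
import numpy as np

def gaze_timing(t, pog, old_pos, new_pos, leave_radius=1.0, arrive_radius=1.0):
    """Reaction time, travel time and time-to-arrive from a PoG trace.

    t        : (N,) sample times, zero at the stimulus jump
    pog      : (N, 3) point-of-gaze samples
    old_pos  : 3D position of the previous stimulus
    new_pos  : 3D position of the new stimulus
    The settling criterion is simplified to the first entry into the
    proximity of the new target; a real analysis would require the PoG
    to stay inside that radius.
    """
    d_old = np.linalg.norm(pog - old_pos, axis=1)
    d_new = np.linalg.norm(pog - new_pos, axis=1)
    leave_idx = np.argmax(d_old > leave_radius)      # first sample away from the old target
    arrive_idx = np.argmax(d_new < arrive_radius)    # first sample near the new target
    reaction = t[leave_idx]
    time_to_arrive = t[arrive_idx]
    return reaction, time_to_arrive - reaction, time_to_arrive
```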

  6. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    PubMed

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes which allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following, which relied on naturalistic settings, have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera or the shape of the eyelids, and, in the case of photographs, they lack depth. Hence, in order to get full control of potentially relevant features we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. PMID:25982719

  7. A kinematic model for 3-D head-free gaze-shifts

    PubMed Central

    Daemi, Mehdi; Crawford, J. Douglas

    2015-01-01

    Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision. PMID:26113816
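
    One of the constraints the model must respect, the non-commutativity of 3-D rotations, is easy to verify numerically; a small illustration with scipy's rotation utilities (axis conventions are assumed here, and this is not the authors' implementation):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

horizontal = R.from_euler("z", 30, degrees=True)   # 30 deg about the (assumed) vertical axis
vertical = R.from_euler("x", 30, degrees=True)     # 30 deg about the (assumed) horizontal axis

# scipy composes right-to-left: (a * b) applies b first, then a.
horiz_then_vert = vertical * horizontal
vert_then_horiz = horizontal * vertical

# The two orders yield different eye orientations: 3-D rotations do not commute.
difference = horiz_then_vert * vert_then_horiz.inv()
print(np.degrees(difference.magnitude()))          # nonzero residual rotation angle
```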

  8. A kinematic model for 3-D head-free gaze-shifts.

    PubMed

    Daemi, Mehdi; Crawford, J Douglas

    2015-01-01

    Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision. PMID:26113816

  9. Quality control of 3D Geological Models using an Attention Model based on Gaze

    NASA Astrophysics Data System (ADS)

    Busschers, Freek S.; van Maanen, Peter-Paul; Brouwer, Anne-Marie

    2014-05-01

    The Geological Survey of the Netherlands (GSN) produces 3D stochastic geological models of the upper 50 meters of the Dutch subsurface. The voxel models are regarded as essential in answering subsurface questions on, for example, aggregate resources, groundwater flow, land subsidence studies and the planning of large-scale infrastructural works such as tunnels. GeoTOP is the most recent and detailed generation of 3D voxel models. This model describes 3D lithological variability up to a depth of 50 m using voxels of 100*100*0.5 m. Due to the expected increase in data flow, model output and user demands, the development of (semi-)automated quality control systems will become more important in the near future. Besides numerical control systems, capturing model errors as seen from the expert geologist's viewpoint is of increasing interest. We envision the use of eye gaze to support and speed up the detection of errors in the geological voxel models. As a first step in this direction we explore the gaze behavior of 12 geological experts from the GSN during quality control of part of the GeoTOP 3D geological model using an eye-tracker. Gaze is used as input to an attention model that yields 'attended areas' for each examined image of the GeoTOP model and each individual expert. We compared these attended areas to errors as marked by the experts using a mouse. Results show that: 1) attended areas as determined from experts' gaze data largely match the GeoTOP errors indicated by the experts using a mouse, and 2) a substantial part of the match can be reached using only gaze data from the first few seconds of the time geologists spend searching for errors. These results open up the possibility of faster GeoTOP model control using gaze if geologists accept a small decrease in error detection accuracy. Attention data may also be used to make independent comparisons between different geologists varying in focus and expertise. This would facilitate a more effective use of

  10. Using natural versus artificial stimuli to perform calibration for 3D gaze tracking

    NASA Astrophysics Data System (ADS)

    Maggia, Christophe; Guyader, Nathalie; Guérin-Dugué, Anne

    2013-03-01

    The presented study tests which type of stereoscopic image, natural or artificial, is better suited for performing efficient and reliable calibration in order to track the gaze of observers in 3D space using a classical 2D eye tracker. We measured the horizontal disparities, i.e. the difference between the x coordinates of the two eyes obtained using a 2D eye tracker. This disparity was recorded for each observer and for several target positions that he had to fixate. Target positions were equally distributed in the 3D space, some on the screen (with a null disparity), some behind the screen (uncrossed disparity) and others in front of the screen (crossed disparity). We tested different regression models (linear and non-linear) to explain either the true disparity or the depth from the measured disparity. Models were tested and compared on their prediction error for new targets at new positions. First of all, we found that we obtained more reliable disparity measures when using natural stereoscopic images rather than artificial ones. Second, we found that overall a non-linear model was more efficient. Finally, we discuss the fact that our results were observer-dependent, with variability between observers' behaviour when looking at 3D stimuli. Because of this variability, we propose computing observer-specific models to accurately predict gaze position when exploring 3D stimuli.
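
    The linear and non-linear regression models compared in the study can be prototyped directly on (measured disparity, depth) calibration pairs; a sketch with numpy on synthetic numbers (the polynomial degree and the data are illustrative, not the authors' exact models):

```python
import numpy as np

# Synthetic calibration pairs: measured horizontal disparity (pixels) vs. target depth (cm)
disparity = np.array([-30.0, -15.0, 0.0, 15.0, 30.0, 45.0])
depth     = np.array([ 35.0,  45.0, 57.0, 75.0, 98.0, 130.0])

linear = np.polyfit(disparity, depth, deg=1)   # linear model
cubic  = np.polyfit(disparity, depth, deg=3)   # one possible non-linear model

new_disparity = np.array([-20.0, 10.0, 40.0])
print(np.polyval(linear, new_disparity))       # predicted depths, linear model
print(np.polyval(cubic, new_disparity))        # predicted depths, non-linear model

def rmse(pred, truth):
    """Prediction error used to compare the models on held-out targets."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(truth)) ** 2)))
```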

  11. i-BRUSH: a gaze-contingent virtual paintbrush for dense 3D reconstruction in robotic assisted surgery.

    PubMed

    Visentini-Scarzanella, Marco; Mylonas, George P; Stoyanov, Danail; Yang, Guang-Zhong

    2009-01-01

    With increasing demand on intra-operative navigation and motion compensation during robotic assisted minimally invasive surgery, real-time 3D deformation recovery remains a central problem. Currently the majority of existing methods rely on salient features, where the inherent paucity of distinctive landmarks implies either a semi-dense reconstruction or the use of strong geometrical constraints. In this study, we propose a gaze-contingent depth reconstruction scheme by integrating human perception with semi-dense stereo and p-q based shading information. Depth inference is carried out in real-time through a novel application of Bayesian chains without smoothness priors. The practical value of the scheme is highlighted by detailed validation using a beating heart phantom model with known geometry to verify the performance of gaze-contingent 3D surface reconstruction and deformation recovery. PMID:20426007

  12. Eye gaze estimation from the elliptical features of one iris

    NASA Astrophysics Data System (ADS)

    Zhang, Wen; Zhang, Tai-Ning; Chang, Sheng-Jiang

    2011-04-01

    The accuracy of eye gaze estimation using image information is affected by several factors, which include image resolution, the anatomical structure of the eye, and posture changes. The irregular movements of the head and eye create issues that are currently being researched to enable better use of this key technology. In this paper, we describe an effective way of estimating eye gaze from the elliptical features of one iris without using an auxiliary light source, head-fixing equipment, or multiple cameras. First, we provide a preliminary estimation of the gaze direction, and then we obtain the vectors which describe the translation and rotation of the eyeball by applying a central projection method on the plane which passes through the line of sight. This helps us avoid the complex computations involved in previous methods. We also disambiguate the solution based on experimental findings. Second, error correction is conducted on a back-propagation neural network trained by a sample collection of translation and rotation vectors. Extensive experimental studies are conducted to assess the efficiency and robustness of our method. Results reveal that our method has a better performance compared to a typical previous method.
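
    The geometric idea underlying such methods, that a circular iris projects to an ellipse whose axis ratio encodes how far the eye is rotated away from the camera, fits in a few lines; a simplified weak-perspective sketch, not the paper's full central-projection method:

```python
import math

def gaze_tilt_from_iris_ellipse(major_axis_px, minor_axis_px):
    """Angle (degrees) between the eye's optical axis and the camera axis.

    Under weak perspective, a circular iris viewed at tilt theta projects to
    an ellipse whose minor/major axis ratio equals cos(theta).
    """
    ratio = min(max(minor_axis_px / major_axis_px, 0.0), 1.0)
    return math.degrees(math.acos(ratio))

# Example: an iris imaged as a 40 x 32 px ellipse is roughly 37 deg off-axis
print(round(gaze_tilt_from_iris_ellipse(40.0, 32.0), 1))
```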

  13. Gaze estimation using a hybrid appearance and motion descriptor

    NASA Astrophysics Data System (ADS)

    Xiong, Chunshui; Huang, Lei; Liu, Changping

    2015-03-01

    It is a challenging problem to realize a robust and low-cost gaze estimation system. Existing appearance-based and feature-based methods have both achieved impressive progress in the past several years, while their improvements are still limited by feature representation. Therefore, in this paper, we propose a novel descriptor combining eye appearance and pupil center-cornea reflections (PCCR). The hybrid gaze descriptor represents eye structure at both the feature level and the topology level. At the feature level, a glints-centered appearance descriptor is presented to capture intensity and contour information of the eye, and a polynomial representation of the normalized PCCR vector is employed to capture motion information of the eyeball. At the topology level, partial least squares is applied for feature fusion and selection. Finally, sparse-representation-based regression is employed to map the descriptor to the point-of-gaze (PoG). Experimental results show that the proposed method achieves high accuracy and has a good tolerance to head movements.
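
    The 'polynomial representation of the normalized PCCR vector' builds on the classic polynomial mapping from pupil-glint vectors to screen coordinates; a minimal least-squares sketch of that mapping (the second-order feature set and calibration grid are illustrative):

```python
import numpy as np

def pccr_features(v):
    """Second-order polynomial expansion of PCCR vectors v of shape (N, 2)."""
    x, y = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

# Calibration: normalized pupil-glint vectors and the screen points fixated (pixels)
gx, gy = np.meshgrid([-0.2, 0.0, 0.2], [-0.1, 0.0, 0.1])
pccr = np.column_stack([gx.ravel(), gy.ravel()])
screen = np.column_stack([320.0 + 800.0 * gx.ravel(), 240.0 + 1200.0 * gy.ravel()])

coeffs, *_ = np.linalg.lstsq(pccr_features(pccr), screen, rcond=None)

# Point-of-gaze prediction for a new PCCR measurement
print(pccr_features(np.array([[0.1, 0.0]])) @ coeffs)   # ~[[400, 240]]
```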

  14. On the tridimensional estimation of the gaze point by a stereoscopic wearable eye tracker.

    PubMed

    Lanata, Antonio; Greco, Alberto; Valenza, Gaetano; Scilingo, Enzo Pasquale

    2015-08-01

    This paper reports a novel stereo-vision method (binocular system, geometrically mapped; BS-GM) to estimate the depth coordinates of the eye gaze point in a controlled 3D space of vision. The method's outcomes were compared on both 2D and 3D visual targets with both mono- and stereo-vision algorithms in order to assess the accuracy of the results. More specifically, we compared BS-GM with a monocular method and with two stereo-vision methodologies that differed in their mapping functions. All of the methods were implemented in the same head-mounted eye-tracking system, which is able to acquire both eyes. In the 2D visual space (i.e., the plane of vision) we compared BS-GM with a monocular method, a linearly mapped binocular system (BS-LM) and a quadratically mapped binocular system (BS-QM). In the 3D space estimation, all of the binocular systems were compared with each other. Thirteen enrolled subjects observed 31 targets of known coordinates in a controlled environment. Results of the 2D comparison showed no statistically significant difference among the four methods, while the comparison in the 3D space of vision showed that the BS-GM method achieved significantly better accuracy than the BS-LM and BS-QM methods. Specifically, BS-GM showed an average percentage error of 3.47%. PMID:26736748

  15. SIFT algorithm-based 3D pose estimation of femur.

    PubMed

    Zhang, Xuehe; Zhu, Yanhe; Li, Changle; Zhao, Jie; Li, Ge

    2014-01-01

    To address the lack of 3D space information in the digital radiography of a patient femur, a pose estimation method based on 2D-3D rigid registration is proposed in this study. The method uses two digital radiography images to realize the preoperative 3D visualization of a fractured femur. Compared with pure Digital Radiography or Computed Tomography imaging diagnostic methods, the proposed method has the advantages of low cost, high precision, and minimal harmful radiation. First, stable matching point pairs in the frontal and lateral images of the patient femur and the universal femur are obtained by using the Scale Invariant Feature Transform (SIFT) method. Then, the 3D pose estimation registration parameters of the femur are calculated by using the Iterative Closest Point (ICP) algorithm. Finally, registration accuracy is evaluated based on the deviation between the six-degree-of-freedom parameters calculated by the proposed method and the preset posture parameters. After registration, the rotation error is less than 1.5°, and the translation error is less than 1.2 mm, which indicates that the proposed method has high precision and robustness. The proposed method provides 3D image information for effective preoperative orthopedic diagnosis and surgery planning. PMID:25226990

  16. Neuromorphic Event-Based 3D Pose Estimation

    PubMed Central

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.

    2016-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547

  17. Estimation of Gaze Detection Accuracy Using the Calibration Information-Based Fuzzy System.

    PubMed

    Gwon, Su Yeong; Jung, Dongwook; Pan, Weiyuan; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is a camera-vision based technology for identifying the location where a user is looking. In general, a calibration process is applied at the initial stage of most gaze tracking systems. This process is necessary to calibrate for the differences in the eyeballs and cornea size of the user, as well as the angle kappa, and to find the relationship between the user's eye and screen coordinates. It is applied on the basis of the information of the user's pupil and corneal specular reflection obtained while the user is looking at several predetermined positions on a screen. In previous studies, user calibration was performed using various types of markers and marker display methods. However, studies on estimating the accuracy of gaze detection through the results obtained during the calibration process have yet to be carried out. Therefore, we propose the method for estimating the accuracy of a final gaze tracking system with a near-infrared (NIR) camera by using a fuzzy system based on the user calibration information. Here, the accuracy of the final gaze tracking system ensures the gaze detection accuracy during the testing stage of the gaze tracking system. Experiments were performed using a total of four types of markers and three types of marker display methods. From them, it was found that the proposed method correctly estimated the accuracy of the gaze tracking regardless of the various marker and marker display types applied. PMID:26742045

  18. Estimation of Gaze Detection Accuracy Using the Calibration Information-Based Fuzzy System

    PubMed Central

    Gwon, Su Yeong; Jung, Dongwook; Pan, Weiyuan; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is a camera-vision based technology for identifying the location where a user is looking. In general, a calibration process is applied at the initial stage of most gaze tracking systems. This process is necessary to calibrate for the differences in the eyeballs and cornea size of the user, as well as the angle kappa, and to find the relationship between the user’s eye and screen coordinates. It is applied on the basis of the information of the user’s pupil and corneal specular reflection obtained while the user is looking at several predetermined positions on a screen. In previous studies, user calibration was performed using various types of markers and marker display methods. However, studies on estimating the accuracy of gaze detection through the results obtained during the calibration process have yet to be carried out. Therefore, we propose the method for estimating the accuracy of a final gaze tracking system with a near-infrared (NIR) camera by using a fuzzy system based on the user calibration information. Here, the accuracy of the final gaze tracking system ensures the gaze detection accuracy during the testing stage of the gaze tracking system. Experiments were performed using a total of four types of markers and three types of marker display methods. From them, it was found that the proposed method correctly estimated the accuracy of the gaze tracking regardless of the various marker and marker display types applied. PMID:26742045

  19. Joint 3d Estimation of Vehicles and Scene Flow

    NASA Astrophysics Data System (ADS)

    Menze, M.; Heipke, C.; Geiger, A.

    2015-08-01

    While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a proof of concept and demonstrate the usefulness of our method.

  20. Coverage Estimation of Geosensor in 3d Vector Environments

    NASA Astrophysics Data System (ADS)

    Afghantoloee, A.; Doodman, S.; Karimipour, F.; Mostafavi, M. A.

    2014-10-01

    Sensor deployment optimization to achieve the maximum spatial coverage is one of the main issues in Wireless geoSensor Networks (WSN). The model of the environment is a key parameter that influences the accuracy of geosensor coverage. In most of the recent studies, the environment has been modeled by a Digital Surface Model (DSM). However, the advances in technology to collect 3D vector data at different levels, especially in urban models, can enhance the quality of geosensor deployment in order to achieve more accurate coverage estimations. This paper proposes an approach to calculate geosensor coverage in 3D vector environments. The approach is applied to several case studies and compared with DSM-based methods.

  1. Hand surface area estimation formula using 3D anthropometry.

    PubMed

    Hsu, Yao-Wen; Yu, Chi-Yuang

    2010-11-01

    Hand surface area is an important reference in occupational hygiene and many other applications. This study derives a formula for the palm surface area (PSA) and hand surface area (HSA) based on three-dimensional (3D) scan data. Two hundred and seventy subjects, 135 males and 135 females, were recruited for this study. The hand was measured using a high-resolution 3D hand scanner. The precision and accuracy of the scanner are within 0.67%. Both the PSA and HSA were computed using the triangular mesh summation method. A comparison between this study and previous textbook values (such as the U.K. teaching text and the Lund and Browder chart discussed in the article) was performed first to show that previous textbooks overestimated the PSA by 12.0% and the HSA by 8.7% (for males, PSA 8.5% and HSA 4.7%; for females, PSA 16.2% and HSA 13.4%). Six 1D measurements were then extracted semiautomatically for use as candidate estimators for the PSA and HSA estimation formula. Stepwise regressions on these six 1D measurements and a variable dependency test were performed. Results show that a pair of measurements (hand length and hand breadth) was able to account for 96% of the HSA variance and up to 98% of the PSA variance. A test of the gender-specific formula indicated that gender is not a significant factor in either the PSA or HSA estimation. PMID:20865628
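
    The two-predictor formula reported here is an ordinary least-squares regression of surface area on hand length and hand breadth; a sketch of how such a formula is fitted and applied, using made-up numbers (the coefficients obtained below are not the study's):

```python
import numpy as np

# Made-up training data: hand length (cm), hand breadth (cm), measured HSA (cm^2)
length  = np.array([17.0, 18.2, 19.1, 20.3, 16.5, 18.8])
breadth = np.array([ 7.6,  8.1,  8.4,  9.0,  7.2,  8.3])
hsa     = np.array([380.0, 420.0, 455.0, 505.0, 350.0, 440.0])

X = np.column_stack([np.ones_like(length), length, breadth])
intercept, b_len, b_breadth = np.linalg.lstsq(X, hsa, rcond=None)[0]

def estimate_hsa(hand_length_cm, hand_breadth_cm):
    """Hand surface area (cm^2) estimated from the two 1D measurements."""
    return intercept + b_len * hand_length_cm + b_breadth * hand_breadth_cm

print(round(estimate_hsa(18.0, 8.0), 1))
```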

  2. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
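
    The geometric simplification described, a spherical eye of anthropometric radius with the gaze angle read from the midpupil's offset between the two eye corners, reduces to a short calculation; a sketch of that idea in the horizontal plane (the assumed fissure width, eye radius and pixel values are illustrative):

```python
import math

def horizontal_gaze_angle_deg(outer_corner_x, inner_corner_x, midpupil_x,
                              fissure_width_mm=30.0, eye_radius_mm=12.0):
    """Horizontal gaze angle from eye-corner and midpupil image x-coordinates.

    The corner-to-corner distance calibrates pixels to millimetres against an
    assumed anthropometric fissure width; the midpupil's offset from the eye
    centre on a sphere of radius eye_radius_mm then gives the gaze angle.
    """
    px_per_mm = abs(outer_corner_x - inner_corner_x) / fissure_width_mm
    eye_centre_x = 0.5 * (outer_corner_x + inner_corner_x)
    offset_mm = (midpupil_x - eye_centre_x) / px_per_mm
    offset_mm = max(-eye_radius_mm, min(eye_radius_mm, offset_mm))
    return math.degrees(math.asin(offset_mm / eye_radius_mm))

# Corners 60 px apart, midpupil 6 px from the eye centre -> ~14.5 deg of gaze
print(round(horizontal_gaze_angle_deg(100.0, 160.0, 136.0), 1))
```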

  3. Influence of gaze elevation on estimating the possibility of passing under high obstacles during body tilt.

    PubMed

    Bourrelly, Aurore; Bringoux, Lionel; Vercher, Jean-Louis

    2009-02-01

    We investigated the influence of gaze elevation on judging the possibility of passing under high obstacles during pitch body tilts, while stationary, in the absence of allocentric cues. Specifically, we aimed at studying the influence of egocentric references upon geocentric judgements. Seated subjects, placed at various body orientations, were asked to perceptually estimate the possibility of passing under a projected horizontal line while keeping their gaze on a fixation target and imagining a horizontal body displacement. The results showed a global overestimation of the possibility of passing under the line, and confirmed the influence of body orientation reported by Bringoux et al. (Exp Brain Res 185(4):673-680, 2008). More strikingly, a linear influence of gaze elevation was found on perceptual estimates. Precisely, downward gaze elevation yielded increased overestimations, and conversely upward gaze elevation yielded decreased overestimations. Furthermore, the body and gaze orientation effects were independent and combined additively to yield a global egocentric influence with weights of 45% and 54%, respectively. Overall, our data suggest that multiple egocentric references can jointly affect the estimated possibility of passing under high obstacles. These results are discussed in terms of "interpenetrability" between geocentric and egocentric reference frames and clearly demonstrate that gaze elevation, like body orientation, is involved in geocentric spatial localization. PMID:18925390

  4. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16s) and large (∼4min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task. PMID:25578924
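
    'Regressing out' pupil size amounts to a linear regression of each gaze coordinate on pupil size, keeping the residual as the corrected position; a minimal sketch on a synthetic fixation trace (data and coefficients are illustrative):

```python
import numpy as np

def regress_out_pupil(gaze, pupil_size):
    """Remove the component of gaze position linearly explained by pupil size.

    gaze       : (N, 2) horizontal/vertical gaze estimates
    pupil_size : (N,) pupil size reported by the eye tracker
    Returns the gaze trace with the pupil-linked drift subtracted.
    """
    X = np.column_stack([np.ones_like(pupil_size), pupil_size])
    corrected = np.empty_like(gaze, dtype=float)
    for k in range(gaze.shape[1]):
        beta, *_ = np.linalg.lstsq(X, gaze[:, k], rcond=None)
        corrected[:, k] = gaze[:, k] - X @ beta + gaze[:, k].mean()
    return corrected

# Synthetic fixation: the pupil slowly contracts and drags the raw estimate with it
rng = np.random.default_rng(1)
t = np.linspace(0.0, 16.0, 500)
pupil = 40.0 - 0.5 * t + 0.2 * rng.standard_normal(500)
drift = np.column_stack([0.05 * (pupil - 40.0), 0.02 * (pupil - 40.0)])
raw = drift + 0.01 * rng.standard_normal((500, 2))
print(raw.std(axis=0), regress_out_pupil(raw, pupil).std(axis=0))   # spread shrinks
```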

  5. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    SciTech Connect

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J; Thompson, Joseph W; Bolme, David S; Boehnen, Chris Bensing

    2013-01-01

    Iris recognition is among the highest-accuracy biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from the elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model, with an average error of approximately 3.5 degrees over a 50 degree range.
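
    The look-up-table step described, matching observed iris-boundary features against features precomputed from the eye model and returning the gaze of the closest entry, is a nearest-neighbour query; a simplified sketch with a toy table (the feature choice and values are placeholders, not the actual biometric eye model). In practice the feature dimensions would also be rescaled so that no single feature dominates the distance:

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy look-up table: for each candidate gaze (azimuth, elevation in degrees) the eye
# model would predict iris-ellipse features; here a crude stand-in model is used.
az, el = np.meshgrid(np.arange(-25.0, 26.0), np.arange(-25.0, 26.0))
gaze_table = np.column_stack([az.ravel(), el.ravel()])
features_table = np.column_stack([
    np.cos(np.radians(np.hypot(az, el))).ravel(),    # stand-in for the ellipse axis ratio
    np.degrees(np.arctan2(el, az)).ravel(),          # stand-in for the ellipse orientation
])
tree = cKDTree(features_table)

def estimate_gaze(observed_features):
    """Gaze angles of the look-up-table entry closest to the observed features."""
    _, idx = tree.query(observed_features)
    return gaze_table[idx]

print(estimate_gaze([np.cos(np.radians(10.0)), 45.0]))   # ~[7, 7] degrees
```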

  6. Gaze estimation for off-angle iris recognition based on the biometric eye model

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Barstow, Del; Santos-Villalobos, Hector; Thompson, Joseph; Bolme, David; Boehnen, Christopher

    2013-05-01

    Iris recognition is among the highest-accuracy biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ORNL biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from the elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model, with an average error of approximately 3.5 degrees over a 50 degree range.

  7. Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

    2009-12-01

    The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007

  8. Viewpoint Invariant Gesture Recognition and 3D Hand Pose Estimation Using RGB-D

    ERIC Educational Resources Information Center

    Doliotis, Paul

    2013-01-01

    The broad application domain of the work presented in this thesis is pattern classification with a focus on gesture recognition and 3D hand pose estimation. One of the main contributions of the proposed thesis is a novel method for 3D hand pose estimation using RGB-D. Hand pose estimation is formulated as a database retrieval problem. The proposed…

  9. Eigenvalue Contributon Estimator for Sensitivity Calculations with TSUNAMI-3D

    SciTech Connect

    Rearden, Bradley T; Williams, Mark L

    2007-01-01

    Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.

  10. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower- order accurate than are the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  11. Impact of Building Heights on 3d Urban Density Estimation from Spaceborne Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, the estimated building heights may include errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must therefore be determined across and within a study area. In our research, accurate planar information from city authorities is used as reference data during 3D density estimation, to avoid the errors inherent to planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). In local areas, experimental results show that land-use blocks with low FAR values often have small errors, due to small building height errors for the low buildings in those blocks, while blocks with high FAR values often have large errors, due to large building height errors for the high buildings in those blocks. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of the buildings in a scene; therefore, building heights must be considered when spaceborne stereo imagery is used to estimate 3D density, in order to improve precision.
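
    The Floor Area Ratio used as the 3D density measure is the total floor area of all buildings in a block divided by the block's land area, with floor counts derived from the (possibly erroneous) estimated heights; a small sketch of that calculation, assuming an illustrative 3 m storey height:

```python
def floor_area_ratio(buildings, block_area_m2, storey_height_m=3.0):
    """FAR of a land-use block: total building floor area / block land area.

    buildings is a list of (footprint_area_m2, estimated_height_m) pairs;
    the floor count is approximated from the estimated height.
    """
    total_floor_area = 0.0
    for footprint, height in buildings:
        floors = max(1, round(height / storey_height_m))
        total_floor_area += footprint * floors
    return total_floor_area / block_area_m2

# A 10 000 m^2 block with two buildings; underestimating the 30 m tower as 24 m
# lowers the estimated FAR from 0.98 to 0.82.
print(floor_area_ratio([(800.0, 30.0), (600.0, 9.0)], 10_000.0))
print(floor_area_ratio([(800.0, 24.0), (600.0, 9.0)], 10_000.0))
```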

  12. Interferometric synthetic aperture radar detection and estimation based 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Moses, Randolph L.

    2006-05-01

    This paper explores three-dimensional (3D) interferometric synthetic aperture radar (IFSAR) image reconstruction when multiple scattering centers and noise are present in a radar resolution cell. We introduce an IFSAR scattering model that accounts for both multiple scattering centers and noise. The problem of 3D image reconstruction is then posed as a multiple hypothesis detection and estimation problem; resolution cells containing a single scattering center are detected and the 3D location of these cells' pixels are estimated; all other pixels are rejected from the image. Detection and estimation statistics are derived using the multiple scattering center IFSAR model. A 3D image reconstruction algorithm using these statistics is then presented, and its performance is evaluated for a 3D reconstruction of a backhoe from noisy IFSAR data.

  13. Nonwearable gaze tracking system for controlling home appliance.

    PubMed

    Heo, Hwan; Lee, Jong Man; Jung, Dongwook; Lee, Ji Woo; Park, Kang Ryoung

    2014-01-01

    A novel gaze tracking system for controlling home appliances in 3D space is proposed in this study. Our research is novel in the following four ways. First, we propose a nonwearable gaze tracking system containing frontal-viewing and eye-tracking cameras. Second, our system includes three modes: navigation (for moving the wheelchair depending on the direction of gaze movement), selection (for selecting a specific appliance by gaze estimation), and manipulation (for controlling the selected appliance by gazing at the control panel). The modes can be changed by closing the eyes for a specific time period or by gazing. Third, in the navigation mode, the signal for moving the wheelchair can be triggered according to the direction of gaze movement. Fourth, after a specific home appliance is selected by gazing at it for more than a predetermined time period, a control panel with a 3 × 2 menu is displayed on a laptop computer below the gaze tracking system for manipulation. The user gazes at one of the menu options for a specific time period, which can be manually adjusted according to the user, and the signal for controlling the home appliance can be triggered. The proposed method is shown to have high detection accuracy through a series of experiments. PMID:25298966

  14. Estimation of daily dietary fluoride intake: 3-d food diary v. 2-d duplicate plate.

    PubMed

    Omid, N; Maguire, A; O'Hare, W T; Zohoori, F V

    2015-12-28

    Both the 3-d food diary (3-d FD) method and the 2-d duplicate plate (2-d DP) method have been used in many studies to measure dietary fluoride (F) intake. This study aimed to compare daily dietary F intake (DDFI) estimated by the 3-d FD and 2-d DP methods at the group and individual levels. Dietary data for sixty-one healthy children aged 4-6 years were collected using the 3-d FD and 2-d DP methods with a 1-week gap between each collection. Food diary data were analysed for F using the Weighed Intake Analysis Software Package, whereas duplicate diets were analysed by an acid diffusion method using an F ion-selective electrode. A paired t test and linear regression were used to compare dietary data at the group and individual levels, respectively. At the group level, mean DDFI was 0·025 (sd 0·016) and 0·028 (sd 0·013) mg/kg body weight (bw) per d estimated by 3-d FD and 2-d DP, respectively. No statistically significant difference (P=0·10) was observed in estimated DDFI between the methods at the group level. At an individual level, the agreement in estimating F intake (mg/kg bw per d) using the 3-d FD method compared with the 2-d DP method was within ±0·011 (95 % CI 0·009, 0·013) mg/kg bw per d. At the group level, the DDFI estimates obtained by the 2-d DP and 3-d FD methods are therefore interchangeable. At an individual level, the typical error and the narrow margin between optimal and excessive F intake suggest that the DDFI data obtained by one method cannot replace the dietary data estimated by the other method. PMID:26568435

  15. Human Body 3D Posture Estimation Using Significant Points and Two Cameras

    PubMed Central

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras, without using any depth sensors. The 3D significant body points located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine (SVM)-based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also the included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using the 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method performs better than other gray-level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results for different postures. PMID:24883422

  16. The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

    NASA Astrophysics Data System (ADS)

    Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard

    2010-04-01

    The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point cloud) in GPS-denied regions from a sequence of co-boresighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration was performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for various spatial and geometric alignments of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point cloud). In this paper, we describe the development and application of the 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range and intensity estimates of imagery collected during urban terrain mapping using a co-boresighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point clouds for the drive-through data will also be presented.
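
    The ICP step used for the registration alternates nearest-neighbour correspondence with a closed-form rigid alignment; a compact sketch of that loop using the SVD (Kabsch) solution on a toy point cloud, not the authors' exact variant:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(source, target, iterations=30):
    """Align the (N,3) source cloud to the (M,3) target cloud; return the moved source."""
    tree = cKDTree(target)
    moved = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(moved)                    # nearest-neighbour correspondences
        R, t = best_rigid_transform(moved, target[idx])
        moved = moved @ R.T + t                       # apply the incremental transform
    return moved

# Toy check: recover a small known rotation + translation of a random cloud
rng = np.random.default_rng(0)
target = rng.uniform(-5.0, 5.0, size=(500, 3))
angle = np.radians(2.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.10, -0.05, 0.10])
print(np.abs(icp(source, target) - target).max())     # near zero once ICP has converged
```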

  17. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
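
    The per-grid motion state in such a pipeline is typically maintained with a constant-velocity Kalman filter fed by the grid position associated in each LiDAR sweep; a minimal 2D predict/update sketch (the state model, noise levels and numbers are illustrative, not the paper's):

```python
import numpy as np

class GridMotionKF:
    """Constant-velocity Kalman filter for one grid cell: state = [x, y, vx, vy]."""

    def __init__(self, x, y, dt=0.1):
        self.x = np.array([x, y, 0.0, 0.0])
        self.P = np.diag([1.0, 1.0, 4.0, 4.0])
        self.F = np.array([[1.0, 0.0, dt, 0.0],
                           [0.0, 1.0, 0.0, dt],
                           [0.0, 0.0, 1.0, 0.0],
                           [0.0, 0.0, 0.0, 1.0]])
        self.H = np.array([[1.0, 0.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0, 0.0]])
        self.Q = 0.05 * np.eye(4)   # process noise
        self.R = 0.04 * np.eye(2)   # noise of the associated grid position

    def step(self, measured_xy):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the grid position associated in the current sweep
        z = np.asarray(measured_xy, dtype=float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x               # [x, y, vx, vy]

# A grid cell advancing 1 m per 0.1 s sweep along x: vx converges toward ~10 m/s
kf = GridMotionKF(0.0, 0.0)
for k in range(1, 20):
    state = kf.step([1.0 * k, 0.0])
print(state.round(2))
```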

  18. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
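
    The grid-wise Kalman filtering mentioned in the two records above can be illustrated with a minimal constant-velocity filter for a single grid element; the state layout, time step and noise covariances below are illustrative assumptions rather than the published parameters.

```python
import numpy as np

# Constant-velocity Kalman filter for one grid element's planar motion state
# [x, y, vx, vy]; dynamics and noise levels are illustrative only.
dt = 0.1                                    # assumed LiDAR frame interval (s)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)  # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only the cell position is observed
Q = 0.01 * np.eye(4)                        # process noise
R = 0.05 * np.eye(2)                        # measurement noise

def kf_step(x, P, z):
    """One predict/update cycle given the associated measurement z = [x, y]."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)
for z in np.array([[1.0, 0.0], [1.2, 0.05], [1.4, 0.11]]):   # associated grid positions
    x, P = kf_step(x, P, z)
print("estimated velocity:", x[2:])
```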

  19. Volume estimation of tonsil phantoms using an oral camera with 3D imaging.

    PubMed

    Das, Anshuman J; Valdez, Tulio A; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C; Raskar, Ramesh

    2016-04-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which only provide a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld, portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils get enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate airway obstruction percentage and volume of the tonsils in realistic 3D-printed models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as intraoperative volume estimations. PMID:27446667

  20. Estimating the complexity of 3D structural models using machine learning methods

    NASA Astrophysics Data System (ADS)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, for predicting environmental hazards or for forecasting fossil resources. This paper proposes a structural complexity index which can be used to help in defining the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metric for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects, calculated from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as number of faults, number of parts in a surface object, number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data needed for machine learning algorithms to reproduce the actual 3D model at a given precision without error.

  1. Volume estimation of tonsil phantoms using an oral camera with 3D imaging

    PubMed Central

    Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh

    2016-01-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which only provide a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld, portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils get enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate airway obstruction percentage and volume of the tonsils in realistic 3D-printed models. Our results correlate well with Brodsky’s classification of tonsillar hypertrophy as well as intraoperative volume estimations. PMID:27446667

  2. Peach Bottom 2 Turbine Trip Simulation Using TRAC-BF1/COS3D, a Best-Estimate Coupled 3-D Core and Thermal-Hydraulic Code System

    SciTech Connect

    Ui, Atsushi; Miyaji, Takamasa

    2004-10-15

    The best-estimate coupled three-dimensional (3-D) core and thermal-hydraulic code system TRAC-BF1/COS3D has been developed. COS3D, based on a modified one-group neutronic model, is a 3-D core simulator used for licensing analyses and core management of commercial boiling water reactor (BWR) plants in Japan. TRAC-BF1 is a plant simulator based on a two-fluid model. TRAC-BF1/COS3D is a coupled system of both codes, which are connected using a parallel computing tool. This code system was applied to the OECD/NRC BWR Turbine Trip Benchmark. Since the two-group cross-section tables are provided by the benchmark team, COS3D was modified to apply to this specification. Three best-estimate scenarios and four hypothetical scenarios were calculated using this code system. In the best-estimate scenario, the predicted core power with TRAC-BF1/COS3D is slightly underestimated compared with the measured data. The reason seems to be a slight difference in the core boundary conditions, that is, pressure changes and the core inlet flow distribution, because the peak in this analysis is sensitive to them. However, the results of this benchmark analysis show that TRAC-BF1/COS3D gives good precision for the prediction of the actual BWR transient behavior on the whole. Furthermore, the results with the modified one-group model and the two-group model were compared to verify the application of the modified one-group model to this benchmark. This comparison shows that the results of the modified one-group model are appropriate and sufficiently precise.

  3. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  4. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.
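
    A schematic of the PCA-based motion-model idea behind the two records above: displacement fields from a respiratory-sorted scan are compressed to a few principal components, and a new 3D estimate is obtained by solving for the component weights that best explain a partial, projection-like measurement. The matrix sizes and the linear projection operator below are purely illustrative stand-ins, not the authors' data or forward model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training deformation vector fields (DVFs) from a respiratory-sorted
# scan: 10 phases, each a flattened 3D displacement field of length n.
n_phases, n = 10, 3000
dvfs = rng.normal(size=(n_phases, n))

# PCA motion model: mean field plus a few principal components.
mean_dvf = dvfs.mean(axis=0)
_, _, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
components = Vt[:3]                          # keep 3 motion modes

# A linear operator A mapping a 3D DVF to a 2D-like measurement (illustrative
# stand-in for the kV-projection relationship).
m = 200
A = rng.normal(size=(m, n))

# "Measured" projection generated from some unknown true weights.
w_true = np.array([1.5, -0.7, 0.3])
y = A @ (mean_dvf + components.T @ w_true)

# Estimate the weights by least squares, then reconstruct the full 3D DVF.
w_hat, *_ = np.linalg.lstsq(A @ components.T, y - A @ mean_dvf, rcond=None)
dvf_estimate = mean_dvf + components.T @ w_hat
print("recovered weights:", np.round(w_hat, 3))
```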

  5. Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method

    NASA Astrophysics Data System (ADS)

    Guerrero, Thomas; Zhang, Geoffrey; Huang, Tzung-Chi; Lin, Kang-Ping

    2004-09-01

    The purpose of this work was to develop and validate an automated method for intrathoracic tumour motion estimation from breath-hold computed tomography (BH CT) imaging using the three-dimensional optical flow method (3D OFM). A modified 3D OFM algorithm provided 3D displacement vectors for each voxel, which were used to map tumour voxels on expiration BH CT onto inspiration BH CT images. A thoracic phantom and simulated expiration/inspiration BH CT pairs were used for validation. The 3D OFM was applied to the measured inspiration and expiration BH CT images from one lung cancer and one oesophageal cancer patient. The resulting displacements were plotted in histogram format and analysed to provide insight regarding the tumour motion. The phantom tumour displacement was measured as 1.20 and 2.40 cm with full-width at tenth maximum (FWTM) for the distribution of displacement estimates of 0.008 and 0.006 cm, respectively. The maximum error of any single voxel's motion estimate was 1.1 mm along the z-dimension, or approximately one-third of the z-dimension voxel size. The simulated BH CT pairs revealed an rms error of less than 0.25 mm. The displacement of the oesophageal tumours was nonuniform and up to 1.4 cm; this was a new finding. A lung tumour maximum displacement of 2.4 cm was found in the case evaluated. In conclusion, 3D OFM provided an accurate estimation of intrathoracic tumour motion, with estimated errors less than the voxel dimension in a simulated motion phantom study. Surprisingly, oesophageal tumour motion was large and nonuniform, with the greatest motion occurring at the gastro-oesophageal junction. Presented at The IASTED Second International Conference on Biomedical Engineering (BioMED 2004), Innsbruck, Austria, 16-18 February 2004.
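
    The 3D OFM itself is not reproduced here; the sketch below only illustrates the underlying brightness-constancy idea with a single-patch, Lucas-Kanade-style least-squares displacement estimate on synthetic volumes. The synthetic data and single-displacement simplification are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def lucas_kanade_3d(vol0, vol1):
    """Single-patch 3D displacement from the brightness-constancy equation
    Ix*dx + Iy*dy + Iz*dz = -It, solved in the least-squares sense."""
    Ix, Iy, Iz = np.gradient(vol0.astype(float))
    It = vol1.astype(float) - vol0.astype(float)
    A = np.column_stack([Ix.ravel(), Iy.ravel(), Iz.ravel()])
    b = -It.ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                 # displacement along the three volume axes (voxels)

# Synthetic example: a smooth random volume shifted by one voxel along the last axis.
rng = np.random.default_rng(2)
vol0 = gaussian_filter(rng.normal(size=(20, 20, 20)), sigma=2.0)
vol1 = shift(vol0, (0.0, 0.0, 1.0), order=3)
print("estimated displacement:", np.round(lucas_kanade_3d(vol0, vol1), 2))
```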

  6. Rate-constrained 3D surface estimation from noise-corrupted multiview depth videos.

    PubMed

    Sun, Wenxiu; Cheung, Gene; Chou, Philip A; Florencio, Dinei; Zhang, Cha; Au, Oscar C

    2014-07-01

    Transmitting compactly represented geometry of a dynamic 3D scene from a sender can enable a multitude of imaging functionalities at a receiver, such as synthesis of virtual images at freely chosen viewpoints via depth-image-based rendering. While depth maps (projections of 3D geometry onto 2D image planes at chosen camera viewpoints) can nowadays be readily captured by inexpensive depth sensors, they are often corrupted by non-negligible acquisition noise. Given that depth maps need to be denoised and compressed at the encoder for efficient network transmission to the decoder, in this paper we consider the denoising and compression problems jointly, arguing that doing so will result in a better overall performance than the alternative of solving the two problems separately in two stages. Specifically, we formulate a rate-constrained estimation problem, where given a set of observed noise-corrupted depth maps, the most probable (maximum a posteriori (MAP)) 3D surface is sought within a search space of surfaces with representation size no larger than a prespecified rate constraint. Our rate-constrained MAP solution reduces to the conventional unconstrained MAP 3D surface reconstruction solution if the rate constraint is loose. To solve our posed rate-constrained estimation problem, we propose an iterative algorithm, where in each iteration the structure (object boundaries) and the texture (surfaces within the object boundaries) of the depth maps are optimized alternately. Using the MVC codec for compression of multiview depth video and MPEG free viewpoint video sequences as input, experimental results show that rate-constrained estimated 3D surfaces computed by our algorithm can reduce the coding rate of depth maps by up to 32% compared with unconstrained estimated surfaces for the same quality of synthesized virtual views at the decoder. PMID:24876124

  7. Robust gaze-tracking method by using frontal-viewing and eye-tracking cameras

    NASA Astrophysics Data System (ADS)

    Cho, Chul Woo; Lee, Ji Woo; Lee, Eui Chul; Park, Kang Ryoung

    2009-12-01

    Gaze-tracking technology is used to obtain the position of a user's viewpoint and a new gaze-tracking method is proposed based on a wearable goggle-type device, which includes an eye-tracking camera and a frontal viewing camera. The proposed method is novel in five ways compared to previous research. First, it can track the user's gazing position, allowing for the natural facial and eye movements by using frontal viewing and an eye-tracking camera. Second, an eye gaze position is calculated using a geometric transform, based on the mapping function among three rectangular regions. These are a rectangular region defined by the four pupil centers detected when a user gazes at the four corners of a monitor, a distorted monitor region observed by the frontal viewing camera, and an actual monitor region, respectively. Third, a facial gaze position is estimated based on the geometric center and the four internal angles of the monitor region detected by the frontal viewing camera. Fourth, a final gaze position is obtained by using the weighted summation of the eye and the facial gazing positions. Fifth, since a simple 2-D method is used to obtain the gazing position instead of a complicated 3-D method, the proposed method can be operated at real-time speeds. Experimental results show that the root mean square (rms) error of gaze estimation is less than 1 deg.
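
    The geometric-transform step described above (mapping the quadrilateral of pupil centers recorded while the user fixates the four monitor corners onto the screen rectangle) can be sketched with a perspective transform; the calibration coordinates below are hypothetical, and OpenCV is used only for convenience.

```python
import numpy as np
import cv2

# Hypothetical calibration: pupil centers recorded while the user fixates the
# four monitor corners, and the monitor size in pixels.
pupil_corners = np.float32([[210, 140], [420, 150], [430, 330], [205, 320]])
monitor_corners = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

# Geometric (perspective) transform from pupil-center space to screen space.
M = cv2.getPerspectiveTransform(pupil_corners, monitor_corners)

def eye_gaze_point(pupil_xy):
    """Map a detected pupil center to an on-screen gaze position."""
    p = np.float32([[pupil_xy]])              # shape (1, 1, 2) as cv2 expects
    return cv2.perspectiveTransform(p, M)[0, 0]

print(eye_gaze_point((320, 240)))             # roughly the screen center
```

    In the paper this eye-based estimate is further combined with the facial gaze estimate by a weighted summation; that fusion step is not shown here.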

  8. 3D position estimation using an artificial neural network for a continuous scintillator PET detector

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Zhu, W.; Cheng, X.; Li, D.

    2013-03-01

    Continuous-crystal-based PET detectors have features of simple design, low cost, good energy resolution and high detection efficiency. Through single-end readout of scintillation light, direct three-dimensional (3D) position estimation could be another advantage of the continuous crystal detector. In this paper, we propose to use artificial neural networks to simultaneously estimate the plane coordinate and DOI coordinate of incident γ photons with detected scintillation light. Using our experimental setup with an ‘8 + 8’ simplified signal readout scheme, the training data of perpendicular irradiation on the front surface and one side surface are obtained, and the plane (x, y) networks and DOI networks are trained and evaluated. The test results show that the artificial neural network for DOI estimation is as effective as for plane estimation. The performance of both estimators is presented in terms of resolution and bias. Without bias correction, the resolution of the plane estimator is on average better than 2 mm and that of the DOI estimator is about 2 mm over the whole area of the detector. With bias correction, the resolution at the edge area for plane estimation or at the end of the block away from the readout PMT for DOI estimation becomes worse, as expected. The comprehensive performance of the 3D positioning by a neural network is assessed using the experimental test data of oblique irradiations. To show the combined effect of the 3D positioning over the whole area of the detector, the 2D flood images of oblique irradiation are presented with and without bias correction.
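
    A minimal sketch of the neural-network positioning idea, assuming a synthetic stand-in for the '8 + 8' readout signals and a scikit-learn MLP; the detector geometry, signal model and network size are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Synthetic stand-in for the '8 + 8' readout: 16 light-sharing signals per event,
# generated from a known interaction position (x, y, DOI); purely illustrative.
n_events = 5000
true_pos = rng.uniform([0, 0, 0], [50, 50, 20], size=(n_events, 3))      # mm
mixing = rng.normal(size=(3, 16))
signals = np.tanh((true_pos / [50, 50, 20]) @ mixing)
signals += 0.05 * rng.normal(size=signals.shape)                         # readout noise

# A single multi-output network here; the paper trains separate (x, y) and DOI nets.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(signals[:4000], true_pos[:4000])

pred = net.predict(signals[4000:])
rmse = np.sqrt(((pred - true_pos[4000:]) ** 2).mean(axis=0))
print("per-axis RMSE (mm):", np.round(rmse, 2))
```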

  9. Effects of scatter on model parameter estimates in 3D PET studies of the human brain

    SciTech Connect

    Cherry, S.R.; Huang, S.C.

    1995-08-01

    Phantom measurements and simulated data were used to characterize the effects of scatter on 3D PET projection data, reconstructed images and model parameter estimates. Scatter distributions were estimated from studies of the 3D Hoffman brain phantom by the 2D/3D difference method. The total scatter fraction in the projection data was 40%, but reduces to 27% when only those counts within the boundary of the brain are considered. After reconstruction, the whole brain scatter fraction is 20%, averaging 10% in cortical gray matter, 21% in basal ganglia and 40% in white matter. The scatter contribution varies by almost a factor of two from the edge to the center of the brain due to the shape of the scatter distribution and the effects of attenuation correction. The effect of scatter on estimates of cerebral metabolic rate for glucose (CMRGI) and cerebral blood flow (CBF) is evaluated by simulating typical gray matter time activity curves (TACs) and adding a scatter component based on whole-brain activity. Both CMRGI and CBF change in a linear fashion with scatter fraction. Errors of between 10 and 30% will typically result if 3D studies are not corrected for scatter. The authors also present results from a simple and fast scatter correction which fits a Gaussian function to the scattered events outside the brain. This reduced the scatter fraction to <2% in a range of phantom studies with different activity distributions. Using this correction, quantitative errors in 3D PET studies of CMRGI and CBF can be reduced to well below 10%.
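
    The simple correction described at the end of the abstract (fitting a Gaussian to the scattered events outside the brain and subtracting it) can be sketched on a synthetic 1D projection profile; all numbers below are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Synthetic 1D projection profile: activity inside the "brain" plus a broad
# scatter background that extends outside the object boundary.
x = np.arange(256, dtype=float)
inside = (x > 88) & (x < 168)
scatter_true = gaussian(x, 30.0, 128.0, 70.0)
profile = scatter_true + inside * 200.0 + np.random.default_rng(4).normal(0, 3, x.size)

# Fit the Gaussian only to samples outside the object, then subtract the
# fitted scatter estimate everywhere.
p0 = [profile[~inside].max(), 128.0, 60.0]
popt, _ = curve_fit(gaussian, x[~inside], profile[~inside], p0=p0)
corrected = profile - gaussian(x, *popt)
print("fitted scatter amplitude and width:", round(popt[0], 1), round(popt[2], 1))
```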

  10. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture.

    PubMed

    Xu, Gang; Xing, Mengdao; Xia, Xiang-Gen; Zhang, Lei; Chen, Qianqian; Bao, Zheng

    2016-05-01

    In the current scenario of high-resolution inverse synthetic aperture radar (ISAR) imaging, the non-cooperative targets may have strong maneuverability, which tends to cause time-variant Doppler modulation and imaging plane in the echoed data. Furthermore, it is still a challenge to realize ISAR imaging of maneuvering targets from sparse aperture (SA) data. In this paper, we focus on the problem of 3D geometry and motion estimations of maneuvering targets for interferometric ISAR (InISAR) with SA. For a target of uniformly accelerated rotation, the rotational modulation in echo is formulated as chirp sensing code under a chirp-Fourier dictionary to represent the maneuverability. In particular, a joint multi-channel imaging approach is developed to incorporate the multi-channel data and treat the multi-channel ISAR image formation as a joint-sparsity constraint optimization. Then, a modified orthogonal matching pursuit (OMP) algorithm is employed to solve the optimization problem to produce high-resolution range-Doppler (RD) images and chirp parameter estimation. The 3D target geometry and the motion estimations are followed by using the acquired RD images and chirp parameters. Herein, a joint estimation approach of 3D geometry and rotation motion is presented to realize outlier removing and error reduction. In comparison with independent single-channel processing, the proposed joint multi-channel imaging approach performs better in 2D imaging, 3D imaging, and motion estimation. Finally, experiments using both simulated and measured data are performed to confirm the effectiveness of the proposed algorithm. PMID:26930684
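
    The sparse-recovery core of the method, a modified OMP over a chirp-Fourier dictionary, is specific to the paper; the sketch below only shows standard orthogonal matching pursuit on a synthetic overcomplete dictionary to illustrate the recovery step that the modified algorithm builds on.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(5)

# Synthetic sparse-recovery setup: an overcomplete dictionary D and a signal
# built from 5 of its atoms (a stand-in for the chirp-Fourier dictionary).
n_samples, n_atoms, sparsity = 128, 512, 5
D = rng.normal(size=(n_samples, n_atoms))
D /= np.linalg.norm(D, axis=0)
support = rng.choice(n_atoms, sparsity, replace=False)
coef_true = np.zeros(n_atoms)
coef_true[support] = rng.normal(size=sparsity)
y = D @ coef_true + 0.01 * rng.normal(size=n_samples)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity).fit(D, y)
print("true support:   ", np.sort(support))
print("recovered atoms:", np.flatnonzero(omp.coef_))
```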

  11. Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.

    PubMed

    Van, Anh T; Hernando, Diego; Sutton, Bradley P

    2011-11-01

    A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method. PMID:21652284

  12. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate quantitatively the object. To ensure efficient quality control, the aim is to be able to state if reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling to evaluate their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables to evaluate the quality of the 3D reconstruction, as illustrated by the shown experimental results.

  13. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    NASA Astrophysics Data System (ADS)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can be only reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization based approach to propagate focus to non-edge image portions, for single image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques, are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low latency, line scanning based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends to faces, our solution solves the most distracting errors. A subjective assessment by paired comparison on a set of challenging low-depth-of-field images shows that the proposed approach achieves equal 3D image quality as optimization based approaches, and that facial blur compensation results in a significant improvement.

  14. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is shorter than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454

  15. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is shorter than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454
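
    A minimal sketch of the height-histogram ground segmentation described in the two records above, with the Gibbs-Markov random field refinement omitted; the bin size, ground band and synthetic scene are assumptions for illustration.

```python
import numpy as np

def segment_ground(points, bin_size=0.1, band=0.3):
    """Split a point cloud into ground / non-ground using a height histogram:
    the most populated bin is taken as the ground level and points within
    'band' metres of it are labelled ground (MRF refinement omitted)."""
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    hist, _ = np.histogram(z, bins=edges)
    ground_z = edges[np.argmax(hist)] + bin_size / 2
    is_ground = np.abs(z - ground_z) < band
    return points[is_ground], points[~is_ground]

# Synthetic scene: a flat ground plane plus a few taller "objects".
rng = np.random.default_rng(6)
ground = np.column_stack([rng.uniform(0, 20, 5000), rng.uniform(0, 20, 5000),
                          rng.normal(0.0, 0.05, 5000)])
objects = np.column_stack([rng.uniform(5, 8, 500), rng.uniform(5, 8, 500),
                           rng.uniform(0.5, 3.0, 500)])
g, ng = segment_ground(np.vstack([ground, objects]))
print(len(g), "ground points,", len(ng), "non-ground points")
```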

  16. Detecting and estimating errors in 3D restoration methods using analog models.

    NASA Astrophysics Data System (ADS)

    José Ramón, Ma; Pueyo, Emilio L.; Briz, José Luis

    2015-04-01

    Some geological scenarios may be important for a number of socio-economic reasons, such as water or energy resources, but the available underground information is often limited, scarce and heterogeneous. A truly 3D reconstruction, which is still necessary during the decision-making process, may have important social and economic implications. For this reason, restoration methods were developed. By honoring some geometric or mechanical laws, they help build a reliable image of the subsurface. Pioneering methods were first applied in 2D (balanced and restored cross-sections) during the sixties and seventies. Later on, owing to improvements in computational capabilities, they were extended to 3D. Currently, there are several academic and commercial restoration solutions: Unfold by the Université de Grenoble, Move by Midland Valley Exploration, Kine3D (on gOcad code) by Paradigm, and Dynel3D by igeoss-Schlumberger. We have developed our own restoration method, Pmag3Drest (IGME-Universidad de Zaragoza), which is designed to tackle complex geometrical scenarios using paleomagnetic vectors as a pseudo-3D indicator of deformation. However, all these methods have limitations based on the assumptions they need to establish. For this reason, detecting and estimating uncertainty in 3D restoration methods is of key importance for trusting the reconstructions. Checking the reliability and the internal consistency of every method, as well as comparing the results among restoration tools, is a critical issue never tackled so far because it is impossible to test the results directly in Nature. To overcome this problem we have developed a technique using analog models. We built complex geometric models inspired by real cases of superposed and/or conical folding at laboratory scale. The stratigraphic volumes were modeled using EVA (ethylene vinyl acetate) sheets. Their rheology (tensile and tear strength, elongation, density, etc.) and thickness can be chosen among a large number of values

  17. Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera

    NASA Technical Reports Server (NTRS)

    Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.

    1987-01-01

    A new approach is introduced for estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, the researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. The researchers introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and the amount of processing required.
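
    In the spirit of the sphere-of-unknown-location experiment, the sketch below fits a sphere's center and radius to noisy 3D points by plain gradient descent on a geometric residual; it is a single-view simplification under assumed data, not the multi-image maximum likelihood formulation of the paper.

```python
import numpy as np

def fit_sphere_gd(points, iters=2000, lr=0.05):
    """Fit a sphere (center c, radius r) to 3D points by gradient descent on
    the mean squared residual ((||p - c|| - r)^2)."""
    c = points.mean(axis=0)
    r = np.linalg.norm(points - c, axis=1).mean()
    for _ in range(iters):
        d = points - c
        dist = np.linalg.norm(d, axis=1)
        resid = dist - r                      # signed residuals
        grad_c = 2 * (resid[:, None] * (-d / dist[:, None])).mean(axis=0)
        grad_r = -2 * resid.mean()
        c -= lr * grad_c
        r -= lr * grad_r
    return c, r

# Noisy points on a sphere of radius 2 centered at (1, -2, 3).
rng = np.random.default_rng(7)
u = rng.normal(size=(1000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = np.array([1.0, -2.0, 3.0]) + 2.0 * u + 0.02 * rng.normal(size=(1000, 3))
print(fit_sphere_gd(pts))
```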

  18. Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
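
    The tumor sampling probability P can be illustrated with a Monte Carlo simulation of needle delivery error; the 3.5 mm RMS error and 18 mm core length come from the abstract, while the spherical tumor, core radius and straight-needle geometry are simplifying assumptions rather than the authors' model.

```python
import numpy as np

def sampling_probability(target, tumor_center, tumor_radius,
                         rms_error=3.5, core_length=18.0, core_radius=0.6,
                         n_trials=20000, seed=8):
    """Monte Carlo estimate of the probability that a biopsy core aimed at
    'target' (mm) along the z axis hits a spherical tumor, given isotropic
    Gaussian delivery error (the RMS error is interpreted as the 3D norm)."""
    rng = np.random.default_rng(seed)
    sigma = rms_error / np.sqrt(3)            # per-axis standard deviation
    ts = np.linspace(-core_length / 2, core_length / 2, 40)
    axis = np.array([0.0, 0.0, 1.0])          # assumed straight needle direction
    hits = 0
    for _ in range(n_trials):
        entry = np.asarray(target, dtype=float) + rng.normal(0.0, sigma, 3)
        pts = entry + np.outer(ts, axis)      # sample points along the core
        if np.any(np.linalg.norm(pts - tumor_center, axis=1) <= tumor_radius + core_radius):
            hits += 1
    return hits / n_trials

# Example: a 4 mm-radius tumor with the needle aimed at its center.
print(sampling_probability(target=[0, 0, 0], tumor_center=[0, 0, 0], tumor_radius=4.0))
```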

  19. Accurate estimation of forest carbon stocks by 3-D remote sensing of individual trees.

    PubMed

    Omasa, Kenji; Qiu, Guo Yu; Watanuki, Kenichi; Yoshimi, Kenji; Akiyama, Yukihide

    2003-03-15

    Forests are one of the most important carbon sinks on Earth. However, owing to the complex structure, variable geography, and large area of forests, accurate estimation of forest carbon stocks is still a challenge for both site surveying and remote sensing. For these reasons, the Kyoto Protocol requires the establishment of methodologies for estimating the carbon stocks of forests (Kyoto Protocol, Article 5). A possible solution to this challenge is to remotely measure the carbon stocks of every tree in an entire forest. Here, we present a methodology for estimating carbon stocks of a Japanese cedar forest by using a high-resolution, helicopter-borne 3-dimensional (3-D) scanning lidar system that measures the 3-D canopy structure of every tree in a forest. Results show that a digital image (10-cm mesh) of woody canopy can be acquired. The treetop can be detected automatically with a reasonable accuracy. The absolute error ranges for tree height measurements are within 42 cm. Allometric relationships of height to carbon stocks then permit estimation of total carbon storage by measurement of carbon stocks of every tree. Thus, we suggest that our methodology can be used to accurately estimate the carbon stocks of Japanese cedar forests at a stand scale. Periodic measurements will reveal changes in forest carbon stocks. PMID:12680675

  20. 3D visualization and biovolume estimation of motile cells by digital holography

    NASA Astrophysics Data System (ADS)

    Merola, F.; Miccio, L.; Memmolo, P.; Di Caprio, G.; Coppola, G.; Netti, P.

    2014-05-01

    For the monitoring of biological samples, physical parameters such as size, shape and refractive index are of crucial importance. However, up to now the morphological analysis of in-vitro cells has been limited to 2D analysis by classical optical microscopy such as phase-contrast or DIC. Here we show an approach that exploits the capability of optical tweezers to trap bovine spermatozoa flowing in a microfluidic channel and set them into self-rotation. At the same time, digital holographic microscopy images the cell in phase-contrast modality at each angular position during the rotation. From the collected information about the cell's phase-contrast signature, we demonstrate that it is possible to reconstruct the 3D shape of the cell and estimate its volume. The method can open new pathways for rapid measurement of in-vitro cell volume in microfluidic lab-on-a-chip platforms, thus giving access to the 3D shape of the object while avoiding tomographic microscopy, which is a cumbersome and very complex approach to measuring 3D shape and estimating biovolume.

  1. Parametric estimation of 3D tubular structures for diffuse optical tomography

    PubMed Central

    Larusson, Fridrik; Anderson, Pamela G.; Rosenberg, Elizabeth; Kilmer, Misha E.; Sassaroli, Angelo; Fantini, Sergio; Miller, Eric L.

    2013-01-01

    We explore the use of diffuse optical tomography (DOT) for the recovery of 3D tubular shapes representing vascular structures in breast tissue. Using a parametric level set method (PaLS) our method incorporates the connectedness of vascular structures in breast tissue to reconstruct shape and absorption values from severely limited data sets. The approach is based on a decomposition of the unknown structure into a series of two dimensional slices. Using a simplified physical model that ignores 3D effects of the complete structure, we develop a novel inter-slice regularization strategy to obtain global regularity. We report on simulated and experimental reconstructions using realistic optical contrasts where our method provides a more accurate estimate compared to an unregularized approach and a pixel based reconstruction. PMID:23411913

  2. 3D Porosity Estimation of the Nankai Trough Sediments from Core-log-seismic Integration

    NASA Astrophysics Data System (ADS)

    Park, J. O.

    2015-12-01

    The Nankai Trough off southwest Japan is one of the best subduction zones in which to study megathrust earthquake faults. Historic, great megathrust earthquakes with a recurrence interval of 100-200 yr have generated strong motion and large tsunamis along the Nankai Trough subduction zone. At the Nankai Trough margin, the Philippine Sea Plate (PSP) is being subducted beneath the Eurasian Plate to the northwest at a convergence rate of ~4 cm/yr. The Shikoku Basin, the northern part of the PSP, is estimated to have opened between 25 and 15 Ma by backarc spreading of the Izu-Bonin arc. The >100-km-wide Nankai accretionary wedge, which has developed landward of the trench since the Miocene, mainly consists of offscraped and underplated materials from the trough-fill turbidites and the Shikoku Basin hemipelagic sediments. In particular, the physical properties of the incoming hemipelagic sediments may be critical for the seismogenic behavior of the megathrust fault. We have carried out core-log-seismic integration (CLSI) to estimate 3D acoustic impedance and porosity for the incoming sediments in the Nankai Trough. For the CLSI, we used 3D seismic reflection data, and P-wave velocity and density data obtained during IODP (Integrated Ocean Drilling Program) Expeditions 322 and 333. We computed acoustic impedance depth profiles for the IODP drilling sites from the P-wave velocity and density data. We constructed seismic convolution models with the acoustic impedance profiles and a source wavelet extracted from the seismic data, adjusting the seismic models to the observed seismic traces with an inversion method. As a result, we obtained a 3D acoustic impedance volume and then converted it to a 3D porosity volume. In general, the 3D porosities decrease with depth. We found a porosity anomaly zone with alternating high and low porosities seaward of the trough axis. In this talk, we will show the detailed 3D porosity of the incoming sediments, and present implications of the porosity anomaly zone for the

  3. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  4. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388
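
    The 3D point reconstruction step of such a stereo outside-in tracker can be sketched with linear (DLT) triangulation from two calibrated cameras; the projection matrices below are synthetic, and the sketch ignores the target identification and calibration stages described in the record.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from pixel observations
    x1, x2 in two calibrated cameras with 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                       # de-homogenize

# Two synthetic cameras: identical intrinsics, 1 m baseline along x.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 5.0, 1.0])      # homogeneous world point
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]     # projection into camera 1
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]     # projection into camera 2
print(np.round(triangulate(P1, P2, x1, x2), 3))   # ~ [0.3, -0.2, 5.0]
```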

  5. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    USGS Publications Warehouse

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
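
    As a simple illustration, a plain 3D kernel density estimate over hypothetical GPS relocations is shown below using scipy; note that the movement-based estimator in the record also conditions on the trajectory, which this sketch does not.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)

# Hypothetical GPS relocations (x, y in metres, z = altitude) for one animal,
# generated as a random walk purely for illustration.
track = np.cumsum(rng.normal(scale=[50, 50, 10], size=(500, 3)), axis=0)

kde = gaussian_kde(track.T)                   # scipy expects shape (d, n)

# Evaluate the utilization density on a coarse 3D grid.
lo, hi = track.min(axis=0), track.max(axis=0)
grid = np.stack(np.meshgrid(*[np.linspace(l, h, 20) for l, h in zip(lo, hi)],
                            indexing="ij"), axis=-1)
density = kde(grid.reshape(-1, 3).T).reshape(20, 20, 20)
print("density grid shape:", density.shape, "peak:", density.max())
```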

  6. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0–1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.

  7. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy.

    PubMed

    Stemkens, Bjorn; Tijssen, Rob H N; de Senneville, Baudouin Denis; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2016-07-21

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy. PMID:27362636

  8. An Investigation on the Feasibility of Uncalibrated and Unconstrained Gaze Tracking for Human Assistive Applications by Using Head Pose Estimation

    PubMed Central

    Cazzato, Dario; Leo, Marco; Distante, Cosimo

    2014-01-01

    This paper investigates the possibility of accurately detecting and tracking human gaze by using an unconstrained and noninvasive approach based on the head pose information extracted by an RGB-D device. The main advantages of the proposed solution are that it can operate in a totally unconstrained environment, it does not require any initial calibration and it can work in real-time. These features make it suitable for being used to assist human in everyday life (e.g., remote device control) or in specific actions (e.g., rehabilitation), and in general in all those applications where it is not possible to ask for user cooperation (e.g., when users with neurological impairments are involved). To evaluate gaze estimation accuracy, the proposed approach has been largely tested and results are then compared with the leading methods in the state of the art, which, in general, make use of strong constraints on the people movements, invasive/additional hardware and supervised pattern recognition modules. Experimental tests demonstrated that, in most cases, the errors in gaze estimation are comparable to the state of the art methods, although it works without additional constraints, calibration and supervised learning. PMID:24824369

  9. Fast myocardial strain estimation from 3D ultrasound through elastic image registration with analytic regularization

    NASA Astrophysics Data System (ADS)

    Chakraborty, Bidisha; Heyde, Brecht; Alessandrini, Martino; D'hooge, Jan

    2016-04-01

    Image registration techniques using free-form deformation models have shown promising results for 3D myocardial strain estimation from ultrasound. However, the use of this technique has mostly been limited to research institutes due to the high computational demand, which is primarily due to the computational load of the regularization term ensuring spatially smooth cardiac strain estimates. Indeed, this term typically requires evaluating derivatives of the transformation field numerically in each voxel of the image during every iteration of the optimization process. In this paper, we replace this time-consuming step with a closed-form solution directly associated with the transformation field, resulting in a speed-up factor of ~10-60,000 for a typical 3D B-mode image of 250³ or 500³ voxels, depending upon the size and the parametrization of the transformation field. The performance of the numeric and the analytic solutions was contrasted by computing tracking and strain accuracy on two realistic synthetic 3D cardiac ultrasound sequences, mimicking two ischemic motion patterns. Mean and standard deviation of the displacement errors over the cardiac cycle for the numeric and analytic solutions were 0.68+/-0.40 mm and 0.75+/-0.43 mm respectively. Correlations for the radial, longitudinal and circumferential strain components at end-systole were 0.89, 0.83 and 0.95 versus 0.90, 0.88 and 0.92 for the numeric and analytic regularization respectively. The analytic solution matched the performance of the numeric solution as no statistically significant differences (p>0.05) were found when expressed in terms of bias or limits-of-agreement.

  10. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  11. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    PubMed

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556

  12. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    PubMed Central

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
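
    A minimal sketch of the kind of analytic geometry referred to above, under assumed conditions: a calibrated pinhole camera with known pose and a target lying on the ground plane z = 0. The intrinsics and pose values are placeholders, not taken from the paper.

```python
import numpy as np

def pixel_to_ground_point(u, v, K, R_cw, cam_center):
    """Back-project pixel (u, v) and intersect the viewing ray with the ground
    plane z = 0 (world frame). R_cw maps camera axes to world axes and
    cam_center is the camera position in the world frame."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in the camera frame
    ray_world = R_cw @ ray_cam                            # ray in the world frame
    lam = -cam_center[2] / ray_world[2]                   # solve z = 0 along the ray
    return cam_center + lam * ray_world

# Assumed calibration: 640x480 image, f ~ 554 px, camera 0.3 m above the
# ground, pitched 30 degrees downward, looking along world +y.
K = np.array([[554.0, 0.0, 320.0],
              [0.0, 554.0, 240.0],
              [0.0, 0.0, 1.0]])
p = np.deg2rad(30.0)
R_cw = np.array([[1.0, 0.0,          0.0],
                 [0.0, -np.sin(p),   np.cos(p)],
                 [0.0, -np.cos(p),  -np.sin(p)]])  # camera x right, y down, z forward
cam_center = np.array([0.0, 0.0, 0.3])

print(pixel_to_ground_point(320.0, 400.0, K, R_cw, cam_center))  # ~[0, 0.29, 0] m
```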

  13. 3D global estimation and augmented reality visualization of intra-operative X-ray dose.

    PubMed

    Rodas, Nicolas Loy; Padoy, Nicolas

    2014-01-01

    The growing use of image-guided minimally-invasive surgical procedures is confronting clinicians and surgical staff with new radiation exposure risks from X-ray imaging devices. The accurate estimation of intra-operative radiation exposure can increase staff awareness of radiation exposure risks and enable the implementation of well-adapted safety measures. The current surgical practice of wearing a single dosimeter at chest level to measure radiation exposure does not provide a sufficiently accurate estimation of radiation absorption throughout the body. In this paper, we propose an approach that combines data from wireless dosimeters with the simulation of radiation propagation in order to provide a global radiation risk map in the area near the X-ray device. We use a multi-camera RGBD system to obtain a 3D point cloud reconstruction of the room. The positions of the table, C-arm and clinician are then used 1) to simulate the propagation of radiation in a real-world setup and 2) to overlay the resulting 3D risk-map onto the scene in an augmented reality manner. By using real-time wireless dosimeters in our system, we can both calibrate the simulation and validate its accuracy at specific locations in real-time. We demonstrate our system in an operating room equipped with a robotised X-ray imaging device and validate the radiation simulation on several X-ray acquisition setups. PMID:25333145

  14. On-line 3D motion estimation using low resolution MRI

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2015-08-01

    Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the needed tissue signal for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimations. Potentially, spatially undersampled images yield comparable motion estimations. At the same time, their acquisition times would reduce greatly due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations are extracted consistently using the same motion estimation for each downsampled dataset. Errors between the original and the respectively downsampled version of the dataset are then evaluated. Compared to ground-truth, results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In an MRI-guided therapy scenario, imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.
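
    A rough back-of-the-envelope check of the quoted fourfold speed-up, assuming a 3D Cartesian acquisition whose scan time scales with the number of phase-encoding steps in the two phase-encoded directions (the frequency-encoded direction being essentially free); this scaling argument is an assumption, not taken from the paper.

```latex
% Assumed 3D Cartesian acquisition: T_scan ∝ TR · N_pe1 · N_pe2
\[
\frac{T_{\mathrm{scan}}\bigl((5\,\mathrm{mm})^{3}\bigr)}
     {T_{\mathrm{scan}}\bigl((2.5\,\mathrm{mm})^{3}\bigr)}
\approx
\frac{(N_{\mathrm{pe1}}/2)\,(N_{\mathrm{pe2}}/2)}{N_{\mathrm{pe1}}\,N_{\mathrm{pe2}}}
= \frac{1}{4}
\]
```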

  15. Automated Segmentation of the Right Ventricle in 3D Echocardiography: A Kalman Filter State Estimation Approach.

    PubMed

    Bersvendsen, Jorn; Orderud, Fredrik; Massey, Richard John; Fosså, Kristian; Gerard, Olivier; Urheim, Stig; Samset, Eigil

    2016-01-01

    As the right ventricle's (RV) role in cardiovascular diseases is being more widely recognized, interest in RV imaging, function and quantification is growing. However, there are currently few RV quantification methods for 3D echocardiography presented in the literature or commercially available. In this paper we propose an automated RV segmentation method for 3D echocardiographic images. We represent the RV geometry by a Doo-Sabin subdivision surface with deformation modes derived from a training set of manual segmentations. The segmentation is then represented as a state estimation problem and solved with an extended Kalman filter by combining the RV geometry with a motion model and edge detection. Validation was performed by comparing surface-surface distances, volumes and ejection fractions in 17 patients with aortic insufficiency between the proposed method, magnetic resonance imaging (MRI), and a manual echocardiographic reference. The algorithm was efficient with a mean computation time of 2.0 s. The mean absolute distance between the proposed and manual segmentations was 3.6 ± 0.7 mm. Good agreement of end-diastolic volume, end-systolic volume and ejection fraction with respect to MRI (-26 ± 24 mL, -16 ± 26 mL and 0 ± 10%, respectively) and a manual echocardiographic reference (7 ± 30 mL, 13 ± 17 mL and -5 ± 7%, respectively) was observed. PMID:26168434
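
    As a sketch of the predict/update machinery that such a state-estimation formulation relies on, the snippet below implements a plain linear Kalman filter with generic matrices; it is not the paper's extended Kalman filter over Doo-Sabin control vertices and edge-detection measurements, and all numbers are illustrative.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One linear Kalman predict/update step.

    x, P : state mean and covariance
    z    : measurement vector (a generic observation here)
    F, Q : state transition and process noise (the motion model)
    H, R : measurement matrix and measurement noise
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Tiny synthetic example: track a 1D position/velocity state from noisy positions.
dt = 1.0 / 25.0                              # assumed frame rate
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 1e-3 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.05 ** 2]])
x, P = np.zeros(2), np.eye(2)
for z in [0.02, 0.05, 0.04, 0.09, 0.11]:     # made-up measurements
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
print("filtered state:", x)
```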

  16. Estimation of foot pressure from human footprint depths using 3D scanner

    NASA Astrophysics Data System (ADS)

    Wibowo, Dwi Basuki; Haryadi, Gunawan Dwi; Priambodo, Agus

    2016-03-01

    The analysis of normal and pathological variation in human foot morphology is central to several biomedical disciplines, including orthopedics, orthotic design, sports sciences, and physical anthropology, and it is also important for efficient footwear design. A classic and frequently used approach to study foot morphology is analysis of the footprint shape and footprint depth. Footprints are relatively easy to produce and to measure, and they can be preserved naturally in different soils. In this study, we correlate footprint depth with the corresponding foot pressure of an individual using a 3D scanner. Several approaches are used for modeling and estimating footprint depths and foot pressures. The deepest footprint point is calculated as the difference between the maximum and minimum z coordinates, and the average foot pressure is calculated as the ground reaction force (GRF) divided by the foot contact area, which is identified with the average footprint depth. Footprint depth was evaluated by importing the 3D scanner file (dxf) into AutoCAD; the z coordinates were then sorted from the highest to the lowest value using Microsoft Excel to display footprint depth in different colors. This research is only a qualitative study because no foot pressure device was used as a comparator; the resulting maximum pressure is 3.02 N/cm² on the calcaneus, 3.66 N/cm² on the lateral arch, and 3.68 N/cm² on the metatarsal and hallux.
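
    A small numeric sketch of the two quantities described above (deepest point from the z range, mean pressure as ground reaction force over contact area). The depth grid, GRF and cell size below are invented inputs, not data from the study.

```python
import numpy as np

# Invented footprint depth map (mm) on a 2 mm x 2 mm grid; NaN = outside the print.
rng = np.random.default_rng(1)
depth = np.full((200, 80), np.nan)
depth[40:160, 20:60] = rng.uniform(0.5, 8.0, (120, 40))

deepest_point_mm = np.nanmax(depth) - np.nanmin(depth)   # z_max - z_min, as above

grf_newton = 700.0                                       # assumed ground reaction force
cell_area_cm2 = 0.2 * 0.2                                # 2 mm x 2 mm grid cells
contact_area_cm2 = np.count_nonzero(~np.isnan(depth)) * cell_area_cm2
mean_pressure = grf_newton / contact_area_cm2            # N/cm^2

print(f"deepest point: {deepest_point_mm:.2f} mm, "
      f"mean pressure: {mean_pressure:.2f} N/cm^2")
```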

  17. Joint azimuth and elevation localization estimates in 3D synthetic aperture radar scenarios

    NASA Astrophysics Data System (ADS)

    Pepin, Matthew

    2015-05-01

    The location of point scatterers in Synthetic Aperture Radar (SAR) data is exploited in several modern analyses including persistent scatterer tracking, terrain deformation, and object identification. The changes in scatterers over time (pulse-to-pulse, including vibration and movement, or pass-to-pass, including direct follow-on, time of day, and season) can be used to draw more information about the data collection. Multiple pass and multiple antenna SAR scenarios have extended these analyses to location in three dimensions. Either multiple passes at different elevation angles may be flown, or an antenna array with an elevation baseline performs a single pass. Parametric spectral estimation in each dimension allows sub-pixel localization of point scatterers, in some cases additionally exploiting the multiple samples in each cross dimension. The accuracy of parametric estimation is increased when several azimuth passes or elevations (snapshots) are summed to mitigate measurement noise. Inherent range curvature across the aperture, however, limits the accuracy in the range dimension to that attained from a single pulse. Unlike the stationary case, where radar returns may be averaged, the movement necessary to create the synthetic aperture is only approximately (to pixel-level accuracy) removed to form SAR images. In parametric estimation, increased accuracy is attained when two dimensions are used to jointly estimate locations. This paper involves jointly estimating azimuth and elevation to attain 3D location estimates of increased accuracy. In this way the full 2D array of azimuth and elevation samples is used to obtain the maximum possible accuracy. In addition, the independent dimension collection geometry requires choosing which dimension, azimuth or elevation, attains the highest accuracy, while joint estimation increases accuracy in both dimensions. When maximum parametric estimation accuracy in azimuth is selected, the standard interferometric SAR scenario results. When

  18. Estimation of single cell volume from 3D confocal images using automatic data processing

    NASA Astrophysics Data System (ADS)

    Chorvatova, A.; Cagalinec, M.; Mateasik, A.; Chorvat, D., Jr.

    2012-06-01

    Cardiac cells are highly structured with a non-uniform morphology. Although precise estimation of their volume is essential for correct evaluation of hypertrophic changes of the heart, simple and unified techniques that allow determination of the single cardiomyocyte volume with sufficient precision are still limited. Here, we describe a novel approach to assess the cell volume from confocal microscopy 3D images of living cardiac myocytes. We propose a fast procedure based on segmentation using active deformable contours. This technique is independent of laser gain and/or pinhole settings and it is also applicable to images of cells stained with low fluorescence markers. The presented approach is a promising new tool to investigate changes in the cell volume during normal, as well as pathological growth, as we demonstrate in the case of cell enlargement during hypertension in rats.

  19. 3D viscosity maps for Greenland and effect on GRACE mass balance estimates

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Xu, Zheng

    2016-04-01

    The GRACE satellite mission measures mass loss of the Greenland ice sheet. To correct for glacial isostatic adjustment (GIA), numerical models are used. Although generally found to be a small signal, the full range of possible GIA models has not been explored yet. In particular, low viscosities due to a wet mantle and high temperatures due to the nearby Iceland hotspot could have a significant effect on GIA gravity rates. The goal of this study is to present a range of possible viscosity maps, and investigate the effect on GRACE mass balance estimates. Viscosity is derived using flow laws for olivine. Mantle temperature is computed from global seismology models, based on temperature derivatives for different mantle compositions. An indication of grain sizes is obtained from xenolith findings at a few locations. We also investigate the weakening effect of the presence of melt. To calculate gravity rates, we use a finite-element GIA model with the 3D viscosity maps and the ICE-5G loading history. GRACE mass balances for mascons in Greenland are derived with a least-squares inversion, using separate constraints for the inland and coastal areas in Greenland. Biases in the least-squares inversion are corrected using scale factors estimated from a simulation based on a surface mass balance model (Xu et al., submitted to The Cryosphere). Model results show enhanced gravity rates in the west and south of Greenland with 3D viscosity maps, compared to GIA models with 1D viscosity. The effect on regional mass balance is up to 5 Gt/year. Regional low viscosity can make present-day gravity rates sensitive to ice thickness changes in the last decades. Therefore, an improved ice loading history for these time scales is needed.

  20. A hierarchical Bayesian approach for earthquake location and data uncertainty estimation in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Arroucau, Pierre; Custódio, Susana

    2015-04-01

    Solving inverse problems requires an estimate of data uncertainties. This usually takes the form of a data covariance matrix, which determines the shape of the model posterior distribution. Those uncertainties are yet not always known precisely and it is common practice to simply set them to a fixed, reasonable value. In the case of earthquake location, the hypocentral parameters (longitude, latitude, depth and origin time) are typically inverted for using seismic phase arrival times. But quantitative data variance estimates are rarely provided. Instead, arrival time catalogs usually associate phase picks with a quality factor, which is subsequently interpreted more or less arbitrarily in terms of data uncertainty in the location procedure. Here, we present a hierarchical Bayesian algorithm for earthquake location in 3D heterogeneous media, in which not only the earthquake hypocentral parameters, but also the P- and S-wave arrival time uncertainties, are inverted for, hence allowing more realistic posterior model covariance estimates. Forward modeling is achieved by means of the Fast Marching Method (FMM), an eikonal solver which has the ability to take interfaces into account, so direct, reflected and refracted phases can be used in the inversion. We illustrate the ability of our algorithm to retrieve earthquake hypocentral parameters as well as data uncertainties through synthetic examples and using a subset of arrival time catalogs for mainland Portugal and its Atlantic margin.

  1. A Hierarchical Bayesian Approach for Earthquake Location and Data Uncertainty Estimation in 3D Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Arroucau, P.; Custodio, S.

    2014-12-01

    Solving inverse problems requires an estimate of data uncertainties. This usually takes the form of a data covariance matrix, which determines the shape of the model posterior distribution. Those uncertainties are yet not always known precisely and it is common practice to simply set them to a fixed, reasonable value. In the case of earthquake location, the hypocentral parameters (longitude, latitude, depth and origin time) are typically inverted for using seismic phase arrival times. But quantitative data variance estimates are rarely provided. Instead, arrival time catalogs usually associate phase picks with a quality factor, which is subsequently interpreted more or less arbitrarily in terms of data uncertainty in the location procedure. Here, we present a hierarchical Bayesian algorithm for earthquake location in 3D heterogeneous media, in which not only the earthquake hypocentral parameters, but also the P- and S-wave arrival time uncertainties, are inverted for, hence allowing more realistic posterior model covariance estimates. Forward modeling is achieved by means of the Fast Marching Method (FMM), an eikonal solver which has the ability to take interfaces into account, so direct, reflected and refracted phases can be used in the inversion. We illustrate the ability of our algorithm to retrieve earthquake hypocentral parameters as well as data uncertainties through synthetic examples and using a subset of arrival time catalogs for mainland Portugal and its Atlantic margin.
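
    A compact sketch of the hierarchical idea, i.e. sampling the data noise level together with the hypocentral parameters. A homogeneous-velocity straight-ray travel-time model is used here as a stand-in for the Fast Marching Method forward solver, and all stations, arrival times and priors are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surface stations (km) and a constant-velocity travel-time model
# standing in for the paper's Fast Marching Method eikonal solver.
stations = rng.uniform(-50.0, 50.0, (8, 3))
stations[:, 2] = 0.0
V = 6.0  # km/s, assumed homogeneous velocity (placeholder)

def travel_times(hypo):
    x, y, z, t0 = hypo
    d = np.linalg.norm(stations - np.array([x, y, z]), axis=1)
    return t0 + d / V

true_hypo = np.array([5.0, -10.0, 12.0, 0.0])
obs = travel_times(true_hypo) + rng.normal(0.0, 0.1, len(stations))

def log_posterior(hypo, log_sigma):
    # Gaussian likelihood with unknown sigma (the hierarchical part) + flat priors.
    sigma = np.exp(log_sigma)
    r = obs - travel_times(hypo)
    return -0.5 * np.sum((r / sigma) ** 2) - len(r) * log_sigma

# Metropolis-Hastings over (x, y, z, t0, log_sigma).
state = np.array([0.0, 0.0, 10.0, 0.0, np.log(0.5)])
lp = log_posterior(state[:4], state[4])
samples = []
for it in range(20000):
    prop = state + rng.normal(0.0, [0.5, 0.5, 0.5, 0.05, 0.1])
    lp_prop = log_posterior(prop[:4], prop[4])
    if np.log(rng.random()) < lp_prop - lp:
        state, lp = prop, lp_prop
    if it > 5000:
        samples.append(state.copy())
samples = np.array(samples)
print("posterior mean hypocentre (x, y, z, t0):", samples[:, :4].mean(axis=0))
print("posterior mean noise sigma (s):", np.exp(samples[:, 4]).mean())
```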

  2. UAV based 3D digital surface model to estimate paleolandscape in high mountainous environment

    NASA Astrophysics Data System (ADS)

    Mészáros, János; Árvai, Mátyás; Kohán, Balázs; Deák, Márton; Nagy, Balázs

    2016-04-01

    reliable results and resolution. Based on the sediment layers of the peat bog together with the generated 3D surface model the paleoenvironment, the largest paleowater level can be reconstructed and we can estimate the dimension of the landslide which created the basin of the peat bog.

  3. Population Estimation Using a 3D City Model: A Multi-Scale Country-Wide Study in the Netherlands

    PubMed Central

    Arroyo Ohori, Ken; Ledoux, Hugo; Peters, Ravi; Stoter, Jantien

    2016-01-01

    The remote estimation of a region’s population has for decades been a key application of geographic information science in demography. Most studies have used 2D data (maps, satellite imagery) to estimate population avoiding field surveys and questionnaires. As the availability of semantic 3D city models is constantly increasing, we investigate to what extent they can be used for the same purpose. Based on the assumption that housing space is a proxy for the number of its residents, we use two methods to estimate the population with 3D city models in two directions: (1) disaggregation (areal interpolation) to estimate the population of small administrative entities (e.g. neighbourhoods) from that of larger ones (e.g. municipalities); and (2) a statistical modelling approach to estimate the population of large entities from a sample composed of their smaller ones (e.g. one acquired by a government register). Starting from a complete Dutch census dataset at the neighbourhood level and a 3D model of all 9.9 million buildings in the Netherlands, we compare the population estimates obtained by both methods with the actual population as reported in the census, and use it to evaluate the quality that can be achieved by estimations at different administrative levels. We also analyse how the volume-based estimation enabled by 3D city models fares in comparison to 2D methods using building footprints and floor areas, as well as how it is affected by different levels of semantic detail in a 3D city model. We conclude that 3D city models are useful for estimations of large areas (e.g. for a country), and that the 3D approach has clear advantages over the 2D approach. PMID:27254151

  4. Population Estimation Using a 3D City Model: A Multi-Scale Country-Wide Study in the Netherlands.

    PubMed

    Biljecki, Filip; Arroyo Ohori, Ken; Ledoux, Hugo; Peters, Ravi; Stoter, Jantien

    2016-01-01

    The remote estimation of a region's population has for decades been a key application of geographic information science in demography. Most studies have used 2D data (maps, satellite imagery) to estimate population avoiding field surveys and questionnaires. As the availability of semantic 3D city models is constantly increasing, we investigate to what extent they can be used for the same purpose. Based on the assumption that housing space is a proxy for the number of its residents, we use two methods to estimate the population with 3D city models in two directions: (1) disaggregation (areal interpolation) to estimate the population of small administrative entities (e.g. neighbourhoods) from that of larger ones (e.g. municipalities); and (2) a statistical modelling approach to estimate the population of large entities from a sample composed of their smaller ones (e.g. one acquired by a government register). Starting from a complete Dutch census dataset at the neighbourhood level and a 3D model of all 9.9 million buildings in the Netherlands, we compare the population estimates obtained by both methods with the actual population as reported in the census, and use it to evaluate the quality that can be achieved by estimations at different administrative levels. We also analyse how the volume-based estimation enabled by 3D city models fares in comparison to 2D methods using building footprints and floor areas, as well as how it is affected by different levels of semantic detail in a 3D city model. We conclude that 3D city models are useful for estimations of large areas (e.g. for a country), and that the 3D approach has clear advantages over the 2D approach. PMID:27254151
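
    A minimal sketch of the disaggregation direction described above, assuming building volumes aggregated per neighbourhood are available from the 3D city model; the numbers are invented, not taken from the Dutch census or the national building data.

```python
# Volume-weighted areal interpolation: split a municipality's known population
# over its neighbourhoods in proportion to their total building volume.
municipality_population = 40_000                     # assumed known total
neighbourhood_volume_m3 = {                          # invented 3D-model volumes
    "north": 1_200_000.0,
    "centre": 2_600_000.0,
    "south": 900_000.0,
}
total_volume = sum(neighbourhood_volume_m3.values())
estimates = {
    name: municipality_population * vol / total_volume
    for name, vol in neighbourhood_volume_m3.items()
}
print(estimates)   # e.g. "centre" receives ~22,128 of the 40,000 residents
```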

  5. 3D pore-network analysis and permeability estimation of deformation bands hosted in carbonate grainstones.

    NASA Astrophysics Data System (ADS)

    Zambrano, Miller; Tondi, Emanuele; Mancini, Lucia; Trias, F. Xavier; Arzilli, Fabio; Lanzafame, Gabriele; Aibibula, Nijiati

    2016-04-01

    In porous rocks, strain is commonly localized in narrow Deformation Bands (DBs), where the petrophysical properties are significantly modified with respect to the pristine rock. As a consequence, DBs could have an important effect on production and development of porous reservoirs, representing baffle zones or, in some cases, contributing to reservoir compartmentalization. Taking into consideration that the decrease of permeability within DBs is related to changes in the porous network properties (porosity, connectivity) and the pore morphology (size distribution, specific surface area), an accurate porous network characterization is useful for understanding both the effect of deformation banding on the porous network and its influence upon fluid flow through the deformed rocks. In this work, a 3D characterization of the microstructure and texture of DBs hosted in porous carbonate grainstones was obtained at the Elettra laboratory (Trieste, Italy) by using two different techniques: phase-contrast synchrotron radiation computed microtomography (micro-CT) and microfocus X-ray micro-CT. These techniques are suitable for addressing quantitative analysis of the porous network and implementing Computational Fluid Dynamics (CFD) experiments in porous rocks. Evaluated samples correspond to grainstones highly affected by DBs exposed in the San Vito Lo Capo peninsula (Sicily, Italy), Favignana Island (Sicily, Italy) and the Majella Mountain (Abruzzo, Italy). For the analysis, the data were segmented into two main components: porous and solid phases. The properties of interest are porosity, connectivity, and grain and/or pore textural properties, in order to differentiate host rock and DBs in different zones. Permeability of the DBs and the surrounding host rock was estimated by the implementation of CFD experiments; permeability results are validated by comparison with in situ measurements. In agreement with previous studies, the 3D image analysis and flow simulation indicate that DBs could constitute

  6. Scoliosis corrective force estimation from the implanted rod deformation using 3D-FEM analysis

    PubMed Central

    2015-01-01

    Background Improvement of material properties in spinal instrumentation has brought better deformity correction in scoliosis surgery in recent years. The increase of mechanical strength in instruments directly means an increase of the force acting on the bone-implant interface during scoliosis surgery. However, the actual correction force during the correction maneuver and the safety margin of the pull-out force on each screw were not well known. In the present study, estimated corrective forces and pull-out forces were analyzed using a novel method based on Finite Element Analysis (FEA). Methods Twenty adolescent idiopathic scoliosis patients (1 boy and 19 girls) who underwent reconstructive scoliosis surgery between June 2009 and June 2011 were included in this study. Scoliosis correction was performed with a 6 mm diameter titanium rod (Ti6Al7Nb) using the simultaneous double rod rotation technique (SDRRT) in all cases. The pre-maneuver and post-maneuver rod geometry was collected from intraoperative tracing and postoperative 3D-CT images, and 3D-FEA was performed with ANSYS. Cobb angle of the major curve, correction rate and thoracic kyphosis were measured on X-ray images. Results Average age at surgery was 14.8 years, and average fusion length was 8.9 segments. The major curve was corrected from 63.1 to 18.1 degrees on average and the correction rate was 71.4%. Rod geometry showed significant change on the concave side. Curvature of the rod on the concave and convex sides decreased from 33.6 to 17.8 degrees, and from 25.9 to 23.8 degrees, respectively. Estimated pull-out forces at the apical vertebra were 160.0 N in the concave side screw and 35.6 N in the convex side screw. Estimated push-in forces at LIV and UIV were 305.1 N in the concave side screw and 86.4 N in the convex side screw. Conclusions Corrective force during scoliosis surgery was demonstrated to be about four times greater on the concave side than on the convex side. Averaged pull-out and push-in forces fell below previously reported safety

  7. Augmenting ViSP's 3D Model-Based Tracker with RGB-D SLAM for 3D Pose Estimation in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Li-Chee-Ming, J.; Armenakis, C.

    2016-06-01

    This paper presents a novel application of the Visual Servoing Platform (ViSP) for pose estimation in indoor and GPS-denied outdoor environments. Our proposed solution integrates the trajectory solution from RGB-D SLAM into ViSP's pose estimation process. Li-Chee-Ming and Armenakis (2015) explored the application of ViSP in mapping large outdoor environments, and tracking larger objects (i.e., building models). Their experiments revealed that tracking was often lost due to a lack of model features in the camera's field of view, and also because of rapid camera motion. Further, the pose estimate was often biased due to incorrect feature matches. This work proposes a solution to improve ViSP's pose estimation performance, aiming specifically to reduce the frequency of tracking losses and reduce the biases present in the pose estimate. This paper explores the integration of ViSP with RGB-D SLAM. We discuss the performance of the combined tracker in mapping indoor environments and tracking 3D wireframe indoor building models, and present preliminary results from our experiments.

  8. Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information

    PubMed Central

    Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung

    2014-01-01

    We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with or without the compensation of eye saccade movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors. PMID:24834910
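
    A small sketch of the first step described above: picking, inside the circle defined by the estimated gaze point and the gaze-estimation error, the pixel with maximum edge strength. The edge map, gaze point and error radius are assumed inputs, not the paper's calibrated values.

```python
import numpy as np

def refine_gaze(edge_map, gaze_xy, radius_px):
    """Return the pixel with maximum edge strength inside the circle centred on
    the estimated gaze point, whose radius reflects the gaze estimation error."""
    h, w = edge_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius_px ** 2
    masked = np.where(inside, edge_map, -np.inf)
    iy, ix = np.unravel_index(np.argmax(masked), masked.shape)
    return ix, iy

# Example with a random edge map; the 25 px radius is an assumed error bound.
edges = np.random.default_rng(5).random((240, 320))
print(refine_gaze(edges, gaze_xy=(120, 80), radius_px=25))
```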

  9. 3D-MAD: A Full Reference Stereoscopic Image Quality Estimator Based on Binocular Lightness and Contrast Perception.

    PubMed

    Zhang, Yi; Chandler, Damon M

    2015-11-01

    Algorithms for a stereoscopic image quality assessment (IQA) aim to estimate the qualities of 3D images in a manner that agrees with human judgments. The modern stereoscopic IQA algorithms often apply 2D IQA algorithms on stereoscopic views, disparity maps, and/or cyclopean images, to yield an overall quality estimate based on the properties of the human visual system. This paper presents an extension of our previous 2D most apparent distortion (MAD) algorithm to a 3D version (3D-MAD) to evaluate 3D image quality. The 3D-MAD operates via two main stages, which estimate perceived quality degradation due to 1) distortion of the monocular views and 2) distortion of the cyclopean view. In the first stage, the conventional MAD algorithm is applied on the two monocular views, and then the combined binocular quality is estimated via a weighted sum of the two estimates, where the weights are determined based on a block-based contrast measure. In the second stage, intermediate maps corresponding to the lightness distance and the pixel-based contrast are generated based on a multipathway contrast gain-control model. Then, the cyclopean view quality is estimated by measuring the statistical-difference-based features obtained from the reference stereopair and the distorted stereopair, respectively. Finally, the estimates obtained from the two stages are combined to yield an overall quality score of the stereoscopic image. Tests on various 3D image quality databases demonstrate that our algorithm significantly improves upon many other state-of-the-art 2D/3D IQA algorithms. PMID:26186775

  10. Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction

    NASA Astrophysics Data System (ADS)

    Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.

    2016-04-01

    Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line holography), is reasonably immune to variations in the experimental environment making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques, based on numerical light back-propagation are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get round this issue, the hologram reconstruction as a parametric inverse problem has been shown to accurately estimate 3D positions and the size of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it led to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σ_x × σ_y × σ_z = 0.15 × 0.15 × 1 pixels.

  11. Estimation of Hydraulic Fracturing in the Earth Fill Dam by 3-D Analysis

    NASA Astrophysics Data System (ADS)

    Nishimura, Shin-Ichi

    It is necessary to calculate strength and strain to estimate hydraulic fracturing in an earth fill dam, and the FEM is effective for this. 2-D analysis can produce good results to some extent if an embankment is linear and the plane strain condition can be set for the cross section. However, there may be some conditions that cannot be expressed in the 2-D plane, because the actual embankment of agricultural reservoirs is formed by straight and curved lines. Moreover, it may not be possible to precisely calculate strain in the direction of the dam axis, because the 2-D analysis of the cross section cannot take the shape of the vertical section into consideration. Therefore, we performed a 3-D built-up analysis of an agricultural reservoir that actually leaked, to examine the hazard of hydraulic fracturing arising from the shape of the embankment and from rapid impoundment of water. The analysis indicated the occurrence of hydraulic fracturing, developing under water pressure through the vertical cracks caused by tensile strain in the valley and refractive sections of the foundation.

  12. Angle Estimation of Simultaneous Orthogonal Rotations from 3D Gyroscope Measurements

    PubMed Central

    Stančin, Sara; Tomažič, Sašo

    2011-01-01

    A 3D gyroscope provides measurements of angular velocities around its three intrinsic orthogonal axes, enabling angular orientation estimation. Because the measured angular velocities represent simultaneous rotations, it is not appropriate to consider them sequentially. Rotations in general are not commutative, and each possible rotation sequence has a different resulting angular orientation. None of these angular orientations is the correct simultaneous rotation result. However, every angular orientation can be represented by a single rotation. This paper presents an analytic derivation of the axis and angle of the single rotation equivalent to three simultaneous rotations around orthogonal axes when the measured angular velocities or their proportions are approximately constant. Based on the resulting expressions, a vector called the simultaneous orthogonal rotations angle (SORA) is defined, with components equal to the angles of three simultaneous rotations around coordinate system axes. The orientation and magnitude of this vector are equal to the equivalent single rotation axis and angle, respectively. As long as the orientation of the actual rotation axis is constant, given the SORA, the angular orientation of a rigid body can be calculated in a single step, thus making it possible to avoid computing the iterative infinitesimal rotation approximation. The performed test measurements confirm the validity of the SORA concept. SORA is simple and well-suited for use in the real-time calculation of angular orientation based on angular velocity measurements derived using a gyroscope. Moreover, because of its demonstrated simplicity, SORA can also be used in general angular orientation notation. PMID:22164090
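
    A short sketch of how a SORA-style rotation vector from a single gyroscope sample maps to the equivalent single rotation (axis along the measured angular-velocity vector, angle equal to its magnitude times the sample interval), applied through Rodrigues' formula; the sample values and rate are arbitrary.

```python
import numpy as np

def sora_rotation_matrix(omega, dt):
    """Rotation matrix of the single rotation equivalent to simultaneous
    rotations omega = (wx, wy, wz) [rad/s] applied over dt seconds,
    assuming the rotation-axis orientation is constant over the interval."""
    phi = np.asarray(omega, dtype=float) * dt      # SORA vector (angles, rad)
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.eye(3)
    k = phi / angle                                # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])             # cross-product matrix
    # Rodrigues' formula
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# Example: one 100 Hz gyroscope sample (arbitrary values, rad/s).
omega = np.array([0.8, -0.3, 1.2])
print(sora_rotation_matrix(omega, dt=0.01))
```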

  13. Edge preserving motion estimation with occlusions correction for assisted 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Pohl, Petr; Sirotenko, Michael; Tolstaya, Ekaterina; Bucha, Victor

    2014-02-01

    In this article we propose high-quality motion estimation based on a variational optical flow formulation with a non-local regularization term. To improve motion in occlusion areas we introduce occlusion motion inpainting based on 3-frame motion clustering. The variational formulation of optical flow has proved very successful; however, global optimization of the cost function can be time-consuming. To achieve acceptable computation times we adapted an algorithm that optimizes the convex function in a coarse-to-fine pyramid strategy and is suitable for modern GPU hardware implementation. We also introduced two simplifications of the cost function that significantly decrease computation time with an acceptable decrease in quality. For motion-clustering-based motion inpainting in occlusion areas we introduce an effective method of occlusion-aware joint 3-frame motion clustering using the RANSAC algorithm. Occlusion areas are inpainted with the motion model taken from the cluster that shows consistency in the opposite direction. We tested our algorithm on the Middlebury optical flow benchmark, where we scored around 20th position while being one of the fastest methods near the top. We also successfully used this algorithm in a semi-automatic 2D to 3D conversion tool for spatio-temporal background inpainting, automatic adaptive key frame detection and key point tracking.
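
    A rough sketch of the RANSAC ingredient mentioned above: fitting one affine motion model to a set of flow vectors and separating inliers from outliers. The multi-model clustering and the 3-frame occlusion reasoning of the paper are not reproduced; the data are synthetic.

```python
import numpy as np

def fit_affine(p, q):
    """Least-squares affine map q ~ A p + b from 2D points p to 2D points q."""
    X = np.hstack([p, np.ones((len(p), 1))])       # (N, 3)
    M, *_ = np.linalg.lstsq(X, q, rcond=None)      # (3, 2): stacked [A^T; b]
    return M

def ransac_affine(p, flow, n_iter=200, thresh=1.0, rng=np.random.default_rng(0)):
    """RANSAC fit of a single affine motion model to optical-flow vectors."""
    q = p + flow
    X = np.hstack([p, np.ones((len(p), 1))])
    best_inliers = np.zeros(len(p), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(p), size=3, replace=False)   # minimal affine sample
        M = fit_affine(p[idx], q[idx])
        residual = np.linalg.norm(X @ M - q, axis=1)
        inliers = residual < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(p[best_inliers], q[best_inliers]), best_inliers

# Synthetic test: 80% of vectors follow a small rotation + shift, 20% are outliers.
rng = np.random.default_rng(1)
p = rng.uniform(0.0, 100.0, (200, 2))
A_true = np.array([[0.99, -0.05], [0.05, 0.99]])
flow = p @ A_true.T + np.array([2.0, -1.0]) - p
flow[:40] += rng.uniform(-20.0, 20.0, (40, 2))            # gross outliers
M, inliers = ransac_affine(p, flow)
print("inliers:", inliers.sum(), "of", len(p))
```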

  14. Relative Scale Estimation and 3D Registration of Multi-Modal Geometry Using Growing Least Squares.

    PubMed

    Mellado, Nicolas; Dellepiane, Matteo; Scopigno, Roberto

    2016-09-01

    The advent of low cost scanning devices and the improvement of multi-view stereo techniques have made the acquisition of 3D geometry ubiquitous. Data gathered from different devices, however, result in large variations in detail, scale, and coverage. Registration of such data is essential before visualizing, comparing and archiving them. However, state-of-the-art methods for geometry registration cannot be directly applied due to intrinsic differences between the models, e.g., sampling, scale, noise. In this paper we present a method for the automatic registration of multi-modal geometric data, i.e., acquired by devices with different properties (e.g., resolution, noise, data scaling). The method uses a descriptor based on Growing Least Squares, and is robust to noise, variation in sampling density, details, and enables scale-invariant matching. It allows not only the measurement of the similarity between the geometry surrounding two points, but also the estimation of their relative scale. As it is computed locally, it can be used to analyze large point clouds composed of millions of points. We implemented our approach in two registration procedures (assisted and automatic) and applied them successfully on a number of synthetic and real cases. We show that using our method, multi-modal models can be automatically registered, regardless of their differences in noise, detail, scale, and unknown relative coverage. PMID:26672045

  15. Digital holography as a method for 3D imaging and estimating the biovolume of motile cells.

    PubMed

    Merola, F; Miccio, L; Memmolo, P; Di Caprio, G; Galli, A; Puglisi, R; Balduzzi, D; Coppola, G; Netti, P; Ferraro, P

    2013-12-01

    Sperm morphology is regarded as a significant prognostic factor for fertilization, as abnormal sperm structure is one of the most common factors in male infertility. Furthermore, obtaining accurate morphological information is an important issue with strong implications in zoo-technical industries, for example to perform sorting of species X from species Y. A challenging step forward would be the availability of a fast, high-throughput and label-free system for the measurement of physical parameters and visualization of the 3D shape of such biological specimens. Here we show a quantitative imaging approach to estimate simply and quickly the biovolume of sperm cells, combining the optical tweezers technique with digital holography, in a single and integrated set-up for a biotechnology assay process on the lab-on-a-chip scale. This approach can open the way for fast and high-throughput analysis in label-free microfluidic based "cytofluorimeters" and prognostic examination based on sperm morphology, thus allowing advancements in reproductive science. PMID:24129638

  16. Head pose estimation from a 2D face image using 3D face morphing with depth parameters.

    PubMed

    Kong, Seong G; Mbouna, Ralph Oyini

    2015-06-01

    This paper presents estimation of head pose angles from a single 2D face image using a 3D face model morphed from a reference face model. A reference model refers to a 3D face of a person of the same ethnicity and gender as the query subject. The proposed scheme minimizes the disparity between the two sets of prominent facial features on the query face image and the corresponding points on the 3D face model to estimate the head pose angles. The 3D face model used is morphed from a reference model to be more specific to the query face in terms of the depth error at the feature points. The morphing process produces a 3D face model more specific to the query image when multiple 2D face images of the query subject are available for training. The proposed morphing process is computationally efficient since the depth of a 3D face model is adjusted by a scalar depth parameter at feature points. Optimal depth parameters are found by minimizing the disparity between the 2D features of the query face image and the corresponding features on the morphed 3D model projected onto 2D space. The proposed head pose estimation technique was evaluated on two benchmarking databases: 1) the USF Human-ID database for depth estimation and 2) the Pointing'04 database for head pose estimation. Experiment results demonstrate that head pose estimation errors in nodding and shaking angles are as low as 7.93° and 4.65° on average for a single 2D input face image. PMID:25706638
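
    The core 2D-3D alignment step can be illustrated with a standard perspective-n-point solve; the sketch below uses a generic set of facial landmark coordinates and OpenCV's solvePnP rather than the paper's depth-parameter morphing, and the intrinsics and pose are assumed values.

```python
import numpy as np
import cv2

# Generic 3D facial landmarks in mm (an assumed reference model, not the
# paper's morphed model): nose tip, chin, eye corners, mouth corners.
model_points = np.array([
    [0.0, 0.0, 0.0],
    [0.0, -63.6, -12.5],
    [-43.3, 32.7, -26.0],
    [43.3, 32.7, -26.0],
    [-28.9, -28.9, -24.1],
    [28.9, -28.9, -24.1],
])

# Assumed pinhole intrinsics for a 640x480 image, no lens distortion.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Synthesise 2D feature points from a known pose, then recover that pose by
# minimizing the 2D-3D reprojection disparity (the PnP problem).
rvec_true = np.array([0.26, 0.35, 0.0])            # axis-angle rotation (rad)
tvec_true = np.array([0.0, 0.0, 500.0])            # 0.5 m in front of the camera
image_points, _ = cv2.projectPoints(model_points, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)
euler_deg, *_ = cv2.RQDecomp3x3(R)                 # Euler angles in degrees (x, y, z)
print("recovered rotation vector:", rvec.ravel())  # ~ [0.26, 0.35, 0.0]
print("approx. nod/shake/tilt (deg):", euler_deg)
```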

  17. Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling

    PubMed Central

    Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  18. The spatial accuracy of cellular dose estimates obtained from 3D reconstructed serial tissue autoradiographs.

    PubMed

    Humm, J L; Macklis, R M; Lu, X Q; Yang, Y; Bump, K; Beresford, B; Chin, L M

    1995-01-01

    In order to better predict and understand the effects of radiopharmaceuticals used for therapy, it is necessary to determine more accurately the radiation absorbed dose to cells in tissue. Using thin-section autoradiography, the spatial distribution of sources relative to the cells can be obtained from a single section with micrometre resolution. By collecting and analysing serial sections, the 3D microscopic distribution of radionuclide relative to the cellular histology, and therefore the dose rate distribution, can be established. In this paper, a method of 3D reconstruction of serial sections is proposed, and measurements are reported of (i) the accuracy and reproducibility of quantitative autoradiography and (ii) the spatial precision with which tissue features from one section can be related to adjacent sections. Uncertainties in the activity determination for the specimen result from activity losses during tissue processing (4-11%), and the variation of grain count per unit activity between batches of serial sections (6-25%). Correlation of the section activity to grain count densities showed deviations ranging from 6-34%. The spatial alignment uncertainties were assessed using nylon fibre fiduciary markers incorporated into the tissue block, and compared to those for alignment based on internal tissue landmarks. The standard deviation for the variation in nylon fibre fiduciary alignment was measured to be 41 microns cm⁻¹, compared to 69 microns cm⁻¹ when internal tissue histology landmarks were used. In addition, tissue shrinkage during histological processing of up to 10% was observed. The implications of these measured activity and spatial distribution uncertainties upon the estimate of cellular dose rate distribution depend upon the range of the radiation emissions. For long-range beta particles, uncertainties in both the activity and spatial distribution translate linearly to the uncertainty in dose rate of < 15%. For short-range emitters (< 100

  19. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    PubMed

    Bates, Karl T; Manning, Phillip L; Hodgetts, David; Sellers, William I

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  20. System for conveyor belt part picking using structured light and 3D pose estimation

    NASA Astrophysics Data System (ADS)

    Thielemann, J.; Skotheim, Ø.; Nygaard, J. O.; Vollset, T.

    2009-01-01

    Automatic picking of parts is an important challenge to solve within factory automation, because it can remove tedious manual work and save labor costs. One such application involves parts that arrive with random position and orientation on a conveyor belt. The parts should be picked off the conveyor belt and placed systematically into bins. We describe a system that consists of a structured light instrument for capturing 3D data and robust methods for aligning an input 3D template with a 3D image of the scene. The method uses general and robust pre-processing steps based on geometric primitives that allow the well-known Iterative Closest Point algorithm to converge quickly and robustly to the correct solution. The method has been demonstrated for localization of car parts with random position and orientation. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
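
    A minimal point-to-point ICP sketch in the spirit of the alignment stage described above, without the geometric-primitive pre-processing that the system uses for robust initialization; it therefore assumes a reasonable starting alignment, and the point clouds are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

def icp(template, scene, n_iter=30):
    """Point-to-point ICP: closest-point matching + rigid update, repeated."""
    tree = cKDTree(scene)
    src = template.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        _, idx = tree.query(src)                   # nearest-neighbour correspondences
        R, t = best_rigid_transform(src, scene[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic check: the template is the scene moved by a known rigid transform,
# so ICP should (approximately) recover that rotation and translation.
rng = np.random.default_rng(0)
scene = rng.uniform(-1.0, 1.0, (2000, 3))
ang = np.deg2rad(8.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang), np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.03])
template = (scene - t_true) @ R_true               # inverse of (R_true, t_true)
R_est, t_est = icp(template, scene)
print(np.round(R_est, 3))
print(np.round(t_est, 3))                          # ~ [0.05, -0.02, 0.03]
```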

  1. Estimation of 3-D pore network coordination number of rocks from watershed segmentation of a single 2-D image

    NASA Astrophysics Data System (ADS)

    Rabbani, Arash; Ayatollahi, Shahab; Kharrat, Riyaz; Dashti, Nader

    2016-08-01

    In this study, we have utilized 3-D micro-tomography images of real and synthetic rocks to introduce two mathematical correlations which estimate the distribution parameters of the 3-D coordination number using a single 2-D cross-sectional image. By applying a watershed segmentation algorithm, it is found that the distribution of the 3-D coordination number is acceptably predictable by statistical analysis of the network extracted from 2-D images. In this study, we have utilized 25 volumetric images of rocks in order to propose two mathematical formulas. These formulas aim to approximate the average and standard deviation of the coordination number in 3-D pore networks. Then, the formulas are applied to five independent test samples to evaluate their reliability. Finally, pore network flow modeling is used to find the error of absolute permeability prediction using estimated and measured coordination numbers. Results show that the 2-D images are considerably informative about the 3-D network of the rocks and can be utilized to approximate the 3-D connectivity of the porous space with a determination coefficient of about 0.85, which seems acceptable considering the variety of the studied samples.
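
    A sketch of the 2D side of such an analysis: watershed-partitioning a binary pore map and counting, for each pore, how many distinct neighbouring pores it touches. The correlations that map these 2D statistics to the 3D coordination number are the paper's contribution and are not reproduced here; scikit-image (>= 0.19) and SciPy are assumed.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def pore_coordination_2d(binary_pores):
    """Partition a 2D binary pore map with a distance-transform watershed and
    return, per pore, the number of distinct touching neighbours."""
    distance = ndi.distance_transform_edt(binary_pores)
    coords = peak_local_max(distance, labels=binary_pores, footprint=np.ones((9, 9)))
    markers = np.zeros_like(binary_pores, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary_pores)

    # Collect label adjacencies from horizontally/vertically neighbouring pixels.
    pairs = set()
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        diff = (a != b) & (a > 0) & (b > 0)
        pairs.update(map(tuple, np.sort(np.stack([a[diff], b[diff]], 1), 1)))
    coord = np.zeros(labels.max() + 1, dtype=int)
    for i, j in pairs:
        coord[i] += 1
        coord[j] += 1
    return labels, coord[1:]

# Synthetic 2D "pore space": overlapping discs on a solid background.
yy, xx = np.mgrid[0:200, 0:200]
binary = np.zeros((200, 200), dtype=bool)
for cy, cx in np.random.default_rng(2).integers(20, 180, (40, 2)):
    binary |= (yy - cy) ** 2 + (xx - cx) ** 2 < 15 ** 2
labels, coordination = pore_coordination_2d(binary)
print("mean 2D coordination number:", coordination.mean())
```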

  2. The estimation of 3D SAR distributions in the human head from mobile phone compliance testing data for epidemiological studies

    NASA Astrophysics Data System (ADS)

    Wake, Kanako; Varsier, Nadège; Watanabe, Soichi; Taki, Masao; Wiart, Joe; Mann, Simon; Deltour, Isabelle; Cardis, Elisabeth

    2009-10-01

    A worldwide epidemiological study called 'INTERPHONE' has been conducted to estimate the hypothetical relationship between brain tumors and mobile phone use. In this study, we proposed a method to estimate the 3D distribution of the specific absorption rate (SAR) in the human head due to mobile phone use to provide the exposure gradient for epidemiological studies. 3D SAR distributions due to exposure to an electromagnetic field from mobile phones are estimated from mobile phone compliance testing data for actual devices. The data for compliance testing are measured only on the surface in the region near the device and in a small 3D region around the maximum on the surface in a homogeneous phantom with a specific shape. The method includes an interpolation/extrapolation and a head shape conversion. With the interpolation/extrapolation, SAR distributions in the whole head are estimated from the limited measured data. 3D SAR distributions in the numerical head models, where the tumor location is identified in the epidemiological studies, are obtained from measured SAR data with the head shape conversion by projection. Validation of the proposed method was performed experimentally and numerically. It was confirmed that the proposed method provided a good estimation of the 3D SAR distribution in the head, especially in the brain, which is the tissue of major interest in epidemiological studies. We conclude that it is possible to estimate 3D SAR distributions in a realistic head model from the data obtained by compliance testing measurements to provide a measure for the exposure gradient in specific locations of the brain for the purpose of exposure assessment in epidemiological studies. The proposed method has been used in several studies within INTERPHONE.

  3. Coupling the 3D hydro-morphodynamic model Telemac-3D-sisyphe and seismic measurements to estimate bedload transport rates in a small gravel-bed river.

    NASA Astrophysics Data System (ADS)

    Hostache, Renaud; Krein, Andreas; Barrière, Julien

    2014-05-01

    During flood events, amounts of river bed material are transported via bedload. This causes problems, like the silting of reservoirs or the disturbance of biological habitats. Some current bedload measuring techniques have limited possibilities for studies at high temporal resolution. Optical systems are usually not applicable because of high turbidity due to high concentrations of transported suspended sediment. Sediment traps or bedload samplers yield only summative information on bedload transport with low temporal resolution. An alternative bedload measuring technique is the use of seismological systems installed next to the rivers. The potential advantages are observations in real time and under undisturbed conditions. The study area is a 120 m long reach of the River Colpach (21.5 km²), a small gravel-bed river in Northern Luxembourg. A combined approach of hydro-climatological observations, hydraulic measurements, sediment sampling, and seismological measurements is used in order to investigate bedload transport phenomena. Information derived from seismic measurements and results from a 3-dimensional hydro-morphodynamic model are discussed, as an example, for a November 2013 flood event. The 3-dimensional hydro-morphodynamic model is based on the Telemac hydroinformatic system. This allows for dynamically coupling a 3D hydrodynamic model (Telemac-3D) and a morphodynamic model (Sisyphe). The coupling is dynamic as these models exchange their information during simulations. This is a main advantage as it allows for taking into account the effects of the morphologic changes of the riverbed on the water hydrodynamics and the bedload processes. The coupled model has been calibrated using time series of gauged water depths and time series of bed material collected sequentially (after

  4. Estimation of uncertainties in geological 3D raster layer models as integral part of modelling procedures

    NASA Astrophysics Data System (ADS)

    Maljers, Denise; den Dulk, Maryke; ten Veen, Johan; Hummelman, Jan; Gunnink, Jan; van Gessel, Serge

    2016-04-01

    The Geological Survey of the Netherlands (GSN) develops and maintains subsurface models with regional to national coverage. These models are paramount for petroleum exploration in conventional reservoirs, for understanding the distribution of unconventional reservoirs, for mapping geothermal aquifers, for the potential to store carbon, and for groundwater or aggregate resources. Depending on the application domain these models differ in depth range, scale, data used, modelling software and modelling technique. Depth uncertainty information is available for the Geological Survey's 3D raster layer models DGM Deep and DGM Shallow. These models cover different depth intervals and are constructed using different data types and different modelling software. Quantifying the uncertainty of geological models that are constructed using multiple data types as well as geological expert knowledge is not straightforward. Examples of geological expert knowledge are trend surfaces displaying the regional thickness trends of basin fills, or steering points that are used to guide the pinching out of geological formations or the modelling of the complex stratal geometries associated with salt domes and salt ridges. This added a priori knowledge, combined with the assumptions underlying kriging (normality and second-order stationarity), makes the kriging standard error an incorrect measure of uncertainty for our geological models. Therefore, the methods described below were developed. For the DGM Deep model a workflow has been developed to assess uncertainty by combining precision (giving information on the reproducibility of the model results) and accuracy (reflecting the proximity of estimates to the true value). This was achieved by centering the resulting standard deviations around well-tied depth surfaces. The standard deviations are subsequently modified by three other possible error sources: data error, structural complexity and velocity model error. The uncertainty workflow

  5. Theoretical and experimental study of DOA estimation using AML algorithm for an isotropic and non-isotropic 3D array

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz; Ali, Andreas M.; Collier, Travis C.; Yao, Yuan; Hudson, Ralph E.; Yao, Kung; Taylor, Charles E.

    2007-09-01

    Most direction-of-arrival (DOA) estimation work has focused on a two-dimensional (2D) scenario in which only the azimuth angle needs to be estimated. In various practical situations, however, we have to deal with a three-dimensional scenario, and estimating both azimuth and elevation angles with high accuracy and low complexity is of interest. We present the theoretical and practical issues of DOA estimation using the Approximate-Maximum-Likelihood (AML) algorithm in a 3D scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. Various numerical results are presented. We use two acoustic arrays, each consisting of 8 microphones, to perform field measurements. The processing of the measured data from the acoustic arrays for different azimuth and elevation angles confirms the effectiveness of the proposed methods.
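
    The azimuth/elevation search described above can be illustrated with a much simpler estimator. The sketch below (not from the paper) runs a grid search over azimuth and elevation using conventional delay-and-sum beamforming on a hypothetical 8-microphone cubic array with synthetic narrowband data; the AML likelihood, the isotropic-array decoupling, and the array geometry used by the authors are not reproduced.

```python
# Simplified 3D DOA sketch: grid search over (azimuth, elevation) that maximizes
# delay-and-sum beamformer power. Array geometry and signal are made up.
import numpy as np

c = 343.0      # speed of sound, m/s
fs = 16000     # sample rate, Hz
f0 = 1000.0    # narrowband source frequency, Hz

# hypothetical 8-element cubic array with 0.2 m edge, centred at the origin
pos = 0.2 * (np.array(np.meshgrid([0, 1], [0, 1], [0, 1])).reshape(3, -1).T - 0.5)

def steering(az, el):
    """Unit vector toward (azimuth, elevation), angles in radians."""
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

def narrowband_snapshot(az, el, n=256, snr_db=20):
    """Simulate narrowband array snapshots arriving from (az, el)."""
    delays = pos @ steering(az, el) / c                 # per-sensor delays (s)
    s = np.exp(2j * np.pi * f0 * np.arange(n) / fs)     # complex tone
    x = np.exp(-2j * np.pi * f0 * delays)[:, None] * s  # phase-shifted copies
    noise = (np.random.randn(8, n) + 1j * np.random.randn(8, n)) / np.sqrt(2)
    return x + noise * 10 ** (-snr_db / 20)

def doa_grid_search(x, n_az=180, n_el=45):
    R = x @ x.conj().T / x.shape[1]                     # sample covariance
    best, best_p = (0.0, 0.0), -np.inf
    for az in np.linspace(-np.pi, np.pi, n_az):
        for el in np.linspace(0, np.pi / 2, n_el):
            a = np.exp(-2j * np.pi * f0 * (pos @ steering(az, el)) / c)
            p = np.real(a.conj() @ R @ a)               # beamformer output power
            if p > best_p:
                best_p, best = p, (az, el)
    return np.degrees(best)

x = narrowband_snapshot(np.radians(40), np.radians(20))
print("estimated (az, el) in degrees:", doa_grid_search(x))
```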

  6. Video reframing relying on panoramic estimation based on a 3D representation of the scene

    NASA Astrophysics Data System (ADS)

    de Simon, Agnes; Figue, Jean; Nicolas, Henri

    2000-05-01

    This paper describes a new method for creating mosaic images from an original video and for computing a new sequence that modifies some camera parameters such as image size, scale factor and view angle. A mosaic image is a representation of the full scene observed by a moving camera during its displacement. It provides a wide angle of view of the scene from a sequence of images shot with a narrow-angle camera. This paper proposes a method to create a virtual sequence from a calibrated original video and a rough 3D model of the scene. A 3D relationship between original and virtual images gives the corresponding pixels in different images for the same 3D point in the scene model. To texture the model with natural textures obtained from the original sequence, a criterion based on constraints related to the temporal variations of the background and 3D geometric considerations is used. Finally, in the presented method, the textured 3D model is used to recompute a new sequence of images with a possibly different point of view and camera aperture angle. The algorithm has been tested on virtual sequences, and the results obtained so far are encouraging.

  7. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    PubMed

    Tracey, Jeff A; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R; Fisher, Robert N

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114
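
    As a rough illustration of the idea of 3D space-use estimation, the sketch below applies a standard Gaussian kernel density estimator (scipy's gaussian_kde) to synthetic x, y, z telemetry fixes and extracts an approximate 95% density volume. It is not the movement-based estimator developed in the paper; the coordinates, bandwidth choice, and grid are all placeholders.

```python
# Minimal 3D kernel density sketch for telemetry fixes (x, y, z).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# synthetic fixes: horizontal spread plus a vertical (z) component
fixes = rng.normal(loc=[0.0, 0.0, 50.0], scale=[100.0, 80.0, 20.0], size=(500, 3))

kde = gaussian_kde(fixes.T)               # scipy expects shape (dims, n_points)

# evaluate the utilization density on a coarse 3D grid
gx, gy, gz = np.meshgrid(np.linspace(-300, 300, 30),
                         np.linspace(-300, 300, 30),
                         np.linspace(0, 120, 12), indexing="ij")
density = kde(np.vstack([gx.ravel(), gy.ravel(), gz.ravel()])).reshape(gx.shape)

# approximate the 95% "home range" as the smallest density iso-volume that
# contains 95% of the probability mass captured by the grid
dens_sorted = np.sort(density.ravel())[::-1]
cell_volume = (600 / 29) * (600 / 29) * (120 / 11)
cum = np.cumsum(dens_sorted) * cell_volume
idx = min(np.searchsorted(cum, 0.95), dens_sorted.size - 1)
threshold = dens_sorted[idx]
print("95% volume (cubic units):", (density >= threshold).sum() * cell_volume)
```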

  8. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    USGS Publications Warehouse

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  9. Movement-Based Estimation and Visualization of Space Use in 3D for Wildlife Ecology and Conservation

    PubMed Central

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114

  10. Precision estimation and imaging of normal and shear components of the 3D strain tensor in elastography.

    PubMed

    Konofagou, E E; Ophir, J

    2000-06-01

    In elastography we have previously developed a tracking and correction method that estimates the axial and lateral strain components along and perpendicular to the compressor/scanning axis following an externally applied compression. However, the resulting motion is a three-dimensional problem. Therefore, in order to fully describe this motion we need to consider a 3D model and estimate all three principal strain components, i.e. axial, lateral and elevational (out-of-plane), for a full 3D tensor description. Since motion is coupled in all three dimensions, the three motion components have to be decoupled prior to their estimation. In this paper, we describe a method that estimates and corrects motion in three dimensions, which is an extension of the 2D motion tracking and correction method discussed before. As in the 2D motion estimation, and by assuming that ultrasonic frames are available in more than one parallel elevational plane, we used methods of interpolation and cross-correlation between elevationally displaced RF echo segments to estimate the elevational displacement and strain. In addition, the axial, lateral and elevational displacements were used to estimate all three shear strain components that, together with the normal strain estimates, fully describe the 3D strain tensor resulting from the uniform compression. Results of this method from three-dimensional finite-element simulations are shown. PMID:10870710
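
    The core building block of this kind of strain estimation, local displacement from cross-correlation of pre- and post-deformation RF segments followed by a gradient to obtain strain, can be sketched in one dimension as below. The signal is synthetic white-noise "speckle" with a uniform 1% axial deformation applied by interpolation; the paper's 3D decoupling, lateral/elevational interpolation, and shear components are not reproduced.

```python
# 1-D sketch: displacement from windowed cross-correlation, strain from gradient.
import numpy as np

rng = np.random.default_rng(1)
n = 4000
rf_pre = rng.standard_normal(n)                     # fake RF speckle signal
true_strain = 0.01                                  # 1% uniform axial deformation
depth = np.arange(n)
# post-deformation signal: features shifted progressively deeper by ~1%
rf_post = np.interp(depth * (1 - true_strain), depth, rf_pre)

win, hop = 128, 64
centers, disps = [], []
for start in range(0, n - 2 * win, hop):
    ref = rf_pre[start:start + win]                 # reference segment
    search = rf_post[start:start + 2 * win]         # search region in post signal
    xcorr = np.correlate(search - search.mean(), ref - ref.mean(), mode="valid")
    disps.append(np.argmax(xcorr))                  # lag of best match (samples)
    centers.append(start + win // 2)

strain = np.gradient(np.array(disps, float), np.array(centers, float))
print("estimated strain:", strain.mean())           # close to 0.01
```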

  11. A hybrid antenna array design for 3-d direction of arrival estimation.

    PubMed

    Saqib, Najam-Us; Khan, Imdad

    2015-01-01

    A 3-D beam scanning antenna array design is proposed that gives whole 3-D spherical coverage and is also suitable for various radar and body-worn devices in Body Area Network applications. The Array Factor (AF) of the proposed antenna is derived and its various parameters, such as directivity, Half Power Beam Width (HPBW) and Side Lobe Level (SLL), are calculated by varying the size of the proposed antenna array. Simulations were carried out in MATLAB 2012b. The radiators are considered isotropic and hence mutual coupling effects are ignored. The proposed array shows a considerable improvement over existing cylindrical and coaxial cylindrical arrays in terms of 3-D scanning, size, directivity, HPBW and SLL. PMID:25790103
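
    The Array Factor of an array of isotropic elements is straightforward to compute numerically. The sketch below evaluates a steered AF and a crude half-power beamwidth for a placeholder element layout (points spread over a sphere of radius half a wavelength); it is not the authors' proposed geometry or their MATLAB analysis.

```python
# Array Factor sketch for isotropic elements on a placeholder spherical layout.
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength

# hypothetical geometry: 32 points roughly evenly spread on a sphere (Fibonacci)
n = 32
idx = np.arange(n) + 0.5
polar = np.arccos(1 - 2 * idx / n)
azim = np.pi * (1 + 5 ** 0.5) * idx
r = 0.5 * wavelength
elements = r * np.stack([np.sin(polar) * np.cos(azim),
                         np.sin(polar) * np.sin(azim),
                         np.cos(polar)], axis=1)

def unit(az, el):
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

def array_factor(az, el, az0=0.0, el0=0.0):
    """Normalized AF magnitude with the beam steered to (az0, el0)."""
    phase = k * elements @ (unit(az, el) - unit(az0, el0))
    return np.abs(np.exp(1j * phase).sum()) / n

# sample the steered pattern over azimuth at the steering elevation
az_grid = np.linspace(-np.pi, np.pi, 721)
pattern = np.array([array_factor(a, 0.0) for a in az_grid])

# walk outward from the peak until the pattern drops below the -3 dB level
peak = int(np.argmax(pattern))
thresh = pattern[peak] / np.sqrt(2)
left, right = peak, peak
while left > 0 and pattern[left - 1] >= thresh:
    left -= 1
while right < len(pattern) - 1 and pattern[right + 1] >= thresh:
    right += 1
print("approx. HPBW (deg):", np.degrees(az_grid[right] - az_grid[left]))
```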

  12. Estimation of Atmospheric Methane Surface Fluxes Using a Global 3-D Chemical Transport Model

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Prinn, R.

    2003-12-01

    Accurate determination of atmospheric methane surface fluxes is an important and challenging problem in global biogeochemical cycles. We use inverse modeling to estimate annual, seasonal, and interannual CH4 fluxes between 1996 and 2001. The fluxes include 7 time-varying seasonal (3 wetland, rice, and 3 biomass burning) and 3 steady aseasonal (animals/waste, coal, and gas) global processes. To simulate atmospheric methane, we use the 3-D chemical transport model MATCH driven by NCEP reanalyzed observed winds at a resolution of T42 ( ˜2.8° x 2.8° ) in the horizontal and 28 levels (1000 - 3 mb) in the vertical. By combining existing datasets of individual processes, we construct a reference emissions field that represents our prior guess of the total CH4 surface flux. For the methane sink, we use a prescribed, annually-repeating OH field scaled to fit methyl chloroform observations. MATCH is used to produce both the reference run from the reference emissions, and the time-dependent sensitivities that relate individual emission processes to observations. The observational data include CH4 time-series from ˜15 high-frequency (in-situ) and ˜50 low-frequency (flask) observing sites. Most of the high-frequency data, at a time resolution of 40-60 minutes, have not previously been used in global scale inversions. In the inversion, the high-frequency data generally have greater weight than the weekly flask data because they better define the observational monthly means. The Kalman Filter is used as the optimal inversion technique to solve for emissions between 1996-2001. At each step in the inversion, new monthly observations are utilized and new emissions estimates are produced. The optimized emissions represent deviations from the reference emissions that lead to a better fit to the observations. The seasonal processes are optimized for each month, and contain the methane seasonality and interannual variability. The aseasonal processes, which are less variable, are
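
    The measurement-update step of the Kalman-filter inversion described above can be sketched as follows, with the state holding deviations of each emission process from the reference and a sensitivity matrix mapping emissions to site concentrations. All dimensions and numbers below are illustrative stand-ins, not MATCH sensitivities or AGAGE data.

```python
# Toy Kalman-filter measurement update of the kind used in flux inversions.
import numpy as np

n_proc, n_obs = 10, 65                     # 7 seasonal + 3 aseasonal; ~65 sites
rng = np.random.default_rng(2)

x = np.zeros(n_proc)                       # prior deviation from reference emissions
P = np.diag(np.full(n_proc, 0.5 ** 2))     # prior covariance (50% of reference)
H = rng.standard_normal((n_obs, n_proc))   # stand-in sensitivities (ppb per unit flux)
R = np.diag(np.full(n_obs, 5.0 ** 2))      # observation + model error (ppb^2)

x_true = rng.normal(0, 0.3, n_proc)
y = H @ x_true + rng.normal(0, 5.0, n_obs) # "observed minus reference" mole fractions

# standard Kalman measurement update for one month of observations
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (y - H @ x)
P = (np.eye(n_proc) - K @ H) @ P

print("posterior emission deviations:", np.round(x, 2))
```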

  13. Vegetation Height Estimation Near Power transmission poles Via satellite Stereo Images using 3D Depth Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Qayyum, A.; Malik, A. S.; Saad, M. N. M.; Iqbal, M.; Abdullah, F.; Rahseed, W.; Abdullah, T. A. R. B. T.; Ramli, A. Q.

    2015-04-01

    Monitoring vegetation encroachment under overhead high voltage power lines is a challenging problem for electricity distribution companies. Absence of proper monitoring could result in damage to the power lines and consequently cause blackouts. This would affect the electric power supply to industries, businesses, and daily life. Therefore, to avoid blackouts, it is mandatory to monitor the vegetation/trees near power transmission lines. Unfortunately, the existing approaches are time consuming and expensive. In this paper, we propose a novel approach to monitor the vegetation/trees near or under power transmission poles using satellite stereo images, which were acquired by the Pleiades satellites. The 3D depth of vegetation has been measured near power transmission lines using stereo algorithms. The area of interest scanned by the Pleiades satellite sensors is 100 square kilometers. Our dataset covers power transmission poles in the state of Sabah in East Malaysia, encompassing a total of 52 poles in the 100 km2 area. We have compared the results of Pleiades satellite stereo images using dynamic programming and Graph-Cut algorithms, consequently comparing the satellites' imaging sensors and depth-estimation algorithms. Our results show that the Graph-Cut algorithm performs better than dynamic programming (DP) in terms of accuracy and speed.

  14. Estimating Fiber Orientation Distribution Functions in 3D-Polarized Light Imaging.

    PubMed

    Axer, Markus; Strohmer, Sven; Gräßel, David; Bücker, Oliver; Dohmen, Melanie; Reckfort, Julia; Zilles, Karl; Amunts, Katrin

    2016-01-01

    Research of the human brain connectome requires multiscale approaches derived from independent imaging methods ideally applied to the same object. Hence, comprehensible strategies for data integration across modalities and across scales are essential. We have successfully established a concept to bridge the spatial scales from microscopic fiber orientation measurements based on 3D-Polarized Light Imaging (3D-PLI) to meso- or macroscopic dimensions. By creating orientation distribution functions (pliODFs) from high-resolution vector data via series expansion with spherical harmonics utilizing high performance computing and supercomputing technologies, data fusion with Diffusion Magnetic Resonance Imaging has become feasible, even for a large-scale dataset such as the human brain. Validation of our approach was done effectively by means of two types of datasets that were transferred from fiber orientation maps into pliODFs: simulated 3D-PLI data showing artificial, but clearly defined fiber patterns and real 3D-PLI data derived from sections through the human brain and the brain of a hooded seal. PMID:27147981

  15. Estimating Fiber Orientation Distribution Functions in 3D-Polarized Light Imaging

    PubMed Central

    Axer, Markus; Strohmer, Sven; Gräßel, David; Bücker, Oliver; Dohmen, Melanie; Reckfort, Julia; Zilles, Karl; Amunts, Katrin

    2016-01-01

    Research of the human brain connectome requires multiscale approaches derived from independent imaging methods ideally applied to the same object. Hence, comprehensible strategies for data integration across modalities and across scales are essential. We have successfully established a concept to bridge the spatial scales from microscopic fiber orientation measurements based on 3D-Polarized Light Imaging (3D-PLI) to meso- or macroscopic dimensions. By creating orientation distribution functions (pliODFs) from high-resolution vector data via series expansion with spherical harmonics utilizing high performance computing and supercomputing technologies, data fusion with Diffusion Magnetic Resonance Imaging has become feasible, even for a large-scale dataset such as the human brain. Validation of our approach was done effectively by means of two types of datasets that were transferred from fiber orientation maps into pliODFs: simulated 3D-PLI data showing artificial, but clearly defined fiber patterns and real 3D-PLI data derived from sections through the human brain and the brain of a hooded seal. PMID:27147981

  16. Estimating Hydraulic Conductivities in a Fractured Shale Formation from Pressure Pulse Testing and 3d Modeling

    NASA Astrophysics Data System (ADS)

    Courbet, C.; DICK, P.; Lefevre, M.; Wittebroodt, C.; Matray, J.; Barnichon, J.

    2013-12-01

    logging, porosity varies by a factor of 2.5 whilst hydraulic conductivity varies by 2 to 3 orders of magnitude. In addition, a 3D numerical reconstruction of the internal structure of the fault zone, inferred from borehole imagery, has been built to estimate variations of the permeability tensor. First results indicate that hydraulic conductivity values calculated for this structure are 2 to 3 orders of magnitude above those measured in situ. Such high values are due to the imaging method, which only takes into account open fractures of simple geometry (sine waves). Even though improvements are needed to handle more complex geometries, the outcomes are promising, as the fault damage zone clearly appears as the highest permeability zone, where stress analysis shows that the actual stress state may favor tensile reopening of fractures. Using shale samples cored from the different internal structures of the fault zone, we now aim to characterize advection and diffusion using laboratory petrophysical tests combined with radial and through-diffusion experiments.

  17. SU-E-J-135: An Investigation of Ultrasound Imaging for 3D Intra-Fraction Prostate Motion Estimation

    SciTech Connect

    O'Shea, T; Harris, E; Bamber, J; Evans, P

    2014-06-01

    Purpose: This study investigates the use of a mechanically swept 3D ultrasound (US) probe to estimate intra-fraction motion of the prostate during radiation therapy, using an US phantom and simulated transperineal imaging. Methods: A 3D motion platform was used to translate an US speckle phantom while simulating transperineal US imaging. Motion patterns for five representative types of prostate motion, generated from patient data previously acquired with a Calypso system, were used to move the phantom in 3D. The phantom was also implanted with fiducial markers and subsequently tracked using the CyberKnife kV x-ray system for comparison. A normalised cross correlation block matching algorithm was used to track speckle patterns in 3D and 2D US data. Motion estimation results were compared with known phantom translations. Results: Transperineal 3D US could track superior-inferior (axial) and anterior-posterior (lateral) motion to better than 0.8 mm root-mean-square error (RMSE) at a volume rate of 1.7 Hz (comparable with kV x-ray tracking RMSE). Motion estimation accuracy was poorest along the US probe's swept axis (right-left; RL; RMSE < 4.2 mm) but simple regularisation methods could be used to improve RMSE (< 2 mm). 2D US was found to be feasible for slowly varying motion (RMSE < 0.5 mm). 3D US could also allow accurate radiation beam gating, with displacement thresholds of 2 mm and 5 mm exhibiting a RMSE of less than 0.5 mm. Conclusion: 2D and 3D US speckle tracking is feasible for prostate motion estimation during radiation delivery. Since RL prostate motion is small in magnitude and frequency, 2D or a hybrid (2D/3D) US imaging approach which also accounts for potential prostate rotations could be used. Regularisation methods could be used to ensure the accuracy of tracking data, making US a feasible approach for gating or tracking in standard or hypo-fractionated prostate treatments.
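
    A minimal version of the normalised cross-correlation block matching used for the speckle tracking above is sketched here in 2D: a reference block from one frame is located within a search window in the next frame. Synthetic speckle images with a known integer-pixel shift stand in for the ultrasound data; sub-pixel interpolation and 3D tracking are omitted.

```python
# 2D normalised-cross-correlation block matching on synthetic speckle frames.
import numpy as np

rng = np.random.default_rng(8)
frame1 = rng.standard_normal((200, 200))
true_shift = (3, -2)                                      # known motion (rows, cols)
frame2 = np.roll(frame1, true_shift, axis=(0, 1))

def ncc(a, b):
    """Normalised cross-correlation coefficient between two equal-size blocks."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

block = frame1[90:110, 90:110]                            # 20x20 reference block
best, best_score = None, -1.0
for dr in range(-8, 9):                                   # search window +/- 8 px
    for dc in range(-8, 9):
        candidate = frame2[90 + dr:110 + dr, 90 + dc:110 + dc]
        score = ncc(block, candidate)
        if score > best_score:
            best_score, best = score, (dr, dc)
print("estimated shift:", best, "true shift:", true_shift)
```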

  18. Estimating elastic moduli of rocks from thin sections: Digital rock study of 3D properties from 2D images

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Mavko, Gary

    2016-03-01

    Estimation of elastic rock moduli using 2D plane strain computations from thin sections has several numerical and analytical advantages over using 3D rock images, including faster computation, smaller memory requirements, and the availability of cheap thin sections. These advantages, however, must be weighed against the accuracy with which 3D rock properties can be estimated from thin sections. We present a new method for predicting elastic properties of natural rocks using thin sections. Our method is based on a simple power-law transform that correlates computed 2D thin-section moduli with the corresponding 3D rock moduli. The validity of this transform is established using a dataset comprising FEM-computed elastic moduli of rock samples from various geologic formations, including Fontainebleau sandstone, Berea sandstone, Bituminous sand, and Grossmont carbonate. We find that a power-law coefficient between 0.4 and 0.6 covers the 2D-to-3D moduli transformation for all rocks considered in this study. We also find that reliable estimates of P-wave (Vp) and S-wave velocity (Vs) trends can be obtained using 2D thin sections.
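
    The idea of mapping a 2D plane-strain modulus to a 3D estimate through a power law can be written in a couple of lines. The normalization by a mineral modulus and the numbers below are assumptions made for illustration only; the exact transform calibrated by the authors is not reproduced here.

```python
# Power-law style 2D-to-3D modulus mapping (illustrative form and numbers only).
def modulus_3d_from_2d(m_2d, m_mineral, exponent=0.5):
    """Map a 2D thin-section modulus to a 3D estimate, both normalized by the
    mineral (solid grain) modulus; exponent assumed in the 0.4-0.6 range."""
    return m_mineral * (m_2d / m_mineral) ** exponent

# hypothetical numbers: 36 GPa mineral modulus, 9 GPa computed from a thin section
print(modulus_3d_from_2d(9.0, 36.0))   # -> 18.0 GPa with exponent 0.5
```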

  19. Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2014-05-01

    A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome, formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 up to 80-cm-long shells of Crassostrea gryphoides cover an area of 400 m2. Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g. tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as an extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well-known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase-shift measuring principle, which provides an accurate geometrical basis (< 3 mm). However, the situation is difficult in this multiple-object scenario, where more than 15,000 complete or fragmentary parts of an object with random orientation are found. The goal

  20. Estimation of gold potentials using 3D restoration modeling, Mount Pleasant Area, Western Australia

    NASA Astrophysics Data System (ADS)

    Mejia-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2015-04-01

    A broad variety of gold deposits are related to fault systems developed during a deformation event. Such discontinuities control metal transport and provide the relatively high permeability necessary for metal accumulation during ore-deposit formation. However, some gold deposits formed during the same deformation event occur at locations far from the main faults. In those cases, the fracture systems are related to the rock heterogeneity that partially controls the damage development in the rock mass. A geo-mechanical 3D restoration modeling approach was used to simulate the strain developed during a stretching episode that occurred in the Mount Pleasant region, Western Australia. First, a 3D solid model was created from geological maps and interpreted structural cross-sections available for the studied region. The backward model was obtained by flattening a stretching-representative reference surface selected from the lithology sequence. The deformation modeling was carried out on a 3D model built in Gocad/Skua and restored using full geo-mechanical modeling based on a finite element method to compute the volume restoration in a 600 m tetrahedral-mesh-resolution solid. The 3D structural restoration of the region was performed by flattening surfaces using a flexural slip deformation style. Results show how rock heterogeneity allows damage to develop in locations far from the fault systems. The distant off-fault damage areas are located preferentially at lithological contacts and also follow the deformation trend of the region. Using a logistic regression method, it is shown that off-fault zones with high gold occurrences correlate spatially with locations showing a locally high gradient of the first deformation parameter obtained from the restoration strain field. This contribution may provide some explanation for the presence of gold accumulations away from main fault systems, and the method could be used for inferring favorable areas in exploration surveys.

  1. Model-based 3D human shape estimation from silhouettes for virtual fitting

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Kouchi, Makiko; Mochimaru, Masaaki; Aoki, Yoshimitsu

    2014-03-01

    We propose a model-based 3D human shape reconstruction system that works from two silhouettes. First, we synthesize a deformable body model from a 3D human shape database consisting of one hundred whole-body mesh models. The mesh models are homologous, so they share the same topology and the same number of vertices. We perform principal component analysis (PCA) on the database and synthesize an Active Shape Model (ASM). The ASM allows the body type of the model to be changed with a few parameters. Pose changes of our model are achieved by reconstructing the skeleton structure from joints implanted in the model. By applying pose changes after body-type deformation, our model can represent various body types in any pose. We apply the model to the problem of 3D human shape reconstruction from front and side silhouettes. Our approach simply compares the contours of the model and the input silhouettes; we then use only the torso contour of the model to reconstruct the whole shape. We optimize the model parameters by minimizing the difference between corresponding silhouettes using a stochastic, derivative-free non-linear optimization method, CMA-ES.
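
    A minimal PCA "active shape model" of the kind described above can be sketched as follows: learn a low-dimensional shape space from homologous, equally parameterized meshes and synthesize a new body from a few parameters. Random vertex data stand in for the shape database, and the silhouette comparison and CMA-ES optimization stages are omitted.

```python
# PCA shape-space sketch on stand-in mesh data.
import numpy as np

n_models, n_vertices = 100, 5000
rng = np.random.default_rng(3)
# each row: one registered body mesh flattened to (x1, y1, z1, x2, y2, z2, ...)
meshes = rng.standard_normal((n_models, n_vertices * 3))

mean_shape = meshes.mean(axis=0)
centered = meshes - mean_shape
# SVD-based PCA; rows of Vt are the principal shape modes
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

k = 10                                     # keep the first 10 body-type parameters

def synthesize(params):
    """Reconstruct a mesh (n_vertices x 3) from k shape parameters."""
    return (mean_shape + params @ Vt[:k]).reshape(n_vertices, 3)

# draw parameters with variances matching the training data and build a new body
new_body = synthesize(rng.normal(0, S[:k] / np.sqrt(n_models), size=k))
print(new_body.shape)
```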

  2. Estimation of the thermal conductivity of hemp based insulation material from 3D tomographic images

    NASA Astrophysics Data System (ADS)

    El-Sawalhi, R.; Lux, J.; Salagnac, P.

    2016-08-01

    In this work, we are interested in the structural and thermal characterization of natural fiber insulation materials. The thermal performance of these materials depends on the arrangement of fibers, which is a consequence of the manufacturing process. In order to optimize these materials, thermal conductivity models can be used to correlate some relevant structural parameters with the effective thermal conductivity. However, only a few models are able to take into account the anisotropy of such materials related to the fiber orientation, and these models still need realistic input data (fiber orientation distribution, porosity, etc.). The structural characteristics are here directly measured on a 3D tomographic image using advanced image analysis techniques. Critical structural parameters like porosity, pore and fiber size distributions as well as the local fiber orientation distribution are measured. The results of the tested conductivity models are then compared with the conductivity tensor obtained by numerical simulation on the discretized 3D microstructure, as well as with available experimental measurements. We show that 1D analytical models are generally not suitable for assessing the thermal conductivity of such anisotropic media. Yet, a few anisotropic models can still be of interest to relate some structural parameters, like the fiber orientation distribution, to the thermal properties. Finally, our results emphasize that numerical simulation on a realistic 3D microstructure is a very interesting alternative to experimental measurements.

  3. Anthropological facial approximation in three dimensions (AFA3D): computer-assisted estimation of the facial morphology using geometric morphometrics.

    PubMed

    Guyomarc'h, Pierre; Dutailly, Bruno; Charton, Jérôme; Santos, Frédéric; Desbarats, Pascal; Coqueugniot, Hélène

    2014-11-01

    This study presents Anthropological Facial Approximation in Three Dimensions (AFA3D), a new computerized method for estimating face shape based on computed tomography (CT) scans of 500 French individuals. Facial soft tissue depths are estimated based on age, sex, corpulence, and craniometrics, and projected using reference planes to obtain the global facial appearance. Position and shape of the eyes, nose, mouth, and ears are inferred from cranial landmarks through geometric morphometrics. The 100 estimated cutaneous landmarks are then used to warp a generic face to the target facial approximation. A validation by re-sampling on a subsample demonstrated an average accuracy of c. 4 mm for the overall face. The resulting approximation is an objective probable facial shape, but is also synthetic (i.e., without texture), and therefore needs to be enhanced artistically prior to its use in forensic cases. AFA3D, integrated in the TIVMI software, is available freely for further testing. PMID:25088006

  4. Atmospheric Nitrogen Trifluoride: Optimized emission estimates using 2-D and 3-D Chemical Transport Models from 1973-2008

    NASA Astrophysics Data System (ADS)

    Ivy, D. J.; Rigby, M. L.; Prinn, R. G.; Muhle, J.; Weiss, R. F.

    2009-12-01

    We present optimized annual global emissions from 1973-2008 of nitrogen trifluoride (NF3), a powerful greenhouse gas which is not currently regulated by the Kyoto Protocol. In the past few decades, NF3 production has dramatically increased due to its usage in the semiconductor industry. Emissions were estimated through the 'pulse-method' discrete Kalman filter using both a simple, flexible 2-D 12-box model used in the Advanced Global Atmospheric Gases Experiment (AGAGE) network and the Model for Ozone and Related Tracers (MOZART v4.5), a full 3-D atmospheric chemistry model. No official audited reports of industrial NF3 emissions are available, and with limited information on production, a priori emissions were estimated using both a bottom-up and top-down approach with two different spatial patterns based on semiconductor perfluorocarbon (PFC) emissions from the Emission Database for Global Atmospheric Research (EDGAR v3.2) and Semiconductor Industry Association sales information. Both spatial patterns used in the models gave consistent results, showing the robustness of the estimated global emissions. Differences between estimates using the 2-D and 3-D models can be attributed to transport rates and resolution differences. Additionally, new NF3 industry production and market information is presented. Emission estimates from both the 2-D and 3-D models suggest that either the assumed industry release rate of NF3 or industry production information is still underestimated.

  5. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System.

    PubMed

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have therefore developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken with four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R² = 0.98) and 0.57 mm (R² = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions was the highest, and yielded the largest number of reconstructed fine-scale 3D surfaces of leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system is well suited to capturing high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  6. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    PubMed Central

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have therefore developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken with four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R² = 0.98) and 0.57 mm (R² = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions was the highest, and yielded the largest number of reconstructed fine-scale 3D surfaces of leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system is well suited to capturing high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  7. Selecting best-fit models for estimating the body mass from 3D data of the human calcaneus.

    PubMed

    Jung, Go-Un; Lee, U-Young; Kim, Dong-Ho; Kwak, Dai-Soon; Ahn, Yong-Woo; Han, Seung-Ho; Kim, Yi-Suk

    2016-05-01

    Body mass (BM) estimation could facilitate the interpretation of skeletal materials in terms of the individual's body size and physique in forensic anthropology. However, few metric studies have tried to estimate BM by focusing on prominent biomechanical properties of the calcaneus. The purpose of this study was to prepare best-fit models for estimating BM from the 3D human calcaneus using two major linear regression approaches (the heuristic statistical and all-possible-regressions techniques) and to validate the models through predicted residual sum of squares (PRESS) statistics. A metric analysis was conducted on 70 human calcaneus samples (29 males and 41 females) taken from 3D models in the Digital Korean Database, and 10 variables were measured for each sample. Three best-fit models were selected by F-statistics, Mallows' Cp, and the Akaike information criterion (AIC) and Bayes information criterion (BIC) from the available candidate models. The most accurate regression model yields the lowest %SEE and an R² of 0.843. Leave-one-out cross-validation indicated a high level of predictive accuracy. This study also confirms that the equations for estimating BM using 3D models of the human calcaneus will be helpful for establishing identification in forensic cases with consistent reliability. PMID:26970867
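
    The all-possible-regressions idea is easy to sketch: fit every subset of candidate measurements and rank the models by an information criterion such as AIC. The data below are random stand-ins for the 10 calcaneal variables and body mass, and the AIC formula assumes Gaussian errors; this is not the authors' dataset or model-selection pipeline.

```python
# All-possible-regressions sketch ranked by AIC, on stand-in data.
import itertools
import numpy as np

rng = np.random.default_rng(4)
n, p = 70, 10
X = rng.standard_normal((n, p))                  # 10 standardized measurements
beta_true = np.array([3, 0, 0, 2, 0, 0, 0, 1, 0, 0], float)
y = 65 + X @ beta_true + rng.normal(0, 2, n)     # body mass (kg)

def aic_of_subset(cols):
    """Ordinary least squares fit on a column subset; Gaussian-likelihood AIC."""
    A = np.column_stack([np.ones(n), X[:, cols]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ coef) ** 2)
    k = A.shape[1] + 1                           # coefficients + error variance
    return n * np.log(rss / n) + 2 * k

best = min((aic_of_subset(c), c)
           for r in range(1, p + 1)
           for c in itertools.combinations(range(p), r))
print("best AIC:", round(best[0], 1), "variables:", best[1])
```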

  8. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, which involve lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras, and 3D point cloud data were generated using structure-from-motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling was applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
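
    The convex-hulling step can be illustrated directly with scipy: given the point cloud of one separated body segment, the hull volume (times an assumed tissue density) gives the segment mass. The cylinder-like point cloud and the 1050 kg/m3 density below are placeholders, not values from the paper.

```python
# Segment volume and mass from the convex hull of a stand-in point cloud.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, 2000)
radius = 0.05 * np.sqrt(rng.uniform(0, 1, 2000))          # ~5 cm radius
z = rng.uniform(0, 0.30, 2000)                            # ~30 cm long segment
points = np.column_stack([radius * np.cos(theta), radius * np.sin(theta), z])

hull = ConvexHull(points)
density = 1050.0                                          # kg/m^3, assumed
print("segment volume (L):", round(hull.volume * 1000, 2))
print("segment mass (kg):", round(hull.volume * density, 2))
```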

  9. Scatterer size and concentration estimation technique based on a 3D acoustic impedance map from histologic sections

    NASA Astrophysics Data System (ADS)

    Mamou, Jonathan; Oelze, Michael L.; O'Brien, William D.; Zachary, James F.

    2001-05-01

    Accurate estimates of scatterer parameters (size and acoustic concentration) are beneficial adjuncts for characterizing disease from ultrasonic backscatter measurements. An estimation technique was developed to obtain parameter estimates from the Fourier transform of the spatial autocorrelation function (SAF). A 3D impedance map (3DZM) is used to obtain the SAF of tissue. 3DZMs are obtained by aligning digitized light microscope images from histologic preparations of tissue. Estimates were obtained for simulated 3DZMs containing randomly located spherical scatterers: relative errors were less than 3%. Estimates were also obtained from a rat fibroadenoma and a 4T1 mouse mammary tumor (MMT). Tissues were fixed (10% neutral-buffered formalin), embedded in paraffin, serially sectioned and stained with H&E. 3DZM results were compared to estimates obtained independently from ultrasonic backscatter measurements. For the fibroadenoma and MMT, average scatterer diameters were 91 and 31.5 μm, respectively. Ultrasonic measurements yielded average scatterer diameters of 105 and 30 μm, respectively. The 3DZM estimation scheme showed results similar to those obtained by the independent ultrasonic measurements. The 3D impedance maps show promise as a powerful tool to characterize ultrasonic scattering sites of tissue. [Work supported by the University of Illinois Research Board.]

  10. Effect of GIA models with 3D composite mantle viscosity on GRACE mass balance estimates for Antarctica

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Whitehouse, Pippa L.; Schrama, Ernst J. O.

    2015-03-01

    Seismic data indicate that there are large viscosity variations in the mantle beneath Antarctica. Consideration of such variations would affect predictions of models of Glacial Isostatic Adjustment (GIA), which are used to correct satellite measurements of ice mass change. However, most GIA models used for that purpose have assumed the mantle to be uniformly stratified in terms of viscosity. The goal of this study is to estimate the effect of lateral variations in viscosity on Antarctic mass balance estimates derived from the Gravity Recovery and Climate Experiment (GRACE) data. To this end, recently-developed global GIA models based on lateral variations in mantle temperature are tuned to fit constraints in the northern hemisphere and then compared to GPS-derived uplift rates in Antarctica. We find that these models can provide a better fit to GPS uplift rates in Antarctica than existing GIA models with a radially-varying (1D) rheology. When 3D viscosity models in combination with specific ice loading histories are used to correct GRACE measurements, mass loss in Antarctica is smaller than previously found for the same ice loading histories and their preferred 1D viscosity profiles. The variation in mass balance estimates arising from using different plausible realizations of 3D viscosity amounts to 20 Gt/yr for the ICE-5G ice model and 16 Gt/yr for the W12a ice model; these values are larger than the GRACE measurement error, but smaller than the variation arising from unknown ice history. While there exist 1D Earth models that can reproduce the total mass balance estimates derived using 3D Earth models, the spatial pattern of gravity rates can be significantly affected by 3D viscosity in a way that cannot be reproduced by GIA models with 1D viscosity. As an example, models with 1D viscosity always predict maximum gravity rates in the Ross Sea for the ICE-5G ice model, however, for one of the three preferred 3D models the maximum (for the same ice model) is found

  11. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    SciTech Connect

    Glitzner, M; Lagendijk, J; Raaymakers, B; Crijns, S; Senneville, B Denis de

    2015-06-15

    Recent developments have made MRI-guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about the changing anatomy by means of deformable image registration, for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D MR imaging data were sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of (2.5 mm³) for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data were downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. At kidney-liver boundaries and in the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on the synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the

  12. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    PubMed Central

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-01-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that rely on the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the “non-progressing” and “progressing” glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection. PMID:25606299

  13. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    NASA Astrophysics Data System (ADS)

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-03-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that rely on the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.

  14. CO2 mass estimation visible in time-lapse 3D seismic data from a saline aquifer and uncertainties

    NASA Astrophysics Data System (ADS)

    Ivanova, A.; Lueth, S.; Bergmann, P.; Ivandic, M.

    2014-12-01

    At Ketzin (Germany), the first European onshore pilot-scale project for geological storage of CO2 was initiated in 2004. This project is multidisciplinary and includes 3D time-lapse seismic monitoring. A 3D pre-injection seismic survey was acquired in 2005. CO2 injection into a sandstone saline aquifer then started at a depth of 650 m in 2008. A first 3D seismic repeat survey was acquired in 2009, after 22 kilotons had been injected. The imaged CO2 signature was concentrated around the injection well (200-300 m). A second 3D seismic repeat survey was acquired in 2012, after 61 kilotons had been injected. The imaged CO2 signature had extended further (100-200 m). The injection was terminated in 2013. In total, 67 kilotons of CO2 were injected. Time-lapse seismic processing, petrophysical data and geophysical logging on CO2 saturation have allowed for an estimate of the amount of CO2 visible in the seismic data. This estimate depends on the choice of a number of parameters and contains a number of uncertainties. The main uncertainties are the following. The constant reservoir porosity and CO2 density used for the estimation are probably an over-simplification, since the reservoir is quite heterogeneous. Velocity dispersion may be present in the Ketzin reservoir rocks, but we do not consider it to be large enough to affect the mass of CO2 in our estimation. There are only a small number of direct petrophysical observations, providing a weak statistical basis for the determination of seismic velocities as a function of CO2 saturation, and we have assumed that the petrophysical experiments were carried out on samples that are representative of the average properties of the whole reservoir. Finally, most of the time delay values within the amplitude anomaly in both 3D seismic repeat surveys are near the noise level of 1-2 ms; however, a change of 1 ms in the time delay significantly affects the mass estimate, so the choice of the time-delay cutoff is crucial. In spite

  15. Hierarchical estimation of a dense deformation field for 3-D robust registration.

    PubMed

    Hellier, P; Barillot, C; Mémin, E; Pérez, P

    2001-05-01

    A new method for medical image registration is formulated as a minimization problem involving robust estimators. We propose an efficient hierarchical optimization framework which is both multiresolution and multigrid. An anatomical segmentation of the cortex is introduced into the adaptive partitioning of the volume on which the multigrid minimization is based. This makes it possible to limit the estimation to the areas of interest, to accelerate the algorithm, and to refine the estimation in specified areas. At each stage of the hierarchical estimation, we refine the current estimate by seeking a piecewise affine model for the incremental deformation field. The performance of this method is numerically evaluated on simulated data, and its benefits and robustness are shown on a database of 18 magnetic resonance imaging scans of the head. PMID:11403198

  16. Building continental-scale 3D subsurface layers in the Digital Crust project: constrained interpolation and uncertainty estimation.

    NASA Astrophysics Data System (ADS)

    Yulaeva, E.; Fan, Y.; Moosdorf, N.; Richard, S. M.; Bristol, S.; Peters, S. E.; Zaslavsky, I.; Ingebritsen, S.

    2015-12-01

    The Digital Crust EarthCube building block creates a framework for integrating disparate 3D/4D information from multiple sources into a comprehensive model of the structure and composition of the Earth's upper crust, and for demonstrating the utility of this model in several research scenarios. One such scenario is the estimation of various crustal properties related to fluid dynamics (e.g. permeability and porosity) at each node of an arbitrary unstructured 3D grid to support continental-scale numerical models of fluid flow and transport. Starting from Macrostrat, an existing 4D database of 33,903 chronostratigraphic units, and employing GeoDeepDive, a software system for extracting structured information from unstructured documents, we construct 3D gridded fields of sediment/rock porosity, permeability and geochemistry for large sedimentary basins of North America, which will be used to improve our understanding of large-scale fluid flow, chemical weathering rates, and geochemical fluxes into the ocean. In this talk, we discuss the methods, data gaps (particularly in geologically complex terrain), and various physical and geological constraints on interpolation and uncertainty estimation.

  17. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    SciTech Connect

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-06-01

    Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1), the tumor localization errors and the intensity differences for the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2), the tumor localization error and intensity differences were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT

  18. A statistical approach to estimate the 3D size distribution of spheres from 2D size distributions

    USGS Publications Warehouse

    Kong, M.; Bhattacharya, R.N.; James, C.; Basu, A.

    2005-01-01

    Size distribution of rigidly embedded spheres in a groundmass is usually determined from measurements of the radii of the two-dimensional (2D) circular cross sections of the spheres in random flat planes of a sample, such as in thin sections or polished slabs. Several methods have been devised to find a simple factor to convert the mean of such 2D size distributions to the actual 3D mean size of the spheres, without a consensus. We derive an entirely theoretical solution based on well-established probability laws, not constrained by limitations of absolute size, which indicates that the ratio of the means of the measured 2D and estimated 3D grain size distributions should be π/4 (≈0.785). The actual 2D size distribution of the radii of submicron-sized, pure Fe0 globules in lunar agglutinitic glass, determined from backscattered electron images, is tested to fit the gamma size distribution model better than the log-normal model. Numerical analysis of the 2D size distributions of Fe0 globules in 9 lunar soils shows that the average 2D/3D ratio of means is 0.84, which is very close to the theoretical value. These results converge with the ratio 0.8 that Hughes (1978) determined for millimeter-sized chondrules from empirical measurements. We recommend that a factor of 1.273 (the reciprocal of 0.785) be used to convert the determined 2D mean size (radius or diameter) of a population of spheres to estimate their actual 3D size. © 2005 Geological Society of America.
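
    The π/4 factor quoted above can be checked with a quick Monte Carlo experiment: for random planar sections of equal spheres, the section radius is sqrt(R² − d²) with the plane's distance d from the centre uniform on [0, R], so the mean 2D radius is (π/4)R and the conversion factor back to 3D is 4/π ≈ 1.273.

```python
# Monte Carlo check of the 2D-to-3D conversion factor for sphere cross sections.
import numpy as np

rng = np.random.default_rng(6)
R = 1.0
# a random plane cuts a sphere at a distance d ~ Uniform(0, R) from its centre
d = rng.uniform(0, R, 1_000_000)
cross_section_radii = np.sqrt(R ** 2 - d ** 2)

mean_2d = cross_section_radii.mean()
print("mean 2D / 3D radius:", round(mean_2d / R, 4))      # ~0.785 = pi/4
print("conversion factor:", round(R / mean_2d, 4))        # ~1.273 = 4/pi
```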

  19. Real-time estimation of FLE statistics for 3-D tracking with point-based registration.

    PubMed

    Wiles, Andrew D; Peters, Terry M

    2009-09-01

    Target registration error (TRE) has become a widely accepted error metric in point-based registration since the metric was introduced in the 1990s. It is particularly prominent in image-guided surgery (IGS) applications, where point-based registration is used in both image registration and optical tracking. In point-based registration, the TRE is a function of the fiducial marker geometry, the location of the target and the fiducial localizer error (FLE). While the first two items are easily obtained, the FLE is usually estimated using an a priori technique and applied without any knowledge of real-time information. However, if the FLE can be estimated in real time, particularly as it pertains to optical tracking, then the TRE can be estimated more robustly. In this paper, a method is presented in which the FLE statistics are estimated from the latest measurement of the fiducial registration error (FRE) statistics. The solution is obtained by solving a linear system of equations of the form Ax=b for each marker at each time frame, where x contains the six independent FLE covariance parameters and b the six independent estimated FRE covariance parameters. The A matrix is only a function of the tool geometry, and hence the inverse of the matrix can be computed a priori and used at each instant at which the FLE estimation is required, minimizing the computation needed at each frame. When using a good estimate of the FRE statistics, Monte Carlo simulations demonstrate that the root mean square of the FLE can be computed within a range of 70-90 microns. Robust estimation of the TRE for an optically tracked tool, using a good estimate of the FLE, will provide two enhancements in IGS. First, better patient-to-image registration will be obtained by using the TRE of the optical tool as a weighting factor in the point-based registration used to map the patient to image space. Second, the directionality of the TRE can be relayed back to the surgeon, giving the surgeon the option
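
    The per-frame linear solve described above can be sketched as follows. The 6×6 matrix A below is a random, well-conditioned placeholder (in practice it is derived from the marker geometry and its inverse is computed once, offline), the FRE covariance values are illustrative, and which of the six parameters are variances versus covariances is not specified here.

```python
# Per-frame FLE estimation sketch: x = A^{-1} b with A precomputed offline.
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # stand-in, well conditioned
A_inv = np.linalg.inv(A)                          # computed a priori, reused per frame

def fle_covariance_params(fre_cov_params):
    """Map the six estimated FRE covariance parameters to FLE parameters."""
    return A_inv @ fre_cov_params

# illustrative FRE covariance parameters for one marker at one frame (mm^2)
b = np.array([0.009, 0.008, 0.012, 0.001, -0.002, 0.0005])
x = fle_covariance_params(b)
print("estimated FLE covariance parameters:", np.round(x, 4))
```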

  20. Leaf Area Index Estimation in Vineyards from UAV Hyperspectral Data, 2D Image Mosaics and 3D Canopy Surface Models

    NASA Astrophysics Data System (ADS)

    Kalisperakis, I.; Stentoumis, Ch.; Grammatikopoulos, L.; Karantzalos, K.

    2015-08-01

    The indirect estimation of leaf area index (LAI) at large spatial scales is crucial for several environmental and agricultural applications. To this end, in this paper, we compare and evaluate LAI estimation in vineyards from different UAV imaging datasets. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthophotomosaics and (iii) 3D crop surface models. The computed canopy levels have been used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (r² > 73%) with the in-situ, ground truth LAI measurements. As expected, the lowest correlations were derived from the calculated greenness levels from the 2D RGB orthomosaics. The highest correlation rates were established with the hyperspectral canopy greenness and the 3D canopy surface models. For the latter, the accurate detection of canopy, soil and other materials in between the vine rows is required. All approaches tend to overestimate LAI in cases with sparse, weak, unhealthy plants and canopy.

  1. Body mass estimations for Plateosaurus engelhardti using laser scanning and 3D reconstruction methods

    NASA Astrophysics Data System (ADS)

    Gunga, Hanns-Christian; Suthau, Tim; Bellmann, Anke; Friedrich, Andreas; Schwanebeck, Thomas; Stoinski, Stefan; Trippel, Tobias; Kirsch, Karl; Hellwich, Olaf

    2007-08-01

    Both body mass and surface area are factors determining the essence of any living organism. This should also hold true for an extinct organism such as a dinosaur. The present report discusses the use of a new 3D laser scanner method to establish body masses and surface areas of an Asian elephant (Zoological Museum of Copenhagen, Denmark) and of Plateosaurus engelhardti, a prosauropod from the Upper Triassic, exhibited at the Paleontological Museum in Tübingen (Germany). This method was used to study the effect that slight changes in body shape had on body mass for P. engelhardti. It was established that body volumes varied between 0.79 m³ (slim version) and 1.14 m³ (robust version), resulting in estimated body masses of 630 and 912 kg, respectively. The total body surface areas ranged between 8.8 and 10.2 m², of which, in both reconstructions of P. engelhardti, ~33% is accounted for by the thorax area alone. The main difference between the two models is in the tail and hind limb reconstruction. The tail of the slim version has a surface area of 1.98 m², whereas that of the robust version has a surface area of 2.73 m². The body volumes calculated for the slim version were as follows: head 0.006 m³, neck 0.016 m³, fore limbs 0.020 m³, hind limbs 0.08 m³, thoracic cavity 0.533 m³, and tail 0.136 m³. For the robust model, the following volumes were established: head 0.01 m³, neck 0.026 m³, fore limbs 0.025 m³, hind limbs 0.18 m³, thoracic cavity 0.616 m³, and tail 0.28 m³. Based on these body volumes, scaling equations were used to estimate the sizes of the organs of this extinct dinosaur.
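    A small check of the reported volume-to-mass step, assuming the bulk density of roughly 800 kg/m³ that is implied by the published volume and mass pairs (the density is an inferred assumption, not a value stated in the record):

      # Assumed bulk density implied by the reported volume/mass pairs.
      ASSUMED_DENSITY_KG_PER_M3 = 800.0

      body_volumes_m3 = {"slim": 0.79, "robust": 1.14}

      for name, volume in body_volumes_m3.items():
          mass_kg = volume * ASSUMED_DENSITY_KG_PER_M3
          print(f"{name}: {volume:.2f} m^3 -> ~{mass_kg:.0f} kg")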

  2. Estimation of vocal fold plane in 3D CT images for diagnosis of vocal fold abnormalities.

    PubMed

    Hewavitharanage, Sajini; Gubbi, Jayavardhana; Thyagarajan, Dominic; Lau, Ken; Palaniswami, Marimuthu

    2015-01-01

    Vocal folds are the key body structures responsible for phonation and for regulating air movement into and out of the lungs. Various vocal fold disorders may seriously impact quality of life. When diagnosing vocal fold disorders, CT of the neck is the commonly used imaging method. However, the vocal folds do not align with the normal axial plane of the neck, and the plane containing the vocal cords and arytenoids varies during phonation. It is therefore important to develop an algorithm for detecting the actual plane containing the vocal folds. In this paper, we propose a method to automatically estimate the vocal fold plane using vertebral column and anterior commissure localization. Gray-level thresholding, connected component analysis, rule-based segmentation and unsupervised k-means clustering were used in the proposed algorithm. The anterior commissure segmentation method achieved an accuracy of 85%, in good agreement with the expert assessment. PMID:26736949

  3. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. Such a system should be able to recognize and locate objects with a predefined shape and estimate their positions with the precision necessary for a gripping robot to pick them up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidate object matches. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, exemplified here by the localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable to a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  4. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.

  5. Estimation and 3-D modeling of seismic parameters for fluvial systems

    SciTech Connect

    Brown, R.L.; Levey, R.A.

    1994-12-31

    Borehole measurements of parameters related to seismic propagation (Vp, Vs, Qp and Qs) are seldom available at all the wells within an area of study. Well logs and other available data, together with certain results from laboratory measurements, can be used to predict seismic parameters at wells where these measurements are not available. Three-dimensional interpolation techniques based upon geological constraints can then be used to estimate the spatial distribution of geophysical parameters within a given environment. The net product is a more realistic model of the distribution of geophysical parameters, which can be used in the design of surface and borehole seismic methods for probing the reservoir.

  6. ILIAD Testing; and a Kalman Filter for 3-D Pose Estimation

    NASA Technical Reports Server (NTRS)

    Richardson, A. O.

    1996-01-01

    This report presents the results of a two-part project. The first part presents results of performance assessment tests on an Internet Library Information Assembly Data Base (ILIAD). It was found that ILIAD performed best when queries were short (one to three keywords) and were made up of rare, unambiguous words. In such cases, as many as 64% of the typically 25 returned documents were found to be relevant. It was also found that a query format that was not so rigid with respect to spelling errors and punctuation marks would be more user-friendly. The second part of the report shows the design of a Kalman filter for estimating motion parameters of a three-dimensional object from sequences of noisy data derived from two-dimensional pictures. Given six measured deviation values representing X, Y, Z, pitch, yaw, and roll, twelve parameters were estimated, comprising the six deviations and their time rates of change. Values for the state transition matrix, the observation matrix, the system noise covariance matrix, and the observation noise covariance matrix were determined. A simple way of initializing the error covariance matrix was pointed out.
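    A minimal constant-velocity Kalman filter sketch for this kind of 12-state problem (six deviations plus their rates, with only the six deviations measured); the time step and noise covariances below are illustrative assumptions, not the values determined in the report:

      import numpy as np

      dt = 1.0 / 30.0          # assumed frame interval between 2D pictures
      n_pose, n_state = 6, 12  # 6 deviations (X, Y, Z, pitch, yaw, roll) + 6 rates

      # State transition: each deviation integrates its own rate.
      F = np.eye(n_state)
      F[:n_pose, n_pose:] = dt * np.eye(n_pose)

      # Observation: only the six deviations are measured.
      H = np.hstack([np.eye(n_pose), np.zeros((n_pose, n_pose))])

      Q = 1e-4 * np.eye(n_state)  # assumed system (process) noise covariance
      R = 1e-2 * np.eye(n_pose)   # assumed observation noise covariance

      x = np.zeros(n_state)       # state estimate
      P = np.eye(n_state)         # simple initialization of the error covariance

      def kalman_step(x, P, z):
          # Predict.
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          # Update with the measured deviations z (length 6).
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(n_state) - K @ H) @ P_pred
          return x_new, P_new

      z = np.array([0.10, -0.05, 0.02, 0.01, 0.00, -0.01])  # hypothetical measurement
      x, P = kalman_step(x, P, z)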

  7. Real-Time Estimation of 3-D Needle Shape and Deflection for MRI-Guided Interventions

    PubMed Central

    Park, Yong-Lae; Elayaperumal, Santhi; Daniel, Bruce; Ryu, Seok Chang; Shin, Mihye; Savall, Joan; Black, Richard J.; Moslehi, Behzad; Cutkosky, Mark R.

    2015-01-01

    We describe an MRI-compatible biopsy needle instrumented with optical fiber Bragg gratings for measuring bending deflections of the needle as it is inserted into tissues. During procedures such as diagnostic biopsies and localized treatments, it is useful to track any tool deviation from the planned trajectory to minimize positioning errors and procedural complications. The goal is to display tool deflections in real time, with greater bandwidth and accuracy than when viewing the tool in MR images. A standard 18 ga × 15 cm inner needle is prepared using a fixture, and 350-μm-deep grooves are created along its length. Optical fibers are embedded in the grooves. Two sets of sensors, located at different points along the needle, provide an estimate of the bent profile, as well as temperature compensation. Tests of the needle in a water bath showed that it produced no adverse imaging artifacts when used with the MR scanner. PMID:26405428

  8. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in better speckle tracking performance. However, those models do not consider common transformations used to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most

  9. Estimation of aortic valve leaflets from 3D CT images using local shape dictionaries and linear coding

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James

    2016-03-01

    Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images through many hours of manual annotation, and therefore an automatic valve shape reconstruction method is desired. In this paper, we present a method for estimating the aortic valve shape, represented by triangle meshes, from 3D cardiac CT images. We propose a pipeline for aortic valve shape estimation which includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on a real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.

  10. A computational model for estimating tumor margins in complementary tactile and 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Shamsil, Arefin; Escoto, Abelardo; Naish, Michael D.; Patel, Rajni V.

    2016-03-01

    Conventional surgical methods are effective for treating lung tumors; however, they impose considerable trauma and pain on patients. Minimally invasive surgery is a safer alternative, as smaller incisions are required to reach the lung; however, it is challenging due to inadequate intraoperative tumor localization. To address this issue, a mechatronic palpation device was developed that incorporates tactile and ultrasound sensors capable of acquiring surface and cross-sectional images of palpated tissue. Initial work focused on tactile image segmentation and fusion of position-tracked tactile images, resulting in a reconstruction of the palpated surface to compute the spatial locations of underlying tumors. This paper presents a computational model capable of analyzing orthogonally paired tactile and ultrasound images to compute the surface circumference and depth margins of a tumor. The framework also integrates an error compensation technique and an algebraic model to align all of the image pairs and to estimate the tumor depths within the tracked thickness of a palpated tissue. For validation, an ex vivo experimental study was conducted involving the complete palpation of 11 porcine liver tissues injected with iodine-agar tumors of varying sizes and shapes. The resulting tactile and ultrasound images were then processed using the proposed model to compute the tumor margins and compare them to fluoroscopy-based physical measurements. The results show a good negative correlation (r = -0.783, p = 0.004) for the tumor surface margins and a good positive correlation (r = 0.743, p = 0.009) for the tumor depth margins.

  11. Accurate 3D rigid-body target motion and structure estimation by using GMTI/HRR with template information

    NASA Astrophysics Data System (ADS)

    Wu, Shunguang; Hong, Lang

    2008-04-01

    A framework for simultaneously estimating the motion and structure parameters of a 3D object using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information is given. By decoupling the motion and structure information and employing rigid-body constraints, we have developed the kinematic and measurement equations of the problem. Since the kinematic system is unobservable using only a single scan of HRR and GMTI measurements, we designed an architecture that runs the motion and structure filters in parallel using multi-scan measurements. Moreover, to improve the estimation accuracy in high-noise and/or false-alarm environments, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results have shown that the averaged root mean square errors for both motion and structure state vectors have been significantly reduced by using the template information.

  12. Landscape scale estimation of soil carbon stock using 3D modelling.

    PubMed

    Veronesi, F; Corstanje, R; Mayr, T

    2014-07-15

    Soil C is the largest pool of carbon in the terrestrial biosphere, and yet the processes of C accumulation, transformation and loss are poorly accounted for. This is, in part, because soil C is not uniformly distributed through the soil depth profile, and most current landscape-level predictions of C do not adequately account for the vertical distribution of soil C. In this study, we apply a method based on simple soil-specific depth functions to map the soil C stock in three dimensions at landscape scale. We used soil C and bulk density data from the Soil Survey for England and Wales to map an area in the West Midlands region of approximately 13,948 km². We applied a method which describes the variation through the soil profile and interpolates this across the landscape using well-established soil drivers such as relief, land cover and geology. The results indicate that this mapping method can effectively reproduce the observed variation in the soil profile samples. The mapping results were validated using cross-validation and an independent validation. The cross-validation resulted in an R² of 36% for soil C and 44% for BULKD. These results are generally in line with previous validated studies. In addition, an independent validation was undertaken, comparing the predictions against the National Soil Inventory (NSI) dataset. The majority of the residuals of this validation are within ± 5% of soil C, indicating a high level of accuracy in replicating topsoil values. In addition, the results were compared to a previous study estimating the carbon stock of the UK. We discuss the implications of our results within the context of soil C loss factors such as erosion and the impact on regional C process models. PMID:24636454

  13. 3D Wind Reconstruction and Turbulence Estimation in the Boundary Layer from Doppler Lidar Measurements using Particle Method

    NASA Astrophysics Data System (ADS)

    Rottner, L.; Baehr, C.

    2014-12-01

    Turbulent phenomena in the atmospheric boundary layer (ABL) are characterized by small spatial and temporal scales, which make them difficult to observe and to model. New remote sensing instruments, like Doppler Lidar, give access to fine and high-frequency observations of wind in the ABL. This study suggests using a nonlinear estimation method based on these observations to reconstruct 3D wind in a hemispheric volume and to estimate atmospheric turbulent parameters. The wind observations are associated with particle systems which are driven by a local turbulence model. The particles have both fluid and stochastic properties. Therefore, spatial averages and covariances may be deduced from the particles. Among the innovative aspects, we point out the absence of the common hypothesis of stationary-ergodic turbulence and the non-use of a particle model closure hypothesis. Every time observations are available, 3D wind is reconstructed and turbulent parameters such as turbulent kinetic energy, dissipation rate, and turbulence intensity (TI) are provided. This study presents some results obtained using real wind measurements provided by a five-line-of-sight Lidar. Compared with classical methods (e.g. eddy covariance), our technique gives equivalent long-time results. Moreover, it provides finer and real-time turbulence estimations. To assess this new method, we suggest computing TI independently using different observation types. First, anemometer data are used to provide a TI reference. Then, raw and filtered Lidar observations are compared. The TI obtained from raw data is significantly higher than the reference, whereas the TI estimated with the new algorithm has the same order. In this study we have presented a new class of algorithms to reconstruct local random media. It offers a new way to understand turbulence in the ABL, in both stable and convective conditions. Later, it could be used to refine turbulence parametrization in meteorological meso-scale models.

  14. Simultaneous Multi-Structure Segmentation and 3D Nonrigid Pose Estimation in Image-Guided Robotic Surgery.

    PubMed

    Nosrati, Masoud S; Abugharbieh, Rafeef; Peyrat, Jean-Marc; Abinahed, Julien; Al-Alao, Osama; Al-Ansari, Abdulla; Hamarneh, Ghassan

    2016-01-01

    In image-guided robotic surgery, segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information provides surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a challenging problem due to a variety of complications including significant noise attributed to bleeding and smoke from cutting, poor appearance contrast between different tissue types, occluding surgical tools, and limited visibility of the objects' geometries on the projected camera views. In this paper, we propose a multi-modal approach to segmentation where preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and consider the models' non-rigid deformations to match with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters, an assumption that is often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom, our method embeds these parameters into the optimization hence correcting the calibration parameters within the segmentation process. We evaluate our technique on synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery with results demonstrating high accuracy and robustness. PMID:26151933

  15. Estimating porosity with ground-penetrating radar reflection tomography: A controlled 3-D experiment at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Bradford, John H.; Clement, William P.; Barrash, Warren

    2009-04-01

    To evaluate the uncertainty of water-saturated sediment velocity and porosity estimates derived from surface-based ground-penetrating radar reflection tomography, we conducted a controlled field experiment at the Boise Hydrogeophysical Research Site (BHRS). The BHRS is an experimental well field located near Boise, Idaho. The experimental data set consisted of 3-D multioffset radar data acquired on an orthogonal 20 × 30 m surface grid that encompassed a set of 13 boreholes. Experimental control included (1) 1-D vertical velocity functions determined from traveltime inversion of vertical radar profiles (VRP) and (2) neutron porosity logs. We estimated the porosity distribution in the saturated zone using both the Topp and Complex Refractive Index Method (CRIM) equations and found the CRIM estimates in better agreement with the neutron logs. We found that when averaged over the length of the borehole, surface-derived velocity measurements were within 5% of the VRP velocities and that the porosity differed from the neutron log by less than 0.05. The uncertainty, however, is scale dependent. We found that the standard deviation of differences between ground-penetrating-radar-derived and neutron-log-derived porosity values was as high as 0.06 at an averaging length of 0.25 m but decreased to less than 0.02 at a length scale of 11 m. Additionally, we used the 3-D porosity distribution to identify a relatively high-porosity anomaly (i.e., a local sedimentary body) within a lower-porosity unit and verified the presence of the anomaly using the neutron porosity logs. Since the reflection tomography approach requires only surface data, it can provide rapid assessment of bulk hydrologic properties, identify meter-scale anomalies of hydrologic significance, and may provide input for other higher-resolution measurement methods.
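    A minimal sketch of a CRIM-style porosity estimate from a saturated-zone radar velocity; the matrix and water permittivities are typical textbook assumptions, not the calibration used in this study:

      import math

      C = 0.2998  # speed of light in m/ns

      def crim_porosity(v_m_per_ns, eps_matrix=4.6, eps_water=80.0):
          """Invert the CRIM mixing relation for a water-saturated medium,
          sqrt(eps_bulk) = (1 - phi) * sqrt(eps_matrix) + phi * sqrt(eps_water),
          using sqrt(eps_bulk) = c / v."""
          sqrt_eps_bulk = C / v_m_per_ns
          return (sqrt_eps_bulk - math.sqrt(eps_matrix)) / (
              math.sqrt(eps_water) - math.sqrt(eps_matrix))

      # Hypothetical saturated-sediment radar velocity of 0.08 m/ns.
      print(f"estimated porosity: {crim_porosity(0.08):.2f}")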

  16. Comparison of different approaches of estimating effective dose from reported exposure data in 3D imaging with interventional fluoroscopy systems

    NASA Astrophysics Data System (ADS)

    Svalkvist, Angelica; Hansson, Jonny; Bâth, Magnus

    2014-03-01

    Three-dimensional (3D) imaging with interventional fluoroscopy systems is today a common examination. The examination includes acquisition of two-dimensional projection images, which are used to reconstruct section images of the patient. The aim of the present study was to investigate the difference in resulting effective dose obtained using different levels of complexity in the calculation of effective doses from these examinations. In the study, the Siemens Artis Zeego interventional fluoroscopy system (Siemens Medical Solutions, Erlangen, Germany) was used. Images of anthropomorphic chest and pelvis phantoms were acquired. The exposure values obtained were used to calculate the resulting effective doses from the examinations, using the computer software PCXMC (STUK, Helsinki, Finland). The dose calculations were performed using three different methods: 1. using individual exposure values for each projection image, 2. using the mean tube voltage and the total DAP value, evenly distributed over the projection images, and 3. using the mean tube voltage and the total DAP value, evenly distributed over a smaller selection of projection images. The results revealed that the difference in resulting effective dose between the first two methods was smaller than 5%. When only a selection of projection images was used in the dose calculations, the difference increased to over 10%. Given the uncertainties associated with the effective dose concept, the results indicate that dose calculations based on average exposure values distributed over a smaller selection of projection angles can provide reasonably accurate estimations of the radiation doses from 3D imaging using interventional fluoroscopy systems.

  17. Reproducing Electric Field Observations during Magnetic Storms by means of Rigorous 3-D Modelling and Distortion Matrix Co-estimation

    NASA Astrophysics Data System (ADS)

    Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey

    2015-04-01

    Electric fields induced in the conducting Earth during magnetic storms drive currents in power transmission grids, telecommunication lines or buried pipelines. These geomagnetically induced currents (GIC) can cause severe service disruptions. The prediction of GIC is thus of great importance for the public and for industry. A key step in the prediction of the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we developed a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a model of the magnetospheric source. The latter is described by low-degree spherical harmonics; its temporal evolution is derived from observatory magnetic data. Time series of the electric field can be computed for every location on Earth's surface. The actual electric field, however, is known to be perturbed by galvanic effects arising from very local near-surface heterogeneities or topography, which cannot be included in the conductivity model. Galvanic effects are commonly accounted for with a real-valued, time-independent distortion matrix, which linearly relates measured and computed electric fields. Using data from various magnetic storms that occurred between 2000 and 2003, we estimated distortion matrices for observatory sites onshore and on the ocean bottom. Strong correlations between modelled and measured fields validate our method. The distortion matrix estimates prove to be reliable, as they are accurately reproduced for different magnetic storms. We further show that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of electric field time series during magnetic storms. Since the required computational resources are negligible, our approach is suitable for real-time prediction of GIC. For this purpose, a reliable forecast of the source field, e.g. based on data from satellites
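    A minimal sketch of the distortion-matrix step, assuming time series of computed and measured horizontal electric-field components are available; all inputs below are synthetic and the 2 × 2 dimensionality is an assumption for the two horizontal components:

      import numpy as np

      def estimate_distortion_matrix(e_computed, e_measured):
          """Least-squares fit of a real, time-independent 2x2 distortion matrix D
          such that e_measured(t) ~ D @ e_computed(t).
          Both arguments are arrays of shape (n_times, 2)."""
          # Solve e_computed @ D.T ~ e_measured in the least-squares sense.
          D_T, *_ = np.linalg.lstsq(e_computed, e_measured, rcond=None)
          return D_T.T

      # Synthetic example: a known distortion applied to modelled fields plus noise.
      rng = np.random.default_rng(1)
      e_model = rng.standard_normal((500, 2))
      D_true = np.array([[1.3, -0.2], [0.1, 0.8]])
      e_obs = e_model @ D_true.T + 0.05 * rng.standard_normal((500, 2))

      print(estimate_distortion_matrix(e_model, e_obs))  # recovers approximately D_true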

  18. A hybrid 3D-Var data assimilation scheme for joint state and parameter estimation: application to morphodynamic modelling

    NASA Astrophysics Data System (ADS)

    Smith, P.; Nichols, N. K.; Dance, S.

    2011-12-01

    Data assimilation is typically used to provide initial conditions for state estimation; combining model predictions with observational data to produce an updated model state that most accurately characterises the true system state whilst keeping the model parameters fixed. This updated model state is then used to initiate the next model forecast. However, even with perfect initial data, inaccurate representation of model parameters will lead to the growth of model error and therefore affect the ability of our model to accurately predict the true system state. A key question in model development is how to estimate parameters a priori. In most cases, parameter estimation is addressed as a separate issue to state estimation and model calibration is performed offline in a separate calculation. Here we demonstrate how, by employing the technique of state augmentation, it is possible to use data assimilation to estimate uncertain model parameters concurrently with the model state as part of the assimilation process. We present a novel hybrid data assimilation algorithm developed for application to parameter estimation in morphodynamic models. The new approach is based on a computationally inexpensive 3D-Var scheme, where the specification of the covariance matrices is crucial for success. For combined state-parameter estimation, it is particularly important that the cross-covariances between the parameters and the state are given a good a priori specification. Early experiments indicated that in order to yield reliable estimates of the true parameters, a flow dependent representation of the state-parameter cross covariances is required. By combining ideas from 3D-Var and the extended Kalman filter we have developed a novel hybrid assimilation scheme that captures the flow dependent nature of the state-parameter cross covariances without the computational expense of explicitly propagating the full system covariance matrix. We will give details of the formulation of this
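    A minimal sketch of joint state-parameter estimation by state augmentation, using a single linear 3D-Var-style analysis step; the dimensions, covariances and observations below are illustrative assumptions, not the morphodynamic model described in the abstract:

      import numpy as np

      # Augmented background: 3 state components followed by 2 uncertain parameters.
      n_x, n_p = 3, 2
      z_b = np.array([1.0, 0.5, -0.2, 0.8, 0.3])   # hypothetical [state; parameters]

      # Background covariance with non-zero state-parameter cross-covariances
      # (purely illustrative numbers).
      B = 0.1 * np.eye(n_x + n_p)
      B[:n_x, n_x:] = 0.02
      B[n_x:, :n_x] = 0.02

      # Only the state is observed; parameters are updated through the cross terms.
      H = np.hstack([np.eye(n_x), np.zeros((n_x, n_p))])
      R = 0.05 * np.eye(n_x)
      y = np.array([1.2, 0.4, -0.1])               # hypothetical observations

      # Analysis of the augmented vector (the minimiser of the usual quadratic
      # 3D-Var cost function when the observation operator H is linear).
      K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
      z_a = z_b + K @ (y - H @ z_b)

      print("analysed state:     ", z_a[:n_x])
      print("analysed parameters:", z_a[n_x:])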

  19. Effects of social gaze on visual-spatial imagination

    PubMed Central

    Buchanan, Heather; Markson, Lucy; Bertrand, Emma; Greaves, Sian; Parmar, Reena; Paterson, Kevin B.

    2014-01-01

    Previous research suggests that closing one’s eyes or averting one’s gaze from another person can benefit visual-spatial imagination by interrupting cognitive demands associated with face-to-face interaction (Markson and Paterson, 2009). The present study further investigated this influence of social gaze on adults’ visual-spatial imagination, using the matrix task (Kerr, 1987, 1993). Participants mentally kept track of a pathway through an imaginary 2-dimensional (2D) or 3-dimensional (3D) matrix. Concurrent with this task, participants either kept their eyes closed or maintained eye contact with another person, mutual gaze with a person whose eyes were obscured (by wearing dark glasses), or unreciprocated gaze toward the face of a person whose own gaze was averted or whose face was occluded (by placing a paper bag over her head). Performance on the 2D task was poorest in the eye contact condition, and did not differ between the other gaze conditions, which produced ceiling performance. However, the more difficult 3D task revealed clear effects of social gaze. Performance on the 3D task was poorest for eye contact, better for mutual gaze, and equally better still for the unreciprocated gaze and eye-closure conditions. The findings reveal the especially disruptive influence of eye contact on concurrent visual-spatial imagination and a benefit for cognitively demanding tasks of disengaging eye contact during face-to-face interaction. PMID:25071645

  20. Dosimetry in radiotherapy using a-Si EPIDs: Systems, methods, and applications focusing on 3D patient dose estimation

    NASA Astrophysics Data System (ADS)

    McCurdy, B. M. C.

    2013-06-01

    An overview is provided of the use of amorphous silicon electronic portal imaging devices (EPIDs) for dosimetric purposes in radiation therapy, focusing on 3D patient dose estimation. EPIDs were originally developed to provide on-treatment radiological imaging to assist with patient setup, but there has also been a natural interest in using them as dosimeters since they use the megavoltage therapy beam to form images. The current generation of clinically available EPID technology, amorphous-silicon (a-Si) flat panel imagers, possess many characteristics that make them much better suited to dosimetric applications than earlier EPID technologies. Features such as linearity with dose/dose rate, high spatial resolution, real-time capability, minimal optical glare, and digital operation combine with the convenience of a compact, retractable detector system directly mounted on the linear accelerator to provide a system that is well-suited to dosimetric applications. This review will discuss clinically available a-Si EPID systems, highlighting dosimetric characteristics and remaining limitations. Methods for using EPIDs in dosimetry applications will be discussed. Dosimetric applications using a-Si EPIDs to estimate three-dimensional dose in the patient during treatment will be overviewed. Clinics throughout the world are implementing increasingly complex treatments such as dynamic intensity modulated radiation therapy and volumetric modulated arc therapy, as well as specialized treatment techniques using large doses per fraction and short treatment courses (i.e. hypofractionation and stereotactic radiosurgery). These factors drive the continued strong interest in using EPIDs as dosimeters for patient treatment verification.

  1. NavOScan: hassle-free handheld 3D scanning with automatic multi-view registration based on combined optical and inertial pose estimation

    NASA Astrophysics Data System (ADS)

    Munkelt, C.; Kleiner, B.; Thorhallsson, T.; Mendoza, C.; Bräuer-Burchardt, C.; Kühmstedt, P.; Notni, G.

    2013-05-01

    Portable 3D scanners with low measurement uncertainty are ideally suited for capturing the 3D shape of objects right in their natural environment. However, elaborate manual post-processing was usually necessary to build a complete 3D model from several overlapping scans (multiple views), or expensive or complex additional hardware (such as trackers) was needed. In contrast, the NavOScan project[1] aims at fully automatic multi-view 3D scan assembly through a Navigation Unit attached to the scanner. This lightweight device combines an optical tracking system with an inertial measurement unit (IMU) for robust relative scanner position estimation. The IMU provides robustness against swift scanner movements during view changes, while the wide-angle, high dynamic range (HDR) optical tracker, focused on the measurement object and its background, ensures accurate sensor position estimates. The underlying software framework, partly implemented in hardware (FPGA) for performance reasons, fuses both data streams in real time and estimates the navigation unit's current pose. Using this pose to calculate the starting solution of the Iterative Closest Point registration approach allows for automatic registration of multiple 3D scans. After finishing the individual scans required to fully acquire the object in question, the operator is readily presented with its finalized, complete 3D model. The paper presents an overview of the NavOScan architecture, highlights key aspects of the registration and navigation pipeline and shows several measurement examples obtained with the Navigation Unit attached to a hand-held structured-light 3D scanner.
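    A minimal point-to-point ICP sketch illustrating how a pose supplied by an external tracker can serve as the starting solution; this is a generic textbook ICP, not the NavOScan implementation, and all inputs are assumed:

      import numpy as np
      from scipy.spatial import cKDTree

      def icp(source, target, T_init, n_iters=20):
          """Refine a 4x4 initial transform T_init that roughly aligns the (N, 3)
          source cloud to the (M, 3) target cloud, using point-to-point ICP."""
          T = T_init.copy()
          tree = cKDTree(target)
          for _ in range(n_iters):
              src = source @ T[:3, :3].T + T[:3, 3]   # apply current estimate
              _, idx = tree.query(src)                # closest target points
              matched = target[idx]
              # Best-fit rigid transform between the matched sets (Kabsch / SVD).
              mu_s, mu_t = src.mean(0), matched.mean(0)
              U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
              R = Vt.T @ U.T
              if np.linalg.det(R) < 0:                # enforce a proper rotation
                  Vt[-1] *= -1
                  R = Vt.T @ U.T
              step = np.eye(4)
              step[:3, :3], step[:3, 3] = R, mu_t - R @ mu_s
              T = step @ T
          return T

      # Usage sketch: T_init would come from the fused optical/IMU pose estimate.
      # T = icp(scan_points, reference_points, T_init)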

  2. Psychophysical estimation of 3D virtual depth of united, synthesized and mixed type stereograms by means of simultaneous observation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    2007-02-01

    Recently, many types of computer-generated stereograms (CGSs), i.e. various works of art produced using computers, have been published for hobby and entertainment purposes. Activation of the brain, improvement of eyesight, reduction of mental stress, healing effects, etc. are said to be expected when a CGS is properly appreciated as a stereoscopic view. A great deal of information is available on internet web sites concerning all aspects of stereogram history, science, social organization, the various types of stereograms, and free software for generating CGSs. Generally, CGSs are classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each stereogram has advantages and disadvantages when viewed directly with both eyes after training with a little patience. In this study, the characteristics of united, synthesized and mixed type stereograms, the role and composition of the depth map image (DMI), also called the hidden image or picture, and the effect of irregular shifts of the texture pattern image, called wallpaper, are discussed from the viewpoint of the psychophysical estimation of 3D virtual depth and the visual quality of the virtual image by means of simultaneous observation in the case of the parallel viewing method.

  3. Estimation of water saturated permeability of soils, using 3D soil tomographic images and pore-level transport phenomena modelling

    NASA Astrophysics Data System (ADS)

    Lamorski, Krzysztof; Sławiński, Cezary; Barna, Gyöngyi

    2014-05-01

    Saturated permeability and water retention characteristics are important macroscopic properties of soil porous media. They largely determine soil transport processes and are commonly used as parameters in general models of soil transport processes, which are used extensively in scientific work and engineering practice. These characteristics are usually measured or estimated using statistical or phenomenological modelling, i.e. pedotransfer functions. Physically, saturated soil permeability arises from transport processes occurring at the pore level. Current progress in modelling techniques, computational methods and X-ray micro-tomography makes it possible to model pore-level transport processes directly. A physically valid description of transport at the micro-scale, based on a Navier-Stokes type modelling approach, offers the chance to recover macroscopic porous-medium characteristics from micro-flow modelling. Water micro-flow at the pore level depends on the microstructure of the porous body and on interactions between the fluid and the medium. In the case of soils, relatively large pores exist in which water can move easily, but finer pores are also present in which water transport is dominated by strong interactions between the medium and the fluid; a full physical description of these phenomena remains a challenge. Ten samples of different soils were scanned using an X-ray computational microtomograph. The diameter of the samples was 5 mm and the voxel resolution of the CT scan was 2.5 µm. The resulting 3D soil sample images were used to reconstruct the pore space for further modelling. 3D image thresholding was performed to determine the soil grain surface. This surface was triangulated and used to construct the computational mesh for the pore space. Numerical modelling of water flow through the

  4. Protocol for Translabial 3D-Ultrasonography for diagnosing levator defects (TRUDIL): a multicentre cohort study for estimating the diagnostic accuracy of translabial 3D-ultrasonography of the pelvic floor as compared to MR imaging

    PubMed Central

    2011-01-01

    Background Pelvic organ prolapse (POP) is a condition affecting more than half of women above age 40. The estimated lifetime risk of needing surgical management for POP is 11%. In patients undergoing POP surgery of the anterior vaginal wall, the re-operation rate is 30%. The recurrence risk is especially high in women with a levator ani defect. Such a defect is present if there is a partial or complete detachment of the levator ani from the inferior ramus of the symphysis. Detecting levator ani defects is relevant for counseling, and probably also for treatment. Levator ani defects can be imaged with MRI and also with translabial 3D ultrasonography of the pelvic floor. The primary aim of this study is to assess the diagnostic accuracy of translabial 3D ultrasonography for diagnosing levator defects in women with POP, with magnetic resonance imaging as the reference standard. Secondary goals of this study include quantification of the inter-observer agreement on levator ani defects and determining the association between levator defects and recurrent POP after anterior repair. In addition, the cost-effectiveness of adding translabial ultrasonography to the diagnostic work-up in patients with POP will be estimated in a decision analytic model. Methods/Design A multicentre cohort study will be performed in nine Dutch hospitals. 140 consecutive women with POPQ stage 2 or higher anterior vaginal wall prolapse who are indicated for anterior colporrhaphy will be included. Patients undergoing additional prolapse procedures will also be included. Prior to surgery, patients will undergo MR imaging and translabial 3D ultrasound examination of the pelvic floor. Patients will be asked to complete validated disease-specific quality of life questionnaires before surgery and at six and twelve months after surgery. Pelvic examination will be performed at the same time points. Assuming a sensitivity and specificity of 90% of 3D ultrasound for diagnosing levator defects in a

  5. Advances in 3D soil mapping and water content estimation using multi-channel ground-penetrating radar

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.

    2011-12-01

    Multi-channel ground-penetrating radar systems have recently become widely available, thereby opening new possibilities for shallow imaging of the subsurface. One advantage of these systems is that they can significantly reduce survey times by simultaneously collecting multiple lines of GPR reflection data. As a result, it is becoming more practical to complete 3D surveys - particularly in situations where the subsurface undergoes rapid changes, e.g., when monitoring infiltration and redistribution of water in soils. While 3D and 4D surveys can provide a degree of clarity that significantly improves interpretation of the subsurface, an even more powerful feature of the new multi-channel systems for hydrologists is their ability to collect data using multiple antenna offsets. Common mid-point (CMP) surveys have been widely used to estimate radar wave velocities, which can be related to water contents, by sequentially increasing the distance, i.e., offset, between the source and receiver antennas. This process is highly labor intensive using single-channel systems, and therefore such surveys are often only performed at a few locations at any given site. In contrast, with multi-channel GPR systems it is possible to physically arrange an array of antennas at different offsets, such that a CMP-style survey is performed at every point along a radar transect. It is then possible to process these data to obtain detailed maps of wave velocity with a horizontal resolution on the order of centimeters. In this talk I review concepts underlying multi-channel GPR imaging with an emphasis on multi-offset profiling for water content estimation. Numerical simulations are used to provide examples that illustrate situations where multi-offset GPR profiling is likely to be successful, with an emphasis on considering how issues like noise, soil heterogeneity, vertical variations in water content and weak reflection returns affect algorithms for automated analysis of the data. Overall

  7. Estimating 3D variation in active-layer thickness beneath arctic streams using ground-penetrating radar

    USGS Publications Warehouse

    Brosten, T.R.; Bradford, J.H.; McNamara, J.P.; Gooseff, M.N.; Zarnetske, J.P.; Bowden, W.B.; Johnston, M.E.

    2009-01-01

    We acquired three-dimensional (3D) ground-penetrating radar (GPR) data across three stream sites on the North Slope, AK, in August 2005, to investigate the dependence of thaw depth on channel morphology. Data were migrated with mean velocities derived from multi-offset GPR profiles collected across a stream section within each of the 3D survey areas. GPR data interpretations from the alluvial-lined stream site illustrate greater thaw depths beneath riffle and gravel bar features relative to neighboring pool features. The peat-lined stream sites indicate the opposite; greater thaw depths beneath pools and shallower thaw beneath the connecting runs. Results provide detailed 3D geometry of active-layer thaw depths that can support hydrological studies seeking to quantify transport and biogeochemical processes that occur within the hyporheic zone.

  8. Modification of Eccentric Gaze-Holding

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Paloski, W. H.; Somers, J. T.; Leigh, R. J.; Wood, S. J.; Kornilova, L.

    2006-01-01

    The effects of acceleration on gaze stability were examined during centrifugation (+2 Gx and +2 Gz) using a total of 23 subjects. In all of our investigations, eccentric gaze-holding was established by having the subjects acquire an eccentric target (+/-30 degrees horizontal, +/-15 degrees vertical) that was flashed for 750 msec in an otherwise dark room. Subjects were instructed to hold gaze on the remembered position of the flashed target for 20 sec. Immediately following the 20 sec period, subjects were cued to return to the remembered center position and to hold gaze there for an additional 20 sec. Following this 20 sec period, the center target was briefly flashed and the subject made any corrective eye movement back to the true center position. Conventionally, the ability to hold eccentric gaze is estimated by fitting the natural log of centripetal eye drifts by linear regression and calculating the time constant (τc) of these slow phases of "gaze-evoked nystagmus". However, because our normative subjects sometimes showed essentially no drift (τc = ∞), statistical estimation and inference on the effect of target direction was performed on values of the decay constant θ = 1/τc, which we found was well modeled by a gamma distribution. Subjects showed substantial variance in their eye drifts, which were centrifugal in approximately 20% of cases, and in more than 40% of cases for down gaze. Using the ensuing estimated gamma distributions, we were able to conclude that rightward and leftward gaze holding were not significantly different, but that upward gaze holding was significantly worse than downward (p<0.05). We also concluded that vertical gaze holding was significantly worse than horizontal (p<0.05). In the case of left and right roll, we found that both had a similar improvement to horizontal gaze holding (p<0.05), but did not have a significant effect on vertical gaze holding. For pitch tilts, both tilt angles significantly decreased gaze-holding ability
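    A minimal sketch of the conventional fitting step described above (regressing the natural log of the centripetal drift against time to obtain the time constant and its reciprocal, the decay constant); the recording below is synthetic:

      import numpy as np

      def gaze_decay_constant(t, eye_position_deg):
          """Fit ln(eye position) against time by linear regression and return the
          time constant tau and the decay constant theta = 1/tau."""
          slope, _intercept = np.polyfit(t, np.log(eye_position_deg), 1)
          tau = -1.0 / slope
          return tau, 1.0 / tau

      # Synthetic 20 s drift from a 30 degree target with a 25 s time constant.
      t = np.linspace(0.0, 20.0, 200)
      noise = 1.0 + 0.01 * np.random.default_rng(2).standard_normal(t.size)
      position = 30.0 * np.exp(-t / 25.0) * noise

      tau, theta = gaze_decay_constant(t, position)
      print(f"tau ~ {tau:.1f} s, theta ~ {theta:.3f} 1/s")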

  9. Gaze Tracking System for User Wearing Glasses

    PubMed Central

    Gwon, Su Yeong; Cho, Chul Woo; Lee, Hyeon Chang; Lee, Won Oh; Park, Kang Ryoung

    2014-01-01

    Conventional gaze tracking systems are limited in cases where the user is wearing glasses, because the glasses usually produce noise due to reflections caused by the gaze tracker's lights. This makes it difficult to locate the pupil and the specular reflections (SRs) from the cornea of the user's eye. These difficulties increase the likelihood of gaze detection errors, because the gaze position is estimated based on the location of the pupil center and the positions of the corneal SRs. In order to overcome these problems, we propose a new gaze tracking method that can be used by subjects who are wearing glasses. Our research is novel in the following four ways: First, we construct a new control device for the illuminator, which includes four illuminators positioned at the four corners of a monitor. Second, our system automatically determines whether a user is wearing glasses or not in the initial stage by counting the number of white pixels in an image captured using the low exposure setting on the camera. Third, if it is determined that the user is wearing glasses, the four illuminators are turned on and off sequentially in order to obtain an image that has a minimal amount of noise due to reflections from the glasses. As a result, it is possible to avoid the reflections and accurately locate the pupil center and the positions of the four corneal SRs. Fourth, by turning off one of the four illuminators, only three corneal SRs exist in the captured image. Since the proposed gaze detection method requires four corneal SRs for calculating the gaze position, the unseen SR position is estimated based on the parallelogram shape defined by the three SR positions, and the gaze position is then calculated. Experimental results showed that the average gaze detection error for 20 persons was about 0.70° and the processing time was 63.72 ms per frame. PMID:24473283
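    A minimal sketch of the parallelogram completion step, assuming the three detected specular reflections are ordered so that the second point is the corner adjacent to the other two (the ordering and the pixel coordinates are assumptions for illustration):

      import numpy as np

      def estimate_missing_sr(p1, p2, p3):
          """Estimate the unseen corneal SR as the fourth corner of a parallelogram,
          given that p2 is the corner adjacent to both p1 and p3."""
          p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
          return p1 + p3 - p2

      # Hypothetical pixel coordinates of three detected SRs.
      print(estimate_missing_sr((100.0, 80.0), (130.0, 82.0), (128.0, 110.0)))
      # -> [ 98. 108.], the estimated fourth corner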

  10. Mechanistic and quantitative studies of bystander response in 3D tissues for low-dose radiation risk estimations

    SciTech Connect

    Amundson, Sally A.

    2013-06-12

    We have used the MatTek 3-dimensional human skin model to study the gene expression response of a 3D model to low- and high-dose low-LET radiation, and to study the radiation bystander effect as a function of distance from the site of irradiation with either alpha particles or low-LET protons. We have found response pathways that appear to be specific to low-dose exposures and that could not have been predicted from high-dose studies. We also report the time- and distance-dependent expression of a large number of genes in bystander tissue. The bystander response in 3D tissues showed many similarities to that described previously in 2D cultured cells, but also showed some differences.

  11. The 2011 Eco3D Flight Campaign: Vegetation Structure and Biomass Estimation from Simultaneous SAR, Lidar and Radiometer Measurements

    NASA Technical Reports Server (NTRS)

    Fatoyinbo, Temilola; Rincon, Rafael; Harding, David; Gatebe, Charles; Ranson, Kenneth Jon; Sun, Guoqing; Dabney, Phillip; Roman, Miguel

    2012-01-01

    The Eco3D campaign was conducted in the Summer of 2011. As part of the campaign three unique and innovative NASA Goddard Space Flight Center airborne sensors were flown simultaneously: The Digital Beamforming Synthetic Aperture Radar (DBSAR), the Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) and the Cloud Absorption Radiometer (CAR). The campaign covered sites from Quebec to Southern Florida and thereby acquired data over forests ranging from Boreal to tropical wetlands. This paper describes the instruments and sites covered and presents the first images resulting from the campaign.

  12. Potential Geophysical Field Transformations and Combined 3D Modelling for Estimation the Seismic Site Effects on Example of Israel

    NASA Astrophysics Data System (ADS)

    Eppelbaum, Lev; Meirova, Tatiana

    2015-04-01

    It is well-known that local seismic site effects may contribute significantly to the intensity of damage and destruction (e.g., Hough et al., 1990; Regnier et al., 2000; Bonnefoy-Claudet et al., 2006; Haase et al., 2010). The thicknesses of sediments, which play a large role in amplification, are usually derived from seismic velocities. At the same time, sediment thickness may be determined on the basis of 3D combined gravity-magnetic modelling joined with available geological materials, seismic data and borehole section examination. The final result of such an investigation is a 3D physical-geological model (PGM) reflecting the main geological peculiarities of the area under study. Such a combined study requires a reliable 3D mathematical computation algorithm together with an advanced 3D modelling methodology. For this analysis the GSFC software was selected. The GSFC (Geological Space Field Calculation) program was developed for solving the direct 3-D gravity and magnetic prospecting problem under complex geological conditions (Khesin et al., 1996; Eppelbaum and Khesin, 2004). This program has been designed for computing the fields of Δg (Bouguer, free-air or observed value anomalies), ΔZ, ΔX, ΔY, ΔT, as well as second derivatives of the gravitational potential, under conditions of rugged relief and inclined magnetization. The geological space can be approximated by (1) three-dimensional bodies, (2) semi-infinite bodies and (3) bodies infinite along the strike (closed, L.H. non-closed, R.H. non-closed and open). Geological bodies are approximated by horizontal polygonal prisms. The program has the following main advantages (besides the abovementioned ones): (1) simultaneous computing of gravity and magnetic fields; (2) description of the terrain relief by irregularly placed characteristic points; (3) computation of the effect of the earth-air boundary by the method of selection directly in the process of interpretation; (4

  13. Age Estimation in Living Adults using 3D Volume Rendered CT Images of the Sternal Plastron and Lower Chest.

    PubMed

    Oldrini, Guillaume; Harter, Valentin; Witte, Yannick; Martrille, Laurent; Blum, Alain

    2016-01-01

    Age estimation is commonly of interest in a judicial context. In adults, it is less documented than in children. The aim of this study was to evaluate age estimation in adults using CT images of the sternal plastron with the volume rendering technique (VRT). The evaluation criteria are derived from known age estimation methods and are applicable to living or dead subjects. The VRT images of 456 patients were analyzed. Two radiologists performed age estimation independently from an anterior view of the plastron. Interobserver agreement and correlation coefficients between each reader's classification and real age were calculated. The interobserver agreement was 0.86, and the correlation coefficients between the readers' classifications and real age classes were 0.60 and 0.65. The Spearman correlation coefficients were, respectively, 0.89, 0.67, and 0.71. Analysis of the plastron using VRT allows quick in vivo age estimation, with results similar to those of methods such as Iscan, Suchey-Brooks, and radiograph-based approaches used to estimate age at death. PMID:27092960

  14. The 2D versus 3D imaging trade-off: The impact of over- or under-estimating small throats for simulating permeability in porous media

    NASA Astrophysics Data System (ADS)

    Peters, C. A.; Crandell, L. E.; Um, W.; Jones, K. W.; Lindquist, W. B.

    2011-12-01

    Geochemical reactions in the subsurface can alter the porosity and permeability of a porous medium through mineral precipitation and dissolution. While effects on porosity are relatively well understood, changes in permeability are more difficult to estimate. In this work, pore-network modeling is used to estimate the permeability of a porous medium using pore and throat size distributions. These distributions can be determined from 2D Scanning Electron Microscopy (SEM) images of thin sections or from 3D X-ray Computed Tomography (CT) images of small cores. Each method has unique advantages as well as unique sources of error. 3D CT imaging has the advantage of reconstructing a 3D pore network without the inherent geometry-based biases of 2D images but is limited by resolutions around 1 μm. 2D SEM imaging has the advantage of higher resolution, and the ability to examine sub-grain scale variations in porosity and mineralogy, but is limited by the small size of the sample of pores that are quantified. A pore network model was created to estimate flow permeability in a sand-packed experimental column investigating reaction of sediments with caustic radioactive tank wastes in the context of the Hanford, WA site. Before, periodically during, and after reaction, 3D images of the porous medium in the column were produced using the X2B beam line facility at the National Synchrotron Light Source (NSLS) at Brookhaven National Lab. These images were interpreted using 3DMA-Rock to characterize the pore and throat size distributions. After completion of the experiment, the column was sectioned and imaged using 2D SEM in backscattered electron mode. The 2D images were interpreted using erosion-dilation to estimate the pore and throat size distributions. A bias correction was determined by comparison with the 3D image data. A special image processing method was developed to infer the pore space before reaction by digitally removing the precipitate. The different sets of pore

  15. Simultaneous estimation of size, radial and angular locations of a malignant tumor in a 3-D human breast - A numerical study.

    PubMed

    Das, Koushik; Mishra, Subhash C

    2015-08-01

    This article reports a numerical study on the simultaneous estimation of the size, radial location and angular location of a malignant tumor in a 3-D human breast. The breast skin surface temperature profile is specific to a tumor of a given size and location. The temperature profiles are always Gaussian, though their peak magnitudes and areas differ according to the size and location of the tumor. The temperature profiles are obtained by solving the Pennes bioheat equation using the finite-element-based solver COMSOL 4.3a. With the temperature profiles known, the size, radial location and angular location of the tumor are estimated simultaneously using curve fitting. The effect of measurement errors is also included in the study. The estimates are accurate, and since the curve fitting used in the inverse analysis does not require solving the governing bioheat equation, the estimation is very fast. PMID:26267509
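
    The abstract above describes recovering tumor parameters by fitting the Gaussian skin-surface temperature profile. The following is a minimal sketch of that fitting step, not the authors' code: it fits a Gaussian bump to a simulated, noisy surface-temperature profile with scipy, and the parameter names (T0, dT, x0, sigma) and numbers are illustrative assumptions only.
```python
# Hedged sketch: Gaussian curve fitting of a simulated skin-surface temperature profile.
import numpy as np
from scipy.optimize import curve_fit

def surface_temp(x, T0, dT, x0, sigma):
    """Baseline skin temperature plus a Gaussian rise caused by the tumor (toy model)."""
    return T0 + dT * np.exp(-(x - x0) ** 2 / (2.0 * sigma ** 2))

# Synthetic "measured" profile; the noise stands in for measurement error.
x = np.linspace(-0.06, 0.06, 121)                 # position along the surface [m]
true = surface_temp(x, 33.0, 0.8, 0.01, 0.012)    # assumed ground truth
measured = true + np.random.normal(0.0, 0.02, x.size)

popt, pcov = curve_fit(surface_temp, x, measured, p0=[33.0, 0.5, 0.0, 0.02])
perr = np.sqrt(np.diag(pcov))                     # 1-sigma parameter uncertainties
print("estimated [T0, dT, x0, sigma]:", popt)
print("uncertainties:", perr)
```
    The fitted peak magnitude, center and width would then be mapped back to tumor size and location, which is the inverse step the study performs without re-solving the bioheat equation.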

  16. Eye gaze tracking using correlation filters

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Bolme, David; Boehnen, Chris

    2014-03-01

    In this paper, we study a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners. The method is based on the distances between the top point of the eyelid and the eye corners, detected by correlation filters. Advanced correlation filters were found to provide facial landmark detections accurate enough to determine the subject's gaze direction to within approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This corresponds to a circle approximately 2 inches in diameter on a screen at arm's length from the subject. At this accuracy it is possible to determine which regions of text or images the subject is looking at, but it falls short of determining which word the subject has looked at.
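
    The abstract relies on correlation filters to detect the eyelid-top and eye-corner landmarks. The sketch below is an illustrative, self-contained frequency-domain correlation-filter application on a synthetic patch; it is not the authors' implementation, and the toy patch, template and variable names are assumptions.
```python
# Hedged sketch: locate a landmark as the peak response of a correlation filter.
import numpy as np

def correlate(patch, H):
    """Response map from applying a frequency-domain correlation filter H to a patch."""
    P = np.fft.fft2(patch)
    return np.real(np.fft.ifft2(P * np.conj(H)))

def locate(patch, H):
    """(row, col) of the strongest filter response, i.e. the landmark estimate."""
    r = correlate(patch, H)
    return np.unravel_index(np.argmax(r), r.shape)

# Toy demonstration: a 64x64 "eye patch" with a bright pixel standing in for the
# eyelid-top landmark, and a matched filter built from a clean template.
patch = np.zeros((64, 64))
patch[20, 40] = 1.0                       # synthetic landmark location
template = np.zeros((64, 64))
template[0, 0] = 1.0                      # template anchored at the origin
H = np.fft.fft2(template)                 # frequency-domain filter

print("detected landmark at:", locate(patch, H))   # expected (20, 40)
```
    In the method summarized above, the distances between the detected eyelid-top point and the eye corners are then mapped, after calibration, to a gaze direction.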

  17. Eye Gaze Tracking using Correlation Filters

    SciTech Connect

    Karakaya, Mahmut; Boehnen, Chris Bensing; Bolme, David S; Mahallesi, Mevlana; Kayseri, Talas

    2014-01-01

    In this paper, we study a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners. The method is based on the distances between the top point of the eyelid and the eye corners, detected by correlation filters. Advanced correlation filters were found to provide facial landmark detections accurate enough to determine the subject's gaze direction to within approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This corresponds to a circle approximately 2 inches in diameter on a screen at arm's length from the subject. At this accuracy it is possible to determine which regions of text or images the subject is looking at, but it falls short of determining which word the subject has looked at.

  18. Estimating the subsurface temperature of Hessen/Germany based on a GOCAD 3D structural model - a comparison of numerical and geostatistical approaches

    NASA Astrophysics Data System (ADS)

    Rühaak, W.; Bär, K.; Sass, I.

    2012-04-01

    Based on a 3D structural GOCAD model of the German federal state of Hessen, the subsurface temperature distribution is computed. Since subsurface temperature data at greater depths are typically sparse, two different approaches for estimating the spatial subsurface temperature distribution are tested. One approach is the numerical computation of a 3D purely conductive steady-state temperature distribution. This numerical model is based on measured thermal conductivity data for all relevant geological units, together with heat flow measurements and surface temperatures, and is calibrated using continuous temperature logs. Only conductive heat transfer is considered here, as data on convective heat transport at great depth are currently not available. The other approach is 3D ordinary kriging, using a modified formulation in which the quality of the temperature measurements is taken into account. A difficult but important part here is to derive good variograms for the horizontal and vertical directions, as the variograms provide the necessary information about spatial dependence. Both approaches are compared and discussed. Differences are mainly due to convective processes, which are reflected in the interpolation result but not in the numerical model. Therefore, comparing the two results is a good way to obtain information about flow processes at such great depths, allowing an improved understanding of this medium-enthalpy geothermal reservoir (1000-6000 m). Future work will address the removal of the small but, especially for depths up to approximately 1000 m, relevant paleoclimate signal.
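
    For readers unfamiliar with the geostatistical approach named above, here is a minimal ordinary-kriging sketch in 3D. It is an illustration under stated assumptions, not the study's quality-weighted scheme: the exponential variogram and its sill/range values, the borehole coordinates and the temperatures are all made up.
```python
# Hedged sketch: 3D ordinary kriging with an assumed exponential variogram.
import numpy as np

def variogram(h, sill=1.0, range_m=2000.0):
    """Isotropic exponential variogram model (illustrative parameters)."""
    return sill * (1.0 - np.exp(-h / range_m))

def ordinary_kriging(coords, values, target):
    """Predict the value at `target` from scattered 3D observations."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0                                    # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=1))
    sol = np.linalg.solve(A, b)
    weights, mu = sol[:n], sol[n]
    estimate = weights @ values
    variance = b[:n] @ weights + mu                  # ordinary-kriging variance
    return estimate, variance

# Toy usage: temperatures measured in a few boreholes (x, y, depth in metres).
coords = np.array([[0., 0., 500.], [1500., 300., 800.], [400., 2200., 1200.]])
temps = np.array([28.0, 41.0, 55.0])
print(ordinary_kriging(coords, temps, np.array([800., 900., 900.])))
```
    In practice, separate horizontal and vertical variograms (as the abstract emphasizes) replace the single isotropic model used in this sketch.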

  19. Forest Inventory Attribute Estimation Using Airborne Laser Scanning, Aerial Stereo Imagery, Radargrammetry and Interferometry-Finnish Experiences of the 3d Techniques

    NASA Astrophysics Data System (ADS)

    Holopainen, M.; Vastaranta, M.; Karjalainen, M.; Karila, K.; Kaasalainen, S.; Honkavaara, E.; Hyyppä, J.

    2015-03-01

    Three-dimensional (3D) remote sensing has enabled detailed mapping of terrain and vegetation heights. Consequently, forest inventory attributes are increasingly estimated from point clouds and normalized surface models. In practical applications, mainly airborne laser scanning (ALS) has been used in forest resource mapping. ALS-based forest inventories are now widespread, and the popularity of ALS has also raised interest in alternative 3D techniques, both airborne and spaceborne. Point clouds can be generated using photogrammetry, radargrammetry and interferometry: airborne stereo imagery can be used to derive photogrammetric point clouds, while very-high-resolution synthetic aperture radar (SAR) data are used in radargrammetry and interferometry. ALS is capable of mapping both the terrain and tree heights in mixed forest conditions, which is an advantage over aerial images or SAR data. However, in many jurisdictions a detailed ALS-based digital terrain model is already available, which enables linking photogrammetric or SAR-derived heights to heights above the ground. In other words, in forest conditions the height of single trees, the height of the canopy and/or the density of the canopy can be measured and used in the estimation of forest inventory attributes. In this paper, we first review experiences with the use of digital stereo imagery and spaceborne SAR for estimating forest inventory attributes in Finland and compare these techniques to ALS. In addition, we aim to present new implications based on our experiences.

  20. Simultaneous estimation of the 3-D soot temperature and volume fraction distributions in asymmetric flames using high-speed stereoscopic images.

    PubMed

    Huang, Qunxing; Wang, Fei; Yan, Jianhua; Chi, Yong

    2012-05-20

    An inverse radiation analysis using soot emission measured by a high-speed stereoscopic imaging system is described for simultaneous estimation of the 3-D soot temperature and volume fraction distributions in unsteady sooty flames. A new iterative reconstruction method taking self-attenuation into account is developed based on the least squares minimum-residual algorithm. Numerical assessment and experimental measurement results of an ethylene/air diffusion flame show that the proposed method is efficient and capable of reconstructing the soot temperature and volume fraction distributions in unsteady flames. The accuracy is improved when self-attenuation is considered. PMID:22614600

  1. 3d morphometric analysis of lunar impact craters: a tool for degradation estimates and interpretation of maria stratigraphy

    NASA Astrophysics Data System (ADS)

    Vivaldi, Valerio; Massironi, Matteo; Ninfo, Andrea; Cremonese, Gabriele

    2015-04-01

    In this study we have applied 3D morphometric analysis to impact craters on the Moon by means of high-resolution DTMs derived from LROC (Lunar Reconnaissance Orbiter Camera) NAC (Narrow Angle Camera) images (0.5 to 1.5 m/pixel). The objective is twofold: i) evaluating crater degradation and ii) exploring the potential of this approach for Maria stratigraphic interpretation. For the first objective we considered several craters with different diameters, representative of the four classes of degradation, with C1 being the freshest and C4 the most degraded (Arthur et al., 1963; Wilhelms, 1987). DTMs of these craters were processed according to a multiscalar approach (Wood, 1996) by testing different kernel sizes (e.g. 15-35-50-75-100) in order to retrieve morphometric variables such as slope, curvatures and openness. In particular, curvatures were calculated along different planes (e.g. profile curvature and plan curvature) and used to characterize the different sectors of a crater (rim crest, floor, internal slope and related boundaries), enabling us to evaluate its degradation. The gradient of the internal slope of craters representative of the four classes shows a decrease of the mean slope value from C1 to C4 in relation to crater age and diameter. Indeed, degradation is influenced by gravitational processes (landslides, dry flows) as well as space weathering, which induces both smoothing of the morphologies and infilling processes within the crater, with the main results being lowering and enlargement of the rim crest and shallowing of the crater. As far as the stratigraphic application is concerned, morphometric analysis was applied to recognize morphologic features within some simple craters in order to understand the stratigraphic relationships among different lava layers within Mare Serenitatis. A clear-cut rheological boundary at a depth of 200 m within the small fresh Linnè crater (diameter: 2.22 km), firstly hypothesized
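
    As a pointer to how the morphometric variables mentioned above can be derived from a gridded DTM, here is a hedged sketch: slope and a Laplacian-based curvature proxy computed with numpy at one analysis scale. It is not the multiscalar workflow of the study; the smoothing kernel, cell size and the synthetic bowl-shaped "crater" are assumptions.
```python
# Hedged sketch: slope and curvature proxy from a gridded DTM.
import numpy as np
from scipy.ndimage import uniform_filter

def slope_and_curvature(dtm, cell=1.0, kernel=15):
    """Return slope (degrees) and a Laplacian curvature proxy of a DTM."""
    z = uniform_filter(dtm, size=kernel)            # analysis scale (kernel size)
    dz_dy, dz_dx = np.gradient(z, cell)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    d2_dy, _ = np.gradient(dz_dy, cell)             # d2z/dy2
    _, d2_dx = np.gradient(dz_dx, cell)             # d2z/dx2
    curvature = d2_dx + d2_dy                       # Laplacian (sign: convex vs concave)
    return slope, curvature

# Toy usage: a synthetic bowl-shaped depression 200x200 cells across.
y, x = np.mgrid[-100:100, -100:100] * 1.0
dtm = -50.0 * np.exp(-(x**2 + y**2) / (2 * 40.0**2))   # depth in metres
slope, curv = slope_and_curvature(dtm, cell=1.0, kernel=15)
print("mean interior slope [deg]:", slope[np.hypot(x, y) < 60].mean())
```
    Repeating such calculations over a range of kernel sizes is what the multiscalar approach cited in the abstract does to separate rim crest, internal slope and floor.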

  2. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system

    SciTech Connect

    Lee, J.; Yun, G. S. Lee, J. E.; Kim, M.; Choi, M. J.; Lee, W.; Park, H. K.; Domier, C. W.; Luhmann, N. C.; Sabbagh, S. A.; Park, Y. S.; Lee, S. G.; Bak, J. G.

    2014-06-15

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.

  3. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system.

    PubMed

    Lee, J; Yun, G S; Lee, J E; Kim, M; Choi, M J; Lee, W; Park, H K; Domier, C W; Luhmann, N C; Sabbagh, S A; Park, Y S; Lee, S G; Bak, J G

    2014-06-01

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils. PMID:24985817

  4. Gaze Cueing of Attention

    PubMed Central

    Frischen, Alexandra; Bayliss, Andrew P.; Tipper, Steven P.

    2007-01-01

    During social interactions, people’s eyes convey a wealth of information about their direction of attention and their emotional and mental states. This review aims to provide a comprehensive overview of past and current research into the perception of gaze behavior and its effect on the observer. This encompasses the perception of gaze direction and its influence on perception of the other person, as well as gaze-following behavior such as joint attention, in infant, adult, and clinical populations. Particular focus is given to the gaze-cueing paradigm that has been used to investigate the mechanisms of joint attention. The contribution of this paradigm has been significant and will likely continue to advance knowledge across diverse fields within psychology and neuroscience. PMID:17592962

  5. 3D dosimetry estimation for selective internal radiation therapy (SIRT) using SPECT/CT images: a phantom study

    NASA Astrophysics Data System (ADS)

    Debebe, Senait A.; Franquiz, Juan; McGoron, Anthony J.

    2015-03-01

    Selective Internal Radiation Therapy (SIRT) is a common way to treat liver cancer that cannot be treated surgically. SIRT involves administration of Yttrium-90 (90Y) microspheres via the hepatic artery after a diagnostic procedure using 99mTc-macroaggregated albumin (MAA) to detect extrahepatic shunting to the lung or the gastrointestinal tract. Accurate quantification of the radionuclide administered to patients and of the radiation dose absorbed by different organs is important in SIRT. Accurate dosimetry allows optimization of dose delivery to the target tumor and may make it possible to assess the efficacy of the treatment. In this study, we propose a method that can efficiently estimate radiation absorbed dose from 90Y bremsstrahlung SPECT/CT images of the liver and the surrounding organs. Bremsstrahlung radiation from 90Y was simulated using the Compton window of 99mTc (78 keV at 57%). 99mTc images acquired at the photopeak energy window were used as a standard to examine the accuracy of dosimetry prediction from the simulated bremsstrahlung images. A Liqui-Phil abdominal phantom with liver, stomach and two tumor inserts was imaged using a Philips SPECT/CT scanner. The Dose Point Kernel convolution method was used to compute the radiation absorbed dose at the voxel level for a three-dimensional dose distribution. This approach allows a complete estimate of the distribution of radiation absorbed dose in tumors, liver, stomach and other surrounding organs at the voxel level, and provides a quantitative predictive tool for SIRT treatment outcome and administered dose response for patients who undergo the treatment.
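
    The voxel-level dose calculation named above is a convolution of the activity map with a dose point kernel. The sketch below illustrates only that convolution step: the radial kernel is a made-up falloff, not a physically correct 90Y dose kernel, and the volume, units and dose-per-decay factor are assumptions.
```python
# Hedged sketch: dose-point-kernel convolution of a 3D activity map.
import numpy as np
from scipy.signal import fftconvolve

def radial_kernel(size=15, voxel_mm=4.0):
    """Placeholder radially symmetric kernel, normalized to unit sum."""
    c = size // 2
    z, y, x = np.mgrid[:size, :size, :size]
    r = np.sqrt((x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2) * voxel_mm
    k = np.exp(-r / 5.0)              # arbitrary falloff with distance [mm]
    return k / k.sum()

def dose_map(activity, kernel, dose_per_decay=1.0):
    """Absorbed-dose map = activity map convolved with the dose point kernel."""
    return dose_per_decay * fftconvolve(activity, kernel, mode="same")

# Toy usage: a 64^3 activity volume with a hot cubic "tumor" insert.
act = np.zeros((64, 64, 64))
act[28:36, 28:36, 28:36] = 1.0        # arbitrary activity units
dose = dose_map(act, radial_kernel())
print("peak dose:", dose.max(), "mean dose:", dose.mean())
```
    In a real workflow the kernel would be a tabulated 90Y dose point kernel and the activity map would come from the quantitative bremsstrahlung SPECT/CT reconstruction.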

  6. A fast 3D surface reconstruction and volume estimation method for grain storage based on priori model

    NASA Astrophysics Data System (ADS)

    Liang, Xian-hua; Sun, Wei-dong

    2011-06-01

    Inventory checking is one of the most significant tasks for grain reserves and plays a very important role in the macro-control of food supply and food security. A simple, fast and accurate method is needed to obtain internal structure information and to estimate the volume of the stored grain. In our system, a specially designed multi-site laser scanning system is used to acquire range data clouds of the internal structure of the grain storage. Because of the seriously uneven distribution of the range data, these data are first preprocessed by an adaptive re-sampling method to reduce data redundancy as well as noise. The range data are then segmented and useful features, such as plane and cylinder information, are extracted. With these features, a coarse registration between the single-site range data sets is performed, and an Iterative Closest Point (ICP) algorithm is then carried out to achieve fine registration. Taking advantage of the fact that the structure of grain storage facilities is well defined and their types are limited, a fast automatic registration method based on a priori model is proposed to register the multi-site range data more efficiently. After the integration of the multi-site range data, the grain surface is finally reconstructed by a Delaunay-based algorithm and the grain volume is estimated by numerical integration. The proposed method has been applied to two common types of grain storage, and experimental results show that it is effective and accurate and that it avoids the accumulation of errors that occurs when registering overlapping areas pair-wise.

  7. Estimating 3D L5/S1 moments and ground reaction forces during trunk bending using a full-body ambulatory inertial motion capture system.

    PubMed

    Faber, G S; Chang, C C; Kingma, I; Dennerlein, J T; van Dieën, J H

    2016-04-11

    Inertial motion capture (IMC) systems have become increasingly popular for ambulatory movement analysis. However, few studies have attempted to use these measurement techniques to estimate kinetic variables, such as joint moments and ground reaction forces (GRFs). Therefore, we investigated the performance of a full-body ambulatory IMC system in estimating 3D L5/S1 moments and GRFs during symmetric, asymmetric and fast trunk bending, performed by nine male participants. Using an ambulatory IMC system (Xsens/MVN), L5/S1 moments were estimated based on the upper-body segment kinematics using a top-down inverse dynamics analysis, and GRFs were estimated based on full-body segment accelerations. As a reference, a laboratory measurement system was utilized: GRFs were measured with Kistler force plates (FPs), and L5/S1 moments were calculated using a bottom-up inverse dynamics model based on FP data and lower-body kinematics measured with an optical motion capture system (OMC). Correspondence between the OMC+FP and IMC systems was quantified by calculating root-mean-square errors (RMS errors) of moment/force time series and the intraclass correlation coefficient (ICC) of the absolute peak moments/forces. Averaged over subjects, L5/S1 moment RMS errors remained below 10 Nm (about 5% of the peak extension moment) and 3D GRF RMS errors remained below 20 N (about 2% of the peak vertical force). ICCs were high for the peak L5/S1 extension moment (0.971) and vertical GRF (0.998). Due to lower amplitudes, smaller ICCs were found for the peak asymmetric L5/S1 moments (0.690-0.781) and horizontal GRFs (0.559-0.948). In conclusion, close correspondence was found between the ambulatory IMC-based and laboratory-based estimates of back load. PMID:26795123
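
    The two agreement measures used above can be computed as sketched below. This is a generic illustration, not the study's exact statistics: the ICC shown is the two-way, absolute-agreement, single-measure form (ICC(2,1)), which may differ from the variant used in the paper, and the peak-moment numbers are invented.
```python
# Hedged sketch: RMS error of paired time series and ICC(2,1) of peak values.
import numpy as np

def rms_error(reference, estimate):
    return np.sqrt(np.mean((np.asarray(reference) - np.asarray(estimate)) ** 2))

def icc_2_1(data):
    """data: (n_subjects, k_raters) matrix, e.g. peak moments from OMC+FP vs IMC."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * np.sum((data.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((data.mean(axis=0) - grand) ** 2)
    ss_total = np.sum((data - grand) ** 2)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Toy usage: peak L5/S1 extension moments [Nm] from nine hypothetical participants.
ref = np.array([182, 175, 201, 168, 190, 210, 177, 195, 188], dtype=float)
imc = ref + np.random.normal(0, 5, ref.size)     # simulated IMC estimates
print("RMSE:", rms_error(ref, imc), "ICC(2,1):", icc_2_1(np.column_stack([ref, imc])))
```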

  8. Gaze as a biometric

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2014-03-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing still images with different spatial relationships. Specifically, we created 5 visual "dot-pattern" tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.
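
    The personalized-profile idea above (one HMM per user trained on gaze-velocity sequences, identification by the best-scoring model) can be sketched as follows. The use of hmmlearn's GaussianHMM, the number of hidden states and the synthetic gamma-distributed velocity data are all assumptions; this is not the study's model configuration.
```python
# Hedged sketch: per-user HMM gaze-velocity profiles and likelihood-based identification.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_profile(velocity_sequences, n_states=4):
    """Fit one HMM to a user's gaze-velocity sequences (each of shape (T, 1))."""
    X = np.concatenate(velocity_sequences)
    lengths = [len(s) for s in velocity_sequences]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def identify(models, sequence):
    """Return the user id whose profile best explains the new sequence."""
    scores = {user: m.score(sequence) for user, m in models.items()}
    return max(scores, key=scores.get)

# Toy usage with synthetic per-user velocity statistics.
rng = np.random.default_rng(0)
train = {u: [rng.gamma(2.0, 1.0 + u, size=(200, 1)) for _ in range(3)] for u in range(3)}
models = {u: train_profile(seqs) for u, seqs in train.items()}
probe = rng.gamma(2.0, 3.0, size=(200, 1))        # statistically most like user 2
print("identified as user:", identify(models, probe))
```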

  9. Gaze as a biometric

    SciTech Connect

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    2014-01-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing still images with different spatial relationships. Specifically, we created 5 visual dot-pattern tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.

  10. Principal curves for lumen center extraction and flow channel width estimation in 3-D arterial networks: theory, algorithm, and validation.

    PubMed

    Wong, Wilbur C K; So, Ronald W K; Chung, Albert C S

    2012-04-01

    We present an energy-minimization-based framework for locating the centerline and estimating the width of tubelike objects from their structural network with a nonparametric model. The nonparametric representation promotes simple modeling of nested branches and n-way furcations, i.e., structures that abound in an arterial network, e.g., a cerebrovascular circulation. Our method is capable of extracting the entire vascular tree from an angiogram in a single execution with a proper initialization. A succinct initial model from the user with arterial network inlets, outlets, and branching points is sufficient for complex vasculature. The novel method is based upon the theory of principal curves. In this paper, theoretical extension to grayscale angiography is discussed, and an algorithm to find an arterial network as principal curves is also described. Quantitative validation on a number of simulated data sets, synthetic volumes of 19 BrainWeb vascular models, and 32 Rotterdam Coronary Artery volumes was conducted. We compared the algorithm to a state-of-the-art method and further tested it on two clinical data sets. Our algorithmic outputs, lumen centers and flow channel widths, are important to various medical and clinical applications, e.g., vasculature segmentation, registration and visualization, virtual angioscopy, and vascular atlas formation and population study. PMID:22167625

  11. Improving the Accuracy of Estimated 3d Positions Using Multi-Temporal Alos/prism Triplet Images

    NASA Astrophysics Data System (ADS)

    Susaki, J.; Kishimoto, H.

    2015-03-01

    In this paper, we present a method to improve the accuracy of a digital surface model (DSM) by utilizing multi-temporal triplet images. The Advanced Land Observing Satellite (ALOS) / Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) measures triplet images in the forward, nadir, and backward view directions, and a DSM is generated from the obtained set of triplet images. To generate a certain period of DSM, multiple DSMs generated from individual triplet images are compared, and outliers are removed. Our proposed method uses a traditional surveying approach to increase observations and solves multiple observation equations from all triplet images via the bias-corrected rational polynomial coefficient (RPC) model. Experimental results from using five sets of PRISM triplet images taken of the area around Saitama, north of Tokyo, Japan, showed that the average planimetric and height errors in the coordinates estimated from multi-temporal triplet images were 3.26 m and 2.71 m, respectively, and that they were smaller than those generated by using each set of triplet images individually. As a result, we conclude that the proposed method is effective for stably generating accurate DSMs from multi-temporal triplet images.

  12. Scarce water resources and scarce data: Estimating recharge for a complex 3D groundwater flow model in arid regions

    NASA Astrophysics Data System (ADS)

    Gräbe, A. C.; Guttman, J.; Rödiger, T.; Siebert, C.; Merz, R.; Kolditz, O.

    2012-12-01

    Semi-arid to arid regions are usually characterized by scarce precipitation and a lack of stream flow. Especially in desert environments, groundwater is one of the most important fresh water sources, and its recharge is controlled by two main mechanisms: first, the direct regional infiltration of precipitation in the mountains and interdrainage areas, and second, flood water infiltration through ephemeral channel beds (transmission loss). Due to extensive spatio-temporal data scarcity, direct quantitative estimates of groundwater recharge are often difficult to obtain, and numerical models simulating the water fluxes have to be applied to enable a quantitative approximation of groundwater recharge. We made an assumption about the quantity of recharge for the subsurface catchment of the western Dead Sea escarpment, which is at the same time the input for the complex groundwater flow model of the Judea Group Aquifer. This can only be done reliably if the hydrogeological situation in this tectonically complex region is fully understood. A number of simplified models of the Judea Group aquifer have been formulated and employed using two-dimensional (single horizontal layer) numerical simulations of groundwater flow (Baida et al. 1978; Goldschtoff & Shachnai, 1980; Guttman, 2000; Laronne Ben-Itzhak & Gvirtzmann, 2005). However, all previous approaches focused only on a limited area of the Judea Group aquifer. We developed a high-resolution regional groundwater flow model for the entire western basin of the Dead Sea. Whereas the structural model could be defined using a large geological dataset, the challenge was to generate the groundwater flow model with only limited well data. With the help of the scientific software OpenGeoSys (OGS), this challenge was reliably addressed, resulting in a simulation of the hydraulic characteristics (hydraulic conductivity and hydraulic head) of the Cretaceous aquifer system, which was calibrated using PEST.

  13. Visual fatigue evaluation based on depth in 3D videos

    NASA Astrophysics Data System (ADS)

    Wang, Feng-jiao; Sang, Xin-zhu; Liu, Yangdong; Shi, Guo-zhong; Xu, Da-xiong

    2013-08-01

    In recent years, 3D technology has become an emerging industry. However, visual fatigue continues to impede the development of 3D technology. In this paper we propose several factors affecting human perception of depth as new quality metrics. These factors cover three aspects of 3D video: spatial characteristics, temporal characteristics and scene movement characteristics. They play important roles in the viewer's visual perception; if there are many objects moving at a certain velocity and the scene changes quickly, viewers will feel uncomfortable. We propose a new algorithm to calculate the weight values of these factors and analyze their effect on visual fatigue. The MSE (mean square error) of different blocks is computed within frames and between frames of 3D stereoscopic videos. Each depth frame is divided into a number of blocks that overlap and share pixels (by half a block) in the horizontal and vertical directions, so that edge information of objects in the image is not ignored. The distribution of these block values is then characterized by their kurtosis, with emphasis on the regions the human eye is likely to gaze at, and weight values are obtained from the normalized kurtosis. When the method is applied to an individual depth frame, the spatial variation is obtained; when it is applied between the current and previous frames, the temporal variation and scene movement variation are obtained. The three factors are then linearly combined to yield an objective assessment value for 3D videos, with the coefficients of the three factors estimated by linear regression. Finally, experimental results show that the proposed method exhibits high correlation with subjective quality assessment results.
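
    A minimal sketch of the block statistics described above follows: MSE over half-overlapping blocks of depth frames, summarized by kurtosis and squashed into a weight. The block size, the use of scipy's Pearson kurtosis and the normalization are assumptions, not the paper's exact recipe.
```python
# Hedged sketch: block-wise MSE between depth frames and a kurtosis-based weight.
import numpy as np
from scipy.stats import kurtosis

def block_mse(frame_a, frame_b, block=16):
    """MSE of half-overlapping blocks between two depth frames."""
    step = block // 2
    h, w = frame_a.shape
    vals = []
    for r in range(0, h - block + 1, step):
        for c in range(0, w - block + 1, step):
            a = frame_a[r:r + block, c:c + block]
            b = frame_b[r:r + block, c:c + block]
            vals.append(np.mean((a - b) ** 2))
    return np.array(vals)

def factor_weight(block_errors):
    """Kurtosis of the block-error distribution, squashed to a [0, 1) weight."""
    k = kurtosis(block_errors, fisher=False)   # Pearson kurtosis (normal = 3)
    return k / (1.0 + k)

# Toy usage: a temporal factor from two consecutive synthetic depth frames.
rng = np.random.default_rng(1)
depth_t0 = rng.random((144, 176))
depth_t1 = depth_t0 + 0.05 * rng.random((144, 176))
print("temporal weight:", factor_weight(block_mse(depth_t0, depth_t1)))
```
    The paper then combines the spatial, temporal and scene-movement weights linearly, with coefficients fitted by linear regression against subjective scores.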

  14. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography is a further development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected liver findings. The results were compared with the volumes estimated from 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of the 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for 3D imaging of processed data. Considerable differences were found among the three techniques in the estimated volumes of the liver findings. 3D ultrasound is a valuable method for judging the morphological appearance of abdominal findings, and the possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to determine whether definite differentiation between benign and malignant findings is possible. PMID:7919882

  15. Estimation of Effective Transmission Loss Due to Subtropical Hydrometeor Scatters using a 3D Rain Cell Model for Centimeter and Millimeter Wave Applications

    NASA Astrophysics Data System (ADS)

    Ojo, J. S.; Owolawi, P. A.

    2014-12-01

    The problem of hydrometeor scattering on microwave radio communication downlinks continues to be of interest as the number of ground and earth-space terminals continually grows. The interference resulting from hydrometeor scattering usually reduces the signal-to-noise ratio (SNR) at the affected terminal and, in the worst case, can result in total link outage. In this paper, an attempt has been made to compute the effective transmission loss due to subtropical hydrometeors on vertically polarized signals in Earth-satellite propagation paths at Ku-, Ka- and V-band frequencies, based on a modified Capsoni 3D rain cell model. The 3D rain cell model has been adopted and modified using subtropical log-normal raindrop size distributions and by introducing the equivalent path length through rain in the attenuation estimation, instead of the usual specific attenuation, in order to account for the attenuation of both the wanted and unwanted paths to the receiver. Co-channel interference at the same frequency is very prone to a higher amount of unwanted signal at the elevations considered. The importance of joint transmission is also considered.

  16. Three-dimensional (3D) coseismic deformation map produced by the 2014 South Napa Earthquake estimated and modeled by SAR and GPS data integration

    NASA Astrophysics Data System (ADS)

    Polcari, Marco; Albano, Matteo; Fernández, José; Palano, Mimmo; Samsonov, Sergey; Stramondo, Salvatore; Zerbini, Susanna

    2016-04-01

    In this work we present a 3D map of the coseismic displacements due to the 2014 Mw 6.0 South Napa earthquake, California, obtained by integrating displacement information from SAR Interferometry (InSAR), Multiple Aperture Interferometry (MAI), Pixel Offset Tracking (POT) and GPS data acquired by both permanent stations and campaign sites. This seismic event produced significant surface deformation in all three components, causing damage to vineyards, roads and houses. The remote sensing results, i.e. InSAR, MAI and POT, were obtained from a pair of SAR images provided by the Sentinel-1 satellite, launched on April 3rd, 2014. They were acquired on August 7th and 31st along descending orbits with an incidence angle of about 23°. The GPS dataset includes measurements from 32 stations belonging to the Bay Area Regional Deformation Network (BARDN), 301 continuous stations available from the UNAVCO and CDDIS archives, and 13 additional campaign sites from Barnhart et al., 2014 [1]. These data constrain the horizontal and vertical displacement components, proving helpful for the adopted integration method. We exploit Bayesian theory to search for the 3D coseismic displacement components: for each point, we construct an energy function and solve for its global minimum. Experimental results are consistent with a strike-slip fault mechanism with an approximately NW-SE fault plane. Indeed, the 3D displacement map shows a strong North-South (NS) component, peaking at about 15 cm a few kilometers from the epicenter. The East-West (EW) displacement component reaches its maximum (~10 cm) south of the city of Napa, whereas the vertical component (UP) is smaller, although subsidence on the order of 8 cm on the east side of the fault can be observed. Source modelling was performed by inverting the estimated displacement components. The best fitting model is given by a ~N330° E-oriented and ~70° dipping fault with a prevailing

  17. Computer generation and application of 3-D model porous media: From pore-level geostatistics to the estimation of formation factor

    SciTech Connect

    Ioannidis, M.; Kwiecien, M.; Chatzis, I.

    1995-12-31

    This paper describes a new method for the computer generation of 3-D stochastic realizations of porous media using geostatistical information obtained from high-contrast 2-D images of pore casts. The stochastic method yields model porous media with statistical properties identical to those of their real counterparts. Synthetic media obtained in this manner can form the basis for a number of studies related to the detailed characterization of the porous microstructure and, ultimately, the prediction of important petrophysical and reservoir engineering properties. In this context, direct computer estimation of the formation resistivity factor is examined using a discrete random walk algorithm. The dependence of formation factor on measurable statistical properties of the pore space is also investigated.
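
    As an illustration of the random-walk idea mentioned above (and not the paper's algorithm or model media), the sketch below confines blind walkers to the pore phase of a toy 3D binary lattice and compares their long-time diffusivity with the free-space value. The conversion to a formation factor via F ≈ 1 / (porosity × D_pore/D_0) is a commonly used relation and is stated here as an assumption; the lattice, porosity and step counts are made up.
```python
# Hedged sketch: random-walk diffusivity in a toy pore lattice.
import numpy as np

rng = np.random.default_rng(2)
N = 64
pore = rng.random((N, N, N)) < 0.35          # True = pore voxel (toy medium)
porosity = pore.mean()
moves = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])

def msd_confined(steps=2000, walkers=500):
    """Mean-squared displacement of walkers restricted to pore voxels."""
    pore_sites = np.argwhere(pore)
    pos = pore_sites[rng.integers(len(pore_sites), size=walkers)]
    disp = np.zeros((walkers, 3), dtype=int)
    for _ in range(steps):
        step = moves[rng.integers(6, size=walkers)]
        trial = (pos + step) % N                       # periodic lattice lookup
        ok = pore[trial[:, 0], trial[:, 1], trial[:, 2]]
        pos = np.where(ok[:, None], trial, pos)        # blocked moves are rejected
        disp = disp + np.where(ok[:, None], step, 0)
    return np.mean(np.sum(disp ** 2, axis=1))

steps = 2000
d_ratio = msd_confined(steps) / steps        # free-walker MSD on this lattice is `steps`
F = 1.0 / (porosity * d_ratio)               # assumed relation, see lead-in
print(f"porosity={porosity:.3f}  D/D0={d_ratio:.3f}  formation factor~{F:.1f}")
```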

  18. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
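
    The keypoint-matching and frame-screening step described above can be sketched with OpenCV as below. This is an illustration, not the paper's pipeline: the ORB/BFMatcher choice, the match-count threshold, the outlier cull and the file names in the usage comment are all assumptions.
```python
# Hedged sketch: ORB matching between left/right frames and residual vertical disparity.
import cv2
import numpy as np

def vertical_disparity_stats(left_gray, right_gray, min_matches=30):
    orb = cv2.ORB_create(nfeatures=1000)
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)
    if des_l is None or des_r is None:
        return None                                   # keypoint-poor frame: discard
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)
    if len(matches) < min_matches:
        return None                                   # insufficient constellation
    dy = np.array([kp_l[m.queryIdx].pt[1] - kp_r[m.trainIdx].pt[1] for m in matches])
    dy = dy[np.abs(dy - np.median(dy)) < 3 * (np.std(dy) + 1e-6)]   # crude outlier cull
    return float(np.mean(dy)), float(np.std(dy))

# Hypothetical usage (file names are placeholders):
# left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
# right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# print(vertical_disparity_stats(left, right))
```
    A persistent mean vertical disparity across frames indicates a pitch/roll/scale mismatch that a calibration step would then estimate and correct.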

  19. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. The first is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge of how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects into physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multi-faceted virtual museum. Finally, 3D has large potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  20. Bayesian Estimation of 3D Non-planar Fault Geometry and Slip: An application to the 2011 Megathrust (Mw 9.1) Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Dutta, Rishabh; Jónsson, Sigurjón

    2016-04-01

    Earthquake faults are generally considered planar (or of other simple geometry) in earthquake source parameter estimations. However, simplistic fault geometries likely result in biases in estimated slip distributions and increased fault slip uncertainties. In the case of large subduction zone earthquakes, these biases and uncertainties propagate into tsunami waveform modeling and other calculations related to postseismic studies, Coulomb failure stresses, etc. In this research, we parameterize the 3D non-planar fault geometry of the 2011 Tohoku-Oki earthquake (Mw 9.1) and estimate these geometrical parameters along with the fault slip parameters from onland and offshore GPS data using Bayesian inference. The non-planar fault is formed using several 3rd-degree polynomials in the along-strike (X-Y plane) and along-dip (X-Z plane) directions that are tied together using a triangular mesh. The coefficients of these polynomials constitute the fault geometrical parameters. We use the trench and the locations of past seismicity as a priori information to constrain the fault geometrical parameters, and the Laplacian to characterize fault slip smoothness. Hyper-parameters associated with these a priori constraints are estimated empirically, and the posterior probability distribution of the model (fault geometry and slip) parameters is sampled using an adaptive Metropolis-Hastings algorithm. The across-strike uncertainties in the fault geometry (effectively the local fault location) around high-slip patches increase from 6 km at 10 km depth to about 35 km at 50 km depth, whereas around low-slip patches the uncertainties are larger (from 7 km to 70 km). Uncertainties in reverse slip are found to be higher at high-slip patches than at low-slip patches. In addition, there appears to be high correlation between adjacent patches of high slip. Our results demonstrate that we can constrain complex non-planar fault geometry together with fault slip from GPS data using past seismicity as a priori
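
    For readers unfamiliar with the sampler named above, here is a generic random-walk Metropolis-Hastings sketch with a crude proposal-scale adaptation. A toy two-parameter Gaussian posterior stands in for the real fault-geometry-plus-slip posterior; nothing here reproduces the study's model, adaptation rule or priors.
```python
# Hedged sketch: adaptive random-walk Metropolis-Hastings on a toy posterior.
import numpy as np

def log_post(theta):
    """Toy log-posterior: independent zero-mean Gaussians with std 1 and 3."""
    return -0.5 * np.sum((theta / np.array([1.0, 3.0])) ** 2)

def adaptive_mh(theta0, n_iter=20000, target_accept=0.3):
    rng = np.random.default_rng(3)
    theta, lp = np.array(theta0, float), log_post(np.array(theta0, float))
    scale, samples, accepted = 0.5, [], 0
    for i in range(1, n_iter + 1):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:       # Metropolis acceptance test
            theta, lp = prop, lp_prop
            accepted += 1
        samples.append(theta.copy())
        if i % 500 == 0:                              # nudge proposal width toward target rate
            scale *= np.exp(accepted / i - target_accept)
    return np.array(samples)

chain = adaptive_mh([0.0, 0.0])
print("posterior std estimates:", chain[5000:].std(axis=0))   # expected ~[1, 3]
```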

  1. Gaze shifts and fixations dominate gaze behavior of walking cats

    PubMed Central

    Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.

    2014-01-01

    Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656

  2. Mobile gaze tracking system for outdoor walking behavioral studies

    PubMed Central

    Tomasi, Matteo; Pundlik, Shrinivas; Bowers, Alex R.; Peli, Eli; Luo, Gang

    2016-01-01

    Most gaze tracking techniques estimate gaze points on screens, on scene images, or in confined spaces. Tracking of gaze in open-world coordinates, especially in walking situations, has rarely been addressed. We use a head-mounted eye tracker combined with two inertial measurement units (IMU) to track gaze orientation relative to the heading direction in outdoor walking. Head movements relative to the body are measured by the difference in output between the IMUs on the head and body trunk. The use of the IMU pair reduces the impact of environmental interference on each sensor. The system was tested in busy urban areas and allowed drift compensation for long (up to 18 min) gaze recording. Comparison with ground truth revealed an average error of 3.3° while walking straight segments. The range of gaze scanning in walking is frequently larger than the estimation error by about one order of magnitude. Our proposed method was also tested with real cases of natural walking and it was found to be suitable for the evaluation of gaze behaviors in outdoor environments. PMID:26894511
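
    The sensor-combination idea above (head-relative eye direction plus the difference between head and trunk IMU orientations) can be sketched for the horizontal component as below. Angles are in degrees; the function names and the simple wrap-around handling are assumptions, not the published processing pipeline.
```python
# Hedged sketch: horizontal gaze direction relative to the walking heading.
import numpy as np

def wrap180(a):
    """Wrap angles to (-180, 180]."""
    return (np.asarray(a) + 180.0) % 360.0 - 180.0

def gaze_re_heading(eye_in_head_az, head_imu_yaw, trunk_imu_yaw):
    """Gaze azimuth relative to the trunk (heading) direction."""
    head_re_trunk = wrap180(head_imu_yaw - trunk_imu_yaw)   # IMU-pair difference
    return wrap180(eye_in_head_az + head_re_trunk)

# Toy usage: eye tracker reports gaze 5 deg right of the head, head IMU reads
# 100 deg and trunk IMU reads 85 deg, so gaze is about 20 deg off the heading.
print(gaze_re_heading(5.0, 100.0, 85.0))
```
    Differencing the two IMUs, as the abstract notes, also cancels much of the shared environmental interference and drift affecting each sensor.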

  3. Mobile gaze tracking system for outdoor walking behavioral studies.

    PubMed

    Tomasi, Matteo; Pundlik, Shrinivas; Bowers, Alex R; Peli, Eli; Luo, Gang

    2016-01-01

    Most gaze tracking techniques estimate gaze points on screens, on scene images, or in confined spaces. Tracking of gaze in open-world coordinates, especially in walking situations, has rarely been addressed. We use a head-mounted eye tracker combined with two inertial measurement units (IMU) to track gaze orientation relative to the heading direction in outdoor walking. Head movements relative to the body are measured by the difference in output between the IMUs on the head and body trunk. The use of the IMU pair reduces the impact of environmental interference on each sensor. The system was tested in busy urban areas and allowed drift compensation for long (up to 18 min) gaze recording. Comparison with ground truth revealed an average error of 3.3° while walking straight segments. The range of gaze scanning in walking is frequently larger than the estimation error by about one order of magnitude. Our proposed method was also tested with real cases of natural walking and it was found to be suitable for the evaluation of gaze behaviors in outdoor environments. PMID:26894511

  4. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  5. 3D Transient Hydraulic Tomography (3DTHT): An Efficient Field and Modeling Method for High-Resolution Estimation of Aquifer Heterogeneity

    NASA Astrophysics Data System (ADS)

    Barrash, W.; Cardiff, M. A.; Kitanidis, P. K.

    2012-12-01

    The distribution of hydraulic conductivity (K) is a major control on groundwater flow and contaminant transport. Our limited ability to determine 3D heterogeneous distributions of K is a major reason for increased costs and uncertainties associated with virtually all aspects of groundwater contamination management (e.g., site investigations, risk assessments, remediation method selection/design/operation, monitoring system design/operation). Hydraulic tomography (HT) is an emerging method for directly estimating the spatially variable distribution of K - in a similar fashion to medical or geophysical imaging. Here we present results from 3D transient field-scale experiments (3DTHT) which capture the heterogeneous K distribution in a permeable, moderately heterogeneous, coarse fluvial unconfined aquifer at the Boise Hydrogeophysical Research Site (BHRS). The results are verified against high-resolution K profiles from multi-level slug tests at BHRS wells. The 3DTHT field system for well instrumentation and data acquisition/feedback is fully modular and portable, and the in-well packer-and-port system is easily assembled and disassembled without expensive support equipment or need for gas pressurization. Tests are run for 15-20 min and the aquifer is allowed to recover while the pumping equipment is repositioned between tests. The tomographic modeling software developed uses as input observations of temporal drawdown behavior from each of numerous zones isolated in numerous observation wells during a series of pumping tests conducted from numerous isolated intervals in one or more pumping wells. The software solves for distributed K (as well as storage parameters Ss and Sy, if desired) and estimates parameter uncertainties using: a transient 3D unconfined forward model in MODFLOW, the adjoint state method for calculating sensitivities (Clemo 2007), and the quasi-linear geostatistical inverse method (Kitanidis 1995) for the inversion. We solve for K at >100,000 sub-m3

  6. Different scenarios for inverse estimation of soil hydraulic parameters from double-ring infiltrometer data using HYDRUS-2D/3D

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Parisa; Ghorbani-Dashtaki, Shoja; Mosaddeghi, Mohammad Reza; Shirani, Hossein; Nodoushan, Ali Reza Mohammadi

    2016-04-01

    In this study, HYDRUS-2D/3D was used to simulate ponded infiltration through double-ring infiltrometers into a hypothetical loamy soil profile. Twelve scenarios of inverse modelling (divided into three groups) were considered for estimation of Mualem-van Genuchten hydraulic parameters. In the first group, simulation was carried out solely using cumulative infiltration data. In the second group, cumulative infiltration data plus water content at h = -330 cm (field capacity) were used as inputs. In the third group, cumulative infiltration data plus water contents at h = -330 cm (field capacity) and h = -15 000 cm (permanent wilting point) were used simultaneously as predictors. The results showed that numerical inverse modelling of the double-ring infiltrometer data provided a reliable alternative method for determining soil hydraulic parameters. The results also indicated that reducing the number of hydraulic parameters involved in the optimization process reduces the simulation error. The best infiltration simulation was the scenario in which the parameters α, n, and Ks were optimized using the infiltration data and field capacity as inputs. Including field capacity as additional data was important for better optimization/definition of the soil hydraulic functions, but using field capacity and permanent wilting point simultaneously as additional data increased the simulation error.

  7. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

    In this work, we propose a feasible 3D video generation method to enable high-quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a two-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting, followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position the real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation, the extracted foreground is placed in front of the background image that was captured at the initial position. The constructed full view from the initial position, combined with the view from the secondary (current) position, forms the complete binocular pair during real-time video shooting. The subjective evaluation results indicate that the proposed system provides competent depth perception quality.

  8. The Impact of 3D Volume-of-Interest Definition on Accuracy and Precision of Activity Estimation in Quantitative SPECT and Planar Processing Methods

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise, and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT), and planar (QPlanar) processing. Another important effect impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimations. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in the same transaxial plane in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g., in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from −1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ

  9. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo imaging of cancer. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The different proposed PET segmentation strategies were validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printing technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
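
    A hedged sketch of the general recipe (an automatic threshold combined with a k-means background estimate) on synthetic data. The two-cluster background model and the 42%-above-background threshold rule are assumptions for illustration, not the algorithm actually calibrated on the NEMA IQ phantom.

      import numpy as np
      from sklearn.cluster import KMeans

      def estimate_mtv(pet_volume, voxel_ml, frac=0.42):
          """Threshold-based MTV sketch: background from 2-cluster k-means, threshold above it."""
          vals = pet_volume.reshape(-1, 1)
          km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vals)
          background = km.cluster_centers_.min()                       # low-uptake cluster ~ background
          thr = background + frac * (pet_volume.max() - background)    # assumed threshold rule
          mask = pet_volume >= thr
          return mask, mask.sum() * voxel_ml                           # segmentation mask and MTV in mL

      # Toy example: a hot sphere embedded in a noisy background.
      rng = np.random.default_rng(0)
      vol = rng.normal(1.0, 0.1, (32, 32, 32))
      zz, yy, xx = np.mgrid[:32, :32, :32]
      vol[(zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 < 36] += 6.0
      mask, mtv_ml = estimate_mtv(vol, voxel_ml=0.064)
      print(round(mtv_ml, 2), "mL")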

  10. Image free-viewing as intrinsically-motivated exploration: estimating the learnability of center-of-gaze image samples in infants and adults

    PubMed Central

    Schlesinger, Matthew; Amso, Dima

    2013-01-01

    We propose that free viewing of natural images in human infants can be understood and analyzed as the product of intrinsically-motivated visual exploration. We examined this idea by first generating five sets of center-of-gaze (COG) image samples, which were derived by presenting a series of natural images to groups of both real observers (i.e., 9-month-olds and adults) and artificial observers (i.e., an image-saliency model, an image-entropy model, and a random-gaze model). In order to assess the sequential learnability of the COG samples, we paired each group of samples with a simple recurrent network, which was trained to reproduce the corresponding sequence of COG samples. We then asked whether an intrinsically-motivated artificial agent would learn to identify the most successful network. In Simulation 1, the agent was rewarded for selecting the observer group and network with the lowest prediction errors, while in Simulation 2 the agent was rewarded for selecting the observer group and network with the largest rate of improvement. Our prediction was that if visual exploration in infants is intrinsically-motivated—and more specifically, the goal of exploration is to learn to produce sequentially-predictable gaze patterns—then the agent would show a preference for the COG samples produced by the infants over the other four observer groups. The results from both simulations supported our prediction. We conclude by highlighting the implications of our approach for understanding visual development in infants, and discussing how the model can be elaborated and improved. PMID:24198801

  11. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that builds on the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading through the scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interact? What else has to be taken into consideration to communicate in 3D? How should the non-visible relations of moving objects with subjects be handled? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject matter? For whom?

  12. Complementary effects of gaze direction and early saliency in guiding fixations during free viewing.

    PubMed

    Borji, Ali; Parks, Daniel; Itti, Laurent

    2014-01-01

    Gaze direction provides an important and ubiquitous communication channel in daily behavior and social interaction of humans and some animals. While several studies have addressed gaze direction in synthesized simple scenes, few have examined how it can bias observer attention and how it might interact with early saliency during free viewing of natural and realistic scenes. Experiment 1 used a controlled, staged setting in which an actor was asked to look at two different objects in turn, yielding two images that differed only by the actor's gaze direction, to causally assess the effects of actor gaze direction. Over all scenes, the median probability of following an actor's gaze direction was higher than the median probability of looking toward the single most salient location, and higher than chance. Experiment 2 confirmed these findings over a larger set of unconstrained scenes collected from the Web and containing people looking at objects and/or other people. To further compare the strength of saliency versus gaze direction cues, we computed gaze maps by drawing a cone in the direction of gaze of the actors present in the images. Gaze maps predicted observers' fixation locations significantly above chance, although below saliency. Finally, to gauge the relative importance of actor face and eye directions in guiding observer's fixations, in Experiment 3, observers were asked to guess the gaze direction from only an actor's face region (with the rest of the scene masked), in two conditions: actor eyes visible or masked. Median probability of guessing the true gaze direction within ±9° was significantly higher when eyes were visible, suggesting that the eyes contribute significantly to gaze estimation, in addition to face region. Our results highlight that gaze direction is a strong attentional cue in guiding eye movements, complementing low-level saliency cues, and derived from both face and eyes of actors in the scene. Thus gaze direction should be considered
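
    The gaze maps described above can be approximated by rasterizing a cone that opens from the actor's eye position along the annotated gaze direction. A minimal numpy sketch follows; the cone half-width is an assumed parameter (the ±9° figure above refers to the guessing tolerance in Experiment 3, not necessarily to the cone used for the maps).

      import numpy as np

      def gaze_cone_map(shape, eye_xy, gaze_angle_deg, half_width_deg=9.0):
          """Binary gaze map: pixels inside a cone from the eye along the gaze direction."""
          h, w = shape
          yy, xx = np.mgrid[:h, :w]
          dx, dy = xx - eye_xy[0], yy - eye_xy[1]
          ang = np.degrees(np.arctan2(dy, dx))
          diff = (ang - gaze_angle_deg + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
          return (np.abs(diff) <= half_width_deg) & (np.hypot(dx, dy) > 0)

      gmap = gaze_cone_map((480, 640), eye_xy=(200, 150), gaze_angle_deg=30.0)
      print(gmap.mean())   # fraction of the frame covered by the gaze cone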

  13. The yearly rate of Relative Thalamic Atrophy (yrRTA): a simple 2D/3D method for estimating deep gray matter atrophy in Multiple Sclerosis

    PubMed Central

    Menéndez-González, Manuel; Salas-Pacheco, José M.; Arias-Carrión, Oscar

    2014-01-01

    Despite a strong correlation to outcome, the measurement of gray matter (GM) atrophy is not being used in daily clinical practice as a prognostic factor or to monitor the effect of treatments in Multiple Sclerosis (MS). This is mainly because the volumetric methods available to date are sophisticated and difficult to implement for routine use in most hospitals. In addition, the meanings of raw results from volumetric studies on regions of interest are not always easy to understand. Thus, there is a huge need for a methodology suitable for application in daily clinical practice in order to estimate GM atrophy in a convenient and comprehensible way. Given that the thalamus is the brain structure found to be most consistently implicated in MS, both in terms of extent of atrophy and in terms of prognostic value, we propose a solution based on this structure. In particular, we propose to compare the extent of thalamus atrophy with the extent of unspecific, global brain atrophy, represented by ventricular enlargement. We name this ratio the “yearly rate of Relative Thalamic Atrophy” (yrRTA). In this report we aim to describe the concept of yrRTA and the guidelines for computing it under 2D and 3D approaches and explain the rationale behind this method. We have also conducted a very short cross-sectional retrospective study to prove the concept of yrRTA. However, we do not seek to describe here the validity of this parameter, since this research is currently being conducted and the results will be addressed in future publications. PMID:25206331
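
    Read literally, the ratio compares the relative thalamic volume loss between two scans with the relative ventricular enlargement over the same interval, expressed per year. One plausible operationalization (an illustration only; the authors' 2D/3D measurement guidelines are not reproduced here) is

      \mathrm{yrRTA} \;=\; \frac{1}{\Delta t}\cdot
        \frac{\bigl(V_{\mathrm{thal}}(t_0)-V_{\mathrm{thal}}(t_1)\bigr)/V_{\mathrm{thal}}(t_0)}
             {\bigl(V_{\mathrm{vent}}(t_1)-V_{\mathrm{vent}}(t_0)\bigr)/V_{\mathrm{vent}}(t_0)}

    with V_thal and V_vent the thalamic and ventricular volumes (or their 2D surrogates) at two time points separated by Δt years.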

  14. The yearly rate of Relative Thalamic Atrophy (yrRTA): a simple 2D/3D method for estimating deep gray matter atrophy in Multiple Sclerosis.

    PubMed

    Menéndez-González, Manuel; Salas-Pacheco, José M; Arias-Carrión, Oscar

    2014-01-01

    Despite a strong correlation to outcome, the measurement of gray matter (GM) atrophy is not being used in daily clinical practice as a prognostic factor or to monitor the effect of treatments in Multiple Sclerosis (MS). This is mainly because the volumetric methods available to date are sophisticated and difficult to implement for routine use in most hospitals. In addition, the meanings of raw results from volumetric studies on regions of interest are not always easy to understand. Thus, there is a huge need for a methodology suitable for application in daily clinical practice in order to estimate GM atrophy in a convenient and comprehensible way. Given that the thalamus is the brain structure found to be most consistently implicated in MS, both in terms of extent of atrophy and in terms of prognostic value, we propose a solution based on this structure. In particular, we propose to compare the extent of thalamus atrophy with the extent of unspecific, global brain atrophy, represented by ventricular enlargement. We name this ratio the "yearly rate of Relative Thalamic Atrophy" (yrRTA). In this report we aim to describe the concept of yrRTA and the guidelines for computing it under 2D and 3D approaches and explain the rationale behind this method. We have also conducted a very short cross-sectional retrospective study to prove the concept of yrRTA. However, we do not seek to describe here the validity of this parameter, since this research is currently being conducted and the results will be addressed in future publications. PMID:25206331

  15. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  16. TRACE 3-D documentation

    SciTech Connect

    Crandall, K.R.

    1987-08-01

    TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instruction for using the code. Several examples are described in detail.

  17. Eye-head coordination during large gaze shifts.

    PubMed

    Tweed, D; Glenn, B; Vilis, T

    1995-02-01

    1. Three-dimensional (3D) eye and head rotations were measured with the use of the magnetic search coil technique in six healthy human subjects as they made large gaze shifts. The aims of this study were 1) to see whether the kinematic rules that constrain eye and head orientations to two degrees of freedom between saccades also hold during movements; 2) to chart the curvature and looping in eye and head trajectories; and 3) to assess whether the timing and paths of eye and head movements are more compatible with a single gaze error command driving both movements, or with two different feedback loops. 2. Static orientations of the eye and head relative to space are known to resemble the distribution that would be generated by a Fick gimbal (a horizontal axis moving on a fixed vertical axis). We show that gaze point trajectories during eye-head gaze shifts fit the Fick gimbal pattern, with horizontal movements following straight "line of latitude" paths and vertical movements curving like lines of longitude. However, horizontal (and to a lesser extent vertical) movements showed direction-dependent looping, with rightward and leftward (and up and down) saccades tracing slightly different paths. Plots of facing direction (the analogue of gaze direction for the head) also showed the latitude/longitude pattern, without looping. In radial saccades, the gaze point initially moved more vertically than the target direction and then curved; head trajectories were straight. 3. The eye and head components of randomly sequenced gaze shifts were not time locked to one another. The head could start moving at any time from slightly before the eye until 200 ms after, and the standard deviation of this interval could be as large as 80 ms. The head continued moving for a long (up to 400 ms) and highly variable time after the gaze error had fallen to zero. For repeated saccades between the same targets, peak eye and head velocities were directly, but very weakly, correlated; fast eye

  18. Estimating a structural bottle neck for eye-brain transfer of visual information from 3D-volumes of the optic nerve head from a commercial OCT device

    NASA Astrophysics Data System (ADS)

    Malmberg, Filip; Sandberg-Melin, Camilla; Söderberg, Per G.

    2016-03-01

    The aim of this project was to investigate the possibility of using OCT optic nerve head 3D information captured with a Topcon OCT 2000 device for detection of the shortest distance between the inner limit of the retina and the central limit of the pigment epithelium around the circumference of the optic nerve head. The shortest distance between these boundaries reflects the nerve fiber layer thickness and measurement of this distance is interesting for follow-up of glaucoma.

  19. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry include greater tissue equivalence (radiologically) and the lack of requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  20. Bootstrapping 3D fermions

    NASA Astrophysics Data System (ADS)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  1. 3D microscope

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2008-02-01

    In order to circumvent the fact that only one observer can view the image from a stereoscopic microscope, an attachment was devised for displaying the 3D microscopic image on a large LCD monitor for viewing by multiple observers in real time. The principle of operation, design, fabrication, and performance are presented, along with tolerance measurements relating to the properties of the cellophane half-wave plate used in the design.

  2. Estimation of three-dimensional knee joint movement using bi-plane x-ray fluoroscopy and 3D-CT

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Fujita, Satoshi; Kohno, Takahiro; Suzuki, Masahiko; Miyagi, Jin; Moriya, Hideshige

    2005-04-01

    Acquisition of exact information on three-dimensional knee joint movement is desired in plastic surgery. Conventional X-ray fluoroscopy provides dynamic but only two-dimensional projected images. On the other hand, three-dimensional CT provides three-dimensional but only static images. In this paper, a method for acquiring three-dimensional knee joint movement using both bi-plane dynamic X-ray fluoroscopy and static three-dimensional CT is proposed. The basic idea is the use of 2D/3D registration based on digitally reconstructed radiographs (DRR), i.e., virtual projections of the CT data. The idea itself is not new, but the application of bi-plane fluoroscopy to the natural bones of the knee is reported for the first time. The technique was applied to two volunteers and successful results were obtained. An accuracy evaluation through computer simulation and a phantom experiment with a knee joint of a pig was also conducted.
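
    The pose search at the heart of 2D/3D registration can be illustrated with a much-simplified sketch: instead of matching DRR intensities to the fluoroscopic images, the toy below fits six rigid-pose parameters so that pinhole projections of CT-frame points match their observed 2D locations. The camera model, point set, and noise level are assumptions; the actual method optimizes an intensity-similarity measure against bi-plane fluoroscopy.

      import numpy as np
      from scipy.spatial.transform import Rotation
      from scipy.optimize import least_squares

      def project(points_3d, rx, ry, rz, tx, ty, tz, focal=1000.0):
          """Rigidly transform CT-frame points and apply a simple pinhole projection."""
          R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
          p = points_3d @ R.T + np.array([tx, ty, tz])
          return focal * p[:, :2] / p[:, 2:3]

      def residuals(pose, pts_ct, obs_2d):
          return (project(pts_ct, *pose) - obs_2d).ravel()

      # Toy data: recover a known pose from noisy projections.
      rng = np.random.default_rng(1)
      pts = rng.uniform(-50, 50, (30, 3)) + np.array([0.0, 0.0, 500.0])
      true_pose = [0.05, -0.03, 0.1, 5.0, -3.0, 20.0]
      obs = project(pts, *true_pose) + rng.normal(0, 0.2, (30, 2))
      fit = least_squares(residuals, x0=np.zeros(6), args=(pts, obs))
      print(np.round(fit.x, 3))   # should approximate true_pose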

  3. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    PubMed

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication. PMID:24918751

  4. New perspectives in gaze sensitivity research.

    PubMed

    Davidson, Gabrielle L; Clayton, Nicola S

    2016-03-01

    Attending to where others are looking is thought to be of great adaptive benefit for animals when avoiding predators and interacting with group members. Many animals have been reported to respond to the gaze of others, by co-orienting their gaze with group members (gaze following) and/or responding fearfully to the gaze of predators or competitors (i.e., gaze aversion). Much of the literature has focused on the cognitive underpinnings of gaze sensitivity, namely whether animals have an understanding of the attention and visual perspectives in others. Yet there remain several unanswered questions regarding how animals learn to follow or avoid gaze and how experience may influence their behavioral responses. Many studies on the ontogeny of gaze sensitivity have shed light on how and when gaze abilities emerge and change across development, indicating the necessity to explore gaze sensitivity when animals are exposed to additional information from their environment as adults. Gaze aversion may be dependent upon experience and proximity to different predator types, other cues of predation risk, and the salience of gaze cues. Gaze following in the context of information transfer within social groups may also be dependent upon experience with group-members; therefore we propose novel means to explore the degree to which animals respond to gaze in a flexible manner, namely by inhibiting or enhancing gaze following responses. We hope this review will stimulate gaze sensitivity research to expand beyond the narrow scope of investigating underlying cognitive mechanisms, and to explore how gaze cues may function to communicate information other than attention. PMID:26582567

  5. Geodesy-based estimates of loading rates on faults beneath the Los Angeles basin with a new, computationally efficient method to model dislocations in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Rollins, C.; Argus, D. F.; Avouac, J. P.; Landry, W.; Barbot, S.

    2015-12-01

    North-south compression across the Los Angeles basin is accommodated by slip on thrust faults beneath the basin that may present significant seismic hazard to Los Angeles. Previous geodesy-based efforts to constrain the distributions and rates of elastic strain accumulation on these faults [Argus et al 2005, 2012] have found that the elastic model used has a first-order impact on the inferred distribution of locking and creep, underlining the need to accurately incorporate the laterally heterogeneous elastic structure and complex fault geometries of the Los Angeles basin into this analysis. We are using Gamra [Landry and Barbot, in prep.], a newly developed adaptive-meshing finite-difference solver, to compute elastostatic Green's functions that incorporate the full 3D regional elastic structure provided by the SCEC Community Velocity Model. Among preliminary results from benchmarks, forward models and inversions, we find that: 1) for a modeled creep source on the edge dislocation geometry from Argus et al [2005], the use of the SCEC CVM material model produces surface velocities in the hanging wall that are up to ~50% faster than those predicted in an elastic halfspace model; 2) in sensitivity-modulated inversions of the Argus et al [2005] GPS velocity field for slip on the same dislocation source, the use of the CVM deepens the inferred locking depth by ≥3 km compared to an elastic halfspace model; 3) when using finite-difference or finite-element models with Dirichlet boundary conditions (except for the free surface) for problems of this scale, it is necessary to set the boundaries at least ~100 km away from any slip source or data point to guarantee convergence within 5% of analytical solutions (a result which may be applicable to other static dislocation modeling problems and which may scale with the size of the area of interest). Here we will present finalized results from inversions of an updated GPS velocity field [Argus et al, AGU 2015] for the inferred

  6. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  7. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  8. Design of a 3D Navigation Technique Supporting VR Interaction

    NASA Astrophysics Data System (ADS)

    Boudoin, Pierre; Otmane, Samir; Mallem, Malik

    2008-06-01

    Multimodality is a powerful paradigm to increase the realism and ease of interaction in Virtual Environments (VEs). In particular, the search for new metaphors and techniques for 3D interaction adapted to the navigation task is an important stage for the realization of future 3D interaction systems that support multimodality, in order to increase efficiency and usability. In this paper we propose a new multimodal 3D interaction model called Fly Over. This model is especially devoted to the navigation task. We present a qualitative comparison between Fly Over and a classical navigation technique called gaze-directed steering. The results from a preliminary evaluation on the IBISC semi-immersive Virtual Reality/Augmented Reality EVR@ platform show that Fly Over is a user-friendly and efficient navigation technique.

  9. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  10. Examination of 3D visual attention in stereoscopic video content

    NASA Astrophysics Data System (ADS)

    Huynh-Thu, Quan; Schiatti, Luca

    2011-03-01

    Recent advances in video technology and digital cinema have made it possible to produce entertaining 3D stereoscopic content that can be viewed for an extended duration without necessarily causing extreme fatigue, visual strain and discomfort. Viewers focus naturally their attention on specific areas of interest in their visual field. Visual attention is an important aspect of perception and its understanding is therefore an important aspect for the creation of 3D stereoscopic content. Most of the studies on visual attention have focused on the case of still images or 2D video. Only a very few studies have investigated eye movement patterns in 3D stereoscopic moving sequences, and how these may differ from viewing 2D video content. In this paper, we present and discuss the results of a subjective experiment that we conducted using an eye-tracking apparatus to record observers' gaze patterns. Participants were asked to watch the same set of video clips in a free-viewing task. Each clip was shown in a 3D stereoscopic version and 2D version. Our results indicate that the extent of areas of interests is not necessarily wider in 3D. We found a very strong content dependency in the difference of density and locations of fixations between 2D and 3D stereoscopic content. However, we found that saccades were overall faster and that fixation durations were overall lower when observers viewed the 3D stereoscopic version.

  11. Estimating subthreshold tumor on MRI using a 3D-DTI growth model for GBM: An adjunct to radiation therapy planning.

    PubMed

    Hathout, Leith; Patel, Vishal

    2016-08-01

    Mathematical modeling and serial magnetic resonance imaging (MRI) used to calculate patient-specific rates of tumor diffusion, D, and proliferation, ρ, can be combined to simulate glioblastoma multiforme (GBM) growth. We showed that the proportion and distribution of tumor cells below the MRI threshold are determined by the D/ρ ratio of the tumor. As most radiation fields incorporate a 1‑3 cm margin to account for subthreshold tumor, accurate characterization of subthreshold tumor aids the design of optimal radiation fields. This study compared two models: a standard one‑dimensional (1D) isotropic model and a three‑dimensional (3D) anisotropic model using the advanced imaging method of diffusion tensor imaging (DTI) ‑ with regards to the D/ρ ratio's effect on the proportion and spatial extent of the subthreshold tumor. A validated reaction‑diffusion equation accounting for tumor diffusion and proliferation modeled tumor concentration in time and space. For the isotropic and anisotropic models, nine tumors with different D/ρ ratios were grown to a T1 radius of 1.5 cm. For each tumor, the percent and extent of tumor cells beyond the T2 radius were calculated. For both models, higher D/ρ ratios were correlated with a greater proportion and extent of subthreshold tumor. Anisotropic modeling demonstrated a higher proportion and extent of subthreshold tumor than predicted by the isotropic modeling. Because the quantity and distribution of subthreshold tumor depended on the D/ρ ratio, this ratio should influence radiation field demarcation. Furthermore, the use of DTI data to account for anisotropic tumor growth allows for more refined characterization of the subthreshold tumor based on the patient-specific D/ρ ratio. PMID:27374420
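
    The reaction-diffusion model referred to above is the Fisher-Kolmogorov equation, ∂c/∂t = ∇·(D∇c) + ρc(1 − c). A 1D isotropic finite-difference sketch is shown below; D, ρ, the detection threshold, and the simulated duration are assumed illustrative values, not patient-specific estimates, and the anisotropic DTI-driven variant replaces the scalar D with a tensor field.

      import numpy as np

      # 1D Fisher-Kolmogorov sketch: dc/dt = D d2c/dx2 + rho * c * (1 - c)
      D, rho = 0.01, 0.02           # assumed diffusion (cm^2/day) and proliferation (1/day)
      dx, dt, nx = 0.1, 0.05, 400   # grid spacing (cm), time step (day), number of grid cells
      c = np.zeros(nx)
      c[nx // 2] = 1.0              # seed the tumor at the center of the domain

      for _ in range(int(300 / dt)):                               # grow for ~300 simulated days
          lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx ** 2
          c = np.clip(c + dt * (D * lap + rho * c * (1.0 - c)), 0.0, 1.0)

      detect_thr = 0.8              # hypothetical imaging detection threshold (fraction of capacity)
      visible = np.where(c >= detect_thr)[0]
      print("visible radius (cm):", 0.5 * (visible[-1] - visible[0]) * dx)
      print("subthreshold cell fraction:", 1.0 - c[visible].sum() / c.sum())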

  12. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for (99m)Tc-hynic-Tyr(3)-octreotide Imaging.

    PubMed

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of (99m)Tc-hydrazinonicotinamide (hynic)-Tyr(3)-octreotide as a SPECT radiotracer. (99m)Tc patient-specific S values and the absorbed doses were calculated with GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of (99m)hynic-Tyr(3)-octreotide. The patient-specific S values calculated by GATE Monte Carlo code and the corresponding S values obtained by MIRDOSE program differed within 4.3% on average for self-irradiation, and differed within 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by GATE code and MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used since it provides more reliable dosimetric results. PMID:27134562
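
    The organ-dose bookkeeping behind the S-value comparison is the MIRD formalism: the dose to a target organ is the sum over source organs of the cumulated activity times the corresponding S value, D(target) = Σ_source Ã(source)·S(target ← source). A tiny Python illustration with made-up numbers (not the paper's patient data):

      import numpy as np

      organs = ["liver", "kidneys", "spleen"]
      A_tilde = np.array([1.2e5, 4.0e4, 2.5e4])        # cumulated activity per source organ (MBq*s), assumed
      S = np.array([[3.0e-6, 2.0e-7, 1.5e-7],          # S[target, source] in mGy per MBq*s, assumed
                    [2.0e-7, 8.0e-6, 3.0e-7],
                    [1.5e-7, 3.0e-7, 9.0e-6]])
      dose = S @ A_tilde                               # absorbed dose per target organ
      for name, d in zip(organs, dose):
          print(f"{name}: {d:.2f} mGy")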

  13. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for 99mTc-hynic-Tyr3-octreotide Imaging

    PubMed Central

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of 99mTc-hydrazinonicotinamide (hynic)-Tyr3-octreotide as a SPECT radiotracer. 99mTc patient-specific S values and the absorbed doses were calculated with GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of 99mhynic-Tyr3-octreotide. The patient-specific S values calculated by GATE Monte Carlo code and the corresponding S values obtained by MIRDOSE program differed within 4.3% on average for self-irradiation, and differed within 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by GATE code and MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used since it provides more reliable dosimetric results. PMID:27134562

  14. Eye Gaze in Creative Sign Language

    ERIC Educational Resources Information Center

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  15. Observing Shared Attention Modulates Gaze Following

    ERIC Educational Resources Information Center

    Bockler, Anne; Knoblich, Gunther; Sebanz, Natalie

    2011-01-01

    Humans' tendency to follow others' gaze is considered to be rather resistant to top-down influences. However, recent evidence indicates that gaze following depends on prior eye contact with the observed agent. Does observing two people engaging in eye contact also modulate gaze following? Participants observed two faces looking at each other or…

  16. Teachers' Responses to Children's Eye Gaze

    ERIC Educational Resources Information Center

    Doherty-Sneddon, Gwyneth; Phelps, Fiona G.

    2007-01-01

    When asked questions, children often avert their gaze. Furthermore, the frequency of such gaze aversion (GA) is related to the difficulty of cognitive processing, suggesting that GA is a good indicator of children's thinking and comprehension. However, little is known about how teachers detect and interpret such gaze signals. In Study 1 teaching…

  17. 3D polarimetric purity

    NASA Astrophysics Data System (ADS)

    Gil, José J.; San José, Ignacio

    2010-11-01

    From our previous definition of the indices of polarimetric purity for 3D light beams [J.J. Gil, J.M. Correas, P.A. Melero and C. Ferreira, Monogr. Semin. Mat. G. de Galdeano 31, 161 (2004)], an analysis of their geometric and physical interpretation is presented. It is found that, in agreement with previous results, the first parameter is a measure of the degree of polarization, whereas the second parameter (called the degree of directionality) is a measure of the mean angular aperture of the direction of propagation of the corresponding light beam. This pair of invariant, non-dimensional, indices of polarimetric purity contains complete information about the polarimetric purity of a light beam. The overall degree of polarimetric purity is obtained as a weighted quadratic average of the degree of polarization and the degree of directionality.
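
    Both indices and the overall degree of polarimetric purity can be computed directly from the eigenvalues of the 3x3 coherency matrix; with unit trace and eigenvalues sorted in descending order, the weighted quadratic average mentioned above takes the form P_Δ² = (3P1² + P2²)/4. A short numpy sketch (the example matrix is arbitrary):

      import numpy as np

      def purity_indices(R):
          """Indices of polarimetric purity of a 3x3 coherency (polarization) matrix R."""
          lam = np.sort(np.linalg.eigvalsh(R))[::-1]      # eigenvalues, descending
          lam = lam / lam.sum()                           # normalize to unit trace
          P1 = lam[0] - lam[1]                            # first index (measure of the degree of polarization)
          P2 = lam[0] + lam[1] - 2 * lam[2]               # second index (degree of directionality)
          P_delta = np.sqrt((3 * P1 ** 2 + P2 ** 2) / 4)  # overall degree of polarimetric purity
          return P1, P2, P_delta

      R = np.diag([0.7, 0.2, 0.1])                        # arbitrary example coherency matrix (unit trace)
      print(purity_indices(R))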

  18. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r, θ, z) inside the magnet bore. The same conductor geometry that is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  19. 'Bonneville' in 3-D!

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called 'Bonneville' after driving approximately 13 meters (42.7 feet) to get a better vantage point. Spirit's current position is close enough to the edge to see the interior of the crater, but high enough and far enough back to get a view of all of the walls. Because scientists and rover controllers are so pleased with this location, they will stay here for at least two more martian days, or sols, to take high resolution panoramic camera images of 'Bonneville' in its entirety. Just above the far crater rim, on the left side, is the rover's heatshield, which is visible as a tiny reflective speck.

  20. A 3D interactive method for estimating body segmental parameters in animals: application to the turning and running performance of Tyrannosaurus rex.

    PubMed

    Hutchinson, John R; Ng-Thow-Hing, Victor; Anderson, Frank C

    2007-06-21

    We developed a method based on interactive B-spline solids for estimating and visualizing biomechanically important parameters for animal body segments. Although the method is most useful for assessing the importance of unknowns in extinct animals, such as body contours, muscle bulk, or inertial parameters, it is also useful for non-invasive measurement of segmental dimensions in extant animals. Points measured directly from bodies or skeletons are digitized and visualized on a computer, and then a B-spline solid is fitted to enclose these points, allowing quantification of segment dimensions. The method is computationally fast enough so that software implementations can interactively deform the shape of body segments (by warping the solid) or adjust the shape quantitatively (e.g., expanding the solid boundary by some percentage or a specific distance beyond measured skeletal coordinates). As the shape changes, the resulting changes in segment mass, center of mass (CM), and moments of inertia can be recomputed immediately. Volumes of reduced or increased density can be embedded to represent lungs, bones, or other structures within the body. The method was validated by reconstructing an ostrich body from a fleshed and defleshed carcass and comparing the estimated dimensions to empirically measured values from the original carcass. We then used the method to calculate the segmental masses, centers of mass, and moments of inertia for an adult Tyrannosaurus rex, with measurements taken directly from a complete skeleton. We compare these results to other estimates, using the model to compute the sensitivities of unknown parameter values based upon 30 different combinations of trunk, lung and air sac, and hindlimb dimensions. The conclusion that T. rex was not an exceptionally fast runner remains strongly supported by our models-the main area of ambiguity for estimating running ability seems to be estimating fascicle lengths, not body dimensions. Additionally, the
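
    A crude voxel-based analogue of the segment-parameter computation: discretize a uniform-density ellipsoidal "trunk" and accumulate mass, center of mass, and one moment of inertia. The B-spline solids, embedded low-density volumes (lungs, air sacs), and interactive warping of the actual method are not reproduced; the dimensions and density below are assumptions.

      import numpy as np

      density = 1000.0                     # kg/m^3, assumed flesh density
      a, b, c = 0.6, 0.25, 0.25            # semi-axes of a trunk-like ellipsoid (m), assumed
      dx = 0.01                            # voxel edge length (m)
      x, y, z = np.mgrid[-a:a:dx, -b:b:dx, -c:c:dx]
      inside = (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2 <= 1.0
      dm = density * dx ** 3 * inside      # voxel masses (zero outside the solid)

      mass = dm.sum()
      cm = np.array([(x * dm).sum(), (y * dm).sum(), (z * dm).sum()]) / mass
      Ixx = ((y ** 2 + z ** 2) * dm).sum() # moment of inertia about the x-axis through the origin
      print(round(mass, 1), "kg; CM:", np.round(cm, 3), "; Ixx:", round(Ixx, 2), "kg m^2")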

  1. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialized values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registration of disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and to estimate standardized uptake values (SUVs) of region of interests (ROIs) in PET images. Therefore, simulation studies are conducted to apply spherical targets to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex by the K-Means method and the proposed method by volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than those by the K-Means method. PMID:22948355
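
    One plausible reading of the hybrid GMM-plus-kernel-density idea, sketched with SciPy and scikit-learn on synthetic intensities: a kernel density estimate of the intensity distribution suggests the number of mixture components, sidestepping the prior cluster-count choice that K-Means requires. This is an assumption-laden illustration, not the authors' implementation.

      import numpy as np
      from scipy.signal import argrelmax
      from scipy.stats import gaussian_kde
      from sklearn.mixture import GaussianMixture

      def segment_intensities(img, grid_points=256):
          """Label voxels with a GMM whose component count comes from KDE peak counting."""
          vals = img.ravel()
          kde = gaussian_kde(vals)
          grid = np.linspace(vals.min(), vals.max(), grid_points)
          n_comp = max(len(argrelmax(kde(grid))[0]), 2)        # at least background + foreground
          gmm = GaussianMixture(n_components=n_comp, random_state=0).fit(vals.reshape(-1, 1))
          return gmm.predict(vals.reshape(-1, 1)).reshape(img.shape), n_comp

      rng = np.random.default_rng(0)
      img = np.concatenate([rng.normal(1, 0.2, 3000), rng.normal(5, 0.5, 500)]).reshape(70, 50)
      labels, k = segment_intensities(img)
      print("components:", k)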

  2. FUN3D Manual: 12.7

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  3. FUN3D Manual: 12.9

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  4. FUN3D Manual: 13.0

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  5. FUN3D Manual: 12.8

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  6. FUN3D Manual: 12.4

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  7. FUN3D Manual: 12.5

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  8. FUN3D Manual: 12.6

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  9. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    SciTech Connect

    Mishra, Pankaj Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.; Li, Ruijiang

    2014-08-15

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) Eight digital eXtended CArdiac-Torso phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is (0.73 ± 0.63 mm) for the XCAT data and (0.90 ± 0.65 mm) for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model
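
    The PCA motion model reduces to simple linear algebra: the DVFs are decomposed into a mean plus a few eigenvector modes, and a new volumetric estimate follows from updated eigen-coefficients. A toy numpy sketch with synthetic DVFs is shown below; the cost-function update driven by the cine EPID image is replaced by an arbitrary coefficient change.

      import numpy as np

      # Rows are flattened DVFs for the phases of a synthetic 4DCT breathing cycle.
      rng = np.random.default_rng(0)
      n_phase, n_dvf = 10, 3000                    # 10 phases, 1000 voxels x 3 displacement components
      phase = np.linspace(0, 2 * np.pi, n_phase, endpoint=False)
      basis = rng.normal(size=(2, n_dvf))          # two latent spatial motion patterns (synthetic)
      dvfs = np.outer(np.sin(phase), basis[0]) + np.outer(0.3 * np.cos(phase), basis[1])

      mean_dvf = dvfs.mean(axis=0)
      U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
      eigenvectors = Vt[:2]                                # keep the two dominant modes
      coeffs = (dvfs - mean_dvf) @ eigenvectors.T          # eigen-coefficients per phase

      # Stand-in for the EPID-driven tuning step: with updated coefficients, the new DVF
      # (and hence the updated volumetric image) is a linear reconstruction.
      new_coeffs = coeffs[3] * 1.1                         # assumed coefficient update
      updated_dvf = mean_dvf + new_coeffs @ eigenvectors
      print(updated_dvf.shape)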

  10. Estimation of the maximum allowable loading amount of COD in Luoyuan Bay by a 3-D COD transport and transformation model

    NASA Astrophysics Data System (ADS)

    Wu, Jialin; Li, Keqiang; Shi, Xiaoyong; Liang, Shengkang; Han, Xiurong; Ma, Qimin; Wang, Xiulin

    2014-08-01

    The rapid economic and social developments in the Luoyuan and Lianjiang counties of Fujian Province, China, raise certain environmental and ecosystem issues. The unusual phytoplankton bloom and eutrophication, for example, have increased in severity in Luoyuan Bay (LB). The constant increase of nutrient loads has largely caused the environmental degradation in LB. Several countermeasures have been implemented to solve these environmental problems. The most effective of these strategies is the reduction of pollutant loadings into the sea in accordance with total pollutant load control (TPLC) plans. A combined three-dimensional hydrodynamic transport-transformation model was constructed to estimate the marine environmental capacity of chemical oxygen demand (COD). The maximum allowable loadings for each discharge unit in LB were calculated from the applicable simulation results. The simulation results indicated that the environmental capacity of COD is approximately 11×10⁴ t year⁻¹ when the water quality complies with the marine functional zoning standards for LB. A pollutant reduction scheme to diminish the present levels of mariculture- and domestic-based COD loadings is based on the estimated marine COD environmental capacity. The obtained values imply that the LB waters could comply with the targeted water quality criteria. To meet the revised marine functional zoning standards, discharge loadings from discharge units 1 and 11 should be reduced to 996 and 3236 t year⁻¹, respectively.

  11. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. Measurements of the acoustic signature recorded onboard a small Unmanned Aerial Vehicle (UAV) are compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delays between the UAV and the ground microphones, which are affected by the atmospheric temperature and wind speed vectors along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allow local meteorological measurements made at the UAV and ground receivers to supplement any acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments, otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.
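
    A minimal sketch of the RBF parameterization: the temperature (or wind) perturbation field is a weighted sum of Gaussian RBFs, each UAV-to-microphone observation is modeled as a line integral of that field along the ray path, and the weights follow from a linear least-squares solve. The geometry, RBF width, and noise level below are assumptions (a 2D toy rather than the full 3D problem).

      import numpy as np

      rng = np.random.default_rng(0)
      centers = np.array([[x, y] for x in np.linspace(0, 1, 5) for y in np.linspace(0, 1, 5)])
      s = 0.15                                             # assumed RBF width

      def rbf(pts):                                        # (N, 2) -> (N, n_centers)
          d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * s ** 2))

      def line_integral_row(p0, p1, n_samp=100):
          """Discretized line integral of each basis function along the ray from p0 to p1."""
          t = np.linspace(0, 1, n_samp)[:, None]
          pts = p0 + t * (p1 - p0)
          ds = np.linalg.norm(p1 - p0) / n_samp
          return rbf(pts).sum(0) * ds

      # Synthetic truth and synthetic ray observations between random endpoints.
      w_true = rng.normal(size=len(centers))
      rays = [(rng.uniform(0, 1, 2), rng.uniform(0, 1, 2)) for _ in range(200)]
      A = np.array([line_integral_row(a, b) for a, b in rays])
      obs = A @ w_true + rng.normal(0, 0.01, len(rays))

      w_est, *_ = np.linalg.lstsq(A, obs, rcond=None)      # regularization would be used in practice
      print("relative recovery error:", np.linalg.norm(w_est - w_true) / np.linalg.norm(w_true))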

  12. 3D Ion Temperature Reconstruction

    NASA Astrophysics Data System (ADS)

    Tanabe, Hiroshi; You, Setthivoine; Balandin, Alexander; Inomoto, Michiaki; Ono, Yasushi

    2009-11-01

    The TS-4 experiment at the University of Tokyo collides two spheromaks to form a single high-beta compact toroid. Magnetic reconnection during the merging process heats and accelerates the plasma in toroidal and poloidal directions. The reconnection region has a complex 3D topology determined by the pitch of the spheromak magnetic fields at the merging plane. A pair of multichord passive spectroscopic diagnostics have been established to measure the ion temperature and velocity in the reconnection volume. One setup measures spectral lines across a poloidal plane, retrieving velocity and temperature from Abel inversion. The other, novel setup records spectral lines across another section of the plasma and reconstructs velocity and temperature from 3D vector and 2D scalar tomography techniques. The magnetic field linking both measurement planes is determined from in situ magnetic probe arrays. The ion temperature is then estimated within the volume between the two measurement planes and at the reconnection region. The measurement is followed over several repeatable discharges to follow the heating and acceleration process during the merging reconnection.

  13. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiduciary markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phones' displays, as if the display was a window into this space. Besides lines, our prototype application also supports 3D geometry creation, geometry transformation operations, and it shows the location of the other user's phone.
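
    The line-drawing idea lends itself to a very small sketch. The class below is a hypothetical illustration (the pose source and class name are assumptions, not the authors' Android code): the position part of each tracked phone pose is appended to a 3D polyline, and line segments are rendered between consecutive samples.

        import numpy as np

        class Polyline3D:
            """Collects 3D positions sampled from a tracked phone into a line."""
            def __init__(self):
                self.points = []

            def add_pose(self, position_xyz):
                # In the prototype the pose comes from marker-based tracking;
                # here we simply append the translation part of the pose.
                self.points.append(np.asarray(position_xyz, dtype=float))

            def as_array(self):
                return np.vstack(self.points) if self.points else np.empty((0, 3))

        # Simulated phone trajectory standing in for tracked poses.
        line = Polyline3D()
        for t in np.linspace(0, 1, 20):
            line.add_pose([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), 0.1 * t])
        print(line.as_array().shape)   # (20, 3); segments join consecutive points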

  14. Mixed body- and gaze-centered coding of proprioceptive reach targets after effector movement.

    PubMed

    Mueller, Stefanie; Fiehler, Katja

    2016-07-01

    Previous studies demonstrated that an effector movement intervening between encoding and reaching to a proprioceptive target determines the underlying reference frame: proprioceptive reach targets are represented in a gaze-independent reference frame if no movement occurs but are represented with respect to gaze after an effector movement (Mueller and Fiehler, 2014a). The present experiment explores whether an effector movement leads to a switch from a gaze-independent, body-centered reference frame to a gaze-dependent reference frame or whether a gaze-dependent reference frame is employed in addition to a gaze-independent, body-centered reference frame. Human participants were asked to reach in complete darkness to an unseen finger (proprioceptive target) of their left target hand indicated by a touch. They completed 2 conditions in which the target hand remained either stationary at the target location (stationary condition) or was actively moved to the target location, received a touch and was moved back before reaching to the target (moved condition). We dissociated the location of the movement vector relative to the body midline and to the gaze direction. Using correlation and regression analyses, we estimated the contribution of each reference frame based on horizontal reach errors in the stationary and moved conditions. Gaze-centered coding was only found in the moved condition, replicating our previous results. Body-centered coding dominated in the stationary condition, while body- and gaze-centered coding contributed equally strongly in the moved condition. Our results indicate a shift from body-centered to combined body- and gaze-centered coding due to an effector movement before reaching towards proprioceptive targets. PMID:27157885
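
    The regression logic used to separate the two reference frames can be sketched as follows. This is a hypothetical illustration with synthetic data (the predictors, weights, and noise level are assumptions, not the authors' analysis): horizontal reach errors are modelled as a weighted combination of a body-centred and a gaze-centred predictor, and the weights are estimated by ordinary least squares.

        import numpy as np

        rng = np.random.default_rng(2)
        n_trials = 200

        # Target position expressed relative to the body midline and to gaze.
        body_pred = rng.normal(size=n_trials)
        gaze_pred = rng.normal(size=n_trials)

        # Synthetic "moved" condition: both frames contribute equally.
        true_w_body, true_w_gaze = 0.5, 0.5
        errors = (true_w_body * body_pred + true_w_gaze * gaze_pred
                  + 0.2 * rng.normal(size=n_trials))

        X = np.column_stack([body_pred, gaze_pred, np.ones(n_trials)])
        (w_body, w_gaze, intercept), *_ = np.linalg.lstsq(X, errors, rcond=None)
        print(round(w_body, 2), round(w_gaze, 2))   # recovered contributions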

  15. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  16. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  17. A new automatic method for estimation of magnetization and density contrast by using three-dimensional (3D) magnetic and gravity anomalies

    NASA Astrophysics Data System (ADS)

    Bektas, Ozcan; Ates, Abdullah; Aydemir, Attila

    2012-09-01

    In this paper, a new method for estimating the ratio of magnetic intensity to density contrast of a body that creates magnetic and gravity anomalies is presented. Although the magnetic intensity and density of an anomalous body can be measured in the laboratory from surface samples, the proposed method is developed to determine the magnetic intensity and density contrast directly from the magnetic and gravity anomalies when surface samples are not available. In this method, density contrast diagrams of a synthetic model are produced as graphs with the magnetic intensity (J) on the vertical axis and the Psg (pseudogravity)/Grv (gravity) ratio on the horizontal axis. The density contrast diagrams can be prepared as three sub-diagrams covering the low, middle and high ranges, allowing the density contrast of the body to be obtained. The proposed method is successfully tested on synthetic models with and without error. In order to verify the results, an alternative root-mean-square (RMS) method is also applied to the same models to determine the density contrast; the maximum correlation between the observed and calculated gravity anomalies is sought, and confirmation of the results is supported by the RMS method. To check the reliability of the new method on field data, the proposed method is applied to the Tetbury (England) and Hanobasi (Central Turkey) magnetic and gravity anomalies. Field models are correlated with available geological, seismic and borehole data. The results are found to be consistent and reliable for estimating the magnetic intensity and density contrast of the causative bodies.

  18. Piecewise-rigid 2D-3D registration for pose estimation of snake-like manipulator using an intraoperative x-ray projection

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Murphy, R. J.; Kutzer, M. D.; Taylor, R. H.; Armand, M.

    2014-03-01

    Background: Snake-like dexterous manipulators may offer significant advantages in minimally-invasive surgery in areas not reachable with conventional tools. Precise control of a wire-driven manipulator is challenging due to factors such as cable deformation and unknown internal (cable friction) and external forces, thus requiring intraoperative correction of the calibration by determining the actual pose of the manipulator. Method: A method for simultaneously estimating the pose and kinematic configuration of a piecewise-rigid object such as a snake-like manipulator from a single x-ray projection is presented. The method parameterizes the kinematics using a small number of variables (e.g., 5), and optimizes them simultaneously with the 6 degree-of-freedom pose of the base link using an image similarity between digitally reconstructed radiographs (DRRs) of the manipulator's attenuation model and the real x-ray projection. Results: Simulation studies assumed various geometric magnifications (1.2-2.6) and out-of-plane angulations (0°-90°) in a scenario of hip osteolysis treatment, and demonstrated a median joint angle error of 0.04° (for 2.0 magnification, +/-10° out-of-plane rotation). The average computation time was 57.6 s with 82,953 function evaluations on a mid-range GPU. The joint angle error remained lower than 0.07° for out-of-plane rotations of 0°-60°. An experiment using video images of a real manipulator demonstrated a trend similar to the simulation study, except for a slightly larger error around the tip, attributed to the accumulation of errors induced by deformation around each joint that is not modeled by a simple pin joint. Conclusions: The proposed approach enables high precision tracking of a piecewise-rigid object (i.e., a series of connected rigid structures) using a single projection image by incorporating prior knowledge about the shape and kinematic behavior of the object (e.g., each rigid structure connected by a pin joint parameterized by a
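
    The simultaneous pose-and-configuration optimization can be sketched as a generic similarity-driven search. The code below is a hedged illustration only: the toy "DRR" renderer and the normalized cross-correlation cost are placeholders standing in for the attenuation-model DRRs and the similarity metric of the paper, while the parameter counts (6-DOF pose plus 5 kinematic variables) follow the text.

        import numpy as np
        from scipy.optimize import minimize

        def render_drr(params):
            """Toy projection whose appearance depends on the base pose (first six
            entries) and the kinematic parameters (remaining entries)."""
            pose, joints = params[:6], params[6:]
            yy, xx = np.mgrid[0:64, 0:64]
            cx, cy = 32.0 + pose[0], 32.0 + pose[1]
            width = 8.0 + np.abs(joints).sum()
            return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * width ** 2))

        def ncc_cost(params, measured):
            """Negative normalized cross-correlation between DRR and measurement."""
            drr = render_drr(params)
            a, b = drr - drr.mean(), measured - measured.mean()
            return -(a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

        rng = np.random.default_rng(3)
        true_params = np.concatenate([[4.0, -3.0, 0, 0, 0, 0], [0.5, 0, 0, 0, 0]])
        measured = render_drr(true_params) + 0.01 * rng.normal(size=(64, 64))

        x0 = np.zeros(11)   # 6-DOF base pose + 5 kinematic parameters
        result = minimize(ncc_cost, x0, args=(measured,), method="Nelder-Mead",
                          options={"maxiter": 2000})
        print(np.round(result.x[:2], 2))   # recovered in-plane pose offsets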

  19. A joint data assimilation system (Tan-Tracker) to simultaneously estimate surface CO2 fluxes and 3-D atmospheric CO2 concentrations from observations

    NASA Astrophysics Data System (ADS)

    Tian, X.; Xie, Z.; Liu, Y.; Cai, Z.; Fu, Y.; Zhang, H.; Feng, L.

    2014-12-01

    We have developed a novel framework ("Tan-Tracker") for assimilating observations of atmospheric CO2 concentrations, based on the POD-based (proper orthogonal decomposition) ensemble four-dimensional variational data assimilation method (PODEn4DVar). The high flexibility and the high computational efficiency of the PODEn4DVar approach allow us to include both the atmospheric CO2 concentrations and the surface CO2 fluxes as part of the large state vector to be simultaneously estimated from assimilation of atmospheric CO2 observations. Compared to most modern top-down flux inversion approaches, where only surface fluxes are considered as control variables, one major advantage of our joint data assimilation system is that, in principle, no assumption on perfect transport models is needed. In addition, the possibility for Tan-Tracker to use a complete dynamic model to consistently describe the time evolution of CO2 surface fluxes (CFs) and the atmospheric CO2 concentrations represents a better use of observation information for recycling the analyses at each assimilation step in order to improve the forecasts for the following assimilations. An experimental Tan-Tracker system has been built based on a complete augmented dynamical model, where (1) the surface atmosphere CO2 exchanges are prescribed by using a persistent forecasting model for the scaling factors of the first-guess net CO2 surface fluxes and (2) the atmospheric CO2 transport is simulated by using the GEOS-Chem three-dimensional global chemistry transport model. Observing system simulation experiments (OSSEs) for assimilating synthetic in situ observations of surface CO2 concentrations are carefully designed to evaluate the effectiveness of the Tan-Tracker system. In particular, detailed comparisons are made with its simplified version (referred to as TT-S) with only CFs taken as the prognostic variables. It is found that our Tan-Tracker system is capable of outperforming TT-S with higher assimilation

  20. Estimation of pulmonary arterial volume changes in the normal and hypertensive fawn-hooded rat from 3D micro-CT data

    NASA Astrophysics Data System (ADS)

    Molthen, Robert C.; Wietholt, Christian; Haworth, Steven T.; Dawson, Christopher A.

    2002-04-01

    In the study of pulmonary vascular remodeling, much can be learned from observing the morphological changes undergone in the pulmonary arteries of the rat lung when exposed to chronic hypoxia or other challenges which elicit a remodeling response. Remodeling effects include thickening of vessel walls, and loss of wall compliance. Morphometric data can be used to localize the hemodynamic and functional consequences. We developed a CT imaging method for measuring the pulmonary arterial tree over a range of pressures in rat lungs. X-ray micro-focal isotropic volumetric imaging of the arterial tree in the intact rat lung provides detailed information on the size, shape and mechanical properties of the arterial network. In this study, we investigate the changes in arterial volume with step changes in pressure for both normoxic and hypoxic Fawn-Hooded (FH) rats. We show that FH rats exposed to hypoxia tend to have reduced arterial volume changes for the same preload when compared to FH controls. A secondary objective of this work is to quantify various phenotypes to better understand the genetic contribution of vascular remodeling in the lungs. This volume estimation method shows promise in high throughput phenotyping, distinguishing differences in the pulmonary hypertensive rat model.

  1. Estimation of water distribution and degradation mechanisms in polymer electrolyte membrane fuel cell gas diffusion layers using a 3D Monte Carlo model

    NASA Astrophysics Data System (ADS)

    Seidenberger, K.; Wilhelm, F.; Schmitt, T.; Lehnert, W.; Scholta, J.

    Understanding of water management in PEM fuel cells, of the degradation mechanisms of the gas diffusion layer (GDL), and of their mutual impact is still incomplete. Different modelling approaches contribute to gaining deeper insight into the processes occurring during fuel cell operation. Considering the GDL, models can help to obtain information about the distribution of liquid water within the material. In particular, flooded regions can be identified and the water distribution can be linked to the system geometry. Employed for material development, this information can help to increase the lifetime of the GDL as a fuel cell component and of the fuel cell as a whole. The Monte Carlo (MC) model presented here helps to simulate and analyse the water household in PEM fuel cell GDLs. The model comprises a three-dimensional, voxel-based representation of the GDL substrate, a section of the flowfield channel and the corresponding rib. Information on the water distribution within the substrate part of the GDL can be estimated.

  2. Estimating the relationship between urban 3D morphology and land surface temperature using airborne LiDAR and Landsat-8 Thermal Infrared Sensor data

    NASA Astrophysics Data System (ADS)

    Lee, J. H.

    2015-12-01

    Urban forests are known to mitigate the urban heat island effect and heat-related health issues by reducing air and surface temperatures. Beyond the amount of canopy area, however, little is known about which spatial patterns and structures of urban forests best contribute to reducing temperatures and mitigating urban heat effects. Previous studies have attempted to relate land surface temperature to various indicators of vegetation abundance using remotely sensed data, but the majority relied on two-dimensional, area-based metrics such as tree canopy cover, impervious surface area, and the Normalized Difference Vegetation Index. This study investigates the relationship between the three-dimensional spatial structure of urban forests and urban surface temperature, focusing on vertical variance. We use a Landsat-8 Thermal Infrared Sensor image (acquired on July 24, 2014) to estimate the land surface temperature of the City of Sacramento, CA. We extract the height and volume of urban features (both vegetation and non-vegetation) using airborne LiDAR (Light Detection and Ranging) and high spatial resolution aerial imagery. Using regression analysis, we apply an empirical approach to relate the land surface temperature to different sets of variables describing the spatial patterns and structures of various urban features, including trees. Our analysis demonstrates that incorporating vertical variance parameters improves the accuracy of the model. The results suggest that urban tree planting is an effective and viable way to mitigate urban heat, both by increasing the vertical variance of the urban surface and through evaporative cooling.
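
    The empirical comparison can be sketched with synthetic data. The example below is purely illustrative (the predictors, coefficients, and noise are assumptions, not the study's measurements): a least-squares fit using only 2D metrics is compared with a fit that also includes a vertical-variance predictor, mirroring the finding that the 3D term improves the model.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 300

        # Hypothetical per-block predictors of land surface temperature.
        canopy_cover = rng.uniform(0, 1, n)        # 2D metric
        impervious = rng.uniform(0, 1, n)          # 2D metric
        height_variance = rng.uniform(0, 4, n)     # 3D metric from LiDAR heights

        # Synthetic surface temperature (deg C) with a vertical-variance effect.
        lst = (40 - 8 * canopy_cover + 6 * impervious - 1.5 * height_variance
               + rng.normal(0, 1, n))

        def r_squared(X, y):
            """Explained-variance score of an ordinary least-squares fit."""
            X1 = np.column_stack([X, np.ones(len(y))])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            resid = y - X1 @ beta
            return 1 - resid.var() / y.var()

        print("2D only:",
              round(r_squared(np.column_stack([canopy_cover, impervious]), lst), 3))
        print("2D + vertical variance:",
              round(r_squared(np.column_stack([canopy_cover, impervious,
                                               height_variance]), lst), 3))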

  3. Referent expressions and gaze: reference type influences real-world gaze cue utilization.

    PubMed

    Macdonald, Ross G; Tatler, Benjamin W

    2015-04-01

    Gaze cues are used alongside language to communicate. Lab-based studies have shown that people reflexively follow gaze cue stimuli; however, it is unclear whether this effect is present in real interactions. Language specificity influences the extent to which we utilize gaze cues in real interactions, but it is unclear whether the type of language used can similarly affect gaze cue utilization. We aimed to (a) investigate whether automatic gaze following effects are present in real-world interactions, and (b) explore how gaze cue utilization varies depending on the form of concurrent language used. Wearing a mobile eye-tracker, participants followed instructions to complete a real-world search task. The instructor varied the determiner used (featural or spatial) and the presence of gaze cues (absent, congruent, or incongruent). Congruent gaze cues were used more when provided alongside featural references. Incongruent gaze cues were initially followed no more than chance. However, unlike participants in the no-gaze condition, participants in the incongruent condition did not benefit from receiving spatial instructions over featural instructions. We suggest that although participants selectively use informative gaze cues and ignore unreliable gaze cues, visual search can nevertheless be disrupted when inherently spatial gaze cues are accompanied by contradictory verbal spatial references. PMID:25621580

  4. Image compression and decompression based on gazing area

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Endo, Chizuko; Haneishi, Hideaki; Miyake, Yoichi

    1996-04-01

    In this paper, we introduce a new data compression and decompression technique for searching for a target image based on the gazing area of the image. Many data compression methods have been proposed; in particular, the JPEG compression technique has been widely used as a standard. However, this method is not always effective for searching for target images in an image filing system. In a previous paper, using eye movement analysis, we found that images have a particular gazing area. Since the gazing area is the most important region of the image, we use this information to compress and transmit the image. A method named fixation-based progressive image transmission is introduced to transmit the image effectively. In this method, after the gazing area is estimated, that area is transmitted first and the other regions are transmitted afterwards. If we are not interested in the first transmitted image, we can move on and search other images. Therefore, the target image can be found in the filing system efficiently. We compare the searching time of the proposed method with that of the conventional method. The results show that the proposed method is faster than the conventional one at finding the target image.
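
    The fixation-based transmission order can be sketched in a few lines. The function below is a hypothetical illustration (block size, distance-based ordering, and names are assumptions, not the authors' codec): image blocks are sorted by the distance of their centres from the estimated gazing area, so that the gazed-at region is transmitted first and the periphery follows.

        import numpy as np

        def transmission_order(img_shape, gaze_center, block=16):
            """Order image blocks so those nearest the gazing area are sent first."""
            h, w = img_shape
            blocks = [(r, c) for r in range(0, h, block) for c in range(0, w, block)]
            cy, cx = gaze_center
            return sorted(blocks,
                          key=lambda rc: (rc[0] + block / 2 - cy) ** 2
                                         + (rc[1] + block / 2 - cx) ** 2)

        order = transmission_order((256, 256), gaze_center=(80, 180))
        print(order[:3])   # blocks covering the gazing area come first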

  5. Where am I looking? The accuracy of video-mediated gaze awareness.

    PubMed

    Gale, C; Monk, A F

    2000-04-01

    Participants worked in pairs, with one person gazing at a flat horizontal stimulus between them. The other participant estimated where the gazer was looking. Experiment 1 used linear scales as gaze targets. The mean root mean square error of estimation equates to 3.8 degrees of head-and-eye pan and 2.6 degrees of tilt. This small error of estimation was essentially the same in a video-mediated condition and in a condition in which the estimator could not see the head-and-eye movement to the target position. Experiment 2 obtained comparable gaze estimation performance in face-to-face and video-mediated conditions, using a combined pan-and-tilt grid. It is concluded that people are very good at estimating what someone else is looking at and that such estimations should be practical during video-mediated conversation. PMID:10909249

  6. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high dynamic range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, from which the scene depth can be estimated from different directions. This multidirectional depth estimation enables effective high dynamic range 3D imaging. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method for performing high-quality 3D imaging of both highly and lowly reflective surfaces. PMID:27607639
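
    The ray-wise calibration idea can be illustrated with a toy fit. The sketch below assumes, purely for illustration, a linear depth-versus-phase relation per ray and a handful of calibration planes at known depths (the paper derives the actual phase-depth mapping; the linear form, noise level, and names here are assumptions): each ray gets its own mapping coefficients, which are then applied to new phase measurements.

        import numpy as np

        rng = np.random.default_rng(5)

        n_rays, n_planes = 1000, 6
        true_a = rng.uniform(0.8, 1.2, n_rays)      # per-ray mapping coefficients
        true_b = rng.uniform(-0.1, 0.1, n_rays)

        calib_depths = np.linspace(100, 600, n_planes)          # known depths (mm)
        phases = (calib_depths[None, :] - true_b[:, None]) / true_a[:, None]
        phases += 0.5 * rng.normal(size=phases.shape)           # measurement noise

        # Independent linear fit depth = a * phase + b for every ray.
        a_hat = np.empty(n_rays)
        b_hat = np.empty(n_rays)
        for i in range(n_rays):
            a_hat[i], b_hat[i] = np.polyfit(phases[i], calib_depths, deg=1)

        new_phase = rng.uniform(100, 600, n_rays)   # phases from a new scene
        depth_est = a_hat * new_phase + b_hat       # per-ray depth estimates
        print(depth_est.shape)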

  7. Crystal ball gazing

    NASA Technical Reports Server (NTRS)

    Gettys, Jim

    1992-01-01

    Over the last seven years, the CPU on my desk has increased in speed by two orders of magnitude, from around 1 MIP to more than 100 MIPS; more important is that it is about as fast as any uniprocessor of any type available at any price for compute-bound problems. Memory on the system is also about 100 times as big, while disk is only about 10 times as big. Local network and I/O performance have increased greatly, though not quite at the same rate as processor speed. More important, I will argue, is that the CPU's address space is 64 bits rather than 32 bits, allowing us to rethink some time-honored presumptions. The Internet has gone from a few hundred machines to a million, has grown to span the entire globe, and wide area networks are now becoming commercial services. 'PC's' are now real computers, bringing what was top-of-the-line computing capability to the masses only a few years behind the leading edge. So even a year or two from now, we can anticipate commonplace desktop machines running at speeds of hundreds of MIPS, with main memories in the hundreds of megabytes to a gigabyte, able to draw millions of vectors per second, and all capable of some reasonable 3D graphics. And only a few years later, this will be the $1500 PC. So the 1990s certainly bring: 64-bit processors becoming standard; BIP/BFLOP-class uniprocessors; large-scale multiprocessors for special-purpose applications; I/O as the most significant computer engineering problem; hierarchical data servers in everyday use; routine access to archived data around the world; and what else? What do systems such as those we will have this decade imply for those building data analysis systems today? Many of the presumptions of the 1970's and 1980's need to be reexamined in the light of 1990's technology.

  8. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    PubMed

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 (2001), doi:10.1364/JOSAA.18.000765) enables simultaneous acquisition of spectral information and 3D spatial information for an incoherently illuminated or self-luminous object. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method for the 3D spatial resolution of digital holography. PMID:27139648

  9. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  10. [Case of acute ophthalmoparesis with gaze nystagmus].

    PubMed

    Ikuta, Naomi; Tada, Yukiko; Koga, Michiaki

    2012-01-01

    A 61-year-old man developed double vision subsequent to a diarrheal illness. Mixed horizontal-vertical gaze palsy in both eyes, diminution of tendon reflexes, and gaze nystagmus were noted. His horizontal gaze palsy was accompanied by gaze nystagmus in the abducent direction, indicative of a disturbance in the central nervous system. Neither limb weakness nor ataxia was noted. Serum anti-GQ1b antibody was detected. Brain magnetic resonance imaging (MRI) findings were normal. The patient was diagnosed as having acute ophthalmoparesis. The ophthalmoparesis and nystagmus gradually disappeared over 3 months. The accompanying nystagmus suggests that central nervous system disturbance may also be present in acute ophthalmoparesis. PMID:22790807

  11. Profile of Gaze Dysfunction following Cerebrovascular Accident.

    PubMed

    Rowe, Fiona J; Wright, David; Brand, Darren; Jackson, Carole; Harrison, Shirley; Maan, Tallat; Scott, Claire; Vogwell, Linda; Peel, Sarah; Akerman, Nicola; Dodridge, Caroline; Howard, Claire; Shipman, Tracey; Sperring, Una; Macdiarmid, Sonia; Freeman, Cicely

    2013-01-01

    Aim. To evaluate the profile of ocular gaze abnormalities occurring following stroke. Methods. Prospective multicentre cohort trial. Standardised referral and investigation protocol including assessment of visual acuity, ocular alignment and motility, visual field, and visual perception. Results. 915 patients recruited: mean age 69.18 years (SD 14.19). 498 patients (54%) were diagnosed with ocular motility abnormalities. 207 patients had gaze abnormalities including impaired gaze holding (46), complete gaze palsy (23), horizontal gaze palsy (16), vertical gaze palsy (17), Parinaud's syndrome (8), INO (20), one and half syndrome (3), saccadic palsy (28), and smooth pursuit palsy (46). These were isolated impairments in 50% of cases and in association with other ocular abnormalities in 50% including impaired convergence, nystagmus, and lid or pupil abnormalities. Areas of brain stroke were frequently the cerebellum, brainstem, and diencephalic areas. Strokes causing gaze dysfunction also involved cortical areas including occipital, parietal, and temporal lobes. Symptoms of diplopia and blurred vision were present in 35%. 37 patients were discharged, 29 referred, and 141 offered review appointments. 107 reviewed patients showed full recovery (4%), partial improvement (66%), and static gaze dysfunction (30%). Conclusions. Gaze dysfunction is common following stroke. Approximately one-third of patients complain of visual symptoms, two thirds show some improvement in ocular motility. PMID:24558601

  12. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  13. Gaze-contingent motor channelling and haptic constraints for minimally invasive robotic surgery.

    PubMed

    Mylonas, George P; Kwok, Ka-Wai; Darzi, Ara; Yang, Guang-Zhong

    2008-01-01

    The use of master-slave surgical robots for Minimally Invasive Surgery (MIS) has created a physical separation between the surgeon and the patient. Reconnecting the essential visuomotor sensory feedback is important for the safe practice of robotic assisted MIS procedures. This paper introduces a novel gaze contingent framework with real-time haptic feedback by transforming visual sensory information into physical constraints that can interact with the motor sensory channel. We demonstrate how motor tracking of deforming tissue can be made more effective and accurate through the concept of gaze-contingent motor channelling. The method also uses 3D eye gaze to dynamically prescribe and update safety boundaries during robotic assisted MIS without prior knowledge of the soft-tissue morphology. Initial validation results on both simulated and robotic assisted phantom procedures demonstrate the potential clinical value of the technique. PMID:18982663

  14. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called 'lidar' (light detection and ranging) to create a highly accurate virtual-reality map of the Nation. 3D maps have many uses, with new applications being discovered all the time.

  15. Lifting Object Detection Datasets into 3D.

    PubMed

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat, and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions such as 3D scanning or manual design, which scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  16. Gaze contingent hologram synthesis for holographic head-mounted display

    NASA Astrophysics Data System (ADS)

    Hong, Jisoo; Kim, Youngmin; Hong, Sunghee; Shin, Choonsung; Kang, Hoonjong

    2016-03-01

    Development of display and related technologies provides an immersive visual experience with head-mounted displays (HMDs). However, most available HMDs provide 3D perception only through stereopsis and lack accommodation depth cues. Recently, the holographic HMD (HHMD) has arisen as one viable option to resolve this problem, because holograms are known to provide a full set of depth cues, including accommodation. Moreover, by virtue of increasing computational power, hologram synthesis from a 3D object represented by a point cloud can be calculated in real time, even with the rigorous Rayleigh-Sommerfeld diffraction formula. However, in an HMD, rapid gaze changes of the user require a much faster refresh rate, which means that much faster hologram synthesis is indispensable in an HHMD. Because visual acuity falls off in the visual periphery, we propose to accelerate hologram synthesis by varying the density of the point cloud projected on the screen. We classify the screen into multiple layers, which are concentric circles with different radii whose center is aligned with the gaze of the user. A layer with a smaller radius is closer to the region of interest and is hence assigned a higher density of points. Because the computation time is directly related to the number of points in the point cloud, we can accelerate hologram synthesis by lowering the density of the point cloud in the visual periphery. A cognitive study reveals that users cannot discriminate this degradation in the visual periphery if the parameters are properly designed. A prototype HHMD system is provided for verifying the feasibility of our method, and a detailed design scheme is discussed.
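
    The gaze-contingent thinning of the point cloud can be sketched directly. The function below is a hypothetical illustration (the layer radii, keep probabilities, and random thinning are assumptions, not the authors' scheme): points projected near the gaze position are all kept, while points in successive concentric layers are kept with progressively lower probability before hologram synthesis.

        import numpy as np

        def thin_point_cloud(points_xy, gaze_xy, radii=(50, 120, 300),
                             keep=(1.0, 0.5, 0.2, 0.05), rng=None):
            """Keep all points near the gaze and fewer in each outer layer."""
            rng = rng or np.random.default_rng(0)
            d = np.linalg.norm(points_xy - gaze_xy, axis=1)
            layer = np.searchsorted(radii, d)          # 0 = innermost layer
            p_keep = np.asarray(keep)[layer]
            return points_xy[rng.random(len(points_xy)) < p_keep]

        pts = np.random.default_rng(6).uniform(0, 1000, size=(20000, 2))
        kept = thin_point_cloud(pts, gaze_xy=np.array([500.0, 500.0]))
        print(len(pts), "->", len(kept))   # fewer points, faster hologram synthesis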

  17. Visualization of liver in 3-D

    NASA Astrophysics Data System (ADS)

    Chen, Chin-Tu; Chou, Jin-Shin; Giger, Maryellen L.; Kahn, Charles E., Jr.; Bae, Kyongtae T.; Lin, Wei-Chung

    1991-05-01

    Visualization of the liver in three dimensions (3-D) can improve the accuracy of volumetric estimation and also aid in surgical planning. We have developed a method for 3-D visualization of the liver using x-ray computed tomography (CT) or magnetic resonance (MR) images. This method includes four major components: (1) segmentation algorithms for extracting liver data from tomographic images; (2) interpolation techniques for both shape and intensity; (3) schemes for volume rendering and display, and (4) routines for electronic surgery and image analysis. This method has been applied to cases from a living-donor liver transplant project and appears to be useful for surgical planning.

  18. Modular 3-D Transport model

    EPA Science Inventory

    MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab...

  19. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  20. LLNL-Earth3D

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  1. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  2. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  3. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky (``spaxels'') onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread though the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  4. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they have been modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time can be robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different level of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2] , 3D object classes [3] , Pascal3D+ [4] , Pascal VOC 2007 [5] , EPFL multi-view cars[6] ). PMID:26440264

  5. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  6. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  7. Follow My Eyes: The Gaze of Politicians Reflexively Captures the Gaze of Ingroup Voters

    PubMed Central

    Liuzza, Marco Tullio; Cazzato, Valentina; Vecchione, Michele; Crostella, Filippo; Caprara, Gian Vittorio; Aglioti, Salvatore Maria

    2011-01-01

    Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention. PMID:21957479

  8. Quasi 3D dosimetry (EPID, conventional 2D/3D detector matrices)

    NASA Astrophysics Data System (ADS)

    Bäck, A.

    2015-01-01

    Patient specific pretreatment measurement for IMRT and VMAT QA should preferably give information with a high resolution in 3D. The ability to distinguish complex treatment plans, i.e. treatment plans with a difference between measured and calculated dose distributions that exceeds a specified tolerance, puts high demands on the dosimetry system used for the pretreatment measurements and the results of the measurement evaluation needs a clinical interpretation. There are a number of commercial dosimetry systems designed for pretreatment IMRT QA measurements. 2D arrays such as MapCHECK® (Sun Nuclear), MatriXXEvolution (IBA Dosimetry) and OCTAVIOUS® 1500 (PTW), 3D phantoms such as OCTAVIUS® 4D (PTW), ArcCHECK® (Sun Nuclear) and Delta4 (ScandiDos) and software for EPID dosimetry and 3D reconstruction of the dose in the patient geometry such as EPIDoseTM (Sun Nuclear) and Dosimetry CheckTM (Math Resolutions) are available. None of those dosimetry systems can measure the 3D dose distribution with a high resolution (full 3D dose distribution). Those systems can be called quasi 3D dosimetry systems. To be able to estimate the delivered dose in full 3D the user is dependent on a calculation algorithm in the software of the dosimetry system. All the vendors of the dosimetry systems mentioned above provide calculation algorithms to reconstruct a full 3D dose in the patient geometry. This enables analyzes of the difference between measured and calculated dose distributions in DVHs of the structures of clinical interest which facilitates the clinical interpretation and is a promising tool to be used for pretreatment IMRT QA measurements. However, independent validation studies on the accuracy of those algorithms are scarce. Pretreatment IMRT QA using the quasi 3D dosimetry systems mentioned above rely on both measurement uncertainty and accuracy of calculation algorithms. In this article, these quasi 3D dosimetry systems and their use in patient specific pretreatment IMRT

  9. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  10. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    Since many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems where a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, a huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps that are involved in the calibration of the system as well as the technique of generating human readable holoscopic images from the recorded data are discussed.

  11. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  12. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  13. Gaze Following: Why (Not) Learn It?

    ERIC Educational Resources Information Center

    Triesch, Jochen; Teuscher, Christof; Deak, Gedeon O.; Carlson, Eric

    2006-01-01

    We propose a computational model of the emergence of gaze following skills in infant-caregiver interactions. The model is based on the idea that infants learn that monitoring their caregiver's direction of gaze allows them to predict the locations of interesting objects or events in their environment (Moore & Corkum, 1994). Elaborating on this…

  14. Culture and Listeners' Gaze Responses to Stuttering

    ERIC Educational Resources Information Center

    Zhang, Jianliang; Kalinowski, Joseph

    2012-01-01

    Background: It is frequently observed that listeners demonstrate gaze aversion to stuttering. This response may have profound social/communicative implications for both fluent and stuttering individuals. However, there is a lack of empirical examination of listeners' eye gaze responses to stuttering, and it is unclear whether cultural background…

  15. The Development of Mentalistic Gaze Understanding

    ERIC Educational Resources Information Center

    Doherty, Martin J.

    2006-01-01

    Very young infants are sensitive to and follow other people's gaze. By 18 months children, like chimpanzees, apparently represent the spatial relationship between viewer and object viewed: they can follow eye-direction alone, and react appropriately if the other's gaze is blocked by occluding barriers. This paper assesses when children represent…

  16. 3D dynamic roadmapping for abdominal catheterizations.

    PubMed

    Bender, Frederik; Groher, Martin; Khamene, Ali; Wein, Wolfgang; Heibel, Tim Hauke; Navab, Nassir

    2008-01-01

    Despite rapid advances in interventional imaging, the navigation of a guide wire through abdominal vasculature remains a difficult task, and not only for novice radiologists. Since this navigation is mostly based on 2D fluoroscopic image sequences from a single view, the process is slowed down significantly by missing depth information and patient motion. We propose a novel approach for 3D dynamic roadmapping in deformable regions that predicts the location of the guide wire tip in a 3D vessel model from the tip's 2D location, respiratory motion analysis, and the view geometry. In a first step, the method compensates for the apparent respiratory motion in 2D space before backprojecting the 2D guide wire tip into three-dimensional space using a given projection matrix. To counteract the errors associated with the projection parameters and the motion compensation, as well as the ambiguity caused by vessel deformation, we establish a statistical framework that computes a reliable estimate of the guide wire tip location within the 3D vessel model. With this 2D-to-3D transfer, the navigation can be performed from arbitrary viewing angles, disconnected from the static perspective view of the fluoroscopic sequence. Tests on a realistic breathing phantom and on synthetic data with a known ground truth clearly reveal the superiority of our approach compared to naive methods for 3D roadmapping. The concepts and information presented in this paper are based on research and are not commercially available. PMID:18982662
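
    The geometric core of the 2D-to-3D transfer can be sketched as follows. The code is a simplified, hypothetical illustration (it ignores the respiratory motion compensation and the statistical estimate over the deformable model, and the toy projection matrix and centreline are assumptions): the 2D tip is backprojected to a viewing ray using the projection matrix, and the closest point on the vessel centreline is taken as the 3D estimate.

        import numpy as np

        def backproject_to_centerline(tip_2d, P, centerline_pts):
            """Backproject a 2D guide-wire tip to a 3D ray and snap it to the
            nearest vessel-centreline point."""
            # Camera centre: the (dehomogenized) null space of the 3x4 matrix P.
            _, _, vt = np.linalg.svd(P)
            C = vt[-1]
            C = C[:3] / C[3]
            # A second point on the viewing ray via the pseudo-inverse of P.
            x_h = np.array([tip_2d[0], tip_2d[1], 1.0])
            X = np.linalg.pinv(P) @ x_h
            X = X[:3] / X[3]
            d = (X - C) / np.linalg.norm(X - C)
            # Distance of each centreline point from the ray; take the closest.
            v = centerline_pts - C
            dist = np.linalg.norm(v - (v @ d)[:, None] * d, axis=1)
            return centerline_pts[np.argmin(dist)]

        # Toy projection matrix and synthetic centreline for illustration.
        P = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
        centerline = np.column_stack([np.linspace(-1, 1, 100),
                                      np.zeros(100), np.linspace(2, 4, 100)])
        tip = P @ np.append(centerline[40], 1.0)
        tip = tip[:2] / tip[2]
        print(backproject_to_centerline(tip, P, centerline))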

  17. Bioprinting of 3D hydrogels.

    PubMed

    Stanton, M M; Samitier, J; Sánchez, S

    2015-08-01

    Three-dimensional (3D) bioprinting has recently emerged as an extension of 3D material printing, by using biocompatible or cellular components to build structures in an additive, layer-by-layer methodology for encapsulation and culture of cells. These 3D systems allow for cell culture in a suspension for formation of highly organized tissue or controlled spatial orientation of cell environments. The in vitro 3D cellular environments simulate the complexity of an in vivo environment and natural extracellular matrices (ECM). This paper will focus on bioprinting utilizing hydrogels as 3D scaffolds. Hydrogels are advantageous for cell culture as they are highly permeable to cell culture media, nutrients, and waste products generated during metabolic cell processes. They have the ability to be fabricated in customized shapes with various material properties and dimensions at the micron scale. 3D hydrogels are a reliable method for biocompatible 3D printing and have applications in tissue engineering, drug screening, and organ-on-a-chip models. PMID:26066320

  18. Arena3D: visualization of biological networks in 3D

    PubMed Central

    Pavlopoulos, Georgios A; O'Donoghue, Seán I; Satagopam, Venkata P; Soldatos, Theodoros G; Pafilis, Evangelos; Schneider, Reinhard

    2008-01-01

    Background Complexity is a key problem when visualizing biological networks; as the number of entities increases, most graphical views become incomprehensible. Our goal is to enable many thousands of entities to be visualized meaningfully and with high performance. Results We present a new visualization tool, Arena3D, which introduces a new concept of staggered layers in 3D space. Related data – such as proteins, chemicals, or pathways – can be grouped onto separate layers and arranged via layout algorithms, such as Fruchterman-Reingold, distance geometry, and a novel hierarchical layout. Data on a layer can be clustered via k-means, affinity propagation, Markov clustering, neighbor joining, tree clustering, or UPGMA ('unweighted pair-group method with arithmetic mean'). A simple input format defines the name and URL for each node, and defines connections or similarity scores between pairs of nodes. The use of Arena3D is illustrated with datasets related to Huntington's disease. Conclusion Arena3D is a user-friendly visualization tool that is able to visualize biological or any other network in 3D space. It is free for academic use and runs on any platform. It can be downloaded or launched directly from . The Java3D library and Java 1.5 need to be pre-installed for the software to run. PMID:19040715

  19. Fdf in US3D

    NASA Astrophysics Data System (ADS)

    Otis, Collin; Ferrero, Pietro; Candler, Graham; Givi, Peyman

    2013-11-01

    The scalar filtered mass density function (SFMDF) methodology is implemented in the computer code US3D, an unstructured Eulerian finite-volume hydrodynamic solver that has proven very effective for the simulation of compressible turbulent flows. The resulting SFMDF-US3D code is employed for large eddy simulation (LES) on unstructured meshes. Simulations are conducted of subsonic and supersonic flows under non-reacting and reacting conditions. The consistency and accuracy of the simulated results are assessed, along with an appraisal of the overall performance of the methodology. The SFMDF-US3D code is now capable of simulating high-speed flows in complex configurations.

  20. Eye gaze adaptation under interocular suppression.

    PubMed

    Stein, Timo; Peelen, Marius V; Sterzer, Philipp

    2012-01-01

    The perception of eye gaze is central to social interaction in that it provides information about another person's goals, intentions, and focus of attention. Direction of gaze has been found to reflexively shift the observer's attention in the corresponding direction, and prolonged exposure to averted eye gaze adapts the visual system, biasing perception of subsequent gaze in the direction opposite to the adapting face. Here, we tested the role of conscious awareness in coding eye gaze directions. To this end, we measured aftereffects induced by adapting faces with different eye gaze directions that were presented during continuous flash suppression, a potent interocular suppression technique. In some trials the adapting face was rendered fully invisible, whereas in others it became partially visible. In Experiment 1, the adapting and test faces were presented in identical sizes and to the same eye. Even fully invisible faces were capable of inducing significant eye gaze aftereffects, although these were smaller than aftereffects from partially visible faces. When the adapting and test faces were shown to different eyes in Experiment 2, significant eye gaze aftereffects were still observed for the fully invisible faces, thus showing interocular transfer. Experiment 3 disrupted the spatial correspondence between adapting and test faces by introducing a size change. Under these conditions, aftereffects were restricted to partially visible adapting faces. These results were replicated in Experiment 4 using a blocked adaptation design. Together, these findings indicate that size-dependent low-level components of eye gaze can be represented without awareness, whereas object-centered higher-level representations of eye gaze directions depend on visual awareness. PMID:22753441

  1. Wavefront construction in 3-D

    SciTech Connect

    Chilcoat, S.R.; Hildebrand, S.T.

    1995-12-31

    Travel time computation in inhomogeneous media is essential for pre-stack Kirchhoff imaging in areas such as the sub-salt province in the Gulf of Mexico. The 2D algorithm published by Vinje et al. has been extended to 3D to compute wavefronts in complicated inhomogeneous media. The 3D wavefront construction algorithm provides many advantages over conventional ray tracing and other methods of computing travel times in 3D. The algorithm dynamically maintains a reasonably consistent ray density without making a priori guesses at the number of rays to shoot. The determination of caustics in 3D is a straightforward geometric procedure. The wavefront algorithm also enables the computation of multi-valued travel time surfaces.

  2. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target using a pulse flight-time measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. To remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images with a low sampling rate.
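
    For orientation only, the sketch below shows the basic intensity-correlation reconstruction common to ghost imaging; the heterodyne detection and temporal correlation that give HGI its range resolution are not reproduced here, and the array shapes are assumptions.

    ```python
    import numpy as np

    def ghost_image(ref_patterns, bucket_signal):
        """Correlate reference speckle patterns (M, H, W) with the bucket-detector
        signal (M,) to reconstruct a 2D image of the target."""
        b = bucket_signal - bucket_signal.mean()
        centered = ref_patterns - ref_patterns.mean(axis=0)
        return np.tensordot(b, centered, axes=1) / len(b)
    ```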

  3. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  4. Are You Looking at Me? Measuring the Cone of Gaze

    ERIC Educational Resources Information Center

    Gamer, Matthias; Hecht, Heiko

    2007-01-01

    The processing of gaze cues plays an important role in social interactions, and mutual gaze in particular is relevant for natural as well as video-mediated communications. Mutual gaze occurs when an observer looks at or in the direction of the eyes of another person. The authors chose the metaphor of a cone of gaze to characterize this range of…

  5. Group Differences in the Mutual Gaze of Chimpanzees (Pan Troglodytes)

    ERIC Educational Resources Information Center

    Bard, Kim A.; Myowa-Yamakoshi, Masako; Tomonaga, Masaki; Tanaka, Masayuki; Costall, Alan; Matsuzawa, Tetsuro

    2005-01-01

    A comparative developmental framework was used to determine whether mutual gaze is unique to humans and, if not, whether common mechanisms support the development of mutual gaze in chimpanzees and humans. Mother-infant chimpanzees engaged in approximately 17 instances of mutual gaze per hour. Mutual gaze occurred in positive, nonagonistic…

  6. Use of 3D DCE-MRI for the estimation of renal perfusion and glomerular filtration rate: an intrasubject comparison of FLASH and KWIC with a comprehensive framework for evaluation.

    PubMed

    Eikefjord, Eli; Andersen, Erling; Hodneland, Erlend; Zöllner, Frank; Lundervold, Arvid; Svarstad, Einar; Rørvik, Jarle

    2015-03-01

    OBJECTIVE. The purpose of this article is to compare two 3D dynamic contrast-enhanced (DCE) MRI measurement techniques for MR renography, a radial k-space weighted image contrast (KWIC) sequence and a cartesian FLASH sequence, in terms of intrasubject differences in estimates of renal functional parameters and image quality characteristics. SUBJECTS AND METHODS. Ten healthy volunteers underwent repeated breath-hold KWIC and FLASH sequence examinations with temporal resolutions of 2.5 and 2.8 seconds, respectively. A two-compartment model was used to estimate MRI-derived perfusion parameters and glomerular filtration rate (GFR). The latter was compared with the iohexol GFR and the estimated GFR. Image quality was assessed using a visual grading characteristic analysis of relevant image quality criteria and signal-to-noise ratio calculations. RESULTS. Perfusion estimates from FLASH were closer to literature reference values than were the KWIC sequences. In relation to the iohexol GFR (mean [± SD], 103 ± 11 mL/min/1.73 m(2)), KWIC produced significant underestimations and larger bias in GFR values (mean, 70 ± 30 mL/min/1.73 m(2); bias = -33.2 mL/min/1.73 m(2)) compared with the FLASH GFR (110 ± 29 mL/min/1.73 m(2); bias = 6.4 mL/min/1.73 m(2)). KWIC was statistically significantly (p < 0.005) more impaired by artifacts than was FLASH (AUC = 0.18). The average signal-enhancement ratio (delta ratio) in the cortex was significantly lower for KWIC (delta ratio = 0.99) than for FLASH (delta ratio = 1.40). Other visually graded image quality characteristics and signal-to-noise ratio measurements were not statistically significantly different. CONCLUSION. Using the same postprocessing scheme and pharmacokinetic model, FLASH produced more accurate perfusion and filtration parameters than did KWIC compared with clinical reference methods. Our data suggest an apparent relationship between image quality characteristics and the degree of stability in the numeric model

  7. Design and application of real-time visual attention model for the exploration of 3D virtual environments.

    PubMed

    Hillaire, Sébastien; Lécuyer, Anatole; Regia-Corte, Tony; Cozot, Rémi; Royan, Jérôme; Breton, Gaspard

    2012-03-01

    This paper studies the design and application of a novel visual attention model designed to compute the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context which can compute in real time a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface-elements. Our model also simulates visual reflexes and the cognitive processes which take place in the brain such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze point position on screen that hopefully matches the user's one. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy gains sometimes exceeding 100 percent. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach, compared to object-based approaches. Finally, we expose different applications of our model when exploring virtual environments. We present different algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high-refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely: depth-of-field blur, camera
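
    A toy illustration of the bottom-up/top-down combination described above, assuming both components are available as normalized per-pixel maps; the surface-element representation, reflex simulation and navigation priors of the actual model are not reproduced.

    ```python
    import numpy as np

    def gaze_point(bottom_up, top_down, w_top_down=0.5):
        """Blend a bottom-up saliency map with a top-down relevance map
        (both HxW, values in [0, 1]) and return the pixel of maximum
        attention as a crude continuous gaze-point estimate."""
        attention = (1.0 - w_top_down) * bottom_up + w_top_down * top_down
        iy, ix = np.unravel_index(np.argmax(attention), attention.shape)
        return float(ix), float(iy)
    ```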

  8. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of bi-dimensional printing, which allows one to obtain a solid object from a 3D model, realized with a 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer allows one to realize, in a simple way, very complex shapes, which would be quite difficult to produce with dedicated conventional facilities. Thanks to the fact that 3D printing is obtained by superposing one layer on the others, it doesn't need any particular workflow and it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it has been necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  9. Reading the mind from eye gaze.

    PubMed

    Calder, Andrew J; Lawrence, Andrew D; Keane, Jill; Scott, Sophie K; Owen, Adrian M; Christoffels, Ingrid; Young, Andrew W

    2002-01-01

    Baron-Cohen [Mindblindness: an essay on autism and theory of mind. Cambridge, MA: MIT Press, 1997] has suggested that the interpretation of gaze plays an important role in a normal functioning theory of mind (ToM) system. Consistent with this suggestion, functional imaging research has shown that both ToM tasks and eye gaze processing engage a similar region of the posterior superior temporal sulcus (STS). However, a second brain region associated with ToM, the medial prefrontal (MPF) cortex, has not been identified by previous eye gaze studies. We discuss the methodological issues that may account for the absence of MPF activation in these experiments and present a PET study that controls for these factors. Our experiment included three conditions in which the proportions of faces gazing at, and away from, the participant, were as follows: 100% direct [0% averted], 50% direct-50% averted, and 100% horizontally averted [0% direct]. Two control conditions were also included in which the faces' gaze were averted down, or their eyes were closed. Contrasts comparing the gaze conditions with each of the control conditions revealed medial frontal involvement. Parametric analyses showed a significant linear relationship between increasing proportions of horizontally averted gaze and increased rCBF in the MPF cortex. The opposite parametric analysis (increasing proportions of direct gaze) was associated with increased rCBF in a number of areas including the superior and medial temporal gyri. Additional subtraction contrasts largely confirmed these patterns. Our results demonstrate a considerable degree of overlap between the medial frontal areas involved in eye gaze processing and theory of mind tasks. PMID:11931917

  10. Gazing at me: the importance of social meaning in understanding direct-gaze cues.

    PubMed

    de C Hamilton, Antonia F

    2016-01-19

    Direct gaze is an engaging and important social cue, but the meaning of direct gaze depends heavily on the surrounding context. This paper reviews some recent studies of direct gaze, to understand more about what neural and cognitive systems are engaged by this social cue and why. The data show that gaze can act as an arousal cue and can modulate actions, and can activate brain regions linked to theory of mind and self-related processing. However, all these results are strongly modulated by the social meaning of a gaze cue and by whether participants believe that another person is really watching them. The implications of these contextual effects and audience effects for our theories of gaze are considered. PMID:26644598

  11. Origin of hepatitis C virus genotype 3 in Africa as estimated through an evolutionary analysis of the full-length genomes of nine subtypes, including the newly sequenced 3d and 3e.

    PubMed

    Li, Chunhua; Lu, Ling; Murphy, Donald G; Negro, Francesco; Okamoto, Hiroaki

    2014-08-01

    We characterized the full-length genomes of nine hepatitis C virus genotype 3 (HCV-3) isolates: QC7, QC8, QC9, QC10, QC34, QC88, NE145, NE274 and 811. To the best of our knowledge, NE274 and NE145 were the first full-length genomes for confirming the provisionally assigned subtypes 3d and 3e, respectively, whereas 811 represented the first HCV-3 isolate that had its extreme 3' UTR terminus sequenced. Based on these full-length genomes, together with 42 references representing eight assigned subtypes and an unclassified variant of HCV-3, and 10 sequences of six other genotypes, a timescaled phylogenetic tree was reconstructed after an evolutionary analysis using a coalescent Bayesian procedure. The results indicated that subtypes 3a, 3d and 3e formed a subset with a common ancestor dated to ~202.89 [95% highest posterior density (HPD): 160.11, 264.6] years ago. The analysis of all of the HCV-3 sequences as a single lineage resulted in the dating of the divergence time to ~457.81 (95% HPD: 350.62, 587.53) years ago, whereas the common ancestor of all of the seven HCV genotypes dated to ~780.86 (95% HPD: 592.15, 1021.34) years ago. As subtype 3h and the unclassified variant were relatives, and represented the oldest HCV-3 lineages with origins in Africa and the Middle East, these findings may indicate the ancestral origin of HCV-3 in Africa. We speculate that the ancestral HCV-3 strains may have been brought to South Asia from Africa by land and/or across the sea to result in its indigenous circulation in that region. The spread was estimated to have occurred in the era after Vasco da Gama had completed his expeditions by sailing along the eastern coast of Africa to India. However, before this era, Arabians had practised slave trading from Africa to the Middle East and South Asia for centuries, which may have mediated the earliest spread of HCV-3. PMID:24795446

  12. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  13. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  14. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  15. A comparison of geometric- and regression-based mobile gaze-tracking

    PubMed Central

    Browatzki, Björn; Bülthoff, Heinrich H.; Chuang, Lewis L.

    2014-01-01

    Video-based gaze-tracking systems are typically restricted in terms of their effective tracking space. This constraint limits the use of eyetrackers in studying mobile human behavior. Here, we compare two possible approaches for estimating the gaze of participants who are free to walk in a large space whilst looking at different regions of a large display. Geometrically, we linearly combined eye-in-head rotations and head-in-world coordinates to derive a gaze vector and its intersection with a planar display, by relying on the use of a head-mounted eyetracker and body-motion tracker. Alternatively, we employed Gaussian process regression to estimate the gaze intersection directly from the input data itself. Our evaluation of both methods indicates that a regression approach can deliver comparable results to a geometric approach. The regression approach is favored, given that it has the potential for further optimization, provides confidence bounds for its gaze estimates and offers greater flexibility in its implementation. Open-source software for the methods reported here is also provided for user implementation. PMID:24782737
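
    A sketch of the geometric approach described above: the eye-in-head gaze direction is rotated into world coordinates with the head pose and the resulting ray is intersected with the display plane. Coordinate conventions and variable names are assumptions, not the released software.

    ```python
    import numpy as np

    def gaze_on_display(R_head, t_head, gaze_dir_head, plane_point, plane_normal):
        """Intersect the world-frame gaze ray with a planar display.
        R_head, t_head: head-in-world rotation (3x3) and position (3,);
        gaze_dir_head: eye-in-head gaze direction (3,);
        plane_point, plane_normal: any point on the display and its normal."""
        d = R_head @ gaze_dir_head                  # gaze direction in world frame
        o = t_head                                  # ray origin (eye position) in world frame
        denom = plane_normal @ d
        if abs(denom) < 1e-9:
            return None                             # gaze parallel to the display
        s = plane_normal @ (plane_point - o) / denom
        return o + s * d if s > 0 else None         # intersection in front of the viewer
    ```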

  16. Improvements in the Visualization of Stereoscopic 3D Imagery

    NASA Astrophysics Data System (ADS)

    Gurrieri, Luis E.

    2015-09-01

    A pleasant visualization of stereoscopic imagery must take into account factors that may produce eye strain and fatigue. Fortunately, our binocular vision system has embedded mechanisms to perceive depth for extended periods of time without producing eye fatigue; however, stereoscopic imagery may still induce visual discomfort in certain displaying scenarios. An important source of eye fatigue originates in the conflict between vergence eye movement and focusing mechanisms. Today's eye-tracking technology makes it possible to know the viewers' gaze direction; hence, 3D imagery can be dynamically corrected based on this information. In this paper, I introduce a method to improve the visualization of stereoscopic imagery on planar displays based on emulating vergence and accommodation mechanisms of binocular human vision. Unlike other methods for improving visual comfort that introduce depth distortions in the stereoscopic visual media, this technique aims to produce a gentler and more natural binocular viewing experience without distorting the original depth of the scene.
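
    The correction described above relies on knowing where the viewer's eyes converge. A common geometric construction for that 3D fixation point, offered here only as a hedged sketch (the paper's own pipeline is not reproduced), is the midpoint of the shortest segment between the two gaze rays:

    ```python
    import numpy as np

    def vergence_point(o_l, d_l, o_r, d_r):
        """3D fixation estimate from left/right gaze rays with origins o_l, o_r
        and unit directions d_l, d_r: midpoint of the closest-approach segment."""
        w0 = o_l - o_r
        a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
        p, q = d_l @ w0, d_r @ w0
        denom = a * c - b * b
        if abs(denom) < 1e-9:                       # near-parallel gaze rays
            s, t = 0.0, q / c
        else:
            s = (b * q - c * p) / denom
            t = (a * q - b * p) / denom
        return 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))
    ```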

  17. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia. PMID:26854878

  18. Interaction of perception and gaze control in autonomous vehicles

    NASA Astrophysics Data System (ADS)

    Pellkofer, Martin; Luetzeler, Michael; Dickmanns, Ernst D.

    2001-10-01

    For robust and secure behavior in natural environments, an autonomous vehicle needs an elaborate vision sensor as its main source of information. The vision sensor must be adaptable to the external situation, the mission, the capabilities of the vehicle and the knowledge about the external world accumulated up to the present time. In the EMS-Vision system, this vision sensor consists of four cameras with different focal lengths mounted on a highly dynamic pan-tilt camera head. Image processing, gaze control and behavior decision interact with each other in a closed loop. The image processing experts specify so-called regions of attention (RoAs) for each object in 3D object coordinates. These RoAs should be visible with a resolution as required by the measurement techniques applied. The behavior decision module specifies the relevance of obstacles like road segments, crossings or landmarks in the situation context. The gaze control unit takes all this information in order to plan, optimize and perform a sequence of smooth pursuits, interrupted by saccades. The sequence with the best information gain is performed. The information gain depends on the relevance of objects or object parts, the duration of smooth pursuit maneuvers, the quality of perception and the number of saccades. The functioning of the EMS-Vision system is demonstrated in a complex and scalable autonomous mission with the UBM test vehicle VAMORS.
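
    The abstract states that the chosen gaze sequence maximizes an information gain that grows with object relevance and perception quality and shrinks with the number of saccades. A toy scoring rule along those lines (the weighting is an assumption, not the EMS-Vision formula) might look like:

    ```python
    def best_gaze_sequence(candidates):
        """Each candidate is a list of (relevance, quality, n_saccades) tuples,
        one per fixated object; return the candidate with the highest score."""
        def gain(seq):
            useful = sum(r * q for r, q, _ in seq)          # relevance-weighted quality
            saccades = sum(n for _, _, n in seq)            # penalize gaze shifts
            return useful / (1 + saccades)
        return max(candidates, key=gain)
    ```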

  19. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  20. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2003-05-12

    This project is in its first full year after the combining of two previously funded projects: ''3D Code Development'' and ''Dynamic Material Properties''. The motivation behind this move was to emphasize and strengthen the ties between the experimental work and the computational model development in the materials area. The next year's activities will indicate the merging of the two efforts. The current activity is structured in two tasks. Task A, ''Simulations and Measurements'', combines all the material model development and associated numerical work with the materials-oriented experimental activities. Task B, ''ALE3D Development'', is a continuation of the non-materials related activities from the previous project.

  1. A brain-computer interface method combined with eye tracking for 3D interaction.

    PubMed

    Lee, Eui Chul; Woo, Jin Cheol; Kim, Jong Hwa; Whang, Mincheol; Park, Kang Ryoung

    2010-07-15

    With the recent increase in the number of three-dimensional (3D) applications, the need for interfaces to these applications has increased. Although the eye tracking method has been widely used as an interaction interface for hand-disabled persons, this approach cannot be used for depth directional navigation. To solve this problem, we propose a new brain computer interface (BCI) method in which the BCI and eye tracking are combined to analyze depth navigation, including selection and two-dimensional (2D) gaze direction, respectively. The proposed method is novel in the following five ways compared to previous works. First, a device to measure both the gaze direction and an electroencephalogram (EEG) pattern is proposed with the sensors needed to measure the EEG attached to a head-mounted eye tracking device. Second, the reliability of the BCI interface is verified by demonstrating that there is no difference between the real and the imaginary movements for the same work in terms of the EEG power spectrum. Third, depth control for the 3D interaction interface is implemented by an imaginary arm reaching movement. Fourth, a selection method is implemented by an imaginary hand grabbing movement. Finally, for the independent operation of gazing and the BCI, a mode selection method is proposed that measures a user's concentration by analyzing the pupil accommodation speed, which is not affected by the operation of gazing and the BCI. According to experimental results, we confirmed the feasibility of the proposed 3D interaction method using eye tracking and a BCI. PMID:20580646
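
    A toy version of the mode-selection idea above: estimate the pupil accommodation speed from a short window of pupil-diameter samples and switch between gaze mode and BCI mode when it crosses a threshold. The threshold, units and window length are assumptions, not values from the paper.

    ```python
    import numpy as np

    def select_mode(pupil_diameter, dt, speed_threshold=0.5):
        """pupil_diameter: recent samples (mm); dt: sampling interval (s).
        Returns 'bci' when the accommodation speed indicates concentration."""
        speed = np.mean(np.abs(np.diff(pupil_diameter))) / dt   # mean |d(diameter)/dt|
        return "bci" if speed < speed_threshold else "gaze"
    ```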

  2. A new neural net approach to robot 3D perception and visuo-motor coordination

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan

    1992-01-01

    A novel neural network approach to robot hand-eye coordination is presented. The approach provides a true sense of visual error servoing, redundant arm configuration control for collision avoidance, and invariant visuo-motor learning under gazing control. A 3-D perception network is introduced to represent the robot internal 3-D metric space in which visual error servoing and arm configuration control are performed. The arm kinematic network performs the bidirectional association between 3-D space arm configurations and joint angles, and enforces the legitimate arm configurations. The arm kinematic net is structured by a radial-based competitive and cooperative network with hierarchical self-organizing learning. The main goal of the present work is to demonstrate that the neural net representation of the robot 3-D perception net serves as an important intermediate functional block connecting robot eyes and arms.

  3. Infants’ Developing Understanding of Social Gaze

    PubMed Central

    Beier, Jonathan S.; Spelke, Elizabeth S.

    2012-01-01

    Young infants are sensitive to self-directed social actions, but do they appreciate the intentional, target-directed nature of such behaviors? We addressed this question by investigating infants’ understanding of social gaze in third-party interactions (N = 104). Ten-month-old infants discriminated between two people in mutual versus averted gaze, and expected a person to look at her social partner during conversation. In contrast, 9-month-old infants showed neither ability, even when provided with information that highlighted the gazer's social goals. These results indicate considerable improvement in infants’ abilities to analyze the social gaze of others towards the end of their first year, which may relate to their appreciation of gaze as both a social and goal-directed action. PMID:22224547

  4. 3-D inversion of magnetotelluric Phase Tensor

    NASA Astrophysics Data System (ADS)

    Patro, Prasanta; Uyeshima, Makoto

    2010-05-01

    Three-dimensional (3-D) inversion of magnetotelluric (MT) data has become routine practice in the MT community due to progress in algorithms for 3-D inverse problems (e.g. Mackie and Madden, 1993; Siripunvaraporn et al., 2005). While the availability of such 3-D inversion codes has increased the resolving power of MT data and improved interpretation, galvanic effects still pose difficulties in interpreting the resistivity structure obtained from MT data. In order to tackle the galvanic distortion of MT data, Caldwell et al. (2004) introduced the concept of the phase tensor. They demonstrated how the regional phase information can be retrieved from the observed impedance tensor without any assumptions about structural dimensionality, where both the near-surface inhomogeneity and the regional conductivity structure can be 3-D. We made an attempt to modify a 3-D inversion code (Siripunvaraporn et al., 2005) to directly invert the phase tensor elements. We present here the main modifications made in the sensitivity calculation and then show a few synthetic studies and an application to real data. The synthetic model study suggests that the prior model (m_0) setting is important in retrieving the true model, because the phase tensor inversion lacks an estimate of the correct induction scale length. Comparison between results from the conventional impedance inversion and the new phase tensor inversion suggests that, in spite of the presence of galvanic distortion (due to near-surface checkerboard anomalies in our case), the new inversion algorithm retrieves the regional conductivity structure reliably. We applied the new inversion to real data from the Indian subcontinent and compared the results with those from the conventional impedance inversion.
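
    For reference, the phase tensor of Caldwell et al. (2004) that the modified code inverts is Phi = X^{-1} Y for an impedance tensor Z = X + iY, a quantity unaffected by real, frequency-independent galvanic distortion of the electric field; a one-line sketch:

    ```python
    import numpy as np

    def phase_tensor(Z):
        """Phase tensor Phi = X^{-1} Y of a 2x2 complex impedance tensor Z = X + iY."""
        X, Y = Z.real, Z.imag
        return np.linalg.solve(X, Y)        # equivalent to inv(X) @ Y
    ```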

  5. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  6. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  7. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
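
    A minimal sketch of the PCA feature projection and similarity-matrix steps described above, assuming each normalized face is already flattened into a row vector (e.g. stacked XYZ coordinates); the ICP alignment, deformation and FLDA stages of the released software are not reproduced.

    ```python
    import numpy as np

    def pca_features(faces, n_components=50):
        """Project flattened, normalized faces (N, D) onto leading principal components."""
        centered = faces - faces.mean(axis=0)
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)   # rows of Vt = principal axes
        return centered @ Vt[:n_components].T

    def similarity_matrix(features):
        """Cosine similarities between projected faces, for verification-rate analysis."""
        norms = np.clip(np.linalg.norm(features, axis=1, keepdims=True), 1e-12, None)
        unit = features / norms
        return unit @ unit.T
    ```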

  8. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  9. Re-encountering individuals who previously engaged in joint gaze modulates subsequent gaze cueing.

    PubMed

    Dalmaso, Mario; Edwards, S Gareth; Bayliss, Andrew P

    2016-02-01

    We assessed the extent to which previous experience of joint gaze with people (i.e., looking toward the same object) modulates later gaze cueing of attention elicited by those individuals. Participants in Experiments 1 and 2a/b first completed a saccade/antisaccade task while a to-be-ignored face either looked at, or away from, the participants' eye movement target. Two faces always engaged in joint gaze with the participant, whereas 2 other faces never engaged in joint gaze. Then, we assessed standard gaze cueing in response to these faces to ascertain the effect of these prior interactions on subsequent social attention episodes. In Experiment 1, the face's eyes moved before the participant's target appeared, meaning that the participant always gaze-followed 2 faces and never gaze-followed 2 other faces. We found that this prior experience modulated the timecourse of subsequent gaze cueing. In Experiments 2a/b, the participant looked at the target first, then was either followed (i.e., the participant initiated joint gaze), or was not followed. These participants then showed an overall decrement of gaze cueing with individuals who had previously followed participants' eyes (Experiment 2a), an effect that was associated with autism spectrum quotient scores and modulated perceived trustworthiness of the faces (Experiment 2b). Experiment 3 demonstrated that these modulations are unlikely to be because of the association of different levels of task difficulty with particular faces. These findings suggest that establishing joint gaze with others influences subsequent social attention processes that are generally thought to be relatively insensitive to learning from prior episodes. PMID:26237618

  10. The look of love: gaze shifts and person perception.

    PubMed

    Mason, Malia F; Tatkow, Elizabeth P; Macrae, C Neil

    2005-03-01

    Gaze direction is a vital communicative channel through which people transmit information to each other. By signaling the locus of social attention, gaze cues convey information about the relative importance of objects, including other people, in the environment. For the most part, this information is communicated via patterns of gaze direction, with gaze shifts signaling changes in the objects of attention. Noting the relevance of gaze cues in social cognition, we speculated that gaze shifts may modulate people's evaluations of others. We investigated this possibility by asking participants to judge the likability (Experiment 1) and physical attractiveness (Experiment 2) of targets displaying gaze shifts indicative of attentional engagement or disengagement with the participants. As expected, person evaluation was moderated by the direction of gaze shifts, but only when the judgment under consideration was relevant to participants. We consider how and when gaze shifts may modulate person perception and its associated behavioral products. PMID:15733205

  11. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.
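
    To illustrate the implicit time integration mentioned above in the simplest possible setting, the sketch below advances 1D transient heat conduction by one backward-Euler step with fixed-temperature boundaries; it is a finite-difference toy, not TACO3D's finite-element formulation.

    ```python
    import numpy as np

    def implicit_heat_step(T, dt, dx, alpha):
        """One backward-Euler step of dT/dt = alpha * d2T/dx2 with Dirichlet ends."""
        n = T.size
        r = alpha * dt / dx**2
        A = (1 + 2 * r) * np.eye(n) - r * np.eye(n, k=1) - r * np.eye(n, k=-1)
        A[0, :], A[-1, :] = 0.0, 0.0
        A[0, 0], A[-1, -1] = 1.0, 1.0          # hold the boundary temperatures fixed
        return np.linalg.solve(A, T)
    ```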

  12. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning that is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  13. Gaze-Contingent Motor Channelling, haptic constraints and associated cognitive demand for robotic MIS.

    PubMed

    Mylonas, George P; Kwok, Ka-Wai; James, David R C; Leff, Daniel; Orihuela-Espina, Felipe; Darzi, Ara; Yang, Guang-Zhong

    2012-04-01

    The success of MIS is coupled with an increasing demand on surgeons' manual dexterity and visuomotor coordination due to the complexity of instrument manipulations. The use of master-slave surgical robots has avoided many of the drawbacks of MIS, but at the same time, has increased the physical separation between the surgeon and the patient. Tissue deformation combined with restricted workspace and visibility of an already cluttered environment can raise critical issues related to surgical precision and safety. Reconnecting the essential visuomotor sensory feedback is important for the safe practice of robot-assisted MIS procedures. This paper introduces a novel gaze-contingent framework for real-time haptic feedback and virtual fixtures by transforming visual sensory information into physical constraints that can interact with the motor sensory channel. We demonstrate how motor tracking of deforming tissue can be made more effective and accurate through the concept of Gaze-Contingent Motor Channelling. The method is also extended to 3D by introducing the concept of Gaze-Contingent Haptic Constraints where eye gaze is used to dynamically prescribe and update safety boundaries during robot-assisted MIS without prior knowledge of the soft-tissue morphology. Initial validation results on both simulated and robot assisted phantom procedures demonstrate the potential clinical value of the technique. In order to assess the associated cognitive demand of the proposed concepts, functional Near-Infrared Spectroscopy is used and preliminary results are discussed. PMID:20889367

  14. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker

    PubMed Central

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2015-01-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees. PMID:26539565
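
    The core idea above is that a sparsity-inducing penalty drives most pixel weights to zero, so only a small subset of pixels needs to be sensed. A linear, L1-penalized stand-in for the paper's multi-layer network (ISTA-style updates; the learning rate and penalty values are assumptions) is sketched below.

    ```python
    import numpy as np

    def sparse_gaze_regressor(pixels, gaze, lam=0.01, lr=1e-3, iters=2000):
        """Fit a linear map from eye-image pixels (N, P) to 2D gaze (N, 2) with an
        L1 penalty, then report which pixels end up with non-zero weight."""
        N = pixels.shape[0]
        W = np.zeros((pixels.shape[1], 2))
        for _ in range(iters):
            grad = pixels.T @ (pixels @ W - gaze) / N               # squared-error gradient
            W -= lr * grad
            W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)  # soft-threshold (prox of L1)
        active_pixels = np.flatnonzero(np.abs(W).sum(axis=1) > 0)
        return W, active_pixels
    ```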

  15. Prediction-learning in infants as a mechanism for gaze control during object exploration

    PubMed Central

    Schlesinger, Matthew; Johnson, Scott P.; Amso, Dima

    2014-01-01

    We are pursuing the hypothesis that visual exploration and learning in young infants is achieved by producing gaze-sample sequences that are sequentially predictable. Our recent analysis of infants’ gaze patterns during image free-viewing (Schlesinger and Amso, 2013) provides support for this idea. In particular, this work demonstrates that infants’ gaze samples are more easily learnable than those produced by adults, as well as those produced by three artificial-observer models. In the current study, we extend these findings to a well-studied object-perception task, by investigating 3-month-olds’ gaze patterns as they view a moving, partially occluded object. We first use infants’ gaze data from this task to produce a set of corresponding center-of-gaze (COG) sequences. Next, we generate two simulated sets of COG samples, from image-saliency and random-gaze models, respectively. Finally, we generate learnability estimates for the three sets of COG samples by presenting each as a training set to an SRN. There are two key findings. First, as predicted, infants’ COG samples from the occluded-object task are learned by a pool of simple recurrent networks faster than the samples produced by the yoked, artificial-observer models. Second, we also find that resetting activity in the recurrent layer increases the network’s prediction errors, which further implicates the presence of temporal structure in infants’ COG sequences. We conclude by relating our findings to the role of image-saliency and prediction-learning during the development of object perception. PMID:24904460
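
    A rough stand-in for the learnability estimate described above, using a fixed random recurrent layer (echo-state style) with a least-squares readout instead of the trained SRN used in the paper; lower one-step prediction error indicates a more predictable center-of-gaze sequence. Layer sizes and weight scales are assumptions.

    ```python
    import numpy as np

    def cog_learnability(cog_seq, n_hidden=50, seed=0):
        """cog_seq: (T, 2) center-of-gaze samples, assumed normalized to [0, 1].
        Returns the mean squared one-step prediction error (lower = more learnable)."""
        rng = np.random.default_rng(seed)
        W_in = rng.normal(scale=0.5, size=(n_hidden, 2))
        W_rec = rng.normal(scale=1.0 / np.sqrt(n_hidden), size=(n_hidden, n_hidden))
        h, states = np.zeros(n_hidden), []
        for x in cog_seq[:-1]:
            h = np.tanh(W_in @ x + W_rec @ h)       # recurrent state update
            states.append(h.copy())
        H, targets = np.array(states), cog_seq[1:]
        W_out, *_ = np.linalg.lstsq(H, targets, rcond=None)
        return float(np.mean((H @ W_out - targets) ** 2))
    ```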

  16. Re-Encountering Individuals Who Previously Engaged in Joint Gaze Modulates Subsequent Gaze Cueing

    ERIC Educational Resources Information Center

    Dalmaso, Mario; Edwards, S. Gareth; Bayliss, Andrew P.

    2016-01-01

    We assessed the extent to which previous experience of joint gaze with people (i.e., looking toward the same object) modulates later gaze cueing of attention elicited by those individuals. Participants in Experiments 1 and 2a/b first completed a saccade/antisaccade task while a to-be-ignored face either looked at, or away from, the participants'…

  17. Optoplasmonics: hybridization in 3D

    NASA Astrophysics Data System (ADS)

    Rosa, L.; Gervinskas, G.; Žukauskas, A.; Malinauskas, M.; Brasselet, E.; Juodkazis, S.

    2013-12-01

    Femtosecond laser fabrication has been used to make hybrid refractive and diffractive micro-optical elements in the photo-polymer SZ2080. For applications in microfluidics, axicon lenses were fabricated (both single and arrays) for generation of light intensity patterns extending through the entire depth of a typically tens-of-micrometers deep channel. Further hybridisation of an axicon with a plasmonic slot is fabricated and demonstrated numerically. Spiralling chiral grooves were inscribed into a 100-nm-thick gold coating sputtered over polymerized micro-axicon lenses, using a focused ion beam. This demonstrates the possibility of hybridisation between optical and plasmonic 3D micro-optical elements. Numerical modelling of the optical performance by the 3D-FDTD method is presented.

  18. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Koide, S.; Sakai, J.-I.; Christodoulou, D. M.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W = 4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy and so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure.

  19. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  20. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  1. 360-degree 3D profilometry

    NASA Astrophysics Data System (ADS)

    Song, Yuanhe; Zhao, Hong; Chen, Wenyi; Tan, Yushan

    1997-12-01

    This paper presents a new method for 360-degree 3D shape measurement of a turning object, in which light sectioning and phase shifting techniques are combined. A sinusoidal light field is applied to the projected light stripe, and the phase-shifting technique is used to calculate the phases of the light slit. The resulting wrapped phase distribution of the slit is then unwrapped by means of the height information obtained from the light-sectioning method, so that phase measurements with better precision can be obtained. Finally, the 3D shape data of the target are produced from the geometric relationship between the phases and the object heights. The principles of the method are discussed in detail and experimental results are presented.
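
    The phase-shifting step described above can be illustrated with a minimal numpy sketch of the standard four-step algorithm; the specific number of shifts and the height-based unwrapping used in the paper are not detailed in the abstract, so this is a generic illustration rather than the authors' implementation:

      import numpy as np

      def four_step_phase(i0, i90, i180, i270):
          # Wrapped phase from four frames of the projected sine stripe,
          # I_k = A + B*cos(phi + k*pi/2) for k = 0, 1, 2, 3.
          return np.arctan2(i270 - i90, i0 - i180)

      # phi_wrapped = four_step_phase(I0, I90, I180, I270)   # H x W arrays (assumed)
      # np.unwrap(phi_wrapped, axis=1) is a generic 1D unwrap; the paper instead
      # resolves the 2*pi ambiguity with height data from light sectioning.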

  2. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human being's history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process. PMID:26153673

  3. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drive up the manufacturing cost. Some cutting edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Object Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings where installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than other robotic hands without the actuators since they have more complex assembly processes.

  4. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  5. 3D-graphite structure

    SciTech Connect

    Belenkov, E. A.; Ali-Pasha, V. A.

    2011-01-15

    The structure of clusters of some new carbon 3D-graphite phases has been calculated using molecular-mechanics methods. It is established that the 3D-graphite polytypes α1,1, α1,3, α1,5, α2,1, α2,3, α3,1, β1,2, β1,4, β1,6, β2,1, and β3,2 consist of sp²-hybridized atoms, have hexagonal unit cells, and differ in the structure of their layers and the order of layer alternation. A possible way to experimentally synthesize the new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  6. The mediating effects of facial expression on spatial interference between gaze direction and gaze location.

    PubMed

    Jones, Steve

    2015-01-01

    Gaze direction is an important social cue that interacts with facial expression. Cañadas and Lupiáñez (2012) reported a reverse-congruency effect such that identification of gaze direction was faster when a face was presented to the left but with the eyes directed to the right, or vice versa. In two experiments, this effect is replicated and then extended to explore the relationship between this effect and facial expression. Results show that the reverse-congruency effect is replicable with speeded gaze-direction identification, and that the effect is mediated by facial expression. The reverse-congruency effect is similar for happy and angry faces, but was not found for fearful faces. Findings are discussed in relation to the similarity of processing of incongruent gaze direction and the processing of direct gaze. PMID:25832740

  7. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is a long-standing concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matrix emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  8. [Real time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is a long-standing concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matrix emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  9. 3D surface analysis and classification in neuroimaging segmentation.

    PubMed

    Zagar, Martin; Mlinarić, Hrvoje; Knezović, Josip

    2011-06-01

    This work presents new algorithms for 3D edge and corner detection used in surface extraction, and a new concept of image segmentation in neuroimaging based on multidimensional shape analysis and classification. We propose using the NIfTI standard to describe the input data, which enables interoperability with, and enhancement of, existing computing tools widely used in neuroimaging research. In the methods section we present our newly developed algorithm for 3D edge and corner detection, together with an algorithm for estimating local 3D shape. The surface of the estimated shape is analyzed and segmented according to kernel shapes. PMID:21755723
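
    As an illustration of working with the NIfTI standard mentioned above, the following is a minimal sketch (not the authors' algorithm) that loads a volume with nibabel and computes a simple 3D gradient-magnitude edge map; the file name is a placeholder:

      import nibabel as nib               # reads NIfTI volumes
      import numpy as np
      from scipy import ndimage

      vol = nib.load("scan.nii.gz").get_fdata()      # placeholder file name

      # 3D gradient magnitude as a simple edge-strength measure
      gx = ndimage.sobel(vol, axis=0)
      gy = ndimage.sobel(vol, axis=1)
      gz = ndimage.sobel(vol, axis=2)
      edges = np.sqrt(gx**2 + gy**2 + gz**2)

      # Keep the strongest responses as a crude edge/corner candidate mask
      mask = edges > np.percentile(edges, 99)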

  10. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
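
    The parameter-sweep idea can be sketched on the CPU as follows; the Gaussian filter here is only a stand-in denoiser (GD3D itself uses GPU bilateral filtering, anisotropic diffusion, and non-local means), and the noisy and reference volumes are assumed given:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def mse(a, b):
          return float(np.mean((a - b) ** 2))

      def sweep(noisy, reference, sigmas=(0.5, 1.0, 1.5, 2.0)):
          # Try each parameter value and keep the one with the lowest MSE
          # relative to the noiseless reference image.
          scored = [(mse(gaussian_filter(noisy, s), reference), s) for s in sigmas]
          return min(scored)      # (best_mse, best_sigma)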

  11. Integrated Biogeomorphological Modeling Using Delft3D

    NASA Astrophysics Data System (ADS)

    Ye, Q.; Jagers, B.

    2011-12-01

    The skill of numerical morphological models has improved significantly from the early 2D uniform, total load sediment models (with steady state or infrequent wave updates) to recent 3D hydrodynamic models with multiple suspended and bed load sediment fractions and bed stratigraphy (online coupled with waves). Although there remain many open questions within this combined field of hydro- and morphodynamics, we observe an increasing need to include biological processes in the overall dynamics. In riverine and inter-tidal environments, there is often an important influence by riparian vegetation and macrobenthos. Over the past decade more and more researchers have started to extend the simulation environment with wrapper scripts and other quick code hacks to estimate their influence on morphological development in coastal, estuarine and riverine environments. Although one can in this way quickly analyze different approaches, these research tools have generally not been designed with reuse, performance and portability in mind. We have now implemented a reusable, flexible, and efficient two-way link between the Delft3D open source framework for hydrodynamics, waves and morphology, and the water quality and ecology modules. The same link will be used for 1D, 2D and 3D modeling on networks and both structured and unstructured grids. We will describe the concepts of the overall system, and illustrate it with some first results.

  12. Atypical face gaze in autism.

    PubMed

    Trepagnier, Cheryl; Sebrechts, Marc M; Peterson, Rebecca

    2002-06-01

    An eye-tracking study of face and object recognition was conducted to clarify the character of face gaze in autistic spectrum disorders. Experimental participants were a group of individuals diagnosed with Asperger's disorder or high-functioning autistic disorder according to their medical records and confirmed by the Autism Diagnostic Interview-Revised (ADI-R). Controls were selected on the basis of age, gender, and educational level to be comparable to the experimental group. In order to maintain attentional focus, stereoscopic images were presented in a virtual reality (VR) headset in which the eye-tracking system was installed. Preliminary analyses show impairment in face recognition, in contrast with equivalent and even superior performance in object recognition among participants with autism-related diagnoses, relative to controls. Experimental participants displayed less fixation on the central face than did control-group participants. The findings, within the limitations of the small number of subjects and technical difficulties encountered in utilizing the helmet-mounted display, suggest an impairment in face processing on the part of the individuals in the experimental group. This is consistent with the hypothesis of disruption in the first months of life, a period that may be critical to typical social and cognitive development, and has important implications for selection of appropriate targets of intervention. PMID:12123243

  13. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated
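
    The opacity-filter idea described above can be illustrated with a minimal front-to-back compositing sketch over a numpy volume; the threshold and per-voxel alpha are arbitrary assumptions, and this is not the visualization software used in the study:

      import numpy as np

      def composite(volume, threshold=0.2, alpha=0.05):
          # Front-to-back compositing along axis 0; voxels whose normalized
          # value falls below the threshold are rejected (fully transparent).
          vol = (volume - volume.min()) / (np.ptp(volume) + 1e-9)
          image = np.zeros(vol.shape[1:])
          trans = np.ones(vol.shape[1:])      # accumulated transparency per ray
          for slab in vol:                    # march through the volume
              a = np.where(slab >= threshold, alpha, 0.0)
              image += trans * a * slab
              trans *= 1.0 - a
          return image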

  14. Timescales of quartz crystallization estimated from glass inclusion faceting using 3D propagation phase-contrast x-ray tomography: examples from the Bishop (California, USA) and Oruanui (Taupo Volcanic Zone, New Zealand) Tuffs

    NASA Astrophysics Data System (ADS)

    Pamukcu, A.; Gualda, G. A.; Anderson, A. T.

    2012-12-01

    Compositions of glass inclusions have long been studied for the information they provide on the evolution of magma bodies. Textures - sizes, shapes, positions - of glass inclusions have received less attention, but they can also provide important insight into magmatic processes, including the timescales over which magma bodies develop and erupt. At magmatic temperatures, initially round glass inclusions will become faceted (attain a negative crystal shape) through the process of dissolution and re-precipitation, such that the extent to which glass inclusions are faceted can be used to estimate timescales. The size and position of the inclusion within a crystal will influence how much faceting occurs: a larger inclusion will facet more slowly; an inclusion closer to the rim will have less time to facet. As a result, it is critical to properly document the size, shape, and position of glass inclusions to assess faceting timescales. Quartz is an ideal mineral to study glass inclusion faceting, as Si is the only diffusing species of concern, and Si diffusion rates are relatively well-constrained. Faceting time calculations to date (Gualda et al., 2012) relied on optical microscopy to document glass inclusions. Here we use 3D propagation phase-contrast x-ray tomography to image glass inclusions in quartz. This technique enhances inclusion edges such that images can be processed more successfully than with conventional tomography. We have developed a set of image processing tools to isolate inclusions and more accurately obtain information on the size, shape, and position of glass inclusions than with optical microscopy. We are studying glass inclusions from two giant tuffs. The Bishop Tuff is ~1000 km3 of high-silica rhyolite ash fall, ignimbrite, and intracaldera deposits erupted ~760 ka in eastern California (USA). Glass inclusions in early-erupted Bishop Tuff range from non-faceted to faceted, and faceting times determined using both optical microscopy and x

  15. Inferential modeling of 3D chromatin structure

    PubMed Central

    Wang, Siyu; Xu, Jinbo; Zeng, Jianyang

    2015-01-01

    For eukaryotic cells, the biological processes involving regulatory DNA elements play an important role in cell cycle. Understanding 3D spatial arrangements of chromosomes and revealing long-range chromatin interactions are critical to decipher these biological processes. In recent years, chromosome conformation capture (3C) related techniques have been developed to measure the interaction frequencies between long-range genome loci, which have provided a great opportunity to decode the 3D organization of the genome. In this paper, we develop a new Bayesian framework to derive the 3D architecture of a chromosome from 3C-based data. By modeling each chromosome as a polymer chain, we define the conformational energy based on our current knowledge on polymer physics and use it as prior information in the Bayesian framework. We also propose an expectation-maximization (EM) based algorithm to estimate the unknown parameters of the Bayesian model and infer an ensemble of chromatin structures based on interaction frequency data. We have validated our Bayesian inference approach through cross-validation and verified the computed chromatin conformations using the geometric constraints derived from fluorescence in situ hybridization (FISH) experiments. We have further confirmed the inferred chromatin structures using the known genetic interactions derived from other studies in the literature. Our test results have indicated that our Bayesian framework can compute an accurate ensemble of 3D chromatin conformations that best interpret the distance constraints derived from 3C-based data and also agree with other sources of geometric constraints derived from experimental evidence in the previous studies. The source code of our approach can be found in https://github.com/wangsy11/InfMod3DGen. PMID:25690896

  16. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    NASA Astrophysics Data System (ADS)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once the viewer is trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: the gaze remains focused on the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
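
    The underlying autostereogram principle (pattern repetition period modulated by depth) can be sketched on the CPU as follows; this is a simplified random-dot version, not the texture-based GPU renderer described above, and the eye separation and depth scaling are assumed values:

      import numpy as np

      def sirds(depth, eye_sep=80, depth_scale=0.3):
          # depth: H x W array scaled to 0..1 (1 = nearest); output is a binary
          # random-dot image whose repetition period encodes depth.
          rng = np.random.default_rng(0)
          h, w = depth.shape
          img = rng.integers(0, 2, size=(h, w)).astype(np.uint8)
          for y in range(h):
              for x in range(eye_sep, w):
                  sep = int(eye_sep * (1.0 - depth_scale * depth[y, x]))
                  img[y, x] = img[y, x - sep]     # link pixels seen by both eyes
          return img * 255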

  17. Visual Foraging With Fingers and Eye Gaze

    PubMed Central

    Thornton, Ian M.; Smith, Irene J.; Chetverikov, Andrey; Kristjánsson, Árni

    2016-01-01

    A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints. PMID:27433323

  18. Social orienting in gaze leading: a mechanism for shared attention.

    PubMed

    Edwards, S Gareth; Stephenson, Lisa J; Dalmaso, Mario; Bayliss, Andrew P

    2015-08-01

    Here, we report a novel social orienting response that occurs after viewing averted gaze. We show, in three experiments, that when a person looks from one location to an object, attention then shifts towards the face of an individual who has subsequently followed the person's gaze to that same object. That is, contrary to 'gaze following', attention instead orients in the opposite direction to observed gaze and towards the gazing face. The magnitude of attentional orienting towards a face that 'follows' the participant's gaze is also associated with self-reported autism-like traits. We propose that this gaze leading phenomenon implies the existence of a mechanism in the human social cognitive system for detecting when one's gaze has been followed, in order to establish 'shared attention' and maintain the ongoing interaction. PMID:26180071

  19. Eye gaze is not coded by cardinal mechanisms alone

    PubMed Central

    Cheleski, Dominic J.; Mareschal, Isabelle; Calder, Andrew J.; Clifford, Colin W. G.

    2013-01-01

    Gaze is an important social cue in regulating human and non-human interactions. In this study, we employed an adaptation paradigm to examine the mechanisms underlying the perception of another's gaze. Previous research has shown that the interleaved presentation of leftwards and rightwards gazing adaptor stimuli results in observers judging a wider range of gaze deviations as being direct. We applied a similar paradigm to examine how human observers encode oblique (e.g. upwards and to the left) directions of gaze. We presented observers with interleaved gaze adaptors and examined whether adaptation differed between congruent (adaptor and test along same axis) and incongruent conditions. We find greater adaptation in congruent conditions along cardinal (horizontal and vertical) and non-cardinal (oblique) directions suggesting gaze is not coded alone by cardinal mechanisms. Our results suggest that the functional aspects of gaze processing might parallel that of basic visual features such as orientation. PMID:23782886

  20. Attentional shift by gaze is triggered without awareness.

    PubMed

    Sato, Wataru; Okada, Takashi; Toichi, Motomi

    2007-10-01

    Reflexive attentional shift in response to another individual's gaze direction has been reported, but it remains unknown whether this process can occur subliminally. We investigated this issue using facial stimuli consisting of drawings (Experiment 1) and photographs (Experiment 2). The gaze direction was expressed by the eye gaze direction (Experiment 1), and the eye gaze and head direction (Experiment 2). The gaze cue was presented either supraliminally or subliminally in the center of the visual field, before target presentation in the periphery. The task for participants was to localize the target as soon as possible. The reaction time needed to localize the target was consistently shorter for valid than invalid gaze cues for both types of gaze cues in both subliminal and supraliminal conditions. These findings indicate that attentional shift can be triggered even without awareness in response to another individual's eye gaze or head direction. PMID:17624520

  1. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. This set currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  2. A Clean Adirondack (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.

  3. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  4. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled in to it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  5. Vacant Lander in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D image captured by the Mars Exploration Rover Opportunity's rear hazard-identification camera shows the now-empty lander that carried the rover 283 million miles to Meridiani Planum, Mars. Engineers received confirmation that Opportunity's six wheels successfully rolled off the lander and onto martian soil at 3:01 a.m. PST, January 31, 2004, on the seventh martian day, or sol, of the mission. The rover is approximately 1 meter (3 feet) in front of the lander, facing north.

  6. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration s Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  7. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human being’s history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process.

  8. 3D Printed Bionic Ears

    PubMed Central

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  9. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Christodoulou, D. M.; Koide, S.; Sakai, J.-I.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W=4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy and so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure. We also simulate jets with more realistic initial injection conditions, including a helical magnetic field and perturbed density, velocity, and internal energy, as expected to arise during jet generation. Three possible explanations for the observed variability are (i) tidal disruption of a star falling into the black hole, (ii) instabilities in the relativistic accretion disk, and (iii) jet-related processes. New results will be reported at the meeting.

  10. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  11. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human being’s history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process. PMID:26153673

  12. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real time. The data is acquired in motion, so it provides multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to a variety of factors such as the angle of incidence, the distance between the device and the subject, environmental sensor data, or other factors influencing the confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.
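
    The reliability-weighted combination of repeated measurements can be sketched as follows for a single surface point; the cosine-of-incidence-angle and distance-based weight model is an assumption for illustration, not the weighting used by the authors:

      import numpy as np

      def fuse(temps, angles_deg, distances, d_ref=0.5):
          # Confidence-weighted average of repeated thermal readings of one point.
          # Weight model (assumed): cosine of the angle of incidence, attenuated
          # by distance relative to a reference distance d_ref (meters).
          temps = np.asarray(temps, dtype=float)
          w = np.cos(np.radians(angles_deg)) * (d_ref / np.asarray(distances, float))
          w = np.clip(w, 0.0, None)
          return float(np.sum(w * temps) / np.sum(w))

      # fuse([36.8, 36.5, 37.1], angles_deg=[10, 45, 70], distances=[0.5, 0.6, 1.2])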

  13. LOTT RANCH 3D PROJECT

    SciTech Connect

    Larry Lawrence; Bruce Miller

    2004-09-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126-square-mile project does not seem monumental; however, at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed-bit format. It would have been impractical, if not impossible, to process the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp., located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved by reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed-bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution as a priority. Much of the potential resolution had been lost through the initial summing of the field data. Modern computers now being utilized have tremendous speed and storage capacities that were cost prohibitive when the data were initially processed. Software updates and capabilities offer a variety of quality control and statics resolution options, which are pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed dataset was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  14. Intermediate view synthesis for eye-gazing

    NASA Astrophysics Data System (ADS)

    Baek, Eu-Ttuem; Ho, Yo-Sung

    2015-01-01

    Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages. Among nonverbal cues, eye contact is one of the most important an individual can use. However, eye contact is lost when we use a video conferencing system: the disparity between the locations of the eyes and the camera gets in the way. The lack of eye gaze can give an unapproachable and unpleasant impression. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual position. We apply view morphing to the detected face, and synthesize the face with the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.
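
    The intermediate-view idea can be sketched as a simple blend of the two camera images, each backward-warped by half of a per-pixel vertical disparity map (assumed to be estimated elsewhere); this is a simplification of the view-morphing step described above, not the authors' pipeline:

      import numpy as np

      def midpoint_view(top, bottom, disparity):
          # top, bottom: H x W grayscale images from the two cameras;
          # disparity: assumed per-pixel vertical offset (pixels) between views.
          h, w = disparity.shape
          ys, xs = np.mgrid[0:h, 0:w]
          y_top = np.clip(ys + disparity / 2.0, 0, h - 1).astype(int)
          y_bot = np.clip(ys - disparity / 2.0, 0, h - 1).astype(int)
          # Nearest-neighbour sampling of each view at the half-shifted rows,
          # then an equal-weight blend for the virtual midpoint camera.
          return 0.5 * top[y_top, xs] + 0.5 * bottom[y_bot, xs]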

  15. A Direct Link between Gaze Perception and Social Attention

    ERIC Educational Resources Information Center

    Bayliss, Andrew P.; Bartlett, Jessica; Naughtin, Claire K.; Kritikos, Ada

    2011-01-01

    How information is exchanged between the cognitive mechanisms responsible for gaze perception and social attention is unclear. These systems could be independent; the "gaze cueing" effect could emerge from the activation of a general-purpose attentional mechanism that is ignorant of the social nature of the gaze cue. Alternatively, orienting to…

  16. Children with ASD Can Use Gaze to Map New Words

    ERIC Educational Resources Information Center

    Bean Ellawadi, Allison; McGregor, Karla K.

    2016-01-01

    Background: The conclusion that children with autism spectrum disorders (ASD) do not use eye gaze in the service of word learning is based on one-trial studies. Aims: To determine whether children with ASD come to use gaze in the service of word learning when given multiple trials with highly reliable eye-gaze cues. Methods & Procedures:…

  17. Gaze Perception Requires Focused Attention: Evidence from an Interference Task

    ERIC Educational Resources Information Center

    Burton, A. Mike; Bindemann, Markus; Langton, Stephen R. H.; Schweinberger, Stefan R.; Jenkins, Rob

    2009-01-01

    The direction of another person's gaze is difficult to ignore when presented at the center of attention. In 6 experiments, perception of unattended gaze was investigated. Participants made directional (left-right) judgments to gazing-face or pointing-hand targets, which were accompanied by a distractor face or hand. Processing of the distractor…

  18. Gaze Following Is Modulated by Expectations Regarding Others’ Action Goals

    PubMed Central

    Perez-Osorio, Jairo; Müller, Hermann J.; Wiese, Eva; Wykowska, Agnieszka

    2015-01-01

    Humans attend to social cues in order to understand and predict others’ behavior. Facial expressions and gaze direction provide valuable information to infer others’ mental states and intentions. The present study examined the mechanism of gaze following in the context of participants’ expectations about successive action steps of an observed actor. We embedded a gaze-cueing manipulation within an action scenario consisting of a sequence of naturalistic photographs. Gaze-induced orienting of attention (gaze following) was analyzed with respect to whether the gaze behavior of the observed actor was in line or not with the action-related expectations of participants (i.e., whether the actor gazed at an object that was congruent or incongruent with an overarching action goal). In Experiment 1, participants followed the gaze of the observed agent, though the gaze-cueing effect was larger when the actor looked at an action-congruent object relative to an incongruent object. Experiment 2 examined whether the pattern of effects observed in Experiment 1 was due to covert, rather than overt, attentional orienting, by requiring participants to maintain eye fixation throughout the sequence of critical photographs (corroborated by monitoring eye movements). The essential pattern of results of Experiment 1 was replicated, with the gaze-cueing effect being completely eliminated when the observed agent gazed at an action-incongruent object. Thus, our findings show that covert gaze following can be modulated by expectations that humans hold regarding successive steps of the action performed by an observed agent. PMID:26606534

  19. A 3-D SAR approach to IFSAR processing

    SciTech Connect

    DOERRY,ARMIN W.; BICKEL,DOUGLAS L.

    2000-03-01

    Interferometric SAR (IFSAR) can be shown to be a special case of 3-D SAR image formation. In fact, traditional IFSAR processing results in the equivalent of merely a super-resolved, under-sampled, 3-D SAR image. However, when approached as a 3-D SAR problem, a number of IFSAR properties and anomalies are easily explained. For example, IFSAR decorrelation with height is merely ordinary migration in 3-D SAR. Consequently, treating IFSAR as a 3-D SAR problem allows insight and development of proper motion compensation techniques and image formation operations to facilitate optimal height estimation. Furthermore, multiple antenna phase centers and baselines are easily incorporated into this formulation, providing essentially a sparse array in the elevation dimension. This paper shows the Polar Format image formation algorithm extended to 3 dimensions, and then proceeds to apply it to the IFSAR collection geometry. This suggests a more optimal reordering of the traditional IFSAR processing steps.

  20. The PRISM3D paleoenvironmental reconstruction

    USGS Publications Warehouse

    Dowsett, H.; Robinson, M.; Haywood, A.M.; Salzmann, U.; Hill, Daniel; Sohl, L.E.; Chandler, M.; Williams, Mark; Foley, K.; Stoll, D.K.

    2010-01-01

    The Pliocene Research, Interpretation and Synoptic Mapping (PRISM) paleoenvironmental reconstruction is an internally consistent and comprehensive global synthesis of a past interval of relatively warm and stable climate. It is regularly used in model studies that aim to better understand Pliocene climate, to improve model performance in future climate scenarios, and to distinguish model-dependent climate effects. The PRISM reconstruction is constantly evolving in order to incorporate additional geographic sites and environmental parameters, and is continuously refined by independent research findings. The new PRISM three dimensional (3D) reconstruction differs from previous PRISM reconstructions in that it includes a subsurface ocean temperature reconstruction, integrates geochemical sea surface temperature proxies to supplement the faunal-based temperature estimates, and uses numerical models for the first time to augment fossil data. Here we describe the components of PRISM3D and describe new findings specific to the new reconstruction. Highlights of the new PRISM3D reconstruction include removal of Hudson Bay and the Great Lakes and creation of open waterways in locations where the current bedrock elevation is less than 25m above modern sea level, due to the removal of the West Antarctic Ice Sheet and the reduction of the East Antarctic Ice Sheet. The mid-Piacenzian oceans were characterized by a reduced east-west temperature gradient in the equatorial Pacific, but PRISM3D data do not imply permanent El Niño conditions. The reduced equator-to-pole temperature gradient that characterized previous PRISM reconstructions is supported by significant displacement of vegetation belts toward the poles, is extended into the Arctic Ocean, and is confirmed by multiple proxies in PRISM3D. Arctic warmth coupled with increased dryness suggests the formation of warm and salty paleo North Atlantic Deep Water (NADW) and a more vigorous thermohaline circulation system that may

  1. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with the mobile device and can then be calibrated directly on the device using standard calibration algorithms from photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, where AndroidSfM estimates the pose of all photos by Structure-from-Motion and then uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
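
    One building block of such a Structure-from-Motion pipeline is two-view relative pose estimation; a minimal OpenCV sketch is shown below, assuming matched keypoints and known camera intrinsics (this is not the AndroidSfM code itself):

      import cv2
      import numpy as np

      def relative_pose(pts1, pts2, K):
          # Two-view relative pose from matched keypoints (N x 2 float arrays)
          # and the intrinsic matrix K -- one building block of an SfM pipeline.
          E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                            prob=0.999, threshold=1.0)
          _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
          return R, t        # rotation and unit-scale translation of camera 2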

  2. Eye-gaze driven surgical workflow segmentation.

    PubMed

    James, A; Vieira, D; Lo, B; Darzi, A; Yang, G Z

    2007-01-01

    In today's climate of clinical governance there is growing pressure on surgeons to demonstrate their competence, improve standards and reduce surgical errors. This paper presents a study on developing a novel eye-gaze driven technique for surgical assessment and workflow recovery. The proposed technique investigates the use of a Parallel Layer Perceptor (PLP) to automate the recognition of a key surgical step in a porcine laparoscopic cholecystectomy model. The classifier is eye-gaze contingent but is combined with image-based visual feature detection for improved system performance. Experimental results show that by fusing image-based instrument likelihood measures, an overall classification accuracy of 75% is achieved. PMID:18044559

  3. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm(-3)) 3D printed graphene aerogel exhibits superelasticity and high electrical conductivity. PMID:26861680

  4. Orienting in Response to Gaze and the Social Use of Gaze among Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Rombough, Adrienne; Iarocci, Grace

    2013-01-01

    Potential relations between gaze cueing, social use of gaze, and ability to follow line of sight were examined in children with autism and typically developing peers. Children with autism (mean age = 10 years) demonstrated intact gaze cueing. However, they preferred to follow arrows instead of eyes to infer mental state, and showed decreased…

  5. ShowMe3D

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  6. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  7. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm that generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction of the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step length. Our approach uses an explicit time-stepping finite-difference scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the question of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time step. We tackled this problem using two different approaches. The first one makes better use of resources for small models of dimension equal
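
    The time-stepping scheme named above (2nd order in time, 4th order in space) can be illustrated in one dimension for the acoustic wave equation; the authors' code is 3D and elastic, so this sketch only shows the structure of the explicit update:

      import numpy as np

      def step(u_prev, u_curr, c, dt, dx):
          # One explicit update of the 1D wave equation u_tt = c^2 u_xx,
          # 2nd order in time and 4th order in space (interior points only).
          lap = np.zeros_like(u_curr)
          lap[2:-2] = (-u_curr[4:] + 16 * u_curr[3:-1] - 30 * u_curr[2:-2]
                       + 16 * u_curr[1:-3] - u_curr[:-4]) / (12 * dx ** 2)
          return 2 * u_curr - u_prev + (c * dt) ** 2 * lap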

  8. 3-D target-based distributed smart camera network localization.

    PubMed

    Kassebaum, John; Bulusu, Nirupama; Feng, Wu-Chi

    2010-10-01

    For distributed smart camera networks to perform vision-based tasks such as subject recognition and tracking, every camera's position and orientation relative to a single 3-D coordinate frame must be accurately determined. In this paper, we present a new camera network localization solution that requires successively showing a 3-D feature point-rich target to all cameras; using the known geometry of the 3-D target, each camera then estimates and decomposes a projection matrix to compute its position and orientation relative to the coordinatization of the 3-D target's feature points. As each 3-D target position establishes a distinct coordinate frame, cameras that view more than one 3-D target position compute translations and rotations relating the different positions' coordinate frames and share the transform data with neighbors to facilitate realignment of all cameras to a single coordinate frame. Compared to other localization solutions that use opportunistically found visual data, our solution is more suitable for battery-powered, processing-constrained camera networks because it requires communication only to determine simultaneous target viewings and to pass transform data. Additionally, our solution requires only pairwise view overlaps of sufficient size to see the 3-D target and detect its feature points, while also giving camera positions in meaningful units. We evaluate our algorithm in both real and simulated smart camera networks. In the real network, position error is less than 1'' (one inch) when the 3-D target's feature points fill only 2.9% of the frame area. PMID:20679031
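
    A hedged sketch of the core step described above: estimating a camera's 3x4 projection matrix from known 3-D target points and their image projections with the direct linear transform, then decomposing it into camera centre, rotation and intrinsics. This is a generic textbook procedure shown for illustration, not the paper's implementation.

      import numpy as np

      def estimate_projection_matrix(X, x):
          """Direct linear transform: estimate the 3x4 projection matrix P such
          that x ~ P X, from n >= 6 known 3D target points X (n, 3) and their
          pixel coordinates x (n, 2)."""
          rows = []
          for (Xw, Yw, Zw), (u, v) in zip(X, x):
              rows.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw, -u])
              rows.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw, -v])
          _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
          return Vt[-1].reshape(3, 4)

      def camera_center_and_rotation(P):
          """Decompose P = K [R | t]: the camera centre is the null space of P,
          and K, R come from an RQ decomposition of the left 3x3 block."""
          # Camera centre: P C = 0 (homogeneous null vector).
          _, _, Vt = np.linalg.svd(P)
          C = Vt[-1]
          C = C[:3] / C[3]
          # RQ decomposition of M via QR of the reversed, transposed matrix.
          M = P[:, :3]
          rev = np.flipud(np.eye(3))
          Q, R_ = np.linalg.qr((rev @ M).T)
          K = rev @ R_.T @ rev
          R = rev @ Q.T
          # Enforce a positive diagonal of K.
          signs = np.sign(np.diag(K))
          signs[signs == 0] = 1
          S = np.diag(signs)
          K, R = K @ S, S @ R
          return C, R, K / K[2, 2]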

  9. 3D gesture recognition from serial range image

    NASA Astrophysics Data System (ADS)

    Matsui, Yasuyuki; Miyasaka, Takeo; Hirose, Makoto; Araki, Kazuo

    2001-10-01

    In this research, the recognition of gesture in 3D space is examined using serial range images obtained by a real-time 3D measurement system developed in our laboratory. Using this system, it is possible to obtain time sequences of range, intensity and color data for a moving object in real time without assigning markers to the targets. First, gestures are tracked in 2D space by calculating 2D flow vectors at each point with an ordinal optical flow estimation method, based on time sequences of the intensity data. Then, the location of each point after the 2D movement is detected on the x-y plane using the 2D flow vectors thus obtained. The depth of each point after the movement is then obtained from the range data, and 3D flow vectors are assigned to each point. Time sequences of these 3D flow vectors allow us to track the 3D movement of the target. Thus, based on time sequences of the targets' 3D flow vectors, it is possible to classify the movement of the targets using a continuous DP matching technique. This tracking of 3D movement using time sequences of 3D flow vectors may be applicable to a robust gesture recognition system.
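
    The following sketch illustrates the lifting of 2D optical-flow vectors to 3D flow vectors with the help of range images, assuming a pinhole camera with known intrinsics (fx, fy, cx, cy); the sampling and validity handling are simplified and the function is not the authors' system.

      import numpy as np

      def flow_2d_to_3d(flow_uv, depth_t, depth_t1, fx, fy, cx, cy):
          """Lift dense 2D optical flow (H, W, 2) to 3D flow vectors (H, W, 3)
          using range images at times t and t+1, assuming a pinhole camera.
          Pixels without valid depth yield NaN vectors."""
          h, w = depth_t.shape
          v, u = np.mgrid[0:h, 0:w].astype(float)

          def backproject(u_px, v_px, z):
              x = (u_px - cx) * z / fx
              y = (v_px - cy) * z / fy
              return np.stack([x, y, z], axis=-1)

          # 3D position of each pixel at time t.
          p_t = backproject(u, v, depth_t)
          # Destination pixel at t+1 according to the 2D flow; sample its depth
          # (nearest-neighbour sampling keeps the sketch simple).
          u1 = np.clip(np.rint(u + flow_uv[..., 0]), 0, w - 1).astype(int)
          v1 = np.clip(np.rint(v + flow_uv[..., 1]), 0, h - 1).astype(int)
          z1 = depth_t1[v1, u1]
          p_t1 = backproject(u1.astype(float), v1.astype(float), z1)
          flow_3d = p_t1 - p_t
          flow_3d[(depth_t <= 0) | (z1 <= 0)] = np.nan
          return flow_3d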

  10. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  11. Spatial interference between gaze direction and gaze location: a study on the eye contact effect.

    PubMed

    Cañadas, Elena; Lupiáñez, Juan

    2012-01-01

    Perceived gaze in faces is an important social cue that influences spatial orienting of attention. In three experiments, we examined whether the social relevance of gaze direction modulated spatial interference in response selection, using three different stimuli: faces, isolated eyes, and symbolic eyes (Experiments 1, 2, and 3, respectively). Each experiment employed a variant of the spatial Stroop paradigm in which face location and gaze direction were put into conflict. Results showed a reverse congruency effect between face location to the right or left of fixation and gaze direction only for stimuli with a social meaning to participants (Experiments 1 and 2). The opposite was observed for the nonsocial stimuli used in Experiment 3. Results are explained as facilitation in response to eye contact. PMID:22530703

  12. How the gaze of others influences object processing.

    PubMed

    Becchio, Cristina; Bertone, Cesare; Castiello, Umberto

    2008-07-01

    An aspect of gaze processing, which so far has been given little attention, is the influence that intentional gaze processing can have on object processing. Converging evidence from behavioural neuroscience and developmental psychology strongly suggests that objects falling under the gaze of others acquire properties that they would not display if not looked at. Specifically, observing another person gazing at an object enriches that object of motor, affective and status properties that go beyond its chemical or physical structure. A conceptual analysis of available evidence leads to the conclusion that gaze has the potency to transfer to the object the intentionality of the person looking at it. PMID:18555735

  13. Improving usability for video analysis using gaze-based interaction

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Peinsipp-Byma, Elisabeth; Klaus, Edmund

    2012-06-01

    In this contribution, we propose the use of eye tracking technology to support video analysts. To reduce workload, we implemented two new interaction techniques as a substitute for mouse pointing: gaze-based selection of a video of interest from a set of video streams, and gaze-based selection of moving targets in videos. First results show that the multi-modal interaction technique gaze + key press allows the selection of fast moving objects in a more effective way. Moreover, we discuss further application possibilities like gaze behavior analysis to measure the analyst's fatigue, or analysis of the gaze behavior of expert analysts to instruct novices.

  14. Stabilization of gaze during circular locomotion in light. I. Compensatory head and eye nystagmus in the running monkey

    NASA Technical Reports Server (NTRS)

    Solomon, D.; Cohen, B.

    1992-01-01

    1. A rhesus and cynomolgus monkey were trained to run around the perimeter of a circular platform in light. We call this "circular locomotion" because forward motion had an angular component. Head and body velocity in space were recorded with angular rate sensors and eye movements with electrooculography (EOG). From these measurements we derived signals related to the angular velocity of the eyes in the head (Eh), of the head on the body (Hb), of gaze on the body (Gb), of the body in space (Bs), of gaze in space (Gs), and of the gain of gaze (Gb/Bs). 2. The monkeys had continuous compensatory nystagmus of the head and eyes while running, which stabilized Gs during the slow phases. The eyes established and maintained compensatory gaze velocities at the beginning and end of the slow phases. The head contributed to gaze velocity during the middle of the slow phases. Slow phase Gb was as high as 250 degrees/s, and targets were fixed for gaze angles as large as 90-140 degrees. 3. Properties of the visual surround affected both the gain and strategy of gaze compensation in the one monkey tested. Gains of Eh ranged from 0.3 to 1.1 during compensatory gaze nystagmus. Gains of Hb varied around 0.3 (0.2-0.7), building to a maximum as Eh dropped while running past sectors of interest. Consistent with predictions, gaze gains varied from below to above unity, when translational and angular body movements with regard to the target were in opposite or the same directions, respectively. 4. Gaze moved in saccadic shifts in the direction of running during quick phases. Most head quick phases were small, and at times the head only paused during an eye quick phase. Eye quick phases were larger, ranging up to 60 degrees. This is larger than quick phases during passive rotation or saccades made with the head fixed. 5. These data indicate that head and eye nystagmus are natural phenomena that support gaze compensation during locomotion. Despite differential utilization of the head and
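
    For readers unfamiliar with the signal nomenclature, the sketch below derives the gaze signals from the recorded angular velocities using simple additive relations (Gb = Eh + Hb, Gs = Gb + Bs, gain = Gb/Bs); these relations are an assumption made for illustration, and the sign conventions may differ from those used in the study.

      import numpy as np

      def gaze_signals(eye_in_head, head_on_body, body_in_space):
          """Derive gaze velocity signals from angular-velocity recordings,
          using the additive relations implied by the nomenclature above
          (all in deg/s)."""
          Eh = np.asarray(eye_in_head, dtype=float)
          Hb = np.asarray(head_on_body, dtype=float)
          Bs = np.asarray(body_in_space, dtype=float)
          Gb = Eh + Hb                      # gaze velocity on the body
          Gs = Gb + Bs                      # gaze velocity in space
          with np.errstate(divide="ignore", invalid="ignore"):
              gain = np.where(np.abs(Bs) > 1e-6, Gb / Bs, np.nan)  # gain of gaze
          return Gb, Gs, gain

      # During a compensatory slow phase, Gb roughly cancels Bs, so Gs is near
      # zero and the gain is near unity in magnitude (sign depends on convention).
      Gb, Gs, gain = gaze_signals(eye_in_head=[-150.0], head_on_body=[-100.0],
                                  body_in_space=[250.0])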

  15. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  16. Humans Have an Expectation That Gaze Is Directed Toward Them

    PubMed Central

    Mareschal, Isabelle; Calder, Andrew J.; Clifford, Colin W.G.

    2013-01-01

    Summary Many animals use cues from another animal’s gaze to help distinguish friend from foe [1–3]. In humans, the direction of someone’s gaze provides insight into their focus of interest and state of mind [4] and there is increasing evidence linking abnormal gaze behaviors to clinical conditions such as schizophrenia and autism [5–11]. This fundamental role of another’s gaze is buoyed by the discovery of specific brain areas dedicated to encoding directions of gaze in faces [12–14]. Surprisingly, however, very little is known about how others’ direction of gaze is interpreted. Here we apply a Bayesian framework that has been successfully applied to sensory and motor domains [15–19] to show that humans have a prior expectation that other people’s gaze is directed toward them. This expectation dominates perception when there is high uncertainty, such as at night or when the other person is wearing sunglasses. We presented participants with synthetic faces viewed under high and low levels of uncertainty and manipulated the faces by adding noise to the eyes. Then, we asked the participants to judge relative gaze directions. We found that all participants systematically perceived the noisy gaze as being directed more toward them. This suggests that the adult nervous system internally represents a prior for gaze and highlights the importance of experience in developing our interpretation of another’s gaze. PMID:23562265
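
    A worked toy example of the Bayesian idea described above: a Gaussian prior centred on direct gaze is combined with a Gaussian likelihood around the sensed direction, so that under high sensory uncertainty the posterior shifts toward "looking at me". The prior width and noise levels are illustrative, not the authors' fitted parameters.

      import numpy as np

      def posterior_gaze(sensed_deg, sensory_sd, prior_mean=0.0, prior_sd=10.0):
          """Posterior mean/sd for perceived gaze direction when a Gaussian
          likelihood (sensed direction, sensory noise) is combined with a
          Gaussian prior centred on direct gaze (0 deg)."""
          w_prior = 1.0 / prior_sd ** 2
          w_sense = 1.0 / sensory_sd ** 2
          mean = (w_prior * prior_mean + w_sense * sensed_deg) / (w_prior + w_sense)
          sd = np.sqrt(1.0 / (w_prior + w_sense))
          return mean, sd

      # A 10-deg averted gaze is perceived as more direct when the eyes are noisy.
      print(posterior_gaze(sensed_deg=10.0, sensory_sd=2.0))   # low uncertainty
      print(posterior_gaze(sensed_deg=10.0, sensory_sd=20.0))  # high uncertainty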

  17. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  18. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydın

    2013-10-01

    In robotics and augmented reality (AR) applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease jitter. Among the many pose-estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, whereas trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimating optical flow from the intensity and shape-index-map data of the 3-D point cloud, significantly increases both 2-D and 3-D tracking performance. The proposed method requires neither manual initialization of the pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed both objectively, via error metrics, and subjectively, for the rendered scenes. PMID:23955795
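
    A minimal extended Kalman filter skeleton is sketched below to illustrate how measurements from two sensor streams can be fused in a single filter; the state, measurement models and Jacobians are left to the caller, and nothing here reproduces the specific filter design of the paper.

      import numpy as np

      class SimpleEKF:
          """Minimal extended Kalman filter skeleton: a hypothetical illustration
          of fusing two measurement streams (e.g. vision and depth), not the
          filter design used in the paper."""

          def __init__(self, x0, P0, Q):
              self.x, self.P, self.Q = x0, P0, Q

          def predict(self, f, F):
              # f: state transition function, F: its Jacobian at the current state.
              self.x = f(self.x)
              self.P = F @ self.P @ F.T + self.Q

          def update(self, z, h, H, R):
              # h: measurement function, H: its Jacobian, R: measurement noise.
              y = z - h(self.x)
              S = H @ self.P @ H.T + R
              K = self.P @ H.T @ np.linalg.inv(S)
              self.x = self.x + K @ y
              self.P = (np.eye(len(self.x)) - K @ H) @ self.P

      # Fusion amounts to calling update() once per sensor within the same time
      # step, each with that sensor's own measurement model, e.g.
      # ekf.update(z_vision, h_vis, H_vis, R_vis) followed by
      # ekf.update(z_depth, h_depth, H_depth, R_depth).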

  19. NIF Ignition Target 3D Point Design

    SciTech Connect

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraums that is optimized so that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) configuration-controlled input files; (2) a common file for 2D and 3D and for different types of capsules (symcap, etc.); and (3) the ability to obtain target dimensions, laser pulse, and diagnostics settings automatically from the NIF Campaign Management Tool. We are using 3D Hydra calculations to investigate different problems: (1) intrinsic 3D asymmetry; (2) tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) synthetic diagnostics.

  20. Helping Children Think: Gaze Aversion and Teaching

    ERIC Educational Resources Information Center

    Phelps, Fiona G.; Doherty-Sneddon, Gwyneth; Warnock, Hannah

    2006-01-01

    Looking away from an interlocutor's face during demanding cognitive activity can help adults answer challenging arithmetic and verbal-reasoning questions (Glenberg, Schroeder, & Robertson, 1998). However, such "gaze aversion" (GA) is poorly applied by 5-year-old school children (Doherty-Sneddon, Bruce, Bonner, Longbotham, & Doyle, 2002). In…

  1. The Spectator's Dancing Gaze in Moulin Rouge!

    ERIC Educational Resources Information Center

    Parfitt, Clare

    2005-01-01

    This paper examines the ways in which the choreography of "Moulin Rouge!" offers a range of gazes to the audience through the perspectives of both the characters and the camera itself. Various gendered and imperial/colonial power relationships that occur within the narrative are heightened in the choreography by referring to discourses inscribed…

  2. Focusing the Gaze: Teacher Interrogation of Practice

    ERIC Educational Resources Information Center

    Nayler, Jennifer M.; Keddie, Amanda

    2007-01-01

    Within an Australian context of diminishing opportunities for equitable educational outcomes, this paper calls for teacher engagement in a "politics of resistance" through their focused gaze in relation to the ways in which they are positioned in their everyday practice. Our belief is that the resultant knowledge might equip teachers to see more…

  3. Infants' Developing Understanding of Social Gaze

    ERIC Educational Resources Information Center

    Beier, Jonathan S.; Spelke, Elizabeth S.

    2012-01-01

    Young infants are sensitive to self-directed social actions, but do they appreciate the intentional, target-directed nature of such behaviors? The authors addressed this question by investigating infants' understanding of social gaze in third-party interactions (N = 104). Ten-month-old infants discriminated between 2 people in mutual versus…

  4. Gaze Patterns and Audiovisual Speech Enhancement

    ERIC Educational Resources Information Center

    Yi, Astrid; Wong, Willy; Eizenman, Moshe

    2013-01-01

    Purpose: In this study, the authors sought to quantify the relationships between speech intelligibility (perception) and gaze patterns under different auditory-visual conditions. Method: Eleven subjects listened to low-context sentences spoken by a single talker while viewing the face of one or more talkers on a computer display. Subjects either…

  5. 3D multiplexed immunoplasmonics microscopy.

    PubMed

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-21

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K(+) channel subunit KV1.1) on human cancer CD44(+) EGFR(+) KV1.1(+) MDA-MB-231 cells and reference CD44(-) EGFR(-) KV1.1(+) 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third

  6. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  7. Memory and visual search in naturalistic 2D and 3D environments.

    PubMed

    Li, Chia-Ling; Aivar, M Pilar; Kit, Dmitry M; Tong, Matthew H; Hayhoe, Mary M

    2016-06-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  8. 3D Kitaev spin liquids

    NASA Astrophysics Data System (ADS)

    Hermanns, Maria

    The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.

  9. Yogi the rock - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Yogi, a rock taller than rover Sojourner, is the subject of this image, taken in stereo by the deployed Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. The soil in the foreground has been the location of multiple soil mechanics experiments performed by Sojourner's cleated wheels. Pathfinder scientists were able to control the force inflicted on the soil beneath the rover's wheels, giving them insight into the soil's mechanical properties. The soil mechanics experiments were conducted after this image was taken.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  10. 3D ultrafast laser scanner

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, A.; Goda, K.; Wang, C.; Fard, A.; Adam, J.; Gossett, D. R.; Ayazi, A.; Sollier, E.; Malik, O.; Chen, E.; Liu, Y.; Brown, R.; Sarkhosh, N.; Di Carlo, D.; Jalali, B.

    2013-03-01

    Laser scanners are essential for scientific research, manufacturing, defense, and medical practice. Unfortunately, the speed of conventional laser scanners (e.g., galvanometric mirrors and acousto-optic deflectors) often falls short for many applications, resulting in motion blur and failure to capture fast transient information. Here, we present a novel type of laser scanner that offers roughly three orders of magnitude higher scan rates than conventional methods. Our laser scanner, which we refer to as the hybrid dispersion laser scanner, performs inertia-free laser scanning by dispersing a train of broadband pulses both temporally and spatially. More specifically, each broadband pulse is temporally processed by time-stretch dispersive Fourier transform and further dispersed into space by one or more diffractive elements such as prisms and gratings. As a proof-of-principle demonstration, we perform 1D line scans at a record high scan rate of 91 MHz, and 2D raster scans and 3D volumetric scans at an unprecedented scan rate of 105 kHz. The method holds promise for a broad range of scientific, industrial, and biomedical applications. To show the utility of our method, we demonstrate imaging, nanometer-resolved surface vibrometry, and high-precision flow cytometry with real-time throughput that conventional laser scanners cannot offer due to their low scan rates.

  11. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For this particular investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived by applying photogrammetric processing software simply to images from the community, without visiting the site.

  12. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  13. Gaze interaction in UAS video exploitation

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Brüstle, Stefan; Heinze, Norbert; Peinsipp-Byma, Elisabeth

    2013-05-01

    A frequently occurring interaction task in UAS video exploitation is the marking or selection of objects of interest in the video. If an object of interest is visually detected by the image analyst, its selection/marking for further exploitation, documentation and communication with the team is a necessary task. Today object selection is usually performed by mouse interaction. As all objects in the video move due to sensor motion, object selection can be rather challenging, especially if strong and fast ego-motions are present, e.g., with small airborne sensor platforms. In addition, objects of interest are sometimes visible too briefly to be selected by the analyst using mouse interaction. To address this issue we propose an eye tracker as input device for object selection. As the eye tracker continuously provides the gaze position of the analyst on the monitor, it is intuitive to use the gaze position for pointing at an object. The selection is then actuated by pressing a button. We integrated this gaze-based "gaze + key press" object selection into Fraunhofer IOSB's exploitation station ABUL using a Tobii X60 eye tracker and a standard keyboard for the button press. Representing the object selections in a spatial relational database, ABUL enables the image analyst to efficiently query the video data in a post-processing step for selected objects of interest with respect to their geographical and other properties. An experimental evaluation is presented, comparing gaze-based interaction with mouse interaction in the context of object selection in UAS videos.
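
    The sketch below illustrates the "gaze + key press" idea in its simplest form: when the key is pressed, the tracked object nearest to the current gaze point is selected, with a tolerance that absorbs eye-tracker noise. The data structures are hypothetical and no Tobii or ABUL APIs are shown.

      from dataclasses import dataclass
      from typing import List, Optional, Tuple

      @dataclass
      class TrackedObject:
          object_id: int
          bbox: Tuple[float, float, float, float]  # x, y, width, height in pixels

      def select_object_at_gaze(gaze_xy: Tuple[float, float],
                                objects: List[TrackedObject],
                                max_distance_px: float = 60.0) -> Optional[TrackedObject]:
          """On a key press, return the tracked object whose bounding-box centre
          is closest to the current gaze point, within a tolerance that absorbs
          eye-tracker noise. Returns None if nothing is close enough."""
          gx, gy = gaze_xy
          best, best_d = None, float("inf")
          for obj in objects:
              x, y, w, h = obj.bbox
              cx, cy = x + w / 2.0, y + h / 2.0
              d = ((gx - cx) ** 2 + (gy - cy) ** 2) ** 0.5
              if d < best_d:
                  best, best_d = obj, d
          return best if best_d <= max_distance_px else None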

  14. IFSAR processing for 3D target reconstruction

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Moses, Randolph L.

    2005-05-01

    In this paper we investigate the use of interferometric synthetic aperture radar (IFSAR) processing for the 3D reconstruction of radar targets. A major source of reconstruction error is induced by multiple scattering responses in a resolution cell, giving rise to height errors. We present a model for multiple scattering centers and analyze the errors that result using traditional IFSAR height estimation. We present a simple geometric model that characterizes the height error and suggests tests for detecting or reducing this error. We consider the use of image magnitude difference as a test statistic to detect multiple scattering responses in a resolution cell, and we analyze the resulting height error reduction and hypothesis test performance using this statistic. Finally, we consider phase linearity test statistics when three or more IFSAR images are available. Examples using synthetic Xpatch backhoe imagery are presented.
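
    A toy numerical illustration of the error mechanism analyzed above: when two scatterers at different heights share a resolution cell, the interferometric phase of the coherent sum yields a biased single-scatterer height estimate. The height-to-phase scale factor below is an illustrative constant, not a real system parameter.

      import numpy as np

      def ifsar_height_estimate(responses, heights, k_z):
          """Toy single-baseline IFSAR model: each scatterer contributes a
          complex return whose interferometric phase is k_z * height. The
          conventional height estimate is the phase of the coherent sum divided
          by k_z, which is biased when more than one scatterer occupies the
          resolution cell."""
          interferogram = sum(a * np.exp(1j * k_z * h)
                              for a, h in zip(responses, heights))
          return np.angle(interferogram) / k_z

      k_z = 0.05  # illustrative height-to-phase scale factor (rad per metre)
      print(ifsar_height_estimate([1.0], [10.0], k_z))            # single scatterer: ~10 m
      print(ifsar_height_estimate([1.0, 0.8], [10.0, 2.0], k_z))  # two scatterers: biased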

  15. 3-D Cavern Enlargement Analyses

    SciTech Connect

    EHGARTNER, BRIAN L.; SOBOLIK, STEVEN R.

    2002-03-01

    Three-dimensional finite element analyses simulate the mechanical response of enlarging existing caverns at the Strategic Petroleum Reserve (SPR). The caverns are located in Gulf Coast salt domes and are enlarged by leaching during oil drawdowns as fresh water is injected to displace the crude oil from the caverns. The current criteria adopted by the SPR limit cavern usage to 5 drawdowns (leaches). As a base case, 5 leaches were modeled over a 25-year period to roughly double the volume of a 19-cavern field. Thirteen additional leaches were then simulated until caverns approached coalescence. The cavern field approximated the geometries and geologic properties found at the West Hackberry site. This enabled comparison of data collected over nearly 20 years with analysis predictions. The analyses closely predicted the measured surface subsidence and cavern closure rates as inferred from historic wellhead pressures. This provided the necessary assurance that the model displacements, strains, and stresses are accurate. However, the cavern field has not yet experienced the large-scale drawdowns being simulated. Should they occur in the future, code predictions should be validated against actual field behavior at that time. The simulations were performed using JAS3D, a three-dimensional finite element analysis code for nonlinear quasi-static solids. The results examine the impacts of leaching and cavern workovers, where internal cavern pressures are reduced, on surface subsidence, well integrity, and cavern stability. The results suggest that the current limit of 5 oil drawdowns may be extended, with some mitigative action required on the wells and, later on, on surface structures due to subsidence strains. The predicted stress state in the salt shows damage starting to occur after 15 drawdowns, with significant failure occurring at the 16th drawdown, well beyond the current limit of 5 drawdowns.

  16. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as to promote 3D photography not only among scientists but also among amateurs. As this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, by means of a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms are dealt with. To advise on the best-suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, even claiming completeness, has been carried out as the result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast and color, recall the early stage of the invention of photography.

  17. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  18. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  19. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.

  20. 3D Elastic Seismic Wave Propagation Code

    1998-09-23

    E3D is capable of simulating seismic wave propagation in a 3D heterogeneous earth. Seismic waves are initiated by earthquake, explosive, and/or other sources. These waves propagate through a 3D geologic model, and are simulated as synthetic seismograms or other graphical output.

  1. 3D Printing and Its Urologic Applications.

    PubMed

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  2. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-01

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization should be handled carefully. In our approach, polarimetric 3D integral images are generated using the Maximum Likelihood Estimation and subsequently reconstructed by means of a Total Variation Denoising filter. In this way, polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon counting integral imaging. PMID:25836861
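
    For reference, the sketch below computes the linear Stokes parameters and the degree of linear polarization from four polarizer-analyzed intensity images; the Maximum Likelihood Estimation and Total Variation processing needed for photon-sparse data, which is the paper's contribution, is not reproduced here.

      import numpy as np

      def stokes_and_dolp(I0, I45, I90, I135, eps=1e-12):
          """Linear Stokes parameters and degree of linear polarization from
          four intensity images taken behind polarizers at 0, 45, 90 and 135
          degrees."""
          S0 = I0 + I90
          S1 = I0 - I90
          S2 = I45 - I135
          dolp = np.sqrt(S1 ** 2 + S2 ** 2) / np.maximum(S0, eps)
          return S0, S1, S2, dolp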

  3. Automated 3D vascular segmentation in CT hepatic venography

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Lucidarme, Olivier; Preteux, Francoise

    2005-08-01

    In the framework of preoperative evaluation of the hepatic venous anatomy in living-donor liver transplantation or oncologic resections, this paper proposes an automated approach for the 3D segmentation of the liver vascular structure from 3D CT hepatic venography data. The developed segmentation approach takes into account the specificities of the anatomical structures in terms of spatial location, connectivity and morphometric properties. It implements basic and advanced morphological operators (closing, geodesic dilation, gray-level reconstruction, sup-constrained connection cost) in mono- and multi-resolution filtering schemes in order to achieve an automated 3D reconstruction of the opacified hepatic vessels. A thorough investigation of the venous anatomy, including morphometric parameter estimation, is then possible via computer-vision 3D rendering, interaction and navigation capabilities.
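
    The sketch below illustrates one ingredient named above, gray-level reconstruction, used here as an h-dome style enhancement of bright vessel-like structures; it is a simplified stand-in for the paper's mono- and multi-resolution pipeline and assumes SciPy and scikit-image are available.

      import numpy as np
      from scipy import ndimage
      from skimage.morphology import reconstruction

      def bright_vessel_enhancement(volume, h=0.1, closing_size=3):
          """Suppress background and enhance bright tubular structures in a CT
          venography volume with a gray-level closing followed by an h-dome
          style reconstruction by dilation."""
          closed = ndimage.grey_closing(volume, size=(closing_size,) * 3)
          seed = closed - h                      # lower the image by h ...
          rec = reconstruction(seed, closed, method="dilation")
          return closed - rec                    # ... peaks higher than h remain

      # Example on a synthetic volume containing one bright "vessel" line.
      vol = np.zeros((32, 64, 64), dtype=float)
      vol[16, 32, 10:54] = 1.0
      vol += 0.05 * np.random.rand(*vol.shape)
      enhanced = bright_vessel_enhancement(vol)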

  4. Working memory load disrupts gaze-cued orienting of attention

    PubMed Central

    Bobak, Anna K.; Langton, Stephen R. H.

    2015-01-01

    A large body of work has shown that a perceived gaze shift produces a shift in a viewer’s spatial attention in the direction of the seen gaze. A controversial issue surrounds the extent to which this gaze-cued orienting effect is stimulus-driven, or is under a degree of top-down control. In two experiments we show that the gaze-cued orienting effect is disrupted by a concurrent task that has been shown to place high demands on executive resources: random number generation (RNG). In Experiment 1 participants were faster to locate targets that appeared in gaze-cued locations relative to targets that appeared in locations opposite to those indicated by the gaze shifts, while simultaneously and continuously reciting aloud the digits 1–9 in order; however, this gaze-cueing effect was eliminated when participants continuously recited the same digits in a random order. RNG was also found to interfere with gaze-cued orienting in Experiment 2 where participants performed a speeded letter identification response. Together, these data suggest that gaze-cued orienting is actually under top-down control. We argue that top-down signals sustain a goal to shift attention in response to gazes, such that orienting ordinarily occurs when they are perceived; however, the goal cannot always be maintained when concurrent, multiple, competing goals are simultaneously active in working memory. PMID:26379587

  5. Gaze fixation improves the stability of expert juggling.

    PubMed

    Dessing, Joost C; Rey, Frédéric P; Beek, Peter J

    2012-02-01

    Novice and expert jugglers employ different visuomotor strategies: whereas novices look at the balls around their zeniths, experts tend to fixate their gaze at a central location within the pattern (so-called gaze-through). A gaze-through strategy may reflect visuomotor parsimony, i.e., the use of simpler visuomotor (oculomotor and/or attentional) strategies as afforded by superior tossing accuracy and error corrections. In addition, the more stable gaze during a gaze-through strategy may result in more accurate movement planning by providing a stable base for gaze-centered neural coding of ball motion and movement plans or for shifts in attention. To determine whether a stable gaze might indeed have such beneficial effects on juggling, we examined juggling variability during 3-ball cascade juggling with and without constrained gaze fixation (at various depths) in expert performers (n = 5). Novice jugglers were included (n = 5) for comparison, even though our predictions pertained specifically to expert juggling. We indeed observed that experts, but not novices, juggled significantly less variable when fixating, compared to unconstrained viewing. Thus, while visuomotor parsimony might still contribute to the emergence of a gaze-through strategy, this study highlights an additional role for improved movement planning. This role may be engendered by gaze-centered coding and/or attentional control mechanisms in the brain. PMID:22143871

  6. Working memory load disrupts gaze-cued orienting of attention.

    PubMed

    Bobak, Anna K; Langton, Stephen R H

    2015-01-01

    A large body of work has shown that a perceived gaze shift produces a shift in a viewer's spatial attention in the direction of the seen gaze. A controversial issue surrounds the extent to which this gaze-cued orienting effect is stimulus-driven, or is under a degree of top-down control. In two experiments we show that the gaze-cued orienting effect is disrupted by a concurrent task that has been shown to place high demands on executive resources: random number generation (RNG). In Experiment 1 participants were faster to locate targets that appeared in gaze-cued locations relative to targets that appeared in locations opposite to those indicated by the gaze shifts, while simultaneously and continuously reciting aloud the digits 1-9 in order; however, this gaze-cueing effect was eliminated when participants continuously recited the same digits in a random order. RNG was also found to interfere with gaze-cued orienting in Experiment 2 where participants performed a speeded letter identification response. Together, these data suggest that gaze-cued orienting is actually under top-down control. We argue that top-down signals sustain a goal to shift attention in response to gazes, such that orienting ordinarily occurs when they are perceived; however, the goal cannot always be maintained when concurrent, multiple, competing goals are simultaneously active in working memory. PMID:26379587

  7. Look together: analyzing gaze coordination with epistemic network analysis

    PubMed Central

    Andrist, Sean; Collier, Wesley; Gleicher, Michael; Mutlu, Bilge; Shaffer, David

    2015-01-01

    When conversing and collaborating in everyday situations, people naturally and interactively align their behaviors with each other across various communication channels, including speech, gesture, posture, and gaze. Having access to a partner's referential gaze behavior has been shown to be particularly important in achieving collaborative outcomes, but the process in which people's gaze behaviors unfold over the course of an interaction and become tightly coordinated is not well understood. In this paper, we present work to develop a deeper and more nuanced understanding of coordinated referential gaze in collaborating dyads. We recruited 13 dyads to participate in a collaborative sandwich-making task and used dual mobile eye tracking to synchronously record each participant's gaze behavior. We used a relatively new analysis technique—epistemic network analysis—to jointly model the gaze behaviors of both conversational participants. In this analysis, network nodes represent gaze targets for each participant, and edge strengths convey the likelihood of simultaneous gaze to the connected target nodes during a given time-slice. We divided collaborative task sequences into discrete phases to examine how the networks of shared gaze evolved over longer time windows. We conducted three separate analyses of the data to reveal (1) properties and patterns of how gaze coordination unfolds throughout an interaction sequence, (2) optimal time lags of gaze alignment within a dyad at different phases of the interaction, and (3) differences in gaze coordination patterns for interaction sequences that lead to breakdowns and repairs. In addition to contributing to the growing body of knowledge on the coordination of gaze behaviors in joint activities, this work has implications for the design of future technologies that engage in situated interactions with human users. PMID:26257677

  8. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Cañada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky) and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  9. The Esri 3D city information model

    NASA Astrophysics Data System (ADS)

    Reitz, T.; Schubiger-Banz, S.

    2014-02-01

    With residential and commercial space becoming increasingly scarce, cities are going vertical. Managing urban environments in 3D is an increasingly important and complex undertaking. To help solve this problem, Esri has released the ArcGIS for 3D Cities solution. The ArcGIS for 3D Cities solution provides the information model, tools and apps for creating, analyzing and maintaining a 3D city using the ArcGIS platform. This paper presents an overview of the 3D City Information Model and some sample use cases.

  10. Multigrid calculations of 3-D turbulent viscous flows

    NASA Technical Reports Server (NTRS)

    Yokota, Jeffrey W.

    1989-01-01

    Convergence properties of a multigrid algorithm, developed to calculate compressible viscous flows, are analyzed by a vector sequence eigenvalue estimate. The full 3-D Reynolds-averaged Navier-Stokes equations are integrated by an implicit multigrid scheme while a k-epsilon turbulence model is solved, uncoupled from the flow equations. Estimates of the eigenvalue structure for both single and multigrid calculations are compared in an attempt to analyze the process as well as the results of the multigrid technique. The flow through an annular turbine is used to illustrate the scheme's ability to calculate complex 3-D flows.
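
    A simple sketch of a vector-sequence eigenvalue estimate: the dominant eigenvalue (asymptotic convergence factor) of a stationary iteration is approximated by ratios of successive difference norms. The iteration below is a synthetic stand-in, not the multigrid Navier-Stokes solver described above.

      import numpy as np

      def dominant_eigenvalue_estimate(iterates):
          """Estimate the dominant eigenvalue (asymptotic convergence factor) of
          a stationary iteration from a sequence of solution vectors, using
          ratios of successive difference norms."""
          diffs = [np.linalg.norm(b - a) for a, b in zip(iterates[:-1], iterates[1:])]
          ratios = [d1 / d0 for d0, d1 in zip(diffs[:-1], diffs[1:]) if d0 > 0]
          return ratios[-1] if ratios else float("nan")

      # Example: a contractive iteration x_{k+1} = M x_k + c converges with a
      # rate governed by the spectral radius of M.
      rng = np.random.default_rng(0)
      M = 0.6 * np.diag(rng.random(50))
      c = rng.random(50)
      xs = [rng.random(50)]
      for _ in range(30):
          xs.append(M @ xs[-1] + c)
      print(dominant_eigenvalue_estimate(xs))  # approaches the largest diagonal of M (~0.6)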

  11. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data are still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, optimizing the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel and advanced technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar arrangement of four infrared LEDs with known, exact positions, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips either uses the Z-value of a 3D volume or computes depth information from a 2D image to provide a real 3D experience without the need for special glasses. Within this paper we present a new framework for the manipulation and annotation of medical landmarks directly in a three-dimensional volume.
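
    The pose-estimation step mentioned above can be sketched with a standard PnP solver: given the known 3D LED geometry and the tracked image positions, the rotation and translation of the device are recovered. The intrinsics and LED coordinates below are placeholders, not calibrated values for the actual hardware.

      import numpy as np
      import cv2

      # Known 3D positions of four non-coplanar infrared LEDs (mm, device frame).
      led_points_3d = np.array([[0.0, 0.0, 0.0],
                                [80.0, 0.0, 0.0],
                                [0.0, 60.0, 0.0],
                                [40.0, 30.0, 25.0]], dtype=np.float64)

      # Approximate pinhole intrinsics for the IR camera; focal length and
      # principal point are assumptions, not calibrated device values.
      camera_matrix = np.array([[1300.0, 0.0, 512.0],
                                [0.0, 1300.0, 384.0],
                                [0.0, 0.0, 1.0]])
      dist_coeffs = np.zeros(5)

      # Simulate the tracked 2D LED positions for a known device pose, then
      # recover that pose with a PnP solver (EPnP copes with four non-coplanar points).
      true_rvec = np.array([[0.1], [0.2], [0.0]])
      true_tvec = np.array([[30.0], [-20.0], [600.0]])
      led_points_2d, _ = cv2.projectPoints(led_points_3d, true_rvec, true_tvec,
                                           camera_matrix, dist_coeffs)

      ok, rvec, tvec = cv2.solvePnP(led_points_3d, led_points_2d.reshape(-1, 2),
                                    camera_matrix, dist_coeffs,
                                    flags=cv2.SOLVEPNP_EPNP)
      rotation_matrix, _ = cv2.Rodrigues(rvec)  # device orientation relative to the camera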

  12. A 3D Level Set Method for Microwave Breast Imaging

    PubMed Central

    Colgan, Timothy J.; Hagness, Susan C.; Van Veen, Barry D.

    2015-01-01

    Objective: Conventional inverse-scattering algorithms for microwave breast imaging result in moderate resolution images with blurred boundaries between tissues. Recent 2D numerical microwave imaging studies demonstrate that the use of a level set method preserves dielectric boundaries, resulting in a more accurate, higher resolution reconstruction of the dielectric properties distribution. Previously proposed level set algorithms are computationally expensive and thus impractical in 3D. In this paper we present a computationally tractable 3D microwave imaging algorithm based on level sets. Methods: We reduce the computational cost of the level set method using a Jacobian matrix, rather than an adjoint method, to calculate Fréchet derivatives. We demonstrate the feasibility of 3D imaging using simulated array measurements from 3D numerical breast phantoms. We evaluate performance by comparing full 3D reconstructions to those from a conventional microwave imaging technique. We also quantitatively assess the efficacy of our algorithm in evaluating breast density. Results: Our reconstructions of 3D numerical breast phantoms improve upon those of a conventional microwave imaging technique. The density estimates from our level set algorithm are more accurate than those of conventional microwave imaging, and the accuracy is greater than that reported for mammographic density estimation. Conclusion: Our level set method leads to a feasible level of computational complexity for full 3D imaging, and reconstructs the heterogeneous dielectric properties distribution of the breast more accurately than conventional microwave imaging methods. Significance: 3D microwave breast imaging using a level set method is a promising low-cost, non-ionizing alternative to current breast imaging techniques. PMID:26011863

  13. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, the final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  14. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  15. Minimizing eyestrain on a liquid crystal display considering gaze direction and visual field of view

    NASA Astrophysics Data System (ADS)

    Lee, Won Oh; Heo, Hwan; Lee, Eui Chul; Park, Kang Ryoung

    2013-07-01

    Recently, it has become necessary to evaluate the performance of display devices in terms of human factors. To meet this requirement, several studies have been conducted to measure the eyestrain of users watching display devices. However, these studies were limited in that they did not consider precise human visual information. Therefore, a new eyestrain measurement method for a liquid crystal display (LCD) is proposed that considers a user's gaze direction and visual field of view. Our study is different in the following four ways. First, a user's gaze position is estimated using an eyeglass-type eye-image capturing device. Second, we propose a new eye foveation model based on a wavelet transform, considering the gaze position and the gaze detection error of a user. Third, three video adjustment factors, namely the variance of hue (VH), edge information, and motion information, are extracted from the displayed images to which the eye foveation models are applied. Fourth, the relationship between eyestrain and the three video adjustment factors is investigated. Experimental results show that a decrease of the VH value in a display induces a decrease in eyestrain. In addition, increased edge and motion components induce a reduction in eyestrain.

  16. From gaze cueing to dual eye-tracking: novel approaches to investigate the neural correlates of gaze in social interaction.

    PubMed

    Pfeiffer, Ulrich J; Vogeley, Kai; Schilbach, Leonhard

    2013-12-01

    Tracking eye movements provides easy access to cognitive processes involved in visual and sensorimotor processing. More recently, the underlying neural mechanisms have been examined by combining eye-tracking and functional neuroimaging methods. Apart from extracting visual information, gaze also serves important functions in social interactions. As a deictic cue, gaze can be used to direct the attention of another person to an object. Conversely, by following other persons' gaze we gain access to their attentional focus, which is essential for understanding their mental states. Social gaze has therefore been studied extensively to understand the social brain. In this endeavor, gaze has mostly been investigated from an observational perspective using static displays of faces and eyes. However, there is growing consensus that observational paradigms are insufficient for an understanding of the neural mechanisms of social gaze behavior, which typically involve active engagement in social interactions. Recent methodological advances have allowed increasing ecological validity by studying gaze in face-to-face encounters in real time. Such improvements include interactions with virtual agents in gaze-contingent eye-tracking paradigms, live interactions via video feeds, and dual eye-tracking in two-person setups. These novel approaches can be used to analyze brain activity related to social gaze behavior. This review introduces these methodologies and discusses recent findings on the behavioral functions and neural mechanisms of gaze processing in social interaction. PMID:23928088

  17. Pipe3D, a pipeline to analyze Integral Field Spectroscopy Data: I. New fitting philosophy of FIT3D

    NASA Astrophysics Data System (ADS)

    Sánchez, S. F.; Pérez, E.; Sánchez-Blázquez, P.; González, J. J.; Rosales-Ortega, F. F.; Cano-Díaz, M.; López-Cobá, C.; Marino, R. A.; Gil de Paz, A.; Mollá, M.; López-Sánchez, A. R.; Ascasibar, Y.; Barrera-Ballesteros, J.

    2016-04-01

    We present an improved version of FIT3D, a fitting tool for the analysis of the spectroscopic properties of the stellar populations and the ionized gas derived from moderate resolution spectra of galaxies. This tool was developed to analyze integral field spectroscopy data and it is the basis of Pipe3D, a pipeline used in the analysis of CALIFA, MaNGA, and SAMI data. We describe the philosophy and each step of the fitting procedure. We present an extensive set of simulations in order to estimate the precision and accuracy of the derived parameters for the stellar populations and the ionized gas. We report on the results of those simulations. Finally, we compare the results of the analysis using FIT3D with those provided by other widely used packages, and we find that the parameters derived by FIT3D are fully compatible with those derived using these other tools.

  18. Fully 3D refraction correction dosimetry system

    NASA Astrophysics Data System (ADS)

    Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan

    2016-02-01

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using conventional algebraic reconstruction (ART) and refractive index corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched
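
    For readers unfamiliar with the algebraic reconstruction technique (ART) that ART-rc extends, the sketch below shows plain ART (a Kaczmarz-style row-action update) in Python. It is only a generic illustration of the baseline method, without the refraction correction that is the contribution of this work; the tiny ray matrix is invented for the example.

    ```python
    import numpy as np

    def art_reconstruct(A, b, n_iters=200, relax=0.5):
        """Plain ART/Kaczmarz: sweep over rays, projecting the current image
        estimate onto the hyperplane of each ray-sum measurement.
        A[i, :] holds the intersection lengths of ray i with each voxel,
        b[i] the measured projection value along ray i."""
        x = np.zeros(A.shape[1])
        row_norms = np.sum(A * A, axis=1)
        for _ in range(n_iters):
            for i in range(A.shape[0]):
                if row_norms[i] > 0:
                    residual = b[i] - A[i] @ x
                    x += relax * residual / row_norms[i] * A[i]
        return x

    # Tiny consistent example: 4 rays through a 2x2 "volume" (4 voxels).
    A = np.array([[1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [1, 0, 1, 0],
                  [0, 1, 1, 0]], dtype=float)
    x_true = np.array([1.0, 0.0, 0.5, 2.0])
    print(art_reconstruct(A, A @ x_true).round(2))   # -> approx. [1. 0. 0.5 2.]
    ```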

  19. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using conventional algebraic reconstruction (ART) and refractive index corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched

  20. Gaze perception in social anxiety and social anxiety disorder

    PubMed Central

    Schulze, Lars; Renneberg, Babette; Lobmaier, Janek S.

    2013-01-01

    Clinical observations suggest abnormal gaze perception to be an important indicator of social anxiety disorder (SAD). Experimental research has so far paid relatively little attention to the study of gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze directions and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed. PMID:24379776

  1. 3D kinematics using dual quaternions: theory and applications in neuroscience

    PubMed Central

    Leclercq, Guillaume; Lefèvre, Philippe; Blohm, Gunnar

    2013-01-01

    In behavioral neuroscience, many experiments are developed in 1 or 2 spatial dimensions, but when scientists tackle problems in 3 dimensions (3D), they often face problems or new challenges. Results obtained for lower dimensions are not always extendable to 3D. In motor planning of eye, gaze or arm movements, or in sensorimotor transformation problems, the 3D kinematics of external (stimuli) or internal (body parts) objects must often be considered: how can the 3D position and orientation of these objects be described and linked together? We describe how dual quaternions provide a convenient way to describe 3D kinematics for position only (point transformation) or for combined position and orientation (through line transformation), easily modeling rotations, translations, screw motions, or combinations of these. We also derive expressions for the velocities of points and lines as well as the transformation velocities. Then, we apply these tools to a motor planning task for manual tracking and to the modeling of forward and inverse kinematics of a seven-dof three-link arm to demonstrate the value of dual quaternions as a tool for building models for these kinds of applications. PMID:23443667
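
    As a concrete illustration of the dual-quaternion machinery described above, here is a minimal Python sketch (not the authors' implementation): a unit dual quaternion is built from a rotation and a translation, and then applied to a 3D point. The helper names are mine.

    ```python
    import numpy as np

    def qmul(a, b):
        """Hamilton product of two quaternions (w, x, y, z)."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def qconj(q):
        return np.array([q[0], -q[1], -q[2], -q[3]])

    def dq_from_rt(axis, angle, t):
        """Unit dual quaternion (qr, qd) for a rotation about `axis` by `angle`
        followed by a translation `t` (a rigid/screw motion)."""
        axis = np.asarray(axis, float) / np.linalg.norm(axis)
        qr = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
        qd = 0.5 * qmul(np.concatenate(([0.0], t)), qr)   # qd = (1/2) t * qr
        return qr, qd

    def dq_transform_point(qr, qd, p):
        """Apply the rigid transform encoded by (qr, qd) to the point p."""
        t = 2.0 * qmul(qd, qconj(qr))[1:]                 # recover translation
        p_rot = qmul(qmul(qr, np.concatenate(([0.0], p))), qconj(qr))[1:]
        return p_rot + t

    qr, qd = dq_from_rt(axis=[0, 0, 1], angle=np.pi / 2, t=[1.0, 2.0, 3.0])
    print(dq_transform_point(qr, qd, np.array([1.0, 0.0, 0.0])))  # -> ~[1. 3. 3.]
    ```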

  2. 3D Inverse problem: Seawater intrusions

    NASA Astrophysics Data System (ADS)

    Steklova, K.; Haber, E.

    2013-12-01

    Modeling of seawater intrusions (SWI) is challenging as it involves solving the governing equations for variable density flow, multiple time scales and varying boundary conditions. Due to the nonlinearity of the equations and the large aquifer domains, 3D computations are a costly process, particularly when solving the inverse SWI problem. In addition, head and concentration measurements are difficult to obtain due to mixing, the saline wedge location is sensitive to aquifer topography, and there is general uncertainty in initial and boundary conditions and parameters. Some of these complications can be overcome by using indirect geophysical data alongside standard groundwater measurements; however, the inverse problem is usually simplified, e.g. by zonation of the parameters based on geological information, steady state substitution of the unknown initial conditions, decoupling the equations or reducing the number of unknown parameters by covariance analysis. In our work we present a discretization of the flow and solute mass balance equations for variable-density groundwater (GW) flow. A finite difference scheme is used to solve the pressure equation and a semi-Lagrangian method for the solute transport equation. In this way we are able to choose an arbitrarily large time step without losing stability, up to an accuracy requirement coming from the coupled character of the variable density flow equations. We derive analytical sensitivities of the GW model for parameters related to the porous media properties and also the initial solute distribution. Analytically derived sensitivities reduce the computational cost of the inverse problem, but also give insight into maximizing the information in the collected data. If geophysical data are available, this also enables simultaneous calibration in a coupled hydrogeophysical framework. The 3D inverse problem was tested on artificial time dependent data for pressure and solute content coming from a GW forward model and/or geophysical forward model. Two
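
    The semi-Lagrangian transport step mentioned above can be illustrated with a toy 1D example (a generic sketch under simplifying assumptions: a periodic domain, linear interpolation, and a prescribed velocity; it is not the authors' 3D coupled solver). Each grid node traces its characteristic backwards over the time step and interpolates the concentration at the departure point, which is what removes the CFL restriction on the time step.

    ```python
    import numpy as np

    def semi_lagrangian_step(c, u, dt, dx):
        """One semi-Lagrangian advection step for concentration c on a periodic
        1D grid with per-node velocity u: trace the characteristic back from
        each node and linearly interpolate c at the departure point."""
        n = c.size
        x = np.arange(n) * dx
        x_dep = (x - u * dt) % (n * dx)          # departure points (periodic)
        i0 = np.floor(x_dep / dx).astype(int) % n
        i1 = (i0 + 1) % n
        w = (x_dep - i0 * dx) / dx               # linear interpolation weight
        return (1 - w) * c[i0] + w * c[i1]

    # Advect a Gaussian pulse with a time step far above the explicit CFL limit.
    n, dx, dt = 200, 1.0, 5.0
    c = np.exp(-0.5 * ((np.arange(n) * dx - 50.0) / 5.0) ** 2)
    u = np.full(n, 0.8)
    for _ in range(20):
        c = semi_lagrangian_step(c, u, dt, dx)
    print("peak location:", np.argmax(c) * dx)   # ~ 50 + 0.8*5*20 = 130
    ```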

  3. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods involve only image processing or array processing, which is achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors, with the unconformities as constraints, to compute a flattened image in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space
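
    The linked data structure for fault surfaces is described above only at a high level. The following Python sketch is a hypothetical rendering of the idea (the class and field names are mine): each fault sample is tied to exactly one image sample and keeps links to its neighbors along the surface, instead of an explicit triangle or quad mesh.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FaultSample:
        """One fault sample, tied to exactly one image sample (i1, i2, i3)."""
        i1: int                  # inline index of the image sample
        i2: int                  # crossline index
        i3: int                  # time/depth index
        likelihood: float = 0.0  # fault likelihood at this sample
        slip: tuple = (0.0, 0.0, 0.0)   # fault slip vector, filled in later
        # Links to neighboring samples on the same fault surface.
        above: Optional["FaultSample"] = None
        below: Optional["FaultSample"] = None
        left:  Optional["FaultSample"] = None
        right: Optional["FaultSample"] = None

    def vertical_trace(top: FaultSample):
        """Walk down one linked column of samples (e.g. to accumulate slip)."""
        s = top
        while s is not None:
            yield s
            s = s.below

    a = FaultSample(10, 20, 5, likelihood=0.9)
    b = FaultSample(10, 20, 6, likelihood=0.8)
    a.below, b.above = b, a
    print([(s.i3, s.likelihood) for s in vertical_trace(a)])
    ```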

  4. 3-D Technology Approaches for Biological Ecologies

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Austin, Robert; U. S-China Physical-Oncology Sciences Alliance (PS-OA) Team

    Constructing three-dimensional (3-D) landscapes is an unavoidable issue in the in-depth study of biological ecologies, because at whatever scale in nature, all ecosystems are composed of complex 3-D environments and biological behaviors. If a 3-D technology could help complex ecosystems be built easily and mimic the in vivo microenvironment realistically with flexible environmental controls, it would be a powerful thrust to assist researchers in their explorations. For years, we have been utilizing and developing different technologies for constructing 3-D micro landscapes for in vitro biophysics studies. Here, I will review our past efforts, including probing cancer cell invasiveness with 3-D silicon-based Tepuis, constructing 3-D microenvironments for cell invasion and metastasis through polydimethylsiloxane (PDMS) soft lithography, as well as explorations of optimized stenting positions for coronary bifurcation disease with 3-D wax printing and the latest home-designed 3-D bio-printer. Although 3-D technology is currently considered not mature enough for arbitrary 3-D micro-ecological models with easy design and fabrication, I hope that through this talk the audience will be able to sense its significance and the predictable breakthroughs in the near future. This work was supported by the State Key Development Program for Basic Research of China (Grant No. 2013CB837200), the National Natural Science Foundation of China (Grant No. 11474345) and the Beijing Natural Science Foundation (Grant No. 7154221).

  5. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations that describe reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated porous media. RT3D was developed from the single-species transport code MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D also uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in the groundwater head distribution. This report presents a set of tutorial problems that are designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by the RT3D code, and later the code can be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported into GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials described below provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.

  6. Multiple footprint stereo algorithms for 3D display content generation

    NASA Astrophysics Data System (ADS)

    Boughorbel, Faysal

    2007-02-01

    This research focuses on the conversion of stereoscopic video material into an image + depth format which is suitable for rendering on the multiview auto-stereoscopic displays of Philips. The recent interest shown by the movie industry in 3D has significantly increased the availability of stereo material. In this context the conversion from stereo to the input formats of 3D displays becomes an important task. In this paper we present a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. We characterize the various matching windows and we devise a robust strategy for extracting high quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that simultaneously employs the available depth estimates in a small local neighborhood while ensuring correct depth discontinuities by the inclusion of image constraints. The resulting high-quality image-aligned depth maps proved an excellent match with our 3D displays.
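
    A minimal sketch of the multiple-footprint idea is given below (assumptions: rectified grayscale images, SSD block matching, interior pixels only, and a simple median over the per-footprint candidates; the paper's surface-filtering strategy with image constraints is more elaborate). Each matching-window size produces its own disparity candidate for a pixel, and the candidates are then combined robustly.

    ```python
    import numpy as np

    def disparity_candidates(left, right, x, y, max_disp, windows=(3, 7, 15)):
        """Return one disparity candidate per matching-window footprint for the
        pixel (x, y) of the left image, using SSD block matching."""
        candidates = []
        for w in windows:
            h = w // 2
            patch_l = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
            best_d, best_cost = 0, np.inf
            for d in range(0, min(max_disp, x - h) + 1):
                patch_r = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
                cost = np.sum((patch_l - patch_r) ** 2)
                if cost < best_cost:
                    best_d, best_cost = d, cost
            candidates.append(best_d)
        return candidates

    def robust_disparity(candidates):
        """A simple robust combination of the per-footprint candidates."""
        return float(np.median(candidates))
    ```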

  7. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  8. Depth discrimination from occlusions in 3D clutter.

    PubMed

    Langer, Michael S; Zheng, Haomin; Rezvankhah, Shayan

    2016-09-01

    Objects such as trees, shrubs, and tall grass consist of thousands of small surfaces that are distributed over a three-dimensional (3D) volume. To perceive the depth of surfaces within 3D clutter, a visual system can use binocular stereo and motion parallax. However, such parallax cues are less reliable in 3D clutter because surfaces tend to be partly occluded. Occlusions provide depth information, but it is unknown whether visual systems use occlusion cues to aid depth perception in 3D clutter, as previous studies have addressed occlusions for simple scene geometries only. Here, we present a set of depth discrimination experiments that examine depth from occlusion cues in 3D clutter, and how these cues interact with stereo and motion parallax. We identify two probabilistic occlusion cues. The first is based on the fraction of an object that is visible. The second is based on the depth range of the occluders. We show that human observers use both of these occlusion cues. We also define ideal observers that are based on these occlusion cues. Human observer performance is close to ideal using the visibility cue but far from ideal using the range cue. A key reason for the latter is that the range cue depends on depth estimation of the clutter itself which is unreliable. Our results provide new fundamental constraints on the depth information that is available from occlusions in 3D clutter, and how the occlusion cues are combined with binocular stereo and motion parallax cues. PMID:27618514

  9. Low Complexity Mode Decision for 3D-HEVC

    PubMed Central

    Li, Nana; Gan, Yong

    2014-01-01

    High efficiency video coding (HEVC)-based 3D video coding (3D-HEVC), developed by the Joint Collaborative Team on 3D Video Coding (JCT-3V) for multiview video and depth maps, is an extension of the HEVC standard. In the test model of 3D-HEVC, variable coding unit (CU) size decision and disparity estimation (DE) are introduced to achieve the highest coding efficiency, at the cost of very high computational complexity. In this paper, a fast mode decision algorithm based on variable-size CU and DE is proposed to reduce 3D-HEVC computational complexity. The basic idea of the method is to utilize the correlation between the depth map and motion activity to identify the prediction regions where variable-size CU and DE are actually needed, and to enable them only in those regions. Experimental results show that the proposed algorithm can save about 43% of the average computational complexity of 3D-HEVC while maintaining almost the same rate-distortion (RD) performance. PMID:25254237
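
    The decision rule is only summarized qualitatively above; the following Python sketch illustrates the idea (the thresholds and helper names are hypothetical and not taken from the 3D-HEVC test model): when the co-located depth block is smooth and motion activity is low, the expensive variable-size CU search and disparity estimation are skipped.

    ```python
    import numpy as np

    def fast_mode_decision(depth_block, motion_activity,
                           depth_var_thresh=25.0, motion_thresh=0.5):
        """Decide whether variable-size CU splitting and disparity estimation (DE)
        are worth evaluating for a block, based on the correlation between depth
        map smoothness and motion activity described above."""
        depth_variance = float(np.var(depth_block))
        if depth_variance < depth_var_thresh and motion_activity < motion_thresh:
            # Homogeneous depth and low motion: encode with the largest CU only
            # and skip DE, saving most of the mode-decision complexity.
            return {"try_cu_splitting": False, "try_disparity_estimation": False}
        # Otherwise fall back to the full rate-distortion search.
        return {"try_cu_splitting": True, "try_disparity_estimation": True}
    ```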

  10. Three-dimensional eye-head coordination during gaze saccades in the primate.

    PubMed

    Crawford, J D; Ceylan, M Z; Klier, E M; Guitton, D

    1999-04-01

    The purpose of this investigation was to describe the neural constraints on three-dimensional (3-D) orientations of the eye in space (Es), head in space (Hs), and eye in head (Eh) during visual fixations in the monkey and the control strategies used to implement these constraints during head-free gaze saccades. Dual scleral search coil signals were used to compute 3-D orientation quaternions, two-dimensional (2-D) direction vectors, and 3-D angular velocity vectors for both the eye and head in three monkeys during the following visual tasks: radial to/from center, repetitive horizontal, nonrepetitive oblique, random (wide 2-D range), and random with pin-hole goggles. Although 2-D gaze direction (of Es) was controlled more tightly than the contributing 2-D Hs and Eh components, the torsional standard deviation of Es was greater (mean 3.55 degrees ) than Hs (3.10 degrees ), which in turn was greater than Eh (1.87 degrees ) during random fixations. Thus the 3-D Es range appeared to be the byproduct of Hs and Eh constraints, resulting in a pseudoplanar Es range that was twisted (in orthogonal coordinates) like the zero torsion range of Fick coordinates. The Hs fixation range was similarly Fick-like, whereas the Eh fixation range was quasiplanar. The latter Eh range was maintained through exquisite saccade/slow phase coordination, i.e., during each head movement, multiple anticipatory saccades drove the eye torsionally out of the planar range such that subsequent slow phases drove the eye back toward the fixation range. The Fick-like Hs constraint was maintained by the following strategies: first, during purely vertical/horizontal movements, the head rotated about constantly oriented axes that closely resembled physical Fick gimbals, i.e., about head-fixed horizontal axes and space-fixed vertical axes, respectively (although in 1 animal, the latter constraint was relaxed during repetitive horizontal movements, allowing for trajectory optimization). However, during large

  11. 3D Dynamic Echocardiography with a Digitizer

    NASA Astrophysics Data System (ADS)

    Oshiro, Osamu; Matani, Ayumu; Chihara, Kunihiro

    1998-05-01

    In this paper, a three-dimensional (3D) dynamic ultrasound (US) imaging system is described, in which a US brightness-mode (B-mode) image triggered by the R-wave of the electrocardiogram (ECG) was obtained with an ultrasound diagnostic device while the location and orientation of the US probe were simultaneously measured with a 3D digitizer. The obtained B-mode image was then projected onto a virtual 3D space with the proposed interpolation algorithm using a Gaussian operator. Furthermore, a 3D image was presented on a cathode ray tube (CRT) and stored in virtual reality modeling language (VRML). We performed an experiment to reconstruct a 3D heart image in systole using this system. The experimental results indicate that the system enables the visualization of the 3D and internal structure of a heart viewed from any angle and has potential for use in dynamic imaging, intraoperative ultrasonography and tele-medicine.
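
    A minimal sketch of the interpolation step (Gaussian-weighted splatting of tracked B-mode pixels into a voxel grid, followed by normalization) is shown below. The grid size, kernel width and pose handling are illustrative assumptions, not details taken from the paper.

    ```python
    import numpy as np

    def splat_bmode_pixels(points, values, grid_shape, voxel_size, sigma=1.0):
        """Accumulate B-mode pixel values (already transformed into 3D using the
        digitizer pose) into a voxel grid with Gaussian weights, then normalize."""
        acc = np.zeros(grid_shape)
        wsum = np.zeros(grid_shape)
        radius = int(np.ceil(2 * sigma))                 # kernel support in voxels
        for p, v in zip(points, values):
            c = np.round(p / voxel_size).astype(int)     # nearest voxel center
            lo = np.maximum(c - radius, 0)
            hi = np.minimum(c + radius + 1, grid_shape)
            for i in range(lo[0], hi[0]):
                for j in range(lo[1], hi[1]):
                    for k in range(lo[2], hi[2]):
                        d2 = np.sum((np.array([i, j, k]) * voxel_size - p) ** 2)
                        w = np.exp(-d2 / (2 * (sigma * voxel_size) ** 2))
                        acc[i, j, k] += w * v
                        wsum[i, j, k] += w
        return np.where(wsum > 0, acc / np.maximum(wsum, 1e-12), 0.0)
    ```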

  12. 3D Stratigraphic Modeling of Central Aachen

    NASA Astrophysics Data System (ADS)

    Dong, M.; Neukum, C.; Azzam, R.; Hu, H.

    2010-05-01

    x-, y-, and z-coordinates, down-hole depth, and stratigraphic information are available. 4) We grouped the stratigraphic units into four main layers based on an analysis of the geological setting of the modeling area. The stratigraphic units extend from the Quaternary, Cretaceous and Carboniferous to the Devonian. In order to facilitate the determination of each unit's boundaries, a series of standard codes was used to integrate data with different descriptive attributes. 5) The Quaternary and Cretaceous units are characterized by subhorizontal layers. Kriging interpolation was applied to the borehole data in order to estimate the data distribution and surface relief of the layers. 6) The Carboniferous and Devonian units are folded. The lack of software support for simulating folds, together with the shallow depth of the boreholes and cross-sections, constrained the determination of the geological boundaries. A strategy of digitizing the fold surfaces from cross-sections and establishing them as inclined strata was followed. The modeling was subdivided into two steps. The first step consisted of importing data into the modeling software. The second step involved the construction of subhorizontal layers and folds, which were constrained by geological maps, cross-sections and outcrops. The construction of the 3D stratigraphic model is of high relevance to further simulation and application, such as 1) lithological modeling; 2) answering simple questions such as "At which unit is the water table?" and calculating the volume of groundwater storage during assessment of aquifer vulnerability to contamination; and 3) assigning geotechnical properties to grid cells and providing them for user-required applications. Acknowledgements: Borehole data were kindly provided by the Municipality of Aachen. References: 1. Janet T. Watt, Jonathan M.G. Glen, David A. John and David A. Ponce (2007) Three-dimensional geologic model of the northern Nevada rift and the Beowawe geothermal system, north-central Nevada. Geosphere, v. 3

  13. Gis-Based Smart Cartography Using 3d Modeling

    NASA Astrophysics Data System (ADS)

    Malinverni, E. S.; Tassetti, A. N.

    2013-08-01

    3D City Models have evolved into important tools for urban decision processes and information systems, especially in planning, simulation, analysis, documentation and heritage management. On the other hand, existing numerical cartography currently in use is often not suitable for GIS because it is not geometrically and topologically correctly structured. The research aim is to structure and organize numeric cartography in 3D for GIS and turn it into CityGML-standardized features. The work is framed around a first phase of methodological analysis aimed at identifying which existing standards (such as ISO and OGC rules) can be used to improve the quality requirements of a cartographic structure. Subsequently, starting from these technical specifications, the translation into formal content was investigated, using proprietary interchange software (SketchUp), to support guideline implementations for generating a GIS3D structured in GML3. A test three-dimensional numerical cartography (scale 1:500, generated from range data captured by a 3D laser scanner) was therefore prepared, tested for quality according to the aforementioned standards and edited when and where necessary. CAD files and shapefiles are converted into a final 3D model (Google SketchUp model) and then exported into a 3D city model (CityGML LoD1/LoD2). The GIS3D structure has been managed in a GIS environment to run further spatial analyses and energy performance estimates, which are not achievable in a 2D environment. In particular, geometric building parameters (footprint, volume, etc.) are computed and building envelope thermal characteristics are derived from them. Lastly, a simulation is carried out dealing with asbestos and home renovation charges to show how the built 3D city model can support municipal managers with risk diagnosis of the present situation and the development of strategies for sustainable redevelopment.

  14. Following Gaze: Gaze-Following Behavior as a Window into Social Cognition

    PubMed Central

    Shepherd, Stephen V.

    2010-01-01

    In general, individuals look where they attend and next intend to act. Many animals, including our own species, use observed gaze as a deictic (“pointing”) cue to guide behavior. Among humans, these responses are reflexive and pervasive: they arise within a fraction of a second, act independently of task relevance, and appear to undergird our initial development of language and theory of mind. Human and nonhuman animals appear to share basic gaze-following behaviors, suggesting the foundations of human social cognition may also be present in nonhuman brains. PMID:20428494

  15. 3D Scientific Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  16. Ultrafine particle emissions from desktop 3D printers

    NASA Astrophysics Data System (ADS)

    Stephens, Brent; Azimi, Parham; El Orch, Zeineb; Ramos, Tiffanie

    2013-11-01

    The development of low-cost desktop versions of three-dimensional (3D) printers has made these devices widely accessible for rapid prototyping and small-scale manufacturing in home and office settings. Many desktop 3D printers rely on heated thermoplastic extrusion and deposition, which is a process that has been shown to have significant aerosol emissions in industrial environments. However, we are not aware of any data on particle emissions from commercially available desktop 3D printers. Therefore, we report on measurements of size-resolved and total ultrafine particle (UFP) concentrations resulting from the operation of two types of commercially available desktop 3D printers inside a commercial office space. We also estimate size-resolved (11.5 nm-116 nm) and total UFP (<100 nm) emission rates and compare them to emission rates from other desktop devices and indoor activities known to emit fine and ultrafine particles. Estimates of emission rates of total UFPs were large, ranging from ~2.0 × 10^10 # min^-1 for a 3D printer utilizing a polylactic acid (PLA) feedstock to ~1.9 × 10^11 # min^-1 for the same type of 3D printer utilizing a higher temperature acrylonitrile butadiene styrene (ABS) thermoplastic feedstock. Because most of these devices are currently sold as standalone devices without any exhaust ventilation or filtration accessories, the results herein suggest caution should be used when operating them in inadequately ventilated or unfiltered indoor environments. Additionally, these results suggest that more controlled experiments should be conducted to more fundamentally evaluate particle emissions from a wider range of desktop 3D printers.
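
    To put emission rates of this magnitude into perspective, a simple well-mixed single-zone box model gives the steady-state concentration as the emission rate divided by the volumetric air exchange rate. The room volume and air change rate below are my own illustrative assumptions (and the model ignores deposition, coagulation and filtration, so it overestimates concentrations); they are not values from the study.

    ```python
    # Well-mixed single-zone box model: C_ss = E / (lambda * V), ignoring
    # deposition, coagulation, and filtration (illustrative assumptions only).
    E = 1.9e11          # UFP emission rate, particles per minute (ABS case above)
    V = 45_000.0        # room volume in liters (~45 m^3 office, assumed)
    air_changes_per_hour = 1.0                    # assumed ventilation rate
    lam = air_changes_per_hour / 60.0             # air changes per minute
    C_ss = E / (lam * V)                          # particles per liter
    print(f"steady-state concentration ~ {C_ss:.2e} #/L "
          f"({C_ss / 1000:.2e} #/cm^3)")
    ```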

  17. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a priori trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement. PMID:27093439
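
    The factorization described above can be sketched schematically as follows (my own simplifying assumptions: an orthographic camera, already-trained base poses, and a low-order Fourier parameterization of the periodic mixing coefficients; this is not the authors' algorithm).

    ```python
    import numpy as np

    def project_pose(M, coeffs, bases):
        """Orthographic projection of a 3D pose built as a linear combination of
        trained base poses: X = sum_k c_k * B_k, x2d = M @ X."""
        X = np.tensordot(coeffs, bases, axes=1)   # (3, J) pose from (K,) x (K, 3, J)
        return M @ X                              # (2, J) image observations

    def periodic_coeffs(t, period, K):
        """Periodic parameterization of the mixing coefficients (e.g. walking):
        a low-order Fourier series in the gait phase."""
        phase = 2 * np.pi * t / period
        feats = [1.0]
        for h in range(1, (K - 1) // 2 + 1):
            feats += [np.sin(h * phase), np.cos(h * phase)]
        return np.array(feats[:K])

    # Example: K = 5 base poses of a 15-joint skeleton, orthographic camera M.
    K, J = 5, 15
    bases = np.random.randn(K, 3, J)
    M = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    x2d = project_pose(M, periodic_coeffs(t=3, period=30, K=K), bases)
    print(x2d.shape)   # (2, 15)
    ```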

  18. Software for 3D radiotherapy dosimetry. Validation

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr; Karwowski, Andrzej C.

    2014-08-01

    The subject of this work is polyGeVero® software (GeVero Co., Poland), which has been developed to fill the requirements of fast calculations of 3D dosimetry data with the emphasis on polymer gel dosimetry for radiotherapy. This software comprises four workspaces that have been prepared for: (i) calculating calibration curves and calibration equations, (ii) storing the calibration characteristics of the 3D dosimeters, (iii) calculating 3D dose distributions in irradiated 3D dosimeters, and (iv) comparing 3D dose distributions obtained from measurements with the aid of 3D dosimeters and calculated with the aid of treatment planning systems (TPSs). The main features and functions of the software are described in this work. Moreover, the core algorithms were validated and the results are presented. The validation was performed using the data of the new PABIGnx polymer gel dosimeter. The polyGeVero® software simplifies and greatly accelerates the calculations of raw 3D dosimetry data. It is an effective tool for fast verification of TPS-generated plans for tumor irradiation when combined with a 3D dosimeter. Consequently, the software may facilitate calculations by the 3D dosimetry community. In this work, the calibration characteristics of the PABIGnx obtained through four calibration methods: multi vial, cross beam, depth dose, and brachytherapy, are discussed as well.

  19. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer grade 3D printer using an additive print process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  20. Stereo 3-D Vision in Teaching Physics

    NASA Astrophysics Data System (ADS)

    Zabunov, Svetoslav

    2012-03-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The current paper describes the modern stereo 3-D technologies that are applicable to various tasks in teaching physics in schools, colleges, and universities. Examples of stereo 3-D simulations developed by the author can be observed online.

  1. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  2. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole-tongue motion. Experimental results show that use of the combined information improves motion estimation near the tongue surface, a region that has previously been reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown. PMID:24505742

  3. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
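
    To illustrate the one-line command convention described above (the first word names the command, the rest of the string is its data), here is a small Python sketch of a dispatcher in that style. The command names and handlers are hypothetical and are not the actual FastScript3D command set.

    ```python
    def make_interpreter():
        """Minimal one-line text-string command interpreter: the first word names
        the command, the remainder of the string is its data arguments."""
        scene = {}

        def cmd_sphere(args):                 # e.g. "sphere ball 1.5"
            name, radius = args.split()
            scene[name] = {"type": "sphere", "radius": float(radius)}

        def cmd_move(args):                   # e.g. "move ball 0 2 0"
            name, *xyz = args.split()
            scene[name]["position"] = tuple(float(v) for v in xyz)

        commands = {"sphere": cmd_sphere, "move": cmd_move}

        def run(line):
            verb, _, rest = line.strip().partition(" ")
            commands[verb](rest)              # unknown verbs raise KeyError
            return scene

        return run

    run = make_interpreter()
    run("sphere ball 1.5")
    print(run("move ball 0 2 0"))
    ```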

  4. 3D statistical shape models incorporating 3D random forest regression voting for robust CT liver segmentation

    NASA Astrophysics Data System (ADS)

    Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.

    2015-03-01

    During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the corresponding reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions of the landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single-resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.
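
    A schematic of the regression-voting step is sketched below (illustrative only: a single landmark, random features and a coarse voting grid stand in for the 3D Haar-like features and the full SSM pipeline). Each sample point predicts a displacement to the landmark, the predictions cast votes into a 3D map, and the map's maximum is taken as the position estimate.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Toy training data: feature vectors sampled around a landmark and the 3D
    # displacement from each sample point to the true landmark position.
    X_train = rng.normal(size=(500, 10))
    true_displacements = rng.normal(scale=2.0, size=(500, 3))
    forest = RandomForestRegressor(n_estimators=50, random_state=0)
    forest.fit(X_train, true_displacements)

    def vote_landmark(sample_points, sample_features, grid_shape, spacing):
        """Each sample point casts a vote at its predicted landmark position;
        the accumulated voting map's maximum is the landmark estimate."""
        votes = np.zeros(grid_shape)
        predicted = sample_points + forest.predict(sample_features)
        for p in predicted:
            idx = np.round(p / spacing).astype(int)
            if np.all(idx >= 0) and np.all(idx < grid_shape):
                votes[tuple(idx)] += 1.0
        return np.array(np.unravel_index(np.argmax(votes), grid_shape)) * spacing

    points = rng.uniform(0, 50, size=(200, 3))
    features = rng.normal(size=(200, 10))
    print(vote_landmark(points, features, grid_shape=(64, 64, 64), spacing=1.0))
    ```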

  5. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them, or through viewers that are available for free. The platform-independent PDF (Portable Document Format), established by Adobe, has found wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create moveable cross-sections (profiles) and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help to facilitate its use.

  6. An aerial 3D printing test mission

    NASA Astrophysics Data System (ADS)

    Hirsch, Michael; McGuire, Thomas; Parsons, Michael; Leake, Skye; Straub, Jeremy

    2016-05-01

    This paper provides an overview of an aerial 3D printing technology, its development and its testing. This technology is potentially useful in its own right. In addition, this work advances the development of a related in-space 3D printing technology. A series of aerial 3D printing test missions, used to test the aerial printing technology, are discussed. Through completing these test missions, the design for an in-space 3D printer may be advanced. The current design for the in-space 3D printer involves focusing thermal energy to heat an extrusion head and allow for the extrusion of molten print material. Plastics can be used as well as composites including metal, allowing for the extrusion of conductive material. A variety of experiments will be used to test this initial 3D printer design. High-altitude balloon flights, as well as parabolic flight tests, will be used to test the effects of microgravity on 3D printing. Zero-pressure balloons can be used to test the effect of long 3D printing missions subjected to low temperatures. Vacuum chambers will be used to test 3D printing in a vacuum environment. The results will be used to adapt a current prototype of an in-space 3D printer. Then, a small-scale prototype can be sent into low-Earth orbit as a 3U CubeSat. With the ability to 3D print in space demonstrated, future missions can launch production hardware, through which the sustainability and durability of structures in space will be greatly improved.

  7. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  8. Training for eye contact modulates gaze following in dogs

    PubMed Central

    Wallis, Lisa J.; Range, Friederike; Müller, Corsin A.; Serisier, Samuel; Huber, Ludwig; Virányi, Zsófia

    2015-01-01

    Following human gaze in dogs and human infants can be considered a socially facilitated orientation response, which in object choice tasks is modulated by human-given ostensive cues. Despite their similarities to human infants, and extensive skills in reading human cues in foraging contexts, no evidence that dogs follow gaze into distant space has been found. We re-examined this question, and additionally whether dogs' propensity to follow gaze was affected by age and/or training to pay attention to humans. We tested a cross-sectional sample of 145 border collies aged 6 months to 14 years with different amounts of training over their lives. The dogs' gaze-following response in test and control conditions before and after training for initiating eye contact with the experimenter was compared with that of a second group of 13 border collies trained to touch a ball with their paw. Our results provide the first evidence that dogs can follow human gaze into distant space. Although we found no age effect on gaze following, the youngest and oldest age groups were more distractible, which resulted in a higher number of looks in the test and control conditions. Extensive lifelong formal training as well as short-term training for eye contact decreased dogs' tendency to follow gaze and increased their duration of gaze to the face. The reduction in gaze following after training for eye contact cannot be explained by fatigue or short-term habituation, as in the second group gaze following increased after a different training of the same length. Training for eye contact created a competing tendency to fixate the face, which prevented the dogs from following the directional cues. We conclude that following human gaze into distant space in dogs is modulated by training, which may explain why dogs perform poorly in comparison to other species in this task. PMID:26257403

  9. Vehicle teleoperation using 3D maps and GPS time synchronization.

    PubMed

    Suzuki, Taro; Amano, Yoshiharu; Hashizume, Takumi; Kubo, Nobuaki

    2013-01-01

    In conventional vehicle teleoperation systems, low-bandwidth, high-delay transmission links pose a serious problem for remote control. To address this problem, the proposed teleoperation system employs 3D maps and GPS time synchronization. Two GPS receivers measure the transmission delay, which the system uses to estimate the vehicle's location and orientation. Field experiments show that the 3D-map-based interface lets users easily comprehend the remote environment while navigating a vehicle. The experiments also show that taking communication delays into account improves maneuverability. PMID:24808084
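
    The delay-compensation idea lends itself to a very small sketch: if both ends are synchronized to GPS time, the one-way transmission delay is simply the difference between the receive and send timestamps, and the vehicle pose can be dead-reckoned forward by that delay. The code below is a hypothetical illustration of this principle, not the authors' system; the packet fields and function names are assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class StatePacket:
    gps_send_time: float  # GPS-disciplined timestamp at the vehicle [s]
    x: float              # position [m]
    y: float
    heading: float        # [rad]
    speed: float          # [m/s]

def predict_pose(pkt: StatePacket, gps_recv_time: float):
    """Compensate the measured link delay by dead-reckoning the vehicle pose.

    Both clocks are assumed synchronized to GPS time, so the one-way
    transmission delay is the difference of the two timestamps.
    """
    delay = gps_recv_time - pkt.gps_send_time
    x = pkt.x + pkt.speed * delay * math.cos(pkt.heading)
    y = pkt.y + pkt.speed * delay * math.sin(pkt.heading)
    return x, y, pkt.heading, delay
```

    A fuller implementation would render the predicted pose inside the 3D map rather than relying on straight-line dead reckoning alone.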

  10. Regularity criterion for the 3D Hall-magneto-hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dai, Mimi

    2016-07-01

    This paper studies the regularity problem for the 3D incompressible resistive viscous Hall-magneto-hydrodynamic (Hall-MHD) system. The Kolmogorov 1941 (K41) phenomenological theory of turbulence [14] predicts that there exists a critical wavenumber above which the high-frequency part is dominated by the dissipation term in the fluid equation. Inspired by this idea, we apply an approach of splitting the wavenumber, combined with an estimate of the energy flux, to obtain a new regularity criterion. The regularity condition presented here is weaker than the conditions in existing Prodi-Serrin-type criteria for the 3D Hall-MHD system.
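
    For reference, the incompressible resistive viscous Hall-MHD system studied in this line of work is commonly written as follows (one standard normalization; the paper's notation and coefficients may differ). The extra curl term in the induction equation is the Hall term that distinguishes the system from classical MHD.

```latex
% Incompressible resistive viscous Hall-MHD (one common normalization).
\begin{aligned}
  \partial_t u + (u\cdot\nabla)u - \nu\,\Delta u + \nabla p &= (\nabla\times B)\times B,\\
  \partial_t B - \nabla\times(u\times B) + \nabla\times\bigl((\nabla\times B)\times B\bigr) &= \mu\,\Delta B,\\
  \nabla\cdot u = \nabla\cdot B &= 0.
\end{aligned}
```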

  11. Topology dictionary for 3D video understanding.

    PubMed

    Tung, Tony; Matsuyama, Takashi

    2012-08-01

    This paper presents a novel approach that achieves 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min). Moreover, it is tedious to browse data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted patterns or training sequences. The model relies on 1) topology description and classification using Reeb graphs, and 2) a Markov motion graph to represent topology change states. We show that the use of Reeb graphs as the high-level topology descriptor is relevant. It allows the dictionary to automatically model complex sequences, whereas other strategies would require prior knowledge on the shape and topology of the captured subjects. Our approach serves to encode 3D video sequences, and can be applied for content-based description and summarization of 3D video sequences. Furthermore, topology class labeling during a learning process enables the system to perform content-based event recognition. Experiments were carried out on various 3D videos. We showcase an application for 3D video progressive summarization using the topology dictionary. PMID:22745004
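
    The encoding step can be illustrated with a toy data structure: each distinct per-frame topology signature (for instance, a canonical string encoding of a Reeb graph) becomes a dictionary entry, and transitions between entries across consecutive frames are counted to form a Markov motion graph. The class and method names below are hypothetical, and the sketch omits the actual Reeb-graph extraction and matching described in the paper.

```python
from collections import defaultdict

class TopologyDictionary:
    """Toy topology dictionary: distinct shape-descriptor signatures become
    dictionary entries; transitions between entries across consecutive frames
    are counted to form a Markov motion graph."""

    def __init__(self):
        self.labels = {}                                   # signature -> class id
        self.transitions = defaultdict(lambda: defaultdict(int))

    def encode_sequence(self, signatures):
        """signatures: iterable of hashable per-frame topology descriptors.
        Returns the sequence of class labels and updates the motion graph."""
        seq, prev = [], None
        for sig in signatures:
            label = self.labels.setdefault(sig, len(self.labels))
            if prev is not None:
                self.transitions[prev][label] += 1
            seq.append(label)
            prev = label
        return seq

    def transition_prob(self, a, b):
        """Empirical probability of moving from topology class a to class b."""
        total = sum(self.transitions[a].values())
        return self.transitions[a][b] / total if total else 0.0
```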

  12. 3-D seismology in the Arabian Gulf

    SciTech Connect

    Al-Husseini, M.; Chimblo, R.

    1995-08-01

    Since 1977, when Aramco and GSI (Geophysical Services International) pioneered the first 3-D seismic survey in the Arabian Gulf under the guidance of Aramco's Chief Geophysicist John Hoke, 3-D seismology has been used effectively to map many complex subsurface geological phenomena. By the mid-1990s extensive 3-D surveys had been acquired in Abu Dhabi, Oman, Qatar and Saudi Arabia, and Bahrain, Kuwait and Dubai were preparing to record surveys over their fields. On the structural side, 3-D has refined seismic maps, sharpened the imaging of fault and fracture systems, and outlined the distribution of facies, porosity and fluid saturation. In field development, 3-D has not only reduced drilling costs significantly but has also improved the understanding of fluid behavior in the reservoir. In Oman, Petroleum Development Oman (PDO) has now acquired the first Gulf 4-D seismic survey (a time-lapse 3-D survey) over the Yibal Field. The 4-D survey will allow PDO to directly monitor water encroachment in the highly faulted Cretaceous Shu'aiba reservoir. In exploration, 3-D seismology has resolved complex prospects with structural and stratigraphic complications and reduced the risk in the selection of drilling locations. The many case studies from Saudi Arabia, Oman, Qatar and the United Arab Emirates reviewed in this paper attest to the effectiveness of 3-D seismology in exploration and production, in clastic and carbonate reservoirs, and in the Mesozoic and Paleozoic.

  13. A 3D Geostatistical Mapping Tool

    1999-02-09

    This software provides accurate 3D reservoir modeling tools and high-quality 3D graphics for PC platforms, enabling engineers and geologists to better comprehend reservoirs and consequently improve their decisions. The mapping algorithms are fractals, kriging, sequential Gaussian simulation, and three nearest-neighbor methods.
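
    As a toy illustration of the simplest of the listed mapping methods, the snippet below performs nearest-neighbor assignment of well measurements onto a 3D grid in plain NumPy. It is not part of the described tool; the function name and argument layout are assumptions.

```python
import numpy as np

def nearest_neighbor_map(wells_xyz, values, grid_xyz):
    """Assign each grid cell the value of its nearest well sample.

    wells_xyz : (n_wells, 3) sample locations [m]
    values    : (n_wells,)   measured property (e.g. porosity)
    grid_xyz  : (n_cells, 3) cell centres of the 3D reservoir grid [m]
    """
    # Pairwise squared distances between cells and wells (fine for small demos;
    # a k-d tree would be used for realistic grid sizes).
    d2 = ((grid_xyz[:, None, :] - wells_xyz[None, :, :]) ** 2).sum(axis=-1)
    return values[np.argmin(d2, axis=1)]
```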

  14. 3D, or Not to Be?

    ERIC Educational Resources Information Center

    Norbury, Keith

    2012-01-01

    It may be too soon for students to be showing up for class with popcorn and gummy bears, but technology similar to that behind the 3D blockbuster movie "Avatar" is slowly finding its way into college classrooms. 3D classroom projectors are taking students on fantastic voyages inside the human body, to the ruins of ancient Greece--even to faraway…

  15. Stereoscopic Investigations of 3D Coulomb Balls

    SciTech Connect

    Kaeding, Sebastian; Melzer, Andre; Arp, Oliver; Block, Dietmar; Piel, Alexander

    2005-10-31

    In dusty plasmas, particles are arranged under the influence of external forces and their mutual Coulomb interaction. Recently, Arp et al. were able to generate 3D spherical dust clouds, so-called Coulomb balls. Here, we present measurements that reveal the full 3D particle trajectories via stereoscopic imaging.
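
    Stereoscopic reconstruction of particle positions generally reduces to triangulating each particle from two calibrated camera views. The snippet below shows standard linear (DLT) triangulation as a minimal illustration; it is not the authors' specific method, and the function name and inputs are assumptions.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one particle from two calibrated views.

    P1, P2   : (3, 4) camera projection matrices
    uv1, uv2 : (u, v) pixel coordinates of the same particle in each view
    Returns the 3D position as a length-3 array.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```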

  16. 3-D structures of planetary nebulae

    NASA Astrophysics Data System (ADS)

    Steffen, W.

    2016-07-01

    Recent advances in the 3-D reconstruction of planetary nebulae are reviewed. We cover not only reconstruction results but also current techniques, in terms of both general methods and software. In order to obtain more accurate reconstructions, we suggest extending the widely used assumption of homologous nebula expansion to map spectroscopically measured velocity to position along the line of sight.
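
    Under homologous ("Hubble-type") expansion, each velocity component grows linearly with distance from the nebular center, so the measured line-of-sight Doppler velocity maps directly to depth via z ≈ v_los · t_kin, where t_kin is the kinematic age. The helper below is a minimal sketch of that conversion; the function name and unit choices are assumptions.

```python
def los_depth_from_doppler(v_los_kms, kin_age_yr):
    """Homologous expansion: v = r / t, so the line-of-sight depth is
    z = v_los * t_kin. Inputs in km/s and years; result in astronomical units."""
    seconds_per_year = 3.156e7
    km_per_au = 1.496e8
    return v_los_kms * kin_age_yr * seconds_per_year / km_per_au
```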

  17. Wow! 3D Content Awakens the Classroom

    ERIC Educational Resources Information Center

    Gordon, Dan

    2010-01-01

    From her first encounter with stereoscopic 3D technology designed for classroom instruction, Megan Timme, principal at Hamilton Park Pacesetter Magnet School in Dallas, sensed it could be transformative. Last spring, when she began pilot-testing 3D content in her third-, fourth- and fifth-grade classrooms, Timme wasn't disappointed. Students…

  18. 3D Printed Block Copolymer Nanostructures

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Turner, C. Heath; Rupar, Paul A.; Jenkins, Alexander H.; Bara, Jason E.

    2015-01-01

    The emergence of 3