Science.gov

Sample records for 3d point spread

  1. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions.

    PubMed

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A; Bishop, Logan D C; Kelly, Kevin F; Landes, Christy F

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312
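    The recovery pipeline summarised above (a fast deconvolution step to produce initial guesses, followed by least-squares refinement) can be illustrated with a short sketch. The snippet below is not the authors' open-source program: a plain Gaussian stands in for the rotating PSF, the brightest pixel replaces the deconvolution step for the initial guess, and all numbers are made up for the toy frame.

```python
# Illustrative sketch only (not the authors' program): coarse initial guess
# followed by least-squares refinement of an emitter position.  A plain
# Gaussian stands in for the engineered rotating PSF.
import numpy as np
from scipy.optimize import least_squares

def psf_model(shape, x0, y0, sigma, amp):
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
truth = (17.3, 11.6, 1.8, 200.0)                       # x, y, sigma, amplitude
frame = psf_model((32, 32), *truth) + rng.normal(0.0, 3.0, (32, 32))

# Step 1: cheap initial guess (the paper uses a fast deconvolution here;
# taking the brightest pixel is a simplification).
gy, gx = np.unravel_index(np.argmax(frame), frame.shape)

# Step 2: refine by least-squares fitting.
residuals = lambda p: (psf_model(frame.shape, *p) - frame).ravel()
fit = least_squares(residuals, x0=[gx, gy, 2.0, frame.max()])
print("initial guess:", (gx, gy), "refined x, y:", np.round(fit.x[:2], 2))
```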

  2. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    NASA Astrophysics Data System (ADS)

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-08-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions.

  3. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    PubMed Central

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  4. Isotropic 3D Super-resolution Imaging with a Self-bending Point Spread Function

    PubMed Central

    Jia, Shu; Vaughan, Joshua C.; Zhuang, Xiaowei

    2014-01-01

    Airy beams maintain their intensity profiles over a large propagation distance without substantial diffraction and exhibit lateral bending during propagation [1-5]. This unique property has been exploited for micromanipulation of particles [6], generation of plasma channels [7] and guidance of plasmonic waves [8], but has not been explored for high-resolution optical microscopy. Here, we introduce a self-bending point spread function (SB-PSF) based on Airy beams for three-dimensional (3D) super-resolution fluorescence imaging. We designed a side-lobe-free SB-PSF and implemented a two-channel detection scheme to enable unambiguous 3D localization of fluorescent molecules. The lack of diffraction and the propagation-dependent lateral bending make the SB-PSF well suited for precise 3D localization of molecules over a large imaging depth. Using this method, we obtained super-resolution imaging with isotropic 3D localization precision of 10-15 nm over a 3 μm imaging depth from ∼2000 photons per localization. PMID:25383090

  5. 3D imaging with the light sword optical element and deconvolution of distance-dependent point spread functions

    NASA Astrophysics Data System (ADS)

    Makowski, Michal; Petelczyc, Krzysztof; Kolodziejczyk, Andrzej; Jaroszewicz, Zbigniew; Ducin, Izabela; Kakarenko, Karol; Siemion, Agnieszka; Siemion, Andrzej; Suszek, Jaroslaw; Sypek, Maciej; Wojnowski, Dariusz

    2010-12-01

    We present an experimental demonstration of a blind deconvolution method applied to an imaging system in which a Light Sword Optical Element (LSOE) is used instead of a lens. Trial-and-error deconvolution of known Point Spread Functions (PSFs) from an input image captured on a single CCD camera is performed. By finding the PSF that provides the best contrast of the optotypes seen in a frame, one can determine the defocus parameter and hence the object distance. Therefore, with a single exposure on a standard CCD camera, we gain information on the depth of a 3-D scene. Exemplary results for a simple scene containing three optotypes at three distances from the imaging element are presented.
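    A minimal sketch of the trial-and-error idea follows. It assumes a bank of Gaussian PSFs parameterised by a stand-in defocus value and scores each candidate by normalized cross-correlation against a known optotype template; the real LSOE PSFs are distance dependent and more structured, so this only illustrates the selection loop, not the LSOE processing chain.

```python
# Hedged sketch of trial deconvolution with a bank of candidate PSFs.  Gaussians
# parameterised by a stand-in "defocus" sigma replace the distance-dependent
# LSOE PSFs, and normalized cross-correlation against the known optotype
# template replaces the contrast criterion.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener(img, psf, k=1e-3):
    # Simple frequency-domain Wiener deconvolution.
    H = np.fft.fft2(np.fft.ifftshift(psf), s=img.shape)
    return np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(img) / (np.abs(H) ** 2 + k)))

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

template = np.zeros((96, 96)); template[30:66, 30:66] = 1.0      # known optotype
captured = fftconvolve(template, gaussian_psf(96, 2.5), mode="same")
captured += np.random.default_rng(1).normal(0.0, 0.005, captured.shape)

# Try each candidate PSF; the best-matching one gives the defocus parameter,
# which in turn maps to the object distance.
scores = {s: ncc(wiener(captured, gaussian_psf(96, s)), template)
          for s in (0.5, 1.5, 2.5, 3.5, 4.5)}
print(scores, "-> estimated defocus sigma:", max(scores, key=scores.get))
```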

  6. The double-helix point spread function enables precise and accurate measurement of 3D single-molecule localization and orientation

    PubMed Central

    Backlund, Mikael P.; Lew, Matthew D.; Backer, Adam S.; Sahl, Steffen J.; Grover, Ginni; Agrawal, Anurag; Piestun, Rafael; Moerner, W. E.

    2014-01-01

    Single-molecule-based super-resolution fluorescence microscopy has recently been developed to surpass the diffraction limit by roughly an order of magnitude. These methods depend on the ability to precisely and accurately measure the position of a single-molecule emitter, typically by fitting its emission pattern to a symmetric estimator (e.g. centroid or 2D Gaussian). However, single-molecule emission patterns are not isotropic, and depend highly on the orientation of the molecule’s transition dipole moment, as well as its z-position. Failure to account for this fact can result in localization errors on the order of tens of nm for in-focus images, and ~50–200 nm for molecules at modest defocus. The latter range becomes especially important for three-dimensional (3D) single-molecule super-resolution techniques, which typically employ depths-of-field of up to ~2 μm. To address this issue we report the simultaneous measurement of precise and accurate 3D single-molecule position and 3D dipole orientation using the Double-Helix Point Spread Function (DH-PSF) microscope. We are thus able to significantly improve dipole-induced position errors, reducing standard deviations in lateral localization from ~2x worse than photon-limited precision (48 nm vs. 25 nm) to within 5 nm of photon-limited precision. Furthermore, by averaging many estimations of orientation we are able to improve from a lateral standard deviation of 116 nm (~4x worse than the precision, 28 nm) to 34 nm (within 6 nm). PMID:24817798

  7. NIF Ignition Target 3D Point Design

    SciTech Connect

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) configuration-controlled input files; (2) a common file for 2D and 3D and for different types of capsules (symcap, etc.); and (3) automatic retrieval of target dimensions, laser pulse, and diagnostics settings from the NIF Campaign Management Tool. We are using 3D Hydra calculations to investigate different problems: (1) intrinsic 3D asymmetry; (2) tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) synthetic diagnostics.

  8. Vector quantization of 3-D point clouds

    NASA Astrophysics Data System (ADS)

    Sim, Jae-Young; Kim, Chang-Su; Lee, Sang-Uk

    2005-10-01

    A geometry compression algorithm for 3-D QSplat data using vector quantization (VQ) is proposed in this work. The positions of child spheres are transformed to the local coordinate system, which is determined by the parent-children relationship. The coordinate transform makes child positions more compactly distributed in 3-D space, facilitating effective quantization. Moreover, we develop a constrained encoding method for sphere radii, which guarantees hole-free surface rendering at the decoder side. Simulation results show that the proposed algorithm provides a faithful rendering quality even at low bitrates.
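    The core idea (a parent-local coordinate transform followed by vector quantization of the compactly distributed residuals) can be sketched as below; the codebook size and the synthetic hierarchy are arbitrary choices, not the QSplat codec's parameters.

```python
# Illustrative sketch (not the QSplat codec): child positions are expressed in
# parent-local coordinates and the tightly clustered residuals are vector
# quantized with a small codebook.
import numpy as np
from scipy.cluster.vq import kmeans2, vq

rng = np.random.default_rng(0)
parents = rng.uniform(-10.0, 10.0, (200, 3))              # parent sphere centres
children = parents + rng.normal(0.0, 0.3, (200, 3))       # children close to parents

residuals = children - parents                            # local-coordinate transform
codebook, _ = kmeans2(residuals, 16, minit="points")      # 16-entry codebook (arbitrary)
indices, _ = vq(residuals, codebook)                      # encoder: indices only

decoded = parents + codebook[indices]                     # decoder side
err = np.linalg.norm(decoded - children, axis=1).mean()
print("mean reconstruction error:", round(float(err), 3))
```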

  9. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data fusion technology for 3D scene reconstruction based on the principles of 3D laser scanning, with the laser point cloud data as the basis and a Digital Ortho-photo Map as an auxiliary source, and uses 3DsMAX software as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the 3D scene has good fidelity and that its accuracy meets the needs of 3D scene construction.

  10. Point Cloud Visualization in AN Open Source 3d Globe

    NASA Astrophysics Data System (ADS)

    De La Calle, M.; Gómez-Deck, D.; Koehler, O.; Pulido, F.

    2011-09-01

    In recent years the use of 3D applications in GIS has become more popular. Since the appearance of Google Earth, users have become familiar with 3D environments. Moreover, computers with 3D acceleration are now common, broadband access is widespread, and the amount of public information that can be used in Internet-connected GIS clients is constantly increasing. There are currently several libraries suitable for this kind of application. Based on these facts, and using libraries that are already developed and connected to our own developments, we are working on the implementation of a real 3D GIS with analysis capabilities. Since a 3D GIS such as this can be very interesting for tasks like rendering and analysing LiDAR or laser scanner point clouds, special attention is given to optimal handling of very large data sets. Glob3 will be a multidimensional GIS in which 3D point clouds can be explored and analysed, even if they consist of several million points. The latest addition to our visualization libraries is a point cloud server that works regardless of the cloud's size. The server receives and processes requests from a 3D client (for example glob3, but it could be any other, such as one based on WebGL) and delivers the data as pre-processed tiles, depending on the required level of detail.

  11. a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from the Model to the point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted mean of the distances between points sampled from the model and the point cloud. Similarly, the Distance from the point Cloud to the Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
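    A hedged sketch of the measure's structure is given below; the paper's sampling density and exact weighting scheme are not reproduced (uniform weights are assumed), so it only illustrates how DistMC and the SimMC ratio fit together.

```python
# Hedged sketch of the DistMC / SimMC structure with uniform weights assumed.
import numpy as np
from scipy.spatial import cKDTree

def dist_model_to_cloud(model_samples, cloud, weights=None):
    """Weighted mean of distances from points sampled on the model to the cloud."""
    d, _ = cKDTree(cloud).query(model_samples)
    return float(np.average(d, weights=weights))

def sim_mc(model_area, model_samples, cloud, weights=None):
    """SimMC as the ratio of (weighted) model surface area to DistMC."""
    return model_area / dist_model_to_cloud(model_samples, cloud, weights)

rng = np.random.default_rng(0)
# A unit square "model" sampled uniformly, and a noisy scan of the same square.
model_pts = np.c_[rng.uniform(0, 1, (2000, 2)), np.zeros(2000)]
scan = model_pts + rng.normal(0.0, 0.01, model_pts.shape)
print("SimMC, matching scan:", round(sim_mc(1.0, model_pts, scan), 1))
print("SimMC, shifted scan :", round(sim_mc(1.0, model_pts, scan + [0.2, 0.0, 0.0]), 1))
```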

  12. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  13. 3D Building Reconstruction Using Dense Photogrammetric Point Cloud

    NASA Astrophysics Data System (ADS)

    Malihi, S.; Valadan Zoej, M. J.; Hahn, M.; Mokhtarzade, M.; Arefi, H.

    2016-06-01

    Three dimensional models of urban areas play an important role in city planning, disaster management, city navigation and other applications. Reconstruction of 3D building models is still a challenging issue in 3D city modelling. Point clouds generated from multi-view images acquired by UAVs are a novel source of spatial data, which is used in this research for building reconstruction. The process starts with the segmentation of the point clouds of roofs and walls into planar groups. By generating the related surfaces and using geometrical constraints plus considering symmetry, a 3D model of the building is reconstructed. In a refinement step, dormers are extracted and their models are reconstructed. The details of the reconstructed 3D model are at LoD3 level, with respect to modelling eaves, fractions of the roof and dormers.
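    Segmentation of the point cloud into planar groups is the first step described above. The sketch below uses a plain RANSAC plane fit as one common way to obtain such groups; the paper does not prescribe this exact algorithm, and all tolerances are illustrative.

```python
# Minimal RANSAC plane fit as one common way to segment roof/wall patches into
# planar groups; tolerances and the toy cloud are illustrative only.
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), bool)
    best_plane = None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(n) < 1e-12:
            continue                                   # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p1) @ n) < tol      # point-to-plane distance test
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, p1)
    return best_plane, best_inliers

rng = np.random.default_rng(1)
roof = np.c_[rng.uniform(0, 10, (500, 2)), rng.normal(3.0, 0.02, 500)]   # flat roof patch
clutter = rng.uniform(0, 10, (100, 3))
(normal, _), mask = ransac_plane(np.vstack([roof, clutter]))
print("plane normal:", np.round(normal, 2), "inliers:", int(mask.sum()))
```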

  14. Registration of 3D spectral OCT volumes using 3D SIFT feature point matching

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan

    2009-02-01

    The recent introduction of next generation spectral OCT scanners has enabled routine acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework [1] to 3D [2]. The SIFT feature extractor locates minima and maxima in the difference of Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096 element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head- (ONH) and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0 ± 3.3 voxels was observed. The accuracy was assessed as average voxel distance error in N=1572 matched locations. The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
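    Two ingredients of the method, nearest-neighbour matching of descriptor vectors and rigid transform estimation from the matched 3D points, are sketched below with the SVD/Kabsch solution; the 3D-SIFT detector and its 4096-element descriptor are not reimplemented, and the ratio-test threshold is an assumption.

```python
# Sketch of descriptor matching plus rigid (SVD/Kabsch) transform estimation.
# The toy keypoints below carry random 64-D descriptors instead of 3D-SIFT ones.
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with a Lowe-style ratio test (assumed here)."""
    d, idx = cKDTree(desc_b).query(desc_a, k=2)
    keep = d[:, 0] < ratio * d[:, 1]
    return np.flatnonzero(keep), idx[keep, 0]

def rigid_transform(src, dst):
    """Least-squares R, t with dst ~= src @ R.T + t (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - cs @ R.T

rng = np.random.default_rng(2)
pts_a = rng.uniform(0, 100, (300, 3))            # keypoints in scan A (toy)
desc = rng.normal(size=(300, 64))                # shared descriptors -> exact matches
th = np.radians(10.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1]])
pts_b = pts_a @ R_true.T + np.array([5.0, -2.0, 1.0])   # keypoints in scan B (toy)

ia, ib = match_descriptors(desc, desc)
R, t = rigid_transform(pts_a[ia], pts_b[ib])
err = np.linalg.norm(pts_a @ R.T + t - pts_b, axis=1)
print("mean registration error (voxels):", round(float(err.mean()), 4))
```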

  15. Iterative closest normal point for 3D face recognition.

    PubMed

    Mohammadzade, Hoda; Hatzinakos, Dimitrios

    2013-02-01

    The common approach for 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all versus all and ROC III experiments, respectively, which, to the best of our knowledge, correspond to seven and four times lower error rates, respectively, than the best existing methods on this database. PMID:22585097

  16. Automated Identification of Fiducial Points on 3D Torso Images

    PubMed Central

    Kawale, Manas M; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

    2013-01-01

    Breast reconstruction is an important part of the breast cancer treatment process for many women. Recently, 2D and 3D images have been used by plastic surgeons for evaluating surgical outcomes. Distances between different fiducial points are frequently used as quantitative measures for characterizing breast morphology. Fiducial points can be directly marked on subjects for direct anthropometry, or can be manually marked on images. This paper introduces novel algorithms to automate the identification of fiducial points in 3D images. Automating the process will make measurements of breast morphology more reliable, reducing the inter- and intra-observer bias. Algorithms to identify three fiducial points, the nipples, sternal notch, and umbilicus, are described. The algorithms used for localization of these fiducial points are formulated using a combination of surface curvature and 2D color information. Comparison of the 3D co-ordinates of automatically detected fiducial points and those identified manually, and geodesic distances between the fiducial points are used to validate algorithm performance. The algorithms reliably identified the location of all three of the fiducial points. We dedicate this article to our late colleague and friend, Dr. Elisabeth K. Beahm. Elisabeth was both a talented plastic surgeon and physician-scientist; we deeply miss her insight and her fellowship. PMID:25288903

  17. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case

  18. Compression of point-texture 3D motion sequences

    NASA Astrophysics Data System (ADS)

    Song, In-Wook; Kim, Chang-Su; Lee, Sang-Uk

    2005-10-01

    In this work, we propose two compression algorithms for PointTexture 3D sequences: the octree-based scheme and the motion-compensated prediction scheme. The first scheme represents each PointTexture frame hierarchically using an octree. The geometry information in the octree nodes is encoded by the predictive partial matching (PPM) method. The encoder supports the progressive transmission of the 3D frame by transmitting the octree nodes in a top-down manner. The second scheme adopts the motion-compensated prediction to exploit the temporal correlation in 3D sequences. It first divides each frame into blocks, and then estimates the motion of each block using the block matching algorithm. In contrast to the motion-compensated 2D video coding, the prediction residual may take more bits than the original signal. Thus, in our approach, the motion compensation is used only for the blocks that can be replaced by the matching blocks. The other blocks are PPM-encoded. Extensive simulation results demonstrate that the proposed algorithms provide excellent compression performances.

  19. Interpolating point spread function anisotropy

    NASA Astrophysics Data System (ADS)

    Gentile, M.; Courbin, F.; Meylan, G.

    2013-01-01

    Planned wide-field weak lensing surveys are expected to reduce the statistical errors on the shear field to unprecedented levels. In contrast, systematic errors like those induced by the convolution with the point spread function (PSF) will not benefit from that scaling effect and will require very accurate modeling and correction. While numerous methods have been devised to carry out the PSF correction itself, modeling of the PSF shape and its spatial variations across the instrument field of view has, so far, attracted much less attention. This step is nevertheless crucial because the PSF is only known at star positions while the correction has to be performed at any position on the sky. A reliable interpolation scheme is therefore mandatory and a popular approach has been to use low-order bivariate polynomials. In the present paper, we evaluate four other classical spatial interpolation methods based on splines (B-splines), inverse distance weighting (IDW), radial basis functions (RBF) and ordinary Kriging (OK). These methods are tested on the Star-challenge part of the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) simulated data and are compared with the classical polynomial fitting (Polyfit). In all our methods we model the PSF using a single Moffat profile and we interpolate the fitted parameters at a set of required positions. This allowed us to win the Star-challenge of GREAT10, with the B-splines method. However, we also test all our interpolation methods independently of the way the PSF is modeled, by interpolating the GREAT10 star fields themselves (i.e., the PSF parameters are known exactly at star positions). We find in that case RBF to be the clear winner, closely followed by the other local methods, IDW and OK. The global methods, Polyfit and B-splines, are largely behind, especially in fields with (ground-based) turbulent PSFs. In fields with non-turbulent PSFs, all interpolators reach a variance on PSF systematics σ²sys better than the 1
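    The interpolation step itself can be sketched compactly: a PSF parameter known at star positions is interpolated to arbitrary positions with a thin-plate radial basis function. The synthetic ellipticity field and smoothing value below are illustrative; the Moffat-fitting stage that produces real parameters is omitted.

```python
# Sketch of the spatial interpolation step: a toy PSF parameter known at star
# positions is interpolated with a thin-plate RBF and evaluated elsewhere.
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(3)
x_star, y_star = rng.uniform(0, 1, 400), rng.uniform(0, 1, 400)
e1_star = 0.02 * np.sin(2 * np.pi * x_star) * np.cos(2 * np.pi * y_star)   # toy ellipticity

e1_model = Rbf(x_star, y_star, e1_star, function="thin_plate", smooth=1e-8)

x_new, y_new = rng.uniform(0, 1, 5), rng.uniform(0, 1, 5)
truth = 0.02 * np.sin(2 * np.pi * x_new) * np.cos(2 * np.pi * y_new)
print(np.c_[e1_model(x_new, y_new), truth])     # interpolated vs true values
```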

  20. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of the manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of the images, followed by the creation of three improved models and c) the assessment of the results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  1. Performance testing of 3D point cloud software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available in the market, including open source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD 3D Civil and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in the loading time of the point clouds and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.

  2. Comparison of 3D interest point detectors and descriptors for point cloud fusion

    NASA Astrophysics Data System (ADS)

    Hänsch, R.; Weber, T.; Hellwich, O.

    2014-08-01

    The extraction and description of keypoints as salient image parts has a long tradition within processing and analysis of 2D images. Nowadays, 3D data gains more and more importance. This paper discusses the benefits and limitations of keypoints for the task of fusing multiple 3D point clouds. For this goal, several combinations of 3D keypoint detectors and descriptors are tested. The experiments are based on 3D scenes with varying properties, including 3D scanner data as well as Kinect point clouds. The obtained results indicate that the specific method to extract and describe keypoints in 3D data has to be carefully chosen. In many cases the accuracy suffers from a too strong reduction of the available points to keypoints.

  3. Secure 3D watermarking algorithm based on point set projection

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Zhang, Xiaomei

    2007-11-01

    3D digital models greatly facilitate the distribution and storage of information, while their copyright protection problems attract more and more research interest. A novel secure digital watermarking algorithm for 3D models is proposed in this paper. In order to survive most attacks such as rotation, cropping, smoothing and adding noise, the projection of the model's point set is chosen as the carrier of the watermark in the presented algorithm, which contains the copyright information such as logos, text, and so on. The projections of the model's point set onto the x, y and z planes are calculated respectively. Before the watermark embedding process, the original watermark is scrambled by a key. Each projection is singular value decomposed, and the scrambled watermark is embedded into the SVD (singular value decomposition) domain of the x, y and z planes respectively. After that we use the watermarked x, y and z planes to recover the vertices of the model, and the watermarked model is obtained. Only the legal user can remove the watermark from the watermarked models using the private key. Experiments presented in the paper show that the proposed algorithm has good performance under various malicious attacks.
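    The SVD-domain embedding can be sketched on a single projection, as below: the xy projection is rasterised to a density image, the watermark is scrambled with a key and added to the singular values, and extraction uses the key plus the stored SVD factors (a non-blind simplification). Recovering watermarked vertices, as the paper does, is omitted, and the embedding strength is arbitrary.

```python
# Sketch of SVD-domain embedding on one projection of the point set.
import numpy as np

rng = np.random.default_rng(4)
pts = rng.normal(0.0, 1.0, (5000, 3))                           # toy model vertices
proj, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=64)      # xy-plane projection image

watermark = rng.integers(0, 2, 64).astype(float)                # 64 copyright bits
perm = np.random.default_rng(1234).permutation(64)              # private key = scramble seed
scrambled = watermark[perm]

U, S, Vt = np.linalg.svd(proj)
alpha = 0.01 * S.mean()                                         # embedding strength (arbitrary)
proj_w = U @ np.diag(S + alpha * scrambled) @ Vt                # watermarked projection

# Extraction with the key and the stored U, S, Vt (non-blind simplification).
S_w = np.diag(U.T @ proj_w @ Vt.T)
recovered = np.empty(64)
recovered[perm] = (S_w - S) / alpha                             # unscramble with the key
print("bit errors:", int(np.sum(np.round(recovered) != watermark)))
```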

  4. Feature-Based Quality Evaluation of 3d Point Clouds - Study of the Performance of 3d Registration Algorithms

    NASA Astrophysics Data System (ADS)

    Ridene, T.; Goulette, F.; Chendeb, S.

    2013-08-01

    The production of realistic 3D map databases is continuously growing. We studied an approach to 3D mapping database production based on the fusion of heterogeneous 3D data. To this end, a rigid registration process was performed. Before starting the modeling process, we need to validate the quality of the registration results, and this is one of the most difficult and open research problems. In this paper, we suggest a new method for evaluating 3D point clouds based on feature extraction and comparison with a 2D reference model. This method is based on two metrics: binary and fuzzy.

  5. PEG-diacrylate/hyaluronic acid semi-interpenetrating network compositions for 3D cell spreading and migration

    PubMed Central

    Lee, Ho-Joon; Sen, Atanu; Bae, Sooneon; Lee, Jeoung Soo; Webb, Ken

    2015-01-01

    To serve as artificial matrices for therapeutic cell transplantation, synthetic hydrogels must incorporate mechanisms enabling localized, cell-mediated degradation that allows cell spreading and migration. Previously, we have shown that hybrid semi-interpenetrating polymer networks (semi-IPNs) composed of hydrolytically degradable PEG-diacrylates (PEGdA), acrylate-PEG-GRGDS, and native hyaluronic acid (HA) support increased cell spreading relative to fully synthetic networks that is dependent on cellular hyaluronidase activity. This study systematically investigated the effects of PEGdA/HA semi-IPN network composition on 3D spreading of encapsulated fibroblasts, the underlying changes in gel structure responsible for this activity, and the ability of optimized gel formulations to support long-term cell survival and migration. Fibroblast spreading exhibited a biphasic response to HA concentration, required a minimum HA molecular weight, decreased with increasing PEGdA concentration, and was independent of hydrolytic degradation at early time points. Increased gel turbidity was observed in semi-IPNs, but not in copolymerized hydrogels containing methacrylated HA that did not support cell spreading; suggesting an underlying mechanism of polymerization-induced phase separation resulting in HA-enriched defects within the network structure. PEGdA/HA semi-IPNs were also able to support cell spreading at relatively high levels of mechanical properties (~10 kPa elastic modulus) compared to alternative hybrid hydrogels. In order to support long-term cellular remodeling, the degradation rate of the PEGdA component was optimized by preparing blends of three different PEGdA macromers with varying susceptibility to hydrolytic degradation. Optimized semi-IPN formulations supported long-term survival of encapsulated fibroblasts and sustained migration in a gel-within-gel encapsulation model. These results demonstrate that PEGdA/HA semi-IPNs provide dynamic microenvironments that

  6. Individual versus Collective Fibroblast Spreading and Migration: Regulation by Matrix Composition in 3-D Culture

    PubMed Central

    Miron-Mendoza, Miguel; Lin, Xihui; Ma, Lisha; Ririe, Peter; Petroll, W. Matthew

    2012-01-01

    Extracellular matrix (ECM) supplies both physical and chemical signals to cells and provides a substrate through which fibroblasts migrate during wound repair. To directly assess how ECM composition regulates this process, we used a nested 3D matrix model in which cell-populated collagen buttons were embedded in cell-free collagen or fibrin matrices. Time-lapse microscopy was used to record the dynamic pattern of cell migration into the outer matrices, and 3-D confocal imaging was used to assess cell connectivity and cytoskeletal organization. Corneal fibroblasts stimulated with PDGF migrated more rapidly into collagen as compared to fibrin. In addition, the pattern of fibroblast migration into fibrin and collagen ECMs was strikingly different. Corneal fibroblasts migrating into collagen matrices developed dendritic processes and moved independently, whereas cells migrating into fibrin matrices had a more fusiform morphology and formed an interconnected meshwork. A similar pattern was observed when using dermal fibroblasts, suggesting that this response in not unique to corneal cells. We next cultured corneal fibroblasts within and on top of standard collagen and fibrin matrices to assess the impact of ECM composition on the cell spreading response. Similar differences in cell morphology and connectivity were observed – cells remained separated on collagen but coalesced into clusters on fibrin. Cadherin was localized to junctions between interconnected cells, whereas fibronectin was present both between cells and at the tips of extending cell processes. Cells on fibrin matrices also developed more prominent stress fibers than those on collagen matrices. Importantly, these spreading and migration patterns were consistently observed on both rigid and compliant substrates, thus differences in ECM mechanical stiffness were not the underlying cause. Overall, these results demonstrate for the first time that ECM protein composition alone (collagen vs. fibrin) can

  7. Dynamic Assessment of Fibroblast Mechanical Activity during Rac-induced Cell Spreading in 3-D Culture

    PubMed Central

    Petroll, W. Matthew; Ma, Lisha; Kim, Areum; Ly, Linda; Vishwanath, Mridula

    2009-01-01

    The goal of this study was to determine the morphological and sub-cellular mechanical effects of Rac activation on fibroblasts within 3-D collagen matrices. Corneal fibroblasts were plated at low density inside 100 μm thick fibrillar collagen matrices and cultured for 1 to 2 days in serum-free media. Time-lapse imaging was then performed using Nomarski DIC. After an acclimation period, perfusion was switched to media containing PDGF. In some experiments, Y-27632 or blebbistatin were used to inhibit Rho-kinase (ROCK) or myosin II, respectively. PDGF activated Rac and induced cell spreading, which resulted in an increase in cell length, cell area, and the number of pseudopodial processes. Tractional forces were generated by extending pseudopodia, as indicated by centripetal displacement and realignment of collagen fibrils. Interestingly, the pattern of pseudopodial extension and local collagen fibril realignment was highly dependent upon the initial orientation of fibrils at the leading edge. Following ROCK or myosin II inhibition, significant ECM relaxation was observed, but small displacements of collagen fibrils continued to be detected at the tips of pseudopodia. Taken together, the data suggests that during Rac-induced cell spreading within 3-D matrices, there is a shift in the distribution of forces from the center to the periphery of corneal fibroblasts. ROCK mediates the generation of large myosin II-based tractional forces during cell spreading within 3-D collagen matrices, however residual forces can be generated at the tips of extending pseudopodia that are both ROCK and myosin II-independent. PMID:18452153

  8. Sensitivity of power and RMS delay spread predictions of a 3D indoor ray tracing model.

    PubMed

    Liu, Zhong-Yu; Guo, Li-Xin; Li, Chang-Long; Wang, Qiang; Zhao, Zhen-Wei

    2016-06-13

    This study investigates the sensitivity of a three-dimensional (3D) indoor ray tracing (RT) model for the use of the uniform theory of diffraction and geometrical optics in radio channel characterizations of indoor environments. Under complex indoor environments, RT-based predictions require detailed and accurate databases of indoor object layouts and the electrical characteristics of such environments. The aim of this study is to assist in selecting the appropriate level of accuracy required in indoor databases to achieve good trade-offs between database costs and prediction accuracy. This study focuses on the effects of errors in indoor environments on prediction results. In studying the effects of inaccuracies in geometry information (indoor object layout) on power coverage prediction, two types of artificial erroneous indoor maps are used. Moreover, a systematic analysis is performed by comparing the predictions with erroneous indoor maps and those with the original indoor map. Subsequently, the influence of random errors on RMS delay spread results is investigated. Given the effect of electrical parameters on the accuracy of the predicted results of the 3D RT model, the relative permittivity and conductivity of different fractions of an indoor environment are set with different values. Five types of computer simulations are considered, and for each type, the received power and RMS delay spread under the same circumstances are simulated with the RT model. PMID:27410335

  9. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both, virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…

  10. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    NASA Astrophysics Data System (ADS)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

    The Castle of Engelbourg was built at the beginning of the 13th century, at the top of the Schlossberg. It is situated on the territory of the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the valley of the Thur. Its strategic position was one of the causes of its systematic destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among them a section of the main tower about 7 m in diameter and 4 m wide lying on its side, a unique feature in the regional castral landscape. Visible from the valley, it was named "the Eye of the witch" and became a key attraction of the region. The site, which extends over approximately one hectare, has for several years been the object of numerous archaeological studies and is today at the heart of a project to enhance the value of the remains. Among the numerous planned works, a key objective was to produce a 3D model of the site in its current state, in other words an "as-captured" virtual model, exploitable from a cultural and tourist point of view as well as by scientists in archaeological research. The ICube/INSA lab team was responsible for producing this model, from data acquisition to delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate 2D archive data, stemming from series of former excavations, into this 3D model. The objectives of this project were the following: • Acquisition of 3D digital data of the site and 3D modelling • Digitization of the 2D archaeological data and integration in the 3D model • Implementation of a database connected to the 3D model • Virtual visit of the site The obtained results allowed us to visualize every 3D object individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail

  11. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Mostafavi, Mir Abolfazl; Wang, Chen

    2016-06-01

    Topological relations are fundamental for the qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by the Boundary Representation model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element in the 3×3 matrix records the details of connection through the common parts of two regions and the intersecting line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point clouds automatically.

  12. Unlocking the scientific potential of complex 3D point cloud dataset : new classification and 3D comparison methods

    NASA Astrophysics Data System (ADS)

    Lague, D.; Brodu, N.; Leroux, J.

    2012-12-01

    Ground based lidar and photogrammetric techniques are increasingly used to track the evolution of natural surfaces in 3D at an unprecedented resolution and precision. The range of applications encompasses many types of natural surfaces with different geometries and roughness characteristics (landslides, cliff erosion, river beds, bank erosion, ...). Unravelling surface change in these contexts requires comparing large point clouds in 2D or 3D. The most commonly used method in geomorphology is based on a 2D difference of the gridded point clouds. Yet this is hardly adapted to many 3D natural environments such as rivers (with horizontal beds and vertical banks), while gridding complex rough surfaces is a complex task. On the other hand, tools allowing 3D comparison are scarce and may require meshing the point clouds, which is difficult on rough natural surfaces. Moreover, existing 3D comparison tools do not provide an explicit calculation of confidence intervals that would factor in registration errors, roughness effects and instrument-related position uncertainties. To unlock this problem, we developed the first algorithm combining a 3D measurement of surface change directly on point clouds with an estimate of spatially variable confidence intervals (called M3C2). The method has two steps: (1) surface normal estimation and orientation in 3D at a scale consistent with the local roughness; (2) measurement of mean surface change along the normal direction with explicit calculation of a local confidence interval. Comparison with existing 3D methods based on a closest-point calculation demonstrates the higher precision of the M3C2 method when mm-scale changes need to be detected. The M3C2 method is also simple to use as it does not require surface meshing or gridding, and is not sensitive to missing data or changes in point density. We also present a 3D classification tool (CANUPO) for vegetation removal based on a new geometrical measure: the multi
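    A simplified M3C2-style measurement is sketched below, assuming a fixed normal scale and projection radius: normals come from local PCA on the first cloud, both clouds are projected onto the normal, and the mean change plus a simple confidence interval is reported. The spatially variable scale selection and registration-error term of the real M3C2 are omitted.

```python
# Simplified M3C2-style change measurement with a fixed normal/projection scale.
import numpy as np
from scipy.spatial import cKDTree

def m3c2_like(core_pts, cloud1, cloud2, radius=0.5):
    """Distance along the local normal and a ~95% confidence interval per core point."""
    t1, t2 = cKDTree(cloud1), cKDTree(cloud2)
    results = []
    for p in core_pts:
        n1 = cloud1[t1.query_ball_point(p, radius)]
        n2 = cloud2[t2.query_ball_point(p, radius)]
        if len(n1) < 5 or len(n2) < 5:
            results.append((np.nan, np.nan))
            continue
        # Normal = eigenvector of the local covariance with the smallest eigenvalue.
        normal = np.linalg.eigh(np.cov((n1 - n1.mean(0)).T))[1][:, 0]
        d1, d2 = (n1 - p) @ normal, (n2 - p) @ normal
        change = d2.mean() - d1.mean()
        ci = 1.96 * np.sqrt(d1.var() / len(d1) + d2.var() / len(d2))
        results.append((change, ci))
    return np.array(results)

rng = np.random.default_rng(5)
c1 = np.c_[rng.uniform(0, 10, (20000, 2)), rng.normal(0.00, 0.01, 20000)]   # epoch 1
c2 = np.c_[rng.uniform(0, 10, (20000, 2)), rng.normal(0.05, 0.01, 20000)]   # epoch 2, +5 cm
cores = np.c_[rng.uniform(1, 9, (5, 2)), np.zeros(5)]
print(m3c2_like(cores, c1, c2))   # ~0.05 change (sign follows normal orientation), mm-level CI
```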

  13. Filtering method for 3D laser scanning point cloud

    NASA Astrophysics Data System (ADS)

    Liu, Da; Wang, Li; Hao, Yuncai; Zhang, Jun

    2015-10-01

    In recent years, with the rapid development of hardware and software for three-dimensional model acquisition, three-dimensional laser scanning technology has been utilized in various fields, especially in space exploration. Point cloud filtering is very important before using the data. In this paper, considering both processing quality and computing speed, an improved mean-shift point cloud filtering method is proposed. Firstly, by analyzing the relevance of the normal vectors between the point being processed and its neighboring points, the iterative neighborhood of the mean shift is selected dynamically, and high-frequency noise is suppressed. Secondly, the normal vector of the point being processed is updated accordingly. Finally, an updated position is calculated for each point, and each point is moved along its normal vector to the updated position. The experimental results show that large features are retained while small sharp features are also preserved for objects of different sizes and shapes, so the target feature information is protected precisely. The computational complexity of the proposed method is not high; it delivers high-precision results at fast speed, so it is very suitable for space applications. It can also be utilized in civil applications, such as large object measurement, industrial measurement and car navigation. In the future, filtering with the help of point intensity will be further explored.
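    A simplified normal-aware smoothing step in the spirit of the method is sketched below: neighbours whose normals agree with the current point are kept and the point is moved along its own normal towards their mean. The exact mean-shift iteration and dynamic neighbourhood selection of the paper are not reproduced; the radius and thresholds are illustrative.

```python
# Simplified normal-aware smoothing step; radii and thresholds are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def pca_normals(pts, k=12):
    _, idx = cKDTree(pts).query(pts, k=k)
    normals = np.empty_like(pts)
    for i, nb in enumerate(idx):
        q = pts[nb] - pts[nb].mean(0)
        normals[i] = np.linalg.eigh(q.T @ q)[1][:, 0]   # smallest-eigenvalue direction
    return normals

def filter_step(pts, normals, radius=0.15, min_dot=0.8):
    tree, out = cKDTree(pts), pts.copy()
    for i, p in enumerate(pts):
        nb = tree.query_ball_point(p, radius)
        keep = [j for j in nb if abs(normals[j] @ normals[i]) > min_dot]
        if len(keep) < 3:
            continue
        # Move only along the local normal so tangential (sharp) detail is kept.
        out[i] = p + ((pts[keep] - p) @ normals[i]).mean() * normals[i]
    return out

rng = np.random.default_rng(6)
noisy = np.c_[rng.uniform(0, 2, (3000, 2)), rng.normal(0.0, 0.02, 3000)]   # noisy plane
smoothed = filter_step(noisy, pca_normals(noisy))
print("z std before/after:", round(noisy[:, 2].std(), 4), round(smoothed[:, 2].std(), 4))
```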

  14. Numerical 3D models support two distinct hydrothermal circulation systems at fast spreading ridges

    NASA Astrophysics Data System (ADS)

    Hasenclever, Jörg; Theissen-Krah, Sonja; Rüpke, Lars

    2013-04-01

    We present 3D numerical calculations of hydrothermal fluid flow at fast spreading ridges. The setup of the 3D models is based on our previous 2D studies, in which we have coupled numerical models for crustal accretion and hydrothermal fluid flow. One result of these calculations is a crustal permeability field that leads to a thermal structure in the crust that matches seismic tomography data of the East Pacific Rise (EPR). The 1000°C isotherm obtained from the 2D results is now used as the lower boundary of the 3D model domain, while the upper boundary is a smoothed bathymetry of the EPR. The same permeability field as in the 2D models is used, with the highest permeability at the ridge axis and a decrease with both depth and distance to the ridge. Permeability is also reduced linearly between 600 and 1000°C. Using a newly developed parallel finite element code written in Matlab that solves for thermal evolution, fluid pressure and Darcy flow, we simulate the flow patterns of hydrothermal circulation in a segment of 5000m along-axis, 10000m across-axis and up to 5000m depth. We observe two distinct hydrothermal circulation systems: An on-axis system forming a series of vents with a spacing ranging from 100 to 500m that is recharged by nearby (100-200m) downflows on both sides of the ridge axis. Simultaneously a second system with much broader extensions both laterally and vertically exists off-axis. It is recharged by fluids intruding between 1500m and 5000m off-axis and sampling both upper and lower crust. These fluids are channeled in the deepest and hottest regions with high permeability and migrate up-slope following the 600°C isotherm until reaching the edge of the melt lens. Depending on the width of the melt lens these off-axis fluids either merge with the on-axis hydrothermal system or form separate vents. We observe separate off-axis vent fields if the magma lens half-width exceeds 1000m and confluence of both systems for half-widths smaller than 500m. For

  15. Point spread function engineering with multiphoton SPIFI

    NASA Astrophysics Data System (ADS)

    Wernsing, Keith A.; Field, Jeffrey J.; Domingue, Scott R.; Allende-Motz, Alyssa M.; DeLuca, Keith F.; Levi, Dean H.; DeLuca, Jennifer G.; Young, Michael D.; Squier, Jeff A.; Bartels, Randy A.

    2016-03-01

    MultiPhoton SPatIal Frequency modulated Imaging (MP-SPIFI) has recently demonstrated the ability to simultaneously obtain super-resolved images in both coherent and incoherent scattering processes -- namely, second harmonic generation and two-photon fluorescence, respectively [1]. In our previous analysis, we considered image formation produced by the zero and first diffracted orders from the SPIFI modulator. However, the modulator is a binary amplitude mask, and therefore produces multiple diffracted orders. In this work, we extend our analysis to image formation in the presence of higher diffracted orders. We find that tuning the mask duty cycle offers a measure of control over the shape of super-resolved point spread functions in an MP-SPIFI microscope.

  16. Automated Mosaicking of Multiple 3d Point Clouds Generated from a Depth Camera

    NASA Astrophysics Data System (ADS)

    Kim, H.; Yoon, W.; Kim, T.

    2016-06-01

    In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data by the ToF (Time of Flight) method and intensity data from the intensity of the returned signal. The depth camera used in this paper was an SR4000 from MESA Imaging. This camera generates a depth map and an intensity map of 176 x 144 pixels. The generated depth map stores physical depth data with millimetre precision. The generated intensity map contains texture data with considerable noise. We used the intensity (texture) maps for extracting tiepoints and the depth maps for assigning z coordinates to the tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data per rotation. In the second step, we estimated 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps. They were converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single depth map mosaic was calculated and depth values per mosaic pixel were determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
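    The 3D similarity transformation estimated from tiepoints in the second step can be sketched with the standard closed-form (Umeyama/Horn) solution, as below; the depth-map resampling by ray tracing is not shown, and the toy tiepoints are synthetic.

```python
# Closed-form 3D similarity (scale + rotation + translation) from 3D tiepoints.
import numpy as np

def similarity_3d(src, dst):
    """Umeyama closed-form estimate: dst ~= s * src @ R.T + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0                                 # avoid reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(0).sum()
    t = mu_d - s * mu_s @ R.T
    return s, R, t

# Toy tiepoints: a known scale/rotation/translation plus measurement noise.
rng = np.random.default_rng(7)
src = rng.uniform(-2, 2, (50, 3))
angle = np.radians(15.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = 1.2 * src @ R_true.T + np.array([0.3, -0.1, 0.5]) + rng.normal(0, 0.005, (50, 3))

s, R, t = similarity_3d(src, dst)
print("scale:", round(s, 3), "translation:", np.round(t, 3))
```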

  17. Deep Herschel PACS point spread functions

    NASA Astrophysics Data System (ADS)

    Bocchio, M.; Bianchi, S.; Abergel, A.

    2016-06-01

    The knowledge of the point spread function (PSF) of imaging instruments represents a fundamental requirement for astronomical observations. The Herschel PACS PSFs delivered by the instrument control centre are obtained from observations of the Vesta asteroid, which provides a characterisation of the central part and, therefore, excludes fainter features. In many cases, however, information on both the core and wings of the PSFs is needed. With this aim, we combine Vesta and Mars dedicated observations and obtain PACS PSFs with an unprecedented dynamic range (~10⁶) at slow and fast scan speeds for the three photometric bands. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. FITS files of our PACS PSFs (Fig. 2) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/591/A117

  18. Novel volumetric 3D display based on point light source optical reconstruction using multi focal lens array

    NASA Astrophysics Data System (ADS)

    Lee, Jin su; Lee, Mu young; Kim, Jun oh; Kim, Cheol joong; Won, Yong Hyub

    2015-03-01

    Generally, volumetric 3D display panels produce volume-filling three-dimensional images. This paper discusses a volumetric 3D display based on the construction of periodical point light sources (PLSs) using a multi-focal lens array (MFLA). The voxels of discrete 3D images are formed in the air via the construction of point light sources emitted by the multi-focal lens array. This system consists of a parallel beam, a spatial light modulator (SLM), a lens array, and a polarizing filter. The multi-focal lens array is made with UV adhesive polymer droplet control using a dispensing machine. The MFLA consists of a 20x20 circular lens array. Each lens aperture of the MFLA is 300um on average. The polarizing filter is placed after the SLM and the MFLA to operate in phase-mostly mode. By the point spread function, the PLSs of the system are located at the focal length of each lens of the MFLA. The display can also provide moving parallax and relatively high resolution. However, it has a limited viewing angle and crosstalk caused by the properties of each lens. In our experiment, we present the letters `C', `O', `DE' and a ball's surface at different depth locations. The depth could be seen clearly as the CCD camera was moved along the transverse axis of the display system. From our results, we expect that varifocal lenses such as EWOD and LC lenses can be applied to real-time volumetric 3D display systems.

  19. A Multiscale Constraints Method Localization of 3D Facial Feature Points

    PubMed Central

    Li, Hong-an; Zhang, Yongxin; Li, Zhanli; Li, Huilin

    2015-01-01

    It is an important task to locate facial feature points due to the widespread application of 3D human face models in medical fields. In this paper, we propose a 3D facial feature point localization method that combines the relative angle histograms with multiscale constraints. Firstly, the relative angle histogram of each vertex in a 3D point distribution model is calculated; then the cluster set of the facial feature points is determined using the cluster algorithm. Finally, the feature points are located precisely according to multiscale integral features. The experimental results show that the feature point localization accuracy of this algorithm is better than that of the localization method using the relative angle histograms. PMID:26539244

  1. Bacteria Experiment May Point Way to Slow Zika's Spread

    MedlinePlus

    ... nlm.nih.gov/medlineplus/news/fullstory_158661.html Bacteria Experiment May Point Way to Slow Zika's Spread ... 2016 (HealthDay News) -- Experiments in mosquitoes suggest that bacteria may help curb the spread of the Zika ...

  2. Fast Probabilistic Fusion of 3d Point Clouds via Occupancy Grids for Scene Classification

    NASA Astrophysics Data System (ADS)

    Kuhn, Andreas; Huang, Hai; Drauschke, Martin; Mayer, Helmut

    2016-06-01

    High resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) allow for cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi View Stereo (MVS) the automatic generation of huge amounts of 3D points with a relative accuracy in the centimeter range is possible. Applications such as semantic classification have a need for accurate 3D point clouds, but do not benefit from an extremely high resolution/density. In this paper, we, therefore, propose a fast fusion of high resolution 3D point clouds based on occupancy grids. The result is used for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be considered in the classification process when a per point belief is determined in the fusion process. To this end, we employ an octree-based fusion which allows for the derivation of outlier probabilities. The probabilities give a belief for every 3D point, which is essential for the semantic classification to consider measurement noise. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method can reduce runtime as well as improve classification accuracy and offers high scalability for large datasets.
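
    As a much-simplified illustration of the fusion idea, the sketch below voxelises a cloud on a flat grid (rather than an octree) and attaches a crude per-point belief that grows with the number of raw observations supporting each voxel. The grid size, the saturating belief model and all identifiers are assumptions, not the authors' method.

      import numpy as np

      def fuse_point_cloud(points, voxel_size=0.25, min_support=3):
          """Fuse an (N, 3) point cloud into voxel centroids with a per-point belief."""
          pts = np.asarray(points, dtype=float)
          keys = np.floor((pts - pts.min(axis=0)) / voxel_size).astype(np.int64)
          # Group points by voxel via a lexicographic sort of the integer keys.
          order = np.lexsort(keys.T)
          keys_sorted, pts_sorted = keys[order], pts[order]
          boundaries = np.any(np.diff(keys_sorted, axis=0) != 0, axis=1)
          starts = np.concatenate(([0], np.nonzero(boundaries)[0] + 1, [len(pts)]))
          centroids, beliefs = [], []
          for a, b in zip(starts[:-1], starts[1:]):
              support = b - a                      # number of raw points in the voxel
              centroids.append(pts_sorted[a:b].mean(axis=0))
              # Simple saturating belief: more independent observations -> higher belief.
              beliefs.append(1.0 - np.exp(-support / min_support))
          return np.array(centroids), np.array(beliefs)

      fused, belief = fuse_point_cloud(np.random.rand(100000, 3) * 10.0)
      print(fused.shape, round(belief.min(), 2), round(belief.max(), 2))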

  3. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world while watching the screen of a see-through 3D viewer. The goal of our research is to build a display system in which, when users see the real world through the mobile viewer, the system presents virtual 3D images floating in the air, and the observers can touch and interact with these floating images, for example so that children can model virtual clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by an improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest method using a single camera rather than a stereo camera, and the results of our viewer system.

  4. Edge features extraction from 3D laser point cloud based on corresponding images

    NASA Astrophysics Data System (ADS)

    Li, Xin-feng; Zhao, Zi-ming; Xu, Guo-qing; Geng, Yan-long

    2013-09-01

    An extraction method for edge features from a 3D laser point cloud based on corresponding images is proposed. After registration of the point cloud and the corresponding image, sub-pixel edges are extracted from the image using a gray-moment algorithm. The sub-pixel edges are then projected onto the point cloud by fitting scan-lines, and finally the edge features are obtained by linking the crossing points. The experimental results demonstrate that the method guarantees accurate, fine extraction.

  5. 3D campus modeling using LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Yoshii, Satoshi; Funatsu, Yukihiro; Takemata, Kazuya

    2012-10-01

    The importance of having a 3D urban city model is recognized in many applications, such as risk and disaster management offices, city planning and development offices, and others. As an example of an urban model, we manually reconstructed the 3D KIT campus in this study by utilizing airborne LiDAR point cloud data. The automatic extraction of building shapes was left for future work.

  6. Human Body 3D Posture Estimation Using Significant Points and Two Cameras

    PubMed Central

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422
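
    A minimal sketch of the kind of per-pixel features described above (normalised colour differences plus the included angle between a pixel's RGB vector and the background's) feeding a linear SVM. The exact feature definitions, the scikit-learn classifier and all names are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from sklearn.svm import LinearSVC

      def pixel_features(frame, background):
          """Per-pixel features: normalised colour differences + included colour angle."""
          f = frame.reshape(-1, 3).astype(float)
          b = background.reshape(-1, 3).astype(float)
          diff = np.abs(f - b) / (f + b + 1e-6)            # normalised colour differences
          cosang = (f * b).sum(axis=1) / (np.linalg.norm(f, axis=1) *
                                          np.linalg.norm(b, axis=1) + 1e-6)
          angle = np.arccos(np.clip(cosang, -1.0, 1.0))    # included angle, less shadow-sensitive
          return np.column_stack([diff, angle])

      # Tiny synthetic demo: an inserted "body" region differs from the background.
      rng = np.random.default_rng(1)
      background = rng.integers(0, 256, (32, 32, 3))
      frame = background.copy()
      frame[8:24, 8:24] = rng.integers(0, 256, (16, 16, 3))
      labels = np.zeros((32, 32), dtype=int)
      labels[8:24, 8:24] = 1
      clf = LinearSVC(dual=False).fit(pixel_features(frame, background), labels.ravel())
      mask = clf.predict(pixel_features(frame, background)).reshape(32, 32)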

  7. Nonrigid point registration for 2D curves and 3D surfaces and its various applications

    NASA Astrophysics Data System (ADS)

    Wang, Hesheng; Fei, Baowei

    2013-06-01

    A nonrigid B-spline-based point-matching (BPM) method is proposed to match dense surface points. The method solves both the point correspondence and nonrigid transformation without features extraction. The registration method integrates a motion model, which combines a global transformation and a B-spline-based local deformation, into a robust point-matching framework. The point correspondence and deformable transformation are estimated simultaneously by fuzzy correspondence and by a deterministic annealing technique. Prior information about global translation, rotation and scaling is incorporated into the optimization. A local B-spline motion model decreases the degrees of freedom for optimization and thus enables the registration of a larger number of feature points. The performance of the BPM method has been demonstrated and validated using synthesized 2D and 3D data, mouse MRI and micro-CT images. The proposed BPM method can be used to register feature point sets, 2D curves, 3D surfaces and various image data.

  8. The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

    NASA Astrophysics Data System (ADS)

    Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard

    2010-04-01

    The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point-cloud) in GPS-denied regions from a sequence of co-boresighted visible and 3D LIDAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for various spatial and geometric alignments of 3D imagery, producing a set of rotation and translation transformations between two 3D images. ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point-cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range and intensity estimates of imagery collected during urban terrain mapping using a co-boresighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point-clouds for the drive-through data will also be presented.
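
    The registration step relies on ICP. Below is a textbook point-to-point ICP sketch (closest-point correspondences from a k-d tree, SVD-based rigid transform, iteration until the mean residual stalls). It illustrates the rotation/translation estimation referred to above but is not the authors' specific variation, and all parameter values are assumptions.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(src, dst):
          """Least-squares R, t with dst ~ R @ src + t for matched (N, 3) arrays."""
          mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
          H = (dst - mu_d).T @ (src - mu_s)
          U, _, Vt = np.linalg.svd(H)
          D = np.eye(3)
          D[2, 2] = 1.0 if np.linalg.det(U @ Vt) > 0 else -1.0   # avoid reflections
          R = U @ D @ Vt
          return R, mu_d - R @ mu_s

      def icp(source, target, iterations=30, tol=1e-6):
          """Iteratively align source to target; returns the accumulated R, t."""
          src = np.asarray(source, dtype=float).copy()
          tgt = np.asarray(target, dtype=float)
          tree = cKDTree(tgt)
          R_total, t_total = np.eye(3), np.zeros(3)
          prev_err = np.inf
          for _ in range(iterations):
              dists, idx = tree.query(src)                # closest-point correspondences
              R, t = best_rigid_transform(src, tgt[idx])
              src = src @ R.T + t                         # apply the incremental update
              R_total, t_total = R @ R_total, R @ t_total + t
              err = dists.mean()
              if abs(prev_err - err) < tol:               # stop when the error stalls
                  break
              prev_err = err
          return R_total, t_total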

  9. Adaptive noise suppression technique for dense 3D point cloud reconstructions from monocular vision

    NASA Astrophysics Data System (ADS)

    Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    Mobile vision-based autonomous vehicles use video frames from multiple angles to construct a 3D model of their environment. In this paper, we present a post-processing adaptive noise suppression technique to enhance the quality of the computed 3D model. Our near real-time reconstruction algorithm uses each pair of frames to compute the disparities of tracked feature points, translating the distance in pixels that a feature has traveled within the frame into real-world depth values. As a result, these tracked feature points are plotted to form a dense and colorful point cloud. Due to the inevitable small vibrations of the camera and the mismatches within the feature tracking algorithm, the point cloud model contains a significant amount of misplaced points appearing as noise. The proposed noise suppression technique utilizes the spatial information of each point to unify points of similar texture and color into objects while simultaneously removing noise not associated with any nearby objects. The noise filter combines all the points of similar depth into 2D layers throughout the point cloud model. By applying erosion and dilation techniques we are able to eliminate the unwanted floating points while retaining points of larger objects. To reverse the compression process, we transform the 2D layers back into the 3D model, allowing points to return to their original positions without the attached noise components. We evaluate the resulting noiseless point cloud by utilizing an unmanned ground vehicle to perform obstacle avoidance tasks. The contribution of the noise suppression technique is measured by evaluating the accuracy of the 3D reconstruction.
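
    A minimal sketch of the layer-wise clean-up described above: points are binned into depth layers, each layer is rasterised into a 2D occupancy image, a morphological opening (erosion followed by dilation) removes isolated pixels, and only surviving points are kept. The bin sizes and the structuring element are illustrative assumptions.

      import numpy as np
      from scipy import ndimage

      def suppress_floating_points(points, depth_bin=0.5, cell=0.05):
          """Keep only points whose layer-wise 2D footprint survives an opening."""
          pts = np.asarray(points, dtype=float)
          keep = np.zeros(len(pts), dtype=bool)
          layer_ids = np.floor(pts[:, 2] / depth_bin).astype(int)
          cols = np.floor(pts[:, 0] / cell).astype(int)
          rows = np.floor(pts[:, 1] / cell).astype(int)
          cols -= cols.min()
          rows -= rows.min()
          for layer in np.unique(layer_ids):
              sel = layer_ids == layer
              img = np.zeros((rows.max() + 1, cols.max() + 1), dtype=bool)
              img[rows[sel], cols[sel]] = True
              # Erosion followed by dilation (opening) removes isolated noise pixels
              # while retaining pixels that belong to larger connected objects.
              cleaned = ndimage.binary_opening(img, structure=np.ones((3, 3)))
              keep[sel] = cleaned[rows[sel], cols[sel]]
          return pts[keep]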

  10. Non-Iterative Rigid 2D/3D Point-Set Registration Using Semidefinite Programming

    NASA Astrophysics Data System (ADS)

    Khoo, Yuehaw; Kapoor, Ankur

    2016-07-01

    We describe a convex programming framework for pose estimation in 2D/3D point-set registration with unknown point correspondences. We give two mixed-integer nonlinear program (MINP) formulations of the 2D/3D registration problem when there are multiple 2D images, and propose convex relaxations for both of the MINPs to semidefinite programs (SDP) that can be solved efficiently by interior point methods. Our approach to the 2D/3D registration problem is non-iterative in nature as we jointly solve for pose and correspondence. Furthermore, these convex programs can readily incorporate feature descriptors of points to enhance registration results. We prove that the convex programs exactly recover the solution to the original nonconvex 2D/3D registration problem under noiseless condition. We apply these formulations to the registration of 3D models of coronary vessels to their 2D projections obtained from multiple intra-operative fluoroscopic images. For this application, we experimentally corroborate the exact recovery property in the absence of noise and further demonstrate robustness of the convex programs in the presence of noise.

  11. Contextual Classification of Point Cloud Data by Exploiting Individual 3d Neighbourhoods

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B.

    2015-03-01

    The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have extensively, but separately been investigated in recent years, the respective connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.

  12. High-numerical-aperture microscopy with a rotating point spread function.

    PubMed

    Yu, Zhixian; Prasad, Sudhakar

    2016-07-01

    Rotating point spread function (PSF) microscopy via spiral phase engineering can localize point sources over large focal depths in a snapshot mode. The present work gives an approximate vector-field analysis of an improved rotating PSF design that encodes both the 3D location and polarization state of a monochromatic point dipole emitter for high-numerical-aperture microscopy. By examining the angle of rotation and the spatial form of the PSF, one can jointly localize point sources and determine the polarization state of light emitted by them over a 3D field in a single snapshot. Results of numerical simulations of noisy data frames under Poisson shot noise conditions and the errors in the recovery of 3D location and dipole orientation for a single point source are discussed. PMID:27409707

  13. 3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models

    NASA Astrophysics Data System (ADS)

    Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.

    2013-07-01

    Cultural heritage managers in general and information users in particular are not usually accustomed to dealing with high-technology hardware and software. On the contrary, information providers of metric surveys are most of the time applying the latest developments to real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models to bridge the gap between information users and information providers regarding the management of the information that users and providers share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to handle, manage and easily create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of true documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of the new user-friendly software for managing virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.

  14. Database guided detection of anatomical landmark points in 3D images of the heart

    NASA Astrophysics Data System (ADS)

    Karavides, Thomas; Esther Leung, K. Y.; Paclik, Pavel; Hendriks, Emile A.; Bosch, Johan G.

    2010-03-01

    Automated landmark detection may prove invaluable in the analysis of real-time three-dimensional (3D) echocardiograms. By detecting 3D anatomical landmark points, the standard anatomical views can be extracted automatically in apically acquired 3D ultrasound images of the left ventricle, for better standardization of visualization and objective diagnosis. Furthermore, the landmarks can serve as an initialization for other analysis methods, such as segmentation. The described algorithm applies landmark detection in perpendicular planes of the 3D dataset. The landmark detection exploits a large database of expert-annotated images, using an extensive set of Haar features for fast classification. The detection is performed using two cascades of Adaboost classifiers in a coarse-to-fine scheme. The method is evaluated by measuring the distance between detected and manually indicated landmark points in 25 patients. The method can detect landmarks accurately in the four-chamber view (apex: 7.9+/-7.1mm, septal mitral valve point: 5.6+/-2.7mm, lateral mitral valve point: 4.0+/-2.6mm) and two-chamber view (apex: 7.1+/-6.7mm, anterior mitral valve point: 5.8+/-3.5mm, inferior mitral valve point: 4.5+/-3.1mm). The results compare well to those reported by others.

  15. Melting points and chemical bonding properties of 3d transition metal elements

    NASA Astrophysics Data System (ADS)

    Takahara, Wataru

    2014-08-01

    The melting points of 3d transition metal elements show an unusual local minimal peak at manganese across Period 4 in the periodic table. The chemical bonding properties of scandium, titanium, vanadium, chromium, manganese, iron, cobalt, nickel and copper are investigated by the DV-Xα cluster method. The melting points are found to correlate with the bond overlap populations. The chemical bonding nature therefore appears to be the primary factor governing the melting points.

  16. 3-D Printers Spread from Engineering Departments to Designs across Disciplines

    ERIC Educational Resources Information Center

    Chen, Angela

    2012-01-01

    The ability to print a 3-D object may sound like science fiction, but it has been around in some form since the 1980s. Also called rapid prototyping or additive manufacturing, the idea is to take a design from a computer file and forge it into an object, often in flat cross-sections that can be assembled into a larger whole. While the printer on…

  17. Dense 3d Point Cloud Generation from Uav Images from Image Matching and Global Optimization

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of UAV images. In this paper, we aim to apply image matching for the generation of local point clouds over a pair or group of images and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object space-based matching technique and an image space-based matching technique, and compared the performance of the two. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) to apply image matching, defining local match regions in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From experiments, we confirmed that 3D point clouds were generated successfully through image matching and global optimization. However, the results also revealed some limitations. In the case of image space-based matching, we observed some blanks in the 3D point clouds. In the case of object space-based matching, we observed more blunders and noisier local height variations than with image space-based matching. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing, and we will further test our approach with more precise orientation parameters.
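
    The object space-based matching can be illustrated with a short sketch: for one horizontal position a list of candidate heights is tested, each candidate is projected into two images by caller-supplied projection functions, and normalised grey-level correlation selects the best height. The function names, window size and projection interface are assumptions, not the authors' code.

      import numpy as np

      def ncc(a, b):
          """Normalised cross-correlation of two equally sized grey-level windows."""
          a = a - a.mean()
          b = b - b.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return float((a * b).sum() / denom) if denom > 0 else -1.0

      def patch(image, col, row, half=5):
          """Square grey-level window centred on a (possibly sub-pixel) image point."""
          r, c = int(round(row)), int(round(col))
          return image[r - half:r + half + 1, c - half:c + half + 1]

      def best_height(x, y, candidate_heights, img_a, img_b, project_a, project_b):
          """project_* map an object point (x, y, z) to (col, row) in the image."""
          best_z, best_score = None, -1.0
          for z in candidate_heights:
              ca, ra = project_a(x, y, z)
              cb, rb = project_b(x, y, z)
              wa, wb = patch(img_a, ca, ra), patch(img_b, cb, rb)
              if wa.shape != wb.shape or wa.size == 0:
                  continue                    # candidate projects outside an image
              score = ncc(wa.astype(float), wb.astype(float))
              if score > best_score:
                  best_z, best_score = z, score
          return best_z, best_score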

  18. Laser point cloud diluting and refined 3D reconstruction fusing with digital images

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Zhang, Jianqing

    2007-06-01

    This paper presents a method that combines image-based modeling techniques and laser scanning data to rebuild a realistic 3D model. First, an image pair is used to build a relative 3D model of the object, which is then registered to the laser coordinate system. The laser points are projected onto one of the images and the feature lines are extracted from that image. The projected 2D laser points are then fitted to lines in the image, and their corresponding 3D points are constrained to lines in the 3D laser space in order to keep the features of the model. A TIN is built and the redundant points that do not affect the curvature of their neighborhood areas are removed. The thinned laser point cloud is used to reconstruct the geometric model of the object, onto which the texture of the corresponding image is projected. Experimental results show the process to be feasible and effective, and the final model closely resembles the real object. This method reduces the quantity of data while preserving the features of the model, and its effect is evident.

  19. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-expense high-efficient image collection process and the rich 3D and texture information presented in the images, a combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scene has a promising market for future commercial usage like urban planning or first responders. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying the 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. Then the point clouds are refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, there are translation, rotation and scale differences existing. To figure out these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix presents the parameters describing the translation, rotation and scale requirements. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of test data because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups could be combined with each other to build a

  20. Graph-Based Compression of Dynamic 3D Point Cloud Sequences.

    PubMed

    Thanou, Dorina; Chou, Philip A; Frossard, Pascal

    2016-04-01

    This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes. As temporally successive point cloud frames share some similarities, motion estimation is key to effective compression of these sequences. It, however, remains a challenging problem as the point cloud frames have varying numbers of points without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the point clouds as signals on the vertices of the graphs. We then cast motion estimation as a feature-matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for removing the temporal redundancy in the predictive coding of the 3D positions and the color characteristics of the point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion between consecutive frames. Moreover, motion estimation is shown to bring a significant improvement in terms of the overall compression performance of the sequence. To the best of our knowledge, this is the first paper that exploits both the spatial correlation inside each frame (through the graph) and the temporal correlation between the frames (through the motion estimation) to compress the color and the geometry of 3D point cloud sequences in an efficient way. PMID:26891486

  1. Dense point-cloud creation using superresolution for a monocular 3D reconstruction system

    NASA Astrophysics Data System (ADS)

    Diskin, Yakov; Asari, Vijayan K.

    2012-05-01

    We present an enhanced 3D reconstruction algorithm designed to support an autonomously navigated unmanned aerial system (UAS). The algorithm presented focuses on the 3D reconstruction of a scene using only a single moving camera. In this way, the system can be used to construct a point cloud model of its unknown surroundings. The original reconstruction process, resulting in a point cloud, was computed based on feature matching and depth triangulation analysis. Although dense, this original model was hindered by its low disparity resolution. As feature points were matched from frame to frame, the resolution of the input images and the discrete nature of disparities limited the depth computations within a scene. With the recent addition of a nonlinear super resolution preprocessing step, the accuracy of the point cloud, which relies on precise disparity measurement, has significantly increased. Using a pixel-by-pixel approach, the super resolution technique computes the phase congruency of each pixel's neighborhood and produces nonlinearly interpolated high resolution input frames. Thus, a feature point traverses a more precisely resolved discrete disparity. Also, the quantity of points within the 3D point cloud model is significantly increased, since the number of features is directly proportional to the resolution and high frequencies of the input image. The contribution of the newly added preprocessing steps is measured by evaluating the density and accuracy of the reconstructed point cloud for autonomous navigation and mapping tasks within unknown environments.

  2. Comparison Between Two Generic 3d Building Reconstruction Approaches - Point Cloud Based VS. Image Processing Based

    NASA Astrophysics Data System (ADS)

    Dahlke, D.; Linkiewicz, M.

    2016-06-01

    This paper compares two generic approaches for the reconstruction of buildings. Synthesized and real oblique and vertical aerial imagery is transformed on the one hand into a dense photogrammetric 3D point cloud and on the other hand into photogrammetric 2.5D surface models depicting a scene from different cardinal directions. One approach evaluates the 3D point cloud statistically in order to extract the hull of structures, while the other approach makes use of salient line segments in 2.5D surface models, so that the hull of 3D structures can be recovered. With orders of magnitude more analyzed 3D points, the point-cloud-based approach is an order of magnitude more accurate for the synthetic dataset than the lower-dimensional, but therefore orders of magnitude faster, image-processing-based approach. For real-world data the difference in accuracy between the two approaches is no longer significant. In both cases the reconstructed polyhedra supply information about their inherent semantics and can be used for subsequent, more differentiated semantic annotation by exploiting texture information.

  3. Feature relevance assessment for the semantic interpretation of 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Jutzi, B.; Mallet, C.

    2013-10-01

    The automatic analysis of large 3D point clouds represents a crucial task in photogrammetry, remote sensing and computer vision. In this paper, we propose a new methodology for the semantic interpretation of such point clouds which involves feature relevance assessment in order to reduce both processing time and memory consumption. Given a standard benchmark dataset with 1.3 million 3D points, we first extract a set of 21 geometric 3D and 2D features. Subsequently, we apply a classifier-independent ranking procedure which involves a general relevance metric in order to derive compact and robust subsets of versatile features which are generally applicable for a large variety of subsequent tasks. This metric is based on 7 different feature selection strategies and thus addresses different intrinsic properties of the given data. For the example of semantically interpreting 3D point cloud data, we demonstrate the great potential of smaller subsets consisting of only the most relevant features with 4 different state-of-the-art classifiers. The results reveal that, instead of including as many features as possible in order to compensate for lack of knowledge, a crucial task such as scene interpretation can be carried out with only few versatile features and even improved accuracy.

  4. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms for applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in a database are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by a Light Detection and Ranging (LiDAR) system, because of its efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in the database. The main goal of the data encoding is that the models in the database and the input point clouds are encoded consistently. First, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Second, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors are extracted as spatial histograms and used in the 3D model retrieval system. For data retrieval, models are retrieved by matching the encoding coefficients of point clouds and building models. In the experiments, a database of about 900,000 3D models collected from the Internet is used for the evaluation of data retrieval. The results of the proposed method show a clear superiority
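
    The first encoding step, generating a top-view depth image of a roof, can be sketched as a simple rasterisation. The grid resolution and the keep-the-highest-return rule below are illustrative assumptions, not the parameters of the paper.

      import numpy as np

      def topview_depth_image(points, cell=0.5):
          """Rasterise an (N, 3) roof point cloud into a top-view depth image."""
          pts = np.asarray(points, dtype=float)
          cols = np.floor((pts[:, 0] - pts[:, 0].min()) / cell).astype(int)
          rows = np.floor((pts[:, 1] - pts[:, 1].min()) / cell).astype(int)
          img = np.full((rows.max() + 1, cols.max() + 1), np.nan)
          for r, c, z in zip(rows, cols, pts[:, 2]):
              # Keep the highest return per cell, i.e. the roof surface.
              if np.isnan(img[r, c]) or z > img[r, c]:
                  img[r, c] = z
          return img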

  5. Facets : a Cloudcompare Plugin to Extract Geological Planes from Unstructured 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Dewez, T. J. B.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J.

    2016-06-01

    Geological planar facets (stratification, fault, joint…) are key features for unravelling the tectonic history of a rock outcrop or appreciating the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time consuming, requires some degree of censoring (i.e. refusing to measure some features judged unimportant at the time), is not always possible for fractures higher up on the outcrop, and is somewhat hazardous. 3D virtual geological outcrops hold the potential to alleviate these issues, but a convenient software environment for efficiently segmenting massive 3D point clouds into individual planar facets was lacking. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/ ) implemented to perform planar facet extraction, calculate their dip and dip direction (i.e. azimuth of steepest descent) and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and aggregate them progressively, according to a planarity threshold, into polygons. The boundaries of the polygons are adjusted around the segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles towards third-party GIS software or simply as ASCII comma-separated files. One of the great features of FACETS is the capability to explore not only planar objects but also 3D points with normals with the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare to leverage all its other 3D point cloud manipulation features. A demonstration of the tool is presented to illustrate these different features. While designed for geological applications, FACETS could be more widely applied to any planar
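
    Converting a facet's unit normal into dip and dip direction (azimuth of steepest descent) is a small calculation, sketched below under the assumption that x points east, y points north and z up; the names and the example normal are illustrative, not taken from the plugin.

      import numpy as np

      def dip_and_dip_direction(normal):
          """Dip angle (deg) and dip direction (deg, clockwise from north) of a plane."""
          n = np.asarray(normal, dtype=float)
          n = n / np.linalg.norm(n)
          if n[2] < 0:                                  # force the normal to point upward
              n = -n
          dip = np.degrees(np.arccos(n[2]))             # angle between facet and horizontal
          # The horizontal component of the upward normal points along the steepest descent,
          # so its azimuth (x = east, y = north) is the dip direction.
          dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
          return dip, dip_dir

      print(dip_and_dip_direction([0.0, 0.5, 0.5]))     # 45 degrees, dipping toward north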

  6. Interactive Cosmetic Makeup of a 3D Point-Based Face Model

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Sik; Choi, Soo-Mi

    We present an interactive system for cosmetic makeup of a point-based face model acquired by 3D scanners. We first enhance the texture of a face model in 3D space using low-pass Gaussian filtering, median filtering, and histogram equalization. The user is provided with a stereoscopic display and haptic feedback, and can perform simulated makeup tasks including the application of foundation, color makeup, and lip gloss. Fast rendering is achieved by processing surfels using the GPU, and we use a BSP tree data structure and a dynamic local refinement of the facial surface to provide interactive haptics. We have implemented a prototype system and evaluated its performance.

  7. 3D multiple-point statistics simulation using 2D training images

    NASA Astrophysics Data System (ADS)

    Comunian, A.; Renard, P.; Straubhaar, J.

    2012-03-01

    One of the main issues in the application of multiple-point statistics (MPS) to the simulation of three-dimensional (3D) blocks is the lack of a suitable 3D training image. In this work, we compare three methods of overcoming this issue using information coming from bidimensional (2D) training images. One approach is based on the aggregation of probabilities. The other approaches are novel. One relies on merging the lists obtained using the impala algorithm from diverse 2D training images, creating a list of compatible data events that is then used for the MPS simulation. The other (s2Dcd) is based on sequential simulations of 2D slices constrained by the conditioning data computed at the previous simulation steps. These three methods are tested on the reproduction of two 3D images that are used as references, and on a real case study where two training images of sedimentary structures are considered. The tests show that it is possible to obtain 3D MPS simulations with at least two 2D training images. The simulations obtained, in particular those obtained with the s2Dcd method, are close to the references, according to a number of comparison criteria. The CPU time required to simulate with the method s2Dcd is from two to four orders of magnitude smaller than the one required by a MPS simulation performed using a 3D training image, while the results obtained are comparable. This computational efficiency and the possibility of using MPS for 3D simulation without the need for a 3D training image facilitates the inclusion of MPS in Monte Carlo, uncertainty evaluation, and stochastic inverse problems frameworks.

  8. Pre-Processing of Point-Data from Contact and Optical 3D Digitization Sensors

    PubMed Central

    Budak, Igor; Vukelić, Djordje; Bračun, Drago; Hodolič, Janko; Soković, Mirko

    2012-01-01

    Contemporary 3D digitization systems employed by reverse engineering (RE) feature ever-growing scanning speeds with the ability to generate a large quantity of points in a unit of time. Although advantageous for the quality and efficiency of RE modelling, the huge number of data points can turn into a serious practical problem later on, when the CAD model is generated. In addition, 3D digitization processes are very often plagued by measuring errors, which can be attributed to the very nature of measuring systems, various characteristics of the digitized objects and subjective errors by the operator, which also contribute to problems in the CAD model generation process. This paper presents an integral system for the pre-processing of point data, i.e., filtering, smoothing and reduction, based on a cross-sectional RE approach. In the course of the proposed system development, major emphasis was placed on the module for point data reduction, which was designed according to a novel approach with integrated deviation analysis and fuzzy logic reasoning. The developed system was verified through its application on three case studies, on point data from objects of versatile geometries obtained by contact and laser 3D digitization systems. The obtained results demonstrate the effectiveness of the system. PMID:22368513

  9. Phase-Scrambler Plate Spreads Point Image

    NASA Technical Reports Server (NTRS)

    Edwards, Oliver J.; Arild, Tor

    1992-01-01

    Array of small prisms retrofit to imaging lens. Phase-scrambler plate essentially planar array of small prisms partitioning aperture of lens into many subapertures, and prism at each subaperture designed to divert relatively large diffraction spot formed by that subaperture to different, specific point on focal plane.

  10. Towards 3D lidar point cloud registration improvement using optimal neighborhood knowledge

    NASA Astrophysics Data System (ADS)

    Gressin, Adrien; Mallet, Clément; Demantké, Jérôme; David, Nicolas

    2013-05-01

    Automatic 3D point cloud registration is a main issue in computer vision and remote sensing. One of the most commonly adopted solutions is the well-known Iterative Closest Point (ICP) algorithm. This standard approach performs a fine registration of two overlapping point clouds by iteratively estimating the transformation parameters, assuming good a priori alignment is provided. A large body of literature has proposed many variations in order to improve each step of the process (namely selecting, matching, rejecting, weighting and minimizing). The aim of this paper is to demonstrate how knowledge of the shape that best fits the local geometry of each 3D point neighborhood can improve the speed and the accuracy of each of these steps. First we present the geometrical features that form the basis of this work. These low-level attributes describe the neighborhood shape around each 3D point. They allow retrieval of the optimal neighborhood size for analysis at various scales as well as the privileged local dimension (linear, planar, or volumetric). Several variations of each step of the ICP process are then proposed and analyzed by introducing these features. Such variants are compared on real datasets with the original algorithm in order to retrieve the most efficient algorithm for the whole process. The method is successfully applied to various 3D lidar point clouds from airborne, terrestrial, and mobile mapping systems. Improvement for two ICP steps has been noted, and we conclude that our features may not be relevant for very dissimilar object samplings.
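
    The low-level attributes mentioned above are commonly derived from the eigenvalues of the local covariance matrix. The sketch below computes linearity, planarity and sphericity over a fixed k-nearest neighbourhood; the value of k and the exact feature definitions follow a common formulation and may differ from the paper's own choices.

      import numpy as np
      from scipy.spatial import cKDTree

      def dimensionality_features(points, k=20):
          """Per-point linearity, planarity and sphericity from k-nearest neighbours."""
          pts = np.asarray(points, dtype=float)
          tree = cKDTree(pts)
          _, idx = tree.query(pts, k=k)
          feats = np.empty((len(pts), 3))
          for i, nbrs in enumerate(idx):
              nbh = pts[nbrs] - pts[nbrs].mean(axis=0)
              # Eigenvalues of the local covariance matrix, sorted l1 >= l2 >= l3 >= 0.
              lam = np.sort(np.linalg.eigvalsh(nbh.T @ nbh / k))[::-1]
              s1, s2, s3 = np.sqrt(np.maximum(lam, 0.0)) + 1e-12
              feats[i] = [(s1 - s2) / s1,      # linearity  (1D, e.g. poles, edges)
                          (s2 - s3) / s1,      # planarity  (2D, e.g. facades, ground)
                          s3 / s1]             # sphericity (3D, e.g. vegetation)
          return feats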

  11. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provide accurate 3D geometry for change detection, but are very expensive for periodic acquisition. This paper proposes a new method for change detection at street level by using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired from an early epoch serve as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between different epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient approach for frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical

  12. Accuracy evaluation of segmentation for high resolution imagery and 3D laser point cloud data

    NASA Astrophysics Data System (ADS)

    Ni, Nina; Chen, Ninghua; Chen, Jianyu

    2014-09-01

    High resolution satellite imagery and 3D laser point cloud data provide precise geometry, rich spectral information and clear texture of features. The segmentation of high resolution remote sensing images and 3D laser point clouds is the basis of object-oriented remote sensing image analysis, since the segmentation results directly influence the accuracy of subsequent analysis and discrimination. Currently, there is still no common segmentation theory to support these algorithms, so when facing a specific problem we should determine the applicability of a segmentation method through segmentation accuracy assessment and then determine an optimal segmentation. To date, the most common methods for evaluating the effectiveness of a segmentation method are subjective evaluation and supervised evaluation. To provide a more objective evaluation result, we have carried out the following work. We analyzed and compared previously proposed image segmentation accuracy evaluation methods, namely area-based metrics, location-based metrics and combined metrics. 3D point cloud data gathered by a Riegl VZ-1000 were used for a two-dimensional transformation of the point cloud data. The object-oriented segmentation results for aquaculture farm, building and farmland polygons were used as test objects and adopted to evaluate segmentation accuracy.

  13. 3D Point Correspondence by Minimum Description Length in Feature Space.

    PubMed

    Chen, Jiun-Hung; Zheng, Ke Colin; Shapiro, Linda G

    2010-01-01

    Finding point correspondences plays an important role in automatically building statistical shape models from a training set of 3D surfaces. For the point correspondence problem, Davies et al. [1] proposed a minimum-description-length-based objective function to balance the training errors and generalization ability. A recent evaluation study [2] that compares several well-known 3D point correspondence methods for modeling purposes shows that the MDL-based approach [1] is the best method. We adapt the MDL-based objective function for a feature space that can exploit nonlinear properties in point correspondences, and propose an efficient optimization method to minimize the objective function directly in the feature space, given that the inner product of any vector pair can be computed in the feature space. We further employ a Mercer kernel [3] to define the feature space implicitly. A key aspect of our proposed framework is the generalization of the MDL-based objective function to kernel principal component analysis (KPCA) [4] spaces and the design of a gradient-descent approach to minimize such an objective function. We compare the generalized MDL objective function on KPCA spaces with the original one and evaluate their abilities in terms of reconstruction errors and specificity. From our experimental results on different sets of 3D shapes of human body organs, the proposed method performs significantly better than the original method. PMID:25328917

  14. Octree-Based SIMD Strategy for Icp Registration and Alignment of 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Eggert, D.; Dalyot, S.

    2012-07-01

    Matching and fusion of 3D point clouds, such as close range laser scans, is important for creating an integrated 3D model data infrastructure. The Iterative Closest Point algorithm for alignment of point clouds is one of the most commonly used algorithms for matching of rigid bodies. Evidently, scans are acquired from different positions and might present different data characterization and accuracies, forcing complex data-handling issues. The growing demand for near real-time applications also introduces new computational requirements and constraints into such processes. This research proposes a methodology for solving the computational and processing complexities in the ICP algorithm by introducing specific performance enhancements to enable more efficient analysis and processing. An Octree data structure together with the caching of localized Delaunay triangulation-based surface meshes is implemented to increase computation efficiency and handling of data. Parallelization of the ICP process is carried out by using the Single Instruction, Multiple Data processing scheme - based on the Divide and Conquer multi-branched paradigm - enabling multiple processing elements to perform the same operation on multiple data independently and simultaneously. When compared to traditional non-parallel list processing, the Octree-based SIMD strategy showed a sharp increase in computation performance and efficiency, together with a reliable and accurate alignment of large 3D point clouds, contributing to a qualitative and efficient application.

  15. Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2014-05-01

    A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 up to 80-cm-long shells of Crassostrea gryphoides cover a 400 m2 large area. Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g.: tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase shift measuring principle, which provides accurate geometrical basis < 3 mm. However, the situation is difficult in this multiple object scenario where more than 15,000 complete or fragmentary parts of an object with random orientation are found. The goal

  16. Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.

    PubMed

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204
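
    A much-reduced sketch of the pipeline summarised above: the sparse cloud is quantised into horizontal voxel columns, a lowermost heightmap keeps the minimum height per column, and points close to their column's lowest voxel are labelled ground. The voxel size and height tolerance are illustrative assumptions, and the per-voxel-group counting of the original method is omitted.

      import numpy as np

      def segment_ground(points, voxel=0.3, height_tol=0.25):
          """Boolean ground mask for an (N, 3) sparse point cloud."""
          pts = np.asarray(points, dtype=float)
          ij = np.floor(pts[:, :2] / voxel).astype(int)     # horizontal voxel column index
          ij -= ij.min(axis=0)
          # Lowermost heightmap: minimum z per occupied (i, j) column.
          heightmap = {}
          for (i, j), z in zip(map(tuple, ij), pts[:, 2]):
              if (i, j) not in heightmap or z < heightmap[(i, j)]:
                  heightmap[(i, j)] = z
          # A point is ground if it lies within height_tol of its column's lowest voxel.
          lowest = np.array([heightmap[(i, j)] for i, j in map(tuple, ij)])
          return pts[:, 2] - lowest < height_tol

      mask = segment_ground(np.random.rand(5000, 3) * [50, 50, 2])
      print(mask.sum(), "of", mask.size, "points labelled ground")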

  17. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    PubMed Central

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204

  18. Street curb recognition in 3d point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention in the process. Non-manual curb detection is an important issue in the road maintenance, 3D urban modeling, and autonomous navigation fields. This paper is focused on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane is carried out, passing from the original 3D data to a 2D image. To determine the location of curbs in the image, different image processing techniques such as thresholding and morphological operations are applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and applicable to both laser scanner and stereo vision 3D data, since it is independent of the scanning geometry. This method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. That point cloud comprises 8,000,000 points and represents a
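
    The rasterise-then-morphology idea can be sketched as follows: the cloud is projected onto the XY plane, each cell stores the height range of its points, cells whose height jump matches a typical curb are thresholded, and a morphological opening removes isolated cells. All thresholds and names are illustrative assumptions, not the authors' parameters.

      import numpy as np
      from scipy import ndimage

      def curb_candidate_mask(points, cell=0.1, jump_min=0.05, jump_max=0.3):
          """2D boolean mask of cells whose height jump looks like a curb."""
          pts = np.asarray(points, dtype=float)
          cols = np.floor((pts[:, 0] - pts[:, 0].min()) / cell).astype(int)
          rows = np.floor((pts[:, 1] - pts[:, 1].min()) / cell).astype(int)
          zmin = np.full((rows.max() + 1, cols.max() + 1), np.inf)
          zmax = np.full_like(zmin, -np.inf)
          np.minimum.at(zmin, (rows, cols), pts[:, 2])
          np.maximum.at(zmax, (rows, cols), pts[:, 2])
          jump = np.where(np.isfinite(zmin), zmax - zmin, 0.0)
          mask = (jump >= jump_min) & (jump <= jump_max)
          # Opening removes isolated cells so that only elongated curb-like runs remain.
          return ndimage.binary_opening(mask, structure=np.ones((3, 3)))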

  19. A formal classification of 3D medial axis points and their local geometry.

    PubMed

    Giblin, Peter; Kimia, Benjamin B

    2004-02-01

    This paper proposes a novel hypergraph skeletal representation for 3D shape based on a formal derivation of the generic structure of its medial axis. By classifying each skeletal point by its order of contact, we show that, generically, the medial axis consists of five types of points, which are then organized into sheets, curves, and points: 1) sheets (manifolds with boundary) which are the locus of bitangent spheres with regular tangency A1(2) (Ak(n) notation means n distinct k-fold tangencies of the sphere of contact, as explained in the text); two types of curves, 2) the intersection curve of three sheets and the locus of centers of tritangent spheres, A1(3), and 3) the boundary of sheets, which are the locus of centers of spheres whose radius equals the larger principal curvature, i.e., higher order contact A3 points; and two types of points, 4) centers of quad-tangent spheres, A1(4), and 5) centers of spheres with one regular tangency and one higher order tangency, A1A3. The geometry of the 3D medial axis thus consists of sheets (A1(2)) bounded by one type of curve (A3) on their free end, which corresponds to ridges on the surface, and attached to two other sheets at another type of curve (A1(3)), which support a generalized cylinder description. The A3 curves can only end in A1A3 points where they must meet an A1(3) curve. The A1(3) curves meet together in fours at an A1(4) point. This formal result leads to a compact representation for 3D shape, referred to as the medial axis hypergraph representation consisting of nodes (A1(4) and A1A3 points), links between pairs of nodes (A1(3) and A3 curves) and hyperlinks between groups of links (A1(2) sheets). The description of the local geometry at nodes by itself is sufficient to capture qualitative aspects of shapes, in analogy to 2D. We derive a pointwise reconstruction formula to reconstruct a surface from this medial axis hypergraph together with the radius function. Thus, this information completely

  20. Evaluation Model for Pavement Surface Distress on 3d Point Clouds from Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Aoki, K.; Yamamoto, K.; Shimamura, H.

    2012-07-01

    This paper proposes a methodology to evaluate pavement surface distress for maintenance planning of road pavement using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning of road pavement requires scheduled rehabilitation activities for damaged pavement sections to keep a high level of service. The importance of performance-based infrastructure asset management built on actual inspection data is globally recognized. As an inspection methodology for the road pavement surface, semi-automatic measurement systems utilizing inspection vehicles to measure surface deterioration indexes, such as cracking, rutting and IRI, have already been introduced and are capable of continuously archiving pavement performance data. However, scheduled inspection using automatic measurement vehicles is costly, depending on the instruments' specification and the inspection interval. Therefore, implementing road maintenance work cost-effectively, especially for local governments, is difficult. Against this background, this research proposes methodologies for a simplified evaluation of the pavement surface and an assessment of damaged pavement sections using 3D point cloud data acquired for urban 3D modelling. The simplified evaluation results of the road surface were able to provide useful information for road administrators to identify pavement sections requiring detailed examination or immediate repair work. In particular, the regularity of the 3D point cloud sequence was evaluated using Chow-test and F-test models by extracting the sections where a remarkable structural change in coordinate values occurred. Finally, the validity of the methodology was investigated by conducting a case study dealing with actual inspection data from local roads.
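
    The Chow test referred to above asks whether a regression fitted to the whole sequence fits significantly worse than two regressions fitted on either side of a candidate break point. The sketch below is a generic linear-model version on synthetic profile heights; the paper's exact regression model is not specified in the abstract.

      import numpy as np
      from scipy import stats

      def chow_test(x, y, break_idx, k=2):
          """Chow test for a structural break at break_idx in a simple linear model (k parameters)."""
          def rss(xs, ys):
              X = np.column_stack([np.ones_like(xs), xs])
              beta = np.linalg.lstsq(X, ys, rcond=None)[0]
              return np.sum((ys - X @ beta) ** 2)

          rss_pooled = rss(x, y)
          rss_1, rss_2 = rss(x[:break_idx], y[:break_idx]), rss(x[break_idx:], y[break_idx:])
          n = len(x)
          f = ((rss_pooled - (rss_1 + rss_2)) / k) / ((rss_1 + rss_2) / (n - 2 * k))
          p = 1.0 - stats.f.cdf(f, k, n - 2 * k)
          return f, p

      # Example: road-surface heights along a profile with a step (e.g. settlement) at index 50.
      x = np.arange(100, dtype=float)
      y = 0.001 * x + np.where(x >= 50, 0.05, 0.0) + np.random.default_rng(1).normal(0, 0.005, 100)
      print(chow_test(x, y, 50))  # large F, small p -> structural change detected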

  1. Reconstructing 3D coastal cliffs from airborne oblique photographs without ground control points

    NASA Astrophysics Data System (ADS)

    Dewez, T. J. B.

    2014-05-01

    Coastal cliff collapse hazard assessment requires measuring cliff face topography at regular intervals. Terrestrial laser scanner techniques have proven useful so far but are expensive to use, either through purchasing the equipment or through survey subcontracting. In addition, terrestrial laser surveys take time, which is sometimes incompatible with the window during which the beach is accessible at low tide. By comparison, structure from motion (SFM) techniques are much less costly to implement, and if airborne, acquisition of several kilometers of coastline can be done in a matter of minutes. In this paper, the potential of GPS-tagged oblique airborne photographs and SFM techniques is examined to reconstruct dense 3D point clouds of chalk cliffs without Ground Control Points (GCP). The focus is on comparing the relative 3D viewpoints reconstructed by Visual SFM with their synchronous Solmeta Geotagger Pro2 GPS locations using robust estimators. With a set of 568 oblique photos, shot from the open door of an airplane with a triplet of synchronized Nikon D7000 cameras, GPS and SFM-determined view point coordinates converge to X: ±31.5 m; Y: ±39.7 m; Z: ±13.0 m (LE66). Uncertainty in GPS position affects the model scale, the angular attitude of the reference frame (the shoreline ends up tilted by 2°) and the absolute positioning. Ground Control Points therefore remain necessary to orient such models.

  2. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We consolidate and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the propagation of the primitive input errors through the stereo system and throughout the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
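
    To make the midpoint idea concrete, the sketch below triangulates a 3D point as the midpoint of the shortest segment between two viewing rays and propagates a small, assumed direction noise by Monte Carlo sampling to obtain an empirical expectation and covariance; the camera geometry and noise level are hypothetical, and this is not the paper's closed-form derivation.

      import numpy as np

      def midpoint_triangulation(c1, d1, c2, d2):
          """Closest-point (midpoint) triangulation of two skew viewing rays."""
          w0 = c1 - c2
          a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
          d, e = d1 @ w0, d2 @ w0
          denom = a * c - b * b                      # ~0 only for (near-)parallel rays
          t = (b * e - c * d) / denom
          s = (a * e - b * d) / denom
          return 0.5 * ((c1 + t * d1) + (c2 + s * d2))

      # Monte Carlo propagation of an assumed small noise on the ray directions.
      rng = np.random.default_rng(0)
      c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])   # 0.5 m baseline
      target = np.array([0.2, 0.1, 5.0])
      samples = []
      for _ in range(2000):
          d1 = target - c1 + rng.normal(0, 5e-4, 3)   # small direction perturbation (hypothetical)
          d2 = target - c2 + rng.normal(0, 5e-4, 3)
          samples.append(midpoint_triangulation(c1, d1 / np.linalg.norm(d1),
                                                c2, d2 / np.linalg.norm(d2)))
      samples = np.asarray(samples)
      print(samples.mean(axis=0))   # empirical expectation of the 3D point
      print(np.cov(samples.T))      # empirical covariance (uncertainty region)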

  3. Analytic derivation of the longitudinal component of the three-dimensional point-spread function in coded-aperture laminography.

    PubMed

    Accorsi, Roberto

    2005-10-01

    Near-field coded-aperture data from a single view contain information useful for three-dimensional (3D) reconstruction. A common approach is to reconstruct the 3D image one plane at a time. An analytic expression is derived for the 3D point-spread function of coded-aperture laminography. Comparison with computer simulations and experiments for apertures with different size, pattern, and pattern family shows good agreement in all cases considered. The expression is discussed in the context of the completeness conditions for projection data and is applied to explain an example of nonlinear behavior inherent in 3D laminographic imaging. PMID:16231793

  4. Analytic derivation of the longitudinal component of the three-dimensional point-spread function in coded-aperture laminography

    NASA Astrophysics Data System (ADS)

    Accorsi, Roberto

    2005-10-01

    Near-field coded-aperture data from a single view contain information useful for three-dimensional (3D) reconstruction. A common approach is to reconstruct the 3D image one plane at a time. An analytic expression is derived for the 3D point-spread function of coded-aperture laminography. Comparison with computer simulations and experiments for apertures with different size, pattern, and pattern family shows good agreement in all cases considered. The expression is discussed in the context of the completeness conditions for projection data and is applied to explain an example of nonlinear behavior inherent in 3D laminographic imaging.

  5. Lacunarity analysis of raster datasets and 1D, 2D, and 3D point patterns

    NASA Astrophysics Data System (ADS)

    Dong, Pinliang

    2009-10-01

    Spatial scale plays an important role in many fields. As a scale-dependent measure for spatial heterogeneity, lacunarity describes the distribution of gaps within a set at multiple scales. In Earth science, environmental science, and ecology, lacunarity has been increasingly used for multiscale modeling of spatial patterns. This paper presents the development and implementation of a geographic information system (GIS) software extension for lacunarity analysis of raster datasets and 1D, 2D, and 3D point patterns. Depending on the application requirement, lacunarity analysis can be performed in two modes: global mode or local mode. The extension works for: (1) binary (1-bit) and grey-scale datasets in any raster format supported by ArcGIS and (2) 1D, 2D, and 3D point datasets as shapefiles or geodatabase feature classes. For more effective measurement of lacunarity for different patterns or processes in raster datasets, the extension allows users to define an area of interest (AOI) in four different ways, including using a polygon in an existing feature layer. Additionally, directionality can be taken into account when grey-scale datasets are used for local lacunarity analysis. The methodology and graphical user interface (GUI) are described. The application of the extension is demonstrated using both simulated and real datasets, including Brodatz texture images, a Spaceborne Imaging Radar (SIR-C) image, simulated 1D points on a drainage network, and 3D random and clustered point patterns. The options of lacunarity analysis and the effects of polyline arrangement on lacunarity of 1D points are also discussed. Results from sample data suggest that the lacunarity analysis extension can be used for efficient modeling of spatial patterns at multiple scales.
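
    As a concrete illustration of the global mode, the gliding-box lacunarity of a binary raster at box size r can be estimated as Λ(r) = E[M²]/E[M]², where M is the box mass; the NumPy sketch below is generic and independent of the ArcGIS extension described here.

      import numpy as np
      from numpy.lib.stride_tricks import sliding_window_view

      def lacunarity(binary, box_sizes):
          """Gliding-box lacunarity Lambda(r) = E[M^2] / E[M]^2 for a 2D binary raster."""
          out = {}
          for r in box_sizes:
              # Mass of every r x r gliding box (all positions, step 1).
              masses = sliding_window_view(binary, (r, r)).sum(axis=(2, 3)).ravel()
              m1, m2 = masses.mean(), (masses ** 2).mean()
              out[r] = m2 / m1 ** 2 if m1 > 0 else np.nan
          return out

      rng = np.random.default_rng(0)
      random_img = (rng.random((128, 128)) < 0.2).astype(int)   # spatially random pattern
      clustered = np.zeros((128, 128), int)
      clustered[32:64, 32:64] = 1                               # one large cluster
      print(lacunarity(random_img, [2, 4, 8, 16]))
      print(lacunarity(clustered, [2, 4, 8, 16]))               # higher lacunarity: gaps at many scales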

  6. Biview learning for human posture segmentation from 3D points cloud.

    PubMed

    Qiao, Maoying; Cheng, Jun; Bian, Wei; Tao, Dacheng

    2014-01-01

    Posture segmentation plays an essential role in human motion analysis. The state-of-the-art method extracts sufficiently high-dimensional features from 3D depth images for each 3D point and learns an efficient body part classifier. However, high-dimensional features are memory-consuming and difficult to handle on a large-scale training dataset. In this paper, we propose an efficient two-stage dimension reduction scheme, termed biview learning, to encode two independent views, which are depth-difference features (DDF) and relative position features (RPF). Biview learning explores the complementary property of DDF and RPF, and uses two stages to learn a compact yet comprehensive low-dimensional feature space for posture segmentation. In the first stage, discriminative locality alignment (DLA) is applied to the high-dimensional DDF to learn a discriminative low-dimensional representation. In the second stage, canonical correlation analysis (CCA) is used to explore the complementary property of RPF and the dimensionality-reduced DDF. Finally, we train a support vector machine (SVM) over the output of CCA. We carefully validate the effectiveness of DLA and CCA utilized in the two-stage scheme on our 3D human point cloud dataset. Experimental results show that the proposed biview learning scheme significantly outperforms the state-of-the-art method for human posture segmentation. PMID:24465721
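
    A rough scikit-learn sketch of the two-stage structure (reduce DDF, fuse with RPF via CCA, classify with an SVM) is given below on synthetic features. Because DLA is not available in standard libraries, linear discriminant analysis stands in for the first, discriminative reduction stage, so this mirrors only the shape of the pipeline, not the authors' method.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.cross_decomposition import CCA
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n, n_parts = 600, 5
      labels = rng.integers(0, n_parts, n)
      ddf = rng.normal(size=(n, 200)) + labels[:, None] * 0.05   # synthetic depth-difference features
      rpf = rng.normal(size=(n, 30)) + labels[:, None] * 0.05    # synthetic relative-position features

      # Stage 1: discriminative reduction of the high-dimensional DDF (LDA as a stand-in for DLA).
      ddf_low = LinearDiscriminantAnalysis(n_components=n_parts - 1).fit_transform(ddf, labels)

      # Stage 2: CCA exploits the complementary structure of RPF and the reduced DDF.
      cca = CCA(n_components=3).fit(ddf_low, rpf)
      u, v = cca.transform(ddf_low, rpf)
      fused = np.hstack([u, v])

      # Final per-point body-part classifier on the compact fused representation.
      clf = SVC(kernel="rbf").fit(fused, labels)
      print(clf.score(fused, labels))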

  7. Detectability limitations with 3-D point reconstruction algorithms using digital radiography

    SciTech Connect

    Lindgren, Erik

    2015-03-31

    The estimated impact of pores in clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires a shorter inspection time than X-ray computed tomography and in some cases works better with planar geometries. However, the increase in prior assumptions about the object and the defects will increase the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper, the uncertainty arising when detecting pore defect clusters with point reconstruction algorithms is quantified using simulations. The simulation model is compared to and mapped to experimental data. The main issue with the uncertainty is the possible masking (zero detectability) of smaller defects around another, slightly larger defect. In addition, the uncertainty is explored in connection with the expected effects on component fatigue life and for different amounts of prior object-defect assumptions.

  8. Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2016-06-01

    We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
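
    The core of such a feature set can be sketched as follows: for each point, the covariance eigenvalues of its k nearest neighbours at several scales yield linearity, planarity and scattering descriptors. This is a generic illustration with assumed scales (k = 10, 30, 90), not the paper's exact multi-scale definition.

      import numpy as np
      from scipy.spatial import cKDTree

      def multiscale_features(points, scales=(10, 30, 90)):
          """Per-point eigenvalue features (linearity, planarity, scattering) at several k-NN scales."""
          tree = cKDTree(points)
          feats = []
          for k in scales:
              _, idx = tree.query(points, k=k)          # indices of the k nearest neighbours
              nbrs = points[idx]                        # shape (N, k, 3)
              centered = nbrs - nbrs.mean(axis=1, keepdims=True)
              cov = np.einsum('nki,nkj->nij', centered, centered) / k
              ev = np.sort(np.linalg.eigvalsh(cov), axis=1)[:, ::-1]   # l1 >= l2 >= l3 >= 0
              l1, l2, l3 = np.maximum(ev[:, 0], 1e-12), ev[:, 1], ev[:, 2]
              feats += [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]       # linearity, planarity, scattering
          return np.column_stack(feats)   # feed to any point-wise classifier (e.g. a random forest)

      pts = np.random.default_rng(0).random((2000, 3))
      print(multiscale_features(pts).shape)   # (2000, 9)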

  9. Grammar-Supported 3d Indoor Reconstruction from Point Clouds for As-Built Bim

    NASA Astrophysics Data System (ADS)

    Becker, S.; Peter, M.; Fritsch, D.

    2015-03-01

    The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar which is an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms, as is the case for most office buildings or public buildings like schools, hospitals or hotels. The grammar is designed in such a way that it can be embedded in an iterative automatic learning process providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. In this way, the knowledge content of the initial grammar is enriched, leading to a grammar with increased quality. This higher-level grammar can then be applied to predict realistic geometries for building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as soon as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated based on a real-world example.

  10. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  11. An investigation of pointing postures in a 3D stereoscopic environment.

    PubMed

    Lin, Chiuhsiang Joe; Ho, Sui-Hua; Chen, Yan-Jyun

    2015-05-01

    Many object pointing and selecting techniques for large screens have been proposed in the literature. There is a lack of quantitative evidence suggesting proper pointing postures for interacting with stereoscopic targets in immersive virtual environments. The objective of this study was to explore users' performances and experiences of using different postures while interacting with 3D targets remotely in an immersive stereoscopic environment. Two postures, hand-directed and gaze-directed pointing methods, were compared in order to investigate the postural influences. Two stereo parallaxes, negative and positive parallaxes, were compared for exploring how target depth variances would impact users' performances and experiences. Fifteen participants were recruited to perform two interactive tasks, tapping and tracking tasks, to simulate interaction behaviors in the stereoscopic environment. Hand-directed pointing is suggested for both tapping and tracking tasks due to its significantly better overall performance, less muscle fatigue, and better usability. However, a gaze-directed posture is probably a better alternative than hand-directed pointing for tasks with high accuracy requirements in home-in phases. Additionally, it is easier for users to interact with targets with negative parallax than with targets with positive parallax. Based on the findings of this research, future applications involving different pointing techniques should consider both pointing performances and postural effects as a result of pointing task precision requirements and potential postural fatigue. PMID:25683543

  12. Points based reconstruction and rendering of 3D shapes from large volume dataset

    NASA Astrophysics Data System (ADS)

    Zhao, Mingchang; Tian, Jie; He, Huiguang; Li, Guangming

    2003-05-01

    In the field of medical imaging, researchers often need to visualize many 3D datasets to extract the information they contain. However, the huge volumes of data generated by modern medical imaging devices continually challenge real-time processing and rendering algorithms. Spurred by the success of Points Based Rendering (PBR) in computer graphics for rendering very large meshes, we propose a new algorithm that uses points as the basic primitive for surface reconstruction and rendering to interactively reconstruct and render very large volume datasets. By exploiting the special characteristics of medical image datasets, we obtain a fast and efficient points-based reconstruction and rendering algorithm on a common PC. The experimental results show that this algorithm is feasible and efficient.

  13. Molecular surface point environments for virtual screening and the elucidation of binding patterns (MOLPRINT 3D).

    PubMed

    Bender, Andreas; Mussa, Hamse Y; Gill, Gurprem S; Glen, Robert C

    2004-12-16

    A novel method (MOLPRINT 3D) for virtual screening and the elucidation of ligand-receptor binding patterns is introduced that is based on environments of molecular surface points. The descriptor uses points relative to the molecular coordinates, thus it is translationally and rotationally invariant. Due to its local nature, conformational variations cause only minor changes in the descriptor. If surface point environments are combined with the Tanimoto coefficient and applied to virtual screening, they achieve retrieval rates comparable to that of two-dimensional (2D) fingerprints. The identification of active structures with minimal 2D similarity ("scaffold hopping") is facilitated. In combination with information-gain-based feature selection and a naive Bayesian classifier, information from multiple molecules can be combined and classification performance can be improved. Selected features are consistent with experimentally determined binding patterns. Examples are given for angiotensin-converting enzyme inhibitors, 3-hydroxy-3-methylglutaryl-coenzyme A reductase inhibitors, and thromboxane A2 antagonists. PMID:15588092
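
    For reference, the Tanimoto coefficient used for ranking here is T(A, B) = |A ∩ B| / |A ∪ B| over binary feature sets; a small sketch on generic bit-vector fingerprints (not the MOLPRINT 3D descriptor itself) is:

      import numpy as np

      def tanimoto(a, b):
          """Tanimoto similarity between two binary fingerprints (1D 0/1 arrays)."""
          a, b = np.asarray(a, bool), np.asarray(b, bool)
          union = np.logical_or(a, b).sum()
          return np.logical_and(a, b).sum() / union if union else 0.0

      rng = np.random.default_rng(0)
      query = rng.integers(0, 2, 1024)                 # fingerprint of the query molecule
      library = rng.integers(0, 2, (5000, 1024))       # fingerprints of the screening library
      scores = np.array([tanimoto(query, fp) for fp in library])
      hits = np.argsort(scores)[::-1][:10]             # virtual screening: top-10 most similar molecules
      print(hits, scores[hits][:3])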

  14. 3D thermo-mechanical models of continental breakup and transition from rifting to continental break-up and spreading

    NASA Astrophysics Data System (ADS)

    Koptev, Alexander; Burov, Evgueni; Gerya, Taras

    2014-05-01

    We conducted high-resolution 3D thermo-mechanical numerical modeling experiments to explore the evolution and styles of plume-activated rifting in the presence of a preexisting far-field tectonic stress/strain field and tectonic heritage (in the form of cratonic blocks embedded in «normal lithosphere»). The experiments demonstrate a strong dependence of rifting style on the preexisting far-field tectonic stress/strain field and initial thermo-rheological profile, as well as on the tectonic heritage. The models with homogeneous lithosphere demonstrate a strongly non-linear impact of far-field extension rates on the timing of break-up processes. Experiments with relatively fast far-field extension (6 mm/y) show intensive normal fault localization in the crust and uppermost mantle above the zones of plume-head emplacement some 15-20 Myrs after the onset of the experiment. When plume head material reaches the bottom of the continental crust (at ~25 Myrs), the latter is rapidly ruptured (<1 Myr) and several steady oceanic floor spreading centers develop. Slower (3 mm/y) far-field velocities result in disproportionately longer break-up times (from 60 to 70 Myrs depending on the initial isotherm at the crust bottom). Although in all experiments with homogeneous lithosphere the spreading centers have a similar orientation perpendicular to the direction of far-field extension, their number and spatial location differ for different extension rates and thermo-rheological structures of the lithosphere. On the contrary, in the case of normal lithosphere containing an embedded cratonic block, spreading zones develop symmetrically, embracing the cratonic micro-plate along its long sides. The presence of cratonic blocks leads to splitting of the plume head into initially nearly symmetrical parts, each of which flows toward the craton borders. This craton-controlled distribution of plume material causes crustal strain localization and the rise of plume material along the craton boundaries. Though there is a net

  15. PointCloudExplore 2: Visual exploration of 3D gene expression

    SciTech Connect

    International Research Training Group Visualization of Large and Unstructured Data Sets, University of Kaiserslautern, Germany; Institute for Data Analysis and Visualization, University of California, Davis, CA; Computational Research Division, Lawrence Berkeley National Laboratory , Berkeley, CA; Genomics Division, LBNL; Computer Science Department, University of California, Irvine, CA; Computer Science Division,University of California, Berkeley, CA; Life Sciences Division, LBNL; Department of Molecular and Cellular Biology and the Center for Integrative Genomics, University of California, Berkeley, CA; Ruebel, Oliver; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V.E.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hand; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2008-03-31

    To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view, dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has been shown to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, and (iii) the basic architecture of PCX2, and (iv) illustrate the usefulness of PCX2 using selected examples.
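
    The selection logic described here (stored brushes combined with AND, OR and NOT) maps naturally onto boolean masks over a table of cells; the toy sketch below only illustrates that idea and is unrelated to PCX2's actual implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      n_cells = 6000
      expr = {g: rng.random(n_cells) for g in ("eve", "ftz", "hb")}   # toy per-cell expression values

      # Each "brush" is a boolean cell selection defined in some view.
      brush_a = expr["eve"] > 0.8          # cells selected in a physical view
      brush_b = expr["ftz"] > 0.7          # cells selected in a parallel-coordinates view

      # Complex queries combine stored selections with logical operations.
      query_and = brush_a & brush_b        # AND
      query_or = brush_a | brush_b         # OR
      query_not = brush_a & ~brush_b       # AND NOT

      # Cells selected in one view can be highlighted and summarised in any other view.
      print(query_and.sum(), expr["hb"][query_and].mean())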

  16. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimates of rock mass properties. Although several advanced methodologies have been developed in recent decades, a complete characterization of discontinuity geometry in practice is still challenging, due to the scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision of algorithm parameters, which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and

  17. PointCloudXplore: a visualization tool for 3D gene expressiondata

    SciTech Connect

    Rubel, Oliver; Weber, Gunther H.; Keranen, Soile V.E.; Fowlkes, Charles C.; Luengo Hendriks, Cristian L.; Simirenko, Lisa; Shah, Nameeta Y.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Sudar, Damir J.; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2006-10-01

    The Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods that support quantitative, computational analysis of three-dimensional (3D) gene expression patterns with cellular resolution in early Drosophila embryos, aiming at a more in-depth understanding of gene regulatory networks. We describe a new tool, called PointCloudXplore (PCX), that supports effective 3D gene expression data exploration. PCX is a visualization tool that uses the established visualization techniques of multiple views, brushing, and linking to support the analysis of high-dimensional datasets that describe many genes' expression. Each of the views in PointCloudXplore shows a different gene expression data property. Brushing is used to select and emphasize data associated with defined subsets of embryo cells within a view. Linking is used to show in additional views the expression data for a group of cells that have first been highlighted as a brush in a single view, allowing further data subset properties to be determined. In PCX, physical views of the data are linked to abstract data displays such as parallel coordinates. Physical views show the spatial relationships between different genes' expression patterns within an embryo. Abstract gene expression data displays, on the other hand, allow for an analysis of relationships between different genes directly in the gene expression space. We discuss parallel coordinates as one example of an abstract data view currently available in PCX. We have developed several extensions to standard parallel coordinates to facilitate brushing and the visualization of 3D gene expression data.

  18. Existence of two MHD reconnection modes in a solar 3D magnetic null point topology

    NASA Astrophysics Data System (ADS)

    Pariat, Etienne; Antiochos, Spiro; DeVore, C. Richard; Dalmasse, Kévin

    2012-07-01

    Magnetic topologies with a 3D magnetic null point are common in the solar atmosphere and occur at different spatial scales: such structures can be associated with some solar eruptions, with the so-called pseudo-streamers, and with numerous coronal jets. We have recently developed a series of numerical experiments that model magnetic reconnection in such configurations in order to study and explain the properties of jet-like features. Our model uses our state-of-the-art adaptive-mesh MHD solver ARMS. Energy is injected in the system by line-tied motion of the magnetic field lines in a corona-like configuration. We observe that, in the MHD framework, two reconnection modes eventually appear in the course of the evolution of the system. A very impulsive one, associated with a highly dynamic and fully 3D current sheet, is associated with the energetic generation of a jet. Before and after the generation of the jet, a quasi-steady reconnection mode, more similar to the standard 2D Sweet-Parker model, presents a lower global reconnection rate. We show that the geometry of the magnetic configuration influences the trigger of one or the other mode. We argue that this result carries important implications for the observed link between observational features such as solar jets, solar plumes, and the emission of coronal bright points.

  19. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful either for knowing the environment structure, to perform an efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of the automation of the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth with door candidate detection. The presented approach is tested in real data sets showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723

  20. 3D Modeling of Building Indoor Spaces and Closed Doors from Imagery and Point Clouds

    PubMed Central

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful either for knowing the environment structure, to perform an efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of the automation of the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth with door candidate detection. The presented approach is tested in real data sets showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723

  1. A method of 3D object recognition and localization in a cloud of points

    NASA Astrophysics Data System (ADS)

    Bielicki, Jerzy; Sitnik, Robert

    2013-12-01

    The method proposed in this article is designed for the analysis of data in the form of point clouds obtained directly from 3D measurements. It is intended for end-user applications that can be directly integrated with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. Features utilized in the algorithm are based on parameters that qualitatively estimate the mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and to poor quality of the input data. Utilization of the FV subsets allows partially occluded and cluttered objects in the scene to be detected, while the additional spatial information keeps the false positive rate at a reasonably low level.

  2. Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection

    NASA Astrophysics Data System (ADS)

    Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.

    2016-06-01

    In recent years, indoor modelling and navigation have become research topics of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair users, building crisis management such as fire protection, augmented reality for gaming or tourism, and the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of the indoor space, including the position and geometry of openings such as windows and doors, and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles during route planning and using these to re-adapt the routes according to the real state of the indoor environment depicted by the laser scanner.
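
    A toy version of the obstacle-aware planning step: project the indoor point cloud onto a 2D occupancy grid (cells containing points in the walkable height band become obstacles) and search the free cells. The cell size, height band and the breadth-first search below are illustrative assumptions, not the paper's method.

      import numpy as np
      from collections import deque

      def occupancy_grid(points, origin, cell=0.25, zband=(0.1, 2.0), shape=(40, 40)):
          """Rasterize an indoor cloud: cells with points in the walkable height band block the route."""
          occ = np.zeros(shape, dtype=bool)
          blk = points[(points[:, 2] > zband[0]) & (points[:, 2] < zband[1])]
          ij = np.floor((blk[:, :2] - origin) / cell).astype(int)
          keep = (ij >= 0).all(axis=1) & (ij < shape).all(axis=1)
          occ[ij[keep, 0], ij[keep, 1]] = True
          return occ

      def bfs_path(occ, start, goal):
          """4-connected breadth-first search on a boolean occupancy grid (True = obstacle)."""
          prev = {start: None}
          q = deque([start])
          while q:
              cur = q.popleft()
              if cur == goal:
                  path = []
                  while cur is not None:
                      path.append(cur)
                      cur = prev[cur]
                  return path[::-1]
              r, c = cur
              for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                  if (0 <= nr < occ.shape[0] and 0 <= nc < occ.shape[1]
                          and not occ[nr, nc] and (nr, nc) not in prev):
                      prev[(nr, nc)] = cur
                      q.append((nr, nc))
          return None  # no obstacle-free route

      # Toy scene: a wall of points across a 10 m x 10 m room, leaving a doorway gap near y = 9 m.
      rng = np.random.default_rng(0)
      wall = np.column_stack([np.full(400, 5.0), rng.uniform(0, 8.5, 400), rng.uniform(0.0, 2.5, 400)])
      occ = occupancy_grid(wall, origin=np.array([0.0, 0.0]))
      print(bfs_path(occ, (2, 2), (37, 37)) is not None)   # True: a route is found through the doorway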

  3. Unconventional superconductivity at mesoscopic point contacts on the 3D Dirac semimetal Cd3As2.

    PubMed

    Aggarwal, Leena; Gaurav, Abhishek; Thakur, Gohil S; Haque, Zeba; Ganguli, Ashok K; Sheet, Goutam

    2016-01-01

    Three-dimensional (3D) Dirac semimetals exist close to topological phase boundaries which, in principle, should make it possible to drive them into exotic new phases, such as topological superconductivity, by breaking certain symmetries. A practical realization of this idea has, however, hitherto been lacking. Here we show that the mesoscopic point contacts between pure silver (Ag) and the 3D Dirac semimetal Cd3As2 exhibit unconventional superconductivity with a critical temperature (onset) greater than 6 K whereas neither Cd3As2 nor Ag are superconductors. A gap amplitude of 6.5 meV is measured spectroscopically in this phase that varies weakly with temperature and survives up to a remarkably high temperature of 13 K, indicating the presence of a robust normal-state pseudogap. The observations indicate the emergence of a new unconventional superconducting phase that exists in a quantum mechanically confined region under a point contact between a Dirac semimetal and a normal metal. PMID:26524131

  4. 3D shape descriptors for face segmentation and fiducial points detection: an anatomical-based analysis

    NASA Astrophysics Data System (ADS)

    Salazar, Augusto E.; Cerón, Alexander; Prieto, Flavio A.

    2011-03-01

    The behavior of nine 3D shape descriptors computed on the surface of 3D face models is studied. The set of descriptors includes six curvature-based ones, SPIN images, Folded SPIN Images, and Finger prints. Instead of defining clusters of vertices based on the value of a given primitive surface feature, a face template composed of 28 anatomical regions is used to segment the models and to extract the location of different landmarks and fiducial points. Vertices are grouped by region, region boundaries, and subsampled versions of them. The aim of this study is to analyze the discriminant capacity of each descriptor to characterize regions and to identify key points on the facial surface. The experiment includes testing with data from neutral faces and faces showing expressions. Also, in order to assess the usefulness of the bending-invariant canonical form (BICF) in handling variations due to facial expressions, the descriptors are computed directly from the surface and also from its BICF. In the results, the values, distributions, and relevance indexes of each set of vertices were analyzed.

  5. Inter-point procrustes: identifying regional and large differences in 3D anatomical shapes.

    PubMed

    Lekadir, Karim; Frangi, Alejandro F; Yang, Guang-Zhong

    2012-01-01

    This paper presents a new approach for the robust alignment and interpretation of 3D anatomical structures with large and localized shape differences. In such situations, existing techniques based on the well-known Procrustes analysis can be significantly affected by the resulting non-Gaussian distribution of the residuals. In the proposed technique, influential points that induce large dissimilarities are identified and displaced with the aim of obtaining an intermediate template with an improved distribution of the residuals. The key element of the algorithm is the use of pose-invariant shape variables to robustly guide both the influential point detection and displacement steps. The intermediate template is then used as the basis for the estimation of the final pose parameters between the source and destination shapes, making it possible to effectively highlight the regional differences of interest. The validation using synthetic and real datasets of different morphologies demonstrates robustness up to 50% regional differences and potential for shape classification. PMID:23286119
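
    The Procrustes step that the method builds on can be sketched with SciPy: orthogonal Procrustes aligns two landmark sets after centring and scaling, and large per-landmark residuals flag candidate "influential" points. The displacement and re-templating stages of the paper are not reproduced here, and the landmark data are synthetic.

      import numpy as np
      from scipy.linalg import orthogonal_procrustes

      def procrustes_residuals(source, target):
          """Align source landmarks to target (translation, scale, rotation) and
          return the aligned source plus per-landmark residual distances."""
          a = source - source.mean(axis=0)
          b = target - target.mean(axis=0)
          a /= np.linalg.norm(a)
          b /= np.linalg.norm(b)
          R, scale = orthogonal_procrustes(a, b)
          aligned = (a @ R) * scale
          return aligned, np.linalg.norm(aligned - b, axis=1)

      # Two shapes that differ by a rigid motion plus one large localized displacement.
      rng = np.random.default_rng(0)
      target = rng.random((30, 3))
      theta = 0.3
      Rz = np.array([[np.cos(theta), -np.sin(theta), 0], [np.sin(theta), np.cos(theta), 0], [0, 0, 1]])
      source = target @ Rz.T + np.array([1.0, -2.0, 0.5])
      source[7] += 0.4                                   # localized regional difference
      _, res = procrustes_residuals(source, target)
      print(np.argmax(res))                              # 7: the influential landmark stands out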

  6. 3D Printed Molecules and Extended Solid Models for Teaching Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Vaid, Thomas P.

    2014-01-01

    Tangible models help students and researchers visualize chemical structures in three dimensions (3D). 3D printing offers a unique and straightforward approach to fabricate plastic 3D models of molecules and extended solids. In this article, we prepared a series of digital 3D design files of molecular structures that will be useful for teaching…

  7. A multi-resolution fractal additive scheme for blind watermarking of 3D point data

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; Wilder, Kathy; Fox, Kevin

    2013-05-01

    We present a fractal feature space for 3D point watermarking to make geospatial systems more secure. By exploiting the self-similar nature of fractals, hidden information can be spatially embedded in point cloud data in an acceptable manner, as described in this paper. Our method utilizes a blind scheme which provides automatic retrieval of the watermark payload without the need for the original cover data. Locating similar patterns and encoding information in LiDAR point cloud data is accomplished through a look-up table or code book. The watermark is then merged into the point cloud data itself, resulting in low distortion effects. With current advancements in computing technologies, such as GPGPUs, fractal processing is now applicable to the big data present in geospatial as well as other systems. The watermarking technique described in this paper can be important for systems where point data are handled by numerous aerial collectors and analysts, such as a National LiDAR Data Layer.

  8. Commissioning a small-field biological irradiator using point, 2D, and 3D dosimetry techniques

    PubMed Central

    Newton, Joseph; Oldham, Mark; Thomas, Andrew; Li, Yifan; Adamovics, John; Kirsch, David G.; Das, Shiva

    2011-01-01

    Purpose: To commission a small-field biological irradiator, the XRad225Cx from Precision x-Ray, Inc., for research use. The system produces a 225 kVp x-ray beam and is equipped with collimating cones that produce both square and circular radiation fields ranging in size from 1 to 40 mm. This work incorporates point, 2D, and 3D measurements to determine output factors (OF), percent-depth-dose (PDD) and dose profiles at multiple depths. Methods: Three independent dosimetry systems were used: ion-chambers (a farmer chamber and a micro-ionisation chamber), 2D EBT2 radiochromic film, and a novel 3D dosimetry system (DLOS/PRESAGE®). Reference point dose rates and output factors were determined from in-air ionization chamber measurements for fields down to ∼13 mm using the formalism of TG61. PDD, profiles, and output factors at three separate depths (0, 0.5, and 2 cm), were determined for all field sizes from EBT2 film measurements in solid water. Several film PDD curves required a scaling correction, reflecting the challenge of accurate film alignment in very small fields. PDDs, profiles, and output factors were also determined with the 3D DLOS/PRESAGE® system which generated isotropic 0.2 mm data, in scan times of 20 min. Results: Surface output factors determined by ion-chamber were observed to gradually drop by ∼9% when the field size was reduced from 40 to 13 mm. More dramatic drops were observed for the smallest fields as determined by EBT∼18% and ∼42% for the 2.5 mm and 1 mm fields, respectively. PRESAGE® and film output factors agreed well for fields <20 mm (where 3D data were available) with mean deviation of 2.2% (range 1%–4%). PDD values at 2 cm depth varied from ∼72% for the 40 mm field, down to ∼55% for the 1 mm field. EBT and PRESAGE® PDDs agreed within ∼3% in the typical therapy region (1–4 cm). At deeper depths the EBT curves were slightly steeper (2.5% at 5 cm). These results indicate good overall consistency between ion-chamber, EBT

  9. Commissioning a small-field biological irradiator using point, 2D, and 3D dosimetry techniques

    SciTech Connect

    Newton, Joseph; Oldham, Mark; Thomas, Andrew; Li Yifan; Adamovics, John; Kirsch, David G.; Das, Shiva

    2011-12-15

    Purpose: To commission a small-field biological irradiator, the XRad225Cx from Precision x-Ray, Inc., for research use. The system produces a 225 kVp x-ray beam and is equipped with collimating cones that produce both square and circular radiation fields ranging in size from 1 to 40 mm. This work incorporates point, 2D, and 3D measurements to determine output factors (OF), percent-depth-dose (PDD) and dose profiles at multiple depths. Methods: Three independent dosimetry systems were used: ion-chambers (a farmer chamber and a micro-ionisation chamber), 2D EBT2 radiochromic film, and a novel 3D dosimetry system (DLOS/PRESAGE®). Reference point dose rates and output factors were determined from in-air ionization chamber measurements for fields down to ∼13 mm using the formalism of TG61. PDD, profiles, and output factors at three separate depths (0, 0.5, and 2 cm), were determined for all field sizes from EBT2 film measurements in solid water. Several film PDD curves required a scaling correction, reflecting the challenge of accurate film alignment in very small fields. PDDs, profiles, and output factors were also determined with the 3D DLOS/PRESAGE® system which generated isotropic 0.2 mm data, in scan times of 20 min. Results: Surface output factors determined by ion-chamber were observed to gradually drop by ∼9% when the field size was reduced from 40 to 13 mm. More dramatic drops were observed for the smallest fields as determined by EBT: ∼18% and ∼42% for the 2.5 mm and 1 mm fields, respectively. PRESAGE® and film output factors agreed well for fields <20 mm (where 3D data were available) with mean deviation of 2.2% (range 1%-4%). PDD values at 2 cm depth varied from ∼72% for the 40 mm field, down to ∼55% for the 1 mm field. EBT and PRESAGE® PDDs agreed within ∼3% in the typical therapy region (1-4 cm). At deeper depths the EBT curves were slightly steeper (2.5% at 5 cm

  10. Deriving 3d Point Clouds from Terrestrial Photographs - Comparison of Different Sensors and Software

    NASA Astrophysics Data System (ADS)

    Niederheiser, Robert; Mokroš, Martin; Lange, Julia; Petschko, Helene; Prasicek, Günther; Oude Elberink, Sander

    2016-06-01

    Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D-modelling. However, the important choice, which sensor and which software to use is not straight forward and needs consideration as the choice will have effects on the resulting 3D point cloud and its derivatives. We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments of the cameras. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors. Then we compared the results of the different software packages regarding the ease of the workflow, visual appeal, similarity and quality of the point cloud. While PhotoScan and Pix4D offer the user-friendliest workflows, they are also "black-box" programmes giving only little insight into their processing. Unsatisfying results may only be changed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software as it is less user-friendly. However, MicMac offers the most possibilities to influence the processing workflow. The resulting point-clouds of PhotoScan and MicMac are the most appealing.

  11. Recognizing Objects in 3D Point Clouds with Multi-Scale Local Features

    PubMed Central

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-01-01

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms. PMID:25517694

  12. Recognizing objects in 3D point clouds with multi-scale local features.

    PubMed

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-01-01

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms. PMID:25517694

  13. The point-source method for 3D reconstructions for the Helmholtz and Maxwell equations

    NASA Astrophysics Data System (ADS)

    Ben Hassen, M. F.; Erhard, K.; Potthast, R.

    2006-02-01

    We use the point-source method (PSM) to reconstruct a scattered field from its associated far field pattern. The reconstruction scheme is described and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than with the former use of the reciprocity relation. This allows us to handle the case of limited aperture data and arbitrary incident fields. Both for 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas with good reconstruction quality and combine different such regions into one joint function. Then, we show how shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.

  14. Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Sun, Shaohui

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems still remain unsolved. Automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls, that connect rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on a local analysis of the properties of the local implicit surface patch. The ground terrain and building rooftop footprints are then extracted, utilizing the developed strategy, a two-step hierarchical Euclidean clustering. The method presented here adopts a "divide-and-conquer" scheme. Once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into individual independent processing units which represent potential points on the rooftop. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum bounding box fitting technique, and is used to guide the refinement of shapes and boundaries of the rooftop parts. Boundaries for all of these features are refined for the purpose of producing a strict description. Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by detected
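
    The Euclidean clustering used to isolate individual buildings can be approximated by a fixed-radius connected-components pass over the non-ground points; the sketch below uses a k-d tree and union-find with an assumed 2 m linking distance and is only an illustration of the clustering idea, not the two-step hierarchical strategy of the thesis.

      import numpy as np
      from scipy.spatial import cKDTree

      def euclidean_clusters(points, radius=2.0, min_size=50):
          """Group points into clusters whose members are within `radius` of a neighbour."""
          tree = cKDTree(points)
          parent = np.arange(len(points))

          def find(i):
              while parent[i] != i:
                  parent[i] = parent[parent[i]]   # path compression
                  i = parent[i]
              return i

          for i, j in tree.query_pairs(radius):   # all pairs closer than `radius`
              ri, rj = find(i), find(j)
              if ri != rj:
                  parent[rj] = ri

          roots = np.array([find(i) for i in range(len(points))])
          labels = {r: k for k, r in enumerate(np.unique(roots))}
          out = np.array([labels[r] for r in roots])
          # Discard small clusters (likely residual vegetation or noise).
          sizes = np.bincount(out)
          out[sizes[out] < min_size] = -1
          return out

      rng = np.random.default_rng(0)
      b1 = rng.uniform([0, 0, 5], [10, 15, 8], (800, 3))      # rooftop points of building 1
      b2 = rng.uniform([40, 0, 10], [55, 20, 12], (900, 3))   # rooftop points of building 2
      labels = euclidean_clusters(np.vstack([b1, b2]))
      print(np.unique(labels, return_counts=True))            # two clusters of ~800 and ~900 points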

  15. Processing 3D flash LADAR point-clouds in real-time for flight applications

    NASA Astrophysics Data System (ADS)

    Craig, R.; Gravseth, I.; Earhart, R. P.; Bladt, J.; Barnhill, S.; Ruppert, L.; Centamore, C.

    2007-04-01

    Ball Aerospace & Technologies Corp. has demonstrated real-time processing of 3D imaging LADAR point-cloud data to produce the industry's first time-of-flight (TOF) 3D video capability. This capability is uniquely suited to the rigorous demands of space and airborne flight applications and holds great promise in the area of autonomous navigation. It will provide long-range, three-dimensional video information to autonomous flight software or pilots for immediate use in rendezvous and docking, proximity operations, landing, surface vision systems, and automatic target recognition and tracking. This is enabled by our new generation of FPGA-based "pixel-tube" processors, coprocessors and their associated algorithms, which have led to a number of advancements in high-speed wavefront processing along with additional advances in dynamic camera control and space laser designs based on Ball's CALIPSO LIDAR. This evolution in LADAR is made possible by moving the mechanical complexity required for a scanning system into the electronics, where production, integration, testing and life-cycle costs can be significantly reduced. This technique requires a state-of-the-art TOF read-out integrated circuit (ROIC) attached to a sensor array to collect high-resolution temporal data, which is then processed through FPGAs. The number of calculations required to process the data is greatly reduced because all points are captured at the same time and are thus correlated. This correlation allows extremely efficient FPGA processing. This capability has been demonstrated in prototype form at both Marshall Space Flight Center and Langley Research Center on targets that represent docking and landing scenarios. This report outlines many aspects of this work as well as aspects of our recent testing at Marshall's Flight Robotics Laboratory.

  16. Integration of Libration Point Orbit Dynamics into a Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight-proven code, a monodromy matrix developed from an N-body model of a libration orbit, and a standard STM developed from the gravitational and Coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.

  17. Comparison of clinical bracket point registration with 3D laser scanner and coordinate measuring machine

    PubMed Central

    Nouri, Mahtab; Farzan, Arash; Baghban, Ali Reza Akbarzadeh; Massudi, Reza

    2015-01-01

    OBJECTIVE: The aim of the present study was to assess the diagnostic value of a laser scanner developed to determine the coordinates of clinical bracket points and to compare its results with those of a coordinate measuring machine (CMM). METHODS: This diagnostic experimental study was conducted on maxillary and mandibular orthodontic study casts of 18 adults with normal Class I occlusion. First, the coordinates of the bracket points were measured on all casts by a CMM. Then, the three-dimensional coordinates (X, Y, Z) of the bracket points were measured on the same casts by a 3D laser scanner designed at Shahid Beheshti University, Tehran, Iran. The validity and reliability of each system were assessed by means of the intraclass correlation coefficient (ICC) and Dahlberg's formula. RESULTS: The difference between the mean dimension and the actual value for the CMM was 0.0066 mm (95% CI: 69.98340, 69.99140). The mean difference for the laser scanner was 0.107 ± 0.133 mm (95% CI: -0.002, 0.24). In each method, the differences were not significant. The ICC comparing the two methods was 0.998 for the X coordinate and 0.996 for the Y coordinate; the mean difference for coordinates recorded in the entire arch and for each tooth was 0.616 mm. CONCLUSION: The accuracy of clinical bracket point coordinates measured by the laser scanner was equal to that of the CMM. The mean difference in measurements was within the range of operator error. PMID:25741826

  18. Knowledge guided object detection and identification in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Karmacharya, A.; Boochs, F.; Tietz, B.

    2015-05-01

    Modern instruments like laser scanners and 3D cameras, or image-based techniques like structure from motion, produce huge point clouds as a basis for further object analysis. This has considerably changed the way data are compiled, away from selective, manually guided processes towards automatic, computer-supported strategies. However, there is still a long way to go to reach the quality and robustness of manual processes, as the data sets are mostly very complex. Existing strategies for 3D data processing for object detection and reconstruction rely heavily on either data-driven or model-driven approaches. These approaches are limited by their strong dependence on the nature of the data and by their inability to handle deviations; furthermore, the lack of capability to integrate other data or information between the processing steps exposes their limitations even more. This restricts the approaches to executing a strict, predefined strategy and does not allow deviations when new, unexpected situations arise. We propose a solution that introduces intelligence into the processing activities through the use of semantics. The solution binds the objects, along with other related knowledge domains, to the numerical processing in order to facilitate the detection of geometries, and then uses experts' inference rules to annotate them. The solution was tested within the prototypical application of the research project "Wissensbasierte Detektion von Objekten in Punktwolken für Anwendungen im Ingenieurbereich (WiDOP)" (knowledge-based detection of objects in point clouds for engineering applications). The flexibility of the solution is demonstrated through two entirely different use-case scenarios: Deutsche Bahn (the German railway system) for the outdoor scenario and Fraport (Frankfurt Airport) for the indoor scenario. Apart from the difference in their environments, they provide different conditions that the solution needs to consider: while the locations of the objects at Fraport were known beforehand, those of the DB scenario were not known at the beginning.

  19. Aberration averaging using point spread function for scanning projection systems

    NASA Astrophysics Data System (ADS)

    Ooki, Hiroshi; Noda, Tomoya; Matsumoto, Koichi

    2000-07-01

    Scanning projection systems play a leading role in current DUV optical lithography. It is frequently pointed out that mechanically induced distortion and field curvature degrade image quality after scanning. On the other hand, the aberration of the projection lens is averaged along the scanning direction, and this averaging effect reduces the residual aberration significantly. This paper describes aberration averaging based on the point spread function and a phase retrieval technique used to estimate the effective wavefront aberration after scanning. Our averaging method is tested using a specified wavefront aberration, and its accuracy is discussed based on the measured wavefront aberration of a recent Nikon projection lens.
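
    To illustrate the averaging effect described above (a generic sketch, not the authors' implementation): in a scanning system the effective PSF at a field point is approximately the mean of the static PSF over the positions swept through during the scan. The small NumPy example below averages a stack of hypothetical field-dependent PSFs along the scan direction; the Gaussian PSFs are placeholders for measured or modelled ones.

```python
# Sketch of scan-direction PSF averaging: the effective PSF is the mean of the
# static PSFs sampled along the scan direction. Purely illustrative; the
# Gaussian "field-dependent" PSFs stand in for real measured or modelled PSFs.
import numpy as np

def gaussian_psf(shape, sigma_x, sigma_y):
    y, x = np.indices(shape, dtype=float)
    y -= (shape[0] - 1) / 2
    x -= (shape[1] - 1) / 2
    psf = np.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2))
    return psf / psf.sum()

# Static PSFs at several scan positions (aberrations vary along the scan).
scan_positions = np.linspace(-1.0, 1.0, 21)
static_psfs = [gaussian_psf((65, 65), sigma_x=2.0 + 0.5 * s, sigma_y=2.0 - 0.3 * s)
               for s in scan_positions]

# Effective (scanned) PSF = average over the scan.
effective_psf = np.mean(static_psfs, axis=0)
print("peak of averaged PSF:", effective_psf.max())
```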

  20. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    NASA Astrophysics Data System (ADS)

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, to within the accuracy of the RPAS system and the inertial measurement unit. To remove the remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image, which assumes a mainly planar scene to avoid mismatches; (ii) coregistration of the dense 3D point clouds from the RGB images and from the TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of the dense 3D point clouds from the RGB images and from the TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back-projection of homologous points in the corrected RGB and TIR images.

  1. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    PubMed Central

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H.; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-01-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to require frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: the first is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221

  2. Calibration of an outdoor distributed camera network with a 3D point cloud.

    PubMed

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-01-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to require frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: the first is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221
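
    As a side note on the ground-plane homographies mentioned in the abstract (an illustrative sketch, not the authors' pipeline): given a few correspondences between image pixels and ground-plane world coordinates, for instance taken from the 3D map, a homography can be estimated with OpenCV and used to map detections from image space to the walking plane. The point values below are invented for illustration.

```python
# Sketch: estimate an image-to-ground-plane homography from a few point
# correspondences and use it to map a pixel detection to world coordinates.
# The correspondences here are invented for illustration only.
import numpy as np
import cv2

# Pixel coordinates of reference points in one camera image.
image_pts = np.array([[100, 400], [520, 410], [500, 120], [130, 100]], dtype=np.float32)
# Corresponding world coordinates (metres) on the ground plane, e.g. from a 3D map.
world_pts = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 8.0], [0.0, 8.0]], dtype=np.float32)

H, mask = cv2.findHomography(image_pts, world_pts)

def pixel_to_ground(H, u, v):
    """Map a pixel (u, v) to ground-plane coordinates using homography H."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

print("detection at pixel (320, 260) maps to ground point", pixel_to_ground(H, 320, 260))
```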

  3. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    SciTech Connect

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan

    2015-11-15

    The proposed method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively its fidelity in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were obtained from the reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10⁻³ mm⁻¹, σ_recon = 7.0 × 10⁻³ mm⁻¹) and (μ_CT = −2.5 × 10⁻³ mm⁻¹, σ_CT = 5.3 × 10⁻³ mm⁻¹), respectively. The agreement of local geometric properties between the reconstructed surfaces and the CT surface demonstrates the ability of the proposed method to faithfully represent the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set-based continuous surface reconstruction method for point clouds acquired by a 3D surface photogrammetry system. The proposed method generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.

  4. Structural analysis of San Leo (RN, Italy) east and north cliffs using 3D point clouds

    NASA Astrophysics Data System (ADS)

    Spreafico, Margherita Cecilia; Bacenetti, Marco; Borgatti, Lisa; Cignetti, Martina; Giardino, Marco; Perotti, Luigi

    2013-04-01

    The town of San Leo, like many others in the historical region of Montefeltro (Northern Apennines, Italy), was built in the medieval period, for defensive purposes, on a calcarenite and sandstone slab bordered by subvertical and overhanging cliffs up to 100 m high. The slab and the underlying clayey substratum show widespread landslide phenomena: the slab is tectonized and crossed by joints and faults, and it is affected by lateral spreading with associated rock falls, topples and tilting, while the underlying clayey substratum is involved in plastic movements such as earth flows and slides. The main cause of instability in the area, which drives these movements, is the high deformability contrast between the plate and the underlying clays. The aim of our research is to set up a numerical model that can describe the processes well and take into account the different factors that influence the evolution of the movements. One of these factors is certainly the structural setting of the slab, characterized by several joints and faults; in order to better identify and detect the main joint sets affecting the study area, a structural analysis was performed. To date, a series of scans of the San Leo cliff taken in 2008 and 2011 with a Riegl Z420i have been analyzed. Initially, we chose a test area on the east side of the cliff, in which analyses were performed using two different software packages: COLTOP 3D and Polyworks. We repeated the analysis using COLTOP for the whole east wall and for a part of the north wall, including an area affected by a rock fall in 2006. In the test area we identified five sets with different dips and dip directions. The analysis of the east and north walls allowed us to identify eight sets of discontinuities (seven plus the bedding). We compared these results with previous ones from surveys taken by other authors in some areas and with some preliminary data from a traditional geological survey of the whole area. With traditional methods only a
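
    For readers unfamiliar with the orientation convention used in such structural analyses: joint-set orientations are usually expressed as dip and dip direction, which can be derived from the unit normal of a plane fitted to a cluster of points. The generic conversion below (not tied to COLTOP 3D or Polyworks) assumes x = east, y = north, z = up.

```python
# Sketch: convert fitted plane normals into dip / dip-direction pairs, assuming
# x = east, y = north, z = up. Illustrative only; not the cited workflow.
import numpy as np

def dip_and_dip_direction(normal):
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:            # force the normal to point upward
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0   # clockwise from north
    return dip, dip_dir

# Example: a plane dipping 30 degrees towards the east (dip direction 090).
example_normal = [np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))]
print(dip_and_dip_direction(example_normal))   # approximately (30.0, 90.0)
```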

  5. Non-Newtonian Fluids Spreading with Surface Tension Effect: 3D Numerical Analysis Using FEM and Experimental Study

    NASA Astrophysics Data System (ADS)

    Hu, Bin; Kieweg, Sarah

    2010-11-01

    Gravity-driven thin film flow down an incline is studied for the optimal design of polymeric drug delivery vehicles, such as anti-HIV topical microbicides. We develop a 3D FEM model using non-Newtonian mechanics to model the flow of gels in response to gravity, surface tension and shear-thinning. A constant-volume setup is applied within the scope of the lubrication approximation. The lengthwise profiles of the 3D model agree with our previous 2D finite difference model, while the transverse contact line patterns of the 3D model are compared to the experiments. With the incorporation of surface tension, capillary ridges are observed at the leading front in both the 2D and 3D models. Previously published studies show that the capillary ridge can amplify fingering instabilities in the transverse direction. Sensitivity studies (2D & 3D) and experiments are carried out to describe the influence of surface tension and shear-thinning on the capillary ridge and fingering instabilities.

  6. Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)

    NASA Astrophysics Data System (ADS)

    Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane

    2016-04-01

    Amongst today's techniques that are able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high-resolution and high-precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with UAVs becomes easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan was acquired in the main chamber of the cave (~ 15000 m³) to spatialize a 3D point cloud of its inner walls and infer geological beds and structures. Even though the use of the LIDAR instrument was not really comfortable in such a caving environment, the collected data showed remarkable precision according to the geometry of a few control points. We also performed another challenging survey of the same cave chamber by modelling a 3D point cloud using photogrammetry on a set of DSLR camera pictures taken from the ground and UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs cloud recovery and point precision), and (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher-density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modeled by photogrammetry provide visible light spectral information
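
    A common way to compare two such point clouds, for example the LIDAR reference and the photogrammetric model, is a cloud-to-cloud nearest-neighbour distance. The sketch below (not the authors' procedure; the data are synthetic) uses a k-d tree from SciPy and reports summary statistics.

```python
# Sketch: nearest-neighbour cloud-to-cloud distances between two point clouds,
# e.g. a LIDAR reference cloud and a photogrammetric cloud. Synthetic data.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(reference_xyz, compared_xyz):
    tree = cKDTree(reference_xyz)
    dists, _ = tree.query(compared_xyz, k=1)
    return dists

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    reference = rng.uniform(0, 10, size=(20000, 3))                # "LIDAR" cloud
    compared = reference[:5000] + rng.normal(0, 0.02, (5000, 3))   # noisy subset
    d = cloud_to_cloud_distances(reference, compared)
    print(f"mean={d.mean():.4f} m, median={np.median(d):.4f} m, p95={np.percentile(d, 95):.4f} m")
```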

  7. Observation of superconductivity induced by a point contact on 3D Dirac semimetal Cd3As2 crystals.

    PubMed

    Wang, He; Wang, Huichao; Liu, Haiwen; Lu, Hong; Yang, Wuhao; Jia, Shuang; Liu, Xiong-Jun; Xie, X C; Wei, Jian; Wang, Jian

    2016-01-01

    Three-dimensional (3D) Dirac semimetals, which possess 3D linear dispersion in the electronic structure as a bulk analogue of graphene, have lately generated widespread interest in both materials science and condensed matter physics. Recently, crystalline Cd3As2 has been proposed and proved to be a 3D Dirac semimetal that can survive in the atmosphere. Here, by using point contact spectroscopy measurements, we observe exotic superconductivity around the point contact region on the surface of Cd3As2 crystals. The zero-bias conductance peak (ZBCP) and double conductance peaks (DCPs) symmetric around zero bias suggest p-wave-like unconventional superconductivity. Considering the topological properties of 3D Dirac semimetals, our findings may indicate that Cd3As2 crystals under certain conditions could be topological superconductors, which are predicted to support Majorana zero modes or gapless Majorana edge/surface modes in the boundary depending on the dimensionality of the material. PMID:26524129

  8. Surface-based matching of 3D point clouds with variable coordinates in source and target system

    NASA Astrophysics Data System (ADS)

    Ge, Xuming; Wunderlich, Thomas

    2016-01-01

    The automatic co-registration of point clouds, representing three-dimensional (3D) surfaces, is an important technique in 3D reconstruction and is widely applied in many different disciplines. An alternative approach is proposed here that estimates the transformation parameters of one or more 3D search surfaces with respect to a 3D template surface. The approach uses the nonlinear Gauss-Helmert model, minimizing the quadratically constrained least squares problem. This approach has the ability to match arbitrarily oriented 3D surfaces captured from a number of different sensors, on different time-scales and at different resolutions. In addition to the 3D surface-matching paths, the mathematical model allows the precision of the point clouds to be assessed after adjustment. The error behavior of surfaces can also be investigated based on the proposed approach. Some practical examples are presented and the results are compared with the iterative closest point and the linear least-squares approaches to demonstrate the performance and benefits of the proposed technique.

  9. Extraction and refinement of building faces in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Pohl, Melanie; Meidow, Jochen; Bulatov, Dimitri

    2013-10-01

    In this paper, we present an approach to generate a 3D model of an urban scene from sensor data. The first milestone on that way is to classify the sensor data into the main parts of a scene, such as ground, vegetation, buildings and their outlines; this has already been accomplished within our previous work. Now, we propose a four-step algorithm to model the building structure, which is assumed to consist of several dominant planes. First, we extract small elevated objects, like chimneys, using a hot-spot detector and handle the detected regions separately. In order to model the variety of roof structures precisely, we split complex building blocks into parts. Two different approaches are used: where underlying 2D ground polygons are available, we use geometric methods to divide them into sub-polygons; without polygons, we use morphological operations and segmentation methods. In the third step, extraction of dominant planes takes place, using either the RANSAC or the J-linkage algorithm. They operate on point clouds of sufficient confidence within the previously separated building parts and give robust results even with noisy, outlier-rich data. Last, we refine the previously determined plane parameters using geometric relations of the building faces. Due to noise, these expected properties of roofs and walls are not fulfilled. Hence, we enforce them as hard constraints and use the previously extracted plane parameters as initial values for an optimization method. To test the proposed workflow, we use several data sets, including noisy data from depth maps and data computed by laser scanning.
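
    For readers unfamiliar with the dominant-plane step, a bare-bones RANSAC plane fit can be written in a few lines of NumPy. This is an illustrative sketch, not the cited implementation; the iteration count and distance threshold are assumptions.

```python
# Minimal RANSAC plane fit: repeatedly fit a plane to 3 random points and keep
# the plane with the most inliers. Illustrative sketch; thresholds are assumed.
import numpy as np

def ransac_plane(points, n_iters=500, dist_thresh=0.05, rng=None):
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)    # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    roof = np.c_[rng.uniform(0, 10, (800, 2)), 5.0 + rng.normal(0, 0.02, 800)]
    noise = rng.uniform(0, 10, (200, 3))
    (normal, d), inliers = ransac_plane(np.vstack([roof, noise]))
    print("plane normal:", np.round(normal, 3), "inliers:", inliers.sum())
```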

  10. The Finite-Length Line-Spread Function: An Extension To Asymmetric Point Spread Functions

    NASA Astrophysics Data System (ADS)

    Dallas, W. J.

    1988-06-01

    The point spread function (PSF) is used to characterize imaging systems. The PSF is usually not measured directly; rather, the line spread function (LSF) is measured by scanning across the image of an input slit, and one of the well-known LSF-PSF conversion formulas is then applied. These formulas make the assumption that the length of the input-slit image is great compared to the PSF extent. This assumption is unfortunately unwarranted for one of the most important medical imaging devices: the x-ray image intensifier. The large extent of the image intensifier's PSF and the limited size of the intensifier's isoplanatic patches combine to make consideration of the finite length of the input slit important. Formulas for calculating the PSF from a measurement of the finite-length line spread function (FLSF) have been developed for the case of a rotationally symmetric PSF. In this presentation we generalize the conversion formulas to cover non-symmetric PSFs.
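
    For context, the standard relations involved (quoted here as textbook background, not reproduced from the cited paper) are: with the slit oriented along the y-axis, the conventional LSF is the line integral of the PSF, and the finite-length variant truncates the integration to the slit length L:

```latex
\mathrm{LSF}(x) = \int_{-\infty}^{\infty} \mathrm{PSF}(x, y)\, \mathrm{d}y,
\qquad
\mathrm{FLSF}(x) = \int_{-L/2}^{L/2} \mathrm{PSF}(x, y)\, \mathrm{d}y .
```

    The cited work addresses the inverse problem of recovering PSF(x, y) from FLSF measurements when the PSF is not rotationally symmetric.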

  11. Super-resolution photon-efficient imaging by nanometric double-helix point spread function localization of emitters (SPINDLE)

    PubMed Central

    Grover, Ginni; DeLuca, Keith; Quirin, Sean; DeLuca, Jennifer; Piestun, Rafael

    2012-01-01

    Super-resolution imaging with photo-activatable or photo-switchable probes is a promising tool in biological applications to reveal previously unresolved intra-cellular details with visible light. This field benefits from developments in the areas of molecular probes, optical systems, and computational post-processing of the data. The joint design of optics and reconstruction processes using double-helix point spread functions (DH-PSF) provides high resolution three-dimensional (3D) imaging over a long depth-of-field. We demonstrate for the first time a method integrating a Fisher information efficient DH-PSF design, a surface relief optical phase mask, and an optimal 3D localization estimator. 3D super-resolution imaging using photo-switchable dyes reveals the 3D microtubule network in mammalian cells with localization precision approaching the information theoretical limit over a depth of 1.2 µm. PMID:23187521

  12. Automated 3D Motion Tracking using Gabor Filter Bank, Robust Point Matching, and Deformable Models

    PubMed Central

    Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2013-01-01

    Tagged Magnetic Resonance Imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the Robust Point Matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of: 1) through-plane motion, and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the Moving Least Square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method
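
    As a generic illustration of the first step (not the authors' code): a small Gabor filter bank can be built with scikit-image and applied to an image to obtain orientation-selective magnitude responses, from which tag lines and their intersections can be localized. The test image and filter parameters below are assumptions.

```python
# Sketch: a small Gabor filter bank for detecting oriented tag lines, using
# scikit-image. The test image and parameters are illustrative assumptions.
import numpy as np
from skimage.filters import gabor

def gabor_bank_responses(image, frequency=0.15, n_orientations=4):
    """Return the filter magnitude response for each orientation."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        real, imag = gabor(image, frequency=frequency, theta=theta)
        responses.append(np.hypot(real, imag))
    return np.stack(responses)

if __name__ == "__main__":
    # Synthetic "tagged" image: two crossing sinusoidal grids.
    y, x = np.mgrid[0:128, 0:128]
    image = np.sin(2 * np.pi * 0.15 * x) + np.sin(2 * np.pi * 0.15 * y)
    mags = gabor_bank_responses(image)
    print("response stack shape:", mags.shape)
```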

  13. Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Fang, Lina; Li, Jonathan

    2013-05-01

    Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing dense point clouds that can be used to construct detailed road models for large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The proposed method partitions MLS point clouds into a set of consecutive "scanning lines", each of which consists of a road cross-section. A moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns. The detected curb points are tracked and refined so that they are both globally consistent and locally similar. To evaluate the validity of the proposed method, experiments were conducted using two types of street-scene point clouds captured by Optech's Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads are over 94.42%, 91.13%, and 91.3%, respectively, demonstrating that the proposed method is a promising solution for extracting 3D roads from MLS point clouds.

  14. Point Spread Function extraction in crowded fields using blind deconvolution

    NASA Astrophysics Data System (ADS)

    Schreiber, Laura; La Camera, Andrea; Prato, Marco; Diolaiti, Emiliano

    2013-12-01

    The extraction of the Point Spread Function (PSF) from astronomical data is an important issue for data reduction packages for stellar photometry that use PSF fitting. High resolution Adaptive Optics images are characterized by a highly structured PSF that cannot be represented by any simple analytical model. Even a numerical PSF extracted from the frame can be affected by the field crowding effects. In this paper we use blind deconvolution in order to find an approximation of both the unknown object and the unknown PSF. In particular we adopt an iterative inexact alternating minimization method where each iteration (that we called outer iteration) consists in alternating an update of the object and of the PSF by means of fixed numbers of (inner) iterations of the Scaled Gradient Projection (SGP) method. The use of SGP allows the introduction of different constraints on the object and on the PSF. In particular, we introduce a constraint on the PSFs which is an upper bound derived from the Strehl ratio (SR), to be provided together with the input data. In this contribution we show the photometric error dependence on the crowding, having simulated images generated with synthetic PSFs available from the Phase-A study of the E-ELT MCAO system (MAORY) and different crowding conditions.

  15. The point spread function reconstruction by using Moffatlets — I

    NASA Astrophysics Data System (ADS)

    Li, Bai-Shun; Li, Guo-Liang; Cheng, Jun; Peterson, John; Cui, Wei

    2016-09-01

    Shear measurement is a crucial task in current and future weak lensing survey projects. The reconstruction of the point spread function (PSF) is one of the essential steps involved in this process. In this work, we present three different methods, Gaussianlets, Moffatlets and Expectation Maximization Principal Component Analysis (EMPCA), and quantify their efficiency in PSF reconstruction using four sets of simulated Large Synoptic Survey Telescope (LSST) star images. Gaussianlets and Moffatlets are two different sets of basis functions whose profiles are based on Gaussian and Moffat functions, respectively. EMPCA is a statistical method performing an iterative procedure to find the principal components (PCs) of an ensemble of star images. Our tests show that: (1) Moffatlets always perform better than Gaussianlets. (2) EMPCA is more compact and flexible, but the noise present in the PCs contaminates the size and ellipticity of the PSF. By contrast, Moffatlets preserve the size and ellipticity very well.
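
    To illustrate the EMPCA idea in its simplest, non-weighted form (a simplified stand-in, since the cited work uses an iterative expectation-maximization PCA tailored to noisy stars), ordinary PCA over a stack of star stamps yields principal components from which each PSF can be reconstructed:

```python
# Sketch: plain (non-EM) PCA of star image stamps as a simplified stand-in for
# EMPCA. Star stamps here are synthetic elliptical Gaussians with noise.
import numpy as np
from sklearn.decomposition import PCA

def make_star(shape=(32, 32), sigma_x=2.5, sigma_y=2.0, noise=0.01, rng=None):
    rng = np.random.default_rng(rng)
    y, x = np.indices(shape, dtype=float)
    y -= shape[0] / 2
    x -= shape[1] / 2
    star = np.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2))
    return star / star.sum() + rng.normal(0, noise, shape)

stars = np.array([make_star(sigma_x=2.0 + 0.4 * np.random.rand(),
                            sigma_y=2.0 + 0.4 * np.random.rand()).ravel()
                  for _ in range(200)])

pca = PCA(n_components=5).fit(stars)
coeffs = pca.transform(stars)                    # per-star PC coefficients
reconstructed = pca.inverse_transform(coeffs)    # PSF models built from 5 PCs
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
```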

  16. Status of point spread function determination for Keck adaptive optics

    NASA Astrophysics Data System (ADS)

    Ragland, S.; Jolissaint, L.; Wizinowich, P.; Neyman, C.

    2014-07-01

    There is great interest in the adaptive optics (AO) science community to overcome the limitations imposed by incomplete knowledge of the point spread function (PSF). To address this limitation a program has been initiated at the W. M. Keck Observatory (WMKO) to demonstrate PSF determination for observations obtained with Keck AO science instruments. This paper aims to give a broad view of the progress achieved in this area. The concept and the implementation are briefly described. The results from on-sky on-axis NGS AO measurements using the NIRC2 science instrument are presented. On-sky performance of the technique is illustrated by comparing the reconstructed PSFs to NIRC2 PSFs. Accuracy of the reconstructed PSFs in terms of Strehl ratio and FWHM are discussed. Science cases for the first phase of science verification have been identified. More technical details of the program are presented elsewhere in the conference.

  17. A Point Spread Function for the EPOXI Mission

    NASA Technical Reports Server (NTRS)

    Barry, Richard K.

    2010-01-01

    The Extrasolar Planet Observation and Characterization and the Deep Impact Extended Investigation missions (EPOXI) are currently observing the transits of exoplanets, two comet nuclei at short range, and the Earth and Mars using the High Resolution Instrument (HRI) - a 0.3 m f/35 telescope on the Deep Impact probe. The HRI is in a permanently defocused state, with the instrument point of focus about 0.6 cm before the focal plane, due to the use of a reference flat mirror that took on power during ground thermal-vacuum testing. Consequently, the point spread function (PSF) covers approximately nine pixels FWHM and is characterized by a patch with three-fold symmetry due to the three-point support structures of the primary and secondary mirrors. The PSF is also strongly color dependent, varying in shape and size with changes in filtration and target color. While defocus is highly desirable for exoplanet transit observations to limit sensitivity to intra-pixel variation, it is suboptimal for observations of spatially resolved targets. Consequently, all images used in our analysis of such objects were deconvolved with an instrument PSF. The instrument PSF is also being used to optimize transit analysis. We discuss the development and usage of an instrument PSF for these observations.

  18. A 3D point-kernel multiple scatter model for parallel-beam SPECT based on a gamma-ray buildup factor

    NASA Astrophysics Data System (ADS)

    Marinkovic, Predrag; Ilic, Radovan; Spaic, Rajko

    2007-09-01

    A three-dimensional (3D) point-kernel multiple scatter model for point spread function (PSF) determination in parallel-beam single-photon emission computed tomography (SPECT), based on a dose gamma-ray buildup factor, is proposed. This model embraces nonuniform attenuation in a voxelized object of imaging (the patient body) and multiple scattering that is treated as in point-kernel integration gamma-ray shielding problems. First-order Compton scattering is handled by means of the Klein-Nishina formula, while multiple scattering is accounted for by making use of a dose buildup factor. An asset of the present model is the possibility of generating a complete two-dimensional (2D) PSF that can be used for 3D SPECT reconstruction by means of iterative algorithms. The proposed model is convenient in those situations where more exact techniques are not economical. To test the proposed model, calculations were performed for a point source in a nonuniform scattering object with parallel-beam collimator geometry; the multiple-order scatter PSFs generated by the proposed model matched well with those obtained using Monte Carlo (MC) simulations. Discrepancies are observed only at the exponential tails, mostly due to the high statistical uncertainty of the MC simulations in this area, and not because of any inappropriateness of the model.
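
    For reference, the Klein-Nishina differential cross-section used for the first-order scatter is the textbook formula below (quoted for convenience, not taken from the cited paper), with epsilon = E'/E the ratio of scattered to incident photon energy, r_e the classical electron radius and theta the scattering angle:

```latex
\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}
  = \frac{r_e^{2}}{2}\,\epsilon^{2}
    \left( \epsilon + \frac{1}{\epsilon} - \sin^{2}\theta \right),
\qquad
\epsilon = \frac{E'}{E} = \frac{1}{1 + \dfrac{E}{m_e c^{2}}\left(1 - \cos\theta\right)} .
```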

  19. 3D point cloud classification of complex natural scenes using a multi-scale dimensionality criterion: applications in geomorphology

    NASA Astrophysics Data System (ADS)

    Brodu, N.; Lague, D.

    2012-04-01

    3D point clouds derived from terrestrial laser scanners (TLS) and photogrammetry are now frequently used in geomorphology to achieve greater precision and completeness in surveying natural environments than what was feasible a few years ago. Yet scientific exploitation of these large and complex 3D data sets remains difficult and would benefit from automated classification procedures that could pre-process the raw point cloud data. Typical examples of applications are the separation of vegetation from ground or cliff outcrops, the distinction between fresh rock surfaces and rockfall, the classification of flat or rippled beds, and more generally the classification of 3D surfaces according to their morphology directly in the native point cloud data organization rather than after a sometimes cumbersome meshing or gridding phase. Developing such classification procedures remains difficult because of the 3D nature of the data generated from ground-based systems (as opposed to the 2.5D nature of aerial lidar data) and the heterogeneity and complexity of natural surfaces. We present a new software suite (CANUPO) that can classify raw point clouds in 3D based on a new geometrical measure: the multi-scale dimensionality. This method exploits the multi-resolution characteristics of high-resolution datasets covering scales ranging from a few centimeters to hundreds of meters. The dimensionality characterizes the local 3D organization of the point cloud within spheres centered on the measured points and varies from 1D (points set along a line) and 2D (points forming a plane) to the full 3D volume. By varying the diameter of the sphere, we track how the local cloud geometry behaves across scales (typically ranging from 5 cm to 1 m). We present the technique and illustrate its efficiency on two examples: separating riparian vegetation from ground, and classifying a steep mountain stream as vegetation, rock, gravel or water surface. In these two cases, separating the
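
    The dimensionality measure can be illustrated with a short NumPy/SciPy sketch. This is a simplified reading of the idea, not the CANUPO code: for each point and each scale, the eigenvalues of the local covariance within a sphere yield linearity, planarity and scattering proportions, a common convention that stands in for CANUPO's barycentric measure.

```python
# Sketch: multi-scale local dimensionality from the eigenvalues of neighbourhood
# covariances within spheres of increasing radius. Simplified illustration of
# the idea; proportions follow the usual linearity/planarity/scattering convention.
import numpy as np
from scipy.spatial import cKDTree

def dimensionality_features(points, query_points, radii):
    """Return an array (n_query, n_radii, 3) of (1D, 2D, 3D) proportions."""
    tree = cKDTree(points)
    feats = np.zeros((len(query_points), len(radii), 3))
    for j, r in enumerate(radii):
        for i, q in enumerate(query_points):
            idx = tree.query_ball_point(q, r)
            if len(idx) < 4:
                continue
            nbrs = points[idx] - points[idx].mean(axis=0)
            l1, l2, l3 = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))[::-1]
            if l1 <= 0:
                continue
            feats[i, j] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
    return feats

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    plane = np.c_[rng.uniform(0, 1, (2000, 2)), np.zeros(2000)]    # flat "ground"
    bush = rng.normal([0.5, 0.5, 0.3], 0.05, size=(500, 3))        # 3D "vegetation"
    cloud = np.vstack([plane, bush])
    f = dimensionality_features(cloud, cloud[::500], radii=[0.05, 0.2])
    print(np.round(f, 2))
```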

  20. A 3-D numerical study of pinhole diffraction to predict the accuracy of EUV point diffraction interferometry

    SciTech Connect

    Goldberg, K.A. |; Tejnil, E.; Bokor, J. |

    1995-12-01

    A 3-D electromagnetic field simulation is used to model the propagation of extreme ultraviolet (EUV), 13-nm, light through sub-1500 Å dia pinholes in a highly absorptive medium. Deviations of the diffracted wavefront phase from an ideal sphere are studied within 0.1 numerical aperture, to predict the accuracy of EUV point diffraction interferometers used in at-wavelength testing of nearly diffraction-limited EUV optical systems. Aberration magnitudes are studied for various 3-D pinhole models, including cylindrical and conical pinhole bores.

  1. PIV Measurement of Transient 3-D (Liquid and Gas Phases) Flow Structures Created by a Spreading Flame over 1-Propanol

    NASA Technical Reports Server (NTRS)

    Hassan, M. I.; Kuwana, K.; Saito, K.

    2001-01-01

    In the past, we measured 3-D flow structures in the liquid and gas phases created by a spreading flame over liquid fuels. In that effort, we employed several different techniques, including our original laser sheet particle tracking (LSPT) technique, which is capable of measuring transient 2-D flow structures. Recently we obtained a state-of-the-art integrated particle image velocimetry (IPIV) system, whose function is similar to LSPT but which has an integrated data recording and processing system. To evaluate the accuracy of our IPIV system, we conducted a series of flame spread tests using the same experimental apparatus as in our previous flame spread studies and obtained a series of 2-D flow profiles corresponding to our previous LSPT measurements. We confirmed that both the LSPT and IPIV techniques produced similar data, but the IPIV data contain more detailed flow structures than the LSPT data. Here we present some of the newly obtained IPIV flow structure data and discuss the role of gravity in the flame-induced flow structures. Note that the application of IPIV to our flame spread problems is not straightforward, and it required several preliminary tests of its accuracy, including this comparison of IPIV to LSPT.

  2. Toward 3D Printing of Medical Implants: Reduced Lateral Droplet Spreading of Silicone Rubber under Intense IR Curing.

    PubMed

    Stieghorst, Jan; Majaura, Daniel; Wevering, Hendrik; Doll, Theodor

    2016-03-01

    The direct fabrication of silicone-rubber-based individually shaped active neural implants requires high-speed-curing systems in order to prevent extensive spreading of the viscous silicone rubber materials during vulcanization. Therefore, an infrared-laser-based test setup was developed to cure the silicone rubber materials rapidly and to evaluate the resulting spreading in relation to its initial viscosity, the absorbed infrared radiation, and the surface tensions of the fabrication bed's material. Different low-adhesion materials (polyimide, Parylene-C, polytetrafluoroethylene, and fluorinated ethylene propylene) were used as bed materials to reduce the spreading of the silicone rubber materials by means of their well-known weak surface tensions. Further, O2-plasma treatment was performed on the bed materials to reduce the surface tensions. To calculate the absorbed radiation, the emittance of the laser was measured, and the absorptances of the materials were investigated with Fourier transform infrared spectroscopy in attenuated total reflection mode. A minimum silicone rubber spreading of 3.24% was achieved after 2 s curing time, indicating the potential usability of the presented high-speed-curing process for the direct fabrication of thermal-curing silicone rubbers. PMID:26967063

  3. Attribute-based point cloud visualization in support of 3-D classification

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Otepka, Johannes; Kania, Adam

    2016-04-01

    Despite the rich information available in LIDAR point attributes through full waveform recording, radiometric calibration and advanced texture metrics, LIDAR-based classification is mostly done in the raster domain. Point-based analyses such as noise removal or terrain filtering are often carried out without visual investigation of the point cloud attributes used. This is because point cloud visualization software usually handles only a limited number of pre-defined point attributes and only allows colorizing the point cloud with one of these at a time. Meanwhile, point cloud classification is rapidly evolving and uses not only individual attributes but combinations of them. In order to understand input data and output results better, more advanced methods for visualization are needed. Here we propose an algorithm of the OPALS software package that handles visualization of the point cloud together with its attributes. The algorithm is based on the .odm (OPALS data manager) file format, which efficiently handles a large number of pre-defined point attributes and also allows the user to generate new ones. Attributes of interest can be visualized individually by applying predefined or user-generated palettes in a simple .xml format. The colours of the palette are assigned to the points by setting the respective Red, Green and Blue attributes of each point to the colour pre-defined by the palette for the corresponding attribute value. The algorithm handles scaling and histogram equalization based on the distribution of the point attribute to be considered. Additionally, combinations of attributes can be visualized based on RGB colour mixing. The output dataset can be in any standard format where RGB attributes are supported and visualized with conventional point cloud viewing software. Viewing the point cloud together with its attributes allows efficient selection of filter settings and classification parameters. For already classified point clouds, a large
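
    The essence of the colorization step, mapping a scalar point attribute through a palette into per-point RGB values, can be sketched outside OPALS with a few lines of NumPy and Matplotlib. This is illustrative only; the .odm and .xml palette machinery of OPALS is not reproduced here.

```python
# Sketch: colorize a point cloud by one of its scalar attributes using a
# Matplotlib colormap, with optional histogram equalization of the attribute.
# Illustrative stand-in for a palette-based workflow like the one described.
import numpy as np
import matplotlib.pyplot as plt

def attribute_to_rgb(values, cmap_name="viridis", equalize=True):
    v = np.asarray(values, dtype=float)
    if equalize:
        # Histogram equalization: replace each value by its empirical quantile.
        v = np.argsort(np.argsort(v)) / max(len(v) - 1, 1)
    else:
        v = (v - v.min()) / max(np.ptp(v), 1e-12)
    rgba = plt.get_cmap(cmap_name)(v)
    return (rgba[:, :3] * 255).astype(np.uint8)   # per-point Red, Green, Blue

if __name__ == "__main__":
    amplitude = np.random.gamma(2.0, 10.0, size=100000)   # e.g. echo amplitude
    rgb = attribute_to_rgb(amplitude)
    print(rgb.shape, rgb.dtype)   # (100000, 3) uint8, ready to attach to the points
```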

  4. a Semi-Automated Point Cloud Processing Methodology for 3d Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Kıvılcım, C. Ö.; Duran, Z.

    2016-06-01

    The preliminary phase in any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. On the other hand, conventional measurement techniques require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. Over the past two decades, the widespread use of laser scanning and digital photogrammetry has significantly altered the heritage documentation process. Furthermore, advances in these technologies have enabled robust data collection and reduced user workload for generating various levels of products, from single buildings to expansive cityscapes. More recently, the use of procedural modelling methods and BIM-relevant applications for historic building documentation has become an active area of research; however, fully automated systems for cultural heritage documentation remain an open problem. In this paper, we present a semi-automated methodology for 3D façade modelling of cultural heritage assets based on parametric and procedural modelling techniques, using airborne and terrestrial laser scanning data. We present the contribution of our methodology, which we implemented in an open-source software environment, using the example project of a 16th-century early classical era Ottoman structure, Sinan the Architect's Şehzade Mosque in Istanbul, Turkey.

  5. Optimizing the rotating point spread function by SLM aided spiral phase modulation

    NASA Astrophysics Data System (ADS)

    Baránek, M.; Bouchal, Z.

    2014-12-01

    We demonstrate the vortex point spread function (PSF), whose shape and rotation sensitivity to defocusing can be controlled by a phase-only modulation implemented in the spatial or frequency domain. Rotational effects are studied in detail as a result of spiral modulation carried out in discrete radial and azimuthal sections with different topological charges. As the main result, a direct connection between the properties of the PSF and the parameters of the spiral mask is found and subsequently used for optimal shaping of the PSF and control of its defocusing rotation rate. Experiments on the PSF rotation verify good agreement with theoretical predictions and demonstrate the potential of the method for applications in microscopy, particle tracking and 3D imaging.
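
    A minimal Fourier-optics sketch of the underlying idea (generic, with assumed parameters; not the authors' SLM setup): apply a spiral (vortex) phase of topological charge ell plus a quadratic defocus term to a circular pupil and take the squared magnitude of its Fourier transform to obtain the PSF; changing the defocus then reshapes and rotates the vortex PSF.

```python
# Sketch: PSF of a circular pupil carrying a spiral (vortex) phase of
# topological charge ell, with an optional quadratic defocus term. Generic
# Fourier-optics illustration; all parameters are arbitrary assumptions.
import numpy as np

def vortex_psf(n=512, aperture_radius=0.25, ell=2, defocus_waves=0.0):
    x = np.linspace(-0.5, 0.5, n)
    X, Y = np.meshgrid(x, x)
    R = np.hypot(X, Y)
    phi = np.arctan2(Y, X)
    pupil = (R <= aperture_radius).astype(float)
    rho = np.where(pupil > 0, R / aperture_radius, 0.0)   # normalized pupil radius
    phase = ell * phi + 2 * np.pi * defocus_waves * (2 * rho**2 - 1)
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
    return psf / psf.sum()

if __name__ == "__main__":
    for dz in (-1.0, 0.0, 1.0):              # defocus in waves
        psf = vortex_psf(defocus_waves=dz)
        peak = np.unravel_index(np.argmax(psf), psf.shape)
        print(f"defocus {dz:+.1f} waves -> brightest pixel at {peak}")
```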

  6. Point spread functions for the Solar optical telescope onboard Hinode

    NASA Astrophysics Data System (ADS)

    Wedemeyer-Böhm, S.

    2008-08-01

    Aims: We investigate the combined point spread function (PSF) of the Broadband Filter Imager (BFI) and the Solar Optical Telescope (SOT) onboard the Hinode spacecraft. Methods: Observations of the Mercury transit from November 2006 and the solar eclipse(s) from 2007 are used to determine the PSFs of SOT for the blue, green, and red continuum channels of the BFI. For each channel, we calculate large grids of theoretical point spread functions by convolution of the ideal diffraction-limited PSF and Voigt profiles. These PSFs are applied to artificial images of an eclipse and a Mercury transit. The comparison of the resulting artificial intensity profiles across the terminator and the corresponding observed profiles yields a quality measure for each case. The optimum PSF for each observed image is indicated by the best fit. Results: The observed images of the Mercury transit and the eclipses exhibit a clear proportional relation between the residual intensity and the overall light level in the telescope. In addition, there is an anisotropic stray-light contribution. These two factors make it very difficult to pin down a single unique PSF that can account for all observational conditions. Nevertheless, the range of possible PSF models can be limited by using additional constraints like the pre-flight measurements of the Strehl ratio. Conclusions: The BFI/SOT operate close to the diffraction limit and have only a rather small stray-light contribution. The FWHM of the PSF is broadened by only ~1% with respect to the diffraction-limited case, while the overall Strehl ratio is ~0.8. In view of the large variations - best seen in the residual intensities of eclipse images - and the dependence on the overall light level and position in the FOV, a range of PSFs should be considered instead of a single PSF per wavelength. The individual PSFs of that range allow then the determination of error margins for the quantity under investigation. Nevertheless, the stray

  7. Evaluating the Potential of Rtk-Uav for Automatic Point Cloud Generation in 3d Rapid Mapping

    NASA Astrophysics Data System (ADS)

    Fazeli, H.; Samadzadegan, F.; Dadrasjavan, F.

    2016-06-01

    During disaster and emergency situations, 3D geospatial data can provide essential information for decision support systems. The utilization of geospatial data using digital surface models as a basic reference is mandatory to provide an accurate, quick emergency response in so-called rapid mapping activities. The trade-off between accuracy requirements and time restrictions is considered critical in these situations. UAVs as alternative platforms for 3D point cloud acquisition offer potential because of their flexibility and practicality combined with low-cost implementation. Moreover, the high-resolution data collected from UAV platforms have the capability to provide a quick overview of the disaster area. The target of this paper is to experiment with and evaluate a low-cost system for the generation of point clouds using imagery collected from a low-altitude small autonomous UAV equipped with a customized single-frequency RTK module. A customized multi-rotor platform is used in this study. Moreover, electronic hardware is used to simplify user interaction with the UAV, handling RTK-GPS/camera synchronization; besides the synchronization, a lever-arm calibration is performed. The platform is equipped with a Sony NEX-5N, a 16.1-megapixel camera, as the imaging sensor. The lens attached to the camera is a ZEISS prime lens with an F1.8 maximum aperture and a 24 mm focal length to deliver outstanding images. All necessary calibrations are performed and the flight is carried out over the area of interest at a flight height of 120 m above ground level, resulting in a 2.38 cm GSD. Prior to image acquisition, 12 signalized GCPs and 20 check points were distributed in the study area and measured with dual-frequency GPS via the RTK technique, with a horizontal accuracy of σ = 1.5 cm and a vertical accuracy of σ = 2.3 cm. The results of direct georeferencing are compared to these points, and the experimental results show that a decimeter accuracy level for the 3D point cloud is achievable with the proposed system, which is suitable
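
    As a quick plausibility check of the reported GSD (using a nominal pixel pitch of about 4.8 µm for a 16 MP APS-C sensor, which is an assumption since the abstract does not state it), the usual relation GSD = pixel pitch × flying height / focal length gives:

```python
# Back-of-the-envelope GSD check. The 4.8 µm pixel pitch is an assumed nominal
# value for a 16 MP APS-C sensor; flying height and focal length are from the text.
pixel_pitch_m = 4.8e-6      # assumed sensor pixel pitch
flying_height_m = 120.0     # stated flight height above ground
focal_length_m = 0.024      # stated 24 mm lens

gsd_m = pixel_pitch_m * flying_height_m / focal_length_m
print(f"GSD ≈ {gsd_m * 100:.2f} cm per pixel")   # ≈ 2.4 cm, consistent with 2.38 cm
```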

  8. Examination about Influence for Precision of 3d Image Measurement from the Ground Control Point Measurement and Surface Matching

    NASA Astrophysics Data System (ADS)

    Anai, T.; Kochi, N.; Yamada, M.; Sasaki, T.; Otani, H.; Sasaki, D.; Nishimura, S.; Kimoto, K.; Yasui, N.

    2015-05-01

    As 3D image measurement software has become widely used with the recent development of computer-vision technology, 3D measurement from images has expanded its field of application from desktop objects to topographic surveys of large geographical areas. In particular, the orientation step, which used to be a complicated process in image measurement, can now be performed automatically by simply taking many pictures around the object. In the case of a fully textured object, the 3D measurement of surface features is carried out fully automatically from the oriented images, which greatly facilitates the acquisition of dense 3D point clouds from images with high precision. Against this background, for small and middle-sized objects, we now provide all-around 3D measurement with a single commercially available digital camera, and we have also developed technology for topographic measurement with airborne images taken by a small UAV [1~5]. In the present study, for small objects, we examine the accuracy of surface measurement (matching) using experimental data. For topographic measurement, we examine the influence of the GCP distribution on accuracy using experimental data. In addition, we examine the differences in the analytical results of several 3D image measurement software packages. This document reviews the processing flow of orientation and 3D measurement in each software package and explains the features of each. For the verification of the precision of stereo matching, we measured a test plane and a test sphere of known form and assessed the results. For the topographic measurement, we used airborne image data photographed at the test field in Yadorigi of Matsuda City, Kanagawa Prefecture, Japan. We constructed ground control points measured by RTK-GPS and total station, and we show the results of the analysis made

  9. LiDAR Segmentation using Suitable Seed Points for 3D Building Extraction

    NASA Astrophysics Data System (ADS)

    Abdullah, S. M.; Awrangjeb, M.; Lu, G.

    2014-08-01

    Effective building detection and roof reconstruction are in high demand in the remote sensing research community. In this paper, we present a new automatic LiDAR point cloud segmentation method using suitable seed points for building detection and roof plane extraction. Firstly, the LiDAR point cloud is separated into "ground" and "non-ground" points based on the analysis of a DEM with a height threshold. Each non-ground point is marked as coplanar or non-coplanar based on a coplanarity analysis. Starting from the maximum LiDAR point height and moving towards the minimum, all the LiDAR points on each height level are extracted and separated into several groups based on 2D distance. From each group, lines are extracted, and the coplanar point nearest to the midpoint of each line is considered a seed point. This seed point and its neighbouring points are used to generate the plane equation. The plane is grown in a region-growing fashion until no new points can be added. A robust rule-based tree removal method, applying four different rules, is subsequently used to remove planar segments on trees. Finally, the boundary of each object is extracted from the segmented LiDAR point cloud. The method is evaluated on six different data sets consisting of hilly and densely vegetated areas. The experimental results indicate that the proposed method offers high building detection and roof plane extraction rates compared to a recently proposed method.

  10. Point spread functions and deconvolution of ultrasonic images.

    PubMed

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation. PMID:25768819

  11. On point spread function modelling: towards optimal interpolation

    NASA Astrophysics Data System (ADS)

    Bergé, Joel; Price, Sedona; Amara, Adam; Rhodes, Jason

    2012-01-01

    Point spread function (PSF) modelling is a central part of any astronomy data analysis relying on measuring the shapes of objects. It is especially crucial for weak gravitational lensing, in order to beat down systematics and allow one to reach the full potential of weak lensing in measuring dark energy. A PSF modelling pipeline is made of two main steps: the first one is to assess its shape on stars, and the second is to interpolate it at any desired position (usually galaxies). We focus on the second part, and compare different interpolation schemes, including polynomial interpolation, radial basis functions, Delaunay triangulation and Kriging. For that purpose, we develop simulations of PSF fields, in which stars are built from a set of basis functions defined from a principal components analysis of a real ground-based image. We find that Kriging gives the most reliable interpolation, significantly better than the traditionally used polynomial interpolation. We also note that although a Kriging interpolation on individual images is enough to control systematics at the level necessary for current weak lensing surveys, more elaborate techniques will have to be developed to reach future ambitious surveys' requirements.

  12. Evaluation of Partially Overlapping 3D Point Cloud's Registration by using ICP variant and CloudCompare.

    NASA Astrophysics Data System (ADS)

    Rajendra, Y. D.; Mehrotra, S. C.; Kale, K. V.; Manza, R. R.; Dhumal, R. K.; Nagne, A. D.; Vibhute, A. D.

    2014-11-01

    Terrestrial Laser Scanners (TLS) are used to obtain dense point samples of a large object's surface. TLS is a new and efficient method for digitizing large objects or scenes. The collected point samples come in different formats and coordinate systems, and several scans are required to cover a large object such as a heritage site. Point cloud registration is therefore an important task for bringing the different scans of a whole 3D model into one coordinate system. Point clouds can be registered in one of three ways, or a combination of them: target-based, feature-based, and point-cloud-based. In the present study we follow the point-cloud-based registration approach. We collected partially overlapping 3D point cloud data of the Department of Computer Science & IT (DCSIT) building located in Dr. Babasaheb Ambedkar Marathwada University, Aurangabad. To obtain complete point cloud information of the building we took 12 scans: 4 scans for the exterior and 8 scans for the interior façade data collection. Various algorithms are available in the literature, but Iterative Closest Point (ICP) is the most dominant, and several researchers have developed variants of ICP to improve the registration process. The ICP registration algorithm searches for pairs of nearest points in two adjacent scans and calculates the transformation parameters between them, with the advantage that no artificial targets are required for registration. We studied and implemented three variants of the ICP algorithm (brute force, KD-tree, and partial matching) in MATLAB. The results show that the implemented ICP variants give better registration speed and accuracy than the CloudCompare open-source software.
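
    As a point of reference for the brute-force variant discussed above, the following is a minimal ICP sketch (not the authors' MATLAB code): each iteration pairs every source point with its nearest target point and then solves for the rigid transform by SVD. The toy data, tolerances, and function names are assumptions for illustration only.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, n_iter=30, tol=1e-6):
    """Brute-force ICP: iterate nearest-neighbour pairing and rigid alignment."""
    src = source.copy()
    prev_err = np.inf
    for _ in range(n_iter):
        # Nearest target point for every source point (a k-d tree would scale better).
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        R, t = best_rigid_transform(src, target[nn])
        src = src @ R.T + t
        err = d[np.arange(len(src)), nn].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    # Total transform mapping the original source onto the target.
    return best_rigid_transform(source, src)

# Toy check: recover a known small rotation + translation of an identical scan.
rng = np.random.default_rng(1)
target = rng.uniform(-1, 1, (300, 3))
angle = np.radians(5)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
source = (target - np.array([0.1, 0.0, 0.05])) @ R_true   # mis-aligned copy
R_est, t_est = icp(source, target)
print(np.round(R_est, 3), np.round(t_est, 3))
```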

  13. Road Signs Detection and Recognition Utilizing Images and 3d Point Cloud Acquired by Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.

    2016-06-01

    High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is needed that can acquire information about all kinds of road signs automatically and efficiently. Thanks to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to acquire a large number of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, the candidate regions, and camera information, and 3) road sign recognition using a template matching method after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation, and partial occlusion.

  14. Localizing edges for estimating point spread function by removing outlier points

    NASA Astrophysics Data System (ADS)

    Li, Yong; Xu, Liangpeng; Jin, Hongbin; Zou, Junwei

    2016-02-01

    This paper presents an approach to detect sharp edges for estimating the point spread function (PSF) of a lens. One category of PSF estimation methods detects sharp edges from low-resolution (LR) images and estimates the PSF with the detected edges. Existing techniques usually rely on accurate detection of the end points of the profile normal to an edge. In practice, however, it is often very difficult to localize profiles accurately, and inaccurately localized profiles yield a poor PSF estimate. We employ the Random Sample Consensus (RANSAC) algorithm to rule out outlier points: prior knowledge about the pattern shape is incorporated, and edge points lying far away from the pattern shape are removed. The proposed method is tested on images of saddle patterns. Experimental results show that the proposed method can robustly localize sharp edges from LR saddle pattern images and yields accurate PSF estimation.

  15. Incremental Refinement of FAÇADE Models with Attribute Grammar from 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Staat, C.; Mandtler, L.; Plümer, L.

    2016-06-01

    Data acquisition using unmanned aerial vehicles (UAVs) has received more and more attention over the last years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context formal grammars play an important role for the top-down identification and reconstruction of building objects. Up to now, the available approaches expect offline data in order to parse an a priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required, so an incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for automatic 3D building reconstruction. The parser enables model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG); the falsification or rejection of hypotheses is supported as well. The parser can handle and adapt parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated iteratively using transformation rules, and a diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated, given by probability densities as well as architectural patterns. Since normal distributions cannot always be assumed, the derivation of location and shape parameters of building objects is based on a kernel density estimation (KDE). While the level of detail is continuously improved, geometrical, semantic, and topological consistency is ensured.

  16. Point Spread Function (PSF) noise filter strategy for geiger mode LiDAR

    NASA Astrophysics Data System (ADS)

    Smith, O'Neil; Stark, Robert; Smith, Philip; St. Romain, Randall; Blask, Steven

    2013-05-01

    LiDAR is an efficient optical remote sensing technology with applications in geography, forestry, and defense, but its effectiveness is often limited by the signal-to-noise ratio (SNR). Geiger-mode avalanche photodiode (APD) detectors operate above the critical voltage, so a single photoelectron can initiate the current surge, making the device very sensitive. These advantages come at the expense of requiring computationally intensive noise filtering techniques, since noise degrades the imaging system and reduces its capability. Common noise-reduction algorithms have drawbacks such as over-aggressive filtering or decimation in order to improve quality and performance. In recent years there has been growing interest in GPUs (Graphics Processing Units) for their ability to perform massive parallel processing, and in this paper we leverage this capability to reduce processing latency. The Point Spread Function (PSF) filter algorithm is a local spatial measure that has been GPGPU-accelerated. The idea is to use a kernel density estimation technique for point clustering: we associate a local likelihood measure with every point of the input data, capturing the probability that a 3D point is a true target-return photon or noise (background photons, dark current). This process suppresses noise and allows for the detection of outliers. We apply this approach to the LiDAR noise filtering problem, for which we have achieved a speed-up factor of 30-50 times compared to a traditional sequential CPU implementation.
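
    The local likelihood measure can be sketched with a brute-force Gaussian kernel density sum over neighbouring returns, in the spirit of (but not identical to) the paper's GPU-accelerated PSF filter; the bandwidth, threshold and synthetic scene below are illustrative assumptions.

```python
import numpy as np

def local_likelihood(points, bandwidth=0.5):
    """Gaussian kernel density at each point, used as a signal-vs-noise score.

    Brute force, O(N^2): this per-point neighbourhood sum is the kind of
    computation a GPU implementation would parallelise.
    """
    d2 = np.sum((points[:, None, :] - points[None, :, :])**2, axis=2)
    return np.exp(-0.5 * d2 / bandwidth**2).sum(axis=1)

# Toy scene: a dense planar "target" plus uniform background noise returns.
rng = np.random.default_rng(2)
target = np.c_[rng.uniform(0, 5, (500, 2)), rng.normal(10.0, 0.05, 500)]
noise = rng.uniform([0, 0, 0], [5, 5, 20], (500, 3))
cloud = np.vstack([target, noise])

score = local_likelihood(cloud, bandwidth=0.3)
keep = score > np.percentile(score, 60)   # threshold is scene-dependent
print(f"kept {keep.sum()} of {len(cloud)} returns")
```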

  17. Observation of superconductivity induced by a point contact on 3D Dirac semimetal Cd3 As2 crystals

    NASA Astrophysics Data System (ADS)

    Wang, He; Wang, Huichao; Liu, Haiwen; Lu, Hong; Yang, Wuhao; Jia, Shuang; Liu, Xiongjun; Xie, Xincheng; Wei, Jian; Wang, Jian

    The 3D Dirac semimetal state is located at the topological phase boundary and can potentially be driven into other topological phases including topological insulator, topological metal and the long-pursued topological superconductor states. Crystalline Cd3As2 has been proposed and proved to be one of the 3D Dirac semimetals that can survive in atmosphere. By precisely controlled point contact (PC) measurements, we observe exotic superconductivity in the vicinity of the point contact region on the surface of Cd3As2 crystals, which might be induced by the local pressure in the out-of-plane direction from the metallic tip used for the PC. The observation of a zero bias conductance peak (ZBCP) and double conductance peaks (DCPs) symmetric about zero bias further reveals p-wave-like unconventional superconductivity in Cd3As2. Considering the special topological property of the 3D Dirac semimetal, our findings may indicate that the Cd3As2 crystal under certain conditions is a candidate topological superconductor, which is predicted to support Majorana zero modes or gapless Majorana edge/surface modes on the boundary depending on the dimensionality of the material. This work was financially supported by the National Basic Research Program of China (Grant No. 2012CB927400).

  18. Effects of cyclone diameter on performance of 1D3D cyclones: Cut point and slope

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Cyclones are a commonly used air pollution abatement device for separating particulate matter (PM) from air streams in industrial processes. Several mathematical models have been proposed to predict the cut point of cyclones as cyclone diameter varies. The objective of this research was to determine...

  19. Axial magnetic anomalies over slow-spreading ridge segments: insights from numerical 3-D thermal and physical modelling

    NASA Astrophysics Data System (ADS)

    Gac, Sébastien; Dyment, Jérôme; Tisseau, Chantal; Goslin, Jean

    2003-09-01

    The axial magnetic anomaly amplitude along Mid-Atlantic Ridge segments is systematically twice as high at segment ends compared with segment centres. Various processes have been proposed to account for such observations, either directly or indirectly related to the thermal structure of the segments: (1) a shallower Curie isotherm at segment centres, (2) higher Fe-Ti content at segment ends, (3) serpentinized peridotites at segment ends or (4) a combination of these processes. In this paper the contribution of each of these processes to the axial magnetic anomaly amplitude is quantitatively evaluated by performing 3-D numerical modelling of the magnetization distribution and magnetic anomaly over a medium-sized, 50-km-long segment. The magnetization distribution depends on the thermal structure and thermal evolution of the lithosphere. The thermal structure is calculated considering the presence of a permanent hot zone beneath the segment centre. The 'best-fitting' thermal structure is determined by adjusting the parameters (shape, size, depth, etc.) of this hot zone to fit the modelled geophysical outputs (Mantle Bouguer anomaly, maximum earthquake depths and crustal thickness) to the observations. Both the thermoremanent magnetization, acquired during the thermal evolution, and the induced magnetization, which depends on the present thermal structure, are modelled. The resulting magnetic anomalies are then computed and compared with the observed ones. This modelling exercise suggests that, in the case of aligned and slightly offset segments, a combination of higher Fe-Ti content and the presence of serpentinized peridotites at segment ends will produce the observed higher axial magnetic anomaly amplitudes over the segment ends. In the case of greater offsets, the presence of serpentinized peridotites at segment ends is sufficient to account for the observations.

  20. Historical Buildings Models and Their Handling via 3d Survey: from Points Clouds to User-Oriented Hbim

    NASA Astrophysics Data System (ADS)

    Chiabrando, F.; Sammartano, G.; Spanò, A.

    2016-06-01

    This paper retraces some research activities and applications of 3D survey techniques and Building Information Modelling (BIM) in the environment of Cultural Heritage. It describes the diffusion in recent years of the as-built BIM approach in Heritage Assets management, the so-called Built Heritage Information Modelling/Management (BHIMM or HBIM), which is nowadays an important and sustainable perspective in the documentation and administration of historic buildings and structures. The work focuses on the documentation derived from 3D survey techniques, which can be understood as a significant and unavoidable knowledge base for BIM conception and modelling, in the perspective of a coherent and complete management and valorisation of CH. It examines the potential offered by 3D integrated survey techniques to acquire, productively and quite easily, many kinds of 3D information, not only geometrical but also radiometric attributes, helping the recognition, interpretation and characterization of the state of conservation and degradation of architectural elements. From these data, increasingly descriptive models can be derived that correspond to the geometrical complexity of buildings or aggregates in the well-known 5D (3D + time and cost dimensions). Point clouds derived from 3D survey acquisition (aerial and terrestrial photogrammetry, LiDAR and their integration) are reality-based models that can be used in a semi-automatic way to manage, interpret, and moderately simplify the geometrical shapes of historical buildings, which are, as is well known, examples of non-regular and complex geometry, unlike modern constructions with simple and regular geometry. In the paper, some of these issues are addressed and analyzed through experiences regarding the creation and management of HBIM projects on historical heritage at different scales, using different platforms and various workflows. The paper focuses on LiDAR data handling with the aim of managing and extracting geometrical information.

  1. The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design

    NASA Astrophysics Data System (ADS)

    Borrmann, Dorit; Elseberg, Jan; Lingemann, Kai; Nüchter, Andreas

    2011-03-01

    The Hough Transform is a well-known method for detecting parameterized objects. It is the de facto standard for detecting lines and circles in 2-dimensional data sets, but for 3D data it has attracted little attention so far. Even for the 2D case, high computational costs have led to the development of numerous variations of the Hough Transform. In this article we evaluate different variants of the Hough Transform with respect to their applicability to detecting planes in 3D point clouds reliably. Apart from computational costs, the main problem is the representation of the accumulator: usual implementations favor geometrical objects with certain parameters due to uneven sampling of the parameter space. We present a novel accumulator design focused on achieving the same size for each cell and compare it to existing designs.
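
    To make the accumulator issue concrete, here is a minimal, naive 3D Hough transform for planes over a regular (θ, φ, ρ) grid, which exhibits precisely the uneven sampling of normal directions that the article's equal-size-cell accumulator is designed to avoid. The grid resolutions and the synthetic plane are illustrative assumptions, not the authors' code.

```python
import numpy as np

def hough_planes(points, n_theta=45, n_phi=45, n_rho=100):
    """Naive 3D Hough transform for planes: rho = p . n(theta, phi).

    The regular (theta, phi, rho) grid over-represents directions near the poles,
    which is the uneven-sampling problem discussed in the article.
    """
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0, np.pi, n_phi, endpoint=False)
    rho_max = np.linalg.norm(points, axis=1).max()
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=np.int32)
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            n = np.array([np.cos(th) * np.sin(ph), np.sin(th) * np.sin(ph), np.cos(ph)])
            rho = points @ n
            bins = np.clip(((rho + rho_max) / (2 * rho_max) * n_rho).astype(int), 0, n_rho - 1)
            np.add.at(acc[i, j], bins, 1)   # vote for each (theta, phi, rho) cell
    i, j, k = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[i], phis[j], (k + 0.5) / n_rho * 2 * rho_max - rho_max

# Toy cloud: the plane z = 2 with a little vertical noise.
rng = np.random.default_rng(3)
pts = np.c_[rng.uniform(-5, 5, (1000, 2)), 2 + 0.05 * rng.standard_normal(1000)]
theta, phi, rho = hough_planes(pts)
print(f"theta={theta:.2f}, phi={phi:.2f}, rho={rho:.2f}")   # expect phi ~ 0, rho ~ 2
```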

  2. A 3D clustering approach for point clouds to detect and quantify changes at a rock glacier front

    NASA Astrophysics Data System (ADS)

    Micheletti, Natan; Tonini, Marj; Lane, Stuart N.

    2016-04-01

    Terrestrial Laser Scanners (TLS) are extensively used in geomorphology to remotely sense landforms and surfaces of any type and to derive digital elevation models (DEMs). Modern devices are able to collect many millions of points, so that working on the resulting dataset is often troublesome in terms of computational effort. Indeed, it is not unusual for raw point clouds to be filtered prior to DEM creation, so that only a subset of points is retained and the interpolation process becomes less of a burden. Whilst this procedure is in many cases necessary, it implies a considerable loss of valuable information. First, and even without eliminating points, the common interpolation of points to a regular grid causes a loss of potentially useful detail. Second, it inevitably causes the transition from 3D information to only 2.5D data, where each (x,y) pair must have a unique z-value. Vector-based DEMs (e.g. triangulated irregular networks) partially mitigate these issues, but still require a set of parameters to be set and impose a considerable burden in terms of calculation and storage. For these reasons, being able to perform geomorphological research directly on point clouds would be profitable. Here, we propose an approach to identify erosion and deposition patterns on a very active rock glacier front in the Swiss Alps in order to monitor sediment dynamics. The general aim is to set up a semiautomatic method to isolate mass movements using 3D feature identification directly from LiDAR data. An ultra-long-range LiDAR RIEGL VZ-6000 scanner was employed to acquire point clouds during three consecutive summers. In order to isolate single clusters of erosion and deposition we applied the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, previously employed successfully by Tonini and Abellan (2014) in a similar case for rockfall detection. DBSCAN requires two input parameters, strongly influencing the number, shape and size of the detected clusters: the minimum number of points required to form a cluster and the neighbourhood search radius.
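
    A minimal sketch of the clustering step, assuming the change points (locations where the distance between epochs exceeded a detection threshold) have already been extracted; the synthetic coordinates and the eps/min_samples values are placeholders rather than those calibrated in the study, and scikit-learn's DBSCAN stands in for the authors' implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical change points: (x, y, z) locations of significant elevation change,
# grouped here into one deposition lobe, one erosion scar, and isolated noise.
rng = np.random.default_rng(4)
deposit = rng.normal([2.0, 3.0, 1.0], 0.2, (300, 3))
erosion = rng.normal([6.0, 1.0, 0.5], 0.3, (200, 3))
stray = rng.uniform(0, 8, (40, 3))
changes = np.vstack([deposit, erosion, stray])

# eps (neighbourhood radius) and min_samples strongly control the number, shape
# and size of the detected clusters, as the abstract notes.
labels = DBSCAN(eps=0.35, min_samples=10).fit_predict(changes)
for lab in sorted(set(labels)):
    tag = "noise" if lab == -1 else f"cluster {lab}"
    print(tag, (labels == lab).sum(), "points")
```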

  3. WE-F-16A-02: Design, Fabrication, and Validation of a 3D-Printed Proton Filter for Range Spreading

    SciTech Connect

    Remmes, N; Courneyea, L; Corner, S; Beltran, C; Kemp, B; Kruse, J; Herman, M; Stoker, J

    2014-06-15

    Purpose: To design, fabricate and test a 3D-printed filter for proton range spreading in scanned proton beams. The narrow Bragg peak in lower-energy synchrotron-based scanned proton beams can result in longer treatment times for shallow targets due to energy switching time and plan quality degradation due to minimum monitor unit limitations. A filter with variable thicknesses patterned on the same scale as the beam's lateral spot size will widen the Bragg peak. Methods: The filter consists of pyramids dimensioned to have a Gaussian distribution in thickness. The pyramids are 2.5mm wide at the base, 0.6 mm wide at the peak, 5mm tall, and are repeated in a 2.5mm pseudo-hexagonal lattice. Monte Carlo simulations of the filter in a proton beam were run using TOPAS to assess the change in depth profiles and lateral beam profiles. The prototypes were constrained to a 2.5cm diameter disk to allow for micro-CT imaging of promising prototypes. Three different 3D printers were tested. Depth-doses with and without the prototype filter were then measured in a ~70 MeV proton beam using a multilayer ion chamber. Results: The simulation results were consistent with design expectations. Prototypes printed on one printer were clearly unacceptable on visual inspection. Prototypes on a second printer looked acceptable, but the micro-CT image showed unacceptable voids within the pyramids. Prototypes from the third printer appeared acceptable visually and on micro-CT imaging. Depth dose scans using the prototype from the third printer were consistent with simulation results. Bragg peak width increased by about 3x. Conclusions: A prototype 3D-printed pyramid filter for range spreading was successfully designed, fabricated and tested. The filter has greater design flexibility and lower prototyping and production costs compared to traditional ridge filters. Printer and material selection played a large role in the successful development of the filter.

  4. 3D polygonal representation of dense point clouds by triangulation, segmentation, and texture projection

    NASA Astrophysics Data System (ADS)

    Tajbakhsh, Touraj

    2010-02-01

    A basic concern of computer graphics is the modeling and realistic representation of three-dimensional objects. In this paper we present our reconstruction framework, which determines a polygonal surface from a set of dense points such as those typically obtained from laser scanners. We deploy the concept of adaptive blobs to achieve a first volumetric representation of the object. In the next step we estimate a coarse surface using the marching cubes method. We propose a depth-first-search segmentation algorithm traversing a graph representation of the obtained polygonal mesh in order to identify all connected components. A so-called supervised triangulation then maps the coarse surfaces onto the dense point cloud. We optimize the mesh topology using edge exchange operations. For photo-realistic visualization of objects we finally synthesize optimal low-loss textures from available scene captures of different projections. We evaluate our framework on artificial data as well as real sensed data.

  5. 3D Geological Outcrop Characterization: Automatic Detection of 3d Planes (azimuth and Dip) Using LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Anders, K.; Hämmerle, M.; Miernik, G.; Drews, T.; Escalona, A.; Townsend, C.; Höfle, B.

    2016-06-01

    Terrestrial laser scanning constitutes a powerful method in spatial information data acquisition and allows for geological outcrops to be captured with high resolution and accuracy. A crucial aspect for numerous geologic applications is the extraction of rock surface orientations from the data. This paper focuses on the detection of planes in rock surface data by applying a segmentation algorithm directly to a 3D point cloud. Its performance is assessed considering (1) reduced spatial resolution of data and (2) smoothing in the course of data pre-processing. The methodology is tested on simulations of progressively reduced spatial resolution defined by varying point cloud density. Smoothing of the point cloud data is implemented by modifying the neighborhood criteria during normals estimation. The considerable alteration of resulting planes emphasizes the influence of smoothing on the plane detection prior to the actual segmentation. Therefore, the parameter needs to be set in accordance with individual purposes and respective scales of studies. Furthermore, it is concluded that the quality of segmentation results does not decline even when the data volume is significantly reduced down to 10%. The azimuth and dip values of individual segments are determined for planes fit to the points belonging to one segment. Based on these results, azimuth and dip as well as strike character of the surface planes in the outcrop are assessed. Thereby, this paper contributes to a fully automatic and straightforward workflow for a comprehensive geometric description of outcrops in 3D.
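
    Once a planar segment is available, azimuth and dip follow from the orientation of its best-fit normal. The sketch below assumes an x = east, y = north, z = up frame and synthetic data; it is not the authors' workflow, only the standard normal-to-attitude conversion.

```python
import numpy as np

def plane_attitude(points):
    """Dip direction (azimuth in degrees from north) and dip angle of a best-fit plane."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    if n[2] < 0:                       # use the upward-pointing normal
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    azimuth = np.degrees(np.arctan2(n[0], n[1])) % 360.0   # x = east, y = north assumed
    return azimuth, dip

# Synthetic bedding plane dipping 30 degrees towards the east (dip direction 090).
rng = np.random.default_rng(5)
xy = rng.uniform(0, 10, (500, 2))
z = -np.tan(np.radians(30)) * xy[:, 0] + 0.02 * rng.standard_normal(500)
print(plane_attitude(np.c_[xy, z]))   # ~ (90.0, 30.0)
```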

  6. Geometric and topological feature extraction of linear segments from 2D cross-section data of 3D point clouds

    NASA Astrophysics Data System (ADS)

    Ramamurthy, Rajesh; Harding, Kevin; Du, Xiaoming; Lucas, Vincent; Liao, Yi; Paul, Ratnadeep; Jia, Tao

    2015-05-01

    Optical measurement techniques are often employed to digitally capture three dimensional shapes of components. The digital data density output from these probes range from a few discrete points to exceeding millions of points in the point cloud. The point cloud taken as a whole represents a discretized measurement of the actual 3D shape of the surface of the component inspected to the measurement resolution of the sensor. Embedded within the measurement are the various features of the part that make up its overall shape. Part designers are often interested in the feature information since those relate directly to part function and to the analytical models used to develop the part design. Furthermore, tolerances are added to these dimensional features, making their extraction a requirement for the manufacturing quality plan of the product. The task of "extracting" these design features from the point cloud is a post processing task. Due to measurement repeatability and cycle time requirements often automated feature extraction from measurement data is required. The presence of non-ideal features such as high frequency optical noise and surface roughness can significantly complicate this feature extraction process. This research describes a robust process for extracting linear and arc segments from general 2D point clouds, to a prescribed tolerance. The feature extraction process generates the topology, specifically the number of linear and arc segments, and the geometry equations of the linear and arc segments automatically from the input 2D point clouds. This general feature extraction methodology has been employed as an integral part of the automated post processing algorithms of 3D data of fine features.

  7. 3-D earthquake surface displacements from differencing pre- and post-event LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Krishnan, A. K.; Nissen, E.; Arrowsmith, R.; Saripalli, S.

    2012-12-01

    The explosion in aerial LiDAR surveying along active faults across the western United States and elsewhere provides a high-resolution topographic baseline against which to compare repeat LiDAR datasets collected after future earthquakes. We present a new method for determining 3-D coseismic surface displacements and rotations by differencing pre- and post-earthquake LiDAR point clouds using an adaptation of the Iterative Closest Point (ICP) algorithm, a point set registration technique widely used in medical imaging, computer vision and graphics. There is no need for any gridding or smoothing of the LiDAR data and the method works well even with large mismatches in the density of the two point clouds. To explore the method's performance, we simulate pre- and post-event point clouds using real ("B4") LiDAR data on the southern San Andreas Fault perturbed with displacements of known magnitude. For input point clouds with ~2 points per square meter, we are able to reproduce displacements with a 50 m grid spacing and with horizontal and vertical accuracies of ~20 cm and ~4 cm. In the future, finer grids and improved precisions should be possible with higher shot densities and better survey geo-referencing. By capturing near-fault deformation in 3-D, LiDAR differencing with ICP will complement satellite-based techniques such as InSAR which map only certain components of the surface deformation and which often break down close to surface faulting or in areas of dense vegetation. It will be especially useful for mapping shallow fault slip and rupture zone deformation, helping inform paleoseismic studies and better constrain fault zone rheology. Because ICP can image rotations directly, the technique will also help resolve the detailed kinematics of distributed zones of faulting where block rotations may be common.

  8. Design point variation of 3-D loss and deviation for axial compressor middle stages

    NASA Technical Reports Server (NTRS)

    Roberts, William B.; Serovy, George K.; Sandercock, Donald M.

    1988-01-01

    The available data on middle-stage research compressors operating near design point are used to derive simple empirical models for the spanwise variation of three-dimensional viscous loss coefficients for middle-stage axial compressor blading. The models make it possible to quickly estimate the total loss and deviation across the blade span when the three-dimensional distribution is superimposed on the two-dimensional variation calculated for each blade element. It is noted that extrapolated estimates should be used with caution since the correlations have been derived from a limited data base.

  9. An exact solution for the 3D MHD stagnation-point flow of a micropolar fluid

    NASA Astrophysics Data System (ADS)

    Borrelli, A.; Giantesio, G.; Patria, M. C.

    2015-01-01

    The influence of a non-uniform external magnetic field on the steady three dimensional stagnation-point flow of a micropolar fluid over a rigid uncharged dielectric at rest is studied. The total magnetic field is parallel to the velocity at infinity. It is proved that this flow is possible only in the axisymmetric case. The governing nonlinear partial differential equations are reduced to a system of ordinary differential equations by a similarity transformation, before being solved numerically. The effects of the governing parameters on the fluid flow and on the magnetic field are illustrated graphically and discussed.

  10. Real-time estimation of FLE statistics for 3-D tracking with point-based registration.

    PubMed

    Wiles, Andrew D; Peters, Terry M

    2009-09-01

    Target registration error (TRE) has become a widely accepted error metric in point-based registration since the error metric was introduced in the 1990s. It is particularly prominent in image-guided surgery (IGS) applications where point-based registration is used in both image registration and optical tracking. In point-based registration, the TRE is a function of the fiducial marker geometry, location of the target and the fiducial localizer error (FLE). While the first two items are easily obtained, the FLE is usually estimated using an a priori technique and applied without any knowledge of real-time information. However, if the FLE can be estimated in real-time, particularly as it pertains to optical tracking, then the TRE can be estimated more robustly. In this paper, a method is presented where the FLE statistics are estimated from the latest measurement of the fiducial registration error (FRE) statistics. The solution is obtained by solving a linear system of equations of the form Ax=b for each marker at each time frame where x are the six independent FLE covariance parameters and b are the six independent estimated FRE covariance parameters. The A matrix is only a function of the tool geometry and hence the inverse of the matrix can be computed a priori and used at each instant in which the FLE estimation is required, hence minimizing the level of computation at each frame. When using a good estimate of the FRE statistics, Monte Carlo simulations demonstrate that the root mean square of the FLE can be computed within a range of 70-90 μm. Robust estimation of the TRE for an optically tracked tool, using a good estimate of the FLE, will provide two enhancements in IGS. First, better patient to image registration will be obtained by using the TRE of the optical tool as a weighting factor of point-based registration used to map the patient to image space. Second, the directionality of the TRE can be relayed back to the surgeon giving the surgeon the option
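
    A minimal sketch of the per-frame estimation step described above: because A depends only on the tool geometry, its inverse can be computed once and reused. The 6x6 matrix below is a well-conditioned placeholder (the real entries come from the fiducial geometry, which the abstract does not give), and the FRE values are illustrative.

```python
import numpy as np

# Hypothetical 6x6 geometry matrix A mapping the six independent FLE covariance
# parameters x to the six estimated FRE covariance parameters b (b = A x).
rng = np.random.default_rng(6)
A = np.eye(6) + 0.1 * rng.standard_normal((6, 6))   # placeholder, not a real tool geometry

# Precompute the inverse once, since the tool geometry is fixed.
A_inv = np.linalg.inv(A)

def estimate_fle(fre_cov_params):
    """Per-frame FLE covariance parameters from the latest FRE covariance estimate."""
    return A_inv @ fre_cov_params

b = np.array([1.2e-3, 1.1e-3, 0.9e-3, 0.0, 0.0, 0.0])   # illustrative FRE stats (mm^2)
print(estimate_fle(b))
```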

  11. 3D granulometry: grain-scale shape and size distribution from point cloud dataset of river environments

    NASA Astrophysics Data System (ADS)

    Steer, Philippe; Lague, Dimitri; Gourdon, Aurélie; Croissant, Thomas; Crave, Alain

    2016-04-01

    The grain-scale morphology of river sediments and their size distribution are important factors controlling the efficiency of fluvial erosion and transport. In turn, constraining the spatial evolution of these two metrics offers deep insights into the dynamics of river erosion and sediment transport from hillslopes to the sea. However, the size distribution of river sediments is generally assessed using statistically biased field measurements, and determining the grain-scale shape of river sediments remains a real challenge in geomorphology. Here we determine, with new methodological approaches based on the segmentation and geomorphological fitting of 3D point cloud datasets, the size distribution and grain-scale shape of sediments located in river environments. Point cloud segmentation is performed using either machine-learning algorithms or geometrical criteria, such as local plane fitting or curvature analysis. Once the grains are individualized into several sub-clouds, each grain-scale morphology is determined using a 3D geometrical fitting algorithm applied to the sub-cloud. Although different geometrical models could be conceived and tested, only ellipsoidal models were used in this study. A checking phase is then performed to remove grains whose best-fitting model has a low level of confidence. The main benefits of this automatic method are that it provides 1) an unbiased estimate of the grain-size distribution over a large range of scales, from centimeters to tens of meters; 2) access to a very large number of data, limited only by the number of grains in the point-cloud dataset; and 3) access to the 3D morphology of grains, in turn allowing new metrics characterizing the size and shape of grains to be developed. The main limit of this method is that it can only detect grains with a characteristic size greater than the resolution of the point cloud. This new 3D granulometric method is then applied to river terraces both in the Poerua catchment in New-Zealand and
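
    As a simplified stand-in for the ellipsoidal fitting described above, the principal axes of one segmented grain can be approximated from the covariance of its sub-cloud: for a uniformly filled ellipsoid the variance along a principal axis is a²/5, where a is the semi-axis. The synthetic grain and the scale factor below are assumptions for illustration, not the authors' fitting algorithm.

```python
import numpy as np

def grain_axes(sub_cloud):
    """Approximate ellipsoid semi-axes (long, intermediate, short) of one grain via PCA.

    Uses var = a^2 / 5, valid for points uniformly filling an ellipsoid; a surface-only
    scan would need a different scale factor or a true ellipsoid fit.
    """
    centred = sub_cloud - sub_cloud.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centred.T))    # ascending order
    return np.sqrt(5.0 * eigvals[::-1])

# Synthetic "grain": points uniformly filling an ellipsoid with semi-axes 4, 2, 1 (cm).
rng = np.random.default_rng(7)
u = rng.standard_normal((2000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)          # random directions
r = rng.uniform(0, 1, (2000, 1)) ** (1 / 3)            # uniform radius inside unit ball
grain = u * r * np.array([4.0, 2.0, 1.0])
print(np.round(grain_axes(grain), 2))                  # ~ [4.0, 2.0, 1.0]
```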

  12. Simulations of 3D Magnetic Merging: Resistive Scalings for Null Point and QSL Reconnection

    NASA Astrophysics Data System (ADS)

    Effenberger, Frederic; Craig, I. J. D.

    2016-01-01

    Starting from an exact, steady-state, force-free solution of the magnetohydrodynamic (MHD) equations, we investigate how resistive current layers are induced by perturbing line-tied three-dimensional magnetic equilibria. This is achieved by the superposition of a weak perturbation field in the domain, in contrast to studies where the boundary is driven by slow motions, like those present in photospheric active regions. Our aim is to quantify how the current structures are altered by the contribution of so-called quasi-separatrix layers (QSLs) as the null point is shifted outside the computational domain. Previous studies based on magneto-frictional relaxation have indicated that despite the severe field line gradients of the QSL, the presence of a null is vital in maintaining fast reconnection. Here, we explore this notion using highly resolved simulations of the full MHD evolution. We show that for the null-point configuration, the resistive scaling of the peak current density is close to J ∼ η^{-1}, while the scaling is much weaker, i.e. J ∼ η^{-0.4}, when only the QSL connectivity gradients provide a site for the current accumulation.

  13. Hinode observations and 3D magnetic structure of an X-ray bright point

    NASA Astrophysics Data System (ADS)

    Alexander, C. E.; Del Zanna, G.; Maclean, R. C.

    2011-02-01

    Aims: We present complete Hinode Solar Optical Telescope (SOT), X-Ray Telescope (XRT) and EUV Imaging Spectrometer (EIS) observations of an X-ray bright point (XBP) observed on 10-11 October 2007 over its entire lifetime (~12 h). We aim to show how the measured plasma parameters of the XBP change over time and also what kind of similarities the X-ray emission has to a potential magnetic field model. Methods: Information from all three instruments on-board Hinode was used to study its entire evolution. XRT data was used to investigate the structure of the bright point and to measure the X-ray emission. The EIS instrument was used to measure various plasma parameters over the entire lifetime of the XBP. Lastly, the SOT was used to measure the magnetic field strength and provide a basis for potential field extrapolations of the photospheric fields to be made. These were performed and then compared to the observed coronal features. Results: The XBP measured ~15″ in size and was found to be formed directly above an area of merging and cancelling magnetic flux on the photosphere. A good correlation between the rate of X-ray emission and decrease in total magnetic flux was found. The magnetic fragments of the XBP were found to vary on very short timescales (minutes), however the global quasi-bipolar structure remained throughout the lifetime of the XBP. The potential field extrapolations were a good visual fit to the observed coronal loops in most cases, meaning that the magnetic field was not too far from a potential state. Electron density measurements were obtained using a line ratio of Fe XII and the average density was found to be 4.95 × 10⁹ cm⁻³ with the volumetric plasma filling factor calculated to have an average value of 0.04. Emission measure loci plots were then used to infer a steady temperature of log Te [K] ~ 6.1. The calculated Fe XII Doppler shifts show velocity changes in and around the bright point of ±15 km s⁻¹ which are observed to change

  14. Interactive PDF files with embedded 3D designs as support material to study the 32 crystallographic point groups

    NASA Astrophysics Data System (ADS)

    Arribas, Victor; Casas, Lluís; Estop, Eugènia; Labrador, Manuel

    2014-01-01

    Crystallography and X-ray diffraction techniques are essential topics in geosciences and other solid-state sciences. Their fundamentals, which include point symmetry groups, are taught in the corresponding university courses. In-depth meaningful learning of symmetry concepts is difficult and requires capacity for abstraction and spatial vision. Traditionally, wooden crystallographic models are used as support material. In this paper, we describe a new interactive tool, freely available, inspired in such models. Thirty-two PDF files containing embedded 3D models have been created. Each file illustrates a point symmetry group and can be used to teach/learn essential symmetry concepts and the International Hermann-Mauguin notation of point symmetry groups. Most interactive computer-aided tools devoted to symmetry deal with molecular symmetry and disregard crystal symmetry so we have developed a tool that fills the existing gap.

  15. Absence of Critical Points of Solutions to the Helmholtz Equation in 3D

    NASA Astrophysics Data System (ADS)

    Alberti, Giovanni S.

    2016-05-01

    The focus of this paper is to show the absence of critical points for solutions to the Helmholtz equation in a bounded domain Ω ⊂ ℝ³, given by div(a ∇u_ω^g) − ω q u_ω^g = 0 in Ω, with u_ω^g = g on ∂Ω. We prove that for an admissible g there exists a finite set of frequencies K in a given interval and an open cover Ω̄ = ∪_{ω∈K} Ω_ω such that |∇u_ω^g(x)| > 0 for every ω ∈ K and x ∈ Ω_ω. The set K is explicitly constructed. If the spectrum of this problem is simple, which is true for a generic domain Ω, the admissibility condition on g is a generic property.

  16. Well log analysis to assist the interpretation of 3-D seismic data at Milne Point, north slope of Alaska

    USGS Publications Warehouse

    Lee, Myung W.

    2005-01-01

    In order to assess the resource potential of gas hydrate deposits in the North Slope of Alaska, 3-D seismic and well data at Milne Point were obtained from BP Exploration (Alaska), Inc. The well-log analysis has three primary purposes: (1) Estimate gas hydrate or gas saturations from the well logs; (2) predict P-wave velocity where there is no measured P-wave velocity in order to generate synthetic seismograms; and (3) edit P-wave velocities where degraded borehole conditions, such as washouts, affected the P-wave measurement significantly. Edited/predicted P-wave velocities were needed to map the gas-hydrate-bearing horizons in the complexly faulted upper part of 3-D seismic volume. The estimated gas-hydrate/gas saturations from the well logs were used to relate to seismic attributes in order to map regional distribution of gas hydrate inside the 3-D seismic grid. The P-wave velocities were predicted using the modified Biot-Gassmann theory, herein referred to as BGTL, with gas-hydrate saturations estimated from the resistivity logs, porosity, and clay volume content. The effect of gas on velocities was modeled using the classical Biot-Gassman theory (BGT) with parameters estimated from BGTL.

  17. Comparison of 3D point clouds obtained by photogrammetric UAVs and TLS to determine the attitude of dolerite outcrops discontinuities.

    NASA Astrophysics Data System (ADS)

    Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria

    2015-04-01

    Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete in geometric quality with TLS point clouds, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the point clouds generated by these two technologies, in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). To avoid difficulties of access and to guarantee safe data survey conditions, this fundamental step in geological/geotechnical studies applied to the extractive industry and engineering works has to be replaced by a more expeditious and reliable methodology. Such a methodology will allow the needs of rock mass evaluation to be answered in a clearer and more direct way, by mapping the structures present, which will considerably reduce the associated risks (investment, structure dimensioning, safety, etc.). A case study of a dolerite outcrop located in the center of Portugal (the outcrop is situated in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded in Jurassic sandstones) is used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrops. Although these are comparable to the manually extracted parameters, their quality is inferior to that of the parameters extracted from the TLS point cloud.

  18. What's the Point of a Raster? Advantages of 3D Point Cloud Processing over Raster Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) output natively 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy, spatial resolution and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated to topographic change measurements and are more suitable to study vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare Point Cloud based and Raster based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimate directly from point clouds) and the interaction of vegetation/hydraulics and sedimentation in salt marshes. These workflows use 2 recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2) now implemented in the open source software CloudCompare.

  19. The effect of load on torques in point-to-point arm movements: a 3D model.

    PubMed

    Tibold, Robert; Laczko, Jozsef

    2012-01-01

    A dynamic, 3-dimensional model was developed to simulate slightly restricted (pronation-supination was not allowed) point-to-point movements of the upper limb under different external loads, which were modeled using 3 objects of distinct masses held in the hand. The model considered structural and biomechanical properties of the arm and measured coordinates of joint positions. The model predicted muscle torques generated by muscles and needed to produce the measured rotations in the shoulder and elbow joints. The effect of different object masses on torque profiles, magnitudes, and directions were studied. Correlation analysis has shown that torque profiles in the shoulder and elbow joints are load invariant. The shape of the torque magnitude-time curve is load invariant but it is scaled with the mass of the load. Objects with larger masses are associated with a lower deflection of the elbow torque with respect to the sagittal plane. Torque direction-time curve is load invariant scaled with the mass of the load. The authors propose that the load invariance of the torque magnitude-time curve and torque direction-time curve holds for object transporting arm movements not restricted to a plane. PMID:22938084

  20. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    PubMed Central

    Dorninger, Peter; Pfeifer, Norbert

    2008-01-01

    Three dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are either based on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling require, generally, a high degree of human interaction and most automated approaches described in literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects.

  1. Investigating the usage of point spread functions in point source and microsphere localization

    NASA Astrophysics Data System (ADS)

    Chao, Jerry; Ram, Sripad; Ward, E. Sally; Ober, Raimund J.

    2016-03-01

    Using a point spread function (PSF) to localize a point-like object, such as a fluorescent molecule or microsphere, represents a common task in single molecule microscopy image data analysis. The localization may differ in purpose depending on the application or experiment, but a unifying theme is the importance of being able to closely recover the true location of the point-like object with high accuracy. We present two simulation studies, both relating to the performance of object localization via the maximum likelihood fitting of a PSF to the object's image. In the first study, we investigate the integration of the PSF over an image pixel, which represents a critical part of the localization algorithm. Specifically, we explore how the fineness of the integration affects how well a point source can be localized, and find the use of too coarse a step size to produce location estimates that are far from the true location, especially when the images are acquired at relatively low magnifications. We also propose a method for selecting an appropriate step size. In the second study, we investigate the suitability of the common practice of using a PSF to localize a microsphere, despite the mismatch between the microsphere's image and the fitted PSF. Using criteria based on the standard errors of the mean and variance, we find the method suitable for microspheres up to 1 μm and 100 nm in diameter, when the localization is performed, respectively, with and without the simultaneous estimation of the width of the PSF.
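
    The effect of the integration step size can be reproduced with a toy model: integrate a PSF over a pixel with an n x n midpoint rule and compare against a fine reference. A symmetric Gaussian stands in for the actual PSF model, and the pixel size, source offset and width are arbitrary assumptions, not the study's imaging parameters.

```python
import numpy as np

def gaussian_psf(x, y, sigma=0.8):
    """Toy Gaussian PSF standing in for the microscope's actual PSF model."""
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def pixel_value(x0, y0, pixel_size=1.0, n_sub=1, x_src=0.3, y_src=-0.2):
    """Integrate the PSF over one pixel with an n_sub x n_sub midpoint rule."""
    step = pixel_size / n_sub
    offs = (np.arange(n_sub) + 0.5) * step - pixel_size / 2
    xx, yy = np.meshgrid(x0 + offs, y0 + offs)
    return gaussian_psf(xx - x_src, yy - y_src).sum() * step**2

# The coarser the sub-pixel step, the larger the error in the modelled pixel value,
# which is what can bias the fitted location at low magnification.
exact = pixel_value(0.0, 0.0, n_sub=256)
for n in (1, 2, 4, 8, 16):
    approx = pixel_value(0.0, 0.0, n_sub=n)
    print(f"n_sub={n:2d}  relative error = {abs(approx - exact) / exact:.2e}")
```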

  2. Identification of damage in buildings based on gaps in 3D point clouds from very high resolution oblique airborne images

    NASA Astrophysics Data System (ADS)

    Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Vosselman, George

    2015-07-01

    Point clouds generated from airborne oblique images have become a suitable source for detailed building damage assessment after a disaster event, since they provide the essential geometric and radiometric features of both roof and façades of the building. However, they often contain gaps that result either from physical damage or from a range of image artefacts or data acquisition conditions. A clear understanding of those reasons, and accurate classification of gap-type, are critical for 3D geometry-based damage assessment. In this study, a methodology was developed to delineate buildings from a point cloud and classify the present gaps. The building delineation process was carried out by identifying and merging the roof segments of single buildings from the pre-segmented 3D point cloud. This approach detected 96% of the buildings from a point cloud generated using airborne oblique images. The gap detection and classification methods were tested using two other data sets obtained with Unmanned Aerial Vehicle (UAV) images with a ground resolution of around 1-2 cm. The methods detected all significant gaps and correctly identified the gaps due to damage. The gaps due to damage were identified based on the surrounding damage pattern, applying Gabor wavelets and a histogram of gradient orientation features. Two learning algorithms - SVM and Random Forests were tested for mapping the damaged regions based on radiometric descriptors. The learning model based on Gabor features with Random Forests performed best, identifying 95% of the damaged regions. The generalization performance of the supervised model, however, was less successful: quality measures decreased by around 15-30%.
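
    A minimal sketch of the supervised classification step, with synthetic stand-ins for the Gabor/gradient-orientation descriptors and labels; scikit-learn's RandomForestClassifier is used for illustration, and the feature dimensions and hyperparameters are assumptions, not those of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: one texture feature vector per image patch around a
# detected gap, labelled 1 (gap caused by damage) or 0 (gap from another cause).
rng = np.random.default_rng(9)
X_damage = rng.normal(1.0, 0.4, (200, 32))
X_other = rng.normal(0.0, 0.4, (250, 32))
X = np.vstack([X_damage, X_other])
y = np.r_[np.ones(200, dtype=int), np.zeros(250, dtype=int)]

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
new_patches = rng.normal(0.8, 0.4, (5, 32))       # unseen gap descriptors
print(clf.predict(new_patches), clf.predict_proba(new_patches)[:, 1].round(2))
```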

  3. Polarization Aberrations in Astronomical Telescopes: The Point Spread Function

    NASA Astrophysics Data System (ADS)

    Breckinridge, James B.; Lam, Wai Sze T.; Chipman, Russell A.

    2015-05-01

    Detailed knowledge of the image of the point spread function (PSF) is necessary to optimize astronomical coronagraph masks and to understand potential sources of errors in astrometric measurements. The PSF for astronomical telescopes and instruments depends not only on geometric aberrations and scalar wave diffraction but also on those wavefront errors introduced by the physical optics and the polarization properties of reflecting and transmitting surfaces within the optical system. These vector wave aberrations, called polarization aberrations, result from two sources: (1) the mirror coatings necessary to make the highly reflecting mirror surfaces, and (2) the optical prescription with its inevitable non-normal incidence of rays on reflecting surfaces. The purpose of this article is to characterize the importance of polarization aberrations, to describe the analytical tools to calculate the PSF image, and to provide the background to understand how astronomical image data may be affected. To show the order of magnitude of the effects of polarization aberrations on astronomical images, a generic astronomical telescope configuration is analyzed here by modeling a fast Cassegrain telescope followed by a single 90° deviation fold mirror. All mirrors in this example use bare aluminum reflective coatings and the illumination wavelength is 800 nm. Our findings for this example telescope are: (1) The image plane irradiance distribution is the linear superposition of four PSF images: one for each of the two orthogonal polarizations and one for each of two cross-coupled polarization terms. (2) The PSF image is brighter by 9% for one polarization component compared to its orthogonal state. (3) The PSF images for two orthogonal linearly polarization components are shifted with respect to each other, causing the PSF image for unpolarized point sources to become slightly elongated (elliptical) with a centroid separation of about 0.6 mas. This is important for both astrometry

  4. A quantitative study of 3D-scanning frequency and Δd of tracking points on the tooth surface

    PubMed Central

    Li, Hong; Lyu, Peijun; Sun, Yuchun; Wang, Yong; Liang, Xiaoyue

    2015-01-01

    Micro-movement of human jaws in the resting state might influence the accuracy of direct three-dimensional (3D) measurement. Providing a reference for sampling frequency settings of intraoral scanning systems to overcome this influence is important. In this study, we measured micro-movement, or change in distance (∆d), as the change in position of a single tracking point from one sampling time point to another in five human subjects. ∆d of tracking points on incisors at 7 sampling frequencies was judged against the clinical accuracy requirement to select proper sampling frequency settings. A curve was then fitted quantitatively between the median ∆d and the sampling frequency f to predict the trend of ∆d with increasing f. Differences in ∆d among the subjects and between the upper and lower incisor feature points of the same subject were analyzed by a non-parametric test (α = 0.05). Significant differences of incisor feature points were noted among different subjects and between upper and lower jaws of the same subject (P < 0.01). Overall, ∆d decreased with increasing frequency. When the frequency was 60 Hz, ∆d nearly reached the clinical accuracy requirement. Frequencies higher than 60 Hz did not significantly decrease ∆d further. PMID:26400112
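
    The curve-fitting step can be sketched as follows, assuming a simple decreasing power law ∆d = a·f^(−b); both the functional form and the numerical values are illustrative assumptions, not the study's data or its fitted equation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (not the study's) median micro-movement values at seven sampling
# frequencies, in micrometres, just to show the fitting step.
f_hz = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 45.0, 60.0])
dd_um = np.array([120.0, 70.0, 52.0, 41.0, 30.0, 22.0, 18.0])

def model(f, a, b):
    """Assumed decreasing power law Delta_d = a * f**(-b)."""
    return a * f**(-b)

(a, b), _ = curve_fit(model, f_hz, dd_um, p0=(200.0, 1.0))
print(f"Delta_d ~ {a:.1f} * f^-{b:.2f};  predicted at 120 Hz: {model(120.0, a, b):.1f} um")
```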

  5. Reconstruction, Quantification, and Visualization of Forest Canopy Based on 3d Triangulations of Airborne Laser Scanning Point Data

    NASA Astrophysics Data System (ADS)

    Vauhkonen, J.

    2015-03-01

    Reconstruction of three-dimensional (3D) forest canopy is described and quantified using airborne laser scanning (ALS) data with densities of 0.6-0.8 points m^-2 and field measurements aggregated at resolutions of 400-900 m^2. The reconstruction was based on computational geometry, topological connectivity, and numerical optimization. More precisely, triangulations and their filtrations, i.e. ordered sets of simplices belonging to the triangulations, based on the point data were analyzed. Triangulating the ALS point data corresponds to subdividing the underlying space of the points into weighted simplicial complexes with weights quantifying the (empty) space delimited by the points. Reconstructing the canopy volume populated by biomass will thus likely require filtering to exclude that volume from canopy voids. The approaches applied for this purpose were (i) to optimize the degree of filtration with respect to the field measurements, and (ii) to predict this degree by means of analyzing the persistent homology of the obtained triangulations, which is applied for the first time for vegetation point clouds. When derived from optimized filtrations, the total tetrahedral volume had a high degree of determination (R^2) with the stem volume considered, both alone (R^2 = 0.65) and together with other predictors (R^2 = 0.78). When derived by analyzing the topological persistence of the point data and without any field input, the R^2 were lower, but the predictions still showed a correlation with the field-measured stem volumes. Finally, producing realistic visualizations of a forested landscape using the persistent homology approach is demonstrated.

  6. Combination of Tls Point Clouds and 3d Data from Kinect v2 Sensor to Complete Indoor Models

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    The combination of data coming from multiple sensors is increasingly applied in remote sensing (multi-sensor imagery) but also in cultural heritage or robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide several benefits, such as time savings. The paper aims at analyzing whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.

  7. The Complete (3-D) Co-Seismic Displacements Using Point-Like Targets Tracking With Ascending And Descending SAR Data

    NASA Astrophysics Data System (ADS)

    Hu, Xie; Wang, Teng; Liao, Mingsheng

    2013-12-01

    SAR Interferometry (InSAR) has unique advantages, e.g., all-weather/all-time accessibility, cm-level accuracy and large spatial coverage; however, it can only obtain a one-dimensional measurement along the line-of-sight (LOS) direction. Offset tracking is an important complement for measuring large and rapid displacements in both the azimuth and range directions. Here we perform offset tracking on detected point-like targets (PT) by calculating the cross-correlation with a sinc-like template. A complete 3-D displacement field can then be derived using PT offset tracking from a pair of ascending and descending acquisitions. The presented case study on the 2010 M7.2 El Mayor-Cucapah earthquake helps us better understand the rupture details.
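
    A hedged sketch of the inversion step implied above: one ascending and one descending acquisition each contribute a range (LOS) offset and an azimuth offset, and the four observations are inverted for east/north/up displacement by least squares. The projection vectors and offsets below are illustrative placeholders, not values from the study.

        import numpy as np

        # Each row of A is the unit projection vector of one observation onto (E, N, U);
        # in practice these come from each track's incidence angle and heading.
        A = np.array([
            [-0.61,  0.12,  0.78],   # ascending LOS unit vector (illustrative)
            [ 0.20,  0.98,  0.00],   # ascending azimuth (along-track) unit vector
            [ 0.63,  0.13,  0.77],   # descending LOS unit vector
            [-0.21,  0.98,  0.00],   # descending azimuth unit vector
        ])
        offsets = np.array([0.12, 0.40, -0.05, 0.37])   # measured offsets in metres (illustrative)

        # Least-squares inversion for the 3-D displacement vector.
        d_enu, residuals, rank, _ = np.linalg.lstsq(A, offsets, rcond=None)
        print("East, North, Up displacement (m):", d_enu)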

  8. 3-D seismic over the Fausse Pointe Field: A case history of acquisition in a harsh environment

    SciTech Connect

    Duncan, P.M.; Nester, D.C.; Martin, J.A.; Moles, J.R.

    1995-12-31

    A 50 square mile 3D seismic survey was successfully acquired over Fausse Pointe Field in the latter half of 1994. The geophysical and logistical challenges of this project were immense. The steep dips and extensive range of target depths required a large shoot area with a relatively fine sampling interval. The surface, while essentially flat, included areas of cane field, crawfish ponds, thick brush, swamp, open lakes and deep canals -- all typical of southern Louisiana. Planning and permitting of the survey began in late 1993. Field operations began in June 1994 and were completed in January 1995. Field personnel numbered 150 at the peak of operations. More than 19,000 crew hours were required to complete the job at a cost of over $5,000,000. The project was completed on time and on budget. The resulting images of the salt dome and surrounding rocks are not only beautiful but are revealing many opportunities for new hydrocarbon development.

  9. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  10. Semi-automatic characterization of fractured rock masses using 3D point clouds: discontinuity orientation, spacing and SMR geomechanical classification

    NASA Astrophysics Data System (ADS)

    Riquelme, Adrian; Tomas, Roberto; Abellan, Antonio; Cano, Miguel; Jaboyedoff, Michel

    2015-04-01

    Investigation of fractured rock masses for different geological applications (e.g. fractured reservoir exploitation, rock slope instability, rock engineering, etc.) requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in 3D data acquisition using photogrammetric and/or LiDAR techniques currently allow a quick and accurate characterization of rock mass discontinuities. This contribution presents a methodology for: (a) use of 3D point clouds for the identification and analysis of planar surfaces outcropping in a rocky slope; (b) calculation of the spacing between different discontinuity sets; (c) semi-automatic calculation of the parameters that play a key role in the Slope Mass Rating geomechanical classification. As for part (a) (discontinuity orientation), our proposal identifies and defines the algebraic equations of the different discontinuity sets of the rock slope surface by applying an analysis based on a neighbouring-points coplanarity test. Additionally, the procedure finds principal orientations by Kernel Density Estimation and identifies clusters (Riquelme et al., 2014). As a result of this analysis, each point is classified with a discontinuity set and with an outcrop plane (cluster). Regarding part (b) (discontinuity spacing), our proposal utilises the previously classified point cloud to investigate how different outcropping planes are linked in space. Discontinuity spacing is calculated for each pair of linked clusters within the same discontinuity set, and the spacing values are then analysed by calculating their statistics. Finally, as for part (c), the previous results are used to calculate parameters F1, F2 and F3 of the Slope Mass Rating geomechanical classification. This analysis is carried out for each discontinuity set using their respective orientation extracted in part (a). The open access tool SMRTool (Riquelme et al., 2014) is then used to calculate F1 to F3 correction
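
    As an illustration of the neighbouring-points coplanarity test in part (a), the sketch below estimates a local plane normal and a planarity score from a PCA of each point's k nearest neighbours; the neighbourhood size and threshold are assumptions, not the paper's settings.

        import numpy as np
        from scipy.spatial import cKDTree

        def local_planes(points, k=30, planarity_threshold=0.01):
            # For every point: PCA of its k nearest neighbours; the eigenvector of the
            # smallest eigenvalue is the local normal, and a small smallest-eigenvalue
            # ratio marks the neighbourhood as coplanar.
            tree = cKDTree(points)
            normals = np.zeros_like(points)
            planar = np.zeros(len(points), dtype=bool)
            for i, p in enumerate(points):
                _, idx = tree.query(p, k=k)
                neigh = points[idx] - points[idx].mean(axis=0)
                cov = neigh.T @ neigh / k
                eigval, eigvec = np.linalg.eigh(cov)        # ascending eigenvalues
                normals[i] = eigvec[:, 0]
                planar[i] = eigval[0] / eigval.sum() < planarity_threshold
            return normals, planar

        pts = np.random.rand(2000, 3); pts[:, 2] *= 0.01    # toy, nearly planar cloud
        normals, planar_mask = local_planes(pts)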

  11. Voxelization algorithms for geospatial applications: Computational methods for voxelating spatial datasets of 3D city models containing 3D surface, curve and point data models.

    PubMed

    Nourian, Pirouz; Gonçalves, Romulo; Zlatanova, Sisi; Ohori, Ken Arroyo; Vu Vo, Anh

    2016-01-01

    Voxel representations have been used for years in scientific computation and medical imaging. The main focus of our research is to provide easy access to methods for making large-scale voxel models of the built environment for environmental modelling studies while ensuring they are spatially correct, meaning they correctly represent topological and semantic relations among objects. In this article, we present algorithms that generate voxels (volumetric pixels) out of point cloud, curve, or surface objects. The algorithms for voxelization of surfaces and curves are a customization of the topological voxelization approach [1]; we additionally provide an extension of this method for voxelization of point clouds. The developed software has the following advantages:
    • It provides easy management of connectivity levels in the resulting voxels.
    • It is not dependent on any external library except for primitive types and constructs; therefore, it is easy to integrate the algorithms into any application.
    • One of the algorithms is implemented in C++ and C for platform independence and efficiency.
    PMID:27408832
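
    A minimal sketch of the simplest of the three cases, point-cloud voxelization, is given below; surface and curve voxelization follow the topological approach cited in the text and are not reproduced here.

        import numpy as np

        def voxelize_points(points, voxel_size):
            # Bin points into a regular grid by integer division of their coordinates
            # by the voxel size; each unique index triple is one occupied voxel.
            origin = points.min(axis=0)
            indices = np.floor((points - origin) / voxel_size).astype(int)
            occupied = np.unique(indices, axis=0)
            return occupied, origin

        pts = np.random.rand(10000, 3) * 100.0               # toy cloud in a 100 m cube
        voxels, origin = voxelize_points(pts, voxel_size=1.0)
        print(len(voxels), "occupied voxels")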

  12. Crustal thickness from 3D MCS data collected over the fast-spreading East Pacific Rise at 9°50'N

    NASA Astrophysics Data System (ADS)

    Aghaei, O.; Nedimović, M. R.; Canales, J.; Carton, H. D.; Carbotte, S. M.; Mutter, J. C.

    2011-12-01

    We compute, analyze and present crustal thickness variations for a section of the fast-spreading East Pacific Rise (EPR). The area of 3D coverage is between 9°38'N and 9°58' N (~1000 km2), where the documented eruptions of 1990-91 and 2005-06 occurred. The crustal thickness is computed by depth converting the two-way reflection travel times from the seafloor to the Moho. The seafloor and Moho reflections are picked on the migrated stack volume produced from the 3D multichannel seismic (MCS) data collected on R/V Marcus G. Langseth in summer of 2008 during cruise MGL0812. The crustal velocities used for depth conversion were computed by Canales et al. (2003; 2011) by simultaneous inversion of seismic refractions and wide-angle Moho reflection traveltimes from four ridge-parallel and one ridge-perpendicular ocean bottom seismometer (OBS) profile for which data were collected during the 1998 UNDERSHOOT experiment. The MCS data analysis included 1D and 2D filtering, offset-dependent spherical divergence correction, surface-consistent amplitude correction, common midpoint (CMP) sort with flex binning, velocity analysis, normal moveout, and CMP stretch mute. The poststack processing includes seafloor multiple mute and 3D Kirchhoff poststack time migration. Here we use the crustal thickness and Moho seismic signature variations to detail their relationship with ridge segmentation, crustal age, bathymetry, and on- and off-axis magmatism. On the western flank (Pacific plate) from 9°41' to 9°48', the Moho reflection is strong. From 9°48' to 9°52', the Moho reflection varies from moderate to weak and disappears from ~3 km to ~9 km from the ridge axis. On the eastern flank (Cocos plate) from 9°41' to 9°51', the Moho reflection varies from strong to moderate. From 9°51' to 9°54' the Moho reflection varies from moderate to weak and disappears beneath a region ~3 km to ~9 km from the axis. On the Cocos plate, across-axis crustal thickness variations (5.5-6.2 km) show a

  13. Point spread function of the optical needle super-oscillatory lens

    SciTech Connect

    Roy, Tapashree; Rogers, Edward T. F.; Yuan, Guanghui; Zheludev, Nikolay I.

    2014-06-09

    Super-oscillatory optical lenses are known to achieve sub-wavelength focusing. In this paper, we analyse the imaging capabilities of a super-oscillatory lens by studying its point spread function. We experimentally demonstrate that a super-oscillatory lens can generate a point spread function 24% smaller than that dictated by the diffraction limit and has an effective numerical aperture of 1.31 in air. The object-image linear displacement property of these lenses is also investigated.

  14. Detecting and Analyzing Corrosion Spots on the Hull of Large Marine Vessels Using Colored 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Aijazi, A. K.; Malaterre, L.; Tazir, M. L.; Trassoudaine, L.; Checchin, P.

    2016-06-01

    This work presents a new method that automatically detects and analyzes surface defects, such as corrosion spots of different shapes and sizes, on large ship hulls. In the proposed method, several scans from different positions and viewing angles around the ship are registered together to form a complete 3D point cloud. The R, G, B values associated with each scan, obtained with the help of an integrated camera, are converted into HSV space to separate out the illumination-invariant color component from the intensity. Using this color component, different surface defects such as corrosion spots of different shapes and sizes are automatically detected, within a selected zone, using two different methods depending upon the level of corrosion/defects. The first method relies on a histogram-based distribution whereas the second relies on adaptive thresholds. The detected corrosion spots are then analyzed and quantified to help better plan and estimate the cost of repair and maintenance. Results are evaluated on real data using different standard evaluation metrics to demonstrate the efficacy as well as the technical strength of the proposed method.
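
    The colour pre-processing step can be sketched as follows: per-point R, G, B values are converted to HSV and the illumination-invariant components are thresholded to flag candidate corrosion points. The hue window and saturation threshold are assumptions for rust-like colours, not the paper's values.

        import numpy as np
        import colorsys

        def detect_corrosion(rgb, hue_window=(0.0, 0.08), sat_min=0.35):
            # Convert RGB in [0, 1] to HSV and threshold hue/saturation to flag
            # rust-like points (hue window and threshold are illustrative).
            hsv = np.array([colorsys.rgb_to_hsv(*p) for p in rgb])
            h, s = hsv[:, 0], hsv[:, 1]
            return (h >= hue_window[0]) & (h <= hue_window[1]) & (s >= sat_min)

        colors = np.random.rand(5000, 3)      # stand-in for the scanner's per-point RGB
        mask = detect_corrosion(colors)
        print(mask.sum(), "points flagged as possible corrosion")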

  15. Thick fibrous composite reinforcements behave as special second-gradient materials: three-point bending of 3D interlocks

    NASA Astrophysics Data System (ADS)

    Madeo, Angela; Ferretti, Manuel; dell'Isola, Francesco; Boisse, Philippe

    2015-08-01

    In this paper, we propose to use a second-gradient, 3D orthotropic model for the characterization of the mechanical behavior of thick woven composite interlocks. Such second-gradient theory is seen to directly account for the out-of-plane bending rigidity of the yarns at the mesoscopic scale which is, in turn, related to the bending stiffness of the fibers composing the yarns themselves. The yarns' bending rigidity evidently affects the macroscopic bending of the material and this fact is revealed by presenting a three-point bending test on specimens of composite interlocks. These specimens differ from one another in the relative direction of the yarns with respect to the edges of the sample. Both types of specimens are independently seen to take advantage of a second-gradient modeling for the correct description of their macroscopic bending modes. The results presented in this paper are essential for the setting up of a correct continuum framework suitable for the mechanical characterization of composite interlocks. The few second-gradient parameters introduced by the present model are all seen to be associated with peculiar deformation modes of the mesostructure (bending of the yarns) and are determined by an inverse approach. Although the presented results undoubtedly represent an important step toward the complete characterization of the mechanical behavior of fibrous composite reinforcements, more complex hyperelastic second-gradient constitutive laws must be conceived in order to account for the description of all possible mesostructure-induced deformation patterns.

  16. Iso-sciatic point: novel approach to distinguish shadowing 3-D mask effects from scanner aberrations in extreme ultraviolet lithography

    NASA Astrophysics Data System (ADS)

    Leunissen, Leonardus H. A.; Gronheid, Roel; Gao, Weimin

    2006-06-01

    Extreme ultraviolet lithography (EUVL) uses a reflective mask with a multilayer coating. Therefore, the illumination is an off-axis ring field system that is non-telecentric on the mask side. This non-zero angle of incidence combined with the three-dimensional mask topography results in the so-called "shadowing effect". The shadowing causes the printed CD to depend on the orientation as well as on the position in the slit and it will significantly influence the image formation [1,2]. In addition, simulations show that the Bossung curves are asymmetrical due to 3-D mask effects and their best focus depends on the shadowing angle [3]. Such tilts in the Bossung curves are usually associated with aberrations in the optical system. In this paper, we describe an approach in which both properties can be disentangled. Bossung curve simulations with varying effective angles of incidence (between 0 and 6 degrees) show that at discrete defocus offsets, the printed linewidth is independent of the incident angle (and thus independent of the shadowing effect), the so-called iso-sciatic (constant shadowing) point. For an ideal optical system this means that the size of a printed feature with a given mask-CD and orientation does not change through the slit. With a suitable test structure it is possible to use this effect to distinguish mask topography and imaging effects from aberrations through the slit. The approach was tested with simulations of the following aberrations: spherical, coma and astigmatism.

  17. Generating synthetic 3D density fluctuation data to verify two-point measurement of parallel correlation length

    NASA Astrophysics Data System (ADS)

    Kim, Jaewook; Ghim, Young-Chul; Nuclear Fusion and Plasma Lab Team

    2014-10-01

    A BES (beam emission spectroscopy) system and an MIR (Microwave Imaging Reflectometer) system installed in KSTAR measure 2D (radial and poloidal) density fluctuations at two different toroidal locations. This offers the possibility of measuring the parallel correlation length of ion-scale turbulence in KSTAR. Due to the lack of measurement points in the toroidal direction and the shorter separation distance between the diagnostics compared to the expected parallel correlation length, it is necessary to confirm whether a conventional statistical method, i.e., using a cross-correlation function, is valid for measuring the parallel correlation length. For this reason, we generated synthetic 3D density fluctuation data following a Gaussian random field in a toroidal coordinate system that mimics real density fluctuation data. We measure the correlation length of the synthetic data by fitting a Gaussian function to the cross-correlation function. We observe that there is disagreement between the measured and actual correlation lengths, and the degree of disagreement is a function of at least the correlation length, correlation time and advection velocity of the synthetic data. We identify the cause of the disagreement and propose an appropriate method to measure the correct correlation length.
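
    The conventional estimate that the abstract examines can be sketched as below: compute a cross-correlation of the fluctuation signals and fit a Gaussian to read off a correlation length. The toy field generated here merely stands in for the synthetic Gaussian-random-field data described in the text.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)
        z = np.linspace(0, 10, 512)                  # separation axis (arbitrary units)
        true_length = 2.0
        # toy correlated field: white noise smoothed by a Gaussian kernel
        field = np.convolve(rng.standard_normal(z.size),
                            np.exp(-(z - 5.0) ** 2 / true_length ** 2), "same")

        def gaussian(dz, amp, length):
            return amp * np.exp(-dz ** 2 / length ** 2)

        # correlation versus separation, here from the autocorrelation of the toy field
        corr = np.correlate(field, field, mode="same")
        corr = corr / corr.max()
        dz = z - z[z.size // 2]
        popt, _ = curve_fit(gaussian, dz, corr, p0=(1.0, 1.0))
        print("fitted correlation length:", popt[1])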

  18. Improved localization accuracy in double-helix point spread function super-resolution fluorescence microscopy using selective-plane illumination

    NASA Astrophysics Data System (ADS)

    Yu, Jie; Cao, Bo; Li, Heng; Yu, Bin; Chen, Danni; Niu, Hanben

    2014-09-01

    Recently, three-dimensional (3D) super-resolution imaging of cellular structures in thick samples has been enabled with wide-field super-resolution fluorescence microscopy based on the double-helix point spread function (DH-PSF). However, when the sample is Epi-illuminated, background fluorescence from out-of-focus excited molecules reduces the signal-to-noise ratio (SNR) of the in-focus image. In this paper, we resort to a selective-plane illumination strategy, which has been used for tissue-level imaging and single-molecule tracking, to eliminate out-of-focus background and to improve the SNR and the localization accuracy of standard DH-PSF super-resolution imaging in thick samples. We present a novel super-resolution microscope that combines selective-plane illumination and the DH-PSF. The setup utilizes a well-defined laser light sheet whose theoretical thickness is 1.7 μm (FWHM) at a 640 nm excitation wavelength. The image SNR of DH-PSF microscopy under selective-plane illumination and Epi-illumination is compared. As expected, the SNR of DH-PSF microscopy based on selective-plane illumination is increased remarkably, so the 3D localization precision of the DH-PSF would be improved significantly. We demonstrate its capabilities by 3D localization of single fluorescent particles. These features will provide high compatibility with thick samples for future biomedical applications.

  19. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    PubMed

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    fixed angles to estimate crown projections, and (2) different regular volume formulae to simulate crown volume according to the tree crown shapes. Based on the high-resolution 3D LIDAR point cloud data of an individual tree, the tree crown structure was reconstructed rapidly and with high accuracy, and the crown projection and volume of the individual tree were extracted by this automatic, non-contact method, which can provide a reference for tree crown structure studies and is worth popularizing in the field of precision forestry. PMID:24822422
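
    One common way to obtain the two quantities named in the title is sketched below: projection area from the 2-D convex hull of the crown points projected onto the ground plane, and crown volume from their 3-D convex hull. The paper's own procedure (directional projections and shape-specific volume formulae) is more elaborate; this is only an illustration of the extraction idea.

        import numpy as np
        from scipy.spatial import ConvexHull

        def crown_metrics(crown_points):
            # 2-D hull of the (x, y) projection gives the crown projection area
            # (for 2-D hulls, ConvexHull.volume is the enclosed area);
            # the 3-D hull gives an estimate of the crown volume.
            projection = ConvexHull(crown_points[:, :2])
            solid = ConvexHull(crown_points)
            return projection.volume, solid.volume

        pts = np.random.randn(5000, 3) * [2.0, 2.0, 3.0]     # toy crown-shaped cloud (metres)
        area, volume = crown_metrics(pts)
        print(f"projection area ~ {area:.1f} m^2, crown volume ~ {volume:.1f} m^3")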

  20. Evaluation of centricity of optical elements by using a point spread function

    SciTech Connect

    Miks, Antonin; Novak, Jiri; Novak, Pavel

    2008-06-20

    Our work describes a technique for testing the centricity of optical systems by using the point spread function. It is shown that a specific position of an axial object point can be found for every optical element, where the spherical aberration is either zero or minimal. If we image such a point with an optical element, then its point spread function will be almost identical to the point spread function of the diffraction-limited optical system. This consequence can be used for testing the centricity of precisely fabricated optical elements, because we can simply detect asymmetry of the point spread function, which is caused by the decentricity of the tested optical element. One can also use this method for testing optical elements in connection with a cementing process. Moreover, a simple formula is also derived for calculation of the coefficient of third-order coma, which is caused by the decentricity of the optical surface due to a tilt of the surface with respect to the optical axis, and a simple method for detecting the asymmetry of the point spread function is proposed.

  1. Automatic reconstruction of 3D urban landscape by computing connected regions and assigning them an average altitude from LiDAR point cloud image

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2014-10-01

    The demand for 3D city modeling has been increasing in many applications such as urban planning, computer gaming with realistic city environments, car navigation systems showing 3D city maps, and virtual city tourism inviting future visitors to a virtual city walkthrough. We proposed a simple method for reconstructing a 3D urban landscape from airborne LiDAR point cloud data. The automatic reconstruction of a 3D urban landscape was implemented by integrating all connected regions, which were extracted and extruded from altitude mask images. These mask images were generated from the gray-scale LiDAR image by altitude threshold ranges. In this study we successfully demonstrated the method on a Kanazawa city center scene by applying it to the airborne LiDAR point cloud data.
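
    A minimal sketch of the reconstruction idea, assuming an altitude image and a threshold step chosen purely for illustration: slice the image into altitude ranges, label the connected regions in each mask, and extrude each region at the mean altitude of its pixels.

        import numpy as np
        from scipy import ndimage

        # stand-in for the gray-scale LiDAR altitude image (metres); smoothed so the
        # connected regions are plausible building-sized blobs
        altitude = ndimage.gaussian_filter(np.random.rand(256, 256), 8) * 40.0
        step = 5.0                                    # altitude threshold range (assumed)

        blocks = []
        for lo in np.arange(0.0, altitude.max(), step):
            mask = (altitude >= lo) & (altitude < lo + step)
            labels, n = ndimage.label(mask)           # connected regions in this mask
            for region_id in range(1, n + 1):
                region = labels == region_id
                blocks.append({
                    "pixels": np.argwhere(region),            # footprint to extrude
                    "height": float(altitude[region].mean())  # average altitude assigned
                })
        print(len(blocks), "connected regions extruded")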

  2. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. With 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to that of the 1990s, the holographic concept is spreading in all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How does one handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject? For whom?

  3. A method of PSF generation for 3D brightfield deconvolution.

    PubMed

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function. PMID:20096049
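
    The non-blind restoration stage that the extracted PSF feeds into can be sketched with a plain Richardson-Lucy deconvolution, as below; the paper's actual contribution, extracting the PSF itself from a thin-sample Z-stack, is not reproduced here.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(stack, psf, iterations=25):
            # Standard Richardson-Lucy iteration for a 3-D stack and a given PSF.
            psf = psf / psf.sum()
            psf_flipped = psf[::-1, ::-1, ::-1]
            estimate = np.full_like(stack, stack.mean())
            for _ in range(iterations):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = stack / np.maximum(blurred, 1e-12)
                estimate *= fftconvolve(ratio, psf_flipped, mode="same")
            return estimate

        # toy 3-D data: two bright voxels blurred by a Gaussian PSF
        zz, yy, xx = np.mgrid[-8:9, -8:9, -8:9]
        psf = np.exp(-(xx**2 + yy**2 + 0.5 * zz**2) / 6.0)
        truth = np.zeros((64, 64, 64)); truth[32, 32, 32] = truth[20, 40, 30] = 1.0
        observed = fftconvolve(truth, psf / psf.sum(), mode="same") + 1e-3
        restored = richardson_lucy(observed, psf)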

  4. Hubble Space Telescope Faint Object Camera calculated point-spread functions.

    PubMed

    Lyon, R G; Dorband, J E; Hollis, J M

    1997-03-10

    A set of observed noisy Hubble Space Telescope Faint Object Camera point-spread functions is used to recover the combined Hubble and Faint Object Camera wave-front error. The low-spatial-frequency wave-front error is parameterized in terms of a set of 32 annular Zernike polynomials. The midlevel and higher spatial frequencies are parameterized in terms of a set of 891 polar-Fourier polynomials. The parameterized wave-front error is used to generate accurate calculated point-spread functions, both pre- and post-COSTAR (corrective optics space telescope axial replacement), suitable for image restoration at arbitrary wavelengths. We describe the phase-retrieval-based recovery process and the phase parameterization. Resultant calculated precorrection and postcorrection point-spread functions are shown along with an estimate of both pre- and post-COSTAR spherical aberration. PMID:18250862
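
    How a parameterized wavefront error turns into a calculated PSF can be illustrated as follows: evaluate Zernike-like terms on the pupil, form the complex pupil function, and take the squared modulus of its Fourier transform. The two terms and coefficients used here are placeholders, not the recovered HST/FOC values.

        import numpy as np

        n = 512
        y, x = (np.mgrid[-n//2:n//2, -n//2:n//2] + 0.5) / (n // 4)
        rho = np.hypot(x, y)
        pupil = (rho <= 1.0).astype(float)

        # Two low-order Zernike terms evaluated on the unit pupil (illustrative only).
        defocus   = np.sqrt(3.0) * (2 * rho**2 - 1)
        spherical = np.sqrt(5.0) * (6 * rho**4 - 6 * rho**2 + 1)
        wavefront_waves = 0.05 * defocus + 0.10 * spherical      # coefficients in waves (assumed)

        # Complex pupil function and calculated PSF.
        field = pupil * np.exp(2j * np.pi * wavefront_waves)
        psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))) ** 2
        psf /= psf.sum()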

  5. Scattering and the Point Spread Function of the New Generation Space Telescope

    NASA Technical Reports Server (NTRS)

    Schreur, Julian J.

    1996-01-01

    Preliminary design work on the New Generation Space Telescope (NGST) is currently under way. This telescope is envisioned as a lightweight, deployable Cassegrain reflector with an aperture of 8 meters, and an effective focal length of 80 meters. It is to be folded into a small-diameter package for launch by an Atlas booster, and unfolded in orbit. The primary is to consist of an octagon with a hole at the center, and with eight segments arranged in a flower petal configuration about the octagon. The corners of the petal-shaped segments are to be trimmed so that the package will fit atop the Atlas booster. This mirror, along with its secondary, will focus the light from a point source into an image which is spread from a point by diffraction effects, figure errors, and scattering of light from the surface. The distribution of light in the image of a point source is called a point spread function (PSF). The obstruction of the incident light by the secondary mirror and its support structure, the trimmed corners of the petals, and the grooves between the segments all cause the diffraction pattern characterizing an ideal point spread function to be changed, with the trimmed corners causing the rings of the Airy pattern to become broken up, and the linear grooves causing diffraction spikes running radially away from the central spot, or Airy disk. Any figure errors the mirror segments may have, or any errors in aligning the petals with the central octagon will also spread the light out from the ideal point spread function. A point spread function for a mirror the size of the NGST and having an incident wavelength of 900 nm is considered. Most of the light is confined in a circle with a diameter of 0.05 arc seconds. The ring pattern ranges in intensity from 10^-2 near the center to 10^-6 near the edge of the plotted field, and can be clearly discerned in a log plot of the intensity. The total fraction of the light scattered from this point spread function is called

  6. Electronic and magnetic structure of 3d-transition-metal point defects in silicon calculated from first principles

    NASA Astrophysics Data System (ADS)

    Beeler, F.; Andersen, O. K.; Scheffler, M.

    1990-01-01

    We describe spin-unrestricted self-consistent linear muffin-tin-orbital (LMTO) Green-function calculations for Sc, Ti, V, Cr, Mn, Fe, Co, Ni, and Cu transition-metal impurities in crystalline silicon. Both defect sites of tetrahedral symmetry are considered. All possible charge states with their spin multiplicities, magnetization densities, and energy levels are discussed and explained with a simple physical picture. The early transition-metal interstitial and late transition-metal substitutional 3d ions are found to have low spin. This is in conflict with the generally accepted crystal-field model of Ludwig and Woodbury, but not with available experimental data. For the interstitial 3d ions, the calculated deep donor and acceptor levels reproduce all experimentally observed transitions. For substitutional 3d ions, a large number of predictions is offered to be tested by future experimental studies.

  7. 3-D Surface Visualization of pH Titration "Topos": Equivalence Point Cliffs, Dilution Ramps, and Buffer Plateaus

    ERIC Educational Resources Information Center

    Smith, Garon C.; Hossain, Md Mainul; MacCarthy, Patrick

    2014-01-01

    3-D topographic surfaces ("topos") can be generated to visualize how pH behaves during titration and dilution procedures. The surfaces are constructed by plotting computed pH values above a composition grid with volume of base added in one direction and overall system dilution on the other. What emerge are surface features that…

  8. Visualization of Buffer Capacity with 3-D "Topo" Surfaces: Buffer Ridges, Equivalence Point Canyons and Dilution Ramps

    ERIC Educational Resources Information Center

    Smith, Garon C.; Hossain, Md Mainul

    2016-01-01

    BufCap TOPOS is free software that generates 3-D topographical surfaces ("topos") for acid-base equilibrium studies. It portrays pH and buffer capacity behavior during titration and dilution procedures. Topo surfaces are created by plotting computed pH and buffer capacity values above a composition grid with volume of NaOH as the x axis…

  9. The point spread function of the soft X-ray telescope aboard Yohkoh

    NASA Technical Reports Server (NTRS)

    Martens, Petrus C.; Acton, Loren W.; Lemen, James R.

    1995-01-01

    The point spread function of the SXT telescope aboard Yohkoh has been measured in flight configuration in three different X-ray lines at White Sands Missile Range. We have fitted these data with an elliptical generalization of the Moffat function. Our fitting method consists of chi-squared minimization in Fourier space, especially designed for matching of sharply peaked functions. We find excellent fits with a reduced chi-squared of order unity or less for single exposure point spread functions over most of the CCD. Near the edges of the CCD the fits are less accurate due to vignetting. From fitting results with summation of multiple exposures we find a systematic error in the fitting function of the order of 3% near the peak of the point spread function, which is close to the photon noise for typical SXT images in orbit. We find that the full width at half maximum and fitting parameters vary significantly with CCD location. However, we also find that point spread functions measured at the same location are consistent with one another within the limit determined by photon noise. A 'best' analytical fit to the PSF as a function of position on the CCD is derived for use in SXT image enhancement routines. As a side result we have found that SXT can determine the location of point sources to about a quarter of a 2.54 arc sec pixel.
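
    A hedged sketch of the core model: an elliptical generalization of the Moffat function fitted to a sampled PSF by least squares. The paper performs the chi-squared minimization in Fourier space; a direct spatial-domain fit is shown here only to make the model concrete, with illustrative parameters and no rotation term.

        import numpy as np
        from scipy.optimize import curve_fit

        def elliptical_moffat(coords, amp, x0, y0, ax, ay, beta):
            # Axis-aligned elliptical Moffat profile (no rotation term, for brevity).
            x, y = coords
            r2 = ((x - x0) / ax) ** 2 + ((y - y0) / ay) ** 2
            return amp * (1.0 + r2) ** (-beta)

        yy, xx = np.mgrid[0:64, 0:64].astype(float)
        truth = elliptical_moffat((xx, yy), 1.0, 31.2, 32.8, 2.4, 3.1, 2.5)
        data = truth + 0.01 * np.random.default_rng(1).standard_normal(truth.shape)

        p0 = (data.max(), 32, 32, 3, 3, 2)
        popt, _ = curve_fit(lambda c, *p: elliptical_moffat(c, *p).ravel(),
                            (xx, yy), data.ravel(), p0=p0)
        print("fitted widths along x and y (pixels):", popt[3], popt[4])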

  10. Successful gas hydrate prospecting using 3D seismic - A case study for the Mt. Elbert prospect, Milne Point, North Slope Alaska

    USGS Publications Warehouse

    Inks, T.L.; Agena, W.F.

    2008-01-01

    In February 2007, the Mt. Elbert Prospect stratigraphic test well, Milne Point, North Slope Alaska encountered thick methane gas hydrate intervals, as predicted by 3D seismic interpretation and modeling. Methane gas hydrate-saturated sediment was found in two intervals, totaling more than 100 ft., identified and mapped based on seismic character and wavelet modeling.

  11. Fluorescence microscopy point spread function model accounting for aberrations due to refractive index variability within a specimen.

    PubMed

    Ghosh, Sreya; Preza, Chrysanthe

    2015-07-01

    A three-dimensional (3-D) point spread function (PSF) model for wide-field fluorescence microscopy, suitable for imaging samples with variable refractive index (RI) in multilayered media, is presented. This PSF model is a key component for accurate 3-D image restoration of thick biological samples, such as lung tissue. Microscope- and specimen-derived parameters are combined with a rigorous vectorial formulation to obtain a new PSF model that accounts for additional aberrations due to specimen RI variability. Experimental evaluation and verification of the PSF model was accomplished using images from 175-nm fluorescent beads in a controlled test sample. Fundamental experimental validation of the advantage of using improved PSFs in depth-variant restoration was accomplished by restoring experimental data from beads (6  μm in diameter) mounted in a sample with RI variation. In the investigated study, improvement in restoration accuracy in the range of 18 to 35% was observed when PSFs from the proposed model were used over restoration using PSFs from an existing model. The new PSF model was further validated by showing that its prediction compares to an experimental PSF (determined from 175-nm beads located below a thick rat lung slice) with a 42% improved accuracy over the current PSF model prediction. PMID:26154937

  12. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer-aided planning for orthodontic treatment requires knowledge of the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.

  13. Registration of overlapping 3D point clouds using extracted line segments. (Polish Title: Rejestracja chmur punktów 3D w oparciu o wyodrębnione krawędzie)

    NASA Astrophysics Data System (ADS)

    Poręba, M.; Goulette, F.

    2014-12-01

    The registration of 3D point clouds collected from different scanner positions is necessary in order to avoid occlusions, ensure a full coverage of areas, and collect useful data for analyzing and documenting the surrounding environment. This procedure involves three main stages: 1) choosing appropriate features, which can be reliably extracted; 2) matching conjugate primitives; 3) estimating the transformation parameters. Currently, points and spheres are most frequently chosen as the registration features. However, due to limited point cloud resolution, proper identification and precise measurement of a common point within the overlapping laser data is almost impossible. One possible solution to this problem may be a registration process based on the Iterative Closest Point (ICP) algorithm or one of its variants. Alternatively, planar and linear feature-based registration techniques can also be applied. In this paper, we propose the use of line segments obtained from intersecting planes modelled within individual scans. Such primitives can be easily extracted even from low-density point clouds. Working with synthetic data, several existing line-based registration methods are evaluated according to their robustness to noise and the precision of the estimated transformation parameters. For the purpose of quantitative assessment, an accuracy criterion based on a modified Hausdorff distance is defined. Since an automated matching of segments is a challenging task that influences the correctness of the transformation parameters, a correspondence-finding algorithm is developed. The tests show that our matching algorithm provides a correct pairing with an accuracy of at least 99%, with about 8% of line pairs omitted.
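
    The accuracy criterion can be illustrated with a modified Hausdorff distance between two sets of segments, each discretized into points, as sketched below; the exact criterion used in the paper may differ, and the common definition (mean of directed nearest-neighbour distances, taking the larger direction) is assumed.

        import numpy as np
        from scipy.spatial import cKDTree

        def sample_segment(p0, p1, n=20):
            # Discretize a segment into n evenly spaced points.
            t = np.linspace(0.0, 1.0, n)[:, None]
            return p0 + t * (p1 - p0)

        def modified_hausdorff(points_a, points_b):
            # Mean directed nearest-neighbour distances in both directions, take the larger.
            d_ab = cKDTree(points_b).query(points_a)[0].mean()
            d_ba = cKDTree(points_a).query(points_b)[0].mean()
            return max(d_ab, d_ba)

        segs_a = [(np.array([0., 0., 0.]), np.array([1., 0., 0.]))]
        segs_b = [(np.array([0., 0.05, 0.]), np.array([1., 0.05, 0.]))]
        pts_a = np.vstack([sample_segment(*s) for s in segs_a])
        pts_b = np.vstack([sample_segment(*s) for s in segs_b])
        print("modified Hausdorff distance:", modified_hausdorff(pts_a, pts_b))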

  14. Individual 3D region-of-interest atlas of the human brain: neural-network-based tissue classification with automatic training point extraction

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Obladen, Thorsten; Sabri, Osama; Buell, Udalrich

    2000-06-01

    The purpose of individual 3D region-of-interest atlas extraction is to automatically define anatomically meaningful regions in 3D MRI images for quantification of functional parameters (PET, SPECT: rMRGlu, rCBF). The first step of atlas extraction is to automatically classify brain tissue types into gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), scalp/bone (SB) and background (BG). A feed-forward neural network with back-propagation training algorithm is used and compared to other numerical classifiers. It can be trained by a sample from the individual patient data set in question. Classification is done by a 'winner takes all' decision. Automatic extraction of a user-specified number of training points is done in a cross-sectional slice. Background separation is done by simple region growing. The most homogeneous voxels define the region for WM training point extraction (TPE). Non-white-matter and nonbackground regions are analyzed for GM and CSF training points. For SB TPE, the distance from the BG region is one feature. For each class, spatially uniformly distributed training points are extracted by a random generator from these regions. Simulated and real 3D MRI images are analyzed and error rates for TPE and classification calculated. The resulting class images can be analyzed for extraction of anatomical ROIs.
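
    The classification stage can be sketched as a small feed-forward network trained on the automatically extracted training points and applied voxel-wise with a winner-takes-all decision; the per-voxel features and network size below are illustrative assumptions.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        CLASSES = ["GM", "WM", "CSF", "SB", "BG"]
        rng = np.random.default_rng(0)

        # stand-in training points: assumed per-voxel features such as
        # (intensity, local mean, distance-to-background)
        X_train = rng.random((500, 3))
        y_train = rng.integers(0, len(CLASSES), 500)   # labels from the training-point extraction

        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
        clf.fit(X_train, y_train)

        X_voxels = rng.random((10000, 3))              # features for every voxel of the volume
        labels = clf.predict(X_voxels)                 # winner-takes-all class per voxel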

  15. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in some local places where the point cloud information is not sufficiently dense. Those limits can be overcome by using deformation analysis exploiting directly the original 3D point clouds assuming some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the

  16. 3D printed diffractive terahertz lenses.

    PubMed

    Furlan, Walter D; Ferrando, Vicente; Monsoriu, Juan A; Zagrajek, Przemysław; Czerwińska, Elżbieta; Szustakowski, Mieczysław

    2016-04-15

    A 3D printer was used to realize custom-made diffractive THz lenses. After testing several materials, phase binary lenses with periodic and aperiodic radial profiles were designed and constructed in polyamide material to work at 0.625 THz. The nonconventional focusing properties of such lenses were assessed by computing and measuring their axial point spread function (PSF). Our results demonstrate that inexpensive 3D printed THz diffractive lenses can be reliably used in focusing and imaging THz systems. Diffractive THz lenses with unprecedented features, such as extended depth of focus or bifocalization, have been demonstrated. PMID:27082335

  17. Visualization of molecular fluorescence point spread functions via remote excitation switching fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Su, Liang; Lu, Gang; Kenens, Bart; Rocha, Susana; Fron, Eduard; Yuan, Haifeng; Chen, Chang; van Dorpe, Pol; Roeffaers, Maarten B. J.; Mizuno, Hideaki; Hofkens, Johan; Hutchison, James A.; Uji-I, Hiroshi

    2015-02-01

    The enhancement of molecular absorption, emission and scattering processes by coupling to surface plasmon polaritons on metallic nanoparticles is a key issue in plasmonics for applications in (bio)chemical sensing, light harvesting and photocatalysis. Nevertheless, the point spread functions for single-molecule emission near metallic nanoparticles remain difficult to characterize due to fluorophore photodegradation, background emission and scattering from the plasmonic structure. Here we overcome this problem by exciting fluorophores remotely using plasmons propagating along metallic nanowires. The experiments reveal a complex array of single-molecule fluorescence point spread functions that depend not only on nanowire dimensions but also on the position and orientation of the molecular transition dipole. This work has consequences for both single-molecule regime-sensing and super-resolution imaging involving metallic nanoparticles and opens the possibilities for fast size sorting of metallic nanoparticles, and for predicting molecular orientation and binding position on metallic nanoparticles via far-field optical imaging.

  18. STRONG GRAVITATIONAL LENS MODELING WITH SPATIALLY VARIANT POINT-SPREAD FUNCTIONS

    SciTech Connect

    Rogers, Adam; Fiege, Jason D.

    2011-12-10

    Astronomical instruments generally possess spatially variant point-spread functions, which determine the amount by which an image pixel is blurred as a function of position. Several techniques have been devised to handle this variability in the context of the standard image deconvolution problem. We have developed an iterative gravitational lens modeling code called Mirage that determines the parameters of pixelated source intensity distributions for a given lens model. We are able to include the effects of spatially variant point-spread functions using the iterative procedures in this lensing code. In this paper, we discuss the methods to include spatially variant blurring effects and test the results of the algorithm in the context of gravitational lens modeling problems.

  19. Strong Gravitational Lens Modeling with Spatially Variant Point-spread Functions

    NASA Astrophysics Data System (ADS)

    Rogers, Adam; Fiege, Jason D.

    2011-12-01

    Astronomical instruments generally possess spatially variant point-spread functions, which determine the amount by which an image pixel is blurred as a function of position. Several techniques have been devised to handle this variability in the context of the standard image deconvolution problem. We have developed an iterative gravitational lens modeling code called Mirage that determines the parameters of pixelated source intensity distributions for a given lens model. We are able to include the effects of spatially variant point-spread functions using the iterative procedures in this lensing code. In this paper, we discuss the methods to include spatially variant blurring effects and test the results of the algorithm in the context of gravitational lens modeling problems.
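
    One standard way to apply a position-dependent PSF, sketched below under the assumption of overlapping-tile interpolation, is to convolve each tile with its locally valid PSF and blend the tiles back together; the Mirage code may use a different scheme.

        import numpy as np
        from scipy.signal import fftconvolve

        def variant_blur(image, psf_of, tile=64):
            # Blur with a spatially variant PSF: overlapping tapered tiles, each
            # convolved with the PSF valid at its centre, then blended by the taper.
            out = np.zeros_like(image)
            weight = np.zeros_like(image)
            taper = np.outer(np.hanning(tile), np.hanning(tile)) + 1e-6
            for i in range(0, image.shape[0] - tile + 1, tile // 2):
                for j in range(0, image.shape[1] - tile + 1, tile // 2):
                    patch = image[i:i + tile, j:j + tile] * taper
                    psf = psf_of(i + tile // 2, j + tile // 2)
                    out[i:i + tile, j:j + tile] += fftconvolve(patch, psf, mode="same")
                    weight[i:i + tile, j:j + tile] += taper
            return out / np.maximum(weight, 1e-6)

        def gaussian_psf(y, x, size=15):
            # Illustrative PSF whose width grows across the field of view.
            sigma = 1.0 + 2.0 * x / 512.0
            g = np.exp(-((np.arange(size) - size // 2) ** 2) / (2 * sigma ** 2))
            k = np.outer(g, g)
            return k / k.sum()

        img = np.zeros((512, 512)); img[::64, ::64] = 1.0   # grid of point sources
        blurred = variant_blur(img, gaussian_psf)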

  20. Uav-Based Acquisition of 3d Point Cloud - a Comparison of a Low-Cost Laser Scanner and Sfm-Tools

    NASA Astrophysics Data System (ADS)

    Mader, D.; Blaskow, R.; Westfeld, P.; Maas, H.-G.

    2015-08-01

    The Project ADFEX (Adaptive Federative 3D Exploration of Multi Robot System) pursues the goal of developing a time- and cost-efficient system for exploration and monitoring tasks of unknown areas or buildings. A fleet of unmanned aerial vehicles equipped with appropriate sensors (laser scanner, RGB camera, near-infrared camera, thermal camera) was designed and built. A typical operational scenario may include the exploration of the object or area of investigation by a UAV equipped with a laser scanning range finder to generate a rough point cloud in real time, providing an overview of the object on a ground station as well as an obstacle map. The data about the object enable path planning for the robot fleet. Subsequently, the object is captured by an RGB camera mounted on a second flying robot for the generation of a dense and accurate 3D point cloud using structure-from-motion techniques. In addition, the detailed image data serve as the basis for visual damage detection on the investigated building. This paper focuses on our experience with the use of a low-cost, light-weight Hokuyo laser scanner onboard a UAV. The hardware components for laser-scanner-based 3D point cloud acquisition are discussed, problems are demonstrated and analyzed, and a quantitative analysis of the accuracy potential is presented, together with a comparison against structure-from-motion tools.

  1. Study of texture stitching in 3D modeling of lidar point cloud based on per-pixel linear interpolation along loop line buffer

    NASA Astrophysics Data System (ADS)

    Xu, Jianxin; Liang, Hong

    2013-07-01

    Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, generating TINs, and mapping texture, a 3D model of a real object is obtained. When the object is too large, it is separated into several parts. This paper mainly focuses on the problem of uneven gray levels at the intersection of two adjacent textures. A new algorithm, per-pixel linear interpolation along a loop line buffer, is presented. The experimental data derive from the point cloud of a stone lion situated in front of the west gate of Henan Polytechnic University. The modeling flow is composed of three parts: first, the large object is separated into two parts; then each part is modeled; finally, the whole 3D model of the stone lion is composed from the two part models. When the two part models are combined, there is an obvious fissure line in the overlapping section of the two adjacent textures. Some researchers decrease the brightness value of all pixels of the two adjacent textures with various algorithms; however, such algorithms have limited effect and the fissure line still remains. The uneven gray levels of the two adjacent textures are dealt with by the algorithm in this paper. The fissure line in the overlapping section of the textures is eliminated, and the gray transition in the overlapping section becomes smoother.
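
    The blending idea can be sketched as follows, assuming for illustration a straight vertical overlap rather than the paper's loop line buffer: within the overlap, each pixel is a linear mix of the two textures with a weight that ramps from 0 to 1, so the gray-level transition becomes gradual instead of an abrupt fissure line.

        import numpy as np

        def blend_overlap(tex_left, tex_right, overlap):
            # Linearly interpolate per pixel across an overlap strip of the two textures.
            h, w = tex_left.shape
            out = np.empty((h, 2 * w - overlap), dtype=float)
            out[:, :w - overlap] = tex_left[:, :w - overlap]
            out[:, w:] = tex_right[:, overlap:]
            alpha = np.linspace(0.0, 1.0, overlap)            # per-pixel linear weight
            out[:, w - overlap:w] = ((1 - alpha) * tex_left[:, w - overlap:]
                                     + alpha * tex_right[:, :overlap])
            return out

        left = np.full((256, 256), 100.0)                     # slightly different gray levels
        right = np.full((256, 256), 130.0)
        mosaic = blend_overlap(left, right, overlap=40)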

  2. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  3. A New Stochastic Modeling of 3-D Mud Drapes Inside Point Bar Sands in Meandering River Deposits

    SciTech Connect

    Yin, Yanshu

    2013-12-15

    The environment of major sediments of eastern China oilfields is a meandering river where mud drapes inside point bar sand occur and are recognized as important factors for underground fluid flow and distribution of the remaining oil. The present detailed architectural analysis, and the related mud drapes' modeling inside a point bar, is practical work to enhance oil recovery. This paper illustrates a new stochastic modeling of mud drapes inside point bars. The method is a hierarchical strategy and composed of three nested steps. Firstly, the model of meandering channel bodies is established using the Fluvsim method. Each channel centerline obtained from the Fluvsim is preserved for the next simulation. Secondly, the curvature ratios of each meandering river at various positions are calculated to determine the occurrence of each point bar. The abandoned channel is used to characterize the geometry of each defined point bar. Finally, mud drapes inside each point bar are predicted through random sampling of various parameters, such as number, horizontal intervals, dip angle, and extended distance of mud drapes. A dataset, collected from a reservoir in the Shengli oilfield of China, was used to illustrate the mud drapes' building procedure proposed in this paper. The results show that the inner architectural elements of the meandering river are depicted fairly well in the model. More importantly, the high prediction precision from the cross validation of five drilled wells shows the practical value and significance of the proposed method.

  4. Correlation of Point B and Lymph Node Dose in 3D-Planned High-Dose-Rate Cervical Cancer Brachytherapy

    SciTech Connect

    Lee, Larissa J.; Sadow, Cheryl A.; Russell, Anthony; Viswanathan, Akila N.

    2009-11-01

    Purpose: To compare high dose rate (HDR) point B to pelvic lymph node dose using three-dimensional-planned brachytherapy for cervical cancer. Methods and Materials: Patients with FIGO Stage IB-IIIB cervical cancer received 70 tandem HDR applications using CT-based treatment planning. The obturator, external, and internal iliac lymph nodes (LN) were contoured. Per fraction (PF) and combined fraction (CF) right (R), left (L), and bilateral (Bil) nodal doses were analyzed. Point B dose was compared with LN dose-volume histogram (DVH) parameters by paired t test and Pearson correlation coefficients. Results: Mean PF and CF doses to point B were R 1.40 Gy ± 0.14 (CF: 7 Gy), L 1.43 ± 0.15 (CF: 7.15 Gy), and Bil 1.41 ± 0.15 (CF: 7.05 Gy). The correlation coefficients between point B and the D100, D90, D50, D2cc, D1cc, and D0.1cc LN were all less than 0.7. Only the D2cc to the obturator and the D0.1cc to the external iliac nodes were not significantly different from the point B dose. Significant differences between R and L nodal DVHs were seen, likely related to tandem deviation from irregular tumor anatomy. Conclusions: With HDR brachytherapy for cervical cancer, per fraction nodal dose approximates a dose equivalent to teletherapy. Point B is a poor surrogate for dose to specific nodal groups. Three-dimensional defined nodal contours during brachytherapy provide a more accurate reflection of delivered dose and should be part of comprehensive planning of the total dose to the pelvic nodes, particularly when there is evidence of pathologic involvement.

  5. A new methodology in fast and accurate matching of the 2D and 3D point clouds extracted by laser scanner systems

    NASA Astrophysics Data System (ADS)

    Torabi, M.; Mousavi G., S. M.; Younesian, D.

    2015-03-01

    Registration of point clouds is a conventional challenge in computer-vision-related applications. As an application, the matching of train wheel profiles extracted from two viewpoints is studied in this paper. The registration problem is formulated as an optimization problem. An error minimization function for the registration of two partially overlapping point clouds is presented. The error function is defined as the sum of the squared distances between the source points and their corresponding pairs, which should be minimized. The corresponding pairs are obtained through Iterative Closest Point (ICP) variants. Here, a point-to-plane ICP variant is employed. Principal Component Analysis (PCA) is used to obtain tangent planes, and it is shown that minimization of the proposed objective function reduces to the point-to-plane ICP variant. We utilized this algorithm to register point clouds of two partially overlapping profiles of a train wheel extracted from two viewpoints in 2D. Also, a number of synthetic point clouds and a number of real point clouds in 3D are studied to evaluate the reliability and rate of convergence of our method compared with other registration methods.
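
    The objective described above can be sketched as follows: nearest neighbours give corresponding pairs, a PCA of local neighbourhoods gives tangent-plane normals, and the summed squared point-to-plane distance is the quantity minimized over the rigid transformation. Only a single evaluation of the error is shown, not the full iterative minimization.

        import numpy as np
        from scipy.spatial import cKDTree

        def normals_by_pca(points, k=10):
            # Tangent-plane normal per point: the direction of least variance of its
            # k nearest neighbours (last right-singular vector of the centred block).
            tree = cKDTree(points)
            normals = np.zeros_like(points)
            for i, p in enumerate(points):
                _, idx = tree.query(p, k=k)
                neigh = points[idx] - points[idx].mean(axis=0)
                _, _, vt = np.linalg.svd(neigh)
                normals[i] = vt[-1]
            return normals

        def point_to_plane_error(source, target, target_normals):
            # Sum of squared projections of the residuals onto the target normals.
            tree = cKDTree(target)
            _, idx = tree.query(source)
            diff = source - target[idx]
            return np.sum(np.einsum("ij,ij->i", diff, target_normals[idx]) ** 2)

        target = np.random.rand(1000, 3)
        source = target + np.array([0.01, 0.0, 0.0])   # toy offset between the two profiles
        err = point_to_plane_error(source, target, normals_by_pca(target))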

  6. SU-C-18A-04: 3D Markerless Registration of Lung Based On Coherent Point Drift: Application in Image Guided Radiotherapy

    SciTech Connect

    Nasehi Tehrani, J; Wang, J; Guo, X; Yang, Y

    2014-06-01

    Purpose: This study evaluated a new probabilistic non-rigid registration method, coherent point drift (CPD), for real-time 3D markerless registration of lung motion during radiotherapy. Method: 4DCT image datasets from DIR-Lab (www.dir-lab.com) were used to create a 3D boundary element model of the lungs. In the first step, the 3D surface of the lungs in respiration phases T0 and T50 was segmented and divided into a finite number of linear triangular elements. Each triangle is a two-dimensional element with three vertices, each vertex having three degrees of freedom. One of the main features of lung motion is velocity coherence, so the vertices that form the lung mesh should share the features and degrees of freedom of the lung structure; that is, vertices close to each other tend to move coherently. In the next step, we implemented the coherent point drift method to calculate the nonlinear displacement of vertices between different expiratory phases. Results: The method was applied to the images of 10 patients in the DIR-Lab dataset. The normal distribution of vertices relative to the origin was calculated for each expiratory stage. The results show that the maximum registration error between different expiratory phases is less than 0.4 mm (0.38 mm SI, 0.33 mm AP, 0.29 mm RL). The method reliably provides the displacement vectors and the degrees of freedom (DOFs) of the lung structure in radiotherapy. Conclusions: We evaluated a new 3D registration method for the distributed set of vertices of the lung mesh. In this technique, the velocity coherence of lung motion is inserted as a penalty in the regularization function. The results indicate that high registration accuracy is achievable with CPD. This method is helpful for calculating displacement vectors and for analyzing possible physiological and anatomical changes during treatment.
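
    For readers unfamiliar with coherent point drift, the core of its E-step is a Gaussian-mixture soft assignment between the two vertex sets. The sketch below computes those correspondence probabilities only; the fixed variance and uniform outlier weight are illustrative assumptions, and the M-step (the coherent displacement update) is omitted.

```python
import numpy as np

def cpd_posteriors(X, Y, sigma2, w=0.1):
    """CPD E-step: probability P[m, n] that fixed point X[n] was generated by moving point Y[m]."""
    N, D = X.shape
    M, _ = Y.shape
    d2 = np.sum((Y[:, None, :] - X[None, :, :]) ** 2, axis=2)        # (M, N) squared distances
    num = np.exp(-d2 / (2.0 * sigma2))                               # Gaussian kernels
    c = (2.0 * np.pi * sigma2) ** (D / 2.0) * w / (1.0 - w) * M / N  # uniform outlier term
    return num / (num.sum(axis=0, keepdims=True) + c)

# Toy usage with random 3D vertex sets standing in for T0 / T50 lung-mesh vertices
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
Y = X[:150] + 0.02 * rng.normal(size=(150, 3))
P = cpd_posteriors(X, Y, sigma2=0.05)
print(P.shape, float(P.sum(axis=0).max()))   # each column sums to less than 1
```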

  7. Relationship between ridge segmentation and Moho transition zone structure from 3D multichannel seismic data collected over the fast-spreading East Pacific Rise at 9°50'N

    NASA Astrophysics Data System (ADS)

    Aghaei, O.; Nedimovic, M. R.; Canales, J.; Carton, H. D.; Carbotte, S. M.; Mutter, J. C.

    2010-12-01

    We present stack and migrated stack volumes of a fast-spreading center produced from high-resolution 3D multichannel seismic (MCS) data collected in the summer of 2008 over the East Pacific Rise (EPR) at 9°50'N during cruise MGL0812. These volumes give us new insight into the 3D structure of the lower crust and Moho Transition Zone (MTZ) along and across the ridge axis, and into how this structure relates to ridge segmentation at the spreading axis. The area of 3D coverage lies between 9°38'N and 9°58'N (~1000 km2), where the documented eruptions of 1990-91 and 2005-06 occurred. The high-resolution survey has a nominal bin size of 6.25 m in the cross-axis direction and 37.5 m in the along-axis direction. The prestack processing sequence applied to the data includes 1D and 2D filtering to remove low-frequency cable noise, offset-dependent spherical divergence correction to compensate for geometrical spreading, surface-consistent amplitude correction to balance abnormally high/low shot and channel amplitudes, trace editing, velocity analysis, normal moveout (NMO), and CMP mute of stretched far-offset arrivals. The poststack processing includes a seafloor multiple mute to reduce migration noise and poststack time migration. We will also apply primary multiple removal and prestack time migration to the data and compare the results to the migrated stack volume. The poststack and prestack migrated volumes will then be used to detail Moho seismic signature variations and their relationship to ridge segmentation, crustal age, bathymetry, and magmatism. We anticipate that the results will also provide insight into the mantle upwelling pattern, which is actively debated for the study area.

  8. Evaluation of the Quantitative Accuracy of 3D Reconstruction of Edentulous Jaw Models with Jaw Relation Based on Reference Point System Alignment

    PubMed Central

    Li, Weiwei; Yuan, Fusong; Lv, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Objectives: To apply contact measurement and reference point system (RPS) alignment techniques to establish a method for 3D reconstruction of edentulous jaw models in centric relation, and to quantitatively evaluate its accuracy. Methods: Upper and lower edentulous jaw models were prepared clinically, and 10 pairs of resin cylinders of the same size were attached to the axial surfaces of the upper and lower models. The occlusal bases and the upper and lower jaw models were installed in the centric relation position. A Faro Edge 1.8m measuring arm was used to directly obtain the center points of the base surfaces of the cylinders (contact method). An Activity 880 dental scanner was used to obtain 3D data of the cylinders, from which the center points were fitted (fitting method). Three pairs of center points were used to align the virtual models to centric relation, and an observation coordinate system was established interactively. The straight-line distances in the X (horizontal left/right), Y (horizontal anterior/posterior), and Z (vertical) directions between the remaining 7 pairs of center points derived from the contact and fitting methods were measured and analyzed using a paired t-test. Results: The differences in the straight-line distances of the remaining 7 pairs of center points between the two methods were X: 0.074 ± 0.107 mm, Y: 0.168 ± 0.176 mm, and Z: −0.003 ± 0.155 mm. The paired t-test gave p > 0.05 for X and Z and p < 0.05 for Y. Conclusion: By using contact measurement and the reference point system alignment technique, highly accurate reconstruction of the vertical distance and centric relation of a digital edentulous jaw model can be achieved, which meets the design and manufacturing requirements of complete dentures. The error in the horizontal anterior/posterior jaw relation was relatively large. PMID:25659133

  9. Dynamic topology and flux rope evolution during non-linear tearing of 3D null point current sheets

    SciTech Connect

    Wyper, P. F.; Pontin, D. I.

    2014-10-15

    In this work, the dynamic magnetic field within a tearing-unstable three-dimensional current sheet about a magnetic null point is described in detail. We focus on the evolution of the magnetic null points and flux ropes that are formed during the tearing process. Generally, we find that both magnetic structures are created prolifically within the layer and are non-trivially related. We examine how nulls are created and annihilated during bifurcation processes, and describe how they evolve within the current layer. The type of null bifurcation first observed is associated with the formation of pairs of flux ropes within the current layer. We also find that new nulls form within these flux ropes, both following internal reconnection and as adjacent flux ropes interact. The flux ropes exhibit a complex evolution, driven by a combination of ideal kinking and their interaction with the outflow jets from the main layer. The finite size of the unstable layer also allows us to consider the wider effects of flux rope generation. We find that the unstable current layer acts as a source of torsional magnetohydrodynamic waves and dynamic braiding of magnetic fields. The implications of these results to several areas of heliophysics are discussed.

  10. Measurement of Phased Array Point Spread Functions for Use with Beamforming

    NASA Technical Reports Server (NTRS)

    Bahr, Chris; Zawodny, Nikolas S.; Bertolucci, Brandon; Woolwine, Kyle; Liu, Fei; Li, Juan; Sheplak, Mark; Cattafesta, Louis

    2011-01-01

    Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region: the true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup and still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented here investigates direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed for different conditions to evaluate features such as shift-invariance, shear layers, and model presence. Results show that the experimental measurements trend with theory with regard to source offset. The source shows the expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.

  11. Documenting a Complex Modern Heritage Building Using Multi Image Close Range Photogrammetry and 3d Laser Scanned Point Clouds

    NASA Astrophysics Data System (ADS)

    Vianna Baptista, M. L.

    2013-07-01

    Integrating different technologies and areas of expertise helps fill gaps when optimizing the documentation of complex buildings. Described below is the process used in the first part of a restoration project, the architectural survey of the Theatre Guaira Cultural Centre in Curitiba, Brazil. To reduce time in the field, the two-person survey team had to work, over three days, around the continuous artistic activities and the performers' intense schedule. Both technologies (high-definition laser scanning and close-range photogrammetry) were used to record all details in the least amount of time without disturbing the artists' rehearsals and performances. Laser scanning was ideal for recording the monumental stage structure with all of its existing platforms, light fixtures, scenery walls, and curtains. Although scanned at high definition, parts of the exterior façades were also recorded using close-range photogrammetry. Tiny cracks in the marble plaques and mosaic tiles, not visible in the point clouds, could then be precisely documented in order to create the exterior façade textures and the damage-mapping drawings. The combination of technologies and the expertise of the service providers, knowing how and what to document and what to deliver to the client, enabled maximum benefit to the subsequent restoration project.

  12. Disentangling the history of complex multi-phased shell beds based on the analysis of 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Harzhauser, Mathias; Djuricic, Ana; Mandic, Oleg; Dorninger, Peter; Nothegger, Clemens; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2015-04-01

    Shell beds are key features in sedimentary records throughout the Phanerozoic. The interplay between burial rates and population productivity is reflected in distinct degrees of shelliness. Consequently, shell beds may provide information on the various physical processes that led to the accumulation and preservation of hard parts. Many shell beds pass through a complex history of formation, being shaped by more than one factor. In shallow marine settings, the composition of shell beds is often strongly influenced by winnowing, reworking, and transport. These processes may cause considerable time averaging and the accumulation of specimens that lived thousands of years apart. In the best case, the environment remained stable during that time span and the mixing does not mask the overall composition. A major obstacle for the interpretation of shell beds, however, is the amalgamation of several depositional units into a single concentration, as is typical for tempestites and tsunamites. Disentangling such mixed assemblages requires a deep understanding of the ecological requirements of the taxa involved - achievable for geologically young shell beds with living relatives - and a statistical approach to quantify the contributions of the various death assemblages. It also requires an understanding of the sedimentary processes potentially involved in their formation. Here we present the first attempt to describe and decipher such a multi-phase shell bed based on a high-resolution digital surface model (1 mm) combined with ortho-photos at a resolution of 0.5 mm per pixel. Documenting the oyster reef requires precisely georeferenced data; owing to the high redundancy of the point cloud, an accuracy of a few mm was achieved. The shell accumulation covers an area of 400 m2 with thousands of specimens, which were excavated during a three-month campaign at Stetten in Lower Austria. Formed in an Early Miocene estuary of the Paratethys Sea it is mainly composed

  13. Bootstrapping 3D fermions

    NASA Astrophysics Data System (ADS)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  14. 3D Visualization of the Temporal and Spatial Spread of Tau Pathology Reveals Extensive Sites of Tau Accumulation Associated with Neuronal Loss and Recognition Memory Deficit in Aged Tau Transgenic Mice

    PubMed Central

    Fu, Hongjun; Hussaini, S. Abid; Wegmann, Susanne; Profaci, Caterina; Daniels, Jacob D.; Herman, Mathieu; Emrani, Sheina; Figueroa, Helen Y.; Hyman, Bradley T.; Davies, Peter; Duff, Karen E.

    2016-01-01

    3D volume imaging using iDISCO+ was applied to observe the spatial and temporal progression of tau pathology in deep structures of the brain of a mouse model that recapitulates the earliest stages of Alzheimer’s disease (AD). Tau pathology was compared at four timepoints, up to 34 months as it spread through the hippocampal formation and out into the neocortex along an anatomically connected route. Tau pathology was associated with significant gliosis. No evidence for uptake and accumulation of tau by glia was observed. Neuronal cells did appear to have internalized tau, including in extrahippocampal areas as a small proportion of cells that had accumulated human tau protein did not express detectible levels of human tau mRNA. At the oldest timepoint, mature tau pathology in the entorhinal cortex (EC) was associated with significant cell loss. As in human AD, mature tau pathology in the EC and the presence of tau pathology in the neocortex correlated with cognitive impairment. 3D volume imaging is an ideal technique to easily monitor the spread of pathology over time in models of disease progression. PMID:27466814

  15. 3D Visualization of the Temporal and Spatial Spread of Tau Pathology Reveals Extensive Sites of Tau Accumulation Associated with Neuronal Loss and Recognition Memory Deficit in Aged Tau Transgenic Mice.

    PubMed

    Fu, Hongjun; Hussaini, S Abid; Wegmann, Susanne; Profaci, Caterina; Daniels, Jacob D; Herman, Mathieu; Emrani, Sheina; Figueroa, Helen Y; Hyman, Bradley T; Davies, Peter; Duff, Karen E

    2016-01-01

    3D volume imaging using iDISCO+ was applied to observe the spatial and temporal progression of tau pathology in deep structures of the brain of a mouse model that recapitulates the earliest stages of Alzheimer's disease (AD). Tau pathology was compared at four timepoints, up to 34 months as it spread through the hippocampal formation and out into the neocortex along an anatomically connected route. Tau pathology was associated with significant gliosis. No evidence for uptake and accumulation of tau by glia was observed. Neuronal cells did appear to have internalized tau, including in extrahippocampal areas as a small proportion of cells that had accumulated human tau protein did not express detectible levels of human tau mRNA. At the oldest timepoint, mature tau pathology in the entorhinal cortex (EC) was associated with significant cell loss. As in human AD, mature tau pathology in the EC and the presence of tau pathology in the neocortex correlated with cognitive impairment. 3D volume imaging is an ideal technique to easily monitor the spread of pathology over time in models of disease progression. PMID:27466814

  16. The effects of the atmospheric point-spread 'seeing' function on spatially resolved spectra of Jupiter

    NASA Technical Reports Server (NTRS)

    Gelfand, J.; Cochran, W. D.; Smith, W. H.

    1977-01-01

    We present the results of an analysis of the effects of atmospheric seeing and of instrumental spectral and spatial resolution on the observed variation of absorption-line profiles across the disk of Jupiter. The technique described may be applied equally well to the analysis of observations of any extended astronomical source. These results show the necessity of obtaining accurate point-spread-function information during the course of observations of this nature. We also point out that in order to avoid the uncertainties and ambiguities inherent in attempts at deconvolution of observational data, one must properly convolve the appropriate spatial and spectral resolution functions with the models being tested and then compare the results with the observational data.
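
    The forward-convolution approach argued for in the last sentence can be sketched as follows: a model centre-to-limb line profile is smeared with a seeing kernel before being compared with the spatially resolved spectra. The Gaussian kernel, its width, and the toy profile are assumptions for illustration; the paper's actual seeing function would be measured during the observations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-1.2, 1.2, 481)                             # position across the disk (arbitrary units)
model = np.where(np.abs(x) < 1.0, 1.0 - 0.4 * x**2, 0.0)    # toy centre-to-limb absorption profile

seeing_fwhm = 0.25                                          # assumed seeing FWHM (same units as x)
sigma_pix = seeing_fwhm / 2.3548 / (x[1] - x[0])            # convert FWHM to sample units
smeared = gaussian_filter1d(model, sigma_pix)               # model as it would be observed

# 'smeared', not 'model', is what should be compared with the observed line profiles
print(round(float(model.max()), 3), round(float(smeared.max()), 3))
```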

  17. Effects of point-spread function on calibration and radiometric accuracy of CCD camera.

    PubMed

    Du, Hong; Voss, Kenneth J

    2004-01-20

    The point-spread function (PSF) of a camera can seriously affect the accuracy of radiometric calibration and measurement. We found that the PSF can produce a 3.7% difference between the apparent measured radiances of two plaques of different sizes under the same illumination. This difference can be removed by deconvolution with the measured PSF. To determine the PSF, many images of a collimated beam from a He-Ne laser are averaged. Since our optical system is focused at infinity, it should focus this source to a single pixel. Although the measured PSF is very sharp, dropping 4 and 6 orders of magnitude at 8 and 100 pixels away from the point source, respectively, we show that the effect of the PSF as far as 100 pixels away cannot be ignored without introducing an appreciable error into the calibration. We believe that the PSF should be taken into account in all optical systems to obtain accurate radiometric measurements. PMID:14765928

  18. Point spread function modeling method for x-ray flat panel detector imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Shi, Yikai; Huang, Kuidong; Yu, Qingchao

    2012-10-01

    The flat panel detector (FPD) is widely used as the imaging unit in current X-ray digital radiography (DR) and computed tomography (CT) systems. The point spread function (PSF) is an important indicator of FPD imaging performance and also the basis for image restoration. To address the poor accuracy of measuring the FPD's PSF with conventional pinhole imaging in DR systems, a new pinhole-imaging PSF measurement method based on image restoration is proposed in this paper. First, several images collected with the pinhole are averaged into one image to reduce noise. Then, the original pinhole image is calculated according to the energy-conservation principle of point spread. Finally, the PSF of the FPD is obtained through image restoration. On this basis, by fitting the characteristic parameters of the PSF under different scan conditions, a computational PSF model is established for arbitrary scan conditions. Experimental results show that the method yields a more accurate PSF of the FPD, and that the PSF of the same system under any scan conditions can be calculated directly from the PSF model.

  19. Comparison and validation of point spread models for imaging in natural waters.

    PubMed

    Hou, Weilin; Gray, Deric J; Weidemann, Alan D; Arnone, Robert A

    2008-06-23

    Scattering by particulates within natural waters is known to be the main cause of blur in underwater images. Underwater images can be better restored or enhanced with knowledge of the point spread function (PSF) of the water. This extends the performance range as well as the information retrieval from underwater electro-optical systems, which is critical in many civilian and military applications, including target and especially mine detection, search and rescue, and diver visibility. A better understanding of the physical processes involved also helps to predict system performance and simulate it accurately on demand. The present effort first reviews several PSF models, including the introduction of a semi-analytical PSF given the optical properties of the medium, namely the scattering albedo, the mean scattering angle, and the optical range. The models under comparison include the empirical model of Duntley, a modified PSF model by Dolin et al., and the numerical integration of the analytical forms of Wells as a benchmark of theoretical results. For experimental results, in addition to those of Duntley, we validate the above models against measured point spread functions by applying field-measured scattering properties in Monte Carlo simulations. Results from these comparisons suggest that the three parameters listed above are both necessary and sufficient to model PSFs. The simplified approach introduced also provides adequate accuracy and flexibility for imaging applications, as shown by examples of restored underwater images. PMID:18575566

  20. The Effects of Instrumental Elliptical Polarization on Stellar Point Spread Function Fine Structure

    NASA Technical Reports Server (NTRS)

    Carson, Joseph C.; Kern, Brian D.; Breckinridge, James B.; Trauger, John T.

    2005-01-01

    We present procedures and preliminary results from a study on the effects of instrumental polarization on the fine structure of the stellar point spread function (PSF). These effects are important to understand because the aberration caused by instrumental polarization on an otherwise diffraction-limited system will likely have severe consequences for extreme high-contrast imaging systems such as NASA's planned Terrestrial Planet Finder (TPF) mission and the proposed NASA Eclipse mission. The report here, describing our efforts to examine these effects, includes two parts: 1) a numerical analysis of the effect of metallic reflection, with some polarization-specific retardation, on a spherical wavefront; 2) an experimental approach for observing this effect, along with some preliminary laboratory results. While the experimental phase of this study requires more fine-tuning to produce meaningful results, the numerical analysis indicates that the inclusion of polarization-specific phase effects (retardation) results in a point spread function (PSF) aberration more severe than the amplitude (reflectivity) effects previously recorded in the literature.

  1. A point pattern model of the spread of foot-and-mouth disease.

    PubMed

    Gerbier, G; Bacro, J N; Pouillot, R; Durand, B; Moutou, F; Chadoeuf, J

    2002-11-29

    The spatial spread of foot-and-mouth disease (FMD) is influenced by several sources of spatial heterogeneity: heterogeneity of exposure to the virus, heterogeneity of animal density, and heterogeneity of the networks formed by contacts between farms. A discrete-space model assuming that farms can be reduced to points is proposed to handle these different factors. The farm-to-farm process of transmission of the infection is studied using point-pattern methodology. Farm management, commercial exchanges, possible airborne transmission, etc. cannot be explicitly taken into account because of lack of data; these factors are introduced via surrogate variables such as herd size and distance between farms. The model is built on the calculation of an infectious potential for each farm. The method has been applied to the study of the 1967-1968 FMD epidemic in the UK and allowed us to evaluate the spatial variation of the probability of infection during this epidemic. Maximum likelihood estimation was conducted conditional on the absence of data on the farms that were not infected during the epidemic. Model parameters were then tested using an approximate conditional-likelihood ratio test. In this case study, the results and validation are limited by the lack of data, but the model can easily be extended to include other information, such as the effect of wind direction and velocity on airborne spread of the virus or the complex interactions between farm locations and herd size. It can also be applied to other diseases where the point approximation is convenient. In the context of increasing animal density in some areas, the model explicitly incorporates the density and known epidemiological characteristics (e.g. incubation period) in the calculation of the probability of FMD infection. Control measures such as vaccination or slaughter can be simply introduced, respectively, as a reduction of the susceptible population or as a
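
    The "infectious potential" idea can be illustrated with a short sketch: for each susceptible farm, sum a distance kernel weighted by herd size over the currently infectious farms, then map the potential to a probability of infection. The exponential kernel and all parameter values below are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def infectious_potential(sus_xy, inf_xy, inf_herd_size, alpha=1.0, beta=0.5):
    """Toy potential: sum over infectious farms of herd_size**alpha * exp(-beta * distance)."""
    d = np.linalg.norm(sus_xy[:, None, :] - inf_xy[None, :, :], axis=2)   # pairwise distances (km)
    return (inf_herd_size[None, :] ** alpha * np.exp(-beta * d)).sum(axis=1)

rng = np.random.default_rng(2)
susceptible = rng.uniform(0, 50, size=(100, 2))      # farm coordinates (km)
infectious = rng.uniform(0, 50, size=(8, 2))
herds = rng.integers(20, 500, size=8).astype(float)

phi = infectious_potential(susceptible, infectious, herds)
p_infection = 1.0 - np.exp(-0.01 * phi)              # one possible link from potential to probability
print(np.round(p_infection[:10], 3))
```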

  2. Updated point spread function simulations for JWST with WebbPSF

    NASA Astrophysics Data System (ADS)

    Perrin, Marshall D.; Sivaramakrishnan, Anand; Lajoie, Charles-Philippe; Elliott, Erin; Pueyo, Laurent; Ravindranath, Swara; Albert, Loïc.

    2014-08-01

    Accurate models of optical performance are an essential tool for astronomers, both for planning scientific observations ahead of time, and for a wide range of data analysis tasks such as point-spread-function (PSF)-fitting photometry and astrometry, deconvolution, and PSF subtraction. For the James Webb Space Telescope, the WebbPSF program provides a PSF simulation tool in a flexible and easy-to-use software package available to the community and implemented in Python. The latest version of WebbPSF adds new support for spectroscopic modes of JWST NIRISS, MIRI, and NIRSpec, including modeling of slit losses and diffractive line spread functions. It also provides additional options for modeling instrument defocus and/or pupil misalignments. The software infrastructure of WebbPSF has received enhancements including improved parallelization, an updated graphical interface, a better configuration system, and improved documentation. We also present several comparisons of WebbPSF simulated PSFs to observed PSFs obtained using JWST's flight science instruments during recent cryovac tests. Excellent agreement to first order is achieved for all imaging modes cross-checked thus far, including tests for NIRCam, FGS, NIRISS, and MIRI. These tests demonstrate that WebbPSF model PSFs have good fidelity to the key properties of JWST's as-built science instruments.
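
    A minimal usage sketch, assuming the webbpsf Python package and its reference data are installed and that the NIRCam class and calc_psf method behave as in the public documentation; the filter choice, field of view, and output file name are illustrative.

```python
# Sketch only: requires the webbpsf package and its reference data files.
import webbpsf

nc = webbpsf.NIRCam()                              # instrument model (assumed available)
nc.filter = "F200W"                                # illustrative filter choice
psf = nc.calc_psf(fov_arcsec=5.0, oversample=4)    # returns an astropy FITS HDUList

psf.writeto("nircam_f200w_psf.fits", overwrite=True)
print(psf[0].data.shape)                           # oversampled PSF image
```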

  3. Individual 3D region-of-interest atlas of the human brain: automatic training point extraction for neural-network-based classification of brain tissue types

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Obladen, Thorsten; Sabri, Osama; Buell, Udalrich

    2000-04-01

    Individual region-of-interest (ROI) atlas extraction consists of two main parts: T1-weighted MRI grayscale images are classified into brain tissue types (gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), scalp/bone (SB), background (BG)), followed by class image analysis to automatically define meaningful ROIs (e.g., cerebellum, cerebral lobes, etc.). The purpose of this algorithm is the automatic detection of training points for neural-network-based classification of brain tissue types. One transaxial slice of the patient data set is analyzed. Background separation is done by simple region growing, and a random generator extracts spatially uniformly distributed training points of class BG from that region. For WM training point extraction (TPE), the homogeneity operator is the most important feature: the most homogeneous voxels define the region for WM TPE and are extracted by analyzing the cumulative histogram of the homogeneity operator response. Assuming a Gaussian gray-value distribution in WM, a random number is used as a probabilistic threshold for TPE. Similarly, non-white-matter and non-background regions are analyzed for GM and CSF training points. For SB TPE, the distance from the BG region is an additional feature. Simulated and real 3D MRI images are analyzed, and error rates for TPE and classification are calculated.
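
    A toy sketch of the background training-point extraction step, spatially uniform random sampling inside a labelled region; the region-growing segmentation is assumed to have been done already and is represented by a boolean mask.

```python
import numpy as np

def sample_training_points(mask, n_points, rng=None):
    """Draw spatially uniform training-point coordinates from the True pixels of a 2D mask."""
    rng = rng if rng is not None else np.random.default_rng()
    coords = np.argwhere(mask)                       # (row, col) of all candidate pixels
    idx = rng.choice(len(coords), size=n_points, replace=False)
    return coords[idx]

# Toy background mask: everything outside a central disc counts as class BG
yy, xx = np.mgrid[0:256, 0:256]
bg_mask = (xx - 128) ** 2 + (yy - 128) ** 2 > 100 ** 2
bg_training = sample_training_points(bg_mask, n_points=50, rng=np.random.default_rng(3))
print(bg_training.shape)                             # (50, 2) coordinates for class BG
```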

  4. Crustal Thickness and Moho Character of the Fast-Spreading East Pacific Rise Between 9º37.5'N and 9º57'N From Poststack and Prestack Time Migrated 3D MCS data

    NASA Astrophysics Data System (ADS)

    Nedimovic, M. R.; Aghaei, O.; Carbotte, S. M.; Carton, H. D.; Canales, J. P.

    2014-12-01

    We measured crustal thickness and mapped Moho transition zone (MTZ) character over an 880 km2 section of the fast-spreading East Pacific Rise (EPR) using the first full 3D multichannel seismic (MCS) dataset collected across a mid-ocean ridge (MOR). The 9°42'-9°57'N area was initially investigated using 3D poststack time migration, which was followed by application of 3D prestack time migration (PSTM) to the whole dataset. This first attempt at applying 3D PSTM to MCS data from a MOR environment resulted in the most detailed reflection images of a spreading center to date. MTZ reflections are for the first time imaged below the ridge axis away from axial discontinuities indicating that Moho is formed at zero age at least at some sections of the MOR system. The average crustal thickness and crustal velocity derived from PSTM are 5920±320 m and 6320±290 m/s, respectively. The average crustal thickness varies little from Pacific to Cocos plate suggesting mostly uniform crustal production in the last ~180 Ka. However, the crust thins by ~400 m from south to north. The MTZ reflections were imaged within ~92% of the study area, with ~66% of the total characterized by impulsive reflections interpreted to originate from a thin MTZ and 26% characterized by diffusive reflections interpreted to originate from a thick MTZ. The MTZ is dominantly diffusive at the southern (9°37.5'-9°40'N) and northern (9°51'-9°57'N) ends of the study area, and it is impulsive in the central region (9°42'-9°51'N). No data were collected between 9°40'N and 9°42'N. More efficient mantle melt extraction is inferred within the central region with greater proportion of the lower crust accreted from the axial magma lens than within the northern and southern sections. This along-axis variation in the crustal accretion style may be caused by interaction between the melt sources for the ridge and the local seamounts, which are present within the northern and southern survey sections. Third

  5. 3-D Resistivity Tomography for Cliff Stability Study at the D-Day Pointe du Hoc Historic Site in Normandy, France

    NASA Astrophysics Data System (ADS)

    Udphuay, S.; Everett, M. E.; Guenther, T.; Warden, R. R.

    2007-12-01

    The D-Day invasion site at Pointe du Hoc in Normandy, France is one of the most important World War II battlefields, and the site remains a valuable historic cultural resource today. However, the site is vulnerable to cliff collapses that could endanger the observation post building and U.S. Ranger memorial located just landward of the sea stack, and an anti-aircraft gun emplacement, Col. Rudder's command post, located on the cliff edge about 200 m east of the observation post. In this study, 3-D resistivity tomography incorporating extreme topography is used to provide a detailed site stability assessment, with special attention to these two buildings. Multi-electrode resistivity measurements were made across the cliff face and along the top of the cliff around the two at-risk buildings to map major subsurface fracture zones and void spaces that could indicate possible accumulations and pathways of groundwater. The ingress of acidic groundwater through the underlying carbonate formations enlarges pre-existing tectonic fractures via limestone dissolution and weakens the overall structural integrity of the cliff. The resulting 3-D resistivity tomograms provide diagnostic subsurface resistivity distributions. Resistive zones associated with subsurface void spaces have been located; these void spaces constitute a stability geohazard as they become significant drainage routes during and after periods of heavy rainfall.

  6. Optical test-benches for multiple source wavefront propagation and spatiotemporal point-spread function emulation.

    PubMed

    Weddell, Stephen J; Lambert, Andrew J

    2014-12-10

    Precise measurement of aberrations within an optical system is essential to mitigate combined effects of user-generated aberrations for the study of anisoplanatic imaging using optical test benches. The optical system point spread function (PSF) is first defined, and methods to minimize the effects of the optical system are discussed. User-derived aberrations, in the form of low-order Zernike ensembles, are introduced using a liquid crystal spatial light modulator (LC-SLM), and dynamic phase maps are used to study the spatiotemporal PSF. A versatile optical test bench is described, where the Shack Hartmann and curvature wavefront sensors are used to emulate the effects of wavefront propagation over time from two independent sources. PMID:25608061

  7. Determination of caustic surfaces using point spread function and ray Jacobian and Hessian matrices.

    PubMed

    Lin, Psang Dain

    2014-09-10

    Existing methods for determining caustic surfaces involve computing either the flux density singularity or the center of curvature of the wavefront. However, such methods rely rather heavily on ray tracing and finite difference methods for estimating the first- and second-order derivative matrices (i.e., Jacobian and Hessian matrices) of a ray, mainly because analytical expressions for these two matrices have previously been tedious or even impossible to obtain. Accordingly, the present study proposes a robust numerical method for determining caustic surfaces based on a point spread function and on the analytical Jacobian and Hessian matrices of a ray previously established by our group. It is shown that the proposed method provides a convenient and computationally straightforward means of determining the caustic surfaces of both simple and complex optical systems without the need for analytical equations, and is substantially different from the two existing methods. PMID:25321667

  8. Point spread function reconstruction from Woofer-Tweeter adaptive optics bench

    NASA Astrophysics Data System (ADS)

    Keskin, Onur; Conan, Rodolphe; Bradley, Colin

    2006-06-01

    This paper describes a model-based and experimental evaluation of a point spread function (PSF) reconstruction technique for a Dual Deformable Mirror (DM) Woofer-Tweeter (W/T) Adaptive Optics (AO) system. In the W/T architecture, the Woofer is a low-order-high-stroke DM, and it is used to compensate for the low-frequency-high-amplitude effects introduced by the atmospheric turbulence. The Tweeter is a high-order-low-stroke DM that is used to compensate for the high-frequency-low-amplitude effects introduced by the atmospheric turbulence. The research concept of having Dual DMs allows the W/T AO system to have a high degree of correction of large amplitude wavefront distortion. The role of the UVic AO bench is to demonstrate the closed-loop wavefront control feasibility for a W/T AO concept to be used on the science instruments of the Thirty Meter Telescope (TMT).

  9. Improving the blind restoration of retinal images by means of point-spread-function estimation assessment

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Kotera, Jan; Šroubek, Filip

    2015-01-01

    Retinal images often suffer from blurring, which hinders disease diagnosis and progression assessment. Restoration of the images is carried out by means of blind deconvolution, but the success of the restoration depends on the correct estimation of the point-spread function (PSF) that blurred the image. The restoration can be space-invariant or space-variant. Because a retinal image has regions without texture or sharp edges, the blind PSF estimation may fail. In this paper we propose a strategy for the correct assessment of PSF estimation in retinal images for restoration by means of space-invariant or space-variant blind deconvolution. Our method is based on a decomposition of the estimated PSFs into Zernike coefficients to identify valid PSFs. This significantly improves the quality of the image restoration, as revealed by the increased visibility of small details such as small blood vessels and by the absence of restoration artifacts.

  10. Duality between the dynamics of line-like brushes of point defects in 2D and strings in 3D in liquid crystals.

    PubMed

    Digal, Sanatan; Ray, Rajarshi; Saumia, P S; Srivastava, Ajit M

    2013-10-01

    We analyze the dynamics of dark brushes connecting point vortices of strength ±1 formed in the isotropic-nematic phase transition of a thin layer of nematic liquid crystals, using a crossed polarizer set up. The evolution of the brushes is seen to be remarkably similar to the evolution of line defects in a three-dimensional nematic liquid crystal system. Even phenomena like the intercommutativity of strings are routinely observed in the dynamics of brushes. We test the hypothesis of a duality between the two systems by determining exponents for the coarsening of total brush length with time as well as shrinking of the size of an isolated loop. Our results show scaling behavior for the brush length as well as the loop size with corresponding exponents in good agreement with the 3D case of string defects. PMID:24026004

  11. In-flight calibration of the Swift XRT Point Spread Function

    SciTech Connect

    Moretti, A.; Campana, S.; Chincarini, G.; Covino, S.; Romano, P.; Tagliaferri, G.; Capalbi, M.; Giommi, P.; Perri, M.; Cusumano, G.; La Parola, V.; Mangano, V.; Mineo, T.

    2006-05-19

    The Swift X-ray Telescope (XRT) is designed to make astrometric, spectroscopic and photometric observations of the X-ray emission from Gamma-ray bursts and their afterglows, in the energy band 0.2-10 keV. Here we report the results of the analysis of Swift XRT Point Spread Function (PSF) as measured in the first four months of the mission during the instrument calibration phase. The analysis includes the study of the PSF of different point-like sources both on-axis and off-axis with different spectral properties. We compare the in-flight data with the expectations from the on-ground calibration. On the basis of the calibration data we built an analytical model to reproduce the PSF as a function of the energy and the source position within the detector which can be applied in the PSF correction calculation for any extraction region geometry. All the results of this study are implemented in the standard public software.

  12. Accurate Astrometry and Photometry of Saturated and Coronagraphic Point Spread Functions

    SciTech Connect

    Marois, C; Lafreniere, D; Macintosh, B; Doyon, R

    2006-02-07

    For ground-based adaptive optics point source imaging, differential atmospheric refraction and flexure introduce a small drift of the point spread function (PSF) with time, and seeing and sky transmission variations modify the PSF flux. These effects need to be corrected to properly combine the images and obtain optimal signal-to-noise ratios, accurate relative astrometry and photometry of detected companions as well as precise detection limits. Usually, one can easily correct for these effects by using the PSF core, but this is impossible when high dynamic range observing techniques are used, like coronagraphy with a non-transmissive occulting mask, or if the stellar PSF core is saturated. We present a new technique that can solve these issues by using off-axis satellite PSFs produced by a periodic amplitude or phase mask conjugated to a pupil plane. It will be shown that these satellite PSFs track precisely the PSF position, its Strehl ratio and its intensity and can thus be used to register and to flux normalize the PSF. This approach can be easily implemented in existing adaptive optics instruments and should be considered for future extreme adaptive optics coronagraph instruments and in high-contrast imaging space observatories.

  13. 3-D Deformation Field Of The 2010 El Mayor-Cucapah (Mexico) Earthquake From Matching Before To After Aerial Lidar Point Clouds

    NASA Astrophysics Data System (ADS)

    Hinojosa-Corona, A.; Nissen, E.; Arrowsmith, R.; Krishnan, A. K.; Saripalli, S.; Oskin, M. E.; Arregui, S. M.; Limon, J. F.

    2012-12-01

    The Mw 7.2 El Mayor-Cucapah earthquake (EMCE) of 4 April 2010 generated a ~110 km long, NW-SE trending rupture, with normal and right-lateral slip on the order of 2-3 m in the Sierra Cucapah, the northern half of the rupture, where the surface rupture is most clearly expressed. Vertical and horizontal surface displacements produced by the EMCE have been addressed separately by other authors with a variety of aerial and satellite remote sensing techniques. Slip variation along the fault and post-seismic scarp erosion and diffusion have been estimated in other studies using terrestrial LiDAR (TLS) on segments of the rupture. To complement these studies, we computed the 3D deformation field by comparing pre- and post-event point clouds from aerial LiDAR surveys. The pre-event LiDAR, with lower point density (0.013-0.033 pts m-2), required filtering and post-processing before comparison with the denser (9-18 pts m-2), more accurate post-event dataset. The 3-dimensional surface displacement field was determined using an adaptation of the Iterative Closest Point (ICP) algorithm, implemented in the open source Point Cloud Library (PCL). The LiDAR datasets are first split into a grid of windows, and for each one, ICP iteratively converges on the rigid-body transformation (comprising a translation and a rotation) that best aligns the pre- to post-event points. Testing on synthetic datasets perturbed with displacements of known magnitude showed that windows with dimensions of 100-200 m gave the best results for datasets with these densities. Here we present the deformation field, with detailed displacements in segments of the surface rupture where its expression was recognized by ICP from the point cloud matching, mainly in the scarcely vegetated Sierra Cucapah, with the Borrego and Paso Superior fault segments the most prominent; there we are able to compare our results with values measured in the field and with TLS results reported in other works. EMC simulated displacement field for a
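
    The windowed workflow described above can be sketched as below. The per-window solve is replaced here by a crude centroid-difference stand-in so the example runs on its own; the study itself used a rigid ICP solver from the Point Cloud Library, and the 150 m window follows the 100-200 m range quoted in the abstract.

```python
import numpy as np

def estimate_window_shift(pre_pts, post_pts):
    """Crude stand-in for a per-window rigid ICP solve: difference of window centroids."""
    return post_pts.mean(axis=0) - pre_pts.mean(axis=0)

def windowed_displacements(pre, post, window=150.0, min_points=100):
    """Split pre-/post-event clouds into square map-view windows; return one 3D vector per window."""
    xy_min = np.minimum(pre[:, :2].min(axis=0), post[:, :2].min(axis=0))
    pre_cell = np.floor((pre[:, :2] - xy_min) / window).astype(int)
    post_cell = np.floor((post[:, :2] - xy_min) / window).astype(int)
    field = {}
    for cell in {tuple(c) for c in pre_cell}:
        sel_pre = pre[np.all(pre_cell == cell, axis=1)]
        sel_post = post[np.all(post_cell == cell, axis=1)]
        if len(sel_pre) < min_points or len(sel_post) < min_points:
            continue                                 # skip sparsely populated windows
        field[cell] = estimate_window_shift(sel_pre, sel_post)
    return field

# Toy usage: a synthetic surface shifted by a known displacement vector
rng = np.random.default_rng(7)
pre = np.column_stack([rng.uniform(0, 600, 20000), rng.uniform(0, 600, 20000), rng.normal(0, 1, 20000)])
post = pre + np.array([1.5, -0.8, 0.3])              # imposed coseismic displacement
print(next(iter(windowed_displacements(pre, post).values())))
```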

  14. The point spread function of the human head and its implications for transcranial current stimulation.

    PubMed

    Dmochowski, Jacek P; Bikson, Marom; Parra, Lucas C

    2012-10-21

    Rational development of transcranial current stimulation (tCS) requires solving the 'forward problem': the computation of the electric field distribution in the head resulting from the application of scalp currents. Derivation of forward models has represented a major effort in brain stimulation research, with model complexity ranging from spherical shells to individualized head models based on magnetic resonance imagery. Despite such effort, an easily accessible benchmark head model is greatly needed when individualized modeling is either undesired (to observe general population trends as opposed to individual differences) or unfeasible. Here, we derive a closed-form linear system which relates the applied current to the induced electric potential. It is shown that in the spherical harmonic (Fourier) domain, a simple scalar multiplication relates the current density on the scalp to the electric potential in the brain. Equivalently, the current density in the head follows as the spherical convolution between the scalp current distribution and the point spread function of the head, which we derive. Thus, if one knows the spherical harmonic representation of the scalp current (i.e. the electrode locations and current intensity to be employed), one can easily compute the resulting electric field at any point inside the head. Conversely, one may also readily determine the scalp current distribution required to generate an arbitrary electric field in the brain (the 'backward problem' in tCS). We demonstrate the simplicity and utility of the model with a series of characteristic curves which sweep across a variety of stimulation parameters: electrode size, depth of stimulation, head size and anode-cathode separation. Finally, theoretically optimal montages for targeting an infinitesimal point in the brain are shown. PMID:23001485
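
    The key statement, that in the spherical-harmonic domain the brain response follows from the scalp current by a degree-wise scalar multiplication, can be sketched as below. The transfer factors k_l are left as an assumed input, since their closed form depends on the head-model parameters derived in the paper.

```python
import numpy as np

def apply_head_transfer(current_coeffs, k_l):
    """Degree-wise scalar multiplication of spherical-harmonic coefficients
    (the spherical convolution with the head's point spread function).

    current_coeffs[l] : list of 2l+1 coefficients of the scalp current density
    k_l[l]            : assumed transfer factor for degree l (from the head model)
    """
    return [[k_l[l] * c for c in coeffs] for l, coeffs in enumerate(current_coeffs)]

# Toy example with degrees l = 0..3 and random coefficients
rng = np.random.default_rng(4)
current = [list(rng.normal(size=2 * l + 1)) for l in range(4)]
k = 1.0 / (1.0 + np.arange(4)) ** 2          # made-up low-pass transfer factors
potential = apply_head_transfer(current, k)
print([len(c) for c in potential])           # 1, 3, 5, 7 coefficients per degree
```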

  15. HST/WFC3 UVIS Detector: Dark, Charge Transfer Efficiency, and Point Spread Function Calibrations

    NASA Astrophysics Data System (ADS)

    Bourque, Matthew; Anderson, Jay; Baggett, Sylvia; Bowers, Ariel; MacKenty, John W.; Sahu, Kailash C.

    2015-08-01

    Wide Field Camera 3 (WFC3) is a fourth-generation imaging instrument on board the Hubble Space Telescope (HST) that was installed during Servicing Mission 4 in May 2009. As one of two channels available on WFC3, the UVIS detector is comprised of two e2v CCDs and is sensitive to ultraviolet and visible light. Here we provide updates to the characterization and monitoring of the UVIS performance and stability. We present the long-term growth of the dark current and the hot pixel population, as well as the evolution of Charge Transfer Efficiency (CTE). We also discuss updates to the UVIS dark calibration products, which are used to correct for dark current in science images. We examine the impacts of CTE losses and outline some techniques to mitigate CTE effects during and after observation by use of post-flash and pixel-based CTE corrections. Finally, we summarize an investigation of WFC3/UVIS Point Spread Functions (PSFs) and their potential use for characterizing the focus of the instrument.

  16. Axial super-localisation using rotating point spread functions shaped by polarisation-dependent phase modulation.

    PubMed

    Roider, Clemens; Jesacher, Alexander; Bernet, Stefan; Ritsch-Marte, Monika

    2014-02-24

    We present an approach for point spread function (PSF) engineering that allows one to shape the optical wavefront independently in both polarisation directions, with two adjacent phase masks displayed on a single liquid-crystal spatial light modulator (LC-SLM). The set-up employs a polarising beam splitter and a geometric image rotator to rectify and process both polarisation directions detected by the camera. We shape a single-lobe ("corkscrew") PSF that rotates upon defocus for each polarisation channel and combine the two polarisation channels with a relative 180° phase-shift on the computer, merging them into a single PSF that exhibits two lobes whose orientation contains information about the axial position. A major advantage lies in the possibility to measure and eliminate the aberrations in the two polarisation channels independently. We demonstrate axial super-localisation of isotropically emitting fluorescent nanoparticles. Our implementation of the single-lobe PSFs follows the method proposed by Prasad [Opt. Lett.38, 585 (2013)], and thus is to the best of our knowledge the first experimental realisation of this suggestion. For comparison we also study an approach with a rotating double-helix PSFs (in only one polarisation channel) and ascertain the trade-off between localisation precision and axial working range. PMID:24663724

  17. Point spread function modeling and image restoration for cone-beam CT

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Huang, Kui-Dong; Shi, Yi-Kai; Xu, Zhe

    2015-03-01

    X-ray cone-beam computed tomography (CT) has such notable features as high efficiency and precision, and is widely used in the fields of medical imaging and industrial non-destructive testing, but the inherent imaging degradation reduces the quality of CT images. Aimed at the problems of projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is proposed first. The general PSF model of cone-beam CT is established, and based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without the additional measurement, which greatly improved the application convenience of cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which can make the edge contours in projection images and slice images clearer after restoration, and control the noise in the equivalent level to the original images. Finally, the experiments verified the feasibility and effectiveness of the proposed methods. Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Young Scientists Fund of National Natural Science Foundation of China (51105315), Natural Science Basic Research Program of Shaanxi Province of China (2013JM7003) and Northwestern Polytechnical University Foundation for Fundamental Research (JC20120226, 3102014KYJD022)

  18. Imaging samples in silica aerogel using an experimental point spread function.

    PubMed

    White, Amanda J; Ebel, Denton S

    2015-02-01

    Light microscopy is a powerful tool that allows for many types of samples to be examined in a rapid, easy, and nondestructive manner. Subsequent image analysis, however, is compromised by distortion of signal by instrument optics. Deconvolution of images prior to analysis allows for the recovery of lost information by procedures that utilize either a theoretically or experimentally calculated point spread function (PSF). Using a laser scanning confocal microscope (LSCM), we have imaged whole impact tracks of comet particles captured in silica aerogel, a low density, porous SiO2 solid, by the NASA Stardust mission. In order to understand the dynamical interactions between the particles and the aerogel, precise grain location and track volume measurement are required. We report a method for measuring an experimental PSF suitable for three-dimensional deconvolution of imaged particles in aerogel. Using fluorescent beads manufactured into Stardust flight-grade aerogel, we have applied a deconvolution technique standard in the biological sciences to confocal images of whole Stardust tracks. The incorporation of an experimentally measured PSF allows for better quantitative measurements of the size and location of single grains in aerogel and more accurate measurements of track morphology. PMID:25517515
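
    The deconvolution step can be sketched with scikit-image's Richardson-Lucy routine, one common choice for confocal data (the paper does not state which algorithm was used). Synthetic volumes stand in here for the Stardust track stack and the bead-measured PSF so the example is self-contained.

```python
import numpy as np
from scipy import ndimage
from skimage import restoration

# Synthetic stand-ins: a 3D volume with two bright grains, and a "measured" PSF
# approximated by an axially elongated 3D Gaussian (a real measured PSF would
# come from imaging fluorescent beads embedded in flight-grade aerogel).
truth = np.zeros((32, 64, 64))
truth[16, 20, 20] = truth[16, 40, 44] = 1.0

psf = np.zeros((9, 9, 9))
psf[4, 4, 4] = 1.0
psf = ndimage.gaussian_filter(psf, sigma=(2.0, 1.0, 1.0))
psf /= psf.sum()

blurred = ndimage.convolve(truth, psf, mode="constant")
blurred += 1e-4 * np.random.default_rng(8).random(blurred.shape)

deconvolved = restoration.richardson_lucy(blurred, psf, 30)   # 30 iterations
print("peak recovered from", float(blurred.max()), "to", float(deconvolved.max()))
```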

  19. Scale-space point spread function based framework to boost infrared target detection algorithms

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2016-07-01

    Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field. However, the lack of accurate target modeling limits the performance of this type of infrared small-target detection algorithm. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism based on the matched filter is described in detail, and the effect of the point spread function (PSF) on the intensity and spatial distribution of the target pixels is clarified comprehensively. A new parametric model for small infrared targets is then developed based on the PSF of the imaging system, which can be considered as a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. To show the performance of this new framework, the proposed model is adopted in the Laplacian scale-space algorithm, a well-known algorithm in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance than the Gaussian one and improves the overall performance of the IRST system. Quantitative analysis shows that the new framework yields at least a 20% improvement in output SCR values compared with the Laplacian of Gaussian (LoG) algorithm.
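
    As a small illustration of the two detectors being compared, the sketch below applies a Laplacian-of-Gaussian filter and a simple PSF-matched correlation filter to a synthetic frame containing one blurred point target; the Gaussian PSF width, target amplitude, and noise level are arbitrary assumptions.

```python
import numpy as np
from scipy import ndimage, signal

rng = np.random.default_rng(5)
frame = 0.1 * rng.normal(size=(128, 128))           # clutter/noise background
frame[64, 64] = 5.0                                  # ideal point target
psf_sigma = 1.5                                      # assumed imaging-system PSF width (pixels)
frame = ndimage.gaussian_filter(frame, psf_sigma)    # target as seen through the optics

# Detector 1: Laplacian of Gaussian (classic scale-space small-target detector)
log_resp = -ndimage.gaussian_laplace(frame, sigma=psf_sigma)

# Detector 2: PSF-based matched filter (correlate the frame with the system PSF itself)
half = 7
y, x = np.mgrid[-half:half + 1, -half:half + 1]
psf = np.exp(-(x**2 + y**2) / (2 * psf_sigma**2))
psf /= psf.sum()
matched_resp = signal.correlate2d(frame, psf, mode="same", boundary="symm")

print("LoG peak at", np.unravel_index(log_resp.argmax(), log_resp.shape))
print("matched-filter peak at", np.unravel_index(matched_resp.argmax(), matched_resp.shape))
```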

  20. Point spread functions for earthquake source imaging: An interpretation based on seismic interferometry

    USGS Publications Warehouse

    Nakahara, Hisashi; Haney, Matt

    2015-01-01

    Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.

  1. Super Resolution Pump-Probe Microscopy with point spread function engineering

    NASA Astrophysics Data System (ADS)

    Mohajerani, Farzaneh

    Over the last decade, new techniques have enabled optical microscopy to break the diffraction barrier of resolution, all of them based on molecular fluorescence. Among them, stimulated emission depletion (STED) microscopy has reached less than 25 nm resolution by engineering the point spread function. However, the obstacles associated with fluorescence tagging make label-free imaging desirable. Recently, the pump-probe method has made it possible to obtain image contrast from molecular absorption and vibration signatures that do not depend on fluorescence. We introduce Super-Resolution Pump-Probe Microscopy (SRPPM), which combines PSF engineering with the pump-probe method to image non-fluorescent molecules with nanometer resolution. Our calculations for SRPPM show that we can reach less than 30 nm resolution with pump and probe beam intensities that do not exceed 10 MW/cm2. This intensity is much lower than the high doughnut-beam intensities used in STED microscopy and is compatible with the bio-imaging goals of this microscope.

  2. Correction for collimator-detector response in SPECT using point spread function template.

    PubMed

    Chun, Se Young; Fessler, Jeffrey A; Dewaraja, Yuni K

    2013-02-01

    Compensating for the collimator-detector response (CDR) in SPECT is important for accurate quantification. The CDR consists of both a geometric response and a septal penetration and collimator scatter response. The geometric response can be modeled analytically and is often used for modeling the whole CDR if the geometric response dominates. However, for radionuclides that emit medium or high-energy photons such as I-131, the septal penetration and collimator scatter response is significant and its modeling in the CDR correction is important for accurate quantification. There are two main methods for modeling the depth-dependent CDR so as to include both the geometric response and the septal penetration and collimator scatter response. One is to fit a Gaussian plus exponential function that is rotationally invariant to the measured point source response at several source-detector distances. However, a rotationally-invariant exponential function cannot represent the star-shaped septal penetration tails in detail. Another is to perform Monte-Carlo (MC) simulations to generate the depth-dependent point spread functions (PSFs) for all necessary distances. However, MC simulations, which require careful modeling of the SPECT detector components, can be challenging and accurate results may not be available for all of the different SPECT scanners in clinics. In this paper, we propose an alternative approach to CDR modeling. We use a Gaussian function plus a 2-D B-spline PSF template and fit the model to measurements of an I-131 point source at several distances. The proposed PSF-template-based approach is nearly non-parametric, captures the characteristics of the septal penetration tails, and minimizes the difference between the fitted and measured CDR at the distances of interest. The new model is applied to I-131 SPECT reconstructions of experimental phantom measurements, a patient study, and a MC patient simulation study employing the XCAT phantom. The proposed model
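
    For contrast with the template approach described above, the conventional rotationally invariant fit can be sketched as a Gaussian-plus-exponential model fitted to a radial PSF profile with scipy; the profile below is synthetic and the parameter names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_exp(r, a, sigma, b, tau):
    """Rotationally invariant CDR model: Gaussian core plus exponential tails."""
    return a * np.exp(-r**2 / (2 * sigma**2)) + b * np.exp(-r / tau)

# Synthetic radial profile standing in for a measured I-131 point-source response
r = np.linspace(0, 40, 81)                                  # radius in pixels
true_profile = gauss_plus_exp(r, 1.0, 2.5, 0.05, 12.0)
measured = true_profile + 0.002 * np.random.default_rng(6).normal(size=r.size)

popt, _ = curve_fit(gauss_plus_exp, r, measured, p0=[1.0, 2.0, 0.1, 10.0])
print("fitted (a, sigma, b, tau):", np.round(popt, 3))
```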

  3. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As the number of visual objects grows in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated

  4. The Effect of Point-spread Function Interaction with Radiance from Heterogeneous Scenes on Multitemporal Signature Analysis. [soybean stress

    NASA Technical Reports Server (NTRS)

    Duggin, M. J.; Schoch, L. B.

    1984-01-01

    The point-spread function is an important factor in determining the nature of feature types on the basis of multispectral recorded radiance, particularly from heterogeneous scenes and particularly from scenes which are imaged repetitively in order to provide thematic characterization by means of multitemporal signatures. To demonstrate the effect of the interaction of scene heterogeneity with the point spread function (PSF), a template was constructed from the line spread function (LSF) data for the Thematic Mapper photoflight model. The template was moved in 0.25 (nominal) pixel increments in the scan line direction across three scenes of different heterogeneity. The sensor output was calculated by considering the calculated scene radiance from each scene element occurring between the contours of the PSF template, plotted on a movable mylar sheet, while it was located at a given position.

  5. Correlations of three-dimensional motion of chromosomal loci in yeast revealed by the double-helix point spread function microscope

    PubMed Central

    Backlund, Mikael P.; Joyner, Ryan; Weis, Karsten; Moerner, W. E.

    2014-01-01

    Single-particle tracking has been applied to study chromatin motion in live cells, revealing a wealth of dynamical behavior of the genomic material once believed to be relatively static throughout most of the cell cycle. Here we used the dual-color three-dimensional (3D) double-helix point spread function microscope to study the correlations of movement between two fluorescently labeled gene loci on either the same or different budding yeast chromosomes. We performed fast (10 Hz) 3D tracking of the two copies of the GAL locus in diploid cells in both activating and repressive conditions. As controls, we tracked pairs of loci along the same chromosome at various separations, as well as transcriptionally orthogonal genes on different chromosomes. We found that under repressive conditions, the GAL loci exhibited significantly higher velocity cross-correlations than they did under activating conditions. This relative increase has potentially important biological implications, as it might suggest coupling via shared silencing factors or association with decoupled machinery upon activation. We also found that on the time scale studied (∼0.1–30 s), the loci moved with significantly higher subdiffusive mean square displacement exponents than previously reported, which has implications for the application of polymer theory to chromatin motion in eukaryotes. PMID:25318676
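
    The two quantities discussed above — the subdiffusive mean square displacement (MSD) exponent and the velocity cross-correlation between two loci — can be estimated from tracked 3D positions roughly as in the following sketch. The trajectories here are synthetic toy random walks, not the paper's data or code; they only show the mechanics of the estimators.

```python
# Hedged sketch: MSD exponent and zero-lag velocity cross-correlation from
# two 3D trajectories sampled at 10 Hz (assumed).
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1                                    # 10 Hz sampling
n = 3000
steps_a = rng.normal(scale=0.01, size=(n, 3))
steps_b = 0.5 * steps_a + rng.normal(scale=0.01, size=(n, 3))  # partially coupled
traj_a, traj_b = np.cumsum(steps_a, axis=0), np.cumsum(steps_b, axis=0)

# Mean square displacement and its log-log slope (the "MSD exponent")
lags = np.arange(1, 200)
msd = np.array([np.mean(np.sum((traj_a[l:] - traj_a[:-l])**2, axis=1)) for l in lags])
alpha = np.polyfit(np.log(lags * dt), np.log(msd), 1)[0]

# Zero-lag velocity cross-correlation between the two loci
va, vb = np.diff(traj_a, axis=0) / dt, np.diff(traj_b, axis=0) / dt
vcc = np.mean(np.sum(va * vb, axis=1)) / np.sqrt(
    np.mean(np.sum(va**2, axis=1)) * np.mean(np.sum(vb**2, axis=1)))
print(f"MSD exponent ~ {alpha:.2f}, velocity cross-correlation ~ {vcc:.2f}")
```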

  6. Characterization of a three-dimensional double-helix point-spread function for fluorescence microscopy in the presence of spherical aberration

    NASA Astrophysics Data System (ADS)

    Ghosh, Sreya; Preza, Chrysanthe

    2013-03-01

    We characterize the three-dimensional (3-D) double-helix (DH) point-spread function (PSF) for depth-variant fluorescence microscopy imaging, motivated by our interest to integrate the DH-PSF in computational optical sectioning microscopy (COSM) imaging. Physical parameters, such as the refractive index and thickness variability of imaging layers encountered in 3-D microscopy, give rise to depth-induced spherical aberration (SA) that changes the shape of the PSF at different focusing depths and renders computational approaches less practical. Theoretical and experimental studies performed to characterize the DH-PSF under varying imaging conditions are presented. Results show reasonable agreement between theoretical and experimental DH-PSFs, suggesting that our model can predict the main features of the data. The depth-variability of the DH-PSF due to SA, quantified using a normalized mean square error, shows that the DH-PSF is more robust to SA than the conventional PSF. This result is also supported by the frequency analysis of the DH-PSF shown. Our studies suggest that further investigation of the DH-PSF's use in COSM is warranted, and that particle localization accuracy using the DH-PSF calibration curve in the presence of SA can be improved by accounting for the axial shift due to SA.

  7. POINT-SPREAD FUNCTIONS FOR THE EXTREME-ULTRAVIOLET CHANNELS OF SDO/AIA TELESCOPES

    SciTech Connect

    Poduval, B.; DeForest, C. E.; Schmelz, J. T.; Pathak, S.

    2013-03-10

    We present the stray-light point-spread functions (PSFs) and their inverses we characterized for the Atmospheric Imaging Assembly (AIA) EUV telescopes on board the Solar Dynamics Observatory (SDO) spacecraft. The inverse kernels are approximate inverses under convolution. Convolving the original Level 1 images with them produces images with improved stray-light characteristics. We demonstrate the usefulness of these PSFs by applying them to two specific cases: photometry and differential emission measure (DEM) analysis. The PSFs consist of a narrow Gaussian core, a diffraction component, and a diffuse component represented by the sum of a Gaussian-truncated Lorentzian and a shoulder Gaussian. We determined the diffraction term using the measured geometry of the diffraction pattern identified in flare images and the theoretically computed intensities of the principal maxima of the first few diffraction orders. To determine the diffuse component, we fitted its parameterized model using iterative forward-modeling of the lunar interior in the SDO/AIA images from the 2011 March 4 lunar transit. We find that deconvolution significantly improves the contrast in dark features such as miniature coronal holes, though the effect was marginal in bright features. On a percentage-scattering basis, the PSFs for SDO/AIA are better by a factor of two than that of the EUV telescope on board the Transition Region And Coronal Explorer mission. A preliminary analysis suggests that deconvolution alone does not affect DEM analysis of small coronal loop segments with suitable background subtraction. We include the derived PSFs and their inverses as supplementary digital materials.
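
    To make the quoted functional form concrete, the sketch below builds a radial profile from the three named ingredients (narrow Gaussian core, Gaussian-truncated Lorentzian, shoulder Gaussian). The parameter values are placeholders, not the fitted SDO/AIA values, and the diffraction component is omitted.

```python
# Illustrative-only composite PSF profile: Gaussian core + Gaussian-truncated
# Lorentzian + shoulder Gaussian; all parameters are placeholders.
import numpy as np

r = np.linspace(0, 50, 501)                       # radius in pixels

core      = 1.0  * np.exp(-r**2 / (2 * 1.0**2))   # narrow Gaussian core
lorentz   = 5e-3 / (1 + (r / 3.0)**2)             # Lorentzian wings ...
truncated = lorentz * np.exp(-r**2 / (2 * 20**2)) # ... truncated by a wide Gaussian
shoulder  = 1e-3 * np.exp(-(r - 10)**2 / (2 * 4**2))

diffuse_psf = core + truncated + shoulder
diffuse_psf /= np.trapz(2 * np.pi * r * diffuse_psf, r)   # normalise over the plane
print("peak value after normalisation:", diffuse_psf[0])
```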

  8. Impact of sensor's point spread function on land cover characterization: Assessment and deconvolution

    USGS Publications Warehouse

    Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.

    2002-01-01

    Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on the average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the lost classification accuracy due to residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating derived proportion images to twice their dimensional pixel size. © 2002 Elsevier Science Inc. All rights reserved.

  9. Fast and precise 3D fluorophore localization by gradient fitting

    NASA Astrophysics Data System (ADS)

    Ma, Hongqiang; Xu, Jianquan; Jin, Jingyi; Gao, Ying; Lan, Li; Liu, Yang

    2016-02-01

    Astigmatism imaging is widely used to encode the 3D position of a fluorophore in single-particle tracking and super-resolution localization microscopy. Here, we present a fast and precise localization algorithm based on gradient fitting to decode the 3D subpixel position of the fluorophore. This algorithm determines the center of the emitter by finding the position with the best-fit gradient direction distribution to the measured point spread function (PSF), and can retrieve the 3D subpixel position of the emitter in a single iteration. Through numerical simulation and experiments with mammalian cells, we demonstrate that our algorithm yields comparable localization precision to the traditional iterative Gaussian function fitting (GF) based method, while exhibiting over two orders of magnitude faster execution speed. Our algorithm is a promising online reconstruction method for 3D super-resolution microscopy.
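
    A 2D toy version of the gradient-fitting idea is sketched below: for a radially symmetric spot, every intensity gradient points through the centre, so the centre can be recovered as the weighted least-squares intersection of the gradient lines. The weighting choice is an assumption, and the astigmatic z-encoding of the full 3D method is omitted.

```python
# Hedged 2D illustration of localization by gradient fitting (not the authors'
# exact 3D algorithm).
import numpy as np

# Synthetic spot with a sub-pixel centre
n = 15
y, x = np.mgrid[:n, :n].astype(float)
cx_true, cy_true = 7.3, 6.6
img = np.exp(-((x - cx_true)**2 + (y - cy_true)**2) / (2 * 1.8**2))
img += np.random.default_rng(1).normal(scale=0.01, size=img.shape)

gy, gx = np.gradient(img)                 # pixel-wise intensity gradients
w = gx**2 + gy**2                         # weight strong gradients more (assumption)

# Each pixel contributes one linear equation: -gy*cx + gx*cy = gx*y_i - gy*x_i
A = np.column_stack([-gy.ravel(), gx.ravel()]) * np.sqrt(w.ravel())[:, None]
b = (gx * y - gy * x).ravel() * np.sqrt(w.ravel())
cx, cy = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"recovered centre ({cx:.2f}, {cy:.2f}) vs true ({cx_true}, {cy_true})")
```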

  10. An effective method to verify line and point spread functions measured in computed tomography

    SciTech Connect

    Ohkubo, Masaki; Wada, Sinichi; Matsumoto, Toru; Nishizawa, Kanae

    2006-08-15

    This study describes an effective method for verifying the line spread function (LSF) and point spread function (PSF) measured in computed tomography (CT). The CT image of an assumed object function is known to be calculable using the LSF or PSF based on a model for the spatial resolution in a linear imaging system. Therefore, the validity of the LSF and PSF can be confirmed by comparing the computed images with the images obtained by scanning phantoms corresponding to the object function. Differences between computed and measured images will depend on the accuracy of the LSF and PSF used in the calculations. First, we measured the LSF in our scanner, and derived the two-dimensional PSF in the scan plane from the LSF. Second, we scanned a phantom containing uniform cylindrical objects parallel to the long axis of a patient's body (z direction). Measured images of such a phantom were characterized according to the spatial resolution in the scan plane, and did not depend on the spatial resolution in the z direction. Third, images were calculated by two-dimensionally convolving the true object as a function of space with the PSF. As a result of comparing computed images with measured ones, good agreement was found and was demonstrated by image subtraction. As a criterion for quantitatively evaluating the overall differences between images, we defined the normalized standard deviation (SD) in the differences between computed and measured images. These normalized SDs were less than 5.0% (ranging from 1.3% to 4.8%) for three types of image reconstruction kernels and for various diameters of cylindrical objects, indicating the high accuracy of the measured PSF and LSF. Further, we also obtained another LSF using an inappropriate procedure, and calculated the images as above. This time, the computed images did not agree with the measured ones. The normalized SDs were 6.0% or more (ranging from 6.0% to 13.8%), indicating the inaccuracy of the PSF and LSF. We
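
    The verification recipe described above reduces to two small operations — predicting the image as the object function convolved with the in-plane PSF, and scoring the residual with a normalized SD — sketched here with assumed array inputs and an assumed normalization by the mean of the measured image; this is not the authors' code.

```python
# Hedged sketch of the linear-system verification: predict, subtract, score.
import numpy as np
from scipy.signal import fftconvolve

def predicted_image(object_function, psf2d):
    """Linear-system model: CT image = object convolved with the in-plane PSF."""
    return fftconvolve(object_function, psf2d, mode="same")

def normalized_sd(computed, measured):
    """Normalised SD of the difference between computed and measured images (%)."""
    diff = computed - measured
    return 100.0 * np.std(diff) / np.mean(measured)
```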

  11. Comparative point-spread function calculations for the MOMS-1, Thematic Mapper and SPOT-HRV instruments

    NASA Technical Reports Server (NTRS)

    Salomonson, V. V.; Nickeson, J. E.; Bodechtel, J.; Zilger, J.

    1988-01-01

    Point-spread function (PSF) comparisons were made between the Modular Optoelectronic Multispectral Scanner (MOMS-01), the LANDSAT Thematic Mapper (TM) and the SPOT-HRV instruments, principally near Lake Nakuru, Kenya. The results, expressed in terms of the width of the point spread functions at the 50 percent power points as determined from the in-scene analysis, show that the TM has a PSF equal to or narrower than that of the MOMS-01 instrument (50 to 55 for the TM versus 50 to 68 for the MOMS). The SPOT estimates of the PSF range from 36 to 40. When the MOMS results are adjusted for differences in edge scanning as compared to the TM and SPOT, they are nearer 40 in the 575 to 625 nm band.

  12. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community, and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to showcase the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  13. 3-D laser radar simulation for autonomous spacecraft landing

    NASA Technical Reports Server (NTRS)

    Reiley, Michael F.; Carmer, Dwayne C.; Pont, W. F.

    1991-01-01

    A sophisticated 3D laser radar sensor simulation, developed and applied to the task of autonomous hazard detection and avoidance, is presented. This simulation includes a backward ray trace to sensor subpixels, incoherent subpixel integration, range dependent noise, sensor point spread function effects, digitization noise, and AM-CW modulation. Specific sensor parameters, spacecraft lander trajectory, and terrain type have been selected to generate simulated sensor data.

  14. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  15. Impact of device size and thickness of Al2O3 film on the Cu pillar and resistive switching characteristics for 3D cross-point memory application.

    PubMed

    Panja, Rajeswar; Roy, Sourav; Jana, Debanjan; Maikap, Siddheswar

    2014-12-01

    The impact of the device size and the thickness of the Al2O3 film on the Cu pillars and resistive switching memory characteristics of Al/Cu/Al2O3/TiN structures has been investigated for the first time. The memory device size and an Al2O3 film thickness of 18 nm are observed by transmission electron microscope imaging. The 20-nm-thick Al2O3 films have been used for Cu pillar formation (i.e., stronger Cu filaments) in the Al/Cu/Al2O3/TiN structures, which can be used for a three-dimensional (3D) cross-point architecture as reported previously (Nanoscale Res. Lett. 9:366, 2014). Fifty randomly picked devices with sizes ranging from 8 × 8 to 0.4 × 0.4 μm² have been measured. The 8-μm devices show a 100% yield of Cu pillars, whereas only a 74% yield is observed for the 0.4-μm devices, because the smaller devices suffer a stronger Joule heating effect; the larger devices show a long read endurance of 10^5 cycles at a high read voltage of -1.5 V. On the other hand, the resistive switching memory characteristics of the 0.4-μm devices with a 2-nm-thick Al2O3 film are superior to those of both the larger device sizes and the thicker (10 nm) Al2O3 film, owing to a higher Cu diffusion rate for the larger size and thicker Al2O3 film. In consequence, a higher device-to-device uniformity of 88% and a lower average RESET current of approximately 328 μA are observed for the 0.4-μm devices with a 2-nm-thick Al2O3 film. A data retention capability of >48 h makes our memory device a promising one for future nanoscale nonvolatile applications. This conductive bridging resistive random access memory (CBRAM) device is forming-free at a current compliance (CC) of 30 μA (even at the lowest CC of 0.1 μA) and an operation voltage of ±3 V, with a high resistance ratio of >10^4. PMID:26088986

  16. Crustal thickness and Moho character of the fast-spreading East Pacific Rise from 9°42'N to 9°57'N from poststack-migrated 3-D MCS data

    NASA Astrophysics Data System (ADS)

    Aghaei, Omid; Nedimović, Mladen R.; Carton, Helene; Carbotte, Suzanne M.; Canales, J. Pablo; Mutter, John C.

    2014-03-01

    We computed crustal thickness (5740 ± 270 m) and mapped Moho reflection character using 3-D seismic data covering 658 km² of the fast-spreading East Pacific Rise (EPR) from 9°42'N to 9°57'N. Moho reflections are imaged within ˜87% of the study area. Average crustal thickness varies little between large sections of the study area, suggesting regionally uniform crustal production in the last ˜180 ka. However, individual crustal thickness measurements differ by as much as 1.75 km, indicating that mantle melt delivery has not been uniform. Third-order, but not fourth-order, ridge discontinuities are associated with changes in the Moho reflection character and/or near-axis crustal thickness. This suggests that the third-order segmentation is governed by melt distribution processes within the uppermost mantle, while the fourth-order ridge segmentation arises from midcrustal to upper-crustal processes. In this light, we assign fourth-order ridge discontinuity status to the debated ridge segment boundary at ˜9°45'N and third-order status at ˜9°51.5'N to the ridge segment boundary previously interpreted as a fourth-order discontinuity. Our seismic results also suggest that the mechanism of lower-crustal accretion varies along the investigated section of the EPR but that the volume of melt delivered to the crust is mostly uniform. More efficient mantle melt extraction is inferred within the southern half of our survey area, with a greater proportion of the lower crust accreted from the axial magma lens than for the northern half. This south-to-north variation in the crustal accretion style may be caused by interaction between the melt sources for the ridge and the Lamont seamounts.

  17. Novel 3D light microscopic analysis of IUGR placentas points to a morphological correlate of compensated ischemic placental disease in humans

    PubMed Central

    Haeussner, Eva; Schmitz, Christoph; Frank, Hans-Georg; Edler von Koch, Franz

    2016-01-01

    The villous tree of the human placenta is a complex three-dimensional (3D) structure with branches and nodes at the feto-maternal border in the key area of gas and nutrient exchange. Recently we introduced a novel, computer-assisted 3D light microscopic method that enables 3D topological analysis of branching patterns of the human placental villous tree. In the present study we applied this novel method to the 3D architecture of peripheral villous trees of placentas from patients with intrauterine growth retardation (IUGR placentas), a severe obstetric syndrome. We found that the mean branching angle of branches in terminal positions of the villous trees was statistically significantly different between IUGR placentas and clinically normal placentas. Furthermore, the mean tortuosity of branches of villous trees in directly preterminal positions was statistically significantly different between IUGR placentas and clinically normal placentas. We show that these differences between IUGR placentas and clinically normal placentas can be interpreted as consequences of morphological adaptation of the villous trees, and may have important consequences for the understanding of the morphological correlates of the efficiency of the placental villous tree and their influence on fetal development. PMID:27045698

  18. Invariant joint distribution of a stationary random field and its derivatives: Euler characteristic and critical point counts in 2 and 3D

    SciTech Connect

    Pogosyan, Dmitry; Gay, Christophe; Pichon, Christophe

    2009-10-15

    The full moments expansion of the joint probability distribution of an isotropic random field, its gradient, and invariants of the Hessian is presented in 2 and 3D. It allows for an explicit expression for the Euler characteristic in N dimensions and for the computation of extrema counts as functions of the excursion-set threshold and the spectral parameter, as illustrated on model examples.
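
    For orientation only — these are the standard Gaussian-field limits, quoted from the general literature rather than taken from this paper — the mean Euler characteristic density of the excursion set above a normalized threshold behaves as

```latex
% Gaussian-field limits (proportionality constants fixed by the spectral moments):
\chi_{2\mathrm{D}}(\nu) \;\propto\; \nu \, e^{-\nu^{2}/2},
\qquad
\chi_{3\mathrm{D}}(\nu) \;\propto\; \bigl(\nu^{2}-1\bigr)\, e^{-\nu^{2}/2}.
```

    The full moment expansion presented in the paper generalizes such expressions beyond this Gaussian limit.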

  19. Novel 3D light microscopic analysis of IUGR placentas points to a morphological correlate of compensated ischemic placental disease in humans.

    PubMed

    Haeussner, Eva; Schmitz, Christoph; Frank, Hans-Georg; Edler von Koch, Franz

    2016-01-01

    The villous tree of the human placenta is a complex three-dimensional (3D) structure with branches and nodes at the feto-maternal border in the key area of gas and nutrient exchange. Recently we introduced a novel, computer-assisted 3D light microscopic method that enables 3D topological analysis of branching patterns of the human placental villous tree. In the present study we applied this novel method to the 3D architecture of peripheral villous trees of placentas from patients with intrauterine growth retardation (IUGR placentas), a severe obstetric syndrome. We found that the mean branching angle of branches in terminal positions of the villous trees was statistically significantly different between IUGR placentas and clinically normal placentas. Furthermore, the mean tortuosity of branches of villous trees in directly preterminal positions was statistically significantly different between IUGR placentas and clinically normal placentas. We show that these differences between IUGR placentas and clinically normal placentas can be interpreted as consequences of morphological adaptation of the villous trees, and may have important consequences for the understanding of the morphological correlates of the efficiency of the placental villous tree and their influence on fetal development. PMID:27045698

  20. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  1. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  2. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  3. 3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction

    PubMed Central

    Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie

    2015-01-01

    Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When emitter density is low for each frame, they are located to the nanometer resolution. However, when the emitter density rises, causing significant overlapping, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate the emitters at a high density causes poor temporal resolution of localization-based superresolution technique and significantly limits its application in 3D live cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that allows us to precisely locate the positions of emitters, even when they are significantly overlapped in three dimensional space. Our platform involves a multi-focus system in combination with astigmatic optics and an ℓ1-Homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy, but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphic processing unit (GPU), which speeds up processing 10 times compared with central processing unit (CPU) implementation. We tested our method with both simulated data and experimental data of fluorescently labeled microtubules and were able to reconstruct a 3D microtubule image with 1000 frames (512×512) acquired within 20 seconds. PMID:25798314
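
    The final localization step mentioned above — debiasing followed by a 3D weighted centroid — boils down to collapsing each cluster of active grid points from the compressed-sensing recovery to an intensity-weighted mean position. Below is a minimal sketch with a hypothetical helper and toy numbers, not the 3D MACS code.

```python
# Hedged sketch of a 3D weighted-centroid step on recovered grid amplitudes.
import numpy as np

def weighted_centroid_3d(grid_xyz, weights):
    """grid_xyz: (N, 3) active grid-point coordinates; weights: (N,) recovered amplitudes."""
    w = np.asarray(weights, dtype=float)
    return (grid_xyz * w[:, None]).sum(axis=0) / w.sum()

# Toy cluster of active grid points around one emitter
pts = np.array([[10, 5, -1], [10, 4, -1], [11, 5, -1], [10, 5, -2]], dtype=float)
amp = np.array([0.5, 0.2, 0.2, 0.1])
print(weighted_centroid_3d(pts, amp))
```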

  4. 3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction.

    PubMed

    Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie

    2015-03-01

    Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When emitter density is low for each frame, they are located to the nanometer resolution. However, when the emitter density rises, causing significant overlapping, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate the emitters at a high density causes poor temporal resolution of localization-based superresolution technique and significantly limits its application in 3D live cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that allows us to precisely locate the positions of emitters, even when they are significantly overlapped in three dimensional space. Our platform involves a multi-focus system in combination with astigmatic optics and an ℓ 1-Homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy, but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphic processing unit (GPU), which speeds up processing 10 times compared with central processing unit (CPU) implementation. We tested our method with both simulated data and experimental data of fluorescently labeled microtubules and were able to reconstruct a 3D microtubule image with 1000 frames (512×512) acquired within 20 seconds. PMID:25798314

  5. 'Bonneville' in 3-D!

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called 'Bonneville' after driving approximately 13 meters (42.7 feet) to get a better vantage point. Spirit's current position is close enough to the edge to see the interior of the crater, but high enough and far enough back to get a view of all of the walls. Because scientists and rover controllers are so pleased with this location, they will stay here for at least two more martian days, or sols, to take high resolution panoramic camera images of 'Bonneville' in its entirety. Just above the far crater rim, on the left side, is the rover's heatshield, which is visible as a tiny reflective speck.

  6. Precise Three-Dimensional Scan-Free Multiple-Particle Tracking over Large Axial Ranges with Tetrapod Point Spread Functions.

    PubMed

    Shechtman, Yoav; Weiss, Lucien E; Backer, Adam S; Sahl, Steffen J; Moerner, W E

    2015-06-10

    We employ a novel framework for information-optimal microscopy to design a family of point spread functions (PSFs), the Tetrapod PSFs, which enable high-precision localization of nanoscale emitters in three dimensions over customizable axial (z) ranges of up to 20 μm with a high numerical aperture objective lens. To illustrate, we perform flow profiling in a microfluidic channel and show scan-free tracking of single quantum-dot-labeled phospholipid molecules on the surface of living, thick mammalian cells. PMID:25939423

  7. Imaging performance of annular apertures. IV - Apodization and point spread functions. V - Total and partial energy integral functions

    NASA Technical Reports Server (NTRS)

    Tschunko, H. F. A.

    1983-01-01

    Reference is made to a study by Tschunko (1979) in which it was discussed how apodization modifies the modulation transfer function for various central obstruction ratios. It is shown here how apodization, together with the central obstruction ratio, modifies the point spread function, which is the basic element for the comparison of imaging performance and for the derivation of energy integrals and other functions. At high apodization levels and lower central obstruction (less than 0.1), new extended radial zones are formed in the outer part of the central ring groups. These transmutations of the image functions are of more than theoretical interest, especially if the irradiance levels in the outer ring zones are to be compared to the background irradiance levels. Attention is then given to the energy distribution in point images generated by annular apertures apodized by various transmission functions. The total energy functions are derived; partial energy integrals are determined; and background irradiance functions are discussed.
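
    For reference, the unapodized annular-aperture result that this analysis builds on (the classical expression, e.g. in Born & Wolf, quoted here for orientation rather than from the paper) gives the on-axis-normalized amplitude point spread function for a central obstruction ratio ε as

```latex
% Classical annular-aperture PSF (unapodized pupil); v is the dimensionless
% radial coordinate and J_1 the first-order Bessel function.
U(v) \;=\; \frac{1}{1-\varepsilon^{2}}
\left[\frac{2J_{1}(v)}{v}\;-\;\varepsilon^{2}\,\frac{2J_{1}(\varepsilon v)}{\varepsilon v}\right],
\qquad I(v)=\lvert U(v)\rvert^{2}.
```

    Apodization amounts to re-weighting the pupil inside the corresponding diffraction integral, which is what reshapes the ring structure and the energy integrals discussed above.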

  8. Fragmentary area repairing on the edge of 3D laser point cloud based on edge extracting of images and LS-SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Ziming; Hao, Xiangyang; Liu, Songlin; Zhao, Song

    2011-06-01

    In the process of hole repairing in point clouds, it is difficult to repair fragmentary areas on the edge of the point cloud because their boundaries are indeterminate. In view of this condition, this article proposes a method for repairing fragmentary areas on the edge of a point cloud based on image edge extraction and LS-SVM. After registration of the point cloud and the corresponding image, the sub-pixel edge is extracted from the image. The training points and the sub-pixel edge are then projected onto a constructed characteristic plane to determine the boundary and positions for re-sampling. Finally, the equation of the fragmentary area is obtained with Least-Squares Support Vector Machines to accomplish the repair. The experimental results demonstrate that the method achieves accurate, fine repairing.

  9. Volumetric (3D) bladder dose parameters are more reproducible than point (2D) dose parameters in vaginal vault high-dose-rate brachytherapy

    PubMed Central

    Sapienza, Lucas Gomes; Flosi, Adriana; Aiza, Antonio; de Assis Pellizzon, Antonio Cassio; Chojniak, Rubens; Baiocchi, Glauco

    2016-01-01

    There is no consensus on the use of computed tomography in vaginal cuff brachytherapy (VCB) planning. The purpose of this study was to prospectively determine the reproducibility of point bladder dose parameters (DICRU and maximum dose) compared with volumetric-based parameters. Twenty-two patients who were treated with high-dose-rate (HDR) VCB underwent simulation by computed tomography (CT scan) with a Foley catheter at standard tension (position A) and extra tension (position B). The CT scans were used to determine the bladder ICRU dose point in both positions and to compare its displacement and recorded dose. Volumetric parameters (D0.1cc, D1.0cc, D2.0cc, D4.0cc and D50%) and point dose parameters were compared. The average spatial shift of the ICRU dose point in the vertical, longitudinal and lateral directions was 2.91 mm (range: 0.10–9.00), 12.04 mm (range: 4.50–24.50) and 2.65 mm (range: 0.60–8.80), respectively. The DICRU ratio for positions A and B was 1.64 (p < 0.001). Moreover, a decrease in Dmax was observed (p = 0.016). The tension level of the urinary catheter did not affect the volumetric parameters. Our data suggest that point parameters (DICRU and Dmax) are not reproducible and are not the ideal choice for dose reporting. PMID:27296459

  10. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  11. 3D Modeling By Consolidation Of Independent Geometries Extracted From Point Clouds - The Case Of The Modeling Of The Turckheim's Chapel (Alsace, France)

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Fabre, Ph.; Schlussel, B.

    2014-06-01

    Turckheim is a small town located in Alsace, in the north-east of France. In the heart of the Alsatian vineyard, this city has many historical monuments, including its old church. To understand the relevance of the project described in this paper, it is important to look at the history of this church. Indeed, many historical events explain its renovation and even its partial reconstruction. The first mention of a Christian sanctuary in Turckheim dates back to 898. It was replaced in the 12th century by a Romanesque church (chapel), which subsists today as the bell tower. Struck by lightning in 1661, the tower was then enhanced. In 1736, it was repaired following damage sustained in a tornado. In 1791, the town installed an organ in the church. As a last milestone, the church was destroyed by fire in 1978. The organ, like the heart of the church, then had to be restored again (1983) with a simplified architecture. From this rich and eventful past, unfortunately and as is often the case, only very few documents and little information remain available, apart from facts stated in some sporadic writings. With regard to the geometry, the positioning and the physical characteristics of the initial building, there is very little indication. Some hypotheses about its position and footprint were indeed put forward by different historians or archaeologists. The acquisition and 3D modeling project must therefore provide the current state of the edifice to serve as the basis of new investigations and for the generation of new hypotheses on the locations and historical shapes of this church and its original chapel (Fig. 1)

  12. An approach for de-identification of point locations of livestock premises for further use in disease spread modeling.

    PubMed

    Martin, Michael K; Helm, Julie; Patyk, Kelly A

    2015-06-15

    We describe a method for de-identifying point location data used for disease spread modeling to allow data custodians to share data with modeling experts without disclosing individual farm identities. The approach is implemented in an open-source software program that is described and evaluated here. The program allows a data custodian to select a level of de-identification based on the K-anonymity statistic. The program converts a file of true farm locations and attributes into a file appropriate for use in disease spread modeling, with the locations randomly modified to prevent re-identification based on location. Important epidemiological relationships such as clustering are preserved as much as possible to allow modeling similar to that using the true, identifiable data. The software implementation was verified by visual inspection and basic descriptive spatial analysis of the output. Performance is sufficient to allow de-identification of even large data sets on desktop computers available to any data custodian. PMID:25944175
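
    The core geometric idea — displacing each premises location by a random distance and bearing so that exact positions are hidden while local clustering is roughly preserved — can be sketched as below. This is a generic "donut" displacement under assumed radii, not the K-anonymity bookkeeping of the actual program.

```python
# Hedged sketch of random-displacement de-identification of point locations.
import numpy as np

def deidentify_points(xy, r_min, r_max, seed=None):
    """Displace each point by a random distance in [r_min, r_max] and a random bearing."""
    rng = np.random.default_rng(seed)
    n = len(xy)
    r = rng.uniform(r_min, r_max, size=n)
    theta = rng.uniform(0.0, 2 * np.pi, size=n)
    offset = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    return np.asarray(xy, dtype=float) + offset

# Usage: displace premises by 0.5-2 km so local clustering is roughly preserved
farms = np.array([[1000.0, 2000.0], [1200.0, 2100.0], [5000.0, 7000.0]])
print(deidentify_points(farms, r_min=500.0, r_max=2000.0, seed=42))
```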

  13. Accurate 3D point cloud comparison and volumetric change analysis of Terrestrial Laser Scan data in a hard rock coastal cliff environment

    NASA Astrophysics Data System (ADS)

    Earlie, C. S.; Masselink, G.; Russell, P.; Shail, R.; Kingston, K.

    2013-12-01

    Our understanding of the evolution of hard rock coastlines is limited due to the episodic nature and 'slow' rate at which changes occur. High-resolution surveying techniques, such as Terrestrial Laser Scanning (TLS), have just begun to be adopted as a method of obtaining detailed point cloud data to monitor topographical changes over short periods of time (weeks to months). However, the difficulties involved in comparing consecutive point cloud data sets in a complex three-dimensional plane, such as occlusion due to surface roughness and the positioning of the data capture point in a constantly changing environment (a beach profile), mean that comparing data sets can lead to errors in the region of 10 - 20 cm. Meshing techniques are often used for point cloud data analysis of simple surfaces, but on surfaces such as rocky cliff faces this technique has been found to be ineffective. Recession rates of hard rock coastlines in the UK are typically determined using aerial photography or airborne LiDAR data, yet the detail of the important changes occurring to the cliff face and toe is missed using such techniques. In this study we apply an algorithm (M3C2 - Multiscale Model to Model Cloud Comparison), initially developed for analysing fluvial morphological change, that directly compares point clouds using surface normals that are consistent with surface roughness and measures the change that occurs along the normal direction (Lague et al., 2013). The surface changes are analysed using a set of user-defined scales based on surface roughness and registration error. Once the correct parameters are defined, the volumetric cliff face changes are calculated by integrating the mean distance between the point clouds. The analysis has been undertaken at two hard rock sites identified for their active erosion located on the UK's south west peninsula at Porthleven in south west Cornwall and Godrevy in north Cornwall. Alongside TLS point cloud data, in
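
    A heavily simplified sketch of the M3C2 idea used here — estimate a local normal from one epoch, then measure the offset of the two clouds along that normal — is given below. The real algorithm adds multi-scale normal selection, projection cylinders and per-point confidence intervals, and the radii below are arbitrary assumptions.

```python
# Hedged, simplified M3C2-like distance between two point-cloud epochs.
import numpy as np
from scipy.spatial import cKDTree

def m3c2_like_distance(core_pt, cloud1, cloud2, normal_radius=0.5, proj_radius=0.3):
    tree1, tree2 = cKDTree(cloud1), cKDTree(cloud2)
    # Local normal from PCA of epoch-1 neighbours around the core point
    nb = cloud1[tree1.query_ball_point(core_pt, normal_radius)]
    _, _, vt = np.linalg.svd(nb - nb.mean(axis=0))
    normal = vt[-1]
    # Mean position of each cloud's neighbours projected onto the normal
    d1 = (cloud1[tree1.query_ball_point(core_pt, proj_radius)] - core_pt) @ normal
    d2 = (cloud2[tree2.query_ball_point(core_pt, proj_radius)] - core_pt) @ normal
    return d2.mean() - d1.mean()   # signed change along the local normal
```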

  14. Noise suppression of point spread functions and its influence on deconvolution of three-dimensional fluorescence microscopy image sets.

    PubMed

    Lai, X; Lin, Zhiping; Ward, E S; Ober, R J

    2005-01-01

    The point spread function (PSF) is of central importance in the image restoration of three-dimensional image sets acquired by an epifluorescent microscope. Even though it is well known that an experimental PSF is typically more accurate than a theoretical one, the noise content of the experimental PSF is often an obstacle to its use in deconvolution algorithms. In this paper we apply a recently introduced noise suppression method to achieve an effective noise reduction in experimental PSFs. We show with both simulated and experimental three-dimensional image sets that a PSF that is smoothed with this method leads to a significant improvement in the performance of deconvolution algorithms, such as the regularized least-squares algorithm and the accelerated Richardson-Lucy algorithm. PMID:15655067
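
    As a reminder of why PSF noise matters in this context, a textbook Richardson-Lucy iteration uses the PSF (and its mirror) in every update, so noise in an experimental PSF propagates directly into the estimate. Below is a minimal, unaccelerated sketch of that iteration, not the authors' implementation or the regularized least-squares variant.

```python
# Minimal Richardson-Lucy deconvolution sketch (textbook form).
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=25, eps=1e-12):
    """Basic RL iterations; `psf` should be non-negative and normalised to unit sum."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1, ::-1] if psf.ndim == 3 else psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```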

  15. Characterization of the point spread function and modulation transfer function of scattered radiation using a digital imaging system.

    PubMed

    Boone, J M; Arnold, B A; Seibert, J A

    1986-01-01

    A digital radiographic system was used to measure the distribution of scattered x radiation from uniform slabs of Lucite at various thicknesses. Using collimation and air gap techniques, [primary + scatter] images and primary images were digitally acquired, and subtracted to obtain scatter images. The scatter distributions measured using small circular apertures were computer fit to an analytical function, representing the circular aperture function convolved with a modified Gaussian point spread function (PSF). On the basis of goodness of fit criterion, the proposed Gaussian function is a very good model for the scatter PSF. The measured scatter PSF's are reported for various Lucite thicknesses. Using the PSF's, the modulation transfer functions are calculated, and this spatial frequency information may have value in analytical scatter removal techniques, grid design, and air gap optimization. PMID:3702823
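
    The last step described — obtaining the modulation transfer function from the fitted PSF — is, up to normalization, a Fourier transform. The sketch below uses a plain 2-D FFT of a sampled PSF; the radial-profile details of the paper are glossed over and the pixel pitch is an assumed parameter.

```python
# Hedged sketch: MTF as the normalised magnitude of the PSF's Fourier transform.
import numpy as np

def mtf_from_psf(psf2d, pixel_pitch_mm=1.0):
    """Return spatial frequencies (cycles/mm) and the normalised MTF along one axis."""
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf2d)))
    mtf = np.abs(otf) / np.abs(otf).max()
    n = psf2d.shape[0]
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_pitch_mm))
    return freqs[n // 2:], mtf[n // 2, n // 2:]     # positive-frequency cut
```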

  16. An optimal point spread function subtraction algorithm for high-contrast imaging: a demonstration with angular differential imaging

    SciTech Connect

    Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D

    2006-09-19

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separation over the algorithm previously used.
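
    Inside a single subsection, the optimization described above is an ordinary least-squares problem for the combination coefficients of the reference images. The sketch below shows only that core step, without the subsection geometry or other refinements of the full algorithm; the array shapes are assumptions.

```python
# Hedged sketch: optimal linear combination of reference images in one subsection.
import numpy as np

def optimal_reference(target_section, reference_sections):
    """target_section: (npix,) ; reference_sections: (nref, npix)."""
    A = np.asarray(reference_sections, dtype=float).T          # (npix, nref)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(target_section, dtype=float), rcond=None)
    return A @ coeffs, coeffs                                   # synthetic PSF, weights

# residual = target_section - synthetic PSF  -> speckle-subtracted subsection
```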

  17. Estimation method of point spread function based on Kalman filter for accurately evaluating real optical properties of photonic crystal fibers.

    PubMed

    Shen, Yan; Lou, Shuqin; Wang, Xin

    2014-03-20

    The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters. PMID:24663461

  18. Intratumoral spread of wild-type adenovirus is limited after local injection of human xenograft tumors: virus persists and spreads systemically at late time points.

    PubMed

    Sauthoff, Harald; Hu, Jing; Maca, Cielo; Goldman, Michael; Heitner, Sheila; Yee, Herman; Pipiya, Teona; Rom, William N; Hay, John G

    2003-03-20

    Oncolytic replicating adenoviruses are a promising new modality for the treatment of cancer. Despite the assumed biologic advantage of continued viral replication and spread from infected to uninfected cancer cells, early clinical trials demonstrate that the efficacy of current vectors is limited. In xenograft tumor models using immune-incompetent mice, wild-type adenovirus is also rarely able to eradicate established tumors. This suggests that innate immune mechanisms may clear the virus or that barriers within the tumor prevent viral spread. The aim of this study was to evaluate the kinetics of viral distribution and spread after intratumoral injection of virus in a human tumor xenograft model. After intratumoral injection of wild-type virus, high levels of titratable virus persisted within the xenograft tumors for at least 8 weeks. Virus distribution within the tumors as determined by immunohistochemistry was patchy, and virus-infected cells appeared to be flanked by tumor necrosis and connective tissue. The close proximity of virus-infected cells to the tumor-supporting structure, which is of murine origin, was clearly demonstrated using a DNA probe that specifically hybridizes to the B1 murine DNA repeat. Importantly, although virus was cleared from the circulation 6 hr after intratumoral injection, after 4 weeks systemic spread of virus was detected. In addition, vessels of infected tumors were surrounded by necrosis and an advancing rim of virus-infected tumor cells, suggesting reinfection of the xenograft tumor through the vasculature. These data suggest that human adenoviral spread within tumor xenografts is impaired by murine tumor-supporting structures. In addition, there is evidence for continued viral replication within the tumor, with subsequent systemic dissemination and reinfection of tumors via the tumor vasculature. Despite the limitations of immune-incompetent models, an understanding of the interactions between the virus and the tumor

  19. TRACE 3-D documentation

    SciTech Connect

    Crandall, K.R.

    1987-08-01

    TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instruction for using the code. Several examples are described in detail.

  20. Automatic determination of trunk diameter, crown base and height of scots pine (Pinus Sylvestris L.) Based on analysis of 3D point clouds gathered from multi-station terrestrial laser scanning. (Polish Title: Automatyczne okreslanie srednicy pnia, podstawy korony oraz wysokosci sosny zwyczajnej (Pinus Silvestris L.) Na podstawie analiz chmur punktow 3D pochodzacych z wielostanowiskowego naziemnego skanowania laserowego)

    NASA Astrophysics Data System (ADS)

    Ratajczak, M.; Wężyk, P.

    2015-12-01

    The rapid development of terrestrial laser scanning (TLS) in recent years has resulted in its recognition and implementation in many industries, including forestry and nature conservation. The use of 3D TLS point clouds in the inventory of trees and stands, as well as in the determination of their biometric features (trunk diameter, tree height, crown base, number of trunk shapes) and tree and lumber size (volume of trees), is slowly becoming practice. In addition to the measurement precision, the primary added value of TLS is the ability to automate the processing of 3D point clouds towards the extraction of selected features of trees and stands. The paper presents original software (GNOM) for the automatic measurement of selected tree features, based on point clouds obtained with a FARO terrestrial laser scanner. With the developed algorithms (GNOM), the locations of tree trunks on the circular sample plot were determined and the measurements were performed; the measurements covered the DBH (at 1.3 m), further trunk diameters at different heights along the trunk, the base of the tree crown and the volume of the tree trunk (the selection measurement method), as well as the tree crown. The research was performed in the territory of the Niepolomice Forest in an unmixed pine stand (Pinus sylvestris L.) on a circular plot with a radius of 18 m, within which there were 16 pine trees (14 of them were cut down). The stand was characterized by a two-storey, even-aged structure (147 years old) and was devoid of undergrowth. Ground scanning was performed just before harvesting. The DBH of the 16 pine trees was determined fully automatically using the GNOM algorithm, with an accuracy of +2.1% as compared to the reference measurement with a DBH measurement device. The mean absolute measurement error in the point cloud - using the semi-automatic methods "PIXEL" (between points) and "PIPE" (cylinder fitting) in the FARO Scene 5.x
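
    One common way to turn a thin horizontal slice of trunk points into a DBH estimate is an algebraic least-squares circle fit. The sketch below uses a Kåsa-style fit on synthetic points and illustrates the general idea only; it is not the GNOM algorithm itself.

```python
# Hedged sketch: least-squares (Kåsa-style) circle fit for a trunk slice at 1.3 m.
import numpy as np

def fit_circle(xy):
    """Least-squares circle through 2-D points; returns (cx, cy, radius)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# Example: noisy points on a 0.42 m diameter trunk slice
t = np.linspace(0, 2 * np.pi, 200)
slice_xy = np.column_stack([0.21 * np.cos(t) + 5.0, 0.21 * np.sin(t) - 3.0])
slice_xy += np.random.default_rng(3).normal(scale=0.005, size=slice_xy.shape)
cx, cy, r = fit_circle(slice_xy)
print(f"estimated DBH = {2 * r:.3f} m")
```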

  1. Extracting full-field dynamic strain on a wind turbine rotor subjected to arbitrary excitations using 3D point tracking and a modal expansion technique

    NASA Astrophysics Data System (ADS)

    Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter

    2015-09-01

    Health monitoring of rotating structures such as wind turbines and helicopter rotors is generally performed using conventional sensors that provide a limited set of data at discrete locations near or on the hub. These sensors usually provide no data on the blades or inside them where failures might occur. Within this paper, an approach was used to extract the full-field dynamic strain on a wind turbine assembly subjected to arbitrary loading conditions. A three-bladed wind turbine having 2.3-m long blades was placed in a semi-built-in boundary condition using a hub, a machining chuck, and a steel block. For three different test cases, the turbine was excited using (1) pluck testing, (2) random impacts on blades with three impact hammers, and (3) random excitation by a mechanical shaker. The response of the structure to the excitations was measured using three-dimensional point tracking. A pair of high-speed cameras was used to measure the displacement of optical targets on the structure while the blades were vibrating. The measured displacements at discrete locations were expanded and applied to the finite element model of the structure to extract the full-field dynamic strain. The results of the paper show an excellent correlation between the strain predicted using the proposed approach and the strain measured with strain-gages for each of the three loading conditions. The approach used in this paper to predict the strain showed higher accuracy than the digital image correlation technique. The new expansion approach is able to extract dynamic strain over the entire structure, even inside the structure beyond the line of sight of the measurement system. Because the method is based on a non-contacting measurement approach, it can be readily applied to a variety of structures having different boundary and operating conditions, including rotating blades.
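
    A minimal sketch of the modal expansion step described above: measured displacements at the optical targets are projected onto a truncated mode-shape basis in a least-squares sense and then expanded to all finite element degrees of freedom; full-field strain follows by applying the FE strain-displacement relation (or strain mode shapes) to the expanded vector. Matrix names are illustrative, not the authors' code.

    import numpy as np

    def expand_displacements(u_meas, Phi_meas, Phi_full):
        # u_meas   : (m,)   displacements at the m optically tracked points
        # Phi_meas : (m, k) mode shapes evaluated at the tracked points
        # Phi_full : (n, k) mode shapes at all n finite element DOFs
        q = np.linalg.lstsq(Phi_meas, u_meas, rcond=None)[0]  # modal coordinates
        return Phi_full @ q                                   # full-field displacements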

  2. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  3. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
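
    For context, a minimal sketch of the PCA feature-projection step mentioned above: vectorized, normalized 3D faces are projected onto principal axes learned from a training set, and the resulting feature vectors are what the matching stage compares. This is a generic PCA implementation under stated assumptions, not the SNL3dFace code; names are illustrative.

    import numpy as np

    def fit_pca(X, n_components):
        # X: (n_samples, n_features) matrix of vectorized normalized faces.
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:n_components]              # mean face and principal axes

    def project(face_vec, mean, components):
        # Feature vector used for matching (e.g., cosine or Euclidean similarity).
        return components @ (face_vec - mean)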

  4. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and its potential to provide a more comprehensive solution for the verification of complex radiation therapy treatments and for 3D dose measurement in general.

  5. 3D-MRI rendering of the anatomical structures related to acupuncture points of the Dai mai, Yin qiao mai and Yang qiao mai meridians within the context of the WOMED concept of lateral tension: implications for musculoskeletal disease

    PubMed Central

    Moncayo, Roy; Rudisch, Ansgar; Kremser, Christian; Moncayo, Helga

    2007-01-01

    Background A conceptual model of lateral muscular tension in patients presenting with thyroid-associated ophthalmopathy (TAO) has been recently described. Clinical improvement has been achieved by using acupuncture on points belonging to the so-called extraordinary meridians. The aim of this study was to characterize the anatomical structures related to these acupuncture points by means of 3D MRI image rendering relying on external markers. Methods The investigation was carried out on the index case patient of the lateral tension model. A licensed medical acupuncture practitioner located the following acupuncture points: 1) Yin qiao mai meridian (medial ankle): Kidney 3, Kidney 6, the plantar Kidney 6 (Nan jing description); 2) Yang qiao mai meridian (lateral ankle): Bladder 62, Bladder 59, Bladder 61, and the plantar Bladder 62 (Nan jing description); 3) Dai mai meridian (waist): Liver 13, Gall bladder 26, Gall bladder 27, Gall bladder 28, and Gall bladder 29. The points were marked by taping a nitro-glycerin capsule on the skin. Imaging was done on a Siemens Magnetom Avanto MR scanner using an array head and body coil. Mainly T1-weighted imaging sequences, as routinely used for patient exams, were used to obtain multi-slice images. The image data were rendered in 3D mode using dedicated software (Leonardo, Siemens). Results Points of the Dai mai meridian – at the level of the waist – corresponded to the obliquus externus abdominis and the obliquus internus abdominis. Points of the Yin qiao mai meridian – at the medial side of the ankle – corresponded to tendinous structures of the flexor digitorum longus as well as to muscular structures of the abductor hallucis on the foot sole. Points of the Yang qiao mai meridian – at the lateral side of the ankle – corresponded to tendinous structures of the peroneus brevis, the peroneus longus, and the lateral surface of the calcaneus and close to the foot sole to the abductor digiti minimi. Conclusion This non

  6. ELLIPTICAL-WEIGHTED HOLICs FOR WEAK LENSING SHEAR MEASUREMENT. II. POINT-SPREAD FUNCTION CORRECTION AND APPLICATION TO A370

    SciTech Connect

    Okura, Yuki; Futamase, Toshifumi E-mail: tof@astr.tohoku.ac.jp

    2012-04-01

    In our previous paper, we developed a new method (E-HOLICs) of estimating gravitational shear by adopting an elliptical weight function to measure background galaxy images. Following that paper, in which an isotropic point-spread function (PSF) correction was calculated, here we consider an anisotropic PSF correction in order to apply E-HOLICs to real data. As an example, E-HOLICs is applied to Subaru data of the massive and compact galaxy cluster A370 and is able to detect double peaks in the central region of the cluster, consistent with the analysis of strong lensing. We also study the systematic error in E-HOLICs using the STEP2 simulation. In particular, we consider the dependence of the shear estimation on the signal-to-noise ratio (S/N) of the background galaxies. Although E-HOLICs does improve the systematic error due to the ellipticity dependence, as shown in Paper I, a systematic error due to the S/N dependence remains; namely, E-HOLICs underestimates shear when background galaxies with low S/N are used. We discuss a possible improvement of the S/N dependence.

  7. Effect of time-of-flight and point spread function modeling on detectability of myocardial defects in PET

    SciTech Connect

    Schaefferkoetter, Joshua; Ouyang, Jinsong; Rakvongthai, Yothin; El Fakhri, Georges; Nappi, Carmela

    2014-06-15

    Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point spread function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered subset expectation maximization (OSEM) iterative methods including ordinary Poisson, alone and with PSF, TOF, and PSF+TOF. Channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved overall defect detection SNR by 8.6% as compared to its non-TOF counterpart for all the defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed larger improvement for faint defects. The combination of TOF and PSF improved mean detection SNR as compared to non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For a typical reconstruction protocol used in clinical practice, i.e., fewer than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded less detectability. For a large number of iterations, TOF+PSF yields the best observer performance.
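
    A hedged sketch of how a channelized Hotelling observer SNR of the kind used above can be computed: reconstructed images are reduced to channel outputs, and the Hotelling template in channel space gives the detectability index. The channel matrix T (e.g., a bank of rotationally symmetric frequency channels) is assumed to be given; all names are illustrative.

    import numpy as np

    def cho_snr(imgs_present, imgs_absent, T):
        # imgs_*: (n_images, n_pixels) flattened reconstructions; T: (n_pixels, n_channels).
        v_p = imgs_present @ T                   # channel outputs, defect present
        v_a = imgs_absent @ T                    # channel outputs, defect absent
        dv = v_p.mean(axis=0) - v_a.mean(axis=0)
        S = 0.5 * (np.cov(v_p, rowvar=False) + np.cov(v_a, rowvar=False))
        w = np.linalg.solve(S, dv)               # Hotelling template in channel space
        return np.sqrt(dv @ w)                   # observer SNR (detectability index)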

  8. Impact of the point spread function on maximum standardized uptake value measurements in patients with pulmonary cancer.

    PubMed

    Gellee, S; Page, J; Sanghera, B; Payoux, P; Wagner, Thomas

    2014-05-01

    Maximum standardized uptake value (SUVmax) from fluorodeoxyglucose (FDG) positron emission tomography (PET) scans is a semi-quantitative measure that is increasingly used in clinical practice for diagnostic and therapeutic response assessment purposes. Technological advances such as the implementation of the point spread function (PSF) in the reconstruction algorithm have led to a higher signal-to-noise ratio and increased spatial resolution. The impact on SUVmax measurements has not been studied in a clinical setting. We studied the impact of PSF on SUVmax in 30 consecutive lung cancer patients. SUVmax values were measured on PET-computed tomography (CT) scans reconstructed iteratively with and without PSF (high-definition [HD] and non-HD, respectively). HD SUVmax values were significantly higher than non-HD SUVmax values. There was excellent correlation between HD and non-HD values. Details of reconstruction, and PSF implementation in particular, have important consequences on SUV values. Nuclear medicine physicians and radiologists should be aware of the reconstruction parameters of PET-CT scans when they report or rely on SUV measurements. PMID:25191128

  9. Effect of time-of-flight and point spread function modeling on detectability of myocardial defects in PET

    PubMed Central

    Schaefferkoetter, Joshua; Ouyang, Jinsong; Rakvongthai, Yothin; Nappi, Carmela; El Fakhri, Georges

    2014-01-01

    Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point spread function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered subset expectation maximization (OSEM) iterative methods including ordinary Poisson, alone and with PSF, TOF, and PSF+TOF. Channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved overall defect detection SNR by 8.6% as compared to its non-TOF counterpart for all the defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed larger improvement for faint defects. The combination of TOF and PSF improved mean detection SNR as compared to non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For a typical reconstruction protocol used in clinical practice, i.e., fewer than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded less detectability. For a large number of iterations, TOF+PSF yields the best observer performance. PMID:24877836

  10. Principal Component Analysis of the Time- and Position-dependent Point-Spread Function of the Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Jee, M. J.; Blakeslee, J. P.; Sirianni, M.; Martel, A. R.; White, R. L.; Ford, H. C.

    2007-12-01

    We describe the time- and position-dependent point-spread function (PSF) variation of the wide-field channel (WFC) of the Advanced Camera for Surveys (ACS) with the principal component analysis (PCA) technique. The time-dependent change is caused by the temporal variation of the HST focus, whereas the position-dependent PSF variation in ACS WFC at a given focus is mainly the result of changes in aberrations and charge diffusion across the detector, which appear as position-dependent changes in the elongation of the astigmatic core and blurring of the PSF, respectively. We find that ~20 principal components or "eigen-PSFs" per exposure can robustly reproduce the observed variation of the ellipticity and size of the PSF. Our primary interest in this investigation is the application of this PSF library to precision weak-lensing analyses, where accurate knowledge of the instrument's PSF is crucial. However, the high fidelity of the model, judged from its close agreement with observed PSFs, suggests that the model is potentially also useful in other applications, such as crowded field stellar photometry, galaxy profile fitting, AGN studies, etc., which similarly demand a fair knowledge of the PSFs at objects' locations. Our PSF models, applicable to any WFC image rectified with the Lanczos3 kernel, are publicly available.
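
    A minimal sketch of the eigen-PSF idea under stated assumptions: star cutouts are stacked, mean-subtracted, and decomposed with an SVD, and each star is described by a handful of coefficients that can then be fitted as smooth functions of CCD position. This is not the authors' pipeline; names are illustrative.

    import numpy as np

    def eigen_psfs(stamps, n_components=20):
        # stamps: (n_stars, ny, nx) background-subtracted, flux-normalized star cutouts.
        X = stamps.reshape(len(stamps), -1)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = Vt[:n_components]                # eigen-PSFs (flattened)
        coeffs = (X - mean) @ basis.T            # per-star coefficients
        return mean, basis, coeffs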

  11. Fast and precise point spread function measurements of IR optics at extreme temperatures based on reversed imaging conditions

    NASA Astrophysics Data System (ADS)

    Melzer, Volker; Heckmann, Hans-Georg; Ritter, Christian; Barenz, Joachim; Raab, Michael

    2010-04-01

    Point Spread Function (PSF), Modulation Transfer Function (MTF) and Ensquared Energy (EE) are important performance indicators of optical systems for surveillance, imaging and target tracking applications. We report on the development of a new measurement method which facilitates fast, real-time measurement of the two-dimensional PSF and related performance parameters of an MWIR optical module at room temperature as well as under extreme temperature conditions. Our new measurement setup uses the law of reversibility of optical paths to capture a highly resolved, magnified image of the PSF. By using an easy add-on, thermally insulating enclosure, the optical module can be exposed to and measured under both high and low temperatures (-50°C up to 90°C) without any external impact on the measurement. Line-of-sight and various off-axis measurements are also possible. Common PSF and MTF measurement methods require many more correction algorithms, whilst our method requires mainly a pinhole-diameter correction and allows fast measurement of optical parameters over temperature as well as fast and easy adjustment. Additionally, comparison of the captured, highly resolved PSF with optical design data enables targeted theoretical investigation of any optical artifacts that occur.

  12. A Likelihood Method for Determining the On-orbit Point-Spread Function of the Fermi Large-Area Telescope

    NASA Astrophysics Data System (ADS)

    Roth, Marshall

    The Large-Area Telescope (LAT) on the Fermi gamma-Ray Space Telescope is a pair-conversion gamma-ray telescope with unprecedented capability to image astrophysical gamma-ray sources between 20 MeV and 300 GeV. The pre-launch performance of the LAT, decomposed into effective area, energy and angular dispersions, was determined through extensive Monte Carlo (MC) simulations and beam tests. The point-spread function (PSF) characterizes the angular distribution of reconstructed photons as a function of energy and geometry in the detector. Here we present a set of likelihood analyses of LAT data based on the spatial and spectral properties of sources, including a determination of the PSF on orbit. We find that the PSF on orbit is generally broader than the MC at energies above 3 GeV and consider several systematic effects to explain this difference. We also investigated several possible spatial models for pair-halo emission around BL Lac AGN and found no evidence for a component with spatial extension larger than the PSF.

  13. Depth profiling of gold nanoparticles and characterization of point spread functions in reconstructed and human skin using multiphoton microscopy.

    PubMed

    Labouta, Hagar I; Hampel, Martina; Thude, Sibylle; Reutlinger, Katharina; Kostka, Karl-Heinz; Schneider, Marc

    2012-01-01

    Multiphoton microscopy has become popular in studying dermal nanoparticle penetration. This necessitates studying the imaging parameters of multiphoton microscopy in skin as an imaging medium, in terms of achievable detection depths and the resolution limit. This would simulate real-case scenarios rather than depending on theoretical values determined under ideal conditions. This study has focused on depth profiling of sub-resolution gold nanoparticles (AuNP) in reconstructed (fixed and unfixed) and human skin using multiphoton microscopy. Point spread functions (PSF) were determined for the 63×/NA = 1.2 water-immersion objective used. Factors such as skin-tissue compactness and the presence of wrinkles were found to deteriorate the accuracy of depth profiling. A broad range of AuNP detectable depths (20-100 μm) in reconstructed skin was observed. AuNP could only be detected up to ∼14 μm depth in human skin. Lateral (0.5 ± 0.1 μm) and axial (1.0 ± 0.3 μm) PSF in reconstructed and human specimens were determined. Skin cells and intercellular components did not degrade the PSF with depth. In summary, the imaging parameters of multiphoton microscopy in skin and practical limitations encountered in tracking nanoparticle penetration using this approach were investigated. PMID:22147676

  14. Point-spread function reconstruction in ground-based astronomy by l(1)-l(p) model.

    PubMed

    Chan, Raymond H; Yuan, Xiaoming; Zhang, Wenxing

    2012-11-01

    In ground-based astronomy, images of objects in outer space are acquired via ground-based telescopes. However, the imaging system is generally affected by atmospheric turbulence, and hence the images so acquired are blurred with an unknown point-spread function (PSF). To restore the observed images, the wavefront of light at the telescope's aperture is utilized to derive the PSF. A model with the Tikhonov regularization has been proposed to find the high-resolution phase gradients by solving a least-squares system. Here we propose the l(1)-l(p) (p=1, 2) model for reconstructing the phase gradients. This model can provide sharper edges in the gradients while removing noise. The minimization models can easily be solved by the Douglas-Rachford alternating direction method of multipliers, and the convergence rate is readily established. Numerical results are given to illustrate that the model can give better phase gradients and hence a more accurate PSF. As a result, the restored images are much more accurate when compared to the traditional Tikhonov regularization model. PMID:23201786

  15. Long-term measurements of atmospheric point-spread functions over littoral waters as determined by atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    de Jong, Arie N.; Schwering, Piet B. W.; Benoist, Koen W.; Gunter, Willem H.; Vrahimis, George; October, Faith J.

    2012-06-01

    During the FATMOSE trial, held over the False Bay (South Africa) from November 2009 until October 2010, high-resolution images of point sources at a range of 15.7 km were collected day and night (24/7). Simultaneously, data were collected on atmospheric parameters relevant for the turbulence conditions: air and sea temperature, wind speed, relative humidity, and the refractive-index structure parameter Cn2. The data provide statistical information on the mean value and the variance of the atmospheric point spread function and the associated modulation transfer function during series of consecutive frames. This information allows the prediction of the range performance for a given sensor, target and atmospheric condition, which is of great importance for the user of optical sensors in related operational areas and for the developers of image processing algorithms. In addition, the occurrence of "lucky shots" in series of frames is investigated: occasional frames with locally small blur spots. The simultaneously measured short-exposure blur and beam wander are compared with scintillation data collected along the same path and the Cn2 data from a locally installed scintillometer. By using two vertically separated sources, the correlation is determined between the beam wander in their images, providing information on the spatial extension of the atmospheric turbulence (eddy size). Examples are shown of the appearance of the blur spot, including skewness and astigmatism effects, which manifest themselves in the third moment of the spot and its distortion. An example is given of an experiment for determining the range performance for a given camera and a bar target on an outgoing boat in the False Bay.

  16. Fast and Precise 3D Fluorophore Localization based on Gradient Fitting

    NASA Astrophysics Data System (ADS)

    Ma, Hongqiang; Xu, Jianquan; Jin, Jingyi; Gao, Ying; Lan, Li; Liu, Yang

    2015-09-01

    The astigmatism imaging approach has been widely used to encode the fluorophore’s 3D position in single-particle tracking and super-resolution localization microscopy. Here, we present a new high-speed localization algorithm based on gradient fitting to precisely decode the 3D subpixel position of the fluorophore. This algebraic algorithm determines the center of the fluorescent emitter by finding the position with the best-fit gradient direction distribution to the measured point spread function (PSF), and can retrieve the 3D subpixel position of the fluorophore in a single iteration. Through numerical simulation and experiments with mammalian cells, we demonstrate that our algorithm yields localization precision comparable to the traditional iterative Gaussian function fitting (GF) based method, while exhibiting an execution speed over two orders of magnitude faster. Our algorithm is a promising high-speed analysis method for 3D particle tracking and super-resolution localization microscopy.
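
    For comparison, a minimal sketch of the conventional baseline the abstract refers to: iterative least-squares fitting of an elliptical Gaussian to a camera cutout, with the fitted widths (wx, wy) subsequently mapped to z through an astigmatism calibration curve. This is not the gradient-fitting algorithm itself; names and starting values are illustrative.

    import numpy as np
    from scipy.optimize import curve_fit

    def elliptical_gaussian(coords, A, x0, y0, wx, wy, offset):
        x, y = coords
        return (A * np.exp(-(x - x0) ** 2 / (2 * wx ** 2)
                           - (y - y0) ** 2 / (2 * wy ** 2)) + offset).ravel()

    def fit_emitter(roi):
        # roi: small 2D camera cutout around a candidate emitter.
        ny, nx = roi.shape
        y, x = np.mgrid[0:ny, 0:nx]
        p0 = (roi.max() - roi.min(), nx / 2.0, ny / 2.0, 1.5, 1.5, roi.min())
        popt, _ = curve_fit(elliptical_gaussian, (x, y), roi.ravel(), p0=p0)
        A, x0, y0, wx, wy, offset = popt
        return x0, y0, wx, wy   # z is then looked up from a (wx, wy) calibration curve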

  17. Fast and Precise 3D Fluorophore Localization based on Gradient Fitting

    PubMed Central

    Ma, Hongqiang; Xu, Jianquan; Jin, Jingyi; Gao, Ying; Lan, Li; Liu, Yang

    2015-01-01

    The astigmatism imaging approach has been widely used to encode the fluorophore’s 3D position in single-particle tracking and super-resolution localization microscopy. Here, we present a new high-speed localization algorithm based on gradient fitting to precisely decode the 3D subpixel position of the fluorophore. This algebraic algorithm determines the center of the fluorescent emitter by finding the position with the best-fit gradient direction distribution to the measured point spread function (PSF), and can retrieve the 3D subpixel position of the fluorophore in a single iteration. Through numerical simulation and experiments with mammalian cells, we demonstrate that our algorithm yields localization precision comparable to the traditional iterative Gaussian function fitting (GF) based method, while exhibiting an execution speed over two orders of magnitude faster. Our algorithm is a promising high-speed analysis method for 3D particle tracking and super-resolution localization microscopy. PMID:26390959

  18. Dosimetric Analysis of 3D Image-Guided HDR Brachytherapy Planning for the Treatment of Cervical Cancer: Is Point A-Based Dose Prescription Still Valid in Image-Guided Brachytherapy?

    SciTech Connect

    Kim, Hayeon; Beriwal, Sushil; Houser, Chris; Huq, M. Saiful

    2011-07-01

    The purpose of this study was to analyze the dosimetric outcome of 3D image-guided high-dose-rate (HDR) brachytherapy planning for cervical cancer treatment and compare dose coverage of high-risk clinical target volume (HRCTV) to traditional Point A dose. Thirty-two patients with stage IA2-IIIB cervical cancer were treated using computed tomography/magnetic resonance imaging-based image-guided HDR brachytherapy (IGBT). Brachytherapy dose prescription was 5.0-6.0 Gy per fraction for a total of 5 fractions. The HRCTV and organs at risk (OARs) were delineated following the GYN GEC/ESTRO guidelines. Total doses for HRCTV, OARs, Point A, and Point T from external beam radiotherapy and brachytherapy were summated and normalized to a biologically equivalent dose of 2 Gy per fraction (EQD2). The total planned D90 for HRCTV was 80-85 Gy, whereas the dose to 2 mL of bladder, rectum, and sigmoid was limited to 85 Gy, 75 Gy, and 75 Gy, respectively. The mean D90 and its standard deviation for HRCTV was 83.2 ± 4.3 Gy. This is significantly higher (p < 0.0001) than the mean value of the dose to Point A (78.6 ± 4.4 Gy). The dose levels of the OARs were within acceptable limits for most patients. The mean dose to 2 mL of bladder was 78.0 ± 6.2 Gy, whereas the mean dose to rectum and sigmoid were 57.2 ± 4.4 Gy and 66.9 ± 6.1 Gy, respectively. Image-based 3D brachytherapy provides adequate dose coverage to HRCTV, with acceptable dose to OARs in most patients. Dose to Point A was found to be significantly lower than the D90 for HRCTV calculated using the image-based technique. The paradigm shift from 2D point-dose dosimetry to IGBT in HDR cervical cancer treatment requires a more advanced concept of dosimetric evaluation, together with clinical outcome data on whether this approach improves local control and/or decreases toxicities.
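
    A minimal sketch of the EQD2 summation used above (linear-quadratic model). The alpha/beta values shown are the commonly used defaults (10 Gy for tumor, 3 Gy for late-responding organs at risk); the fraction numbers in the example are illustrative.

    def eqd2(dose_per_fraction, n_fractions, alpha_beta):
        # Equivalent dose in 2-Gy fractions for n_fractions of dose_per_fraction (Gy).
        total = dose_per_fraction * n_fractions
        return total * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

    # Example: 5 HDR fractions of 5.5 Gy to the HRCTV (alpha/beta = 10 Gy),
    # added to 45 Gy of external beam radiotherapy given in 25 fractions of 1.8 Gy.
    hrctv_total_eqd2 = eqd2(5.5, 5, 10.0) + eqd2(1.8, 25, 10.0)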

  19. Dosimetric analysis of 3D image-guided HDR brachytherapy planning for the treatment of cervical cancer: is point A-based dose prescription still valid in image-guided brachytherapy?

    PubMed

    Kim, Hayeon; Beriwal, Sushil; Houser, Chris; Huq, M Saiful

    2011-01-01

    The purpose of this study was to analyze the dosimetric outcome of 3D image-guided high-dose-rate (HDR) brachytherapy planning for cervical cancer treatment and compare dose coverage of high-risk clinical target volume (HRCTV) to traditional Point A dose. Thirty-two patients with stage IA2-IIIB cervical cancer were treated using computed tomography/magnetic resonance imaging-based image-guided HDR brachytherapy (IGBT). Brachytherapy dose prescription was 5.0-6.0 Gy per fraction for a total of 5 fractions. The HRCTV and organs at risk (OARs) were delineated following the GYN GEC/ESTRO guidelines. Total doses for HRCTV, OARs, Point A, and Point T from external beam radiotherapy and brachytherapy were summated and normalized to a biologically equivalent dose of 2 Gy per fraction (EQD2). The total planned D90 for HRCTV was 80-85 Gy, whereas the dose to 2 mL of bladder, rectum, and sigmoid was limited to 85 Gy, 75 Gy, and 75 Gy, respectively. The mean D90 and its standard deviation for HRCTV was 83.2 ± 4.3 Gy. This is significantly higher (p < 0.0001) than the mean value of the dose to Point A (78.6 ± 4.4 Gy). The dose levels of the OARs were within acceptable limits for most patients. The mean dose to 2 mL of bladder was 78.0 ± 6.2 Gy, whereas the mean dose to rectum and sigmoid were 57.2 ± 4.4 Gy and 66.9 ± 6.1 Gy, respectively. Image-based 3D brachytherapy provides adequate dose coverage to HRCTV, with acceptable dose to OARs in most patients. Dose to Point A was found to be significantly lower than the D90 for HRCTV calculated using the image-based technique. The paradigm shift from 2D point-dose dosimetry to IGBT in HDR cervical cancer treatment requires a more advanced concept of dosimetric evaluation, together with clinical outcome data on whether this approach improves local control and/or decreases toxicities. PMID:20488690

  20. 3D microscope

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2008-02-01

    In order to circumvent the fact that only one observer can view the image from a stereoscopic microscope, an attachment was devised for displaying the 3D microscopic image on a large LCD monitor for viewing by multiple observers in real time. The principle of operation, design, fabrication, and performance are presented, along with tolerance measurements relating to the properties of the cellophane half-wave plate used in the design.

  1. Evaluating the effect of stellar multiplicity on the point spread function of space-based weak lensing surveys

    NASA Astrophysics Data System (ADS)

    Kuntzer, T.; Courbin, F.; Meylan, G.

    2016-02-01

    The next generation of space-based telescopes used for weak lensing surveys will require exquisite point spread function (PSF) determination. Previously negligible effects may become important in the reconstruction of the PSF, in part because of the improved spatial resolution. In this paper, we show that unresolved multiple star systems can affect the ellipticity and size of the PSF and that this effect is not cancelled even when using many stars in the reconstruction process. We estimate the error in the reconstruction of the PSF due to the binaries in the star sample both analytically and with image simulations for different PSFs and stellar populations. The simulations support our analytical finding that the error on the size of the PSF is a function of the multiple-star distribution and of the intrinsic size of the PSF, i.e. its size if all stars were single. Similarly, the modification of each of the complex ellipticity components (e1,e2) depends on the distribution of multiple stars and on the intrinsic complex ellipticity. Using image simulations, we also show that the predicted error in the PSF shape is a theoretical limit that can be reached only if a large number of stars (up to thousands) is used together to build the PSF at any desired spatial position. For a lower number of stars, the PSF reconstruction is worse. Finally, we compute the effect of binarity for different stellar magnitudes and show that bright stars alter the PSF size and ellipticity more than faint stars. This may affect the design of PSF calibration strategies and the choice of the related calibration fields.
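
    For illustration, a hedged sketch of the quantities being perturbed: PSF size and complex ellipticity components (e1, e2) computed from unweighted second moments of a (possibly blended) PSF image. Real weak-lensing pipelines use weighted moments or model fits; names here are illustrative.

    import numpy as np

    def psf_moments(img):
        ny, nx = img.shape
        y, x = np.mgrid[0:ny, 0:nx]
        flux = img.sum()
        xc, yc = (x * img).sum() / flux, (y * img).sum() / flux
        Qxx = ((x - xc) ** 2 * img).sum() / flux
        Qyy = ((y - yc) ** 2 * img).sum() / flux
        Qxy = ((x - xc) * (y - yc) * img).sum() / flux
        size = np.sqrt(Qxx + Qyy)                # simple size measure
        e1 = (Qxx - Qyy) / (Qxx + Qyy)           # complex ellipticity components
        e2 = 2.0 * Qxy / (Qxx + Qyy)
        return size, e1, e2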

  2. SU-E-T-157: Evaluation and Comparison of Doses to Pelvic Lymph Nodes and to Point B with 3D Image Guided Treatment Planning for High Dose Brachytherapy for Treatment of Cervical Cancer

    SciTech Connect

    Bhandare, N.

    2014-06-01

    Purpose: To estimate and compare the doses received by the obturator, external and internal iliac lymph nodes and by point B. Methods: CT-MR fused image sets of 15 patients, obtained for each of 5 fractions of HDR brachytherapy using a tandem and ring applicator, were used to generate treatment plans optimized to deliver the prescription dose to HRCTV-D90 and to minimize the doses to organs at risk (OARs). For each image set, the target volumes (GTV, HRCTV), OARs (bladder, rectum, sigmoid), and both left and right pelvic lymph nodes (obturator, external and internal iliac) were delineated. Dose-volume histograms (DVH) were generated for the pelvic nodal groups (left and right obturator groups, internal and external iliac chains). Per-fraction DVH parameters used for dose comparison included the dose to 100% of the volume (D100) and the doses received by 2 cc (D2cc), 1 cc (D1cc), and 0.1 cc (D0.1cc) of nodal volume. The dose to point B was compared with each DVH parameter using a two-sided t-test. Pearson correlations were determined to examine the relationship of the point B dose with the nodal DVH parameters. Results: FIGO clinical stage varied from IB1 to IIIB. The median pretreatment tumor diameter measured on MRI was 4.5 cm (2.7-6.4 cm). The median dose to bilateral point B was 1.20 Gy ± 0.12, or 20% of the prescription dose. The correlation coefficients were all <0.60 for all nodal DVH parameters, indicating a low degree of correlation. Only the D2cc of the obturator nodes was not significantly different from the point B dose on the t-test. Conclusion: The dose to point B does not adequately represent the dose to any specific pelvic nodal group. When using image-guided, 3D dose-volume-optimized treatment, nodal groups should be individually identified and delineated to obtain the doses received by the pelvic nodes.

  3. 3D resolution enhancement of deep-tissue imaging based on virtual spatial overlap modulation microscopy.

    PubMed

    Su, I-Cheng; Hsu, Kuo-Jen; Shen, Po-Ting; Lin, Yen-Yin; Chu, Shi-Wei

    2016-07-25

    During the last decades, several resolution enhancement methods for optical microscopy beyond diffraction limit have been developed. Nevertheless, those hardware-based techniques typically require strong illumination, and fail to improve resolution in deep tissue. Here we develop a high-speed computational approach, three-dimensional virtual spatial overlap modulation microscopy (3D-vSPOM), which immediately solves the strong-illumination issue. By amplifying only the spatial frequency component corresponding to the un-scattered point-spread-function at focus, plus 3D nonlinear value selection, 3D-vSPOM shows significant resolution enhancement in deep tissue. Since no iteration is required, 3D-vSPOM is much faster than iterative deconvolution. Compared to non-iterative deconvolution, 3D-vSPOM does not need a priori information of point-spread-function at deep tissue, and provides much better resolution enhancement plus greatly improved noise-immune response. This method is ready to be amalgamated with two-photon microscopy or other laser scanning microscopy to enhance deep-tissue resolution. PMID:27464077

  4. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    SciTech Connect

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  5. Partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    PubMed Central

    Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert

    2010-01-01

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner’s center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method’s correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV
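
    A hedged sketch of the expectation-maximization (Richardson-Lucy-type) update underlying such corrections, shown here with a spatially invariant Gaussian PSF for brevity; the method described above uses spatially varying kernel widths and a stopping criterion based on the per-iteration correction matrix. Names and the fixed iteration count are illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def em_restore(observed, sigma_psf, n_iter=20, eps=1e-8):
        estimate = np.full_like(observed, observed.mean(), dtype=float)
        for _ in range(n_iter):
            blurred = gaussian_filter(estimate, sigma_psf)
            correction = gaussian_filter(observed / (blurred + eps), sigma_psf)
            estimate *= correction            # multiplicative EM update
        return estimate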

  6. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  7. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  8. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The current toolset includes a tool for selecting a point of interest and a ruler tool for displaying the distance between, and the positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  9. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real-time. The data is acquired in motion, thus it provides multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement which can vary due to a variety of factors such as angle of incidence, distance between the device and the subject and environmental sensor data or other factors influencing a confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.

  10. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  11. Mirrors for X-ray telescopes: Fresnel diffraction-based computation of point spread functions from metrology

    NASA Astrophysics Data System (ADS)

    Raimondi, L.; Spiga, D.

    2015-01-01

    Context. The imaging sharpness of an X-ray telescope is chiefly determined by the optical quality of its focusing optics, which in turn mostly depends on the shape accuracy and the surface finishing of the grazing-incidence X-ray mirrors that compose the optical modules. To ensure the imaging performance during the mirror manufacturing, a fundamental step is predicting the mirror point spread function (PSF) from the metrology of its surface. Traditionally, the PSF computation in X-rays is assumed to be different depending on whether the surface defects are classified as figure errors or roughness. This classical approach, however, requires setting a boundary between these two asymptotic regimes, which is not known a priori. Aims: The aim of this work is to overcome this limit by providing analytical formulae that are valid at any light wavelength, for computing the PSF of an X-ray mirror shell from the measured longitudinal profiles and the roughness power spectral density, without distinguishing spectral ranges with different treatments. Methods: The method we adopted is based on the Huygens-Fresnel principle for computing the diffracted intensity from measured or modeled profiles. In particular, we have simplified the computation of the surface integral to only one dimension, owing to the grazing incidence that reduces the influence of the azimuthal errors by orders of magnitude. The method can be extended to optical systems with an arbitrary number of reflections - in particular the Wolter-I, which is frequently used in X-ray astronomy - and can be used in both near- and far-field approximation. Finally, it accounts simultaneously for profile, roughness, and aperture diffraction. Results: We describe the formalism with which one can self-consistently compute the PSF of grazing-incidence mirrors, and we show some PSF simulations including the UV band, where the aperture diffraction dominates the PSF, and hard X-rays where the X-ray scattering has a major impact

  12. 3D polarimetric purity

    NASA Astrophysics Data System (ADS)

    Gil, José J.; San José, Ignacio

    2010-11-01

    From our previous definition of the indices of polarimetric purity for 3D light beams [J.J. Gil, J.M. Correas, P.A. Melero and C. Ferreira, Monogr. Semin. Mat. G. de Galdeano 31, 161 (2004)], an analysis of their geometric and physical interpretation is presented. It is found that, in agreement with previous results, the first parameter is a measure of the degree of polarization, whereas the second parameter (called the degree of directionality) is a measure of the mean angular aperture of the direction of propagation of the corresponding light beam. This pair of invariant, non-dimensional, indices of polarimetric purity contains complete information about the polarimetric purity of a light beam. The overall degree of polarimetric purity is obtained as a weighted quadratic average of the degree of polarization and the degree of directionality.
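
    A minimal numerical sketch, under the definitions summarized above: the indices of polarimetric purity follow from the ordered eigenvalues of the 3x3 polarization (coherency) matrix, and the overall degree of polarimetric purity is obtained as their weighted quadratic average. Variable names are illustrative.

    import numpy as np

    def polarimetric_purity(R):
        # R: 3x3 Hermitian, positive semidefinite polarization (coherency) matrix.
        lam = np.sort(np.linalg.eigvalsh(R))[::-1] / np.trace(R).real
        P1 = lam[0] - lam[1]                     # first index (degree of polarization)
        P2 = lam[0] + lam[1] - 2.0 * lam[2]      # second index (degree of directionality)
        P_delta = np.sqrt((3.0 * P1 ** 2 + P2 ** 2) / 4.0)   # overall purity
        return P1, P2, P_delta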

  13. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r, θ, z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  14. Parallel blind deconvolution of astronomical images based on the fractal energy ratio of the image and regularization of the point spread function

    NASA Astrophysics Data System (ADS)

    Jia, Peng; Cai, Dongmei; Wang, Dong

    2014-11-01

    A parallel blind deconvolution algorithm is presented. The algorithm incorporates constraints on the point spread function (PSF) derived from the physical imaging process. Additionally, in order to obtain an effective restored image, the fractal energy ratio is used as an evaluation criterion to estimate image quality. The algorithm is parallelized in a fine-grained manner to increase the calculation speed. Results of numerical and real experiments indicate that the algorithm is effective.

  15. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  16. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  17. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to unphysiological kinematics of the knee implant. To get an idea of the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. Therefore we developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. Firstly, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Secondly, an initial preconfiguration of the implants by the user is still necessary for the following step: the user has to perform a rough preconfiguration of both remaining prosthesis models, so that the fine matching process gets a reasonable starting point. After that, an automated gradient-based fine matching process determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration and subsequent fine registration with the automatic fine matching process).

  18. An open-source deconvolution software package for 3-D quantitative fluorescence microscopy imaging

    PubMed Central

    SUN, Y.; DAVIS, P.; KOSMACEK, E. A.; IANZINI, F.; MACKEY, M. A.

    2010-01-01

    Summary Deconvolution techniques have been widely used for restoring the 3-D quantitative information of an unknown specimen observed using a wide-field fluorescence microscope. Deconv, an open-source deconvolution software package, was developed for 3-D quantitative fluorescence microscopy imaging and was released under the GNU Public License. Deconv provides numerical routines for simulation of a 3-D point spread function and deconvolution routines implementing three constrained iterative deconvolution algorithms: one based on a Poisson noise model and two others based on a Gaussian noise model. These algorithms are presented and evaluated using synthetic images and experimentally obtained microscope images, and the use of the library is explained. Deconv allows users to assess the utility of these deconvolution algorithms and to determine which are suited for a particular imaging application. The design of Deconv makes it easy for deconvolution capabilities to be incorporated into existing imaging applications. PMID:19941558
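
    The Poisson-noise branch of the constrained iterative algorithms mentioned above is typified by Richardson-Lucy iteration. The sketch below is not the Deconv library code, only a minimal NumPy/SciPy illustration of that iteration for a 3-D image stack and a known PSF.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_3d(image, psf, n_iter=50, eps=1e-12):
    """Constrained iterative deconvolution under a Poisson noise model.
    `image` is the blurred 3-D stack and `psf` the 3-D point spread function;
    the estimate stays non-negative by construction."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1, ::-1, ::-1]
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / (blurred + eps)            # data / model
        estimate *= fftconvolve(ratio, psf_flipped, mode='same')
    return estimate
```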

  19. 3D imaging in volumetric scattering media using phase-space measurements.

    PubMed

    Liu, Hsiou-Yuan; Jonas, Eric; Tian, Lei; Zhong, Jingshan; Recht, Benjamin; Waller, Laura

    2015-06-01

    We demonstrate the use of phase-space imaging for 3D localization of multiple point sources inside scattering material. The effect of scattering is to spread angular (spatial frequency) information, which can be measured by phase-space imaging. We derive a multi-slice forward model for homogeneous volumetric scattering, then develop a reconstruction algorithm that exploits sparsity in order to further constrain the problem. By using 4D measurements for 3D reconstruction, the dimensionality mismatch provides significant robustness to multiple scattering, with either static or dynamic diffusers. Experimentally, our high-resolution 4D phase-space data is collected by a spectrogram setup, with results successfully recovering the 3D positions of multiple LEDs embedded in turbid scattering media. PMID:26072807

  20. 3D camera tracking from disparity images

    NASA Astrophysics Data System (ADS)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between the lenses of the 3D camera and the intrinsic parameters are known. The proposed method reduces camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between the initial lenses using a normalized correlation method. In conjunction with the matched features, we compute disparity images. When the camera moves, the corresponding feature points obtained from each lens of the 3D camera are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via essential matrices. The essential matrices are computed from the fundamental matrix, which is calculated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation matrix by d-motion. This is required because the camera motion obtained from the essential matrix is determined only up to scale. Finally, we optimize camera motion using multiple epipolar constraints between the lenses and d-motion constraints computed from the disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using 3D cameras, and surveillance systems that need not only depth information but also camera motion parameters in real time.
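
    As a rough sketch of the relative-pose step (fundamental matrix via the normalized 8-point algorithm with RANSAC, essential matrix from known intrinsics, pose recovered up to scale), OpenCV can be used as below. The point arrays `pts_l`, `pts_r` and the intrinsic matrix `K` are assumed inputs; this is not the authors' implementation.

```python
import cv2
import numpy as np

def relative_pose(pts_l, pts_r, K):
    """Estimate relative rotation R and translation direction t between two
    lenses from matched image points (N x 2 float arrays). The translation is
    recovered only up to scale, as noted in the abstract."""
    F, inlier_mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.99)
    E = K.T @ F @ K                                # essential matrix from known intrinsics
    inl = inlier_mask.ravel() == 1
    _, R, t, _ = cv2.recoverPose(E, pts_l[inl], pts_r[inl], K)
    return R, t, inl
```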

  1. Interior Reconstruction Using the 3d Hough Transform

    NASA Astrophysics Data System (ADS)

    Dumitru, R.-C.; Borrmann, D.; Nüchter, A.

    2013-02-01

    Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need arises to characterize and quantify complex environments in an automatic fashion, posing challenges for data analysis. This paper presents a system for 3D modeling by detecting planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
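
    A minimal sketch of plane detection with the 3D Hough transform is given below: every point votes for plane parameters (theta, phi, rho) in a discretized accumulator and the strongest cells are returned. The bin counts and voting scheme here are illustrative assumptions, not the refined variant used in the published system.

```python
import numpy as np

def hough_planes(points, n_theta=45, n_phi=90, n_rho=100, top_k=5):
    """Vote for planes rho = p . n(theta, phi) over a discretized parameter
    space and return the top_k strongest (theta, phi, rho, votes) cells.
    `points` is an (N, 3) array of 3D coordinates."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    r_max = np.linalg.norm(points, axis=1).max()
    rhos = np.linspace(-r_max, r_max, n_rho)
    # Unit normal for every (theta, phi) accumulator cell.
    st, ct = np.sin(thetas), np.cos(thetas)
    normals = np.stack([np.outer(st, np.cos(phis)),
                        np.outer(st, np.sin(phis)),
                        np.outer(ct, np.ones(n_phi))], axis=-1)    # (n_theta, n_phi, 3)
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=np.int32)
    t_idx, p_idx = np.indices((n_theta, n_phi))
    for p in points:
        rho = normals @ p                                          # signed plane distances
        r_idx = np.clip(np.searchsorted(rhos, rho), 0, n_rho - 1)
        acc[t_idx, p_idx, r_idx] += 1                              # one vote per (theta, phi)
    best = np.argsort(acc, axis=None)[::-1][:top_k]
    return [(thetas[t], phis[f], rhos[r], int(acc[t, f, r]))
            for t, f, r in zip(*np.unravel_index(best, acc.shape))]
```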

  2. Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio

    2013-05-01

    With the advent of autostereoscopic display techniques and the increased demand for smartphones, there has been significant growth in mobile TV markets. The rapid growth in technical, economic, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices so that people have more opportunities to come into contact with 3D content anytime and anywhere. Even though mobile 3D technology is driving the current market growth, one important issue must be considered for consistent development and growth in the display market: human factors linked to mobile 3D viewing should be taken into consideration before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors for establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study aims to investigate the effect of viewing distance on the human visual system when viewers are exposed to mobile 3D environments. By recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help protect viewers against undesirable 3D effects, and lead to gradual progress towards human-friendly mobile 3D viewing.

  3. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm, which generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction of the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step length. Our approach uses an explicit time-stepping finite-difference scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the discussion of which domain is more efficient still remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time-step. We tackled this problem using two different approaches. The first one makes better use of resources for small models of dimension equal
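
    Purely to illustrate the explicit time-stepping scheme mentioned above (2nd order in time, 4th order in space), the sketch below advances a 2-D acoustic wavefield by one step. The actual code solves the full 3-D elastic equations, so this is only a stand-in for the stencil idea.

```python
import numpy as np

def acoustic_step(p_prev, p_curr, vel, dt, dx):
    """Advance a 2-D acoustic wavefield by one explicit time step,
    2nd order in time and 4th order in space. `vel` is the velocity model
    (a scalar or an array of the same shape as the wavefield)."""
    c0, c1, c2 = -5.0 / 2.0, 4.0 / 3.0, -1.0 / 12.0
    lap = np.zeros_like(p_curr)
    # 4th-order Laplacian on interior points only (simple zero boundary).
    lap[2:-2, 2:-2] = (
        2.0 * c0 * p_curr[2:-2, 2:-2]
        + c1 * (p_curr[3:-1, 2:-2] + p_curr[1:-3, 2:-2]
                + p_curr[2:-2, 3:-1] + p_curr[2:-2, 1:-3])
        + c2 * (p_curr[4:, 2:-2] + p_curr[:-4, 2:-2]
                + p_curr[2:-2, 4:] + p_curr[2:-2, :-4])
    ) / dx**2
    # Leapfrog update: p_next = 2*p_curr - p_prev + (v*dt)^2 * Laplacian.
    return 2.0 * p_curr - p_prev + (vel * dt) ** 2 * lap
```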

  4. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  5. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called 'lidar' (light detection and ranging) to create a highly accurate virtual reality map of the Nation. 3D maps have many uses, with new uses being discovered all the time.

  6. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. The use of modern technology and methods in 3D surveying and 3D robotics facilitates and improves this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a computer-aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  7. FARGO3D: Hydrodynamics/magnetohydrodynamics code

    NASA Astrophysics Data System (ADS)

    Benítez Llambay, Pablo; Masset, Frédéric

    2015-09-01

    A successor of FARGO (ascl:1102.017), FARGO3D is a versatile HD/MHD code that runs on clusters of CPUs or GPUs, with special emphasis on protoplanetary disks. FARGO3D offers Cartesian, cylindrical or spherical geometry; 1-, 2- or 3-dimensional calculations; and orbital advection (aka FARGO) for HD and MHD calculations. As in FARGO, a simple Runge-Kutta N-body solver may be used to describe the orbital evolution of embedded point-like objects. There is no need to know CUDA; users can develop new functions in C and have them translated to CUDA automatically to run on GPUs.

  8. Modular 3-D Transport model

    EPA Science Inventory

    MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab...

  9. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  10. LLNL-Earth3D

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  11. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging by real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated by 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated the processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Great differences were found between the volumes of the liver findings estimated by the three different techniques. 3D ultrasound represents a valuable method for judging the morphological appearance of abdominal findings. The possibility of volumetric measurement enlarges its potential diagnostic significance. Further clinical investigations are necessary to find out whether definite differentiation between benign and malignant findings is possible. PMID:7919882

  12. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  13. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  14. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  15. Methods for comparing 3D surface attributes

    NASA Astrophysics Data System (ADS)

    Pang, Alex; Freeman, Adam

    1996-03-01

    A common task in data analysis is to compare two or more sets of data, statistics, presentations, etc. A predominant method in use is side-by-side visual comparison of images. While straightforward, it burdens the user with the task of discerning the differences between the two images. The user is further taxed when the images are of 3D scenes. This paper presents several methods for analyzing the extent, magnitude, and manner in which surfaces in 3D differ in their attributes. The surface geometry is assumed to be identical and only the surface attributes (color, texture, etc.) are variable. As a case in point, we examine the differences obtained when a 3D scene is rendered progressively using radiosity with different form factor calculation methods. The comparison methods include extensions of simple methods such as mapping difference information to color or transparency, and more recent methods including the use of surface texture, perturbation, and adaptive placement of error glyphs.

  16. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Canada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  17. Lifting Object Detection Datasets into 3D.

    PubMed

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken the center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions such as 3D scanning or manual design, which scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  18. 3D-Measuring for Head Shape Covering Hair

    NASA Astrophysics Data System (ADS)

    Kato, Tsukasa; Hattori, Koosuke; Nomura, Takuya; Taguchi, Ryo; Hoguro, Masahiro; Umezaki, Taizo

    3D measurement is attracting attention as 3D displays spread rapidly. The face and head in particular need to be measured, for example for content production. However, measuring hair remains a difficult problem. The purpose of this research is therefore to measure the face and hair with a phase-shift method. By using sinusoidal fringe images adapted for hair measurement, the problems specific to measuring hair, namely its dark color and reflection properties, are addressed.
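
    As a sketch of the underlying phase-shift principle (not the authors' hair-adapted patterns), the wrapped phase can be recovered from four fringe images shifted by 90 degrees; unwrapping and phase-to-height conversion depend on the projector and camera geometry and are omitted.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four sinusoidal fringe images shifted by
    0, 90, 180 and 270 degrees: phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(i3.astype(float) - i1, i0.astype(float) - i2)
```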

  19. Bioprinting of 3D hydrogels.

    PubMed

    Stanton, M M; Samitier, J; Sánchez, S

    2015-08-01

    Three-dimensional (3D) bioprinting has recently emerged as an extension of 3D material printing, by using biocompatible or cellular components to build structures in an additive, layer-by-layer methodology for encapsulation and culture of cells. These 3D systems allow for cell culture in a suspension for formation of highly organized tissue or controlled spatial orientation of cell environments. The in vitro 3D cellular environments simulate the complexity of an in vivo environment and natural extracellular matrices (ECM). This paper will focus on bioprinting utilizing hydrogels as 3D scaffolds. Hydrogels are advantageous for cell culture as they are highly permeable to cell culture media, nutrients, and waste products generated during metabolic cell processes. They have the ability to be fabricated in customized shapes with various material properties with dimensions at the micron scale. 3D hydrogels are a reliable method for biocompatible 3D printing and have applications in tissue engineering, drug screening, and organ on a chip models. PMID:26066320

  20. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
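
    A sketch of the keypoint detection and matching step, reporting the residual vertical disparity, might look as follows; OpenCV ORB features are used here as a stand-in for the paper's keypoint detector, and the roll/pitch/yaw/scale estimation itself is not reproduced.

```python
import cv2
import numpy as np

def vertical_disparity(img_left, img_right, max_matches=200):
    """Detect and match keypoints between the left and right frames and
    return the median vertical disparity in pixels, a simple proxy for the
    residual calibration error discussed in the abstract."""
    orb = cv2.ORB_create(2000)
    kp_l, des_l = orb.detectAndCompute(img_left, None)
    kp_r, des_r = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)[:max_matches]
    dy = np.array([kp_r[m.trainIdx].pt[1] - kp_l[m.queryIdx].pt[1] for m in matches])
    return float(np.median(dy)), len(matches)
```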

  1. Arena3D: visualization of biological networks in 3D

    PubMed Central

    Pavlopoulos, Georgios A; O'Donoghue, Seán I; Satagopam, Venkata P; Soldatos, Theodoros G; Pafilis, Evangelos; Schneider, Reinhard

    2008-01-01

    Background Complexity is a key problem when visualizing biological networks; as the number of entities increases, most graphical views become incomprehensible. Our goal is to enable many thousands of entities to be visualized meaningfully and with high performance. Results We present a new visualization tool, Arena3D, which introduces a new concept of staggered layers in 3D space. Related data – such as proteins, chemicals, or pathways – can be grouped onto separate layers and arranged via layout algorithms, such as Fruchterman-Reingold, distance geometry, and a novel hierarchical layout. Data on a layer can be clustered via k-means, affinity propagation, Markov clustering, neighbor joining, tree clustering, or UPGMA ('unweighted pair-group method with arithmetic mean'). A simple input format defines the name and URL for each node, and defines connections or similarity scores between pairs of nodes. The use of Arena3D is illustrated with datasets related to Huntington's disease. Conclusion Arena3D is a user friendly visualization tool that is able to visualize biological or any other network in 3D space. It is free for academic use and runs on any platform. It can be downloaded or launched directly from the project website. The Java3D library and Java 1.5 need to be pre-installed for the software to run. PMID:19040715

  2. 3D-GNOME: an integrated web service for structural modeling of the 3D genome

    PubMed Central

    Szalaj, Przemyslaw; Michalski, Paul J.; Wróblewski, Przemysław; Tang, Zhonghui; Kadlof, Michal; Mazzocco, Giovanni; Ruan, Yijun; Plewczynski, Dariusz

    2016-01-01

    Recent advances in high-throughput chromosome conformation capture (3C) technology, such as Hi-C and ChIA-PET, have demonstrated the importance of 3D genome organization in development, cell differentiation and transcriptional regulation. There is now a widespread need for computational tools to generate and analyze 3D structural models from 3C data. Here we introduce our 3D GeNOme Modeling Engine (3D-GNOME), a web service which generates 3D structures from 3C data and provides tools to visually inspect and annotate the resulting structures, in addition to a variety of statistical plots and heatmaps which characterize the selected genomic region. Users submit a bedpe (paired-end BED format) file containing the locations and strengths of long range contact points, and 3D-GNOME simulates the structure and provides a convenient user interface for further analysis. Alternatively, a user may generate structures using published ChIA-PET data for the GM12878 cell line by simply specifying a genomic region of interest. 3D-GNOME is freely available at http://3dgnome.cent.uw.edu.pl/. PMID:27185892

  3. 3D-GNOME: an integrated web service for structural modeling of the 3D genome.

    PubMed

    Szalaj, Przemyslaw; Michalski, Paul J; Wróblewski, Przemysław; Tang, Zhonghui; Kadlof, Michal; Mazzocco, Giovanni; Ruan, Yijun; Plewczynski, Dariusz

    2016-07-01

    Recent advances in high-throughput chromosome conformation capture (3C) technology, such as Hi-C and ChIA-PET, have demonstrated the importance of 3D genome organization in development, cell differentiation and transcriptional regulation. There is now a widespread need for computational tools to generate and analyze 3D structural models from 3C data. Here we introduce our 3D GeNOme Modeling Engine (3D-GNOME), a web service which generates 3D structures from 3C data and provides tools to visually inspect and annotate the resulting structures, in addition to a variety of statistical plots and heatmaps which characterize the selected genomic region. Users submit a bedpe (paired-end BED format) file containing the locations and strengths of long range contact points, and 3D-GNOME simulates the structure and provides a convenient user interface for further analysis. Alternatively, a user may generate structures using published ChIA-PET data for the GM12878 cell line by simply specifying a genomic region of interest. 3D-GNOME is freely available at http://3dgnome.cent.uw.edu.pl/. PMID:27185892

  4. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  5. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  6. GestAction3D: A Platform for Studying Displacements and Deformations of 3D Objects Using Hands

    NASA Astrophysics Data System (ADS)

    Lingrand, Diane; Renevier, Philippe; Pinna-Déry, Anne-Marie; Cremaschi, Xavier; Lion, Stevens; Rouel, Jean-Guilhem; Jeanne, David; Cuisinaud, Philippe; Soula, Julien

    We present a low-cost hand-based device coupled with a 3D motion recovery engine and 3D visualization. This platform aims at studying ergonomic 3D interactions in order to manipulate and deform 3D models by interacting with hands on 3D meshes. Deformations are done using different modes of interaction that we detail in the paper. Finger extremities are attached to vertices, edges or facets. Switching from one mode to another or changing the point of view is done using gestures. The determination of the most adequate gestures is part of this work

  7. Fdf in US3D

    NASA Astrophysics Data System (ADS)

    Otis, Collin; Ferrero, Pietro; Candler, Graham; Givi, Peyman

    2013-11-01

    The scalar filtered mass density function (SFMDF) methodology is implemented into the computer code US3D. This is an unstructured Eulerian finite volume hydrodynamic solver and has proven very effective for simulation of compressible turbulent flows. The resulting SFMDF-US3D code is employed for large eddy simulation (LES) on unstructured meshes. Simulations are conducted of subsonic and supersonic flows under non-reacting and reacting conditions. The consistency and the accuracy of the simulated results are assessed along with appraisal of the overall performance of the methodology. The SFMDF-US3D is now capable of simulating high speed flows in complex configurations.

  8. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  9. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  10. Wavefront construction in 3-D

    SciTech Connect

    Chilcoat, S.R.; Hildebrand, S.T.

    1995-12-31

    Travel time computation in inhomogeneous media is essential for pre-stack Kirchhoff imaging in areas such as the sub-salt province in the Gulf of Mexico. The 2D algorithm published by Vinje et al. has been extended to 3D to compute wavefronts in complicated inhomogeneous media. The 3D wavefront construction algorithm provides many advantages over conventional ray tracing and other methods of computing travel times in 3D. The algorithm dynamically maintains a reasonably consistent ray density without making a priori guesses at the number of rays to shoot. The determination of caustics in 3D is a straightforward geometric procedure. The wavefront algorithm also enables the computation of multi-valued travel time surfaces.

  11. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target with a pulse time-of-flight measurement. Because of the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. To remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Both the temporal and the spatial correlation of light are utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images with a low sampling rate.
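
    Ghost-imaging reconstructions, including 3D variants, build on a second-order intensity correlation between reference patterns and bucket-detector signals. The sketch below shows only that correlation step for assumed inputs; the heterodyne range measurement of the paper is not modeled.

```python
import numpy as np

def ghost_image(patterns, bucket):
    """Reconstruct an image from correlations between reference patterns and
    single-pixel (bucket) measurements:
    G(x, y) = <I_k(x, y) * b_k> - <I_k(x, y)> * <b_k>.
    `patterns` has shape (K, H, W) and `bucket` has shape (K,)."""
    patterns = np.asarray(patterns, dtype=float)
    bucket = np.asarray(bucket, dtype=float)
    return (patterns * bucket[:, None, None]).mean(axis=0) \
        - patterns.mean(axis=0) * bucket.mean()
```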

  12. A 2D driven 3D vessel segmentation algorithm for 3D digital subtraction angiography data.

    PubMed

    Spiegel, M; Redel, T; Struffert, T; Hornegger, J; Doerfler, A

    2011-10-01

    Cerebrovascular disease is among the leading causes of death in western industrial nations. 3D rotational angiography delivers indispensable information on vessel morphology and pathology. Physicians make use of this to analyze vessel geometry in detail, i.e. vessel diameters, location and size of aneurysms, to come up with a clinical decision. 3D segmentation is a crucial step in this pipeline. Although a lot of different methods are available nowadays, all of them lack a method to validate the results for the individual patient. Therefore, we propose a novel 2D digital subtraction angiography (DSA)-driven 3D vessel segmentation and validation framework. 2D DSA projections are clinically considered as gold standard when it comes to measurements of vessel diameter or the neck size of aneurysms. An ellipsoid vessel model is applied to deliver the initial 3D segmentation. To assess the accuracy of the 3D vessel segmentation, its forward projections are iteratively overlaid with the corresponding 2D DSA projections. Local vessel discrepancies are modeled by a global 2D/3D optimization function to adjust the 3D vessel segmentation toward the 2D vessel contours. Our framework has been evaluated on phantom data as well as on ten patient datasets. Three 2D DSA projections from varying viewing angles have been used for each dataset. The novel 2D driven 3D vessel segmentation approach shows superior results against state-of-the-art segmentations like region growing, i.e. an improvement of 7.2% points in precision and 5.8% points for the Dice coefficient. This method opens up future clinical applications requiring the greatest vessel accuracy, e.g. computational fluid dynamic modeling. PMID:21908904

  13. A 2D driven 3D vessel segmentation algorithm for 3D digital subtraction angiography data

    NASA Astrophysics Data System (ADS)

    Spiegel, M.; Redel, T.; Struffert, T.; Hornegger, J.; Doerfler, A.

    2011-10-01

    Cerebrovascular disease is among the leading causes of death in western industrial nations. 3D rotational angiography delivers indispensable information on vessel morphology and pathology. Physicians make use of this to analyze vessel geometry in detail, i.e. vessel diameters, location and size of aneurysms, to come up with a clinical decision. 3D segmentation is a crucial step in this pipeline. Although a lot of different methods are available nowadays, all of them lack a method to validate the results for the individual patient. Therefore, we propose a novel 2D digital subtraction angiography (DSA)-driven 3D vessel segmentation and validation framework. 2D DSA projections are clinically considered as gold standard when it comes to measurements of vessel diameter or the neck size of aneurysms. An ellipsoid vessel model is applied to deliver the initial 3D segmentation. To assess the accuracy of the 3D vessel segmentation, its forward projections are iteratively overlaid with the corresponding 2D DSA projections. Local vessel discrepancies are modeled by a global 2D/3D optimization function to adjust the 3D vessel segmentation toward the 2D vessel contours. Our framework has been evaluated on phantom data as well as on ten patient datasets. Three 2D DSA projections from varying viewing angles have been used for each dataset. The novel 2D driven 3D vessel segmentation approach shows superior results against state-of-the-art segmentations like region growing, i.e. an improvement of 7.2% points in precision and 5.8% points for the Dice coefficient. This method opens up future clinical applications requiring the greatest vessel accuracy, e.g. computational fluid dynamic modeling.

  14. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  15. 3D measurement using circular gratings

    NASA Astrophysics Data System (ADS)

    Harding, Kevin

    2013-09-01

    3D measurement using methods of structured light is well known in the industry. Most such systems use some variation of straight lines, either as simple lines or with some form of encoding. This geometry assumes the lines will be projected from one side and viewed from another to generate the profile information. But what about applications where a wide triangulation angle may not be practical, particularly at longer standoff distances? This paper explores the use of circular grating patterns projected from a center point to achieve 3D information. Originally suggested by John Caulfield around 1990, the method had some interesting potential, particularly if combined with means of measurement other than traditional triangulation, including depth-from-focus methods. The possible advantages of a central reference point in the projected pattern may offer some capabilities not as easily attained with a linear grating pattern. This paper will explore the pros and cons of the method and present some examples of possible applications.

  16. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  17. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  18. 3D scene reconstruction from multi-aperture images

    NASA Astrophysics Data System (ADS)

    Mao, Miao; Qin, Kaihuai

    2014-04-01

    With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. First, images with different apertures are captured via a programmable aperture. Second, we use the SIFT method for feature point matching. We then exploit binocular stereo vision to calculate the camera parameters and the 3D positions of the matched points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing a dense 3D scene.
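
    As a sketch of the sparse-reconstruction step (triangulating matched points once the camera parameters are known), OpenCV's triangulation routine can be used as below. The projection matrices `P1`, `P2` and the matched point arrays are assumed inputs, and the dense patch-based multi-view stereo stage is not shown.

```python
import cv2
import numpy as np

def triangulate(P1, P2, pts1, pts2):
    """Triangulate matched image points from two views into 3-D positions.
    `P1`, `P2` are 3x4 projection matrices; `pts1`, `pts2` are (N, 2) arrays
    of corresponding pixel coordinates."""
    X_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
    return (X_h[:3] / X_h[3]).T                    # dehomogenize to (N, 3) points
```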

  19. An automated tool for 3D tracking of single molecules in living cells

    NASA Astrophysics Data System (ADS)

    Gardini, L.; Capitanio, M.; Pavone, F. S.

    2015-03-01

    Since the behaviour of proteins and biological molecules is tightly related to the cell's environment, more and more microscopy techniques are moving from in vitro experiments to experiments in living cells. Observing both diffusion and active transport processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images and high temporal resolution. Since protein dynamics inside a cell involve all three dimensions, we developed an automated routine for 3D tracking of single fluorescent molecules inside living cells with nanometer accuracy, by exploiting the properties of the point spread function of out-of-focus quantum dots bound to the protein of interest.

  20. 3D dosimetry fundamentals: gels and plastics

    NASA Astrophysics Data System (ADS)

    Lepage, M.; Jordan, K.

    2010-11-01

    Many different materials have been developed for 3D radiation dosimetry since the Fricke gel dosimeter was first proposed in 1984. This paper is intended as an entry point into these materials where we provide an overview of the basic principles for the most explored materials. References to appropriate sources are provided such that the reader interested in more details can quickly find relevant information.

  1. Computational optical-sectioning microscopy for 3D quantization of cell motion: results and challenges

    NASA Astrophysics Data System (ADS)

    McNally, James G.

    1994-09-01

    How cells move and navigate within a 3D tissue mass is of central importance in such diverse problems as embryonic development, wound healing and metastasis. This locomotion can now be visualized and quantified by using computational optical-sectioning microscopy. In this approach, a series of 2D images at different depths in a specimen are stacked to construct a 3D image, and then, with a knowledge of the microscope's point-spread function, the actual distribution of fluorescent intensity in the specimen is estimated via computation. When coupled with wide-field optics and a cooled CCD camera, this approach permits non-destructive 3D imaging of living specimens over long time periods. With these techniques, we have observed a complex diversity of motile behaviors in a model embryonic system, the cellular slime mold Dictyostelium. To understand the mechanisms which control these various behaviors, we are examining motion in various Dictyostelium mutants with known defects in proteins thought to be essential for signal reception, cell-cell adhesion or locomotion. This application of computational techniques to analyze 3D cell locomotion raises several technical challenges. Image restoration techniques must be fast enough to process numerous 1 Gbyte time-lapse data sets (16 Mbytes per 3D image X 60 time points). Because some cells are weakly labeled and background intensity is often high due to unincorporated dye, the SNR in some of these images is poor. Currently, the images are processed by a regularized linear least-squares restoration method, and occasionally by a maximum-likelihood method. Also required for these studies are accurate automated tracking procedures to generate both 3D trajectories for individual cells and 3D flows for a group of cells. Tracking is currently done independently for each cell, using a cell's image as a template to search for a similar image at the next time point. Finally, sophisticated visualization techniques are needed to view the
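
    The per-cell template tracking described above can be sketched with normalized cross-correlation between a cell's sub-volume at one time point and a slightly larger search region at the next. The function below is an illustration using scikit-image's `match_template`, not the authors' tracker; integer voxel coordinates away from the volume border are assumed.

```python
import numpy as np
from skimage.feature import match_template

def track_cell(volume_t, volume_t1, center, half_size=8, search=12):
    """Track one cell between consecutive 3-D time points: cut a template
    around `center` at time t, search for it with normalized cross-correlation
    in a larger region at time t+1, and return the new center."""
    c = np.asarray(center, dtype=int)
    template = volume_t[c[0]-half_size:c[0]+half_size,
                        c[1]-half_size:c[1]+half_size,
                        c[2]-half_size:c[2]+half_size]
    lo = np.maximum(c - half_size - search, 0)
    hi = np.minimum(c + half_size + search, np.asarray(volume_t1.shape))
    region = volume_t1[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    score = match_template(region, template)       # normalized cross-correlation map
    peak = np.asarray(np.unravel_index(np.argmax(score), score.shape))
    return lo + peak + half_size                   # new center in volume coordinates
```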

  2. Automatic estimation of point-spread-function for deconvoluting out-of-focus optical coherence tomographic images using information entropy-based approach.

    PubMed

    Liu, Guozhong; Yousefi, Siavash; Zhi, Zhongwei; Wang, Ruikang K

    2011-09-12

    This paper proposes an automatic point spread function (PSF) estimation method to de-blur out-of-focus optical coherence tomography (OCT) images. The method utilizes the Richardson-Lucy deconvolution algorithm to deconvolve noisy defocused images with a family of Gaussian PSFs with different beam spot sizes. Then, the best beam spot size is automatically estimated based on the discontinuity of the information entropy of the recovered images. Therefore, a priori knowledge of the parameters or PSF of the OCT system is not required for deconvolving the image. The model does not account for the diffraction and the coherent scattering of light by the sample. A series of experiments are performed on digital phantoms, a custom-built phantom doped with microspheres, fresh onion as well as the human fingertip in vivo to show the performance of the proposed method. The method may also be useful in combination with other deconvolution algorithms for PSF estimation and image recovery. PMID:21935179
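
    A minimal sketch of the entropy-based spot-size selection could sweep a family of Gaussian PSFs, deconvolve with each, and pick the spot size at the largest jump in image entropy. Scikit-image's Richardson-Lucy routine stands in for the paper's deconvolution step, and the `sigmas` grid is an assumed input.

```python
import numpy as np
from skimage.restoration import richardson_lucy
from skimage.measure import shannon_entropy

def gaussian_psf(sigma, size=21):
    """Isotropic 2-D Gaussian PSF with beam-spot size `sigma` (pixels)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def estimate_spot_size(blurred, sigmas, n_iter=30):
    """Deconvolve with a family of Gaussian PSFs and choose the spot size at
    the largest discontinuity (jump) in the entropy of the restored images."""
    entropies = np.array([
        shannon_entropy(richardson_lucy(blurred, gaussian_psf(s), n_iter))
        for s in sigmas
    ])
    jump = np.abs(np.diff(entropies))              # discontinuity between neighbouring spot sizes
    return sigmas[int(np.argmax(jump))], entropies
```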

  3. Automatic estimation of point-spread-function for deconvoluting out-of-focus optical coherence tomographic images using information entropy-based approach

    PubMed Central

    Liu, Guozhong; Yousefi, Siavash; Zhi, Zhongwei; Wang, Ruikang K.

    2011-01-01

    This paper proposes an automatic point spread function (PSF) estimation method to de-blur out-of-focus optical coherence tomography (OCT) images. The method utilizes the Richardson-Lucy deconvolution algorithm to deconvolve noisy defocused images with a family of Gaussian PSFs with different beam spot sizes. Then, the best beam spot size is automatically estimated based on the discontinuity of the information entropy of the recovered images. Therefore, no prior knowledge of the OCT system's parameters or PSF is required for deconvolving the image. The model does not account for the diffraction and the coherent scattering of light by the sample. A series of experiments is performed on digital phantoms, a custom-built phantom doped with microspheres, fresh onion, as well as the human fingertip in vivo to show the performance of the proposed method. The method may also be useful in combination with other deconvolution algorithms for PSF estimation and image recovery. PMID:21935179

  4. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  5. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  6. 3D object retrieval using salient views.

    PubMed

    Atmosukarto, Indriyati; Shapiro, Linda G

    2013-06-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223-232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223-232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704

  7. 3D object retrieval using salient views

    PubMed Central

    Shapiro, Linda G.

    2013-01-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223–232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223–232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704

  8. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing that makes it possible to obtain a solid object from a 3D model created with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes that would be quite difficult to produce with dedicated conventional facilities. Because a 3D print is built by superposing one layer on the others, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose peculiar mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the ESA small space mission CHEOPS (CHaracterising ExOPlanets Satellite), which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed afterwards. A further issue is the resolution of the printed material, which is expressed in terms of layers

  9. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered in a real-time game scene. We use this effect to place stereoscopic effigies of the players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  10. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50,000 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101

  11. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  12. What is 3D good for? A review of human performance on stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully; and perhaps used only as is necessary to ensure good performance.

  13. Metrological characterization of 3D imaging devices

    NASA Astrophysics Data System (ADS)

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standard requirements for the metrological parameters that identify a device's capability to capture a real scene. For this reason, several national and international organizations have, over the last ten years, been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, working also on laser systems based on direct range detection (TOF, Phase Shift, FM-CW, flash LADAR), this paper shows the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be directly analyzed, or some derived parameters can be obtained (e.g. angles between planes, distances between barycenters of rigidly connected spheres, frequency domain parameters, etc.). This paper shows theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both principles of triangulation and direct range detection.
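
    As an illustration of the kind of derived-parameter analysis such protocols rely on, the sketch below fits a sphere to a measured point cloud by linear least squares and reports the residual deviations; it is a generic example under assumed array layouts, not the procedure prescribed by VDI/VDE 2634 or ASTM E57.

        import numpy as np

        def fit_sphere(points):
            # Linear least-squares sphere fit to an (N, 3) array of measured points.
            # Solves x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d for centre (a, b, c) and d.
            A = np.c_[2.0 * points, np.ones(len(points))]
            b = (points ** 2).sum(axis=1)
            sol, *_ = np.linalg.lstsq(A, b, rcond=None)
            centre, d = sol[:3], sol[3]
            radius = np.sqrt(d + centre @ centre)
            return centre, radius

        def sphere_deviations(points, centre, radius):
            # Signed distance of each measured point from the fitted sphere surface.
            return np.linalg.norm(points - centre, axis=1) - radius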

  14. Holography of incoherently illuminated 3D scenes

    NASA Astrophysics Data System (ADS)

    Shaked, Natan T.; Rosen, Joseph

    2008-04-01

    We review several methods of generating holograms of 3D realistic objects illuminated by incoherent white light. Using these methods, it is possible to obtain holograms with a simple digital camera, operating in regular light conditions. Thus, most disadvantages characterizing conventional holography, namely the need for a powerful, highly coherent laser and meticulous stability of the optical system, are avoided. These holograms can be reconstructed optically by illuminating them with a coherent plane wave, or alternatively by using a digital reconstruction technique. In order to generate the proposed hologram, the 3D scene is captured from multiple points of view by a simple digital camera. Then, the acquired projections are digitally processed to yield the final hologram of the 3D scene. Based on this principle, we can generate Fourier, Fresnel, image or other types of holograms. To obtain certain advantages over the regular holograms, we also propose new digital holograms, such as modified Fresnel holograms and protected correlation holograms. Instead of shifting the camera mechanically to acquire a different projection of the 3D scene each time, it is possible to use a microlens array for acquiring the entire projections in a single camera shot. Alternatively, only the extreme projections can be acquired experimentally, while the middle projections are predicted digitally by using the view synthesis algorithm. The prospective goal of these methods is to facilitate the design of a simple, portable digital holographic camera which can be useful for a variety of practical applications.

  15. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  16. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing three separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  17. Distributed 3D Information Visualization - Towards Integration of the Dynamic 3D Graphics and Web Services

    NASA Astrophysics Data System (ADS)

    Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris

    This paper focuses on the visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radio-active or other pollutants spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind. Existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated and we demonstrate: the flow of data, which originates from heterogeneous data sources; interoperability across different operating systems; and 3D visual representations to enhance the end-users' interactions.

  18. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia. PMID:26854878

  19. Rapid high-fidelity visualisation of multispectral 3D mapping

    NASA Astrophysics Data System (ADS)

    Tudor, Philip M.; Christy, Mark

    2011-06-01

    Mobile LIDAR scanning typically provides captured 3D data in the form of 3D 'Point Clouds'. Combined with colour imagery these data produce coloured point clouds or, if further processed, polygon-based 3D models. The use of point clouds is simple and rapid, but visualisation can appear ghostly and diffuse. Textured 3D models provide high fidelity visualisation, but their creation is time consuming, difficult to automate and can modify key terrain details. This paper describes techniques for the visualisation of fused multispectral 3D data that approach the visual fidelity of polygon-based models with the rapid turnaround and detail of 3D point clouds. The general approaches to data capture and data fusion are identified as well as the central underlying mathematical transforms, data management and graphics processing techniques used to support rapid, interactive visualisation of very large multispectral 3D datasets. Performance data with respect to real-world 3D mapping as well as illustrations of visualisation outputs are included.

  20. 3-D target-based distributed smart camera network localization.

    PubMed

    Kassebaum, John; Bulusu, Nirupama; Feng, Wu-Chi

    2010-10-01

    For distributed smart camera networks to perform vision-based tasks such as subject recognition and tracking, every camera's position and orientation relative to a single 3-D coordinate frame must be accurately determined. In this paper, we present a new camera network localization solution that requires successively showing a 3-D feature point-rich target to all cameras; then, using the known geometry of the 3-D target, cameras estimate and decompose projection matrices to compute their position and orientation relative to the coordinatization of the 3-D target's feature points. As each 3-D target position establishes a distinct coordinate frame, cameras that view more than one 3-D target position compute translations and rotations relating different positions' coordinate frames and share the transform data with neighbors to facilitate realignment of all cameras to a single coordinate frame. Compared to other localization solutions that use opportunistically found visual data, our solution is more suitable to battery-powered, processing-constrained camera networks because it requires communication only to determine simultaneous target viewings and for passing transform data. Additionally, our solution requires only pairwise view overlaps of sufficient size to see the 3-D target and detect its feature points, while also giving camera positions in meaningful units. We evaluate our algorithm in both real and simulated smart camera networks. In the real network, position error is less than 1'' when the 3-D target's feature points fill only 2.9% of the frame area. PMID:20679031
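
    The projection-matrix step can be illustrated with the standard Direct Linear Transform followed by an RQ decomposition; the sketch below (NumPy/SciPy, with assumed argument layouts and no point normalization) is a generic textbook version, not the paper's implementation.

        import numpy as np
        from scipy.linalg import rq

        def estimate_projection(X, x):
            # Direct Linear Transform: estimate the 3x4 projection matrix P from
            # N >= 6 correspondences. X: (N, 3) known 3-D target points,
            # x: (N, 2) detected image points in pixels.
            rows = []
            for (Xw, Yw, Zw), (u, v) in zip(X, x):
                Pw = [Xw, Yw, Zw, 1.0]
                rows.append([0.0] * 4 + [-c for c in Pw] + [v * c for c in Pw])
                rows.append(Pw + [0.0] * 4 + [-u * c for c in Pw])
            _, _, Vt = np.linalg.svd(np.asarray(rows))
            return Vt[-1].reshape(3, 4)        # singular vector of the smallest singular value

        def decompose_projection(P):
            # Split P = K [R | -R C] into intrinsics K, rotation R and camera centre C.
            M = P[:, :3]
            K, R = rq(M)
            S = np.diag(np.sign(np.diag(K)))   # force a positive diagonal on K
            K, R = K @ S, S @ R
            C = -np.linalg.inv(M) @ P[:, 3]    # camera position in the target frame
            return K / K[2, 2], R, C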

  1. 3D gesture recognition from serial range image

    NASA Astrophysics Data System (ADS)

    Matsui, Yasuyuki; Miyasaka, Takeo; Hirose, Makoto; Araki, Kazuo

    2001-10-01

    In this research, the recognition of gestures in 3D space is examined by using serial range images obtained by a real-time 3D measurement system developed in our laboratory. Using this system, it is possible to obtain time sequences of range, intensity and color data for a moving object in real time without assigning markers to the targets. At first, gestures are tracked in 2D space by calculating 2D flow vectors at each point using an ordinary optical flow estimation method, based on time sequences of the intensity data. Then, the location of each point after the 2D movement is detected on the x-y plane using the obtained 2D flow vectors. The depth of each point after the movement is then obtained from the range data, and 3D flow vectors are assigned to each point. Time sequences of the obtained 3D flow vectors allow us to track the 3D movement of the target. Thus, based on time sequences of 3D flow vectors of the targets, it is possible to classify the movement of the targets using a continuous DP matching technique. This tracking of 3D movement using time sequences of 3D flow vectors may be applicable to a robust gesture recognition system.
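
    A simplified sketch of lifting 2-D flow to 3-D flow with range data is given below; it delegates the 2-D flow estimation to OpenCV's Farnebäck method as a stand-in for the estimator used by the authors, assumes single-channel inputs of identical shape, and ignores camera intrinsics, so it is only an editorial illustration of the idea.

        import numpy as np
        import cv2

        def flow_3d(intensity_prev, intensity_next, range_prev, range_next):
            # Lift 2-D optical flow to 3-D flow vectors using two consecutive range
            # images. (dx, dy) are in pixels, dz is in the range sensor's units.
            flow = cv2.calcOpticalFlowFarneback(intensity_prev, intensity_next, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            h, w = range_prev.shape
            ys, xs = np.mgrid[0:h, 0:w]
            # Destination pixel of every point after the estimated 2-D displacement.
            xd = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
            yd = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
            dz = range_next[yd, xd] - range_prev[ys, xs]
            return np.dstack([flow[..., 0], flow[..., 1], dz])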

  2. On 3D instability of wake behind a cylinder

    NASA Astrophysics Data System (ADS)

    Uruba, Václav

    2016-06-01

    The canonical case of cross-flow behind a prismatic circular cylinder is analyzed from the point of view of the appearance of 3D instabilities. Various flow conditions defined by various Reynolds number values are considered. All cases in question exhibit significant 3D features in the close wake, which play a significant role in the physical mechanisms of force generation.

  3. 3D face recognition based on a modified ICP method

    NASA Astrophysics Data System (ADS)

    Zhao, Kankan; Xi, Jiangtao; Yu, Yanguang; Chicharo, Joe F.

    2011-11-01

    3D face recognition techniques have gained much attention recently, and they are widely used in security systems, identification systems, access control systems, etc. The core technique in 3D face recognition is to find the corresponding points in different 3D face images. The classic partial Iterative Closest Point (ICP) method iteratively aligns the two point sets by repeatedly calculating the closest points as the corresponding points in each iteration. After several iterations, the corresponding points can be obtained accurately. However, if two 3D face images with different scales are from the same person, the classic partial ICP does not work. In this paper we propose a modified partial Iterative Closest Point (ICP) method in which the scaling effect is considered to achieve 3D face recognition. We design a 3x3 diagonal matrix as the scale matrix in each iteration of the classic partial ICP. The probing face image, which is multiplied by the scale matrix, will keep a scale similar to that of the reference face image. Therefore, we can accurately determine the corresponding points even if the scales of the probing image and reference image are different. The 3D face images in our experiments are acquired by a 3D data acquisition system based on Digital Fringe Projection Profilometry (DFPP). The 3D database consists of 30 groups of images; three images with the same scale, from the same person with different views, are included in each group, and the scale of the three images in one group may differ from that of other groups. The experimental results show that our proposed method can achieve 3D face recognition, especially in the case where the scales of the probing image and reference image are different.
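
    The flavour of such a scale-aware ICP iteration is sketched below with a single uniform scale factor estimated in closed form (Kabsch/Umeyama style); the paper's method instead uses a 3x3 diagonal scale matrix, so the code is a simplified, assumed variant rather than the authors' algorithm.

        import numpy as np
        from scipy.spatial import cKDTree

        def scaled_icp(probe, reference, n_iter=30):
            # ICP with a uniform scale factor. probe: (N, 3), reference: (M, 3).
            tree = cKDTree(reference)
            src = probe.astype(float).copy()
            for _ in range(n_iter):
                _, idx = tree.query(src)              # closest reference point per probe point
                tgt = reference[idx]
                mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
                S, T = src - mu_s, tgt - mu_t
                U, _, Vt = np.linalg.svd(S.T @ T)     # Kabsch rotation estimate
                if np.linalg.det((U @ Vt).T) < 0:
                    Vt[-1] *= -1                      # avoid a reflection
                R = (U @ Vt).T
                scale = np.sum(T * (S @ R.T)) / np.sum(S ** 2)
                t = mu_t - scale * (R @ mu_s)
                src = scale * (src @ R.T) + t         # update the probe point set
            return src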

  4. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  5. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2003-05-12

    This project is in its first full year after the combining of two previously funded projects: ''3D Code Development'' and ''Dynamic Material Properties''. The motivation behind this move was to emphasize and strengthen the ties between the experimental work and the computational model development in the materials area. The next year's activities will indicate the merging of the two efforts. The current activity is structured in two tasks. Task A, ''Simulations and Measurements'', combines all the material model development and associated numerical work with the materials-oriented experimental activities. Task B, ''ALE3D Development'', is a continuation of the non-materials related activities from the previous project.

  6. Complex light in 3D printing

    NASA Astrophysics Data System (ADS)

    Moser, Christophe; Delrot, Paul; Loterie, Damien; Morales Delgado, Edgar; Modestino, Miguel; Psaltis, Demetri

    2016-03-01

    3D printing, as a tool to generate complicated shapes from CAD files, on demand, with different materials from plastics to metals, is shortening product development cycles, enabling new design possibilities and can provide a means to manufacture small volumes cost-effectively. There are many technologies for 3D printing and the majority use light in the process. In one process (Multi-jet modeling, polyjet, printoptical©), a printhead prints layers of ultra-violet curable liquid plastic. Here, each nozzle deposits the material, which is then flooded by a UV curing lamp to harden it. In another process (Stereolithography), a focused UV laser beam provides both the spatial localization and the photo-hardening of the resin. Similarly, laser sintering works with metal powders by locally melting the material point by point and layer by layer. When the laser delivers ultra-fast focused pulses, nonlinear effects polymerize the material with high spatial resolution. In these processes, light is either focused in one spot and the part is made by scanning it, or the light is expanded and covers a wide area for photopolymerization. Hence a fairly "simple" light field is used in both cases. Here, we give examples of how "complex light" brings an additional level of complexity to 3D printing.

  7. 3D-Printed Microfluidic Automation

    PubMed Central

    Au, Anthony K.; Bhattacharjee, Nirveek; Horowitz, Lisa F.; Chang, Tim C.; Folch, Albert

    2015-01-01

    Microfluidic automation – the automated routing, dispensing, mixing, and/or separation of fluids through microchannels – generally remains a slowly-spreading technology because device fabrication requires sophisticated facilities and the technology’s use demands expert operators. Integrating microfluidic automation in devices has involved specialized multi-layering and bonding approaches. Stereolithography is an assembly-free, 3D-printing technique that is emerging as an efficient alternative for rapid prototyping of biomedical devices. Here we describe fluidic valves and pumps that can be stereolithographically printed in optically-clear, biocompatible plastic and integrated within microfluidic devices at low cost. User-friendly fluid automation devices can be printed and used by non-engineers as replacement for costly robotic pipettors or tedious manual pipetting. Engineers can manipulate the designs as digital modules into new devices of expanded functionality. Printing these devices only requires the digital file and electronic access to a printer. PMID:25738695

  8. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  9. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  10. 3D FFTs on a Single FPGA

    PubMed Central

    Humphries, Benjamin; Zhang, Hansen; Sheng, Jiayi; Landaverde, Raphael; Herbordt, Martin C.

    2015-01-01

    The 3D FFT is critical in many physical simulations and image processing applications. On FPGAs, however, the 3D FFT was thought to be inefficient relative to other methods such as convolution-based implementations of multi-grid. We find the opposite: a simple design, operating at a conservative frequency, takes 4 μs for 16³, 21 μs for 32³, and 215 μs for 64³ single precision data points. The first two of these compare favorably with the 25 μs and 29 μs obtained running on a current Nvidia GPU. Some broader significance is that this is a critical piece in implementing a large scale FPGA-based MD engine: even a single FPGA is capable of keeping the FFT off of the critical path for a large fraction of possible MD simulations. PMID:26594666
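
    For context, a desktop CPU baseline for the same transform sizes can be obtained with NumPy as below; the repetition count and the use of single-precision complex data are arbitrary choices, and the resulting timings are machine-dependent rather than directly comparable to the quoted FPGA or GPU figures.

        import time
        import numpy as np

        # Rough CPU baseline for the problem sizes quoted above.
        for n in (16, 32, 64):
            data = (np.random.rand(n, n, n) + 1j * np.random.rand(n, n, n)).astype(np.complex64)
            reps = 100
            t0 = time.perf_counter()
            for _ in range(reps):
                np.fft.fftn(data)
            dt = (time.perf_counter() - t0) / reps
            print(f"{n}^3 3D FFT: {dt * 1e6:.1f} us per transform (NumPy, single core)")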

  11. Blind Depth-variant Deconvolution of 3D Data in Wide-field Fluorescence Microscopy.

    PubMed

    Kim, Boyoung; Naemura, Takeshi

    2015-01-01

    This paper proposes a new deconvolution method for 3D fluorescence wide-field microscopy. Most previous methods are insufficient in terms of restoring a 3D cell structure, since the point spread function (PSF) is simply assumed to be depth-invariant, whereas the PSF of microscopy changes significantly along the optical axis. A few methods that consider a depth-variant PSF have been proposed; however, they are impractical, since they are non-blind approaches that use a PSF measured under a pre-measuring condition, whereas the imaging condition of a target image is different from that of the pre-measuring. To solve these problems, this paper proposes a blind approach to estimate the depth-variant, specimen-dependent PSF and restore the 3D cell structure. It is shown by experiments that the proposed method outperforms the previous ones in terms of suppressing axial blur. The proposed method is composed of the following three steps: First, a non-parametric averaged PSF is estimated by the Richardson-Lucy algorithm, whose initial parameter is given by the central depth prediction from intensity analysis. Second, the estimated PSF is fitted to Gibson's parametric PSF model via optimization, and depth-variant PSFs are generated. Third, the 3D cell structure is restored by using a depth-variant version of generalized expectation-maximization. PMID:25950821
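
    The second step (fitting an estimated PSF to a parametric model) can be illustrated as below; a separable Gaussian is used here as a stand-in for the far more involved Gibson model, and the function names, grids and initial guesses are assumptions for the sake of a self-contained example.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian3d(coords, sx, sy, sz, amp):
            # Separable 3-D Gaussian stand-in for a parametric PSF model;
            # coords is a tuple of flattened (x, y, z) coordinate grids.
            x, y, z = coords
            return amp * np.exp(-(x ** 2 / (2 * sx ** 2) +
                                  y ** 2 / (2 * sy ** 2) +
                                  z ** 2 / (2 * sz ** 2)))

        def fit_psf_model(psf):
            # Least-squares fit of the parametric model to a non-parametric PSF
            # estimate (e.g. the averaged PSF produced by Richardson-Lucy).
            zs, ys, xs = (np.arange(n) - n // 2 for n in psf.shape)
            Z, Y, X = np.meshgrid(zs, ys, xs, indexing="ij")
            coords = (X.ravel(), Y.ravel(), Z.ravel())
            p0 = (2.0, 2.0, 4.0, float(psf.max()))    # rough initial guess
            popt, _ = curve_fit(gaussian3d, coords, psf.ravel(), p0=p0)
            return popt                               # (sigma_x, sigma_y, sigma_z, amplitude)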

  12. 3D hyperpolarized He-3 MRI of ventilation using a multi-echo projection acquisition

    PubMed Central

    Holmes, James H.; O’Halloran, Rafael L.; Brodsky, Ethan K.; Jung, Youngkyoo; Block, Walter F.; Fain, Sean B.

    2010-01-01

    A method is presented for high resolution 3D imaging of the whole lung using inhaled hyperpolarized (HP) He-3 MR with multiple half-echo radial trajectories that can accelerate imaging through undersampling. A multiple half-echo radial trajectory can be used to reduce the level of artifact for undersampled 3D projection reconstruction (PR) imaging by increasing the amount of data acquired per unit time for HP He-3 lung imaging. The point spread functions (PSFs) for breath-held He-3 MRI using multiple half-echo trajectories were evaluated using simulations to predict the effects of T2* and gas diffusion on image quality. Results from PSF simulations were consistent with imaging results in volunteer studies showing improved image quality with increasing number of echoes using up to 8 half-echoes. The 8 half-echo acquisition is shown to accommodate lost breath-holds as short as 6 s using a retrospective reconstruction at reduced resolution as well as to allow reduced breath-hold time compared to an equivalent Cartesian trajectory. Furthermore, preliminary results from a 3D dynamic inhalation-exhalation maneuver are demonstrated using the 8 half-echo trajectory. Results demonstrate the first high resolution 3D PR imaging of ventilation and respiratory dynamics in humans using HP He-3 MR. PMID:18429034

  13. Unit cell geometry of 3-D braided structures

    NASA Technical Reports Server (NTRS)

    Du, Guang-Wu; Ko, Frank K.

    1993-01-01

    The traditional approach used in modeling of composites reinforced by three-dimensional (3-D) braids is to assume a simple unit cell geometry of a 3-D braided structure with known fiber volume fraction and orientation. In this article, we first examine 3-D braiding methods in the light of braid structures, followed by the development of geometric models for 3-D braids using a unit cell approach. The unit cell geometry of 3-D braids is identified and the relationship of structural parameters such as yarn orientation angle and fiber volume fraction with the key processing parameters established. The limiting geometry has been computed by establishing the point at which yarns jam against each other. Using this factor makes it possible to identify the complete range of allowable geometric arrangements for 3-D braided preforms. This identified unit cell geometry can be translated to mechanical models which relate the geometrical properties of fabric preforms to the mechanical responses of composite systems.

  14. 3D Modeling of Equatorial Plasma Bubbles

    NASA Astrophysics Data System (ADS)

    Huba, Joseph; Joyce, Glenn; Krall, Jonathan

    2011-10-01

    Post-sunset ionospheric irregularities in the equatorial F region were first observed by Booker and Wells (1938) using ionosondes. This phenomenon has become known as equatorial spread F (ESF). During ESF the equatorial ionosphere becomes unstable because of a Rayleigh-Taylor-like instability: large scale (10s km) electron density ``bubbles'' can develop and rise to high altitudes (1000 km or greater at times). Understanding and modeling ESF is important because of its impact on space weather: it causes radio wave scintillation that degrades communication and navigation systems. In fact, it is the focus of the Air Force Communications/Navigation Outage Forecast Satellite (C/NOFS) mission. We will describe 3D simulation results from the NRL ionosphere models SAMI3 and SAMI3/ESF of this phenomenon. In particular, we will examine the causes of the day-to-day variability of ESF, which is an unresolved problem at this time. Research supported by ONR.

  15. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  16. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.

  17. Experimental Study of Electrothermal 3D Mixing using 3D microPIV

    NASA Astrophysics Data System (ADS)

    Kauffmann, Paul; Loire, Sophie; Meinhart, Carl; Mezic, Igor

    2012-11-01

    Mixing is a key step which can greatly accelerate bio-reactions. For thirty years, dynamical systems theory has predicted that chaotic mixing must involve at least three dimensions (either time-dependent 2D flows or 3D flows). So far, 3D embedded chaotic mixing has scarcely been studied at the microscale. In that regard, electrokinetics has emerged as an efficient embedded actuation to drive microflows. Physiological media can be driven by electrothermal flows generated by the interaction of an electric field with conductivity and permittivity gradients induced by Joule heating. We present original electrothermal time-dependent 3D (3D+1) mixing in microwells. The key point of our chaotic mixer is to generate overlapping asymmetric vortices, which switch periodically. When the two vortex configurations blink, flows stretch and fold, thereby generating chaotic advection. Each flow configuration is characterized by an original 3D PIV method (3 components / 3 dimensions) based on the decomposition of the flows by Proper Orthogonal Decomposition. Velocity field distributions are then compared to COMSOL simulations and discussed. The mixing efficiency for low-diffusivity particles is studied using the mix-variance coefficient and shows a dramatic increase in mixing efficiency compared to steady flow.
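
    The Proper Orthogonal Decomposition named above is commonly computed from a snapshot matrix via the SVD, as in the sketch below; the snapshot layout (one flattened velocity field per column) and the mode count are assumptions, not details of the study.

        import numpy as np

        def pod(snapshots, n_modes=5):
            # Proper Orthogonal Decomposition of velocity snapshots via the SVD.
            # snapshots: (n_points, n_snapshots), one flattened velocity field per column.
            mean_flow = snapshots.mean(axis=1, keepdims=True)
            fluct = snapshots - mean_flow                 # fluctuations about the mean flow
            U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
            energy = s ** 2 / np.sum(s ** 2)              # relative energy captured by each mode
            return U[:, :n_modes], Vt[:n_modes], energy[:n_modes]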

  18. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic systems date back to around 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. However, symptoms such as eye fatigue and 3D sickness have been concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D content as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence. We used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. Time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for the reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology. PMID:26832611

  19. Accuracy of 3D Reconstruction in an Illumination Dome

    NASA Astrophysics Data System (ADS)

    MacDonald, Lindsay; Toschi, Isabella; Nocerino, Erica; Hess, Mona; Remondino, Fabio; Robson, Stuart

    2016-06-01

    The accuracy of 3D surface reconstruction was compared from image sets of a Metric Test Object taken in an illumination dome by two methods: photometric stereo and improved structure-from-motion (SfM), using point cloud data from a 3D colour laser scanner as the reference. Metrics included pointwise height differences over the digital elevation model (DEM), and 3D Euclidean differences between corresponding points. The enhancement of spatial detail was investigated by blending high frequency detail from photometric normals, after a Poisson surface reconstruction, with low frequency detail from a DEM derived from SfM.
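
    The photometric-stereo half of the comparison rests on the classic Lambertian least-squares estimate of per-pixel normals from images taken under the dome's known light directions; the sketch below is that textbook baseline under assumed array shapes, not the improved pipeline evaluated in the paper.

        import numpy as np

        def photometric_stereo(images, light_dirs):
            # Per-pixel Lambertian normal and albedo estimation.
            # images: (k, h, w) grayscale images under k known dome light directions;
            # light_dirs: (k, 3) unit light direction vectors.
            k, h, w = images.shape
            I = images.reshape(k, -1)                              # (k, h*w) intensity matrix
            G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # solves L @ G ~= I
            albedo = np.linalg.norm(G, axis=0)
            normals = G / np.maximum(albedo, 1e-8)                 # unit normals, shape (3, h*w)
            return normals.T.reshape(h, w, 3), albedo.reshape(h, w)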

  20. Optoplasmonics: hybridization in 3D

    NASA Astrophysics Data System (ADS)

    Rosa, L.; Gervinskas, G.; Žukauskas, A.; Malinauskas, M.; Brasselet, E.; Juodkazis, S.

    2013-12-01

    Femtosecond laser fabrication has been used to make hybrid refractive and diffractive micro-optical elements in the photo-polymer SZ2080. For applications in microfluidics, axicon lenses were fabricated (both single and arrays) for the generation of light intensity patterns extending through the entire depth of a typically tens-of-micrometers-deep channel. Further hybridisation of an axicon with a plasmonic slot is fabricated and demonstrated numerically. Spiralling chiral grooves were inscribed into a 100-nm-thick gold coating sputtered over polymerized micro-axicon lenses, using a focused ion beam. This demonstrates the possibility of hybridisation between optical and plasmonic 3D micro-optical elements. Numerical modelling of the optical performance by the 3D-FDTD method is presented.

  1. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Koide, S.; Sakai, J.-I.; Christodoulou, D. M.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W = 4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy and so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure.

  2. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  3. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  4. 360-degree 3D profilometry

    NASA Astrophysics Data System (ADS)

    Song, Yuanhe; Zhao, Hong; Chen, Wenyi; Tan, Yushan

    1997-12-01

    A new method for 360-degree turning 3D shape measurement, in which light-sectioning and phase-shifting techniques are both used, is presented in this paper. A sinusoidal light field is applied in the projected light stripe, and the phase-shifting technique is used to calculate the phases of the light slit. The wrapped phase distribution of the slit is thereby formed, and unwrapping is performed by means of the height information obtained from the light-sectioning method. Therefore, phase-measuring results with better precision can be obtained. Finally, the target 3D shape data can be produced according to the geometric relationships between the phases and the object heights. The principles of this method are discussed in detail and experimental results are shown in this paper.
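
    The phase-shifting part of the method reduces, in its standard four-step form, to an arctangent of intensity differences; the sketch below shows that wrapped-phase calculation only, with a pi/2 step size assumed, and leaves the resolution of the 2*pi ambiguity to the height information from light sectioning described above.

        import numpy as np

        def four_step_phase(I0, I1, I2, I3):
            # Wrapped phase from four frames with pi/2 phase shifts:
            # I_k = A + B * cos(phi + k * pi / 2).  Result lies in (-pi, pi].
            return np.arctan2(I3 - I1, I0 - I2)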

  5. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature, massive processing technology was developed. Graphene is the most recent superior material, which could potentially initiate another new material age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. A new technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate an additional stream to push the graphene revolution into a new phase. Here we demonstrate, for the first time, that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress minute during the printing process. PMID:26153673

  6. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuation drives up the manufacturing cost. Some cutting-edge, commercially available rapid prototyping machines can now print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling the appearance and mobility of a real human hand as closely as possible while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167 excluding the actuators, significantly lower than that of other robotic hands, which require more complex assembly processes.

  7. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  8. 3D-graphite structure

    SciTech Connect

    Belenkov, E. A. Ali-Pasha, V. A.

    2011-01-15

    The structure of clusters of several new carbon 3D-graphite phases has been calculated using molecular-mechanics methods. It is established that the 3D-graphite polytypes α1,1, α1,3, α1,5, α2,1, α2,3, α3,1, β1,2, β1,4, β1,6, β2,1, and β3,2 consist of sp²-hybridized atoms, have hexagonal unit cells, and differ in the structure of their layers and the order in which the layers alternate. A possible way to experimentally synthesize the new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  9. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is a long-standing concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matrix-array emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64° x 64° volume. The image is rendered in real time and is composed of three planes (including the B and C planes), which can be displaced in any spatial direction at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still needed to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis, and management of patients. PMID:11494630

  10. [Real time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is a long-standing concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matrix-array emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64° x 64° volume. The image is rendered in real time and is composed of three planes (including the B and C planes), which can be displaced in any spatial direction at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still needed to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis, and management of patients.

  11. A new method for point-spread function correction using the ellipticity of re-smeared artificial images in weak gravitational lensing shear analysis

    SciTech Connect

    Okura, Yuki; Futamase, Toshifumi E-mail: tof@astr.tohoku.ac.jp

    2014-09-10

    Highly accurate weak lensing analysis is urgently required for planned cosmic shear observations, and for this purpose various sources of systematic error in the measurement must be eliminated. The point-spread function (PSF) effect is one of them. A perturbative approach for correcting the PSF effect on the observed image ellipticities has been employed previously. Here we propose a new non-perturbative approach for PSF correction that avoids the systematic error associated with the perturbative approach. The new method measures shear from an artificial image that has the same ellipticity as the lensed image. This is done by re-smearing the observed galaxy images and the observed star images (PSF) with an additional smearing function. We tested the new method with simple simulated objects that have Gaussian or Sérsic profiles smeared by a Gaussian PSF of sufficiently large size to neglect pixelization. Under the condition of no pixel noise, it is confirmed that the new method has no systematic error even if the PSF is large and has a high ellipticity.
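
    The ellipticity comparison this kind of method rests on can be illustrated with a minimal sketch: image ellipticity from second brightness moments, and re-smearing by convolution with an extra Gaussian. This is a generic illustration with an unweighted moment estimator and an isotropic smearing kernel, not the authors' estimator or their re-smearing function.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def second_moment_ellipticity(img):
        """Ellipticity (e1, e2) from unweighted second brightness moments."""
        y, x = np.indices(img.shape, dtype=float)
        total = img.sum()
        xc, yc = (img * x).sum() / total, (img * y).sum() / total
        qxx = (img * (x - xc) ** 2).sum() / total
        qyy = (img * (y - yc) ** 2).sum() / total
        qxy = (img * (x - xc) * (y - yc)).sum() / total
        denom = qxx + qyy
        return (qxx - qyy) / denom, 2.0 * qxy / denom

    def resmear(img, sigma):
        """Apply an additional (isotropic Gaussian) smearing function."""
        return gaussian_filter(img, sigma)

    # Toy elliptical Gaussian galaxy, then re-smeared; the extra isotropic smearing
    # dilutes, but does not rotate, the measured ellipticity.
    y, x = np.indices((64, 64), dtype=float)
    galaxy = np.exp(-(((x - 32) / 6.0) ** 2 + ((y - 32) / 3.0) ** 2) / 2.0)
    print(second_moment_ellipticity(galaxy))
    print(second_moment_ellipticity(resmear(galaxy, 2.0)))
    ```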

  12. DETERMINATION OF THE POINT-SPREAD FUNCTION FOR THE FERMI LARGE AREA TELESCOPE FROM ON-ORBIT DATA AND LIMITS ON PAIR HALOS OF ACTIVE GALACTIC NUCLEI

    SciTech Connect

    Ackermann, M.; Ajello, M.; Allafort, A.; Bechtol, K.; Bloom, E. D.; Borgland, A. W.; Bottacini, E.; Buehler, R.; Asano, K.; Atwood, W. B.; Baldini, L.; Bellazzini, R.; Bregeon, J.; Ballet, J.; Bastieri, D.; Bonamente, E.; Brandt, T. J.; Brigida, M.; Bruel, P. E-mail: mar0@uw.edu [Laboratoire Leprince-Ringuet, Ecole polytechnique, CNRS and others

    2013-03-01

    The Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to detect photons with energies from ≈20 MeV to >300 GeV. The pre-launch response functions of the LAT were determined through extensive Monte Carlo simulations and beam tests. The point-spread function (PSF) characterizing the angular distribution of reconstructed photons as a function of energy and geometry in the detector is determined here from two years of on-orbit data by examining the distributions of γ rays from pulsars and active galactic nuclei (AGNs). Above 3 GeV, the PSF is found to be broader than the pre-launch PSF. We checked for dependence of the PSF on the class of γ-ray source and observation epoch and found none. We also investigated several possible spatial models for pair-halo emission around BL Lac AGNs. We found no evidence for a component with spatial extension larger than the PSF and set upper limits on the amplitude of halo emission in stacked images of low- and high-redshift BL Lac AGNs and the TeV blazars 1ES0229+200 and 1ES0347-121.

  13. Calculation of the static in-flight telescope-detector response by deconvolution applied to point-spread function for the geostationary earth radiation budget experiment.

    PubMed

    Matthews, Grant

    2004-12-01

    The Geostationary Earth Radiation Budget (GERB) experiment is a broadband satellite radiometer instrument program intended to resolve remaining uncertainties surrounding the effect of cloud radiative feedback on future climate change. By use of a custom-designed diffraction-aberration telescope model, the GERB detector spatial response is recovered by deconvolution applied to the ground calibration point-spread function (PSF) measurements. An ensemble of randomly generated white-noise test scenes, combined with the measured telescope transfer function, significantly reduces the effect of noise on the deconvolution. With the recovered detector response as a base, the same model is applied in construction of the predicted in-flight field-of-view response of each GERB pixel to both short- and long-wave Earth radiance. The results of this study can now be used to simulate and investigate the instantaneous sampling errors incurred by GERB. Also, the developed deconvolution method may be highly applicable in enhancing images or PSF data for any telescope system for which a wave-front error measurement is available. PMID:15619842
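
    A minimal sketch of a Wiener-regularized Fourier-domain deconvolution of the kind described above (recovering a detector response from a measured PSF given a telescope transfer function) is shown below. The arrays, the Gaussian telescope OTF, and the signal-to-noise value are placeholders, not GERB calibration data or the custom diffraction-aberration model.

    ```python
    import numpy as np

    def wiener_deconvolve(measured_psf, telescope_otf, snr=100.0):
        """Recover the detector spatial response when measured PSF = telescope blur * detector response.
        The Fourier-domain division is Wiener-regularized to keep noise from being amplified."""
        M = np.fft.fft2(measured_psf)
        H = telescope_otf
        W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr ** 2)
        return np.real(np.fft.ifft2(M * W))

    # Placeholder example: a box-shaped detector footprint blurred by a Gaussian telescope OTF.
    n = 128
    detector = np.zeros((n, n)); detector[56:72, 56:72] = 1.0
    fx = np.fft.fftfreq(n); FX, FY = np.meshgrid(fx, fx)
    telescope_otf = np.exp(-(FX ** 2 + FY ** 2) / (2 * 0.05 ** 2))
    measured = np.real(np.fft.ifft2(np.fft.fft2(detector) * telescope_otf))
    recovered = wiener_deconvolve(measured, telescope_otf)
    ```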

  14. Calculation of ocular single-pass modulation transfer function and retinal image simulation from measurements of the polarized double-pass ocular point spread function.

    PubMed

    Kobayashi, Katsuhiko; Shibutani, Masahiro; Takeuchi, Gaku; Ohnuma, Kazuhiko; Miyake, Yoichi; Negishi, Kazuno; Ohno, Kenji; Noda, Toru

    2004-01-01

    The single-pass modulation transfer function (MTF(sgl)) is an important numerical parameter that can help elucidate the performance and some processes of the human visual system. In previous studies, the MTF(sgl) was calculated from double-pass point spread function (PSF) measurements. These measurements include a depolarized reflection component from the retina that introduces a measurement artifact, and they require long acquisition times to allow averaging to reduce speckle. To solve these problems, we developed a new ocular PSF analysis system (PSFAS) that uses polarization optics to eliminate the depolarized retinal reflection component, and a rotating prism to increase measurement speed. Validation experiments on one patient showed that the MTF(sgl) measured by PSFAS agrees closely with the MTF calculated from contrast sensitivity measurements. A simulated retinal image was calculated by convolution of Landolt rings with the calculated single-pass PSF provided by the PSFAS. The contrast characteristic then was calculated from the simulated retinal images. These results indicate that the MTF(sgl) obtained using the PSFAS may be a reliable measure of visual performance of the optics of the eye, including the optical effects of the retina. The simulated retinal images and contrast characteristics are useful for evaluating visual performance. PMID:14715068
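
    The underlying textbook relation is that the MTF is the normalized magnitude of the Fourier transform of the PSF; for an ideal, symmetric double-pass measurement the single-pass MTF is often approximated as the square root of the double-pass MTF. The sketch below shows only this relation, not the PSFAS polarization optics or its speckle handling.

    ```python
    import numpy as np

    def mtf_from_psf(psf):
        """Modulation transfer function: magnitude of the PSF's Fourier transform,
        normalized to unity at zero spatial frequency."""
        otf = np.fft.fftshift(np.fft.fft2(psf / psf.sum()))
        return np.abs(otf)

    def single_pass_mtf_from_double_pass(double_pass_psf):
        """Common approximation for a symmetric double-pass system:
        MTF_single = sqrt(MTF_double)."""
        return np.sqrt(mtf_from_psf(double_pass_psf))
    ```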

  15. 3D modeling of optically challenging objects.

    PubMed

    Park, Johnny; Kak, Avinash

    2008-01-01

    We present a system for constructing 3D models of real-world objects with optically challenging surfaces. The system utilizes a new range imaging concept called multi-peak range imaging, which stores multiple candidates of range measurements for each point on the object surface. The multiple measurements include the erroneous range data caused by various surface properties that are not ideal for structured-light range sensing. False measurements generated by spurious reflections are eliminated by applying a series of constraint tests. The constraint tests based on local surface and local sensor visibility are applied first to individual range images. The constraint tests based on global consistency of coordinates and visibility are then applied to all range images acquired from different viewpoints. We show the effectiveness of our method by constructing 3D models of five different optically challenging objects. To evaluate the performance of the constraint tests and to examine the effects of the parameters used in the constraint tests, we acquired the ground truth data by painting those objects to suppress the surface-related properties that cause difficulties in range sensing. Experimental results indicate that our method significantly improves upon the traditional methods for constructing reliable 3D models of optically challenging objects. PMID:18192707

  16. FELIX: a volumetric 3D laser display

    NASA Astrophysics Data System (ADS)

    Bahr, Detlef; Langhans, Knut; Gerken, Martin; Vogt, Carsten; Bezecny, Daniel; Homann, Dennis

    1996-03-01

    In this paper, an innovative approach to true 3D image presentation in a space-filling, volumetric laser display is described. The introduced prototype system is based on a moving target screen that sweeps the display volume. The net result is the optical equivalent of a 3D array of image points illuminated to form a model of the object that occupies a physical space. Wireframe graphics are presented within the display volume, which a group of people can walk around and examine simultaneously from nearly any orientation and without any visual aids. In addition to the detailed vector-scanning mode, a raster-scanned system and a combination of both techniques are under development. The volumetric 3D laser display technology for true reproduction of spatial images can tremendously improve the viewer's ability to interpret data and to reliably determine distance, shape, and orientation. Possible applications for this development range from air traffic control, where moving blips of light represent individual aircraft in a true-to-scale projection of an airport's airspace, to various medical applications (e.g., electrocardiography, computed tomography), to entertainment and education visualization, as well as imaging in the fields of engineering and computer-aided design.

  17. Geomatics for precise 3D breast imaging.

    PubMed

    Alto, Hilary

    2005-02-01

    Canadian women have a one in nine chance of developing breast cancer during their lifetime. Mammography is the most common imaging technology used for breast cancer detection in its earliest stages through screening programs. Clusters of microcalcifications are primary indicators of breast cancer; the shape, size and number may be used to determine whether they are malignant or benign. However, overlapping images of calcifications on a mammogram hinder the classification of the shape and size of each calcification and a misdiagnosis may occur resulting in either an unnecessary biopsy being performed or a necessary biopsy not being performed. The introduction of 3D imaging techniques such as standard photogrammetry may increase the confidence of the radiologist when making his/her diagnosis. In this paper, traditional analytical photogrammetric techniques for the 3D mathematical reconstruction of microcalcifications are presented. The techniques are applied to a specially designed and constructed x-ray transparent Plexiglas phantom (control object). The phantom was embedded with 1.0 mm x-ray opaque lead pellets configured to represent overlapping microcalcifications. Control points on the phantom were determined by standard survey methods and hand measurements. X-ray films were obtained using a LORAD M-III mammography machine. The photogrammetric techniques of relative and absolute orientation were applied to the 2D mammographic films to analytically generate a 3D depth map with an overall accuracy of 0.6 mm. A Bundle Adjustment and the Direct Linear Transform were used to confirm the results. PMID:15649085

  18. 3D-printed microfluidic devices.

    PubMed

    Amin, Reza; Knowlton, Stephanie; Hart, Alexander; Yenilmez, Bekir; Ghaderinezhad, Fariba; Katebifar, Sara; Messina, Michael; Khademhosseini, Ali; Tasoglu, Savas

    2016-06-01

    Microfluidics is a flourishing field, enabling a wide range of biochemical and clinical applications such as cancer screening, micro-physiological system engineering, high-throughput drug testing, and point-of-care diagnostics. However, fabrication of microfluidic devices is often complicated, time consuming, and requires expensive equipment and sophisticated cleanroom facilities. Three-dimensional (3D) printing presents a promising alternative to traditional techniques such as lithography and PDMS-glass bonding, not only by enabling rapid design iterations in the development stage, but also by reducing the costs associated with institutional infrastructure, equipment installation, maintenance, and physical space. With the recent advancements in 3D printing technologies, highly complex microfluidic devices can be fabricated via single-step, rapid, and cost-effective protocols, making microfluidics more accessible to users. In this review, we discuss a broad range of approaches for the application of 3D printing technology to fabrication of micro-scale lab-on-a-chip devices. PMID:27321137

  19. PLOT3D- DRAWING THREE DIMENSIONAL SURFACES

    NASA Technical Reports Server (NTRS)

    Canright, R. B.

    1994-01-01

    PLOT3D is a package of programs for drawing three-dimensional surfaces of the form z = f(x,y). The function f and the boundary values for x and y are the input to PLOT3D. The surface thus defined may be drawn after arbitrary rotations. However, the package is designed to draw only functions in rectangular coordinates expressed explicitly in the above form; it cannot, for example, draw a sphere. Output is by off-line incremental plotter or on-line microfilm recorder. Unlike other packages, PLOT3D will plot any function of the form z = f(x,y) and portrays continuous and bounded functions of two independent variables. With curve fitting, however, it can also draw experimental data and pictures that cannot be expressed in the above form. The method used is division of the given x and y ranges into a uniform rectangular grid. The values of the supplied function at the grid points (x, y) are calculated and stored; this defines the surface. The surface is portrayed by connecting successive (y,z) points with straight-line segments for each x value on the grid and, in turn, connecting successive (x,z) points for each fixed y value on the grid. These lines are then projected by parallel projection onto the fixed yz-plane for plotting. This program has been implemented on the IBM 360/67 with an on-line CDC microfilm recorder.
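
    PLOT3D itself is a FORTRAN package driving off-line plotters, but the gridding-and-wireframe idea it describes can be sketched with modern tools. The example below uses matplotlib's wireframe plotting as a stand-in: sample z = f(x,y) on a uniform grid and connect successive grid points along lines of constant x and constant y.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    def plot_surface_wireframe(f, x_range, y_range, n=40):
        """Sample z = f(x, y) on a uniform rectangular grid and draw the wireframe:
        lines of constant x and constant y, as in the described method."""
        x = np.linspace(*x_range, n)
        y = np.linspace(*y_range, n)
        X, Y = np.meshgrid(x, y)
        Z = f(X, Y)
        ax = plt.figure().add_subplot(projection="3d")
        ax.plot_wireframe(X, Y, Z)
        plt.show()

    # Example: an explicit z = f(x, y) of the kind PLOT3D could draw.
    plot_surface_wireframe(lambda x, y: np.sin(np.hypot(x, y)), (-6, 6), (-6, 6))
    ```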

  20. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
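
    GD3D runs its denoisers on the GPU; the sketch below is only a CPU stand-in for the second facet described above, a parameter sweep that picks the setting minimizing mean squared error against a noiseless reference. A Gaussian filter is used as a placeholder denoiser, and the synthetic volume and sigma values are arbitrary.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def best_denoise_parameter(noisy, reference, sigmas):
        """Sweep a denoising parameter and return the value minimizing MSE vs. the reference."""
        def mse(a, b):
            return float(np.mean((a - b) ** 2))
        scores = {s: mse(gaussian_filter(noisy, s), reference) for s in sigmas}
        return min(scores, key=scores.get), scores

    # Synthetic 3D volume: smooth reference plus Gaussian noise.
    rng = np.random.default_rng(0)
    z, y, x = np.mgrid[0:32, 0:32, 0:32]
    reference = np.sin(x / 5.0) * np.cos(y / 7.0) + 0.1 * z
    noisy = reference + rng.normal(0, 0.2, reference.shape)
    best_sigma, all_scores = best_denoise_parameter(noisy, reference, [0.5, 1.0, 1.5, 2.0])
    ```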

  1. Fallon FORGE 3D Geologic Model

    DOE Data Explorer

    Doug Blankenship

    2016-03-01

    An x,y,z scattered data file for the 3D geologic model of the Fallon FORGE site. The model was created in Earthvision by Dynamic Graphic Inc. and constructed with a grid spacing of 100 m. Geologic surfaces were extrapolated from the input data using a minimum-tension gridding algorithm. The data file is tabular data in a text file, with lithology data associated with x,y,z grid points. All the relevant information (the spatial reference, the projection, etc.) is given in the file header, and all the fields in the data file are identified there.
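
    A hypothetical reader for such a tabular x,y,z-plus-lithology text file is sketched below with pandas. The column names and the comment/header convention are assumptions for illustration only; the actual layout is documented in the file's own header.

    ```python
    import pandas as pd

    def load_geologic_grid(path):
        """Load whitespace-delimited x, y, z, lithology records into a DataFrame.
        Column names and the '#' comment marker are assumptions, not the actual
        Fallon FORGE file layout."""
        return pd.read_csv(
            path,
            sep=r"\s+",
            comment="#",
            names=["x", "y", "z", "lithology"],
        )

    # grid = load_geologic_grid("fallon_forge_model.txt")   # hypothetical filename
    # print(grid.groupby("lithology")["z"].describe())
    ```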

  2. 3D MHD Simulations of Tokamak Disruptions

    NASA Astrophysics Data System (ADS)

    Woodruff, Simon; Stuber, James

    2014-10-01

    Two disruption scenarios are modeled numerically using the CORSICA 2D equilibrium and NIMROD 3D MHD codes. The work follows earlier simulations of pressure-driven modes in DIII-D and of VDEs in ITER, and aims to provide starting points for simulating the tokamak disruption mitigation techniques currently in the CDR phase for ITER. Pressure-driven instability growth rates previously observed in simulations of DIII-D are verified, and halo and Hiro currents produced during vertical displacements are observed in simulations of ITER with resistive walls implemented in NIMROD. We discuss plans to exercise new code capabilities and to validate the results.

  3. 3D Imaging with Holographic Tomography

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Kou, Shan Shan

    2010-04-01

    There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range, where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, where rays are considered to propagate straight through the object. Another type of tomography, called 'diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has recently been active experimental research on reconstructing complex refractive-index data using this approach. However, there are two distinct ways of doing tomography: either by rotating the object or by rotating the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier optics and information transformation point of view, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The calculated 3D CTF for tomography by scanning the illumination in one direction only takes on a form that we might call a 'peanut', compared to the case of object rotation, where a diablo is formed; the peanut exhibits significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case. This time, we

  4. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular in recent years. With light field optics, or the light field formalism, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded back into a 3D image with the arrayed lens plate. 'Refocusing' (focusing the image on a chosen point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.

  5. High resolution 3D fluorescence tomography using ballistic photons

    NASA Astrophysics Data System (ADS)

    Zheng, Jie; Nouizi, Farouk; Cho, Jaedu; Kwong, Jessica; Gulsen, Gultekin

    2015-03-01

    We are developing a ballistic-photon-based approach for improving the spatial resolution of fluorescence tomography using time-domain measurements. This approach uses the early-photon information contained in measured time-of-flight distributions originating from fluorescence emission. The time point spread functions (TPSFs) for both the excitation light and the emission light are acquired with a gated single-photon avalanche detector (SPAD) and time-correlated single photon counting after a short laser pulse. To determine the ballistic photons for reconstruction, the lifetime of the fluorophore and the time gate of the excitation profiles are used for calibration, and the time gate of the fluorescence profile can then be defined by a simple time convolution. Mimicking first-generation CT data acquisition, the source-detector pair translates across and rotates around the subject. The measurement from each source-detector position is reshaped into a histogram that can be used by a simple back-projection algorithm to reconstruct high-resolution fluorescence images. Finally, from these 2D sectioning slices, a 3D inclusion can be reconstructed accurately. To validate the approach, light transport is simulated for biological tissue-like media with an embedded fluorescent inclusion by solving the diffusion equation with the finite element method in COMSOL Multiphysics. The reconstruction results from the simulation studies confirm that this approach drastically improves the spatial resolution of fluorescence tomography, and the results show the feasibility of this technique for high-resolution small animal imaging up to several centimeters.

  6. Microseismic network design assessment based on 3D ray tracing

    NASA Astrophysics Data System (ADS)

    Näsholm, Sven Peter; Wuestefeld, Andreas; Lubrano-Lavadera, Paul; Lang, Dominik; Kaschwich, Tina; Oye, Volker

    2016-04-01

    There is increasing demand on the versatility of microseismic monitoring networks. In early projects, being able to locate any triggers at all was considered a success; these early successes led to a better understanding of how to extract value from microseismic results. Today operators, regulators, and service providers work closely together to find the optimum network design to meet various requirements. In the current study we demonstrate an integrated and streamlined network capability assessment approach, intended for use during the microseismic network design process prior to installation. The assessments are derived from 3D ray tracing between a grid of event points and the sensors. Three aspects are discussed: (1) magnitude of completeness, or detection limit; (2) event location accuracy; and (3) ground-motion hazard. The network capability parameters (1) and (2) are estimated at all hypothetical event locations and presented as maps for a given seismic sensor coordinate scenario. In addition, the ray-tracing traveltimes permit estimation of the point-spread functions (PSFs) at the event grid points. PSFs are useful in assessing the resolution and focusing capability of the network for stacking-based event location and imaging methods. We estimate the performance for a hypothetical network with 11 sensors, considering the well-documented region around the San Andreas Fault Observatory at Depth (SAFOD) north of Parkfield, California. The ray tracing is done through a detailed velocity model covering a 26.2 by 21.2 km area around the SAFOD drill site with a resolution of 200 m for both the P- and S-wave velocities. Systematic network capability assessment for different sensor site scenarios prior to installation facilitates finding a final design that meets the survey objectives.

  7. Point Spread Function and Transmittance Analyses for Conical and Hexapod Secondary Mirror Support Towers for the Next Generation Space Telescope (NGST)

    NASA Technical Reports Server (NTRS)

    Wilkerson, Gary W.; Pitalo, Stephen K.

    1999-01-01

    Different secondary mirror support towers were modeled in the CODE V optical design/analysis program for the NGST Optical Telescope Assembly (OTA) B. The vertices of the NGST OTA B primary and secondary mirrors are separated by close to 9.0 m. One type of tower consisted of a hollow cone 6.0 m long, 2.00 m in diameter at the base, and 0.704 m in diameter at its top. The base of the cone was considered attached to the primary's reaction structure through a hole in the primary. Extending up parallel to the optical axis from the top of this cone were eight blades (pyramidal struts) 3.0 m long. The cross section of each of these long blades was an isosceles triangle with a base of 0.010 m and a height of 0.100 m, with the sharpest part of each triangle pointing inward; the eight struts were spaced every 45 deg. The other type of tower was a pure hexapod arrangement with no blades or cones. The hexapod consisted simply of six very thin circular struts, leaving in pairs at 12:00, 4:00, and 8:00 at the primary and traversing to the outer edge of the back of the secondary mount, where two struts arrived at each of 10:00, 2:00, and 6:00. The struts were attached to the primary mirror in a ring 3.5 m in diameter and reached the back of the secondary mount, a circle 0.704 m in diameter. Transmittance analyses at two levels were performed on the secondary mirror support towers: detailed transmittance calculations were carried out with the CODE V optical design/analysis program and compared with nearly back-of-the-envelope transmittance estimates. Point spread function (PSF) calculations, including both diffraction and aberration effects, were performed in CODE V. Going out from the center of the blur (for a point source), the two types of support towers show little difference in their PSF intensities until about the 3% level. Contours can be delineated in CODE V down to about 10^-8 times the peak intensity, fine

  8. Conjugate Point Equatorial Experiment (COPEX) campaign in Brazil: Electrodynamics highlights on spread F development conditions and day-to-day variability

    NASA Astrophysics Data System (ADS)

    Abdu, M. A.; Batista, I. S.; Reinisch, B. W.; de Souza, J. R.; Sobral, J. H. A.; Pedersen, T. R.; Medeiros, A. F.; Schuch, N. J.; de Paula, E. R.; Groves, K. M.

    2009-04-01

    A Conjugate Point Equatorial Experiment (COPEX) campaign was conducted during the October-December 2002 period in Brazil, with the objective of investigating the equatorial spread F/plasma bubble irregularity (ESF) development conditions in terms of the electrodynamical state of the ionosphere along the magnetic flux tubes in which they occur. A network of instruments, including Digisondes, optical imagers, and GPS receivers, was deployed at magnetic conjugate and dip equatorial locations in a geometry that permitted field-line mapping of the conjugate E layers to the dip equatorial F layer bottomside. We analyze in this paper the extensive Digisonde data from the COPEX stations, complemented by limited all-sky imager conjugate point observations. The Sheffield University Plasmasphere-Ionosphere Model (SUPIM) is used to assess the transequatorial winds (TEW) as inferred from the observed difference of hmF2 at the conjugate sites. New results and evidence on the ESF development conditions and the related ambient electrodynamic processes from this study can be highlighted as follows: (1) large-scale bottomside wave structures/satellite traces at the equator, followed by their simultaneous appearance at conjugate sites, are shown to be indicative of ESF instability initiation; (2) the evening prereversal electric field enhancement (PRE)/vertical drift exerts systematic control on the time delay in spread F onset at off-equatorial sites, indicative of vertical bubble growth, under weak transequatorial wind; (3) the PRE presents a large latitude/height gradient in the Brazilian sector; (4) conjugate point symmetry/asymmetry of large-scale plasma depletions versus smaller-scale structures is revealed; and (5) while transequatorial winds seem to suppress ESF development in a case study, the medium-term trend in the ESF seems to be controlled more by the variation in the PRE than in the TEW during the COPEX period. Competing influences of the evening vertical plasma drift in

  9. 3-D Modeling of a Nearshore Dye Release

    NASA Astrophysics Data System (ADS)

    Maxwell, A. R.; Hibler, L. F.; Miller, L. M.

    2006-12-01

    The usage of computer modeling software in predicting the behavior of a plume discharged into deep water is well established. Nearfield plume spreading in coastal areas with complex bathymetry is less commonly studied; in addition to geometry, some of the difficulties of this environment include: tidal exchange, temperature, and salinity gradients. Although some researchers have applied complex hydrodynamic models to this problem, nearfield regions are typically modeled by calibration of an empirical or expert system model. In the present study, the 3D hydrodynamic model Delft3D-FLOW was used to predict the advective transport from a point release in Sequim Bay, Washington. A nested model approach was used, wherein a coarse model using a mesh extending to nearby tide gages (cell sizes up to 1 km) was run over several tidal cycles in order to provide boundary conditions to a smaller area. The nested mesh (cell sizes up to 30 m) was forced on two open boundaries using the water surface elevation derived from the coarse model. Initial experiments with the uncalibrated model were conducted in order to predict plume propagation based on the best available field data. Field experiments were subsequently carried out by releasing rhodamine dye into the bay at near-peak flood tidal current and near high slack tidal conditions. Surface and submerged releases were carried out from an anchored vessel. Concurrently collected data from the experiment include temperature, salinity, dye concentration, and hyperspectral imagery, collected from boats and aircraft. A REMUS autonomous underwater vehicle was used to measure current velocity and dye concentration at varying depths, as well as to acquire additional bathymetric information. Preliminary results indicate that the 3D hydrodynamic model offers a reasonable prediction of plume propagation speed and shape. A sensitivity analysis is underway to determine the significant factors in effectively using the model as a predictive tool

  10. A Clean Adirondack (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.

  11. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  12. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  13. Vacant Lander in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D image captured by the Mars Exploration Rover Opportunity's rear hazard-identification camera shows the now-empty lander that carried the rover 283 million miles to Meridiani Planum, Mars. Engineers received confirmation that Opportunity's six wheels successfully rolled off the lander and onto martian soil at 3:01 a.m. PST, January 31, 2004, on the seventh martian day, or sol, of the mission. The rover is approximately 1 meter (3 feet) in front of the lander, facing north.

  14. PSF Rotation with Changing Defocus and Applications to 3D Imaging for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Kumar, R.

    2013-09-01

    For a clear, well-corrected imaging aperture in space, the point-spread function (PSF) in its Gaussian image plane has the conventional, diffraction-limited, tightly focused Airy form. Away from that plane, however, the PSF broadens rapidly, resulting in a loss of sensitivity and transverse resolution that makes such a traditional best-optics approach untenable for rapid 3D image acquisition. One must scan in focus to maintain high sensitivity and resolution as one acquires image data, slice by slice, from a 3D volume with reduced efficiency. In this paper we describe a computational-imaging approach to overcome this limitation, one that uses pupil-phase engineering to fashion a PSF that, although not as tight as the Airy spot, maintains its shape and size while rotating uniformly with changing defocus over many waves of defocus phase at the pupil edge. As one of us has shown recently [1], the subdivision of a circular pupil aperture into M Fresnel zones, with the mth zone having an outer radius proportional to the square root of m and impressing a spiral phase profile of the form mφ on the light wave, where φ is the azimuthal angle coordinate measured from a fixed x axis (the dislocation line), yields a PSF that rotates with defocus while keeping its shape and size. Physically speaking, a nonzero defocus of a point source means a quadratic optical phase in the pupil that, because of the square-root dependence of the zone radius on the zone number, increases on average by the same amount from one zone to the next. This uniformly incrementing phase yields, in effect, a rotation of the dislocation line, and thus a rotated PSF. Since the zone-to-zone phase increment depends linearly on defocus to first order, the PSF rotates uniformly with changing defocus. For an M-zone pupil, a complete rotation of the PSF occurs when the defocus-induced phase at the pupil edge changes by M waves. Our recent simulations of reconstructions from image data for 3D image scenes comprised of point sources at
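
    The zoned spiral pupil described above (M Fresnel zones, the mth zone with outer radius proportional to √m and carrying a spiral phase mφ) can be sketched as a phase mask on a sampled pupil. The sampling, zone count, and normalization below are placeholders, not the values used in the cited simulations.

    ```python
    import numpy as np

    def rotating_psf_pupil_phase(n_pixels=512, n_zones=7, aperture_radius=1.0):
        """Pupil phase for a rotating-PSF mask: M annular Fresnel zones, the m-th zone
        (outer radius proportional to sqrt(m)) carrying a spiral phase m * phi."""
        x = np.linspace(-aperture_radius, aperture_radius, n_pixels)
        X, Y = np.meshgrid(x, x)
        r = np.hypot(X, Y)
        phi = np.arctan2(Y, X)
        # Zone index m = 1..M such that the outer radius of zone m is R * sqrt(m / M).
        m = np.ceil(n_zones * (r / aperture_radius) ** 2).astype(int)
        m = np.clip(m, 1, n_zones)
        phase = m * phi
        phase[r > aperture_radius] = 0.0      # outside the clear aperture
        return phase

    pupil_phase = rotating_psf_pupil_phase()
    ```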

  15. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  16. 3D scanning modeling method application in ancient city reconstruction

    NASA Astrophysics Data System (ADS)

    Ren, Pu; Zhou, Mingquan; Du, Guoguang; Shui, Wuyang; Zhou, Pengbo

    2015-07-01

    With the development of optical engineering technology, the precision of 3D scanning equipment has become higher and its role in 3D modeling more prominent. This paper proposes a 3D scanning modeling method that has been successfully applied to Chinese ancient city reconstruction. On the one hand, for existing architecture, an improved algorithm based on multiple scans is adopted: first, two pieces of scanning data are coarsely rigid-registered using spherical displacers and a vertex clustering method; second, a globally weighted ICP (iterative closest point) method is used to achieve fine rigid registration. On the other hand, for buildings that have already disappeared, an exemplar-driven algorithm for rapid modeling is proposed. Based on 3D scanning technology and historical data, a systematic approach is proposed for 3D modeling and virtual display of the ancient city.
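
    The core step that a (weighted) ICP iteration repeats, the least-squares rigid transform between corresponding point sets, can be sketched with the SVD/Kabsch method. This is a generic illustration, not the paper's globally weighted ICP or its spherical-displacer coarse registration.

    ```python
    import numpy as np

    def best_fit_rigid_transform(src, dst):
        """Least-squares rotation R and translation t aligning src to dst
        (corresponding Nx3 point sets), via the SVD/Kabsch method."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against a reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    # One ICP iteration would match each source point to its nearest destination
    # point, call best_fit_rigid_transform on the matches, apply R and t, and repeat.
    ```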

  17. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material and could potentially initiate another new material age. However, even when exploited to their full extent, conventional processing methods fail to provide a link to today's personalization tide; a new technology is needed. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream, and their alliance could push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading of up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C⁻¹ from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress minute during the printing process.

  18. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. The acoustic signature of a small Unmanned Aerial Vehicle (UAV), measured onboard, is compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delay between the UAV and the ground microphones, which is affected by the atmospheric temperature and wind speed vectors along each sound ray path. The wind and temperature profiles are modelled as weighted sums of radial basis functions (RBFs), which also allow local meteorological measurements made at the UAV and the ground receivers to supplement the acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications in atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurement, analysis of wind shear, and wind farm surveys.
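
    The weighted-sum-of-RBFs representation mentioned above can be sketched as follows for a scalar field (temperature or one wind component) fitted to point observations by least squares. The Gaussian basis, centers, and width are placeholders, and the actual method inverts acoustic travel-time data rather than direct point samples.

    ```python
    import numpy as np

    def rbf_design_matrix(points, centers, width):
        """Gaussian RBF value of every center evaluated at every observation point."""
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    def fit_rbf_field(points, values, centers, width):
        """Least-squares weights so the weighted RBF sum matches the observations."""
        A = rbf_design_matrix(points, centers, width)
        weights, *_ = np.linalg.lstsq(A, values, rcond=None)
        return weights

    def evaluate_rbf_field(query_points, centers, width, weights):
        """Evaluate the reconstructed field at arbitrary query points."""
        return rbf_design_matrix(query_points, centers, width) @ weights
    ```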

  19. 3D Printed Bionic Ears

    PubMed Central

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  20. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikaw, K.-I.; Frank, J.; Christodoulou, D. M.; Koide, S.; Sakai, J.-I.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium, with Lorentz factors of W = 4.56, evolving in a four-dimensional spacetime. The new results are understood as follows: relativistic simulations have consistently shown that these jets are effectively heavy, so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese 'noren' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than move as a 2-D slab structure. We also simulate jets with more realistic initial injection conditions, including a helical magnetic field and perturbed density, velocity, and internal energy, which are expected to arise during jet generation. Three possible explanations for the observed variability are (i) tidal disruption of a star falling into the black hole, (ii) instabilities in the relativistic accretion disk, and (iii) jet-related processes. New results will be reported at the meeting.

  1. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  2. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material and could potentially initiate another new material age. However, even when exploited to their full extent, conventional processing methods fail to provide a link to today's personalization tide; a new technology is needed. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream, and their alliance could push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading of up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C⁻¹ from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress minute during the printing process. PMID:26153673

  3. 3D Ion Temperature Reconstruction

    NASA Astrophysics Data System (ADS)

    Tanabe, Hiroshi; You, Setthivoine; Balandin, Alexander; Inomoto, Michiaki; Ono, Yasushi

    2009-11-01

    The TS-4 experiment at the University of Tokyo collides two spheromaks to form a single high-beta compact toroid. Magnetic reconnection during the merging process heats and accelerates the plasma in toroidal and poloidal directions. The reconnection region has a complex 3D topology determined by the pitch of the spheromak magnetic fields at the merging plane. A pair of multichord passive spectroscopic diagnostics have been established to measure the ion temperature and velocity in the reconnection volume. One setup measures spectral lines across a poloidal plane, retrieving velocity and temperature from Abel inversion. The other, novel setup records spectral lines across another section of the plasma and reconstructs velocity and temperature from 3D vector and 2D scalar tomography techniques. The magnetic field linking both measurement planes is determined from in situ magnetic probe arrays. The ion temperature is then estimated within the volume between the two measurement planes and at the reconnection region. The measurement is followed over several repeatable discharges to follow the heating and acceleration process during the merging reconnection.

  4. LOTT RANCH 3D PROJECT

    SciTech Connect

    Larry Lawrence; Bruce Miller

    2004-09-01

    The Lott Ranch 3D seismic prospect, located in Garza County, Texas, is a project initiated in September 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126-square-mile project does not seem monumental; however, at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September 1991 using GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber and were of a radical design. The recording instruments were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed-bit format. It would have been impractical, if not impossible, to process the entire raw volume with the tools available at that time, and the end result was a dataset thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp., located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved by reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed-bit data on smaller 2D and 3D projects, and Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution as a priority; much of the potential resolution had been lost through the initial summing of the field data. Modern computers have tremendous speed and storage capacities that were cost-prohibitive when the data were initially processed, and software updates offer a variety of quality control and statics resolution capabilities pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed dataset was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  5. 3D-HST WFC3-SELECTED PHOTOMETRIC CATALOGS IN THE FIVE CANDELS/3D-HST FIELDS: PHOTOMETRY, PHOTOMETRIC REDSHIFTS, AND STELLAR MASSES

    SciTech Connect

    Skelton, Rosalind E.; Whitaker, Katherine E.; Momcheva, Ivelina G.; Van Dokkum, Pieter G.; Bezanson, Rachel; Leja, Joel; Nelson, Erica J.; Oesch, Pascal; Brammer, Gabriel B.; Labbé, Ivo; Franx, Marijn; Fumagalli, Mattia; Van der Wel, Arjen; Da Cunha, Elisabete; Maseda, Michael V.; Förster Schreiber, Natascha; Kriek, Mariska; Lundgren, Britt F.; Magee, Daniel; Marchesini, Danilo; and others

    2014-10-01

    The 3D-HST and CANDELS programs have provided WFC3 and ACS spectroscopy and photometry over ≈900 arcmin² in five fields: AEGIS, COSMOS, GOODS-North, GOODS-South, and the UKIDSS UDS field. All these fields have a wealth of publicly available imaging data sets in addition to the Hubble Space Telescope (HST) data, which makes it possible to construct the spectral energy distributions (SEDs) of objects over a wide wavelength range. In this paper we describe a photometric analysis of the CANDELS and 3D-HST HST imaging and the ancillary imaging data at wavelengths 0.3-8 μm. Objects were selected in the WFC3 near-IR bands, and their SEDs were determined by carefully taking the effects of the point-spread function in each observation into account. A total of 147 distinct imaging data sets were used in the analysis. The photometry is made available in the form of six catalogs: one for each field, as well as a master catalog containing all objects in the entire survey. We also provide derived data products: photometric redshifts, determined with the EAZY code, and stellar population parameters determined with the FAST code. We make all the imaging data that were used in the analysis available, including our reductions of the WFC3 imaging in all five fields. 3D-HST is a spectroscopic survey with the WFC3 and ACS grisms, and the photometric catalogs presented here constitute a necessary first step in the analysis of these grism data. All the data presented in this paper are available through the 3D-HST Web site (http://3dhst.research.yale.edu)

  6. Restructuring of RELAP5-3D

    SciTech Connect

    George Mesina; Joshua Hykes

    2005-09-01

    The RELAP5-3D source code is unstructured, with many interwoven logic flow paths. Restructuring the code makes it easier to read and understand, which reduces the time and money required for code development, debugging, and maintenance. A structured program is composed of blocks of code with one entry point, one exit point, and downward logic flow. IF tests and DO loops inherently create structured code, while GOTO statements are the main cause of unstructured code. FOR_STRUCT is a commercial software package that converts unstructured FORTRAN into structured programming; it was used to restructure individual subroutines. Primarily it transforms GOTO statements, ARITHMETIC IF statements, and COMPUTED GOTO statements into IF-ELSEIF-ELSE tests and DO loops. The complexity of RELAP5-3D complicated the task. First, FOR_STRUCT cannot completely restructure all the complex coding contained in RELAP5-3D; an iterative approach of multiple FOR_STRUCT applications gave some additional improvement. Second, FOR_STRUCT cannot restructure FORTRAN 90 coding, and RELAP5-3D is partially written in FORTRAN 90. Unix scripts were written to pre-process subroutines into code that FOR_STRUCT could handle and to post-process the result back into FORTRAN 90. Finally, FOR_STRUCT cannot restructure RELAP5-3D code that contains pre-compiler directives. Variations of such a file were processed with different pre-compiler options switched on or off, ensuring that every block of code was restructured; the variations were then recombined to create a completely restructured source file. Unix scripts were written to perform these tasks, as well as to make some minor formatting improvements. In total, 447 files comprising some 180,000 lines of FORTRAN code were restructured. These showed a significant reduction in the number of logic jumps, as measured by the reduction in GOTO statements and line labels. The average number of GOTO statements per subroutine
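
    As a rough illustration of the metric quoted above, the count of GOTO statements and line labels before and after restructuring, the following Python sketch scans FORTRAN source files for both. The regular expressions and command-line usage are illustrative assumptions and are not part of the FOR_STRUCT tooling.

    # Sketch: count GOTO statements and numeric line labels in fixed-form FORTRAN files.
    import re
    import sys

    GOTO_RE = re.compile(r"\bGO\s*TO\b", re.IGNORECASE)
    LABEL_RE = re.compile(r"^ {0,4}\d+")   # statement label in columns 1-5

    def count_jumps(path):
        gotos = labels = 0
        with open(path) as f:
            for line in f:
                if line[:1].upper() in ("C", "*", "!"):   # skip comment lines
                    continue
                gotos += len(GOTO_RE.findall(line))
                if LABEL_RE.match(line):
                    labels += 1
        return gotos, labels

    if __name__ == "__main__":
        for path in sys.argv[1:]:          # e.g. a subroutine before and after restructuring
            g, l = count_jumps(path)
            print(f"{path}: {g} GOTO statements, {l} line labels")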

  7. 3D surface defect analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Yang, B.; Jia, M.; Song, G. J.; Tao, L.; Harding, K. G.

    2008-08-01

    A method is proposed for surface defect analysis and evaluation. Good 3D point clouds can now be obtained through a variety of surface profiling methods such as stylus tracers, structured light, or interferometry. In order to inspect a surface for defects, a reference surface that represents the surface without any defects must first be identified. This reference surface can then be fit to the point cloud. The algorithm we present finds the least-squares solution of the overdetermined equation set to obtain the parameters of the reference surface's mathematical description. The distance between each point in the point cloud and the reference surface is then calculated using the derived reference surface equation. For analysis of the data, the user can preset a threshold distance value; if the calculated distance is larger than the threshold, the corresponding point is marked as a defect point. The software then generates a color-coded map of the measured surface. Defect points that are connected together form a defect-clustering domain, and each defect-clustering domain is treated as one defect area. A clustering-domain search algorithm is then used to automatically find all the defect areas in the point cloud. The critical parameters that can be calculated to evaluate the defect status of a point cloud are: P-Depth, the peak depth over all defects; Defect Number, the number of surface defects; Defects/Area, the number of defects per unit area; and Defect Coverage Ratio, the ratio of the defect area to the region of interest.
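
    A minimal sketch of this pipeline is given below, assuming a planar reference surface z = ax + by + c, a user-chosen distance threshold, and a simple radius-based neighbour clustering in place of the paper's clustering-domain search. The input file name, threshold, and clustering radius are illustrative placeholders.

    # Sketch: fit a reference plane, threshold point-to-plane distances, cluster defects.
    import numpy as np
    from scipy.spatial import cKDTree

    def fit_reference_plane(points):
        """Least-squares fit of z = a*x + b*y + c to an (N, 3) point cloud."""
        A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
        coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
        return coeffs  # (a, b, c)

    def point_plane_distances(points, coeffs):
        a, b, c = coeffs
        # Perpendicular distance to the plane a*x + b*y - z + c = 0.
        return np.abs(a * points[:, 0] + b * points[:, 1] - points[:, 2] + c) / np.sqrt(a**2 + b**2 + 1.0)

    def cluster_defects(defect_points, radius):
        """Group defect points whose neighbours lie within `radius` into clusters."""
        tree = cKDTree(defect_points)
        labels = -np.ones(len(defect_points), dtype=int)
        current = 0
        for i in range(len(defect_points)):
            if labels[i] != -1:
                continue
            stack, labels[i] = [i], current
            while stack:
                j = stack.pop()
                for k in tree.query_ball_point(defect_points[j], radius):
                    if labels[k] == -1:
                        labels[k] = current
                        stack.append(k)
            current += 1
        return labels

    # Usage: mark points beyond the preset threshold as defects and report two metrics.
    points = np.loadtxt("surface_points.xyz")     # hypothetical (N, 3) point cloud
    dist = point_plane_distances(points, fit_reference_plane(points))
    threshold = 0.05                              # user-chosen distance threshold
    defects = points[dist > threshold]
    labels = cluster_defects(defects, radius=0.1)
    print("P-Depth:", dist.max())                 # peak deviation from the reference surface
    print("Defect Number:", labels.max() + 1)     # number of defect-clustering domains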

  8. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based graphene oxide (GO) ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm⁻³) 3D-printed graphene aerogel exhibits superelasticity and high electrical conductivity. PMID:26861680

  9. 3D ladar ATR based on recognition by parts

    NASA Astrophysics Data System (ADS)

    Sobel, Erik; Douglas, Joel; Ettinger, Gil

    2003-09-01

    LADAR imaging is unique in its potential to accurately measure the 3D surface geometry of targets. We exploit this 3D geometry to perform automatic target recognition on military and civilian ground vehicles. Here we present a robust, model-based 3D LADAR ATR system that efficiently searches through target hypothesis space by reasoning hierarchically from vehicle parts up to identification of a whole vehicle with specific pose and articulation state. The LADAR data consists of one or more 3D point clouds generated by laser returns from ground vehicles viewed from multiple sensor locations. The key to this approach is an automated 3D registration process to precisely align and match multiple data views to model-based predictions of observed LADAR data. We accomplish this registration using robust 3D surface alignment techniques which we have also used successfully in 3D medical image analysis applications. The registration routine seeks to minimize a robust 3D surface distance metric to recover the best six-degree-of-freedom pose and fit. We process the observed LADAR data by first extracting salient parts, matching these parts to model-based predictions, and hierarchically constructing and testing increasingly detailed hypotheses about the identity of the observed target. This cycle of prediction, extraction, and matching efficiently partitions the target hypothesis space based on the distinctive anatomy of the target models and achieves effective recognition by progressing logically from a target's constituent parts up to its complete pose and articulation state.
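
    The registration step described above can be sketched generically as a trimmed, ICP-style loop that minimizes a robust point-to-point distance to recover a six-degree-of-freedom pose. The Python code below is such a generic sketch; it is not the authors' registration routine, which additionally handles articulation and model-based prediction, and the iteration count and trimming fraction are illustrative choices.

    # Generic sketch: trimmed ICP-style rigid alignment of an observed cloud to a model cloud.
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    def trimmed_icp(observed, model, iters=30, keep=0.8):
        """Align `observed` to `model`, keeping the best `keep` fraction of matches each pass."""
        tree = cKDTree(model)
        R_total, t_total = np.eye(3), np.zeros(3)
        pts = observed.copy()
        for _ in range(iters):
            dists, idx = tree.query(pts)
            order = np.argsort(dists)[: int(keep * len(pts))]   # robust trimming of worst matches
            R, t = best_rigid_transform(pts[order], model[idx[order]])
            pts = pts @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total, pts

    # Usage: R, t, aligned = trimmed_icp(observed_cloud, model_cloud)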

  10. An Automated 3D Indoor Topological Navigation Network Modelling

    NASA Astrophysics Data System (ADS)

    Jamali, A.; Rahman, A. A.; Boguslawski, P.; Gold, C. M.

    2015-10-01

    Indoor navigation is important for various applications such as disaster management and safety analysis. In the last decade, the indoor environment has been the focus of extensive research, including techniques for acquiring indoor data (e.g. terrestrial laser scanning), 3D indoor modelling, and 3D indoor navigation models. In this paper, an automated 3D topological indoor network generated from inaccurate 3D building models is proposed. In a normal scenario, deriving a 3D indoor navigation network requires accurate 3D models free of errors (e.g. gaps, intersections), and two cells (e.g. rooms, corridors) must touch each other for their connection to be built. The presented 3D indoor navigation network modelling is based on surveying control points and is less dependent on the 3D geometrical building model. To reduce the time and cost of the indoor data acquisition process, a Trimble LaserAce 1000 is used as the surveying instrument. The modelling results were validated against an accurate geometry of
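
    A minimal sketch of a topological indoor navigation network of the kind described above: cells (rooms, corridors, stairs) become graph nodes anchored at surveyed control points, connections are added where cells adjoin through doors or openings, and routing reduces to a graph search. All names, coordinates, and adjacencies below are hypothetical illustrations, not data from the paper.

    # Sketch: a topological navigation network over surveyed control points, with BFS routing.
    from collections import deque

    # Cell -> surveyed control-point coordinate (x, y, floor height), all hypothetical.
    control_points = {
        "room_101": (2.0, 3.5, 0.0),
        "corridor_1": (5.0, 3.5, 0.0),
        "room_102": (8.0, 3.5, 0.0),
        "stair_A": (5.0, 0.5, 0.0),
    }

    # Undirected adjacency: cells connected through doors or openings.
    adjacency = {
        "room_101": ["corridor_1"],
        "corridor_1": ["room_101", "room_102", "stair_A"],
        "room_102": ["corridor_1"],
        "stair_A": ["corridor_1"],
    }

    def shortest_route(start, goal):
        """Breadth-first search over the topological network (fewest cell transitions)."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in adjacency[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(shortest_route("room_101", "room_102"))  # ['room_101', 'corridor_1', 'room_102']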