Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.
2016-01-01
Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312
NASA Astrophysics Data System (ADS)
Chun-jing, Xiao; Xin, Zhou; Jian-ping, Hu; Jian, Xie; Yun, Wang; Xue-fan, Guo
2013-09-01
In this paper, by analyzing the variance and noise probability distribution of the reconstructed image, we study the point spreading of adjacent sections under the conventional optical scanning holography (OSH) method and under a new method based on random-phase encoding (RPE). Simulation results show that, compared with conventional OSH at the same defocus distance, RPE-OSH makes the distribution of the reconstructed adjacent sections more homogeneous, manifesting as a speckle-like pattern on the in-focus image. Further study shows that the complex amplitude probability distribution of an adjacent section follows a Gaussian curve, so the noise can be removed with corresponding filters. Aliasing and interference on the in-focus image can thus be eliminated.
NASA Astrophysics Data System (ADS)
Yao, Rutao; Ramachandra, Ranjith M.; Panse, Ashish; Balla, Deepika; Yan, Jianhua; Carson, Richard E.
2010-04-01
We previously designed a component-based 3-D PSF model to obtain a compact yet accurate system matrix for a dedicated human brain PET scanner. In this work, we adapted the model to a small animal PET scanner. Based on the model, we derived the system matrix for a back-to-back gamma source in air and for fluorine-18 and iodine-124 sources in water by Monte Carlo simulation. The characteristics of the PSF model were evaluated, and the performance of the newly derived system matrix was assessed by comparing its reconstructed images with those of the established reconstruction program provided with the animal PET scanner. The new system matrix showed strong PSF dependency on the line-of-response (LOR) incident angle and LOR depth. This confirmed the validity of the two components selected for the model. The effect of positron range on the system matrix was observed by comparing the PSFs of the different isotopes. Simulated and experimental hot-rod phantom studies showed that reconstruction with the proposed system matrix achieved better resolution recovery than the algorithm provided by the manufacturer. Quantitative evaluation also showed better convergence to the expected contrast value at a similar noise level. In conclusion, the system matrix derivation method is applicable to the animal PET system studied, suggesting that it may be used for other PET systems and different isotope applications.
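A system matrix enters iterative PET reconstruction through the standard MLEM update, x ← (x / AᵀI) · Aᵀ(y / Ax). As a hedged illustration only (the paper's PSF-modelled matrix and the scanner's reconstruction software are far more elaborate), a dense toy version of that update in pure Python:

```python
def mlem(A, y, iters=20):
    """Textbook MLEM for a small dense system matrix.

    A : list of rows, A[i][j] = probability that an emission in voxel j
        is detected in LOR i (a toy stand-in for the paper's PSF-based
        system matrix).
    y : measured counts per LOR.
    """
    nlor, nvox = len(A), len(A[0])
    x = [1.0] * nvox                                   # uniform initial image
    sens = [sum(A[i][j] for i in range(nlor)) for j in range(nvox)]  # A^T 1
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(nvox)) for i in range(nlor)]
        ratio = [y[i] / p if p > 0 else 0.0 for i, p in enumerate(proj)]
        back = [sum(A[i][j] * ratio[i] for i in range(nlor)) for j in range(nvox)]
        x = [x[j] * back[j] / sens[j] if sens[j] > 0 else 0.0
             for j in range(nvox)]
    return x
```

The multiplicative update keeps the image nonnegative, and a more accurate system matrix (e.g., one modelling LOR angle, depth and positron range) directly improves the resolution recovery of this loop.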
Can 3D Point Clouds Replace GCPs?
NASA Astrophysics Data System (ADS)
Stavropoulou, G.; Tzovla, G.; Georgopoulos, A.
2014-05-01
Over the past decade, large-scale photogrammetric products have been extensively used for the geometric documentation of cultural heritage monuments, as they combine metric information with the qualities of an image document. Additionally, the rising technology of terrestrial laser scanning has enabled the easier and faster production of accurate digital surface models (DSM), which have in turn contributed to the documentation of heavily textured monuments. However, due to the required accuracy of control points, photogrammetric methods are always applied in combination with surveying measurements and are hence dependent on them. Along this line of thought, this paper explores the possibility of limiting the surveying measurements and field work necessary for the production of large-scale photogrammetric products, and proposes an alternative method in which the necessary control points, instead of being measured with surveying procedures, are chosen from a dense and accurate point cloud. Using this point cloud also as a surface model, the only field work necessary is the scanning of the object and the image acquisition, which need not be subject to strict planning. To evaluate the proposed method, an algorithm and a complementary interface were produced that allow the parallel manipulation of 3D point clouds and images and through which single-image procedures take place. The paper concludes by presenting the results of a case study on the ancient temple of Hephaestus in Athens and by providing a set of guidelines for implementing the method effectively.
3D scene reconstruction based on 3D laser point cloud combining UAV images
NASA Astrophysics Data System (ADS)
Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen
2016-03-01
Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning, with the laser point cloud data as the basis and a Digital Ortho-photo Map as an auxiliary source, and uses 3ds Max as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the 3D scene is realistic and that its accuracy meets the needs of 3D scene construction.
Alignment of continuous video onto 3D point clouds.
Zhao, Wenyi; Nister, David; Hsu, Steve
2005-08-01
We propose a general framework for aligning continuous (oblique) video onto 3D sensor data. We align a point cloud computed from the video onto the point cloud directly obtained from a 3D sensor. This is in contrast to existing techniques where the 2D images are aligned to a 3D model derived from the 3D sensor data. Using point clouds enables the alignment for scenes full of objects that are difficult to model; for example, trees. To compute 3D point clouds from video, motion stereo is used along with a state-of-the-art algorithm for camera pose estimation. Our experiments with real data demonstrate the advantages of the proposed registration algorithm for texturing models in large-scale semiurban environments. The capability to align video before a 3D model is built from the 3D sensor data offers new practical opportunities for 3D modeling. We introduce a novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video. Initial experiments with real data illustrate the potential of the proposed approach.
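Cloud-to-cloud registration of this kind is typically solved with an iterative-closest-point (ICP) style scheme. As a loose illustration only (not the authors' motion-stereo pipeline; function names and the 2D simplification are ours), the sketch below alternates brute-force nearest-neighbour matching with the closed-form least-squares rigid transform:

```python
import math

def nearest(p, cloud):
    # brute-force nearest neighbour of p in cloud (fine for small clouds)
    return min(cloud, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def best_rigid_2d(src, dst):
    # closed-form least-squares rotation + translation for paired 2D points
    n = len(src)
    cxs = sum(p[0] for p in src) / n; cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n; cyd = sum(p[1] for p in dst) / n
    sdot = scross = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys, xd, yd = xs - cxs, ys - cys, xd - cxd, yd - cyd
        sdot += xs * xd + ys * yd      # aligned component
        scross += xs * yd - ys * xd    # rotated component
    th = math.atan2(scross, sdot)
    c, s = math.cos(th), math.sin(th)
    return th, cxd - (c * cxs - s * cys), cyd - (s * cxs + c * cys)

def icp_2d(src, dst, iters=20):
    # classic ICP loop: match, solve, transform, repeat
    pts = list(src)
    for _ in range(iters):
        matches = [nearest(p, dst) for p in pts]
        th, tx, ty = best_rigid_2d(pts, matches)
        c, s = math.cos(th), math.sin(th)
        pts = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]
    return pts
```

Real pipelines extend this to 3D (SVD-based transform estimation), use spatial indices instead of brute-force search, and need a reasonable initial pose, which is precisely what camera pose estimation from video provides.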
Point Cloud Visualization in AN Open Source 3d Globe
NASA Astrophysics Data System (ADS)
De La Calle, M.; Gómez-Deck, D.; Koehler, O.; Pulido, F.
2011-09-01
During the last years the usage of 3D applications in GIS has become more popular. Since the appearance of Google Earth, users are familiar with 3D environments. On the other hand, computers with 3D acceleration are now common, broadband access is widespread, and the amount of public information usable by GIS clients that consume data from the Internet is constantly increasing. Several libraries are currently suitable for this kind of application. Based on these facts, and connecting already developed libraries to our own developments, we are working on the implementation of a real 3D GIS with analysis capabilities. Since such a 3D GIS is very interesting for tasks like rendering and analysing LiDAR or laser scanner point clouds, special attention is given to the optimal handling of very large data sets. Glob3 will be a multidimensional GIS in which 3D point clouds can be explored and analysed, even if they consist of several million points. The latest addition to our visualization libraries is a point cloud server that works regardless of the cloud's size. The server receives and processes requests from a 3D client (for example glob3, but it could be any other, such as one based on WebGL) and delivers the data in the form of pre-processed tiles, depending on the required level of detail.
a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud
NASA Astrophysics Data System (ADS)
Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng
2016-06-01
This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to avoid the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
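The core ratio is easy to prototype. The sketch below implements a simplified, unweighted variant of DistMC and the SimMC ratio; the paper's distance-weighted strategy and model sampling scheme are omitted, and all names are illustrative:

```python
import math

def dist_pt_cloud(p, cloud):
    # distance from a single point to its nearest neighbour in the cloud
    return min(math.dist(p, q) for q in cloud)

def sim_mc(model_samples, cloud, surface_area):
    """Simplified SimMC: surface area of the model divided by the mean
    model-to-cloud distance (DistMC).  The original method weights both
    the area and the distances; here an unweighted mean stands in."""
    dist_mc = sum(dist_pt_cloud(p, cloud) for p in model_samples)
    dist_mc /= len(model_samples)
    return surface_area / max(dist_mc, 1e-12)  # guard against a perfect fit
```

The key computational point survives the simplification: only model-to-cloud distances are needed, so the cost scales with the (small) number of model samples rather than with every point in the cloud, which is what makes the method fast.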
The Feasibility of 3d Point Cloud Generation from Smartphones
NASA Astrophysics Data System (ADS)
Alsubaie, N.; El-Sheimy, N.
2016-06-01
This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching (SGM) algorithm is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
3D Building Reconstruction Using Dense Photogrammetric Point Cloud
NASA Astrophysics Data System (ADS)
Malihi, S.; Valadan Zoej, M. J.; Hahn, M.; Mokhtarzade, M.; Arefi, H.
2016-06-01
Three-dimensional models of urban areas play an important role in city planning, disaster management, city navigation and other applications. Reconstruction of 3D building models is still a challenging issue in 3D city modelling. Point clouds generated from multi-view UAV images are a novel source of spatial data, which is used in this research for building reconstruction. The process starts with the segmentation of the point clouds of roofs and walls into planar groups. By generating the related surfaces and applying geometrical constraints, including symmetry, a 3D model of the building is reconstructed. In a refinement step, dormers are extracted and their models are reconstructed. The reconstructed model reaches the LoD3 level of detail, with eaves, roof fractions and dormers modelled.
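Segmenting roof and wall points into planar groups is commonly done with RANSAC plane fitting. The following is a minimal generic sketch of that step (not the authors' implementation; function names and the inlier tolerance are ours):

```python
import random

def plane_from_points(p, q, r):
    # plane through three points: unit normal n and offset d with n·x = d
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:                      # collinear sample, no unique plane
        return None
    n = [c / norm for c in n]
    return n, sum(n[i] * p[i] for i in range(3))

def ransac_plane(points, tol=0.05, iters=200, seed=0):
    # repeatedly fit a plane to 3 random points; keep the plane that
    # explains the most points within distance `tol`
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) <= tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (n, d), inliers
    return best, best_inliers
```

Running this repeatedly, each time removing the inliers of the best plane, yields the planar groups from which roof and wall surfaces can then be generated.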
Automatic 3-D Point Cloud Classification of Urban Environments
2008-12-01
paper, we address the problem of automated interpretation of 3-D point clouds from scenes of urban and natural environments; our analysis is... over 10 km of traverse. We implemented three geometric features commonly used in spectral analysis of point clouds. We define λ2 ≥ λ1 ≥ λ0 to be
Mirror Identification and Correction of 3d Point Clouds
NASA Astrophysics Data System (ADS)
Käshammer, P.-F.; Nüchter, A.
2015-02-01
In terrestrial laser scanning (TLS), the surface geometry of objects is scanned by laser beams and recorded digitally. This produces a discrete set of scan points, commonly referred to as a point cloud. The coordinates of the scan points are determined by measuring the angles and the time-of-flight relative to the origin (the scanner position). However, when a laser beam hits a mirror surface it is fully reflected, due to the high reflectivity, and mirrors do not appear in the point cloud at all. Instead, for every reflected beam, an incorrect scan point is created behind the actual mirror plane. Consequently, problems arise in derived application fields such as the 3D virtual reconstruction of complex architectures. The paper presents a new approach to automatically detect framed rectangular mirrors with known dimensions and to correct the 3D point cloud using the calculated mirror plane.
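Once the mirror plane has been calculated, the geometric correction itself is a point reflection across that plane. The sketch below shows only this last step (mirror detection is the hard part and is not reproduced here); it assumes, as a simplification, that the scanner lies on the positive side of the unit plane normal, so every point on the negative side is treated as a phantom to be folded back:

```python
def reflect_across_plane(p, origin, normal):
    # mirror a 3D point across the plane through `origin` with unit `normal`:
    # p' = p - 2 ((p - origin) · n) n
    d = sum((p[i] - origin[i]) * normal[i] for i in range(3))  # signed distance
    return tuple(p[i] - 2.0 * d * normal[i] for i in range(3))

def correct_cloud(points, origin, normal):
    # points behind the mirror plane are phantom reflections; fold them
    # back to the real side, leave all other points untouched
    out = []
    for p in points:
        d = sum((p[i] - origin[i]) * normal[i] for i in range(3))
        out.append(reflect_across_plane(p, origin, normal) if d < 0 else p)
    return out
```

A real implementation would additionally clip the correction to beams that actually passed through the detected mirror frame, rather than folding everything behind the plane.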
The medial scaffold of 3D unorganized point clouds.
Leymarie, Frederic F; Kimia, Benjamin B
2007-02-01
We introduce the notion of the medial scaffold, a hierarchical organization of the medial axis of a 3D shape in the form of a graph constructed from special medial curves connecting special medial points. A key advantage of the scaffold is that it captures the qualitative aspects of shape in a hierarchical and tightly condensed representation. We propose an efficient and exact method for computing the medial scaffold based on a notion of propagation along the scaffold itself, starting from initial sources of the flow and constructing the scaffold during the propagation. We examine this method specifically in the context of an unorganized cloud of points in 3D, e.g., as obtained from laser range finders, which typically involve hundreds of thousands of points, but the ideas are generalizable to data arising from geometrically described surface patches. The computational bottleneck in the propagation-based scheme is in finding the initial sources of the flow. We thus present several ideas to avoid the unnecessary consideration of pairs of points which cannot possibly form a medial point source, such as the "visibility" of a point from another given a third point and the interaction of clusters of points. An application of using the medial scaffold for the representation of point samplings of real-life objects is also illustrated.
3-D Object Recognition from Point Cloud Data
NASA Astrophysics Data System (ADS)
Smith, W.; Walker, A. S.; Zhang, B.
2011-09-01
The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
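The DSM/DEM step can be conveyed with a toy normalized-DSM mask: subtracting bare-ground elevation from surface elevation leaves object heights, and thresholding those heights flags candidate 3-D object cells. This is a generic sketch, not the software described above; `ndsm_objects` and the 2 m default are our own choices:

```python
def ndsm_objects(dsm, dem, height_thresh=2.0):
    """Flag candidate object cells (buildings, trees) in a gridded scene.

    dsm, dem : equally sized 2D grids (lists of rows) of elevations.
    A cell is a candidate object point when its normalized height
    (DSM - DEM, i.e. height above bare ground) exceeds the threshold.
    """
    rows, cols = len(dsm), len(dsm[0])
    return [[(dsm[r][c] - dem[r][c]) > height_thresh for c in range(cols)]
            for r in range(rows)]
```

Grouping the flagged cells into connected regions, then separating buildings from trees and tracing boundaries, corresponds to the subsequent steps listed in the abstract.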
Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering
NASA Astrophysics Data System (ADS)
Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.
2016-06-01
This paper examines the effects of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manually editing the images' radiometry (captured at shallow depths) and of selecting the parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included (a) the definition of parameters for point cloud filtering and the creation of a reference model, (b) the radiometric editing of the images, followed by the creation of three improved models, and (c) the assessment of the results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck), in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.
Effects of point configuration on the accuracy in 3D reconstruction from biplane images
Dmochowski, Jacek; Hoffmann, Kenneth R.; Singh, Vikas; Xu Jinhui; Nazareth, Daryl P.
2005-09-15
increases, as would be intuitively expected, and shapes with larger spread, such as spherical shapes, yield more accurate reconstructions. These results are in agreement with an analysis of the solution volume of feasible geometries and could be used to guide selection of points for reconstruction of 3D configurations from two views.
Performance testing of 3D point cloud software
NASA Astrophysics Data System (ADS)
Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.
2013-10-01
LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available on the market, including open source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VRMesh, AutoCAD Civil 3D and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
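A minimal timing-and-memory harness conveys the flavour of such performance tests. Note the paper measures OS-level metrics (working set, commit size) of full software suites, while this sketch (names ours) only tracks wall-clock time and Python-heap peak usage with the standard `time` and `tracemalloc` modules:

```python
import time
import tracemalloc

def profile(fn, *args):
    # run one processing step, returning its result, elapsed wall-clock
    # seconds, and peak Python heap usage in bytes during the call
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

def parse_xyz(lines):
    # toy point cloud loader: one "x y z" point per text line
    return [tuple(map(float, ln.split())) for ln in lines]
```

Running `profile(parse_xyz, lines)` on clouds of increasing size gives the kind of loading-time curve used to compare the suites above; process-level working set and commit size would instead be read from OS counters.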
Comparison of 3D interest point detectors and descriptors for point cloud fusion
NASA Astrophysics Data System (ADS)
Hänsch, R.; Weber, T.; Hellwich, O.
2014-08-01
The extraction and description of keypoints as salient image parts has a long tradition within the processing and analysis of 2D images. Nowadays, 3D data is gaining more and more importance. This paper discusses the benefits and limitations of keypoints for the task of fusing multiple 3D point clouds. To this end, several combinations of 3D keypoint detectors and descriptors are tested. The experiments are based on 3D scenes with varying properties, including 3D scanner data as well as Kinect point clouds. The obtained results indicate that the specific method used to extract and describe keypoints in 3D data has to be carefully chosen. In many cases the accuracy suffers from too strong a reduction of the available points to keypoints.
Measuring 3D point configurations in pictorial space
Wagemans, Johan; van Doorn, Andrea J; Koenderink, Jan J
2011-01-01
We propose a novel method to probe the depth structure of the pictorial space evoked by paintings. The method involves an exocentric pointing paradigm that allows one to find the slope of the geodesic connection between any pair of points in pictorial space. Since the locations of the points in the picture plane are known, this immediately yields the depth difference between the points. A set of depth differences between all pairs of points from an N-point (N > 2) configuration then yields the configuration in depth up to an arbitrary depth offset. Since an N-point configuration implies N(N−1) (ordered) pairs, the number of observations typically far exceeds the number of inferred depths. This yields a powerful check on the geometrical consistency of the results. We report that the remaining inconsistencies are fully accounted for by the spread encountered in repeated observations. This implies that the concept of ‘pictorial space’ indeed has an empirical significance. The method is analyzed and empirically verified in considerable detail. We report large quantitative interobserver differences, though the results of all observers agree modulo a certain affine transformation that describes the basic cue ambiguities. This is expected on the basis of a formal analysis of monocular optical structure. The method will prove useful in a variety of potential applications. PMID:23145227
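The over-determination the authors exploit (N(N−1) ordered pairwise observations for only N depths) makes the depth recovery a small least-squares problem. The sketch below, our own illustration rather than the paper's analysis code, recovers depths up to the arbitrary offset (fixed here by zero mean) and reports the RMS residual that quantifies geometrical consistency:

```python
def recover_depths(d):
    """Least-squares depths from pairwise depth differences.

    d : n x n table with d[i][j] = measured depth difference z_i - z_j
        for i != j (the diagonal is ignored).
    Returns the depths with mean zero, plus the RMS of the residuals
    d[i][j] - (z_i - z_j), which measures observation consistency.
    """
    n = len(d)
    z = [(sum(d[i][j] for j in range(n) if j != i)
          - sum(d[j][i] for j in range(n) if j != i)) / (2 * n)
         for i in range(n)]
    res = [d[i][j] - (z[i] - z[j])
           for i in range(n) for j in range(n) if i != j]
    rms = (sum(r * r for r in res) / len(res)) ** 0.5
    return z, rms
```

For perfectly consistent observations the residual vanishes; for real pointing data the RMS can be compared against the spread of repeated observations, which is exactly the consistency check described above.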
Observation of a 3D Magnetic Null Point
NASA Astrophysics Data System (ADS)
Romano, P.; Falco, M.; Guglielmino, S. L.; Murabito, M.
2017-03-01
We describe high-resolution observations of a GOES B-class flare characterized by a circular ribbon at the chromospheric level, corresponding to the network at the photospheric level. We interpret the flare as a consequence of a magnetic reconnection event that occurred at a three-dimensional (3D) coronal null point located above the supergranular cell. The potential field extrapolation of the photospheric magnetic field indicates that the circular chromospheric ribbon is cospatial with the fan footpoints, while the ribbons of the inner and outer spines look like compact kernels. We found new interesting observational aspects that need to be explained by models: (1) a loop corresponding to the outer spine became brighter a few minutes before the onset of the flare; (2) the circular ribbon was formed by several adjacent compact kernels characterized by a size of 1″–2″; (3) the kernels with a stronger intensity emission were located at the outer footpoint of the darker filaments, departing radially from the center of the supergranular cell; (4) these kernels started to brighten sequentially in the clockwise direction; and (5) the site of the 3D null point and the shape of the outer spine were detected by RHESSI in the low-energy channel between 6.0 and 12.0 keV. Taking into account all these features and the length scales of the magnetic systems involved in the event, we argue that the low intensity of the flare may be ascribed to the low amount of magnetic flux and to its symmetric configuration.
Asymmetric effects at 3D Ising-like critical points
NASA Astrophysics Data System (ADS)
Tsypin, M.
2003-05-01
The Standard Model of electroweak interactions has a line of first-order phase transitions in the (Higgs mass, temperature) plane that ends in a critical point belonging to the 3D Ising model universality class [K. Rummukainen et al., hep-lat/9805013]. Similar critical points are found in finite-temperature QCD [M. Stephanov et al., hep-ph/9806219; F. Karsch et al., hep-lat/0107020]. When these critical points are studied by Monte Carlo simulations on the lattice, one observes certain residual deviations from Z2 symmetry (which is exact for the Ising model). Here we study whether such deviations can be attributed to asymmetric corrections to scaling, which are relatively poorly studied. We compute the critical exponents in the local potential approximation (LPA), that is, in the framework of the Wegner-Houghton equation. We find that the exponent for the leading antisymmetric correction to scaling is approximately 1.691 in the LPA. This high value implies that such corrections cannot explain the observed asymmetries.
Multiframe image point matching and 3-d surface reconstruction.
Tsai, R Y
1983-02-01
This paper presents two new methods, the Joint Moment Method (JMM) and the Window Variance Method (WVM), for image matching and 3-D object surface reconstruction using multiple perspective views. The viewing positions and orientations for these perspective views are known a priori, as is usually the case for such applications as robotics and industrial vision as well as close range photogrammetry. Like the conventional two-frame correlation method, the JMM and WVM require finding the extrema of 1-D curves, which are proved to theoretically approach a delta function exponentially as the number of frames increases for the JMM and are much sharper than the two-frame correlation function for both the JMM and the WVM, even when the image point to be matched cannot be easily distinguished from some of the other points. The theoretical findings have been supported by simulations. It is also proved that JMM and WVM are not sensitive to certain radiometric effects. If the same window size is used, the computational complexity for the proposed methods is about n - 1 times that for the two-frame method where n is the number of frames. Simulation results show that the JMM and WVM require smaller windows than the two-frame correlation method with better accuracy, and therefore may even be more computationally feasible than the latter since the computational complexity increases quadratically as a function of the window size.
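As a loose, heavily simplified illustration of the window-variance idea (1-D signals standing in for image windows, and not Tsai's actual formulation), the correct multi-frame match is the candidate that minimises the across-frame intensity variance accumulated over a window:

```python
def window_variance(frames, centers, half=1):
    """Photo-consistency score for one match hypothesis.

    frames  : n 1-D intensity signals (one per view).
    centers : the candidate position of the matched point in each frame.
    For every offset in the window, the pixel is taken from each frame at
    its candidate position and the across-frame variance is accumulated;
    the correct hypothesis minimises the total.
    """
    n = len(frames)
    score = 0.0
    for off in range(-half, half + 1):
        vals = [f[c + off] for f, c in zip(frames, centers)]
        mean = sum(vals) / n
        score += sum((v - mean) ** 2 for v in vals) / n
    return score
```

With more frames the score curve around the true match sharpens, which is the qualitative behaviour (much sharper than two-frame correlation) reported above; the real methods derive the candidate positions from the known camera geometry.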
Petroll, W. Matthew; Ma, Lisha; Kim, Areum; Ly, Linda; Vishwanath, Mridula
2009-01-01
The goal of this study was to determine the morphological and sub-cellular mechanical effects of Rac activation on fibroblasts within 3-D collagen matrices. Corneal fibroblasts were plated at low density inside 100 μm thick fibrillar collagen matrices and cultured for 1 to 2 days in serum-free media. Time-lapse imaging was then performed using Nomarski DIC. After an acclimation period, perfusion was switched to media containing PDGF. In some experiments, Y-27632 or blebbistatin were used to inhibit Rho-kinase (ROCK) or myosin II, respectively. PDGF activated Rac and induced cell spreading, which resulted in an increase in cell length, cell area, and the number of pseudopodial processes. Tractional forces were generated by extending pseudopodia, as indicated by centripetal displacement and realignment of collagen fibrils. Interestingly, the pattern of pseudopodial extension and local collagen fibril realignment was highly dependent upon the initial orientation of fibrils at the leading edge. Following ROCK or myosin II inhibition, significant ECM relaxation was observed, but small displacements of collagen fibrils continued to be detected at the tips of pseudopodia. Taken together, the data suggests that during Rac-induced cell spreading within 3-D matrices, there is a shift in the distribution of forces from the center to the periphery of corneal fibroblasts. ROCK mediates the generation of large myosin II-based tractional forces during cell spreading within 3-D collagen matrices, however residual forces can be generated at the tips of extending pseudopodia that are both ROCK and myosin II-independent. PMID:18452153
Miron-Mendoza, Miguel; Lin, Xihui; Ma, Lisha; Ririe, Peter; Petroll, W Matthew
2012-06-01
Extracellular matrix (ECM) supplies both physical and chemical signals to cells and provides a substrate through which fibroblasts migrate during wound repair. To directly assess how ECM composition regulates this process, we used a nested 3D matrix model in which cell-populated collagen buttons were embedded in cell-free collagen or fibrin matrices. Time-lapse microscopy was used to record the dynamic pattern of cell migration into the outer matrices, and 3D confocal imaging was used to assess cell connectivity and cytoskeletal organization. Corneal fibroblasts stimulated with PDGF migrated more rapidly into collagen as compared to fibrin. In addition, the pattern of fibroblast migration into fibrin and collagen ECMs was strikingly different. Corneal fibroblasts migrating into collagen matrices developed dendritic processes and moved independently, whereas cells migrating into fibrin matrices had a more fusiform morphology and formed an interconnected meshwork. A similar pattern was observed when using dermal fibroblasts, suggesting that this response is not unique to corneal cells. We next cultured corneal fibroblasts within and on top of standard collagen and fibrin matrices to assess the impact of ECM composition on the cell spreading response. Similar differences in cell morphology and connectivity were observed – cells remained separated on collagen but coalesced into clusters on fibrin. Cadherin was localized to junctions between interconnected cells, whereas fibronectin was present both between cells and at the tips of extending cell processes. Cells on fibrin matrices also developed more prominent stress fibers than those on collagen matrices. Importantly, these spreading and migration patterns were consistently observed on both rigid and compliant substrates, thus differences in ECM mechanical stiffness were not the underlying cause. Overall, these results demonstrate for the first time that ECM protein composition alone (collagen vs. fibrin) can induce
Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups
ERIC Educational Resources Information Center
Casas, Lluís; Estop, Eugènia
2015-01-01
Both virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…
The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models
NASA Astrophysics Data System (ADS)
Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain
2014-05-01
The Castle of Engelbourg was built at the beginning of the 13th century at the top of the Schlossberg. It is situated on the territory of the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the valley of the Thur. Its strategic position was one of the causes of its repeated destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among them a section of the main tower, about 7 m in diameter and 4 m wide, lying on its side, a feature unique in the regional castral landscape. Visible from the valley, it was named "the Eye of the witch" and became a key attraction of the region. The site, which extends over approximately one hectare, has for several years been the object of numerous archaeological studies and is today at the heart of a project for the valorization of the vestiges. Among the numerous planned works, a key objective was to produce a 3D model of the site in its current state, in other words an "as-built" virtual model, exploitable from a cultural and tourist point of view as well as by scientists in archaeological research. The team of the ICube/INSA lab was responsible for the realization of this model, from the acquisition of the data to the delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate into this 3D model 2D archive data stemming from series of former excavations. The objectives of this project were the following: • Acquisition of 3D digital data of the site and 3D modelling • Digitization of the 2D archaeological data and integration into the 3D model • Implementation of a database connected to the 3D model • Virtual visit of the site. The obtained results allowed us to visualize every 3D object individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail
NASA Astrophysics Data System (ADS)
Xing, Xu-Feng; Mostafavi, Mir Abolfazl; Wang, Chen
2016-06-01
Topological relations are fundamental for the qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging, and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by the Boundary Representation model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element in the 3 × 3 matrix records the details of connection through the common parts of two regions and the intersecting line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point clouds automatically.
NASA Astrophysics Data System (ADS)
Bournez, E.; Landes, T.; Saudreau, M.; Kastendeuch, P.; Najjar, G.
2017-02-01
3D models of tree geometry are important for numerous studies, such as urban planning or agricultural studies. In climatology, tree models can be necessary for simulating the cooling effect of trees by estimating their evapotranspiration. The literature shows that the more accurate the 3D structure of a tree is, the more accurate microclimate models are. This is the reason why, since 2013, we have been developing an algorithm for the reconstruction of trees from terrestrial laser scanner (TLS) data, which we call TreeArchitecture. Meanwhile, new promising algorithms dedicated to tree reconstruction have emerged in the literature. In this paper, we assess the capacity of our algorithm and of two others, PlantScan3D and SimpleTree, to reconstruct the 3D structure of trees. The aim of this reconstruction is to be able to characterize the geometric complexity of trees with different heights, sizes and shapes of branches. Based on a specific surveying workflow with a TLS, we acquired dense point clouds of six different urban trees with specific architectures before reconstructing them with each algorithm. Finally, qualitative and quantitative assessments of the models are performed using reference tree reconstructions and field measurements. Based on this assessment, the advantages and the limits of every reconstruction algorithm are highlighted. Nevertheless, very satisfactory results can be reached for 3D reconstructions of tree topology as well as of tree volume.
NASA Astrophysics Data System (ADS)
Lague, D.; Brodu, N.; Leroux, J.
2012-12-01
Ground based lidar and photogrammetric techniques are increasingly used to track the evolution of natural surfaces in 3D at an unprecedented resolution and precision. The range of applications encompasses many types of natural surfaces with different geometries and roughness characteristics (landslides, cliff erosion, river beds, bank erosion, ...). Unravelling surface change in these contexts requires comparing large point clouds in 2D or 3D. The most commonly used method in geomorphology is based on a 2D difference of the gridded point clouds. Yet this is hardly adapted to many 3D natural environments such as rivers (with horizontal beds and vertical banks), while gridding complex rough surfaces is a complex task. On the other hand, tools allowing 3D comparison are scarce and may require meshing the point clouds, which is difficult on rough natural surfaces. Moreover, existing 3D comparison tools do not provide an explicit calculation of confidence intervals that would factor in registration errors, roughness effects and instrument-related position uncertainties. To unlock this problem, we developed the first algorithm combining a 3D measurement of surface change directly on point clouds with an estimate of spatially variable confidence intervals (called M3C2). The method has two steps: (1) surface normal estimation and orientation in 3D at a scale consistent with the local roughness; (2) measurement of mean surface change along the normal direction with explicit calculation of a local confidence interval. Comparison with existing 3D methods based on a closest-point calculation demonstrates the higher precision of the M3C2 method when mm changes need to be detected. The M3C2 method is also simple to use as it does not require surface meshing or gridding, and is not sensitive to missing data or changes in point density. We also present a 3D classification tool (CANUPO) for vegetation removal based on a new geometrical measure: the multi
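The core M3C2 measurement described in the abstract above (project each cloud's neighbours onto the core point's normal inside a cylinder, then compare means with a confidence interval) can be sketched as follows. This is an illustrative simplification with an assumed fixed normal and cylinder radius, not the authors' implementation:

```python
import numpy as np

def m3c2_distance(core, normal, cloud1, cloud2, radius=1.0):
    """Simplified M3C2 core-point distance: project each cloud's points
    near `core` onto the (unit) normal within a cylinder of `radius`,
    then compare the two mean positions along the normal."""
    def along_normal(cloud):
        d = cloud - core
        axial = d @ normal                                  # signed offset along normal
        radial = np.linalg.norm(d - np.outer(axial, normal), axis=1)
        return axial[radial <= radius]                      # keep points inside cylinder
    a1, a2 = along_normal(cloud1), along_normal(cloud2)
    dist = a2.mean() - a1.mean()
    # spatially variable 95% confidence interval from the two standard errors
    ci = 1.96 * np.sqrt(a1.var(ddof=1) / len(a1) + a2.var(ddof=1) / len(a2))
    return dist, ci
```

For two parallel flat patches the returned distance is simply their separation along the normal.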
Filtering method for 3D laser scanning point cloud
NASA Astrophysics Data System (ADS)
Liu, Da; Wang, Li; Hao, Yuncai; Zhang, Jun
2015-10-01
In recent years, with the rapid development of hardware and software for three-dimensional model acquisition, three-dimensional laser scanning technology has been utilized in many fields, especially in space exploration. Filtering of the point cloud is very important before using the data. In this paper, considering both processing quality and computing speed, an improved mean-shift point cloud filtering method is proposed. Firstly, by analyzing the correlation of the normal vectors between the point being processed and its neighbouring points, the iterative neighbourhood of the mean shift is selected dynamically, and high-frequency noise is thereby constrained. Secondly, taking the normal vector of the point being processed into account, the normal vector is updated. Finally, an updated position is calculated for each point, and each point is moved along its normal vector to the updated position. The experimental results show that large features are retained while small sharp features are also preserved for objects of different sizes and shapes, so the target feature information is protected precisely. The computational complexity of the proposed method is not high; it yields high-precision results at high speed, so it is very suitable for space applications. It can also be utilized in civil applications such as large-object measurement, industrial measurement and car navigation. In the future, filtering with the help of point intensity will be further explored.
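The final step of the method above (moving each point along its normal to an updated position) can be illustrated with a toy helper; the dynamic neighbourhood selection and normal updating of the actual algorithm are omitted, so this is only a sketch:

```python
import numpy as np

def move_along_normal(p, normal, neighbors):
    """Shift point p along its unit normal by the mean normal-direction
    offset of its neighbours -- a toy stand-in for one mean-shift
    filtering update."""
    offsets = [float(np.dot(q - p, normal)) for q in neighbors]
    return p + normal * (sum(offsets) / len(offsets))
```

A point perturbed off a locally planar neighbourhood is pulled back onto the plane, while tangential structure is untouched.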
Image reconstruction with analytical point spread functions
NASA Astrophysics Data System (ADS)
Asensio Ramos, A.; López Ariste, A.
2010-07-01
Context. The image degradation produced by atmospheric turbulence and optical aberrations is usually alleviated using post-facto image reconstruction techniques, even when observing with adaptive optics systems. Aims: These techniques rely on the development of the wavefront using Zernike functions and the non-linear optimization of a certain metric. The resulting optimization procedure is computationally heavy. Our aim is to alleviate this computational burden. Methods: We generalize the extended Zernike-Nijboer theory to carry out the analytical integration of the Fresnel integral and present a natural basis set for the development of the point spread function when the wavefront is described using Zernike functions. Results: We present a linear expansion of the point spread function in terms of analytic functions, which, in addition, takes defocusing into account in a natural way. This expansion is used to develop a very fast phase-diversity reconstruction technique, which is demonstrated in terms of some applications. Conclusions: We propose that the linear expansion of the point spread function can be applied to accelerate other reconstruction techniques in use that are based on blind deconvolution.
Extracting Feature Points of the Human Body Using the Model of a 3D Human Body
NASA Astrophysics Data System (ADS)
Shin, Jeongeun; Ozawa, Shinji
The purpose of this research is to recognize 3D shape features of a human body automatically using a 3D laser-scanning machine. In order to recognize the 3D shape features, we selected 23 feature points of the body and modeled their 3D features. The set of 23 feature points consists of the motion axes of the joints, the main points of the bone structure of a human body. To extract the feature points of an object model, 2.5D templates neighbouring each feature point were created according to the feature points of the standard model of the human body, and the feature points were then extracted by template matching. The extracted feature points can be applied to body measurement, 3D virtual fitting systems for apparel, etc.
Numerical 3D models support two distinct hydrothermal circulation systems at fast spreading ridges
NASA Astrophysics Data System (ADS)
Hasenclever, Jörg; Theissen-Krah, Sonja; Rüpke, Lars
2013-04-01
We present 3D numerical calculations of hydrothermal fluid flow at fast spreading ridges. The setup of the 3D models is based on our previous 2D studies, in which we coupled numerical models of crustal accretion and hydrothermal fluid flow. One result of these calculations is a crustal permeability field that leads to a thermal structure in the crust that matches seismic tomography data of the East Pacific Rise (EPR). The 1000°C isotherm obtained from the 2D results is now used as the lower boundary of the 3D model domain, while the upper boundary is a smoothed bathymetry of the EPR. The same permeability field as in the 2D models is used, with the highest permeability at the ridge axis and a decrease with both depth and distance to the ridge. Permeability is also reduced linearly between 600 and 1000°C. Using a newly developed parallel finite element code written in Matlab that solves for thermal evolution, fluid pressure and Darcy flow, we simulate the flow patterns of hydrothermal circulation in a segment of 5000m along-axis, 10000m across-axis and up to 5000m depth. We observe two distinct hydrothermal circulation systems: an on-axis system forming a series of vents with a spacing ranging from 100 to 500m that is recharged by nearby (100-200m) downflows on both sides of the ridge axis. Simultaneously, a second system with much broader extent both laterally and vertically exists off-axis. It is recharged by fluids intruding between 1500m and 5000m off-axis and sampling both upper and lower crust. These fluids are channeled in the deepest and hottest regions with high permeability and migrate up-slope following the 600°C isotherm until reaching the edge of the melt lens. Depending on the width of the melt lens, these off-axis fluids either merge with the on-axis hydrothermal system or form separate vents. We observe separate off-axis vent fields if the magma lens half-width exceeds 1000m and confluence of both systems for half-widths smaller than 500m. For
Point spread function engineering with multiphoton SPIFI
NASA Astrophysics Data System (ADS)
Wernsing, Keith A.; Field, Jeffrey J.; Domingue, Scott R.; Allende-Motz, Alyssa M.; DeLuca, Keith F.; Levi, Dean H.; DeLuca, Jennifer G.; Young, Michael D.; Squier, Jeff A.; Bartels, Randy A.
2016-03-01
MultiPhoton SPatIal Frequency modulated Imaging (MP-SPIFI) has recently demonstrated the ability to simultaneously obtain super-resolved images in both coherent and incoherent scattering processes -- namely, second harmonic generation and two-photon fluorescence, respectively [1]. In our previous analysis, we considered image formation produced by the zero and first diffracted orders from the SPIFI modulator. However, the modulator is a binary amplitude mask, and therefore produces multiple diffracted orders. In this work, we extend our analysis to image formation in the presence of higher diffracted orders. We find that tuning the mask duty cycle offers a measure of control over the shape of super-resolved point spread functions in an MP-SPIFI microscope.
3D point cloud registration based on the assistant camera and Harris-SIFT
NASA Astrophysics Data System (ADS)
Zhang, Yue; Yu, HongYang
2016-07-01
3D (three-dimensional) point cloud registration is a hot topic in the field of 3D reconstruction, but most registration methods are neither real-time nor effective. This paper proposes a point cloud registration method for 3D reconstruction based on Harris-SIFT and an assistant camera. The assistant camera is used to pinpoint the mobile 3D reconstruction device. The feature points of images are detected using the Harris operator, the main orientation of each feature point is calculated, and lastly the feature point descriptors are generated after rotating the coordinates of the descriptors relative to the feature points' main orientations. Experimental results demonstrate the effectiveness of the proposed method.
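The Harris operator used above scores each pixel with R = det(M) - k·trace(M)^2, where M is the image-gradient structure tensor summed over a small window. A minimal NumPy sketch (not the authors' code; window size and k are illustrative assumptions):

```python
import numpy as np

def harris_response(img, k=0.04, win=1):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    structure tensor of the image gradients summed over a (2*win+1)^2
    window around each pixel."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    R = np.zeros(img.shape)
    for y in range(win, img.shape[0] - win):
        for x in range(win, img.shape[1] - win):
            sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
            m00, m11, m01 = Ixx[sl].sum(), Iyy[sl].sum(), Ixy[sl].sum()
            R[y, x] = m00 * m11 - m01 * m01 - k * (m00 + m11) ** 2
    return R
```

Corners (gradients in two directions) give R > 0, edges (one direction) give R < 0, and flat regions give R ≈ 0, which is what makes the response usable for feature-point detection.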
Automated Mosaicking of Multiple 3d Point Clouds Generated from a Depth Camera
NASA Astrophysics Data System (ADS)
Kim, H.; Yoon, W.; Kim, T.
2016-06-01
In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data by using the ToF (Time of Flight) method and intensity data by using the intensity of the returned signal. The depth camera used in this paper was an SR4000 from MESA Imaging. This camera generates a depth map and an intensity map of 176 x 144 pixels. The generated depth map stores physical depth data with millimetre precision. The generated intensity map contains texture data with much noise. We used the intensity maps for extracting tiepoints and the depth maps for assigning z coordinates to tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data per rotation. In the second step, we estimated 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps. They were converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single depth map mosaic was calculated and depth values per mosaic pixel were determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
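The 3D similarity transformation model of the second step has a standard closed-form (Umeyama-style) solution from matched 3D tiepoints; the sketch below illustrates that solution and is not necessarily the estimator the authors used:

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form scaled-Procrustes estimate of (s, R, t) such that
    dst_i ~= s * R @ src_i + t, from matched 3D tiepoints."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(A.T @ B)
    d = np.ones(3)
    if np.linalg.det(Vt.T @ U.T) < 0:   # keep R a proper rotation, not a reflection
        d[2] = -1.0
    R = (Vt.T * d) @ U.T
    s = (sig * d).sum() / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Given exact correspondences the scale, rotation, and translation are recovered exactly, which is why a handful of reliable tiepoints suffices per cloud pair.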
Fast Probabilistic Fusion of 3d Point Clouds via Occupancy Grids for Scene Classification
NASA Astrophysics Data System (ADS)
Kuhn, Andreas; Huang, Hai; Drauschke, Martin; Mayer, Helmut
2016-06-01
High resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) allow for cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi View Stereo (MVS) the automatic generation of huge amounts of 3D points with a relative accuracy in the centimeter range is possible. Applications such as semantic classification have a need for accurate 3D point clouds, but do not benefit from an extremely high resolution/density. In this paper, we, therefore, propose a fast fusion of high resolution 3D point clouds based on occupancy grids. The result is used for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be considered in the classification process when a per point belief is determined in the fusion process. To this end, we employ an octree-based fusion which allows for the derivation of outlier probabilities. The probabilities give a belief for every 3D point, which is essential for the semantic classification to consider measurement noise. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method can reduce runtime as well as improve classification accuracy and offers high scalability for large datasets.
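A stripped-down illustration of the occupancy-grid idea above (per-voxel hit counts as a crude per-point belief, with sparsely supported voxels flagged as probable outliers); the paper's octree-based fusion is considerably more elaborate:

```python
from collections import Counter

def fuse_points(points, voxel=0.5, min_hits=2):
    """Bucket 3D points into a voxel grid; the per-voxel hit count is a
    crude belief for each point, and points in sparsely supported voxels
    are flagged as probable outliers."""
    key = lambda p: tuple(int(c // voxel) for c in p)
    grid = Counter(key(p) for p in points)
    beliefs = [grid[key(p)] for p in points]
    return beliefs, [b >= min_hits for b in beliefs]
```

A downstream classifier can then weight each point by its belief instead of discarding outliers outright, which is the argument the abstract makes.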
Automated detection of planes in 3-D point clouds using fast Hough transforms
NASA Astrophysics Data System (ADS)
Ogundana, Olatokunbo O.; Coggrave, C. Russell; Burguete, Richard L.; Huntley, Jonathan M.
2011-05-01
Calibration of 3-D optical sensors often involves the use of calibration artifacts consisting of geometric features, such as 2 or more planes or spheres of known separation. In order to reduce data processing time and minimize user input during calibration, the respective features of the calibration artifact need to be automatically detected and labeled from the measured point clouds. The Hough transform (HT), which is a well-known method for line detection based on foot-of-normal parameterization, has been extended to plane detection in 3-D space. However, the typically sparse intermediate 3-D Hough accumulator space leads to excessive memory storage requirements. A 3-D HT method based on voting in an optimized sparse 3-D matrix model and efficient peak detection in Hough space is described. An alternative 1-D HT is also investigated for rapid detection of nominally parallel planes. Examples of the performance of these methods using simulated and experimental shape data are presented.
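The alternative 1-D Hough transform for nominally parallel planes reduces to a histogram of signed distances along the shared normal; a sketch assuming the normal is known (bin width and peak threshold are illustrative choices):

```python
import numpy as np

def parallel_plane_ht(points, normal, bin_width=0.05, min_votes=3):
    """1-D Hough transform for nominally parallel planes: each point
    votes for its signed distance rho = n . p along the (unit) normal;
    local maxima of the accumulator are candidate plane offsets."""
    bins = np.round(points @ normal / bin_width).astype(int)
    acc = {}
    for b in bins:
        acc[b] = acc.get(b, 0) + 1
    return sorted(b * bin_width for b, n in acc.items()
                  if n >= min_votes
                  and n > acc.get(b - 1, 0) and n > acc.get(b + 1, 0))
```

Collapsing the 3-D accumulator to one dimension is what makes this variant fast enough for calibration artifacts with known plane orientation.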
Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.
Pang, Xufang; Song, Zhan; Xie, Wuyuan
2013-01-01
3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
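The local paraboloid fit and curvature computation in the first two steps above can be sketched with an ordinary least-squares fit standing in for moving least squares; at a point where the fitted gradient vanishes, the Hessian eigenvalues give the principal curvatures:

```python
import numpy as np

def paraboloid_curvatures(neighborhood):
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to a local point
    neighbourhood by least squares, then return the eigenvalues of the
    Hessian of the fitted height function; where the fitted gradient is
    ~zero these are the principal curvatures."""
    x, y, z = neighborhood.T
    G = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(G, z, rcond=None)[0]
    return np.linalg.eigvalsh(np.array([[2 * a, b], [b, 2 * c]]))
```

Valley and ridge candidates correspond to neighbourhoods where one principal curvature is strong and the other near zero; this sketch only shows the curvature estimation itself.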
Convergence of the point vortex method for the 3-D Euler equations
NASA Astrophysics Data System (ADS)
Hou, Thomas Y.; Lowengrub, John
1990-11-01
Consistency, stability, and convergence of a point vortex approximation to the 3-D incompressible Euler equations with smooth solutions are proved. The 3-D algorithm considered is similar to the corresponding 3-D vortex blob algorithm introduced by Beale and Majda. The discretization error is second-order accurate, and the method is stable in the l^p norm for the particle trajectories and in the w^{-1,p} norm for the discrete vorticity. Consequently, the method converges up to any time for which the Euler equations have a smooth solution. One immediate application of the convergence result is that the vortex filament method without smoothing also converges.
The Effect of Dissipation Mechanism on X-line Spreading in 3D Magnetic Reconnection
NASA Astrophysics Data System (ADS)
Shepherd, L. S.; Cassak, P.; Phan, T.; Shay, M. A.; Gosling, J. T.
2012-12-01
Naturally occurring magnetic reconnection generally begins in a spatially localized region and spreads in the direction perpendicular to the reconnection plane as time progresses. Reconnection spreading is associated with dawn-dusk asymmetries during substorms in the magnetotail and has been observed in two-ribbon flares (such as the Bastille Day flare) and in laboratory experiments at the Versatile Toroidal Facility (VTF) and the Magnetic Reconnection eXperiment (MRX). It has been suggested that X-line spreading is necessary to explain the existence of X-lines extending more than 390 Earth radii (Phan et al., Nature, 404, 848, 2006). Previous numerical studies exploring the spreading of localized magnetic reconnection exclusively addressed collisionless (Hall) reconnection. Here, we address the effect the dissipation mechanism has on X-line spreading, with and without a guide field. We compare previous results with simulations using three alternate phases of reconnection: Sweet-Parker reconnection, collisional reconnection with secondary islands, and reconnection with anomalous resistivity. We present results from three-dimensional resistive magnetohydrodynamic numerical simulations to address the nature of X-line spreading. Applications to reconnection in the solar wind and corona will be discussed.
Human body 3D posture estimation using significant points and two cameras.
Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin
2014-01-01
This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures.
NASA Astrophysics Data System (ADS)
Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard
2010-04-01
The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point cloud) in GPS-denied regions from a sequence of co-bore-sighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for various spatial and geometric alignments of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range and intensity estimates of imagery collected during urban terrain mapping using a co-bore-sighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point clouds for the drive-through data will also be presented.
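One point-to-point ICP iteration (nearest-neighbour matching followed by a closed-form rigid fit) can be sketched as below; the actual variation of ICP used in the paper differs in its matching and weighting details:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: match each source point to its
    nearest target point (brute force), then solve the best rigid
    rotation R and translation t for those pairs via SVD (Kabsch)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    matched = dst[d2.argmin(axis=1)]
    mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[2] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t, R, t
```

In practice the step is repeated until the correspondences stabilize; for a small misalignment with correct matches a single step already recovers the rigid transform.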
Mitton, D; Landry, C; Véron, S; Skalli, W; Lavaste, F; De Guise, J A
2000-03-01
Standard 3D reconstruction of bones using stereoradiography is limited by the number of anatomical landmarks visible in more than one projection. The proposed technique enables the 3D reconstruction of additional landmarks that can be identified in only one of the radiographs. The principle of this method is the deformation of an elastic object that respects stereocorresponding and non-stereocorresponding observations available in different projections. This technique is based on the principle that any non-stereocorresponding point belongs to a line joining the X-ray source and the projection of the point in one view. The aim is to determine the 3D position of these points on their line of projection when submitted to geometrical and topological constraints. This technique is used to obtain the 3D geometry of 18 cadaveric upper cervical vertebrae. The reconstructed geometry obtained is compared with direct measurements using a magnetic digitiser. The order of precision determined with the point-to-surface distance between the reconstruction obtained with that technique and reference measurements is about 1 mm, depending on the vertebrae studied. Comparison results indicate that the obtained reconstruction is close to the actual vertebral geometry. This method can therefore be proposed to obtain the 3D geometry of vertebrae.
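The non-stereocorresponding constraint above states that the unknown landmark lies on the line joining the X-ray source and its 2D projection. A minimal helper that enforces this by picking the point on that line closest to a prior 3D estimate (a simple stand-in for the paper's geometric and topological constraints):

```python
import numpy as np

def point_on_ray(source, direction, target):
    """Return the point on the line through `source` with direction
    `direction` that is closest to `target` -- i.e. a 3D position
    consistent with the single-view projection constraint."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    lam = np.dot(np.asarray(target, dtype=float) - source, d)
    return source + lam * d
```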
3-D Printers Spread from Engineering Departments to Designs across Disciplines
ERIC Educational Resources Information Center
Chen, Angela
2012-01-01
The ability to print a 3-D object may sound like science fiction, but it has been around in some form since the 1980s. Also called rapid prototyping or additive manufacturing, the idea is to take a design from a computer file and forge it into an object, often in flat cross-sections that can be assembled into a larger whole. While the printer on…
3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models
NASA Astrophysics Data System (ADS)
Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.
2013-07-01
Cultural heritage managers in general and information users in particular are not usually accustomed to dealing with high-technology hardware and software. On the contrary, information providers of metric surveys most of the time apply the latest developments in real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models to bridge the gap between information users and information providers as regards the management of information which users and providers share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to handle, manage and easily create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of real documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of the new user-friendly software for managing virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.
Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery
NASA Astrophysics Data System (ADS)
Zhang, Ming
Because of the low-expense, high-efficiency image collection process and the rich 3D and texture information presented in the images, the combined use of 2D airborne nadir and oblique images to reconstruct a 3D geometric scene has a promising market for future commercial use, such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. Then the point clouds are refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix presents the parameters describing the translation, rotation and scale requirements. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of the test data because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups could be combined with each other to build a
Automatic pole-like object modeling via 3D part-based analysis of point cloud
NASA Astrophysics Data System (ADS)
He, Liu; Yang, Haoxiang; Huang, Yuchun
2016-10-01
Pole-like objects, including trees, lampposts and traffic signs, are an indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point clouds of roadside urban areas have become widely applied in 3D digital city modeling. Based on the observation that different pole-like objects have various canopy parts but similar trunk parts, this paper proposes a 3D part-based shape analysis to robustly extract, identify and model pole-like objects. The proposed method comprises 3D clustering and recognition of trunks, voxel growing, and part-based 3D modeling. After preprocessing, a trunk center is identified as a point that has a local density peak and the largest minimum inter-cluster distance. Starting from the trunk centers, the remaining points are iteratively assigned to the same cluster as their nearest higher-density point. To eliminate noisy points, cluster borders are refined by trimming boundary outliers. Candidate trunks are then extracted from the clustering results by shape analysis in three orthogonal planes. Voxel growing recovers the complete pole-like objects regardless of overlap. Finally, the trunk, branch and crown parts are analyzed to obtain seven feature parameters, which are used to model the three parts respectively and assemble a single part-based 3D model. The proposed method is tested on a VLS point cloud of Wuhan University, China, which includes many kinds of trees, lampposts and other pole-like posts under different occlusions and overlaps. Experimental results show that the proposed method can extract the relevant attributes and model the roadside pole-like objects efficiently.
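The trunk-centre detection described above follows the density-peak idea: a centre combines a high local density with a large distance to any denser point, and the remaining points join the cluster of their nearest higher-density neighbour. A minimal sketch of that step, with illustrative parameters (`radius`, `n_centers`) that are our own assumptions rather than the paper's:

```python
import numpy as np

def density_peak_clusters(points, radius, n_centers):
    """Toy density-peak clustering: centers combine a local density peak
    with a large minimum distance to any denser point; remaining points
    join the cluster of their nearest higher-density neighbour."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    rho = (d < radius).sum(axis=1) - 1              # neighbour count
    n = len(points)
    delta = np.empty(n)
    for i in range(n):
        denser = np.where(rho > rho[i])[0]
        delta[i] = d[i, denser].min() if len(denser) else d[i].max()
    centers = np.argsort(rho * delta)[::-1][:n_centers]
    label = -np.ones(n, dtype=int)
    label[centers] = np.arange(n_centers)
    for i in np.argsort(rho)[::-1]:                 # densest first
        if label[i] == -1:
            denser = np.where(rho > rho[i])[0]
            if len(denser):
                j = denser[np.argmin(d[i, denser])]
            else:                                   # isolated density tie
                j = centers[np.argmin(d[i, centers])]
            label[i] = label[j]
    return centers, label
```

This is only the clustering core; the paper's trimming of boundary outliers and the subsequent shape analysis are omitted.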
NASA Astrophysics Data System (ADS)
Wang, Yu-Feng; Xu, Qiang; Cheng, Qian-Gong; Li, Yan; Luo, Zhong-Xu
2016-11-01
To understand the propagation and deposition behaviour of a granular avalanche along 3D complex basal terrain, a new 3D experimental platform at 1/400 scale was developed according to the natural terrain of the Xiejiadianzi rock avalanche, and a series of laboratory experiments was conducted. From these tests, parameters including the morphological evolution of the sliding mass, run-outs and velocities of surficial particles, thickness contour and centre of the final deposit, equivalent frictional coefficient, and energy dissipation are documented and analysed, and the effects of geomorphic control, material grain size, drop angle, and drop distance on rock avalanche mobility are discussed. The study leads to several conclusions towards a better understanding of rock avalanches along 3D complex basal topography. (1) For the granular avalanches tested here, great differences between the evolution of the debris along the right and left branch valleys were observed, demonstrating an obvious geomorphic control effect on avalanche mobility. Other interesting features, including a groove-like trough and superelevation, were also observed under the control of topographic interference. (2) The equivalent frictional coefficients of the granular avalanches tested here range from 0.48 to 0.57, lower than values obtained with a set-up composed of an inclined chute and a horizontal plate, and higher than those obtained with an inclined chute alone. The higher the drop angle and the fine-particle content, the higher the equivalent frictional coefficient; the effect of drop distance on avalanche mobility is minor. (3) For a granular avalanche, momentum transfer plays an important role in the motion of the mass: it can greatly accelerate the front part by delivering the kinetic energy of the rear part to the front.
Feature relevance assessment for the semantic interpretation of 3D point cloud data
NASA Astrophysics Data System (ADS)
Weinmann, M.; Jutzi, B.; Mallet, C.
2013-10-01
The automatic analysis of large 3D point clouds represents a crucial task in photogrammetry, remote sensing and computer vision. In this paper, we propose a new methodology for the semantic interpretation of such point clouds which involves feature relevance assessment in order to reduce both processing time and memory consumption. Given a standard benchmark dataset with 1.3 million 3D points, we first extract a set of 21 geometric 3D and 2D features. Subsequently, we apply a classifier-independent ranking procedure which involves a general relevance metric in order to derive compact and robust subsets of versatile features which are generally applicable for a large variety of subsequent tasks. This metric is based on 7 different feature selection strategies and thus addresses different intrinsic properties of the given data. For the example of semantically interpreting 3D point cloud data, we demonstrate the great potential of smaller subsets consisting of only the most relevant features with 4 different state-of-the-art classifiers. The results reveal that, instead of including as many features as possible in order to compensate for a lack of knowledge, a crucial task such as scene interpretation can be carried out with only a few versatile features and even improved accuracy.
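The abstract does not list the 21 features, but in pipelines of this kind several of them are typically derived from the eigenvalues of the local 3D structure tensor. A hedged sketch of that common eigenvalue-based subset (feature names follow the usual convention, not necessarily the paper's):

```python
import numpy as np

def eigen_features(neighbors):
    """Covariance-based 3D shape features of a local point neighborhood:
    linearity, planarity and sphericity from the sorted eigenvalues of
    the 3x3 structure tensor."""
    c = np.cov(neighbors.T)                       # 3x3 covariance matrix
    ev = np.sort(np.linalg.eigvalsh(c))[::-1]     # l1 >= l2 >= l3
    l1, l2, l3 = ev / ev.sum()                    # normalized eigenvalues
    return {
        "linearity":  (l1 - l2) / l1,
        "planarity":  (l2 - l3) / l1,
        "sphericity": l3 / l1,
    }
```

A neighborhood sampled along a line scores high linearity; a flat patch scores high planarity; a volumetric blob scores high sphericity.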
NASA Astrophysics Data System (ADS)
Dahlke, D.; Linkiewicz, M.
2016-06-01
This paper compares two generic approaches for the reconstruction of buildings. Synthesized and real oblique and vertical aerial imagery is transformed on the one hand into a dense photogrammetric 3D point cloud and on the other hand into photogrammetric 2.5D surface models depicting a scene from different cardinal directions. One approach evaluates the 3D point cloud statistically in order to extract the hull of structures, while the other makes use of salient line segments in the 2.5D surface models to recover the hull of 3D structures. Analyzing orders of magnitude more 3D points, the point-cloud-based approach is an order of magnitude more accurate on the synthetic dataset than the lower-dimensional, but therefore orders of magnitude faster, image-processing-based approach. For real-world data the difference in accuracy between the two approaches is no longer significant. In both cases the reconstructed polyhedra supply information about their inherent semantics and can be used for subsequent and more differentiated semantic annotation through exploitation of texture information.
Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval
NASA Astrophysics Data System (ADS)
Chen, Yi-Chen; Lin, Chao-Hung
2016-06-01
With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. For efficient retrieval, the models in a database are generally encoded compactly by a shape descriptor; however, most geometric descriptors in related work apply to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by a Light Detection and Ranging (LiDAR) system, owing to its efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds of input queries and the 3D models in the database. The main goal of the encoding is that database models and input point clouds are encoded consistently. First, top-view depth images of buildings are generated to represent the geometry of the building roof surface. Second, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors are extracted as spatial histograms and used in the 3D model retrieval system. For retrieval, models are matched via the encoding coefficients of the point clouds and building models. In the experiments, a database of about 900,000 3D models collected from the Internet is used to evaluate retrieval. The results of the proposed method show a clear superiority
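The first encoding step, rasterizing a roof point cloud into a top-view depth image, can be sketched as follows; the grid cell size `cell` and the max-height rule are illustrative assumptions, not details from the paper:

```python
import numpy as np

def topview_depth_image(points, cell=1.0):
    """Rasterize an (N, 3) point cloud into a top-view depth (height)
    image: each pixel keeps the maximum z of the points falling into it;
    empty pixels stay NaN."""
    xy = points[:, :2]
    mn = xy.min(axis=0)
    ij = np.floor((xy - mn) / cell).astype(int)   # per-point cell indices
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    img = np.full((h, w), np.nan)
    for (i, j), z in zip(ij, points[:, 2]):
        r, c = j, i                               # rows along y, cols along x
        if np.isnan(img[r, c]) or z > img[r, c]:
            img[r, c] = z
    return img
```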
Facets : a Cloudcompare Plugin to Extract Geological Planes from Unstructured 3d Point Clouds
NASA Astrophysics Data System (ADS)
Dewez, T. J. B.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J.
2016-06-01
Geological planar facets (stratification, fault, joint…) are key features for unravelling the tectonic history of a rock outcrop or appreciating the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time consuming, requires some degree of censoring (i.e. refusing to measure features judged unimportant at the time), is not always possible for fractures higher up on the outcrop, and is somewhat hazardous. 3D virtual geological outcrops hold the potential to alleviate these issues, but a convenient software environment for efficiently segmenting massive 3D point clouds into individual planar facets was lacking. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/ ) implemented to perform planar facet extraction, calculate dip and dip direction (i.e. azimuth of steepest descent), and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and aggregate them progressively into polygons according to a planarity threshold. The boundaries of the polygons are adjusted around the segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles to third-party GIS software or simply as ASCII comma-separated files. One of the great features of FACETS is the capability to explore not only planar objects but also 3D points with normals using the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare to leverage all its other 3D point cloud manipulation features. A demonstration of the tool is presented to illustrate these different features. While designed for geological applications, FACETS could be more widely applied to any planar
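Once a facet's plane normal is estimated, dip and dip direction (azimuth of steepest descent) follow directly from it. A sketch of that conversion, assuming the usual convention that +y points north and +z up (the plugin's internal conventions may differ):

```python
import math

def dip_and_dip_direction(nx, ny, nz):
    """Dip angle and dip direction (degrees, azimuth clockwise from north)
    of a plane given its normal vector; the normal is flipped upward first."""
    if nz < 0:                               # make the normal point upward
        nx, ny, nz = -nx, -ny, -nz
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    dip = math.degrees(math.acos(nz))        # 0 = horizontal plane
    dip_dir = math.degrees(math.atan2(nx, ny)) % 360.0
    return dip, dip_dir
```

The steepest-descent direction of the plane projects horizontally along (nx, ny), which is why the azimuth is `atan2(nx, ny)`.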
Critical Point Cancellation in 3D Vector Fields: Robustness and Discussion.
Skraba, Primoz; Rosen, Paul; Wang, Bei; Chen, Guoning; Bhatia, Harsh; Pascucci, Valerio
2016-02-29
Vector field topology has been successfully applied to represent the structure of steady vector fields. Critical points, one of the essential components of vector field topology, play an important role in describing the complexity of the extracted structure. Simplifying vector fields via critical point cancellation has practical merit for interpreting the behaviors of complex vector fields such as turbulence. However, there is no effective technique that allows direct cancellation of critical points in 3D. This work fills this gap and introduces the first framework to directly cancel pairs or groups of 3D critical points in a hierarchical manner with a guaranteed minimum amount of perturbation based on their robustness, a quantitative measure of their stability. In addition, our framework does not require the extraction of the entire 3D topology, which contains non-trivial separation structures, and thus is computationally effective. Furthermore, our algorithm can remove critical points in any subregion of the domain whose degree is zero and handle complex boundary configurations, making it capable of addressing challenging scenarios that may not be resolved otherwise. We apply our method to synthetic and simulation datasets to demonstrate its effectiveness.
Phase-Scrambler Plate Spreads Point Image
NASA Technical Reports Server (NTRS)
Edwards, Oliver J.; Arild, Tor
1992-01-01
Array of small prisms retrofit to imaging lens. Phase-scrambler plate essentially planar array of small prisms partitioning aperture of lens into many subapertures, and prism at each subaperture designed to divert relatively large diffraction spot formed by that subaperture to different, specific point on focal plane.
Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data
NASA Astrophysics Data System (ADS)
Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert
2014-05-01
A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome, formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 shells of Crassostrea gryphoides, up to 80 cm long, cover a 400 m2 area. Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g. tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as an extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well-known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase-shift measuring principle, which provides an accurate geometric basis (< 3 mm). However, the situation is difficult in this multiple-object scenario, where more than 15,000 complete or fragmentary shells with random orientations are found. The goal
Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud
Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae
2014-01-01
A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204
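The voxel quantization and lowermost-heightmap reduction described above can be sketched as follows; the voxel size is an illustrative parameter, and the dict-based heightmap is our own simplification of the paper's data structure:

```python
import numpy as np

def lowermost_heightmap(points, voxel=0.5):
    """Quantize points to voxels, drop duplicate voxels, and keep for each
    (x, y) column only the lowest voxel index: the 2D lowermost heightmap."""
    vox = np.unique(np.floor(points / voxel).astype(int), axis=0)
    hmap = {}
    for x, y, z in vox:
        if (x, y) not in hmap or z < hmap[(x, y)]:
            hmap[(x, y)] = z
    return hmap
```

Ground candidates would then be the columns whose lowest voxel lies near the local minimum height; that thresholding step is omitted here.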
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering, they have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band; the Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using well-established octree scanning. Results show that the proposed solution performs comparably to the current state of the art, in many cases outperforming it, while being much more computationally efficient. We believe this work represents the state of the art in intra-frame compression of point clouds for real-time 3D video.
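The core of such a region-adaptive Haar-like transform is a weight-adaptive merge of two occupied octree nodes. In the sketch below, `c1` and `c2` are the low-pass coefficients carried up from the previous level (a leaf's coefficient is its raw color with weight 1), and the 2x2 rotation is orthonormal, so signal energy is preserved; this is a hedged illustration of the idea, not the paper's full codec:

```python
import math

def raht_merge(c1, w1, c2, w2):
    """One merge step of a region-adaptive Haar-like transform: two occupied
    nodes with low-pass coefficients c1, c2 and point-count weights w1, w2
    produce a low-pass (DC) and a high-pass (AC) coefficient."""
    a, b = math.sqrt(w1), math.sqrt(w2)
    n = math.sqrt(w1 + w2)
    dc = (a * c1 + b * c2) / n       # weighted low-pass coefficient
    ac = (-b * c1 + a * c2) / n      # high-pass (detail) coefficient
    return dc, ac, w1 + w2
```

With equal weights this reduces to the ordinary Haar butterfly, and a region of constant color produces a zero AC coefficient, which is what makes the coefficients cheap to entropy-code.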
Evaluation Model for Pavement Surface Distress on 3d Point Clouds from Mobile Mapping System
NASA Astrophysics Data System (ADS)
Aoki, K.; Yamamoto, K.; Shimamura, H.
2012-07-01
This paper proposes a methodology to evaluate pavement surface distress for maintenance planning of road pavement using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning of road pavement requires scheduled rehabilitation of damaged pavement sections to keep a high level of service, and the importance of such performance-based infrastructure asset management built on actual inspection data is globally recognized. Semi-automatic measurement systems for inspecting the road pavement surface, in which inspection vehicles measure surface deterioration indexes such as cracking, rutting and IRI, have already been introduced and can continuously archive pavement performance data. However, scheduled inspection using automatic measurement vehicles is costly, depending on the instruments' specification and the inspection interval, so implementing road maintenance work, especially for local governments, is difficult from a cost-effectiveness standpoint. Against this background, this research proposes methodologies for a simplified evaluation of the pavement surface and an assessment of damaged pavement sections, using the 3D point cloud data collected to build urban 3D models. The simplified evaluation of the road surface provides useful information for road administrators to identify pavement sections requiring detailed examination or immediate repair work. In particular, the regularity of the 3D point cloud sequence was evaluated with Chow-test and F-test models, extracting the sections where a remarkable structural change in the coordinate values occurred. Finally, the validity of the methodology was investigated in a case study using actual inspection data from local roads.
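The structural-change idea can be illustrated with a standard Chow statistic on a simple linear regression of a 1D profile; the variables and the split index below are illustrative, not the paper's exact formulation:

```python
import numpy as np

def chow_statistic(x, y, split):
    """Chow-test F statistic for a structural break at index `split` in the
    simple regression y = a + b*x (k = 2 parameters per fit)."""
    def ssr(xs, ys):
        # sum of squared residuals of an ordinary least-squares line fit
        A = np.column_stack([np.ones_like(xs), xs])
        beta, *_ = np.linalg.lstsq(A, ys, rcond=None)
        r = ys - A @ beta
        return float(r @ r)
    s_pooled = ssr(x, y)
    s1, s2 = ssr(x[:split], y[:split]), ssr(x[split:], y[split:])
    k, n = 2, len(x)
    return ((s_pooled - s1 - s2) / k) / ((s1 + s2) / (n - 2 * k))
```

A large F value indicates that two separate fits explain the profile far better than one pooled fit, i.e. a structural change near the split.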
Quality of 3d Point Clouds from Highly Overlapping Uav Imagery
NASA Astrophysics Data System (ADS)
Haala, N.; Cramer, M.; Rothermel, M.
2013-08-01
UAVs are becoming standard platforms for photogrammetric data capture, especially for large-scale aerial mapping of areas of limited extent. Such applications particularly benefit from the very reasonable price of a small, light UAS including control system and standard consumer-grade digital camera, which is some orders of magnitude lower than that of digital photogrammetric systems. Within the paper, the capability of UAV-based data collection is evaluated for two different consumer camera systems and compared to an aerial survey with a state-of-the-art digital airborne camera system. During this evaluation, the quality of 3D point clouds generated by dense multiple image matching is used as a benchmark. Due to recent software developments, such point clouds can be generated at a resolution similar to the ground sampling distance of the available imagery and are used for an increasing number of applications. Usually, image matching benefits from the good image quality provided by digital airborne camera systems, which is frequently not available from the low-cost sensor components used for UAV image collection. Within the paper, an investigation of UAV-based 3D data capture is presented. For this purpose, dense 3D point clouds are generated for a test area from three different platforms: first a UAV with a lightweight compact camera, second a system using a system camera, and finally a medium-format airborne digital camera system. Despite the considerable differences in system costs, suitable results can be derived from all data, especially if large redundancy is available: highly overlapping image blocks are not only beneficial during georeferencing, but are especially advantageous when aiming at a dense and accurate image-based 3D surface reconstruction.
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that computes the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationship between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the error propagation from the primitive input errors through the stereo system and the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
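The midpoint method at the heart of this analysis reconstructs the 3D point as the midpoint of the common perpendicular of the two viewing rays. A self-contained sketch (ray parameterization and names are our own):

```python
import numpy as np

def midpoint_triangulate(o1, d1, o2, d2):
    """Midpoint method: the 3D point closest to two non-parallel viewing
    rays o + t*d, i.e. the midpoint of their common perpendicular segment."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = o2 - o1
    c = d1 @ d2                       # cosine of the angle between rays
    denom = 1.0 - c * c               # zero only for parallel rays
    t1 = (d1 @ b - c * (d2 @ b)) / denom
    t2 = (c * (d1 @ b) - d2 @ b) / denom
    p1, p2 = o1 + t1 * d1, o2 + t2 * d2
    return 0.5 * (p1 + p2)
```

When the rays actually intersect, both closest points coincide and the midpoint is the intersection itself.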
Lacunarity analysis of raster datasets and 1D, 2D, and 3D point patterns
NASA Astrophysics Data System (ADS)
Dong, Pinliang
2009-10-01
Spatial scale plays an important role in many fields. As a scale-dependent measure for spatial heterogeneity, lacunarity describes the distribution of gaps within a set at multiple scales. In Earth science, environmental science, and ecology, lacunarity has been increasingly used for multiscale modeling of spatial patterns. This paper presents the development and implementation of a geographic information system (GIS) software extension for lacunarity analysis of raster datasets and 1D, 2D, and 3D point patterns. Depending on the application requirement, lacunarity analysis can be performed in two modes: global mode or local mode. The extension works for: (1) binary (1-bit) and grey-scale datasets in any raster format supported by ArcGIS and (2) 1D, 2D, and 3D point datasets as shapefiles or geodatabase feature classes. For more effective measurement of lacunarity for different patterns or processes in raster datasets, the extension allows users to define an area of interest (AOI) in four different ways, including using a polygon in an existing feature layer. Additionally, directionality can be taken into account when grey-scale datasets are used for local lacunarity analysis. The methodology and graphical user interface (GUI) are described. The application of the extension is demonstrated using both simulated and real datasets, including Brodatz texture images, a Spaceborne Imaging Radar (SIR-C) image, simulated 1D points on a drainage network, and 3D random and clustered point patterns. The options of lacunarity analysis and the effects of polyline arrangement on lacunarity of 1D points are also discussed. Results from sample data suggest that the lacunarity analysis extension can be used for efficient modeling of spatial patterns at multiple scales.
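The gliding-box lacunarity measure on a binary raster, the core quantity discussed above, can be sketched as follows (the exact normalization used by the GIS extension may differ; this is the common second-moment-over-squared-mean definition):

```python
import numpy as np

def lacunarity(binary, r):
    """Gliding-box lacunarity of a 2D binary array at box size r: the
    ratio of the second moment to the squared first moment of the box
    masses over all overlapping r x r windows."""
    h, w = binary.shape
    masses = [
        binary[i:i + r, j:j + r].sum()
        for i in range(h - r + 1)
        for j in range(w - r + 1)
    ]
    m = np.asarray(masses, dtype=float)
    return float((m ** 2).mean() / m.mean() ** 2)
```

A homogeneous pattern gives lacunarity 1 at every scale; the gappier (more clustered) the pattern, the larger the value.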
Biview Learning for Human Posture Segmentation from 3D Points Cloud
Qiao, Maoying; Cheng, Jun; Bian, Wei; Tao, Dacheng
2014-01-01
Posture segmentation plays an essential role in human motion analysis. The state-of-the-art method extracts sufficiently high-dimensional features from 3D depth images for each 3D point and learns an efficient body part classifier. However, high-dimensional features are memory-consuming and difficult to handle on large-scale training dataset. In this paper, we propose an efficient two-stage dimension reduction scheme, termed biview learning, to encode two independent views which are depth-difference features (DDF) and relative position features (RPF). Biview learning explores the complementary property of DDF and RPF, and uses two stages to learn a compact yet comprehensive low-dimensional feature space for posture segmentation. In the first stage, discriminative locality alignment (DLA) is applied to the high-dimensional DDF to learn a discriminative low-dimensional representation. In the second stage, canonical correlation analysis (CCA) is used to explore the complementary property of RPF and the dimensionality reduced DDF. Finally, we train a support vector machine (SVM) over the output of CCA. We carefully validate the effectiveness of DLA and CCA utilized in the two-stage scheme on our 3D human points cloud dataset. Experimental results show that the proposed biview learning scheme significantly outperforms the state-of-the-art method for human posture segmentation. PMID:24465721
Detectability limitations with 3-D point reconstruction algorithms using digital radiography
Lindgren, Erik
2015-03-31
The estimated impact of pore clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires less inspection time than X-ray computed tomography and in some cases works better with planar geometries. However, the increase in prior assumptions about the object and the defects increases the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper, the uncertainty arising when detecting pore defect clusters with point reconstruction algorithms is quantified using simulations. The simulation model is compared to and mapped to experimental data. The main issue with the uncertainty is the possible masking (zero detectability) of smaller defects around some slightly larger defect. In addition, the uncertainty is explored in connection with the expected effects on component fatigue life and for varying amounts of prior object-defect assumptions.
Non-rigid registration of 3D point clouds under isometric deformation
NASA Astrophysics Data System (ADS)
Ge, Xuming
2016-11-01
An algorithm for pairwise non-rigid registration of 3D point clouds is presented in the specific context of isometric deformations. The critical step is registration of point clouds at different epochs captured from an isometric deformation surface within overlapping regions. Based on characteristics invariant under isometric deformation, a variant of the four-point congruent sets algorithm is applied to generate correspondences between two deformed point clouds, and subsequently a RANSAC framework is used to complete cluster extraction in preparation for global optimal alignment. Examples are presented and the results compared with existing approaches to demonstrate the two main contributions of the technique: a success rate for generating true correspondences of 90% and a root mean square error after final registration of 2-3 mm.
Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2016-06-01
We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction, and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
Grammar-Supported 3d Indoor Reconstruction from Point Clouds for As-Built Bim
NASA Astrophysics Data System (ADS)
Becker, S.; Peter, M.; Fritsch, D.
2015-03-01
The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar which is an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms, as is the case for most office buildings or public buildings like schools, hospitals or hotels. The grammar is designed in such a way that it can be embedded in an iterative automatic learning process providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data, here 3D point clouds, collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. In this way, the knowledge content of the initial grammar is enriched, leading to a grammar of increased quality. This higher-level grammar can then be applied to predict realistic geometries for building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated on a real-world example.
Steady state reconnection at a single 3D magnetic null point
NASA Astrophysics Data System (ADS)
Galsgaard, K.; Pontin, D. I.
2011-05-01
Aims: We systematically stress a rotationally symmetric 3D magnetic null point by advecting the opposite footpoints of the spine axis in opposite directions. This stress eventually concentrates in the vicinity of the null point, thereby forming a local current sheet through which magnetic reconnection takes place. The aim is to look for a steady state evolution of the current sheet dynamics, which may provide scaling relations for various characteristic parameters of the system. Methods: The evolution is followed by solving numerically the non-ideal MHD equations in a Cartesian domain. The null point is embedded in an initially constant density and temperature plasma. Results: It is shown that a quasi-steady reconnection process can be set up at a 3D null by continuous shear driving. It appears that a true steady state is unlikely to be realised because the current layer tends to grow until it is restricted by the geometry of the computational domain and the imposed driving profile. However, ratios between characteristic quantities clearly settle after some time to stable values, so that the evolution is quasi-steady. The experiments show a number of scaling relations, but they do not provide a clear consensus for extending to lower magnetic resistivity or faster driving velocities. More investigations are needed to fully clarify the properties of current sheets at magnetic null points.
Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors
Ge, Song; Fan, Guoliang
2015-01-01
We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673
NASA Astrophysics Data System (ADS)
Stumpf, André; Malet, Jean-Philippe; Allemand, Pascal; Skupinski, Grzegorz; Pierrot-Deseilligny, Marc
2013-04-01
Multi-view stereo surface reconstruction from dense terrestrial photographs is being increasingly applied for geoscience applications such as quantitative geomorphology, and a number of different software solutions and processing pipelines have been suggested. For image matching, camera self-calibration and bundle block adjustment, most approaches make use of the scale-invariant feature transform (SIFT) to identify homologous points in multiple images. SIFT-like point matching is robust to apparent translation, rotation, and scaling of objects in multiple viewing geometries, but the number of correctly identified matching points typically declines drastically with increasing angles between the viewpoints. For the application of multi-view stereo to complex landslide scenes, the viewing geometry is often constrained by the local topography and by barriers such as rocks and vegetation occluding the target. Under such conditions it is not uncommon to encounter view-angle differences of > 30° that hinder the image matching and eventually prohibit the joint estimation of the camera parameters from all views. Recently, an affine-invariant extension of the SIFT detector (ASIFT) has been demonstrated to provide more robust matches when large view-angle differences become an issue. In this study the ASIFT detector was adopted to detect homologous points in terrestrial photographs preceding 3D reconstruction of different parts (main scarp, toe) of the Super-Sauze landslide (Southern French Alps). 3D surface models for different time periods and different parts of the landslide were derived using the multi-view stereo framework implemented in MicMac (©IGN). The obtained 3D models were compared with reconstructions using the traditional SIFT detector as well as alternative structure-from-motion implementations. An estimate of the absolute accuracy of the photogrammetric models was obtained through co-registration and comparison with high-resolution terrestrial LiDAR scans.
2013-01-01
Background Laserscanning recently has become a powerful and common method for plant parameterization and plant growth observation on nearly every scale range. However, 3D measurements with high accuracy, spatial resolution and speed result in a multitude of points that require processing and analysis. The primary objective of this research has been to establish a reliable and fast technique for high throughput phenotyping using differentiation, segmentation and classification of single plants by a fully automated system. In this report, we introduce a technique for automated classification of point clouds of plants and present the applicability for plant parameterization. Results A surface feature histogram based approach from the field of robotics was adapted to close-up laserscans of plants. Local geometric point features describe class characteristics, which were used to distinguish among different plant organs. This approach has been proven and tested on several plant species. Grapevine stems and leaves were classified with an accuracy of up to 98%. The proposed method was successfully transferred to 3D-laserscans of wheat plants for yield estimation. Wheat ears were separated with an accuracy of 96% from other plant organs. Subsequently, the ear volume was calculated and correlated to the ear weight, the kernel weights and the number of kernels. Furthermore, the impact of the data resolution was evaluated, considering point-to-point distances between 0.3 and 4.0 mm with respect to the classification accuracy. Conclusion We introduced an approach using surface feature histograms for automated plant organ parameterization. Highly reliable classification results of about 96% for the separation of grapevine and wheat organs have been obtained. This approach was found to be independent of the point-to-point distance and applicable to multiple plant species. Its reliability, flexibility and its high order of automation make this method well suited for the demands of
Kieffer, Collin; Ladinsky, Mark S; Ninh, Allen; Galimidi, Rachel P; Bjorkman, Pamela J
2017-01-01
Dissemination of HIV-1 throughout lymphoid tissues leads to systemic virus spread following infection. We combined tissue clearing, 3D-immunofluorescence, and electron tomography (ET) to longitudinally assess early HIV-1 spread in lymphoid tissues in humanized mice. Immunofluorescence revealed peak infection density in gut at 10–12 days post-infection when blood viral loads were low. Human CD4+ T-cells and HIV-1–infected cells localized predominantly to crypts and the lower third of intestinal villi. Free virions and infected cells were not readily detectable by ET at 5-days post-infection, whereas HIV-1–infected cells surrounded by pools of free virions were present in ~10% of intestinal crypts by 10–12 days. ET of spleen revealed thousands of virions released by individual cells and discrete cytoplasmic densities near sites of prolific virus production. These studies highlight the importance of multiscale imaging of HIV-1–infected tissues and are adaptable to other animal models and human patient samples. DOI: http://dx.doi.org/10.7554/eLife.23282.001 PMID:28198699
An adaptive learning approach for 3-D surface reconstruction from point clouds.
Junior, Agostinho de Medeiros Brito; Neto, Adrião Duarte Dória; de Melo, Jorge Dantas; Goncalves, Luiz Marcos Garcia
2008-06-01
In this paper, we propose a multiresolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3-D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map (SOM). Basically, a self-adaptive scheme is used for iteratively moving vertices of an initial simple mesh in the direction of the set of points, ideally the object boundary. Successive refinement and motion of vertices are applied, leading to a more detailed surface, in a multiresolution, iterative scheme. Reconstruction was tested on several point sets with different shapes and sizes. Results show generated meshes very close to the final object shapes. We include measures of performance and discuss robustness.
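The vertex-motion step of such a Kohonen-style scheme can be sketched as below; this winner-only simplification is our assumption and omits the paper's mesh operators and selective refinement rules:

```python
import numpy as np

def som_adapt(vertices, samples, epochs=500, lr=0.2, seed=0):
    """Winner-only Kohonen-style adaptation: draw a target surface point
    and pull the best-matching vertex a fraction lr toward it."""
    rng = np.random.default_rng(seed)
    v = vertices.astype(float).copy()
    for _ in range(epochs):
        p = samples[rng.integers(len(samples))]
        w = int(np.argmin(np.linalg.norm(v - p, axis=1)))  # best-matching unit
        v[w] += lr * (p - v[w])
    return v

def mean_fit_error(vertices, centers):
    """Mean distance from each target center to its nearest vertex."""
    d = np.linalg.norm(centers[:, None, :] - vertices[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# Two point clusters; each vertex should drift onto "its" cluster.
rng = np.random.default_rng(1)
centers = np.array([[1.0, 1.0], [-1.0, -1.0]])
samples = np.vstack([c + rng.uniform(-0.1, 0.1, size=(100, 2))
                     for c in centers])
verts0 = np.array([[0.1, 0.1], [-0.1, -0.1]])
verts1 = som_adapt(verts0, samples)
```

In the full method this update runs interleaved with mesh refinement, so new vertices appear where the surface needs more detail.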
Non-linear tearing of 3D null point current sheets
Wyper, P. F.; Pontin, D. I.
2014-08-15
The manner in which the rate of magnetic reconnection scales with the Lundquist number in realistic three-dimensional (3D) geometries is still an unsolved problem. It has been demonstrated that in 2D rapid non-linear tearing allows the reconnection rate to become almost independent of the Lundquist number (the “plasmoid instability”). Here, we present the first study of an analogous instability in a fully 3D geometry, defined by a magnetic null point. The 3D null current layer is found to be susceptible to an analogous instability but is marginally more stable than an equivalent 2D Sweet-Parker-like layer. Tearing of the sheet creates a thin boundary layer around the separatrix surface, contained within a flux envelope with a hyperbolic structure that mimics a spine-fan topology. Efficient mixing of flux between the two topological domains occurs as the flux rope structures created during the tearing process evolve within this envelope. This leads to a substantial increase in the rate of reconnection between the two domains.
A new approach for semi-automatic rock mass joints recognition from 3D point clouds
NASA Astrophysics Data System (ADS)
Riquelme, Adrián J.; Abellán, A.; Tomás, R.; Jaboyedoff, M.
2014-07-01
Rock mass characterization requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in Light Detection and Ranging (LiDAR) instrumentation currently allow quick and accurate 3D data acquisition, leading to the development of new methodologies for the automatic characterization of rock mass discontinuities. This paper presents a methodology for the identification and analysis of flat surfaces outcropping in a rocky slope using the 3D data obtained with LiDAR. This method identifies and defines the algebraic equations of the different planes of the rock slope surface by applying an analysis based on a neighbouring-points coplanarity test, finding principal orientations by Kernel Density Estimation and identifying clusters with the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm. Different sources of information - synthetic and 3D scanned data - were employed, performing a complete sensitivity analysis of the parameters in order to identify the optimal values of the variables of the proposed method. In addition, raw source files and obtained results are freely provided in order to allow a more straightforward method comparison, aiming at more reproducible research.
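A neighbouring-points coplanarity test can be illustrated with a singular-value criterion; the tolerance and the exact test statistic here are our assumptions, not the paper's:

```python
import numpy as np

def is_coplanar(neighborhood, tol=0.02):
    """Coplanarity test for a point's neighborhood: the smallest singular
    value of the centered points measures out-of-plane spread; compare it
    to the largest (in-plane) spread."""
    s = np.linalg.svd(neighborhood - neighborhood.mean(axis=0),
                      compute_uv=False)
    return bool(s[2] / (s[0] + 1e-12) < tol)

rng = np.random.default_rng(0)
flat = np.c_[rng.uniform(size=(50, 2)), np.zeros(50)]   # patch on z = 0
blob = rng.uniform(size=(50, 3))                        # volumetric cluster
flat_ok = is_coplanar(flat)
blob_ok = is_coplanar(blob)
```

Points passing the test would then feed the orientation-density and clustering stages.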
Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration.
Yang, Jiaolong; Li, Hongdong; Campbell, Dylan; Jia, Yunde
2016-11-01
The Iterative Closest Point (ICP) algorithm is one of the most widely used methods for point-set registration. However, being based on local iterative optimization, ICP is known to be susceptible to local minima. Its performance critically relies on the quality of the initialization and only local optimality is guaranteed. This paper presents the first globally optimal algorithm, named Go-ICP, for Euclidean (rigid) registration of two 3D point-sets under the L2 error metric defined in ICP. The Go-ICP method is based on a branch-and-bound scheme that searches the entire 3D motion space SE(3). By exploiting the special structure of SE(3) geometry, we derive novel upper and lower bounds for the registration error function. Local ICP is integrated into the BnB scheme, which speeds up the new method while guaranteeing global optimality. We also discuss extensions, addressing the issue of outlier robustness. The evaluation demonstrates that the proposed method is able to produce reliable registration results regardless of the initialization. Go-ICP can be applied in scenarios where an optimal solution is desirable or where a good initialization is not always available.
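The local solver that Go-ICP nests inside branch-and-bound is standard ICP with a closed-form rigid update; a minimal sketch with brute-force nearest neighbors and no outlier handling (the helper names are ours):

```python
import numpy as np

def icp(src, dst, iters=30):
    """Plain local ICP under the L2 metric: alternate nearest-neighbor
    matching with a closed-form (Kabsch/SVD) rigid update."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # correspondence step: nearest dst point for each source point
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # alignment step: closed-form rigid fit between cur and matched
        mu_c, mu_m = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_c).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        Ri = Vt.T @ S @ U.T             # reflection-safe rotation
        ti = mu_m - Ri @ mu_c
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti      # accumulate the total transform
    return R, t

def nn_sq_error(a, b):
    """Mean squared distance from each point of a to its nearest in b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).mean())

# Recover a small known rotation about z plus a small translation.
rng = np.random.default_rng(0)
src = rng.uniform(size=(60, 3))
th = 0.1
R0 = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
dst = src @ R0.T + np.array([0.05, 0.0, 0.0])
R, t = icp(src, dst)
err_before = nn_sq_error(src, dst)
err_after = nn_sq_error(src @ R.T + t, dst)
```

This local solver only decreases the residual monotonically; Go-ICP's contribution is the branch-and-bound search over SE(3) that certifies the global optimum.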
PointCloudExplore 2: Visual exploration of 3D gene expression
Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V. E.; Fowlkes, Charless C.; Luengo Hendriks, Cris L.; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Malik, Jitendra; Knowles, David W.; Hamann, Bernd
2008-03-31
To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view, dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has shown itself to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, (iii) the basic architecture of PCX2, and (iv) illustrate the usefulness of PCX2 using selected examples.
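The selection algebra described above amounts to set operations over cell indices; a toy sketch (the index values are made up):

```python
# Cell selections stored as index sets; complex queries combine brushes
# with logical operations, mirroring AND / OR / NOT semantics.
view_a = {1, 2, 3, 5, 8}      # cells brushed in a physical view
view_b = {2, 3, 13, 21}       # cells brushed in an abstract view
both = view_a & view_b        # AND: selected in both views
either = view_a | view_b      # OR: selected in either view
excluded = view_a - view_b    # AND NOT: in view_a but not view_b
```

Storing every brush as an index set is what lets a selection made in one view be re-highlighted in any other.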
On concise 3-D simple point characterizations: a marching cubes paradigm.
Huang, Adam; Liu, Hon-Man; Lee, Chung-Wei; Yang, Chung-Yi; Tsang, Yuk-Ming
2009-01-01
The centerlines of tubular structures are useful for medical image visualization and computer-aided diagnosis applications. They can be effectively extracted by using a thinning algorithm that erodes an object layer by layer until only a skeleton is left. An object point is "simple" and can be safely deleted only if the resultant image is topologically equivalent to the original. Numerous characterizations of 3-D simple points based on digital topology already exist. However, little work has been done in the context of marching cubes (MC). This paper reviews several concise 3-D simple point characterizations in a MC paradigm. By using the Euler characteristic and a few newly observed properties in the context of connectivity-consistent MC, we present concise and more self-explanatory proofs. We also present an efficient method for computing the Euler characteristic locally for MC surfaces. Performance evaluations on different implementations are conducted on synthetic data and multidetector computed tomography examination of virtual colonoscopy and angiography.
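The Euler characteristic used in such proofs is the invariant chi = V - E + F; a point is deletable only if removing it leaves chi (and connectivity) unchanged. A minimal sketch on a closed mesh (the tetrahedron example is ours):

```python
def euler_characteristic(n_vertices, edges, faces):
    """Euler characteristic chi = V - E + F of a surface mesh; chi = 2
    for a closed surface topologically equivalent to a sphere."""
    return n_vertices - len(edges) + len(faces)

# Tetrahedron: 4 vertices, 6 edges, 4 triangular faces -> chi = 2.
tet_faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
tet_edges = {tuple(sorted((f[i], f[j])))
             for f in tet_faces for i, j in ((0, 1), (0, 2), (1, 2))}
chi = euler_characteristic(4, tet_edges, tet_faces)
```

The paper's contribution is computing this quantity locally on MC surface patches instead of globally.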
Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds
NASA Astrophysics Data System (ADS)
Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.
2016-04-01
A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimate of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and
3D registration method based on scattered point cloud from B-model ultrasound image
NASA Astrophysics Data System (ADS)
Hu, Lei; Xu, Xiaojun; Wang, Lifeng; Guo, Na; Xie, Feng
2017-01-01
The paper proposes a registration method for the 3D point cloud of the bone tissue surface extracted from B-mode ultrasound images and the CT model. B-mode ultrasound is used to acquire two-dimensional images of the femur tissue. A binocular stereo vision tracker is used to obtain the spatial position and orientation of the optical positioning device fixed on the ultrasound probe. Combining the two kinds of data generates a 3D point cloud of the bone tissue surface. The pixel coordinates of the bone surface are automatically obtained from the ultrasound image using an improved local phase symmetry (PS) method. The mapping between pixel coordinates on the ultrasound image and 3D space is obtained through a series of calibration methods. In order to assess the registration, six markers are implanted in a complete fresh pig femur. The actual coordinates of the markers are measured with two methods. The first is to measure the coordinates with measuring tools under a common coordinate system. The second is to measure the coordinates of the markers in the CT model registered with the 3D point cloud using the ICP registration algorithm under the same coordinate system. Ten registration experiments are carried out in the same way. Error results are obtained by comparing the two sets of marker coordinates obtained by the two different methods. The results show a minimum error of 1.34 mm, a maximum error of 3.22 mm, and an average error of 2.52 mm; the ICP registration algorithm gives an average error of 0.89 mm and a standard deviation of 0.62 mm. This evaluation standard of registration accuracy differs from the average error obtained by the ICP registration algorithm alone, and intuitively shows the error introduced by the operation of clinical doctors. With reference to the accuracy requirements of different operations in orthopedics, the method can be applied to bone reduction and anterior cruciate ligament surgery.
PointCloudXplore: a visualization tool for 3D gene expressiondata
Rubel, Oliver; Weber, Gunther H.; Keranen, Soile V. E.; Fowlkes, Charless C.; Luengo Hendriks, Cristian L.; Simirenko, Lisa; Shah, Nameeta Y.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Sudar, Damir J.; Malik, Jitendra; Knowles, David W.; Hamann, Bernd
2006-10-01
The Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods that support quantitative, computational analysis of three-dimensional (3D) gene expression patterns with cellular resolution in early Drosophila embryos, aiming at a more in-depth understanding of gene regulatory networks. We describe a new tool, called PointCloudXplore (PCX), that supports effective 3D gene expression data exploration. PCX is a visualization tool that uses the established visualization techniques of multiple views, brushing, and linking to support the analysis of high-dimensional datasets that describe many genes' expression. Each of the views in PointCloudXplore shows a different gene expression data property. Brushing is used to select and emphasize data associated with defined subsets of embryo cells within a view. Linking is used to show in additional views the expression data for a group of cells that have first been highlighted as a brush in a single view, allowing further data subset properties to be determined. In PCX, physical views of the data are linked to abstract data displays such as parallel coordinates. Physical views show the spatial relationships between different genes' expression patterns within an embryo. Abstract gene expression data displays on the other hand allow for an analysis of relationships between different genes directly in the gene expression space. We discuss parallel coordinates as one example of an abstract data view currently available in PCX. We have developed several extensions to standard parallel coordinates to facilitate brushing and the visualization of 3D gene expression data.
A successive three-point scheme for fast ray tracing in complex 3D geological models
NASA Astrophysics Data System (ADS)
Li, F.; Xu, T.; Zhang, M.; Zhang, Z.
2013-12-01
We present a new 3D ray-tracing method that can be applied to computations of traveltimes and ray paths of seismic transmitted, reflected and turning waves in complex geologic models, which consist of arbitrarily shaped blocks whose boundaries are matched by triangulated interfaces for computational efficiency. The new ray-tracing scheme combines the segmentally iterative ray tracing (SIRT) method and the pseudo-bending scheme so as to become a robust and fast ray-tracing method for seismic waves. The new method extends our previous work on constant and constant-gradient block models to generally heterogeneous block models, and incorporates triangulated interfaces defining the boundaries of complex geological bodies, so that it becomes applicable to practical problems. A successive three-point perturbation scheme is formulated that iteratively updates the midpoints of a segment based on an initial ray path. The corrections of the midpoints are accomplished by first-order analytic formulae according to the locations of the midpoints inside the blocks or on the boundaries of the blocks, to which the updating formulae of the pseudo-bending method and the SIRT algorithm are applied instead of the traditional iterative methods. Numerical experiments, including an example in the Bohemian Massif, demonstrate that the successive three-point scheme is effective and suitable for kinematic ray tracing in complex 3D heterogeneous media.
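The bending idea underlying such midpoint updates is Fermat's principle: perturb a crossing point so that traveltime is minimized. A toy two-segment example in a two-layer medium, where a ternary search stands in for the paper's first-order analytic formulae (all names here are ours):

```python
import numpy as np

def travel_time(x, src, rec, z_if, v1, v2):
    """Two-segment traveltime through the interface point (x, z_if)."""
    t1 = np.hypot(x - src[0], z_if - src[1]) / v1
    t2 = np.hypot(rec[0] - x, rec[1] - z_if) / v2
    return t1 + t2

def fermat_crossing(src, rec, z_if, v1, v2, lo=-10.0, hi=10.0, n=100):
    """Ternary search for the interface crossing that minimizes
    traveltime (Fermat's principle; traveltime is convex in x)."""
    for _ in range(n):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if travel_time(m1, src, rec, z_if, v1, v2) \
                < travel_time(m2, src, rec, z_if, v1, v2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Equal velocities: the ray is a straight line, crossing z = 1 at x = 1.
x_star = fermat_crossing((0.0, 0.0), (2.0, 2.0), 1.0, 1.0, 1.0)

# Unequal velocities: the crossing should satisfy Snell's law,
# sin(theta1) / sin(theta2) = v1 / v2.
x_snell = fermat_crossing((0.0, 0.0), (2.0, 2.0), 1.0, 1.0, 2.0)
sin1 = x_snell / np.hypot(x_snell, 1.0)
sin2 = (2.0 - x_snell) / np.hypot(2.0 - x_snell, 1.0)
```

The paper replaces this one-dimensional search with closed-form first-order corrections, which is what makes the scheme fast enough for complex models.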
3D Modeling of Building Indoor Spaces and Closed Doors from Imagery and Point Clouds
Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro
2015-01-01
3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful either for knowing the environment structure, to perform an efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of the automation of the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth with door candidate detection. The presented approach is tested in real data sets showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723
Unconventional superconductivity at mesoscopic point contacts on the 3D Dirac semimetal Cd3As2
NASA Astrophysics Data System (ADS)
Aggarwal, Leena; Gaurav, Abhishek; Thakur, Gohil S.; Haque, Zeba; Ganguli, Ashok K.; Sheet, Goutam
2016-01-01
Three-dimensional (3D) Dirac semimetals exist close to topological phase boundaries which, in principle, should make it possible to drive them into exotic new phases, such as topological superconductivity, by breaking certain symmetries. A practical realization of this idea has, however, hitherto been lacking. Here we show that the mesoscopic point contacts between pure silver (Ag) and the 3D Dirac semimetal Cd3As2 exhibit unconventional superconductivity with a critical temperature (onset) greater than 6 K whereas neither Cd3As2 nor Ag is a superconductor. A gap amplitude of 6.5 meV is measured spectroscopically in this phase that varies weakly with temperature and survives up to a remarkably high temperature of 13 K, indicating the presence of a robust normal-state pseudogap. The observations indicate the emergence of a new unconventional superconducting phase that exists in a quantum mechanically confined region under a point contact between a Dirac semimetal and a normal metal.
Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection
NASA Astrophysics Data System (ADS)
Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.
2016-06-01
In recent years, indoor modelling and navigation have become topics of research interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind people or wheelchair users, building crisis management such as fire protection, augmented reality for gaming, tourism, or the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of indoor spaces, including the position and geometry of openings (both windows and doors) and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles in the route planning and using these to readapt the routes according to the real state of the indoor space depicted by the laser scanner.
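Obstacle-aware route (re)planning on a grid derived from the point cloud can be sketched with a shortest-path search; the occupancy-grid representation and BFS are our illustrative assumptions, not the paper's algorithm:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected route on an occupancy grid (1 = obstacle);
    a minimal stand-in for obstacle-aware indoor route planning."""
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:        # walk back to the start
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = cur
                q.append(nxt)
    return None  # goal unreachable given the current obstacles

# A detected obstacle in the middle cell forces the route around it.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
route = bfs_path(grid, (0, 0), (2, 2))
```

When the scanner reveals a new obstacle, flipping its cell to 1 and re-running the search yields the readapted route.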
Implicit Shape Models for Object Detection in 3d Point Clouds
NASA Astrophysics Data System (ADS)
Velizhev, A.; Shapovalov, R.; Schindler, K.
2012-07-01
We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor, more robust to point density variation and uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against the state-of-the-art method and achieve significant improvements in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans of 150,000 m² of urban area in total.
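The center-voting step of ISM-style recognition can be sketched as follows; the grid accumulator and the idealized codebook of center offsets are our simplifications, not the authors' implementation:

```python
import numpy as np
from collections import Counter

def vote_object_center(keypoints, offsets, cell=0.5):
    """ISM-style voting: every keypoint casts a vote at keypoint + offset
    for each codebook offset; the densest accumulator cell wins."""
    acc = Counter()
    for kp in keypoints:
        for off in offsets:
            acc[tuple(np.floor((kp + off) / cell).astype(int))] += 1
    winner, _ = acc.most_common(1)[0]
    return (np.asarray(winner) + 0.5) * cell  # cell-center estimate

# Keypoints on one object; an idealized codebook stores center - keypoint,
# so matching votes coincide at the true center while others scatter.
center = np.array([3.1, -1.2, 0.4])
keypoints = np.array([[2.0 * i, 0.0, 0.0] for i in range(5)])
offsets = center - keypoints
estimate = vote_object_center(keypoints, offsets)
```

In the real pipeline the offsets come from descriptor matches against the training set, so the accumulator peak is broader but the principle is the same.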
Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud
NASA Astrophysics Data System (ADS)
Chen, Jianqin; Zhu, Hehua; Li, Xiaojun
2016-10-01
This paper presents a new method for extracting discontinuity orientation automatically from rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of discontinuity plane. The method is first validated by the point cloud of a small piece of a rock slope acquired by photogrammetry. The extracted discontinuity orientations are compared with measured ones in the field. Then it is applied to a publicly available LiDAR data of a road cut rock slope at Rockbench repository. The extracted discontinuity orientations are compared with the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable and of high accuracy, and can meet the engineering needs.
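The RANSAC plane-fitting step (step 3 above) can be sketched as below; the iteration count and inlier tolerance are illustrative assumptions:

```python
import numpy as np

def ransac_plane(pts, iters=300, tol=0.05, seed=0):
    """RANSAC plane fit: hypothesize a plane from 3 random points, count
    inliers within distance tol, keep the best-supported candidate."""
    rng = np.random.default_rng(seed)
    best = (None, 0.0, -1)  # (unit normal, offset, inlier count)
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), size=3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = float(n @ p0)
        count = int(np.sum(np.abs(pts @ n - d) < tol))
        if count > best[2]:
            best = (n, d, count)
    return best

# 100 points on the plane z = 0 plus 20 off-plane outliers.
rng = np.random.default_rng(1)
inliers = np.c_[rng.uniform(size=(100, 2)), np.zeros(100)]
outliers = np.c_[rng.uniform(size=(20, 2)), rng.uniform(0.2, 1.0, 20)]
normal, d, count = ransac_plane(np.vstack([inliers, outliers]))
```

Run per segmented cluster, the recovered unit normal gives the discontinuity orientation directly (dip and dip direction after the coordinate transformation of step 4).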
Analytical expression of a long exposure coronagraphic point spread function
NASA Astrophysics Data System (ADS)
Herscovici-Schiller, Olivier; Mugnier, Laurent M.; Sauvage, Jean-François; Le Duigou, Jean-Michel; Cantalloube, Faustine
2016-07-01
The resolution of coronagraphic high-contrast exoplanet imaging devices such as SPHERE is limited by quasi-static aberrations. These aberrations produce speckles that can be mistaken for planets in the image. In order to design instruments, correct quasi-static aberrations, or analyze data, an expression for the point spread function of a coronagraphic telescope in the presence of residual turbulence is useful. We have derived an analytic formula for this point spread function. We explain its structure physically, validate it by numerical simulations, and show that it is computationally efficient.
3D Printed Molecules and Extended Solid Models for Teaching Symmetry and Point Groups
ERIC Educational Resources Information Center
Scalfani, Vincent F.; Vaid, Thomas P.
2014-01-01
Tangible models help students and researchers visualize chemical structures in three dimensions (3D). 3D printing offers a unique and straightforward approach to fabricate plastic 3D models of molecules and extended solids. In this article, we prepared a series of digital 3D design files of molecular structures that will be useful for teaching…
A multi-resolution fractal additive scheme for blind watermarking of 3D point data
NASA Astrophysics Data System (ADS)
Rahmes, Mark; Wilder, Kathy; Fox, Kevin
2013-05-01
We present a fractal feature space for 3D point watermarking to make geospatial systems more secure. By exploiting the self-similar nature of fractals, hidden information can be spatially embedded in point cloud data with acceptable distortion, as described in this paper. Our method uses a blind scheme, which allows automatic retrieval of the watermark payload without the original cover data. Similar patterns are located and information is encoded in LiDAR point cloud data through a look-up table, or code book. The watermark is then merged into the point cloud data itself, resulting in low distortion. With current advancements in computing technologies, such as GPGPUs, fractal processing is now applicable to the big data present in geospatial as well as other systems. The watermarking technique described in this paper can be important for systems where point data are handled by numerous aerial collectors and analysts, such as a National LiDAR Data Layer.
Commissioning a small-field biological irradiator using point, 2D, and 3D dosimetry techniques
Newton, Joseph; Oldham, Mark; Thomas, Andrew; Li Yifan; Adamovics, John; Kirsch, David G.; Das, Shiva
2011-12-15
Purpose: To commission a small-field biological irradiator, the XRad225Cx from Precision X-Ray, Inc., for research use. The system produces a 225 kVp x-ray beam and is equipped with collimating cones that produce both square and circular radiation fields ranging in size from 1 to 40 mm. This work incorporates point, 2D, and 3D measurements to determine output factors (OF), percent depth dose (PDD), and dose profiles at multiple depths. Methods: Three independent dosimetry systems were used: ion chambers (a Farmer chamber and a micro-ionisation chamber), 2D EBT2 radiochromic film, and a novel 3D dosimetry system (DLOS/PRESAGE®). Reference point dose rates and output factors were determined from in-air ionization chamber measurements for fields down to ~13 mm using the formalism of TG-61. PDDs, profiles, and output factors at three separate depths (0, 0.5, and 2 cm) were determined for all field sizes from EBT2 film measurements in solid water. Several film PDD curves required a scaling correction, reflecting the challenge of accurate film alignment in very small fields. PDDs, profiles, and output factors were also determined with the 3D DLOS/PRESAGE® system, which generated isotropic 0.2 mm data in scan times of 20 min. Results: Surface output factors determined by ion chamber were observed to gradually drop by ~9% when the field size was reduced from 40 to 13 mm. More dramatic drops were observed for the smallest fields as determined by EBT2: ~18% and ~42% for the 2.5 mm and 1 mm fields, respectively. PRESAGE® and film output factors agreed well for fields <20 mm (where 3D data were available), with a mean deviation of 2.2% (range 1%-4%). PDD values at 2 cm depth varied from ~72% for the 40 mm field down to ~55% for the 1 mm field. EBT2 and PRESAGE® PDDs agreed within ~3% in the typical therapy region (1-4 cm). At deeper depths the EBT2 curves were slightly steeper (2.5% at 5 cm
Deriving 3d Point Clouds from Terrestrial Photographs - Comparison of Different Sensors and Software
NASA Astrophysics Data System (ADS)
Niederheiser, Robert; Mokroš, Martin; Lange, Julia; Petschko, Helene; Prasicek, Günther; Oude Elberink, Sander
2016-06-01
Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive, and effective approach to 3D modelling. However, the choice of sensor and software is not straightforward and needs consideration, as it affects the resulting 3D point cloud and its derivatives. We compare five different sensors as well as four different state-of-the-art software packages for a single application: the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes, and price segments. The software packages used are (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors and compared the results of the different software packages regarding ease of workflow, visual appeal, and the similarity and quality of the point clouds. While PhotoScan and Pix4D offer the most user-friendly workflows, they are also "black-box" programmes that give only little insight into their processing; unsatisfying results can only be changed by modifying settings within a module. The combined workflow of Visual SFM, SURE, and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software, as it is less user-friendly; however, it offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points, and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix, and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. Of the five point generation methods evaluated for the tetrahedral reconstructions, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies allows superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
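The maximum likelihood expectation maximization (MLEM) algorithm used above has a compact multiplicative update. The following is a minimal dense-matrix sketch for Poisson data y ~ Poisson(Ax); mesh-based methods such as the one evaluated here change only how the system matrix A is built, and the iteration count is an assumed parameter:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM reconstruction for y ~ Poisson(A @ x).

    A: (m, n) nonnegative system matrix; y: (m,) measured counts.
    Update: x <- x * A^T(y / (A x)) / (A^T 1).
    """
    x = np.ones(A.shape[1])                  # uniform positive initial image
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        ratio = np.where(proj > 0, y / np.maximum(proj, 1e-12), 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

For noiseless, consistent data the iteration converges to the true image, as in the small example below.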
NASA Astrophysics Data System (ADS)
Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai
2008-04-01
Surface reconstruction is an important task in the fields of 3D GIS, computer-aided design and computer graphics (CAD & CG), virtual simulation, and so on. Building on existing incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud under the rules of curvature extremes and a minimum spanning tree. By projecting local sample points onto fitted tangent planes and using the extracted features to guide and constrain local triangulation and surface propagation, the topological relationships among sample points are obtained. For the constructed models, a process of consistent normal adjustment and regularization adjusts the normal of each face so that a correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction while avoiding improper propagation of normals across sharp edges, which greatly improves the applicability of incremental surface reconstruction. Moreover, an appropriate k-neighborhood helps to recognize insufficiently sampled areas and boundary parts, so the presented approach can reconstruct both open and closed surfaces without additional interference.
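A common way to implement the "curvature extremes" feature rule mentioned above is the surface-variation measure from local PCA: the smallest eigenvalue of the neighborhood covariance relative to the eigenvalue sum, which is near zero on flat regions and large on sharp edges. This is a generic sketch (the neighborhood size k is an assumed parameter, and the paper may use a different curvature estimate):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=10):
    """Per-point surface variation l0 / (l0 + l1 + l2) from local PCA.

    High values flag sharp edges and creases, a common proxy for
    curvature-extreme feature points.  k is an assumed neighborhood size.
    """
    tree = cKDTree(points)
    _, nn = tree.query(points, k=k)          # k nearest neighbors per point
    var = np.empty(len(points))
    for i, idx in enumerate(nn):
        q = points[idx] - points[idx].mean(axis=0)
        w = np.linalg.eigvalsh(q.T @ q)      # ascending eigenvalues
        var[i] = w[0] / max(w.sum(), 1e-12)
    return var
```

On a point cloud of two planes meeting at a crease, the measure is ~0 away from the crease and clearly positive on it.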
Recognizing Objects in 3D Point Clouds with Multi-Scale Local Features
Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie
2014-01-01
Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the offline training phase, each model is represented with a set of multi-scale local surface features. During the online recognition phase, a set of keypoints is first detected in each scene. The local surfaces around these keypoints are then encoded with multi-scale feature descriptors. These scene features are matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to state-of-the-art algorithms. Experimental results show that our algorithm is fully automatic, highly effective, and very robust to occlusion and clutter. It achieved the best recognition performance on both datasets, showing its superiority over existing algorithms. PMID:25517694
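The scene-to-model descriptor matching that generates recognition hypotheses is commonly done with a nearest-neighbor distance-ratio test, which is sketched below. This is a generic illustration, not the paper's exact matching rule; the 0.8 ratio threshold is an assumed value:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_features(scene_desc, model_desc, ratio=0.8):
    """Match scene descriptors to model descriptors with the
    nearest-neighbor distance-ratio test.

    Returns indices of matched scene features and their model features.
    A match is kept only if its nearest model descriptor is clearly
    closer than the second nearest (ratio is an assumed threshold).
    """
    d, idx = cKDTree(model_desc).query(scene_desc, k=2)
    keep = d[:, 0] < ratio * d[:, 1]         # unambiguous matches only
    return np.flatnonzero(keep), idx[keep, 0]
```

Ambiguous descriptors (nearly equidistant to two models) are rejected, which reduces false hypotheses before verification.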
Point scanning confocal microscopy facilitates 3D human hair follicle imaging in tissue sections.
Kloepper, Jennifer E; Bíró, Tamás; Paus, Ralf; Cseresnyés, Zoltán
2010-07-01
Efficiency is a key factor in determining whether a scientific method becomes widely accepted in practical applications. In dermatology, morphological characterisation of intact hair follicles by traditional methods can be rather inefficient: samples are embedded, sliced, imaged, and digitally reconstructed, which can be time-consuming. Confocal microscopy, on the other hand, is more efficient and readily applicable to the study of intact hair follicles. Modern confocal microscopes deliver and collect light very efficiently and thus allow high spatial resolution imaging of relatively thick samples. In this letter, we report the successful imaging of entire intact human hair follicles using point scanning confocal microscopy. Light delivery and collection were further improved by preparing the samples in 2,2'-thiodiethanol (TDE), thus reducing refractive index gradients. The relatively short total scan times and the high quality of the acquired 3D images make confocal microscopy a desirable method for studying intact hair follicles under normal and pathological conditions.
Feature extraction from 3D lidar point clouds using image processing methods
NASA Astrophysics Data System (ADS)
Zhu, Ling; Shortridge, Ashton; Lusch, David; Shi, Ruoming
2011-10-01
Airborne LiDAR data have become cost-effective to produce at local and regional scales across the United States and internationally. These data are typically collected and processed into surface data products by contractors for state and local communities. Current algorithms for advanced processing of LiDAR point cloud data are normally implemented in specialized, expensive software that is unavailable to many users, who are therefore unable to experiment with the point cloud data directly to extract desired feature classes. The objective of this research is to identify and assess automated, readily implementable GIS procedures to extract features such as buildings, vegetated areas, parking lots, and roads from LiDAR data using standard image processing tools, as such tools are relatively mature and offer many effective classification methods. The adopted procedure employs four distinct stages. First, interpolation transfers the 3D points to a high-resolution raster; grids of both height and intensity are generated. Second, multiple raster maps - a normalized digital surface model (nDSM), difference of returns, slope, and the LiDAR intensity map - are combined into a multi-channel image. Third, a feature space of this image is created. Finally, supervised classification is performed on the feature space. The approach is demonstrated both in a conceptual model and on a complex real-world case study, and its strengths and limitations are addressed.
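The first stage, gridding the 3D points into a raster, can be sketched with a simple max-height binning (a common stand-in for the interpolation step; the cell size is an assumed parameter, and a production workflow would also interpolate empty cells):

```python
import numpy as np

def rasterize_max_z(points, cell=1.0):
    """Grid LiDAR returns into a max-height raster.

    points: (N, 3) array of x, y, z returns; cell: grid spacing in the
    cloud's units (assumed value).  Empty cells are left as NaN.
    """
    xy = points[:, :2]
    origin = xy.min(axis=0)                         # lower-left corner
    col, row = np.floor((xy - origin) / cell).astype(int).T
    grid = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, z in zip(row, col, points[:, 2]):
        if np.isnan(grid[r, c]) or z > grid[r, c]:  # keep highest return
            grid[r, c] = z
    return grid
```

The same binning with intensity values instead of z produces the intensity grid; subtracting a ground model from the height grid yields the nDSM channel.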
NASA Technical Reports Server (NTRS)
Folta, David; Bauer, Frank H. (Technical Monitor)
2001-01-01
The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit, with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored by computing STMs using the flight-proven code, a monodromy matrix developed from an N-body model of a libration orbit, and a standard STM developed from the gravitational and Coriolis effects as measured at the libration point. Formation-flying Delta-Vs calculated from these methods are compared to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.
Knowledge guided object detection and identification in 3D point clouds
NASA Astrophysics Data System (ADS)
Karmacharya, A.; Boochs, F.; Tietz, B.
2015-05-01
Modern instruments like laser scanners and 3D cameras, and image-based techniques like structure from motion, produce huge point clouds as the basis for further object analysis. This has considerably changed the way data are compiled, away from selective, manually guided processes towards automatic, computer-supported strategies. However, there is still a long way to go to achieve the quality and robustness of manual processes, as the data sets are mostly very complex. Existing strategies for 3D data processing for object detection and reconstruction rely heavily on either data-driven or model-driven approaches. These approaches are limited by their strong dependence on the nature of the data and their inability to handle deviations; furthermore, their lack of capability to integrate other data or information between processing steps exposes them further. This restricts the approaches to strict predefined strategies and does not allow deviation when new, unexpected situations arise. We propose a solution that brings intelligence to the processing activities through the use of semantics. The solution binds the objects, along with other related knowledge domains, to the numerical processing to facilitate the detection of geometries, and then uses experts' inference rules to annotate them. The solution was tested within the prototypical application of the research project "Wissensbasierte Detektion von Objekten in Punktwolken für Anwendungen im Ingenieurbereich (WiDOP)". Its flexibility is demonstrated through two entirely different use-case scenarios: Deutsche Bahn (the German railway system) for outdoor scenarios and Fraport (Frankfurt Airport) for indoor scenarios. Apart from the difference in their environments, they provide different conditions that the solution needs to consider: while the locations of the objects at Fraport were known beforehand, those of Deutsche Bahn were not.
NASA Astrophysics Data System (ADS)
Shechtman, Yoav; Weiss, Lucien E.; Backer, Adam S.; Moerner, William E.
2016-02-01
We extend the information content of the microscope's point-spread-function (PSF) by adding a new degree of freedom: spectral information. We demonstrate controllable encoding of a microscopic emitter's spectral information (color) and 3D position in the shape of the microscope's PSF. The design scheme works by exploiting the chromatic dispersion of an optical element placed in the optical path. By using numerical optimization we design a single physical pattern that yields different desired phase delay patterns for different wavelengths. To demonstrate the method's applicability experimentally, we apply it to super-resolution imaging and to multiple particle tracking.
NASA Astrophysics Data System (ADS)
Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.
2016-06-01
This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images were taken from an RPAS platform on a predefined flight path, with every RGB image having a corresponding TIR image taken from the same position and with the same orientation, within the accuracy of the RPAS system and its inertial measurement unit. To remove remaining differences in exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image, which assumes a mainly planar scene to avoid mismatches; (ii) coregistration of the dense 3D point clouds from RGB and TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted to the segmented dense 3D point cloud; (iv) coregistration of the dense 3D point clouds from RGB and TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. Quality is measured by comparing the differences of the back projection of homologous points in the corrected RGB and TIR images.
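The ICP step in strategy (iv) alternates nearest-neighbor pairing with a closed-form rigid alignment (Kabsch/SVD). Below is a minimal point-to-point sketch under the assumption of a reasonable initial pose (here provided by the RPAS orientation); a real pipeline would add outlier rejection and a convergence test:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, n_iter=30):
    """Point-to-point ICP aligning src onto dst (rigid transform).

    Returns R, t such that src @ R.T + t approximates dst.  A minimal
    sketch; assumes the clouds start roughly aligned so that nearest
    neighbors are mostly correct correspondences.
    """
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(n_iter):
        cur = src @ R.T + t
        _, nn = tree.query(cur)              # closest dst point per src point
        p = cur - cur.mean(0)
        q = dst[nn] - dst[nn].mean(0)
        U, _, Vt = np.linalg.svd(p.T @ q)    # Kabsch: best rotation for pairing
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T                  # reflection-safe rotation
        dt = dst[nn].mean(0) - dR @ cur.mean(0)
        R, t = dR @ R, dR @ t + dt           # compose incremental update
    return R, t
```

On noiseless clouds related by a small rigid motion, the recovered transform aligns the clouds exactly.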
Calibration of an outdoor distributed camera network with a 3D point cloud.
Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan
2014-07-29
Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to require frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method. The first is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221
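The image-to-ground-plane homographies mentioned above are applied in homogeneous coordinates: a pixel (u, v) is lifted to (u, v, 1), multiplied by the 3x3 matrix H, and dehomogenized. A minimal sketch (H here is a made-up example matrix, not one of the paper's calibrated homographies):

```python
import numpy as np

def apply_homography(H, pts):
    """Map pixel coordinates to ground-plane world coordinates.

    H:   (3, 3) homography matrix
    pts: (N, 2) or (2,) pixel coordinates
    Returns (N, 2) world coordinates on the ground plane.
    """
    pts = np.atleast_2d(pts).astype(float)
    homog = np.column_stack([pts, np.ones(len(pts))]) @ H.T  # lift and transform
    return homog[:, :2] / homog[:, 2:3]                      # dehomogenize
```

With such a mapping, a person detection's foot point in the image is converted directly to a walking-area position for the localization algorithms.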
Estimability of thrusting trajectories in 3-D from a single passive sensor with unknown launch point
NASA Astrophysics Data System (ADS)
Yuan, Ting; Bar-Shalom, Yaakov; Willett, Peter; Ben-Dov, R.; Pollak, S.
2013-09-01
The problem of estimating the state of thrusting/ballistic endoatmospheric projectiles moving in 3-dimensional (3-D) space using 2-dimensional (2-D) measurements from a single passive sensor is investigated. The location of the projectile's launch point (LP) is unavailable, which can significantly affect the performance of the estimation and the impact point prediction (IPP). The LP altitude is therefore treated as an unknown target parameter. Estimability is analyzed based on the Fisher information matrix (FIM) of the target parameter vector, comprising the initial launch angles (azimuth and elevation), drag coefficient, thrust, and LP altitude, which determine the trajectory according to a nonlinear motion equation. A full-rank FIM ensures that the target parameters are estimable. The corresponding Cramér-Rao lower bound (CRLB) quantifies the estimation performance of a statistically efficient estimator and can be used for IPP. In view of the inherent nonlinearity of the problem, the maximum likelihood (ML) estimate of the target parameter vector is found using a mixed (partially grid-based) search approach. For a selected grid in the drag-coefficient-thrust-altitude subspace, the proposed parallelizable approach is shown to have reliable estimation performance and leads to a final IPP of high accuracy.
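The FIM rank test and CRLB described above can be computed numerically for any measurement model z = h(theta) + Gaussian noise: build the Jacobian of h by finite differences, form FIM = J^T J / sigma^2, and invert it. This is a generic sketch (the paper's h is the nonlinear thrusting-trajectory model, which is not reproduced here):

```python
import numpy as np

def fim_crlb(h, theta, sigma, eps=1e-6):
    """Numerical Fisher information and CRLB for z = h(theta) + N(0, sigma^2 I).

    h: maps the parameter vector to the stacked measurements.
    Raises if the FIM is rank-deficient, i.e. parameters not estimable.
    """
    theta = np.asarray(theta, float)
    m = len(h(theta))
    J = np.empty((m, len(theta)))
    for j in range(len(theta)):              # central-difference Jacobian
        d = np.zeros_like(theta)
        d[j] = eps
        J[:, j] = (h(theta + d) - h(theta - d)) / (2 * eps)
    fim = J.T @ J / sigma**2
    if np.linalg.matrix_rank(fim) < len(theta):
        raise ValueError("parameters not estimable: FIM is rank-deficient")
    return fim, np.linalg.inv(fim)           # CRLB = FIM^{-1}
```

For a linear model h(theta) = A theta this reproduces the textbook result FIM = A^T A / sigma^2.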
Walsh-Hadamard Based 3D Steganography for Protecting Sensitive Information in Point-of-Care.
Abuadbba, Alsharif; Khalil, Ibrahim
2016-11-29
Remote point-of-care systems have recently attracted much attention for advantages such as saving lives and reducing cost. The transmitted streams usually contain (1) normal biomedical signals (e.g. ECG) and (2) highly private information (e.g. patient identity). Despite the obvious advantages, the primary concerns are the privacy and authenticity of the transferred data. This paper therefore introduces a novel steganographic mechanism that ensures (1) strong privacy preservation of private information, by randomly concealing it inside the transferred signals using a key, and (2) evidence of originality for the biomedical signals. To maximize hiding capacity, the fast Walsh-Hadamard transform is used to transform the signals into a group of coefficients. To ensure the lowest distortion, only less significant coefficient values are employed. To strengthen security, the key is used in a 3-dimensional random reordering of coefficients to produce the 3D order employed in the concealing process. The resulting distortion has been thoroughly measured at all stages. Extensive experiments on three types of signals prove that the algorithm has little impact on the genuine signals (<1%). The security evaluation also confirms that unlawful retrieval of the hidden information within a rational time is highly improbable.
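The transform-domain hiding idea can be illustrated with a fast Walsh-Hadamard transform and quantization-based embedding of a bit in one coefficient. This sketch uses simple quantization-index modulation in a fixed coefficient as a stand-in for the paper's keyed 3-D coefficient reordering; the step size and coefficient index are assumed values:

```python
import numpy as np

def fwht(a):
    """Iterative fast Walsh-Hadamard transform (length must be 2^k)."""
    a = np.asarray(a, float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):    # butterfly passes
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a

def embed_bit(signal, bit, idx=-1, step=2.0):
    """Hide one bit in a Walsh coefficient via quantization parity."""
    c = fwht(signal)
    m = np.floor(c[idx] / (2 * step))        # coarse quantization level
    c[idx] = step * (2 * m + bit) + step / 2 # parity of level encodes the bit
    return fwht(c) / len(signal)             # inverse WHT = forward / N

def extract_bit(signal, idx=-1, step=2.0):
    c = fwht(signal)
    return int(np.floor(c[idx] / step)) % 2
```

Because only one coefficient is nudged by at most two quantization steps, the per-sample distortion of the cover signal stays small.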
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan
2015-11-15
achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both the reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10⁻³ mm⁻¹, σ_recon = 7.0 × 10⁻³ mm⁻¹) and (μ_CT = −2.5 × 10⁻³ mm⁻¹, σ_CT = 5.3 × 10⁻³ mm⁻¹), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrates the ability of the proposed method to faithfully represent the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method for point clouds acquired by a 3D surface photogrammetry system. The proposed method generates a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy. PMID:26520747
NASA Astrophysics Data System (ADS)
Hu, Bin; Kieweg, Sarah
2010-11-01
Gravity-driven thin-film flow down an incline is studied for the optimal design of polymeric drug delivery vehicles, such as anti-HIV topical microbicides. We develop a 3D FEM model using non-Newtonian mechanics to model the flow of gels in response to gravity, surface tension, and shear-thinning. A constant-volume setup is applied within the scope of the lubrication approximation. The lengthwise profiles of the 3D model agree with our previous 2D finite difference model, while the transverse contact line patterns of the 3D model are compared to experiments. With the incorporation of surface tension, capillary ridges are observed at the leading front in both the 2D and 3D models. Previously published studies show that the capillary ridge can amplify fingering instabilities in the transverse direction. Sensitivity studies (2D & 3D) and experiments are carried out to describe the influence of surface tension and shear-thinning on the capillary ridge and fingering instabilities.
Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)
NASA Astrophysics Data System (ADS)
Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane
2016-04-01
Amongst today's techniques that are able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high resolution and high precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with UAVs has become easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan was acquired in the main chamber of the cave (~ 15000 m³) to produce a 3D point cloud of its inner walls and infer geological beds and structures. Even if the use of the LIDAR instrument was not really comfortable in such a caving environment, the collected data showed remarkable precision when checked against a few control points. We also performed another challenging survey of the same cave chamber by building a 3D point cloud photogrammetrically from a set of DSLR camera pictures taken from the ground and UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs cloud recovery and point precision), and (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modeled by photogrammetry provide visible light spectral information
Wang, Yunsheng; Weinacker, Holger; Koch, Barbara
2008-01-01
A procedure for both vertical canopy structure analysis and 3D single tree modelling based on Lidar point clouds is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud, whose point heights represent the absolute heights of the ground objects, is generated from the original Lidar raw point cloud. The main tree canopy layers and the height ranges of the layers are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at different height levels is then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at different height levels can be derived. PMID:27879916
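The statistical layer-detection step described above (finding the main canopy layers from the height distribution of normalized points) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the bin width, smoothing window and half-maximum threshold are assumed parameters.

```python
import numpy as np

def detect_canopy_layers(heights, bin_width=0.5, smooth_window=5):
    """Estimate canopy layer height ranges from normalized point heights.

    A histogram of point heights is smoothed and split at local minima,
    approximating the statistical layer detection described above."""
    bins = np.arange(0.0, heights.max() + bin_width, bin_width)
    counts, edges = np.histogram(heights, bins=bins)
    # Moving-average smoothing of the height-frequency profile.
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.convolve(counts, kernel, mode="same")
    # Local minima of the smoothed profile mark boundaries between layers;
    # only dips well below the dominant mode are accepted.
    interior = range(1, len(smoothed) - 1)
    minima = [i for i in interior
              if smoothed[i] <= smoothed[i - 1]
              and smoothed[i] < smoothed[i + 1]
              and smoothed[i] < 0.5 * smoothed.max()]
    boundaries = [edges[0]] + [edges[i] for i in minima] + [edges[-1]]
    return [(lo, hi) for lo, hi in zip(boundaries[:-1], boundaries[1:])]
```

On bimodal height data (understory plus a dominant canopy), the gap between modes yields a boundary, so the two stories fall into different layers.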
Point-spread function synthesis in scanning holographic microscopy
Indebetouw, Guy; Zhong, Wenwei; Chamberlin-Long, David
2006-01-01
Scanning holographic microscopy is a two-pupil synthesis method allowing the capture of single-sideband inline holograms of noncoherent (e.g., fluorescent) three-dimensional specimens in a single two-dimensional scan. The flexibility offered by the two-pupil method in synthesizing unusual point-spread functions is discussed. We illustrate and compare three examples of holographic recording, using computer simulations. The first example is the classical hologram in which each object point is encoded as a spherical wave. The second example uses pupils with spherical phase distributions having opposite curvatures, leading to reconstructed images with a resolution limit that is half that of the objective. In the third example, axicon pupils are used to obtain axially sectioned images. PMID:16783435
Joint angle variability in 3D bimanual pointing: uncontrolled manifold analysis.
Domkin, Dmitry; Laczko, Jozsef; Djupsjöbacka, Mats; Jaric, Slobodan; Latash, Mark L
2005-05-01
The structure of joint angle variability and its changes with practice were investigated using the uncontrolled manifold (UCM) computational approach. Subjects performed fast and accurate bimanual pointing movements in 3D space, trying to match the tip of a pointer held in the right hand with the tip of one of three different targets held in the left hand, during a pre-test, several practice sessions and a post-test. The prediction of the UCM approach about the structuring of joint angle variance for the selective stabilization of important task variables was tested with respect to selective stabilization of the time series of the vectorial distance between the pointer and aimed target tips (bimanual control hypothesis) and with respect to selective stabilization of the endpoint trajectory of each arm (unimanual control hypothesis). The components of the total joint angle variance not affecting (V(COMP)) and affecting (V(UN)) the value of a selected task variable were computed for each 10% of the normalized movement time. The ratio of these two components, R(V)=V(COMP)/V(UN), served as a quantitative index of selective stabilization. Both the bimanual and unimanual control hypotheses were supported; however, the R(V) values for the bimanual hypothesis were significantly higher than those for the unimanual hypothesis applied to the left and right arms, both prior to and after practice. This suggests that the CNS stabilizes the relative trajectory of one endpoint with respect to the other more than it stabilizes the trajectories of each of the endpoints in external space. Practice-associated improvement in both movement speed and accuracy was accompanied by a counter-intuitive lack of change in R(V). Both the V(COMP) and V(UN) variance components decreased such that their ratio remained constant prior to and after practice. We conclude that the UCM approach offers a unique and under-explored opportunity to track changes in the organization of multi-effector systems with practice
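The UCM variance partitioning described above can be sketched numerically: across-trial joint-angle deviations are projected onto the null space of the task Jacobian (giving V(COMP)) and its orthogonal complement (giving V(UN)), each normalized per degree of freedom. This is a minimal sketch assuming linearization at the mean configuration; the function name and interface are illustrative, not the authors' code.

```python
import numpy as np

def ucm_variance_ratio(joint_configs, jacobian):
    """Partition across-trial joint-angle variance into a component that does
    not affect (V_comp, within the UCM) and one that does affect (V_un,
    orthogonal to the UCM) a task variable with the given Jacobian.
    Returns (V_comp, V_un, R_V)."""
    X = np.asarray(joint_configs, dtype=float)   # shape: trials x n_joints
    n_trials, n = X.shape
    dev = X - X.mean(axis=0)                     # deviations from mean config
    # Orthonormal bases: rows of Vt past the rank span the Jacobian null space.
    _, s, Vt = np.linalg.svd(jacobian)
    rank = int(np.sum(s > 1e-12))
    ucm_basis = Vt[rank:].T                      # n x (n - rank), the UCM
    ort_basis = Vt[:rank].T                      # n x rank, task-relevant
    proj_ucm = dev @ ucm_basis
    proj_ort = dev @ ort_basis
    # Variance per degree of freedom, as in the UCM literature.
    V_comp = np.sum(proj_ucm ** 2) / ((n - rank) * n_trials)
    V_un = np.sum(proj_ort ** 2) / (rank * n_trials)
    return V_comp, V_un, V_comp / V_un
```

With most trial-to-trial variability lying in directions that leave the task variable unchanged, R_V comes out well above 1, the signature of selective stabilization.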
Automated 3D motion tracking using Gabor filter bank, robust point matching, and deformable models.
Chen, Ting; Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon
2010-01-01
Tagged magnetic resonance imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and to detect regional loss of heart function. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the robust point matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, RPM helps to minimize the impact on the motion tracking result of 1) through-plane motion and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short- and long-axis image planes can be combined to drive a 3D deformable model, using the moving least squares method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method performs well on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method
Establishing point correspondence of 3D faces via sparse facial deformable model.
Pan, Gang; Zhang, Xiaobo; Wang, Yueming; Hu, Zhenfang; Zheng, Xiaoxiang; Wu, Zhaohui
2013-11-01
Establishing a dense vertex-to-vertex anthropometric correspondence between 3D faces is an important and fundamental problem in 3D face research, which can contribute to most applications of 3D faces. This paper proposes a sparse facial deformable model to achieve this task automatically. For an input 3D face, the basic idea is to generate a new 3D face that has the same mesh topology as a reference face, a shape highly similar to the input face, and vertices that correspond to those of the reference face in an anthropometric sense. Two constraints are modeled in our method to satisfy these requirements: 1) a shape constraint and 2) a correspondence constraint. The shape constraint is solved by a novel face deformation approach in which a normal-ray scheme is integrated into the closest-vertex scheme to preserve high-curvature shapes during deformation. The correspondence constraint is based on the assumption that if the vertices of 3D faces are in correspondence, their shape signals lie on a manifold and each face signal can be represented sparsely by a few typical items in a dictionary. The dictionary can be learnt well and contains the distribution information of the corresponded vertices. The correspondence information can be conveyed to the sparse representation of the generated 3D face. Thus, a patch-based sparse representation is proposed as the correspondence constraint. By solving the correspondence constraint iteratively, the vertices of the generated face can be adjusted gradually toward corresponding positions. In the early iteration steps, smaller sparsity thresholds are set, which yield larger representation errors but better globally corresponded vertices. In the later steps, relatively larger sparsity thresholds are used to encode local shapes. By this method, the vertices of the new face approach the right positions progressively until the final global correspondence is reached. Our method is automatic, and manual work is needed only in the training procedure
NASA Astrophysics Data System (ADS)
Shepherd, Lucas; Cassak, P.; Drake, J.; Gosling, J.; Phan, T.; Shay, M. A.
2013-07-01
In two-ribbon flares, the fact that the ribbons separate in time is considered evidence of magnetic reconnection. However, in addition to separating, the ribbons can also elongate (as seen in animations of, for example, the Bastille Day flare). The elongation is undoubtedly related to the reconnection spreading in the out-of-plane direction. Indeed, naturally occurring magnetic reconnection generally begins in a spatially localized region and spreads in the direction perpendicular to the reconnection plane as time progresses. For example, it was suggested that X-line spreading is necessary to explain the observation of X-lines extending more than 390 Earth radii (Phan et al., Nature, 404, 848, 2006), and such spreading has been seen in reconnection experiments. A sizeable out-of-plane (guide) magnetic field is present at flare sites and in the solar wind. Here, we study the effect that the dissipation mechanism and the strength of the guide field have on X-line spreading. We present results from three-dimensional numerical simulations of magnetic reconnection, comparing spreading with the Hall term to spreading with anomalous resistivity. Applications to solar flares and magnetic reconnection in the solar wind will be discussed.
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Fang, Lina; Li, Jonathan
2013-05-01
Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing dense point clouds that can be used to construct detailed road models for large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The proposed method partitions MLS point clouds into a set of consecutive "scanning lines", each of which corresponds to a road cross-section. A moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns. The detected curb points are tracked and refined so that they are both globally consistent and locally similar. To evaluate the validity of the proposed method, experiments were conducted using two types of street-scene point clouds captured by Optech's Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads exceed 94.42%, 91.13%, and 91.3%, respectively, demonstrating that the proposed method is a promising solution for extracting 3D roads from MLS point clouds.
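The per-scan-line curb detection step can be illustrated with a simple height-jump test on one road cross-section. The jump and distance thresholds below are assumed kerb-scale values for illustration, not the parameters used in the paper.

```python
import numpy as np

def detect_curb_points(cross_section, min_jump=0.05, max_jump=0.30, max_dx=0.2):
    """Flag candidate curb points on one scan line (a road cross-section).

    cross_section: (N, 2) array of (lateral_offset, height), sorted by offset.
    A curb is assumed to appear as a height jump of roughly kerb size
    (min_jump..max_jump metres) over a short lateral distance (max_dx metres).
    Returns indices of points sitting just past each detected jump."""
    pts = np.asarray(cross_section, dtype=float)
    dx = np.diff(pts[:, 0])                      # lateral spacing between points
    dz = np.diff(pts[:, 1])                      # height change between points
    is_curb = (np.abs(dz) >= min_jump) & (np.abs(dz) <= max_jump) & (dx <= max_dx)
    return np.nonzero(is_curb)[0] + 1
```

On a synthetic cross-section (flat carriageway, then a 15 cm step up to a sidewalk), the single step is flagged at the correct lateral offset; tracking across consecutive scan lines would then enforce global consistency as described above.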
On the Estimation of Forest Resources Using 3D Remote Sensing Techniques and Point Cloud Data
NASA Astrophysics Data System (ADS)
Karjalainen, Mika; Karila, Kirsi; Liang, Xinlian; Yu, Xiaowei; Huang, Guoman; Lu, Lijun
2016-08-01
In recent years, 3D-capable remote sensing techniques have shown great potential in forest biomass estimation because of their ability to measure forest canopy structure, tree height and density. The objective of the Dragon3 forest resources research project (ID 10667) and the supporting ESA young scientist project (ESA contract no. 4000109483/13/I-BG) was to study the use of satellite-based 3D techniques in forest tree height estimation, and consequently in forest biomass and biomass change estimation, by combining satellite data with terrestrial measurements. Results from airborne 3D techniques were also used in the project. Even though forest tree height can be estimated from 3D satellite SAR data to some extent, there is a need for field reference plots. For this reason, we have also been developing automated field plot measurement techniques based on Terrestrial Laser Scanning (TLS) data, which can be used to train and calibrate satellite-based estimation models. In this paper, results of canopy height models created from TerraSAR-X stereo and TanDEM-X INSAR data are shown, as well as preliminary results from the TLS field plot measurement system. Results from the airborne CASMSAR system, measuring forest canopy height from P- and X-band INSAR, are also presented.
Supergridded cone-beam reconstruction and its application to point-spread function calculation
NASA Astrophysics Data System (ADS)
Chen, Zikuan; Ning, Ruola
2005-08-01
In cone-beam computed tomography (CBCT), the volumetric reconstruction may in principle assume an arbitrarily fine grid. Supergridded cone-beam reconstruction refers to reconstructing the object domain, or a subvolume thereof, with a grid that is finer than the proper computed tomography sampling grid (as determined by gantry geometry and detector discreteness). This technique can naturally reduce the voxelization effect, thereby retaining more detail for object reproduction. The grid refinement is usually limited to two or three refinement levels because the pursuit of finer detail is ultimately limited by detector discreteness. The volume reconstruction is usually targeted to a local volume of interest due to the cubic growth of the three-dimensional (3D) array size. As an application, we used this technique for 3D point-spread function (PSF) measurement of a CBCT system by reconstructing edge spread profiles on a refined grid. Through an experiment with a Teflon ball on a CBCT system, we demonstrated the supergridded volume reconstruction (based on a Feldkamp algorithm) and the CBCT PSF measurement (based on an edge-blurring technique). In comparison with a post-reconstruction image refinement technique (upsampling and interpolation), the supergridded reconstruction produced better PSFs (in terms of a smaller FWHM and PSF fitting error).
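In one dimension, the edge-blurring PSF measurement mentioned above reduces to differentiating an edge spread profile to obtain a line spread profile and reading off its FWHM. A minimal sketch, assuming a uniformly sampled profile; the interpolation-free FWHM estimate is deliberately crude and not the paper's fitting procedure.

```python
import numpy as np

def psf_from_edge_profile(esf, dx=1.0):
    """Differentiate an edge-spread profile (ESF) to get a line-spread
    profile (LSF), then estimate its FWHM as the span of samples at or
    above half maximum. Returns (lsf, fwhm)."""
    lsf = np.abs(np.gradient(np.asarray(esf, dtype=float), dx))
    above = np.nonzero(lsf >= 0.5 * lsf.max())[0]
    fwhm = (above[-1] - above[0]) * dx
    return lsf, fwhm
```

For a Gaussian-blurred edge with sigma = 1, the recovered FWHM should sit near the analytic value 2.355·sigma, degraded slightly by sampling.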
Laser confocal microscope with wavelet-profiled point spread function
NASA Astrophysics Data System (ADS)
Romero, Mary Jacquiline; Bautista, Godofredo; Daria, Vincent Ricardo; Saloma, Caesar
2010-04-01
We report a laser-scanning confocal reflectance microscope with a wavelet-profiled point spread function (PSF) for rapid multi-resolution extraction and analysis of microscopic object features. The PSF is generated via holography by encoding a π-phase shifting disk onto a collimated laser beam via a phase-only spatial light modulator (SLM) positioned at the pupil plane of the focusing objective lens. Scaling of the transverse PSF distribution is achieved by selecting a suitable ratio of the π-phase shifting disk radius to the pupil aperture radius. With one and the same objective lens and one SLM to control the phase profile of the pupil function, we produce amplitude PSF distributions that are accurate scaled representations of the circularly symmetric Mexican hat mother wavelet.
Multicolour localization microscopy by point-spread-function engineering
NASA Astrophysics Data System (ADS)
Shechtman, Yoav; Weiss, Lucien E.; Backer, Adam S.; Lee, Maurice Y.; Moerner, W. E.
2016-09-01
Super-resolution microscopy has revolutionized cellular imaging in recent years. Methods that rely on sequential localization of single point emitters enable spatial tracking at a resolution of ˜10-40 nm. Moreover, tracking and imaging in three dimensions is made possible by various techniques, including point-spread-function (PSF) engineering—namely, encoding the axial (z) position of a point source in the shape that it creates in the image plane. However, efficient multicolour imaging remains a challenge for localization microscopy—a task of the utmost importance for contextualizing biological data. Normally, multicolour imaging requires sequential imaging, multiple cameras or segmented dedicated fields of view. Here, we demonstrate an alternate strategy: directly encoding the spectral information (colour), in addition to three-dimensional position, in the image. By exploiting chromatic dispersion we design a new class of optical phase masks that simultaneously yield controllably different PSFs for different wavelengths, enabling simultaneous multicolour tracking or super-resolution imaging in a single optical path.
Stellar photometry and astrometry with discrete point spread functions
NASA Astrophysics Data System (ADS)
Mighell, Kenneth J.
2005-08-01
The key features of the MATPHOT algorithm for precise and accurate stellar photometry and astrometry using discrete point spread functions (PSFs) are described. A discrete PSF is a sampled version of a continuous PSF, which describes the two-dimensional probability distribution of photons from a point source (star) just above the detector. The shape information about the photon scattering pattern of a discrete PSF is typically encoded using a numerical table (matrix) or a FITS (Flexible Image Transport System) image file. Discrete PSFs are shifted within an observational model using a 21-pixel-wide damped sinc function, and position partial derivatives are computed using a five-point numerical differentiation formula. Precise and accurate stellar photometry and astrometry are achieved with undersampled CCD (charge-coupled device) observations by using supersampled discrete PSFs that are sampled two, three or more times more finely than the observational data. The precision and accuracy of the MATPHOT algorithm are demonstrated by using the C-language MPD code to analyse simulated CCD stellar observations; measured performance is compared with a theoretical performance model. Detailed analysis of simulated Next Generation Space Telescope observations demonstrates that millipixel relative astrometry and millimagnitude photometric precision are achievable with complicated space-based discrete PSFs.
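The two numerical ingredients named above, sub-pixel PSF shifting with a 21-pixel damped sinc kernel and five-point numerical differentiation, can be sketched in one dimension as follows. The Gaussian damping profile and its width parameter are assumptions for illustration; MATPHOT's exact kernel is not reproduced here.

```python
import numpy as np

def damped_sinc_shift(signal, shift, width=21, alpha=0.45):
    """Shift a 1-D sampled profile by a sub-pixel amount using a
    Gaussian-damped sinc interpolation kernel of the given width.
    (The damping form and alpha are illustrative assumptions.)"""
    half = width // 2
    taps = np.arange(-half, half + 1)
    kernel = np.sinc(taps - shift) * np.exp(-(alpha * (taps - shift)) ** 2)
    kernel /= kernel.sum()                     # unit DC gain preserves flux
    return np.convolve(signal, kernel, mode="same")

def five_point_derivative(f, x, h=0.5):
    """Five-point central difference, as used for position partials;
    exact for polynomials up to degree four."""
    return (f(x - 2 * h) - 8 * f(x - h) + 8 * f(x + h) - f(x + 2 * h)) / (12.0 * h)
```

Shifting a smooth Gaussian profile by half a pixel moves its centroid by almost exactly 0.5 samples, which is the behaviour the shifted-PSF model relies on.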
A Point Spread Function for the EPOXI Mission
NASA Technical Reports Server (NTRS)
Barry, Richard K.
2010-01-01
The Extrasolar Planet Observation and Characterization and the Deep Impact Extended Investigation missions (EPOXI) are currently observing the transits of exoplanets, two comet nuclei at short range, and the Earth and Mars using the High Resolution Instrument (HRI), a 0.3 m f/35 telescope on the Deep Impact probe. The HRI is in a permanently defocused state, with the instrument point of focus about 0.6 cm before the focal plane, due to the use of a reference flat mirror that took on optical power during ground thermal-vacuum testing. Consequently, the point spread function (PSF) covers approximately nine pixels FWHM and is characterized by a patch with three-fold symmetry due to the three-point support structures of the primary and secondary mirrors. The PSF is also strongly color dependent, varying in shape and size with changes in filtration and target color. While defocus is highly desirable for exoplanet transit observations, to limit sensitivity to intra-pixel variation, it is suboptimal for observations of spatially resolved targets. Consequently, all images used in our analysis of such objects were deconvolved with an instrument PSF. The instrument PSF is also being used to optimize transit analysis. We discuss the development and usage of an instrument PSF for these observations.
Goldberg, K.A. |; Tejnil, E.; Bokor, J. |
1995-12-01
A 3-D electromagnetic field simulation is used to model the propagation of extreme ultraviolet (EUV), 13-nm, light through sub-1500 Å diameter pinholes in a highly absorptive medium. Deviations of the diffracted wavefront phase from an ideal sphere are studied within 0.1 numerical aperture to predict the accuracy of EUV point diffraction interferometers used in at-wavelength testing of nearly diffraction-limited EUV optical systems. Aberration magnitudes are studied for various 3-D pinhole models, including cylindrical and conical pinhole bores.
NASA Technical Reports Server (NTRS)
Hassan, M. I.; Kuwana, K.; Saito, K.
2001-01-01
In the past, we measured the 3-D flow structure in the liquid and gas phases created by a spreading flame over liquid fuels. In that effort, we employed several different techniques, including our original laser sheet particle tracking (LSPT) technique, which is capable of measuring transient 2-D flow structures. Recently we obtained a state-of-the-art integrated particle image velocimetry (IPIV) system, whose function is similar to LSPT but which has an integrated data recording and processing system. To evaluate the accuracy of our IPIV system, we conducted a series of flame spread tests using the same experimental apparatus as in our previous flame spread studies and obtained a series of 2-D flow profiles corresponding to our previous LSPT measurements. We confirmed that both the LSPT and IPIV techniques produced similar data, but the IPIV data contain more detailed flow structures than the LSPT data. Here we present some of the newly obtained IPIV flow structure data and discuss the role of gravity in the flame-induced flow structures. Note that the application of IPIV to our flame spread problems is not straightforward, and it required several preliminary accuracy tests, including this comparison of IPIV to LSPT.
Attribute-based point cloud visualization in support of 3-D classification
NASA Astrophysics Data System (ADS)
Zlinszky, András; Otepka, Johannes; Kania, Adam
2016-04-01
Despite the rich information available in LIDAR point attributes through full waveform recording, radiometric calibration and advanced texture metrics, LIDAR-based classification is mostly done in the raster domain. Point-based analyses such as noise removal or terrain filtering are often carried out without visual investigation of the point cloud attributes used. This is because point cloud visualization software usually handles only a limited number of pre-defined point attributes and only allows colorizing the point cloud with one of these at a time. Meanwhile, point cloud classification is rapidly evolving, and uses not only individual attributes but combinations of them. In order to understand input data and output results better, more advanced methods for visualization are needed. Here we propose an algorithm within the OPALS software package that handles visualization of the point cloud together with its attributes. The algorithm is based on the .odm (OPALS data manager) file format, which efficiently handles a large number of pre-defined point attributes and also allows the user to generate new ones. Attributes of interest can be visualized individually by applying predefined or user-generated palettes in a simple .xml format. The colours of the palette are assigned to the points by setting the respective Red, Green and Blue attributes of each point to the colour pre-defined by the palette for the corresponding attribute value. The algorithm handles scaling and histogram equalization based on the distribution of the point attribute under consideration. Additionally, combinations of attributes can be visualized using RGB colour mixing. The output dataset can be in any standard format where RGB attributes are supported and can be visualized with conventional point cloud viewing software. Viewing the point cloud together with its attributes allows efficient selection of filter settings and classification parameters. For already classified point clouds, a large
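The palette mapping with optional histogram equalization described above can be sketched independently of the .odm format. A minimal sketch; the function name and palette layout are illustrative, not OPALS's actual interface.

```python
import numpy as np

def colorize_attribute(values, palette, equalize=True):
    """Map a per-point attribute to RGB colours.

    palette: (K, 3) array of RGB rows spanning low..high attribute values.
    With equalize=True the attribute is rank-transformed first (a form of
    histogram equalization), so colours spread evenly over the observed
    distribution instead of being compressed by outliers."""
    v = np.asarray(values, dtype=float)
    if equalize:
        ranks = np.argsort(np.argsort(v))        # rank of each point's value
        t = ranks / max(len(v) - 1, 1)
    else:
        t = (v - v.min()) / max(v.max() - v.min(), 1e-12)
    idx = np.clip(np.round(t * (len(palette) - 1)).astype(int),
                  0, len(palette) - 1)
    return np.asarray(palette)[idx]              # (N, 3) RGB per point
```

With a strong outlier in the attribute, plain min-max scaling maps the clustered values to a single colour, while the equalized version spreads them across the palette.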
A Semi-Automated Point Cloud Processing Methodology for 3D Cultural Heritage Documentation
NASA Astrophysics Data System (ADS)
Kıvılcım, C. Ö.; Duran, Z.
2016-06-01
The preliminary phase in any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. However, conventional measurement techniques require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. Over the past two decades, the widespread use of laser scanning and digital photogrammetry has significantly altered the heritage documentation process. Furthermore, advances in these technologies have enabled robust data collection and reduced user workload for generating various levels of products, from single buildings to expansive cityscapes. More recently, the use of procedural modelling methods and BIM-related applications for historic building documentation has become an active area of research; however, fully automated cultural heritage documentation remains an open problem. In this paper, we present a semi-automated methodology for 3D façade modelling of cultural heritage assets, based on parametric and procedural modelling techniques and using airborne and terrestrial laser scanning data. We present the contribution of our methodology, which we implemented in an open source software environment, using the example project of a 16th century early classical era Ottoman structure, Sinan the Architect's Şehzade Mosque in Istanbul, Turkey.
Riazi, Z; Afarideh, H; Sadighi-Bonabi, R
2011-09-01
Based on the determination of the proton fluence at the phantom's surface, a 3D dose distribution is calculated inside a water phantom using a fast method. The dose contribution of secondary particles, originating from inelastic nuclear interactions, is also taken into account. This is achieved by assuming that 60% of the energy transferred to secondary particles is locally absorbed. Secondary radiation delivers approximately 16.8% of the total dose in the plateau region of the Bragg curve for monoenergetic protons of energy 190 MeV. The physical dose beyond the Bragg peak is obtained for a proton beam of 190 MeV using a Geant4 simulation. It is found that the dose beyond the Bragg peak is <0.02% of the maximum dose and is mainly delivered by protons produced via reactions of the secondary neutrons. The relative dose profile is also calculated by simulating the proposed beam line in the Geant4 code. The dose profile produced by our method agrees, within 2%, with the results predicted by the Fermi-Eyges distribution function and the results of the Geant4 simulation. It is expected that the fast numerical approach proposed herein may be utilised in 3D deterministic treatment planning programs to model proton propagation, in order to analyse the effect of modifying the beam line.
Evaluating the Potential of RTK-UAV for Automatic Point Cloud Generation in 3D Rapid Mapping
NASA Astrophysics Data System (ADS)
Fazeli, H.; Samadzadegan, F.; Dadrasjavan, F.
2016-06-01
During disaster and emergency situations, 3D geospatial data can provide essential information for decision support systems. The utilization of geospatial data, with digital surface models as a basic reference, is mandatory to provide an accurate, quick emergency response in so-called rapid mapping activities. The trade-off between accuracy requirements and time restrictions is critical in these situations. UAVs, as alternative platforms for 3D point cloud acquisition, offer potential because of their flexibility and practicability combined with low-cost implementation. Moreover, the high resolution data collected from UAV platforms can provide a quick overview of a disaster area. The target of this paper is to experiment with and evaluate a low-cost system for the generation of point clouds using imagery collected from a low-altitude small autonomous UAV equipped with a customized single-frequency RTK module. A customized multi-rotor platform is used in this study. Moreover, electronic hardware is used to simplify user interaction with the UAV, such as RTK-GPS/camera synchronization; besides the synchronization, lever arm calibration is performed. The platform is equipped with a Sony NEX-5N 16.1-megapixel camera as the imaging sensor. The lens attached to the camera is ZEISS optics, a prime lens with an F1.8 maximum aperture and 24 mm focal length, to deliver outstanding images. All necessary calibrations were performed, and the flight was carried out over the area of interest at a flight height of 120 m above ground level, resulting in a 2.38 cm GSD. Prior to image acquisition, 12 signalized GCPs and 20 check points were distributed in the study area and measured with dual-frequency GPS via the RTK technique, with a horizontal accuracy of σ = 1.5 cm and a vertical accuracy of σ = 2.3 cm. The results of direct georeferencing are compared to these points, and experimental results show that decimeter-level accuracy for 3D point clouds is achievable with the proposed system, which is suitable
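The reported 2.38 cm GSD at 120 m flight height is consistent with the usual pinhole relation GSD = flight height × pixel pitch / focal length. A quick check, assuming a pixel pitch of about 4.8 µm for the 16.1-megapixel APS-C sensor (the pitch is not stated in the text):

```python
def ground_sample_distance(height_m, focal_mm, pixel_pitch_um):
    """Ground footprint of one pixel, in centimetres:
    GSD = H * pixel_pitch / focal_length (pinhole camera model)."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3) * 100.0

# Flight at 120 m with the 24 mm lens; the 4.8 um pitch is an assumption.
gsd = ground_sample_distance(120.0, 24.0, 4.8)   # about 2.4 cm
```

The result of about 2.4 cm matches the reported 2.38 cm to within the uncertainty of the assumed pixel pitch.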
NASA Astrophysics Data System (ADS)
Anai, T.; Kochi, N.; Yamada, M.; Sasaki, T.; Otani, H.; Sasaki, D.; Nishimura, S.; Kimoto, K.; Yasui, N.
2015-05-01
As 3D image measurement software has become widely used with the recent development of computer-vision technology, 3D measurement from images has extended its application field from desktop objects to topographic surveys of large geographical areas. In particular, orientation, which used to be a complicated process in image measurement, can now be performed automatically by simply taking many pictures around the object. In the case of a fully textured object, the 3D measurement of surface features is now done fully automatically from the oriented images, which has greatly facilitated the acquisition of dense, high-precision 3D point clouds from images. Against this background, for small and middle-sized objects, we now provide all-around 3D measurement with a single commercially available digital camera, and we have also developed technology for topographic measurement with airborne images taken by a small UAV [1-5]. In the present study, for small objects, we examine the accuracy of surface measurement (matching) using experimental data. For topographic measurement, we examine the influence of GCP distribution on accuracy, again using experimental data. In addition, we examine the differences in the analytical results of several 3D image measurement software packages. This document reviews the processing flow of orientation and 3D measurement in each software package and explains the features of each. To verify the precision of stereo matching, we measured a test plane and a test sphere of known form and assessed the results. For the topographic measurement, we used airborne image data photographed at the test field in Yadorigi, Matsuda City, Kanagawa Prefecture, Japan. We constructed ground control points measured by RTK-GPS and total station, and we show the results of the analysis made
A standard test method based on point spread function for three-dimensional imaging system
NASA Astrophysics Data System (ADS)
Wen, Tao; Dong, Jinxin; Hu, Zhixiong; Cao, Zhenggang; Liu, Wenli; Wang, Jianlin
2016-09-01
Point spread function (PSF) theory has been demonstrated as a proof of concept for evaluating the spatial resolution of three-dimensional imaging technologies such as optical coherence tomography (OCT) and confocal microscopy. A robust test target and an associated evaluation algorithm are in demand for regular quality assurance and inter-comparison of the performance of such 3D imaging systems. To achieve this goal, standard-size microspheres were used to develop PSF phantoms. An OCT system was investigated with the microsphere PSF phantom. Differing from previous studies, a statistical model comprising data from hundreds of scatterers was established to acquire the PSF distribution and variation. This research provides an effective method and a set of practical standard phantoms for evaluating the resolution of three-dimensional imaging modalities.
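The statistical PSF characterization described above rests on fitting a model profile to each microsphere image and pooling the fitted widths. A minimal sketch of that idea, assuming a 1D Gaussian PSF model; the profile length, noise level, and scatterer count here are illustrative, not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    """1D Gaussian PSF model."""
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 201)      # axial position (arbitrary units, illustrative)
true_sigma = 2.0                   # "true" PSF width of the simulated system

# Simulate profiles from many point-like scatterers (microspheres) and fit each.
fitted_sigmas = []
for _ in range(100):
    profile = gaussian(x, 1.0, rng.normal(0, 0.2), true_sigma)
    profile += rng.normal(0, 0.02, x.size)            # detector noise
    popt, _ = curve_fit(gaussian, x, profile, p0=(1.0, 0.0, 1.0))
    fitted_sigmas.append(abs(popt[2]))

fitted_sigmas = np.array(fitted_sigmas)
fwhm = 2 * np.sqrt(2 * np.log(2)) * fitted_sigmas     # resolution metric per scatterer
print(f"FWHM: {fwhm.mean():.2f} +/- {fwhm.std():.2f}")
```

Pooling hundreds of scatterers in this way yields a PSF distribution with variation across the field, rather than a single-point resolution estimate.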
Eddy-Current Sensors with Asymmetrical Point Spread Function
Gajda, Janusz; Stencel, Marek
2016-01-01
This paper concerns a special type of eddy-current sensor in the form of inductive loops. Such sensors are applied in the measuring systems classifying road vehicles. They usually have a rectangular shape with dimensions of 1 × 2 m, and are installed under the surface of the traffic lane. The wide Point Spread Function (PSF) of such sensors causes the information on chassis geometry, contained in the measurement signal, to be strongly averaged. This significantly limits the effectiveness of the vehicle classification. Restoration of the chassis shape, by solving the inverse problem (deconvolution), is also difficult due to the fact that it is ill-conditioned. An original approach to solving this problem is presented in this paper. It is a hardware-based solution and involves the use of inductive loops with an asymmetrical PSF. Laboratory experiments and simulation tests, conducted with models of an inductive loop, confirmed the effectiveness of the proposed solution. In this case, the principle applies that the higher the level of sensor spatial asymmetry, the greater the effectiveness of the deconvolution algorithm. PMID:27782033
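The deconvolution the abstract calls ill-conditioned can be sketched in software: a naive inverse amplifies noise, so the inverse must be regularized. Below is a minimal Wiener-style regularized deconvolution of a blurred "chassis profile"; the Gaussian PSF, the boxcar profile, and the regularization constant are illustrative assumptions, not the paper's sensor model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
x_true = np.zeros(n)
x_true[40:60] = 1.0                      # idealized chassis height profile

# Wide Gaussian PSF, mimicking the strong spatial averaging of an inductive loop.
pos = np.arange(n)
psf = np.exp(-0.5 * ((pos - n // 2) / 6.0) ** 2)
psf /= psf.sum()

H = np.fft.fft(np.roll(psf, -n // 2))    # centre the kernel for circular convolution
y = np.real(np.fft.ifft(H * np.fft.fft(x_true))) + rng.normal(0, 0.01, n)

# Wiener-regularized inverse: smaller lam -> sharper but noisier reconstruction.
lam = 1e-3
Y = np.fft.fft(y)
x_rec = np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))
```

Regularization trades sharpness against noise amplification; the paper's hardware approach of making the PSF asymmetric instead improves the conditioning of this inverse problem at the sensor level.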
Point spread function determination for Keck adaptive optics
NASA Astrophysics Data System (ADS)
Ragland, S.; Jolissaint, L.; Wizinowich, P.; van Dam, M. A.; Mugnier, L.; Bouxin, A.; Chock, J.; Kwok, S.; Mader, J.; Witzel, G.; Do, Tuan; Fitzgerald, M.; Ghez, A.; Lu, J.; Martinez, G.; Morris, M. R.; Sitarski, B.
2016-07-01
One of the primary scientific limitations of adaptive optics (AO) has been incomplete knowledge of the point spread function (PSF), which has made it difficult to use AO for accurate photometry and astrometry in both crowded and sparse fields, for extracting intrinsic morphologies and spatially resolved kinematics, and for detecting faint sources in the presence of brighter ones. To address this limitation, we initiated a program to determine and demonstrate PSF reconstruction for science observations obtained with Keck AO. This paper aims to give a broad view of the progress achieved in implementing a PSF reconstruction capability for Keck AO science observations. It describes the implementation of the algorithms and the design and development of prototype operational tools for automated PSF reconstruction. On-sky performance is discussed by comparing the reconstructed PSFs to the PSFs measured on the NIRC2 science camera. The importance of knowing the control loop performance, of accurately mapping the telescope pupil to the deformable mirror and the science instrument pupil, and of the telescope segment piston error is highlighted. We close by discussing lessons learned and near-term future plans.
Point spread function engineering for iris recognition system design.
Ashok, Amit; Neifeld, Mark A
2010-04-01
Undersampling in the detector array degrades the performance of iris-recognition imaging systems. We find that an undersampling of 8 × 8 reduces iris-recognition performance by nearly a factor of 4 (on the CASIA iris database), as measured by the false rejection ratio (FRR) metric. We employ optical point spread function (PSF) engineering via a Zernike phase mask, in conjunction with multiple subpixel-shifted image measurements (frames), to mitigate the effect of undersampling. A task-specific optimization framework is used to engineer the optical PSF and optimize the postprocessing parameters to minimize the FRR. The optimized Zernike phase enhanced lens (ZPEL) imager design with one frame yields an improvement of nearly 33% relative to a thin observation module by bounded optics (TOMBO) imager with one frame. With four frames the optimized ZPEL imager achieves an FRR equal to that of the conventional imager without undersampling. Further, the ZPEL imager design using 16 frames yields an FRR that is actually 15% lower than that obtained with the conventional imager without undersampling.
Walke, Mathias; Gademann, Günther
2017-01-01
An optical 3D sensor provides an additional tool for verifying correct patient positioning on a TomoTherapy treatment machine. The patient's position in the actual treatment is compared with the intended position defined in treatment planning. A commercially available optical 3D sensor measures parts of the body surface and estimates the deviation from the desired position without markers. The registration precision of the built-in algorithm and of selected ICP (iterative closest point) algorithms is investigated on surface data of specially designed phantoms, captured by the optical 3D sensor for predefined shifts of the treatment table. A rigid body transform is compared with the actual displacement to check registration reliability within predefined limits. The curvature type of the investigated phantom bodies has a strong influence on the registration result, which is more critical for surfaces of low curvature. We investigated the registration accuracy of the optical 3D sensor for the chosen phantoms and compared the results with selected unconstrained ICP algorithms. Safe registration within the clinical limits is only possible for uniquely shaped surface regions, but error metrics based on surface normals improve translational registration. Large registration errors clearly hint at setup deviations, whereas small values do not guarantee correct positioning. PMID:28163773
Incremental Refinement of Façade Models with Attribute Grammar from 3D Point Clouds
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Staat, C.; Mandtler, L.; Plümer, L.
2016-06-01
Data acquisition using unmanned aerial vehicles (UAVs) has gained increasing attention over the last years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context, formal grammars play an important role in the top-down identification and reconstruction of building objects. Up to now, the available approaches have expected offline data in order to parse an a priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required, so an incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for automatic 3D building reconstruction. The parser enables model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can handle and adapt parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated iteratively using transformation rules. A diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades, given by probability densities as well as architectural patterns, is incorporated. Since we cannot always assume normal distributions, the derivation of location and shape parameters of building objects is based on kernel density estimation (KDE). While the level of detail is continuously improved, geometrical, semantic, and topological consistency is ensured.
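The motivation for KDE over a parametric fit can be sketched briefly. Assuming a synthetic, bimodal sample of a façade parameter (say, window widths drawn from two building styles; the values are illustrative, not from the paper), `scipy.stats.gaussian_kde` captures both modes where a single Gaussian cannot:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Synthetic façade parameter (window width in metres) mixing two styles:
widths = np.concatenate([rng.normal(0.8, 0.05, 200),   # narrow windows
                         rng.normal(1.4, 0.05, 200)])  # wide windows

kde = gaussian_kde(widths)               # nonparametric density estimate

# A single Gaussian fitted to the same data puts its peak between the modes,
# where hardly any real observations lie -- the KDE does not.
mu, sd = widths.mean(), widths.std()
grid = np.linspace(0.5, 1.7, 400)
mode = grid[np.argmax(kde(grid))]
print(f"KDE mode near {mode:.2f} m; single-Gaussian mean at {mu:.2f} m")
```

The KDE's modes sit on the actual clusters of observations, which is what makes it suitable for deriving location and shape parameters without a normality assumption.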
Non-magnetic photospheric bright points in 3D simulations of the solar atmosphere
NASA Astrophysics Data System (ADS)
Calvo, F.; Steiner, O.; Freytag, B.
2016-11-01
Context. Small-scale bright features in the photosphere of the Sun, such as faculae or G-band bright points, appear in connection with small-scale magnetic flux concentrations. Aims: Here we report on a new class of photospheric bright points that are free of magnetic fields. So far, these are visible in numerical simulations only. We explore conditions required for their observational detection. Methods: Numerical radiation (magneto-)hydrodynamic simulations of the near-surface layers of the Sun were carried out. The magnetic field-free simulations show tiny bright points, reminiscent of magnetic bright points, only smaller. A simple toy model for these non-magnetic bright points (nMBPs) was established that serves as a base for the development of an algorithm for their automatic detection. Basic physical properties of 357 detected nMBPs were extracted and statistically evaluated. We produced synthetic intensity maps that mimic observations with various solar telescopes to obtain hints on their detectability. Results: The nMBPs of the simulations show a mean bolometric intensity contrast of approximately 20% with respect to their intergranular surroundings, a size of 60-80 km, and a depression of the optical-depth-unity isosurface at their location by 80-100 km. They are caused by swirling downdrafts that provide, by means of the centripetal force, the pressure gradient necessary for the formation of a funnel of reduced mass density that reaches from the subsurface layers into the photosphere. Similar, frequently occurring funnels that do not reach into the photosphere do not produce bright points. Conclusions: Non-magnetic bright points are the observable manifestation of vertically extending vortices (vortex tubes) in the photosphere. The resolving power of 4-m-class telescopes, such as the DKIST, is needed for their unambiguous detection. The movie associated with Fig. 1 is available at http://www.aanda.org
Majdak, Piotr; Goupell, Matthew J.; Laback, Bernhard
2010-01-01
The ability to localize sound sources in three-dimensional space was tested in humans. In experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE) (darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In experiment 2, subjects were provided sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies. PMID:20139459
A closed-form expression of the positional uncertainty for 3D point clouds.
Bae, Kwang-Ho; Belton, David; Lichti, Derek D
2009-04-01
We present a novel closed-form expression of positional uncertainty measured by a near-monostatic and time-of-flight laser range finder with consideration of its measurement uncertainties. An explicit form of the angular variance of the estimated surface normal vector is also derived. This expression is useful for the precise estimation of the surface normal vector and the outlier detection for finding correspondence in order to register multiple three-dimensional point clouds. Two practical algorithms using these expressions are presented: a method for finding optimal local neighbourhood size which minimizes the variance of the estimated normal vector and a resampling method of point clouds.
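The quantity whose variance the abstract derives in closed form is the PCA-estimated surface normal: the eigenvector of the local covariance matrix belonging to the smallest eigenvalue. A minimal sketch of that baseline estimator on synthetic (assumed, not the paper's) range data:

```python
import numpy as np

def estimate_normal(neighborhood):
    """Surface normal via PCA: eigenvector of the smallest covariance eigenvalue."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    return eigvecs[:, 0]                     # direction of least variance

rng = np.random.default_rng(3)

# Noisy patch of the plane z = 0, mimicking laser range noise.
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       rng.normal(0, 0.01, 200)])
n_hat = estimate_normal(pts)
print(abs(n_hat @ np.array([0.0, 0.0, 1.0])))   # close to 1: normal recovered
```

Repeating this over different neighbourhood sizes and measuring the angular spread of `n_hat` is the empirical counterpart of the closed-form angular variance, and minimizing that variance is exactly the neighbourhood-size selection the abstract proposes.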
Effects of cyclone diameter on performance of 1D3D cyclones: Cut point and slope
Technology Transfer Automated Retrieval System (TEKTRAN)
Cyclones are a commonly used air pollution abatement device for separating particulate matter (PM) from air streams in industrial processes. Several mathematical models have been proposed to predict the cut point of cyclones as cyclone diameter varies. The objective of this research was to determine...
The Cauchy Problem for the 3-D Vlasov-Poisson System with Point Charges
NASA Astrophysics Data System (ADS)
Marchioro, Carlo; Miot, Evelyne; Pulvirenti, Mario
2011-07-01
In this paper we establish global existence and uniqueness of the solution to the three-dimensional Vlasov-Poisson system in the presence of point charges with repulsive interaction. The present analysis extends an analogous two-dimensional result (Caprino and Marchioro in Kinet. Relat. Models 3(2):241-254, 2010).
Observation of Magnetic Reconnection at a 3D Null Point Associated with a Solar Eruption
NASA Astrophysics Data System (ADS)
Sun, J. Q.; Zhang, J.; Yang, K.; Cheng, X.; Ding, M. D.
2016-10-01
Magnetic nulls have long been recognized as special structures that serve as preferential sites for magnetic reconnection (MR). However, direct observational studies of MR at null points are largely lacking. Here, we show observations of MR around a magnetic null associated with an eruption that resulted in an M1.7 flare and a coronal mass ejection. The Geostationary Operational Environmental Satellite X-ray profile of the flare exhibited two peaks, at ∼02:23 UT and ∼02:40 UT on 2012 November 8. Based on the imaging observations, we find that the first and primary X-ray peak originated from MR in the current sheet (CS) underneath the erupting magnetic flux rope (MFR). On the other hand, the second, weaker X-ray peak was caused by MR around a null point located above the pre-eruption MFR. The interaction of the null point and the erupting MFR can be described as a two-step process. During the first step, the erupting and fast-expanding MFR passed through the null point, resulting in a significant displacement of the magnetic field surrounding the null. During the second step, the displaced magnetic field started to move back, resulting in a converging inflow and subsequently in MR around the null. The null-point reconnection is a different process from the current-sheet reconnection in this flare; the latter is the cause of the main peak of the flare, while the former is the cause of the secondary peak and of the conspicuous high-lying cusp structure.
Vectorial point spread function and optical transfer function in oblique plane imaging.
Kim, Jeongmin; Li, Tongcang; Wang, Yuan; Zhang, Xiang
2014-05-05
Oblique plane imaging, using remote focusing with a tilted mirror, enables direct two-dimensional (2D) imaging of any inclined plane of interest in three-dimensional (3D) specimens. It can image real-time dynamics of a living sample that changes rapidly or evolves its structure along arbitrary orientations. It also allows direct observations of any tilted target plane in an object of which orientational information is inaccessible during sample preparation. In this work, we study the optical resolution of this innovative wide-field imaging method. Using the vectorial diffraction theory, we formulate the vectorial point spread function (PSF) of direct oblique plane imaging. The anisotropic lateral resolving power caused by light clipping from the tilted mirror is theoretically analyzed for all oblique angles. We show that the 2D PSF in oblique plane imaging is conceptually different from the inclined 2D slice of the 3D PSF in conventional lateral imaging. Vectorial optical transfer function (OTF) of oblique plane imaging is also calculated by the fast Fourier transform (FFT) method to study effects of oblique angles on frequency responses.
Analytical derivation of the point spread function for pinhole collimators.
Bal, Girish; Acton, Paul D
2006-10-07
The point spread function (PSF) of a pinhole collimator plays an important role in determining the resolution and characterizing the sensitivity of the accepted photons from a given point in the image space. The focus of this paper is to derive an analytical expression for the PSF of two different types of focusing pinhole collimators, based on (1) right-circular double cones and (2) oblique-circular double cones. Conventionally, focusing pinhole collimators used in multi-pinhole SPECT were designed using right-circular double cones, as they were easier to fabricate. In this work, a novel focusing collimator consisting of oblique-circular double cones was designed and its properties were studied in detail with respect to right-circular double-cone based collimators. The main advantage of determining the PSF is that it can be used to accurately model the PSF during the reconstruction, thereby improving the resolution of the reconstructed image. The PSF of the focusing collimators based on oblique-circular cones was found to be almost shift invariant for low- and medium-energy photons (below 200 keV). This property is very advantageous, as algorithms such as slice-by-slice reconstruction can be used for resolution recovery, thereby drastically reducing the reconstruction time. However, the PSF of focusing oblique-circular double cones (FOCDC) for higher-energy photons was found to be asymmetric and hence needs to be modelled more accurately during the reconstruction. On the other hand, the PSF for the right-circular cone based collimators was found to be asymmetric for all energy levels. However, due to the smaller acceptance angle used, the number of penetration photons was found to be far less than that observed for oblique-circular cones. This results in a smaller PSF, making right-circular cone based collimators preferable for high-resolution small-animal imaging, especially where very small pinhole diameters are used. The analytically derived
NASA Astrophysics Data System (ADS)
Chiabrando, F.; Sammartano, G.; Spanò, A.
2016-06-01
This paper retraces research activities and applications of 3D survey techniques and Building Information Modelling (BIM) in the Cultural Heritage environment. It describes the diffusion in recent years of the as-built BIM approach in heritage asset management, the so-called Built Heritage Information Modelling/Management (BHIMM or HBIM), which is nowadays an important and sustainable perspective for the documentation and administration of historic buildings and structures. The work focuses on documentation derived from 3D survey techniques, which can be understood as a significant and unavoidable knowledge base for BIM conception and modelling, in the perspective of coherent and complete management and valorisation of CH. It examines the potential offered by integrated 3D survey techniques to acquire, productively and quite easily, many kinds of 3D information, not only geometrical but also radiometric attributes, helping the recognition, interpretation, and characterization of the state of conservation and degradation of architectural elements. From these data, they provide increasingly descriptive models corresponding to the geometrical complexity of buildings or aggregates in the well-known 5D (3D + time and cost dimensions). Point clouds derived from 3D survey acquisition (aerial and terrestrial photogrammetry, LiDAR, and their integration) are reality-based models that can be used in a semi-automatic way to manage, interpret, and moderately simplify the geometrical shapes of historical buildings, which are, as is well known, examples of non-regular and complex geometry, in contrast to modern constructions with simple and regular forms. In the paper, some of these issues are addressed and analyzed through experiences regarding the creation and management of HBIM projects on historical heritage at different scales, using different platforms and various workflows. The paper focuses on LiDAR data handling with the aim to manage and extract geometrical information; on
Buoyancy effects on the 3D MHD stagnation-point flow of a Newtonian fluid
NASA Astrophysics Data System (ADS)
Borrelli, A.; Giantesio, G.; Patria, M. C.; Roşca, N. C.; Roşca, A. V.; Pop, I.
2017-02-01
This work examines the steady three-dimensional stagnation-point flow of an electrically conducting Newtonian fluid in the presence of a uniform external magnetic field H0 under the Oberbeck-Boussinesq approximation. We neglect the induced magnetic field and examine the three possible directions of H0 which coincide with the directions of the axes. In all cases it is shown that the governing nonlinear partial differential equations admit similarity solutions. We find that the flow has to satisfy an ordinary differential problem whose solution depends on the Hartmann number M, the buoyancy parameter λ and the Prandtl number Pr. The skin-friction components along the axes are computed and the stagnation-point is classified. The numerical integration shows the existence of dual solutions and the occurrence of the reverse flow for some values of the parameters.
A 3D clustering approach for point clouds to detect and quantify changes at a rock glacier front
NASA Astrophysics Data System (ADS)
Micheletti, Natan; Tonini, Marj; Lane, Stuart N.
2016-04-01
Terrestrial Laser Scanners (TLS) are extensively used in geomorphology to remotely sense landforms and surfaces of any type and to derive digital elevation models (DEMs). Modern devices are able to collect many millions of points, so working on the resulting dataset is often troublesome in terms of computational effort. Indeed, it is not unusual for raw point clouds to be filtered prior to DEM creation, so that only a subset of points is retained and the interpolation process becomes less of a burden. Whilst this procedure is in many cases necessary, it implies a considerable loss of valuable information. First, and even without eliminating points, the common interpolation of points to a regular grid causes a loss of potentially useful detail. Second, it inevitably causes the transition from 3D information to only 2.5D data, where each (x,y) pair must have a unique z-value. Vector-based DEMs (e.g. triangulated irregular networks) partially mitigate these issues, but still require a set of parameters to be set and impose a considerable burden in terms of calculation and storage. For these reasons, being able to perform geomorphological research directly on point clouds would be profitable. Here, we propose an approach to identify erosion and deposition patterns on a very active rock glacier front in the Swiss Alps in order to monitor sediment dynamics. The general aim is to set up a semi-automatic method to isolate mass movements using 3D feature identification directly from LiDAR data. An ultra-long-range LiDAR RIEGL VZ-6000 scanner was employed to acquire point clouds during three consecutive summers. In order to isolate single clusters of erosion and deposition we applied Density-Based Spatial Clustering of Applications with Noise (DBSCAN), previously successfully employed by Tonini and Abellan (2014) in a similar case for rockfall detection. DBSCAN requires two input parameters, strongly influencing the number, shape and size of the detected clusters: the minimum number of
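DBSCAN itself is compact enough to sketch directly on raw 3D points, which is the appeal of the cluster-on-the-point-cloud approach described above. The version below uses `scipy.spatial.cKDTree` for the eps-neighbourhood queries; the two parameters (`eps`, `min_pts`) are the ones the abstract notes strongly influence the clusters, and the toy blobs stand in for real TLS change patches:

```python
import numpy as np
from scipy.spatial import cKDTree

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one label per point, -1 meaning noise."""
    tree = cKDTree(points)
    neighborhoods = tree.query_ball_point(points, r=eps)
    labels = np.full(len(points), -1)
    cluster = 0
    for i in range(len(points)):
        if labels[i] != -1 or len(neighborhoods[i]) < min_pts:
            continue                      # already assigned, or not a core point
        labels[i] = cluster               # start a new cluster from core point i
        stack = list(neighborhoods[i])
        while stack:                      # expand through density-connected points
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighborhoods[j]) >= min_pts:
                    stack.extend(neighborhoods[j])
        cluster += 1
    return labels

rng = np.random.default_rng(4)
deposit = rng.normal([0, 0, 0], 0.3, (40, 3))      # one "deposition" patch
erosion = rng.normal([5, 5, 0], 0.3, (40, 3))      # one "erosion" patch
stray = np.array([[20.0, 20.0, 20.0]])             # isolated noise point
labels = dbscan(np.vstack([deposit, erosion, stray]), eps=1.0, min_pts=5)
```

The two dense patches come out as separate clusters and the isolated point is flagged as noise; in the study, each such cluster is a candidate mass-movement event whose volume can then be quantified.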
WE-F-16A-02: Design, Fabrication, and Validation of a 3D-Printed Proton Filter for Range Spreading
Remmes, N; Courneyea, L; Corner, S; Beltran, C; Kemp, B; Kruse, J; Herman, M; Stoker, J
2014-06-15
Purpose: To design, fabricate and test a 3D-printed filter for proton range spreading in scanned proton beams. The narrow Bragg peak in lower-energy synchrotron-based scanned proton beams can result in longer treatment times for shallow targets due to energy switching time and plan quality degradation due to minimum monitor unit limitations. A filter with variable thicknesses patterned on the same scale as the beam's lateral spot size will widen the Bragg peak. Methods: The filter consists of pyramids dimensioned to have a Gaussian distribution in thickness. The pyramids are 2.5mm wide at the base, 0.6 mm wide at the peak, 5mm tall, and are repeated in a 2.5mm pseudo-hexagonal lattice. Monte Carlo simulations of the filter in a proton beam were run using TOPAS to assess the change in depth profiles and lateral beam profiles. The prototypes were constrained to a 2.5cm diameter disk to allow for micro-CT imaging of promising prototypes. Three different 3D printers were tested. Depth-doses with and without the prototype filter were then measured in a ~70MeV proton beam using a multilayer ion chamber. Results: The simulation results were consistent with design expectations. Prototypes printed on one printer were clearly unacceptable on visual inspection. Prototypes on a second printer looked acceptable, but the micro-CT image showed unacceptable voids within the pyramids. Prototypes from the third printer appeared acceptable visually and on micro-CT imaging. Depth dose scans using the prototype from the third printer were consistent with simulation results. Bragg peak width increased by about 3x. Conclusions: A prototype 3D printer pyramid filter for range spreading was successfully designed, fabricated and tested. The filter has greater design flexibility and lower prototyping and production costs compared to traditional ridge filters. Printer and material selection played a large role in the successful development of the filter.
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu
2017-01-01
The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess CT and PET images, respectively. Next, a new automated trunk-slice extraction method is presented for extracting feature point clouds. Finally, multithread Iterative Closest Point is adopted to derive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
3-D earthquake surface displacements from differencing pre- and post-event LiDAR point clouds
NASA Astrophysics Data System (ADS)
Krishnan, A. K.; Nissen, E.; Arrowsmith, R.; Saripalli, S.
2012-12-01
The explosion in aerial LiDAR surveying along active faults across the western United States and elsewhere provides a high-resolution topographic baseline against which to compare repeat LiDAR datasets collected after future earthquakes. We present a new method for determining 3-D coseismic surface displacements and rotations by differencing pre- and post-earthquake LiDAR point clouds using an adaptation of the Iterative Closest Point (ICP) algorithm, a point set registration technique widely used in medical imaging, computer vision and graphics. There is no need for any gridding or smoothing of the LiDAR data and the method works well even with large mismatches in the density of the two point clouds. To explore the method's performance, we simulate pre- and post-event point clouds using real ("B4") LiDAR data on the southern San Andreas Fault perturbed with displacements of known magnitude. For input point clouds with ~2 points per square meter, we are able to reproduce displacements with a 50 m grid spacing and with horizontal and vertical accuracies of ~20 cm and ~4 cm. In the future, finer grids and improved precisions should be possible with higher shot densities and better survey geo-referencing. By capturing near-fault deformation in 3-D, LiDAR differencing with ICP will complement satellite-based techniques such as InSAR which map only certain components of the surface deformation and which often break down close to surface faulting or in areas of dense vegetation. It will be especially useful for mapping shallow fault slip and rupture zone deformation, helping inform paleoseismic studies and better constrain fault zone rheology. Because ICP can image rotations directly, the technique will also help resolve the detailed kinematics of distributed zones of faulting where block rotations may be common.
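The core of the differencing approach, rigid registration by Iterative Closest Point, can be sketched in a few lines: alternate nearest-neighbour matching (here via `scipy.spatial.cKDTree`) with the closed-form Kabsch/SVD solution for the best rigid transform. The synthetic clouds and the imposed displacement below are illustrative stand-ins for pre- and post-event LiDAR windows, not the B4 data:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form (Kabsch) least-squares R, t such that dst ~ R @ src + t."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iterations=25):
    """Align src onto dst; returns the accumulated rigid transform."""
    tree = cKDTree(dst)
    current = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)            # nearest-neighbour correspondences
        R, t = best_rigid_transform(current, dst[idx])
        current = current @ R.T + t
    return best_rigid_transform(src, current)   # total src -> aligned transform

rng = np.random.default_rng(5)
pre = rng.uniform(0, 10, (500, 3))              # "pre-event" ground surface sample
shift = np.array([0.3, -0.2, 0.1])              # coseismic displacement to recover
post = pre + shift                              # "post-event" cloud (same points, moved)

R_est, t_est = icp(post, pre)                   # aligning post onto pre yields t = -shift
```

With real data the two clouds share no identical points and densities differ, so in the study this estimate is computed per grid window and the recovered transforms form the 3-D displacement (and rotation) field.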
NASA Astrophysics Data System (ADS)
Steer, Philippe; Lague, Dimitri; Gourdon, Aurélie; Croissant, Thomas; Crave, Alain
2016-04-01
The grain-scale morphology of river sediments and their size distribution are important factors controlling the efficiency of fluvial erosion and transport. In turn, constraining the spatial evolution of these two metrics offers deep insights into the dynamics of river erosion and sediment transport from hillslopes to the sea. However, the size distribution of river sediments is generally assessed using statistically biased field measurements, and determining the grain-scale shape of river sediments remains a real challenge in geomorphology. Here we determine, with new methodological approaches based on the segmentation and geomorphological fitting of 3D point cloud datasets, the size distribution and grain-scale shape of sediments located in river environments. Point cloud segmentation is performed using either machine-learning algorithms or geometrical criteria, such as local plane fitting or curvature analysis. Once the grains are individualized into several sub-clouds, each grain-scale morphology is determined using a 3D geometrical fitting algorithm applied to the sub-cloud. Although different geometrical models can be conceived and tested, only ellipsoidal models were used in this study. A result-checking phase is then performed to remove grains whose best-fitting model has a low level of confidence. The main benefits of this automatic method are that it provides 1) an unbiased estimate of grain-size distribution over a large range of scales, from centimeters to tens of meters; 2) access to a very large number of data points, limited only by the number of grains in the point-cloud dataset; and 3) access to the 3D morphology of grains, in turn allowing the development of new metrics characterizing the size and shape of grains. The main limit of this method is that it can only detect grains with a characteristic size greater than the resolution of the point cloud. This new 3D granulometric method is then applied to river terraces both in the Poerua catchment in New Zealand and
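The ellipsoid-fitting step can be illustrated with the simplest member of the family: an axis-aligned ellipsoid, whose squared semi-axes enter the surface equation linearly and can be recovered by ordinary least squares. (Real grains require a general rotated ellipsoid fit; the axis-aligned case below is a deliberately reduced sketch on synthetic surface points.)

```python
import numpy as np

def fit_axis_aligned_ellipsoid(pts):
    """Least-squares fit of (x/a)^2 + (y/b)^2 + (z/c)^2 = 1, linear in 1/a^2 etc."""
    M = pts ** 2                                   # columns: x^2, y^2, z^2
    coeffs, *_ = np.linalg.lstsq(M, np.ones(len(pts)), rcond=None)
    return 1.0 / np.sqrt(coeffs)                   # semi-axes a, b, c

# Synthetic "grain": points sampled on an ellipsoid with semi-axes 30, 20, 10 (mm).
theta = np.linspace(0.1, np.pi - 0.1, 20)
phi = np.linspace(0.0, 2 * np.pi, 40, endpoint=False)
T, P = np.meshgrid(theta, phi)
pts = np.column_stack([30 * np.sin(T).ravel() * np.cos(P).ravel(),
                       20 * np.sin(T).ravel() * np.sin(P).ravel(),
                       10 * np.cos(T).ravel()])

axes = fit_axis_aligned_ellipsoid(pts)
print(axes)   # -> approximately [30, 20, 10]
```

The fitted semi-axes are exactly the grain-size and grain-shape metrics (long, intermediate, and short axes) the abstract refers to; the confidence check then discards sub-clouds whose residuals to the fitted model are large.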
NASA Astrophysics Data System (ADS)
Hild, Michael; Yoshida, Kazunobu; Hashimoto, Motonobu
2003-03-01
A method for recognizing faces in relatively unconstrained environments, such as offices, is described. It can recognize faces occurring over an extended range of orientations and distances relative to the camera. As the pattern recognition mechanism, a bank of small neural networks of the multilayer perceptron type is used, where each perceptron has the task of recognizing only a single person's face. The perceptrons are trained with a set of nine face images representing the nine main facial orientations of the person to be identified, and a set of face images from various other persons. The center of the neck is determined as the reference point for face position unification. Geometric normalization and reference point determination utilize 3-D data point measurements obtained with a stereo camera. The system achieves a recognition rate of about 95%.
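As a rough illustration of the bank-of-networks idea (simplified here to single logistic units rather than the paper's multilayer perceptrons, with assumed training details), each unit scores "this person vs. everyone else" and the bank reports the best-scoring identity:

```python
import numpy as np

class PerceptronBank:
    """One small logistic unit per person (a simplified stand-in for the
    paper's per-person multilayer perceptrons)."""
    def __init__(self, n_features, names, lr=0.5, epochs=500):
        self.names = names
        self.W = np.zeros((len(names), n_features))
        self.b = np.zeros(len(names))
        self.lr, self.epochs = lr, epochs

    def fit(self, X, labels):
        # train each unit one-vs-rest by gradient descent on logistic loss
        for k, name in enumerate(self.names):
            y = (np.asarray(labels) == name).astype(float)
            for _ in range(self.epochs):
                p = 1.0 / (1.0 + np.exp(-(X @ self.W[k] + self.b[k])))
                grad = p - y
                self.W[k] -= self.lr * X.T @ grad / len(y)
                self.b[k] -= self.lr * grad.mean()

    def predict(self, x):
        # identity of the unit with the highest raw score
        return self.names[int(np.argmax(self.W @ x + self.b))]
```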
NASA Astrophysics Data System (ADS)
Dittrich, André; Weinmann, Martin; Hinz, Stefan
2017-04-01
In photogrammetry, remote sensing, computer vision and robotics, the automatic analysis of 3D point cloud data is a topic of major interest. This task often relies on geometric features, among which those derived from the eigenvalues of the 3D structure tensor (e.g. the three dimensionality features of linearity, planarity and sphericity) have proven particularly descriptive and are therefore commonly used for classification tasks. Although these geometric features are by now considered standard, very little attention has been paid to their accuracy and robustness. In this paper, we hence focus on the influence of discretization and noise on the most commonly used geometric features. More specifically, we investigate the accuracy and robustness of the eigenvalues of the 3D structure tensor and of the features derived from these eigenvalues. We provide both analytical and numerical considerations which clearly reveal that certain features are more susceptible to discretization and noise, whereas others are more robust.
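The dimensionality features named above have standard definitions in terms of the sorted eigenvalues λ1 ≥ λ2 ≥ λ3 of the local structure tensor; a minimal sketch (the λ1-normalization shown is one common convention):

```python
import numpy as np

def dimensionality_features(neighborhood):
    """Eigenvalue-based features of the 3D structure tensor of a local
    point neighborhood (N x 3 array), with lambda1 >= lambda2 >= lambda3."""
    c = neighborhood - neighborhood.mean(axis=0)
    cov = c.T @ c / len(c)                      # 3D structure tensor
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return {
        "linearity":  (l1 - l2) / l1,   # close to 1 on linear structures
        "planarity":  (l2 - l3) / l1,   # close to 1 on planar structures
        "sphericity": l3 / l1,          # close to 1 on volumetric clutter
    }
```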
Lee, Myung W.
2005-01-01
In order to assess the resource potential of gas hydrate deposits in the North Slope of Alaska, 3-D seismic and well data at Milne Point were obtained from BP Exploration (Alaska), Inc. The well-log analysis has three primary purposes: (1) estimate gas hydrate or gas saturations from the well logs; (2) predict P-wave velocity where there is no measured P-wave velocity, in order to generate synthetic seismograms; and (3) edit P-wave velocities where degraded borehole conditions, such as washouts, significantly affected the P-wave measurement. Edited/predicted P-wave velocities were needed to map the gas-hydrate-bearing horizons in the complexly faulted upper part of the 3-D seismic volume. The gas-hydrate/gas saturations estimated from the well logs were related to seismic attributes in order to map the regional distribution of gas hydrate inside the 3-D seismic grid. The P-wave velocities were predicted using the modified Biot-Gassmann theory, herein referred to as BGTL, with gas-hydrate saturations estimated from the resistivity logs, porosity, and clay volume content. The effect of gas on velocities was modeled using the classical Biot-Gassmann theory (BGT) with parameters estimated from BGTL.
NASA Astrophysics Data System (ADS)
Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria
2015-04-01
Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete in geometric quality with TLS point clouds, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the point clouds generated by these two technologies in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). To avoid difficulties of access and to guarantee safe survey conditions, this fundamental step in all geological/geotechnical studies applied to the extractive industry and engineering works has to be replaced by a more expeditious and reliable methodology. Such a methodology will answer, in a clearer way, the needs of rock mass evaluation by mapping the structures present, which will considerably reduce the associated risks (investment, structure dimensioning, safety, etc.). A case study of a dolerite outcrop located in the center of Portugal (the dolerite outcrop is situated in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded into Jurassic sandstones) is used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrops. Although comparable to the manually extracted parameters, their quality is inferior to that of the parameters extracted from the TLS point cloud.
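Once a discontinuity patch is segmented from either point cloud, its attitude follows from a least-squares plane fit; a minimal sketch, assuming the common convention that the dip direction is the azimuth of the horizontal projection of the upward plane normal, with north along +y:

```python
import numpy as np

def plane_attitude(points):
    """Fit a plane to a discontinuity patch (least squares via SVD) and
    return its orientation as (dip direction, dip angle) in degrees,
    the quantities plotted on a Schmidt net."""
    c = points - points.mean(axis=0)
    # normal = right singular vector of the smallest singular value
    normal = np.linalg.svd(c, full_matrices=False)[2][-1]
    if normal[2] < 0:                       # make the normal point upward
        normal = -normal
    nx, ny, nz = normal
    dip = np.degrees(np.arccos(np.clip(nz, -1.0, 1.0)))
    dip_dir = np.degrees(np.arctan2(nx, ny)) % 360.0   # azimuth, 0 = +y (north)
    return dip_dir, dip
```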
Registration of 3D point clouds and meshes: a survey from rigid to nonrigid.
Tam, Gary K L; Cheng, Zhi-Quan; Lai, Yu-Kun; Langbein, Frank C; Liu, Yonghuai; Marshall, David; Martin, Ralph R; Sun, Xian-Fang; Rosin, Paul L
2013-07-01
Three-dimensional surface registration transforms multiple three-dimensional data sets into the same coordinate system so as to align overlapping components of these sets. Recent surveys have covered different aspects of either rigid or nonrigid registration, but seldom discuss them as a whole. Our study serves two purposes: 1) to give a comprehensive survey of both types of registration, focusing on three-dimensional point clouds and meshes, and 2) to provide a better understanding of registration from the perspective of data fitting. Registration is closely related to data fitting, comprising three core interwoven components: model selection, correspondences and constraints, and optimization. Study of these components 1) provides a basis for comparing the novelties of different techniques, 2) reveals the similarity of rigid and nonrigid registration in terms of problem representations, and 3) shows how overfitting arises in nonrigid registration and the reasons for the increasing interest in intrinsic techniques. We further summarize some practical issues of registration, including initialization and evaluation, and discuss some of our own observations, insights and foreseeable research trends.
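For the rigid case with known correspondences, the optimization component has a closed-form solution via SVD (the Kabsch/Procrustes solution); a minimal sketch:

```python
import numpy as np

def rigid_align(P, Q):
    """Best rigid transform (R, t) mapping corresponding point set P
    onto Q in the least-squares sense, solved in closed form via SVD
    (the optimization core of rigid registration)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

In full ICP, this closed-form step alternates with re-estimating correspondences by nearest-neighbor search.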
3D Elastic Solutions for Laterally Loaded Discs: Generalised Brazilian and Point Load Tests
NASA Astrophysics Data System (ADS)
Serati, Mehdi; Alehossein, Habib; Williams, David J.
2014-07-01
This paper investigates the application of a double Fourier series technique to the construction of an elastic stress field in a cylindrical bar subject to lateral boundary loads. The lateral loads, including the constant load boundary conditions, are represented by two Fourier series: one on the perimeter of the circular section (r0, θ) and the other on the longitudinal curved surface parallel to the bar axis (z). The technique invokes acceptable potential functions of the Papkovich-Neuber displacement field, satisfying the governing partial differential equations, to assign appropriate odd and even trigonometric Fourier terms in cylindrical coordinates (r, θ, z). The generic solution decomposes the problem of interest into a state of stress caused by two independent boundary conditions along the z axis and θ-polar angle, both superimposed on a solution for which these potentials are the product of the trigonometric terms of the independent variables (θ, z). Constants appearing in the resultant second-order partial differential equations are determined from the generally mixed (tractions and/or displacements) boundary conditions. While the solutions are satisfied exactly at the ends of an infinite bar, they are satisfied weakly, on average, in the light of Saint Venant's approximation at the two ends of a finite bar. The application of the proposed analysis is verified against available elastic solutions for axisymmetric and non-axisymmetric engineering problems such as the indirect Brazilian Tensile Strength and Point Load Strength tests.
NASA Astrophysics Data System (ADS)
Lague, D.
2014-12-01
High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SfM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in a more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of the uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS and SfM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates directly from point clouds) and the interaction of vegetation, hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.
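For contrast with the raster approach, the simplest form of point cloud comparison is a closest-point distance, sketched below; note this is not the M3C2 algorithm, which instead measures distance along locally averaged normals within a cylinder, making it far less sensitive to surface roughness and point spacing:

```python
import numpy as np

def cloud_to_cloud_distance(A, B):
    """Naive closest-point distance from each point of cloud A (N x 3)
    to cloud B (M x 3), by brute-force pairwise search. Suitable only
    for small clouds; real tools use spatial indexing (k-d trees)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1))
```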
NASA Astrophysics Data System (ADS)
Abellan, A.; Carrea, D.; Jaboyedoff, M.; Riquelme, A.; Tomas, R.; Royan, M. J.; Vilaplana, J. M.; Gauvin, N.
2014-12-01
The acquisition of dense terrain information using well-established 3D techniques (e.g. LiDAR, photogrammetry) and the use of new mobile platforms (e.g. Unmanned Aerial Vehicles), together with increasingly efficient post-processing workflows for image treatment (e.g. Structure from Motion), are opening up new possibilities for analysing, modeling and predicting rock slope failures. Applications span different scales, ranging from the monitoring of small changes at an unprecedented level of detail (e.g. sub-millimeter-scale deformation under lab-scale conditions) to the detection of slope deformation at regional scale. In this communication we will show the main accomplishments of the Swiss National Foundation project "Characterizing and analysing 3D temporal slope evolution", carried out at the Risk Analysis group (Univ. of Lausanne) in close collaboration with the RISKNAT and INTERES groups (Univ. of Barcelona and Univ. of Alicante, respectively). We have recently developed a series of innovative approaches for rock slope analysis using 3D point clouds, including semi-automatic methodologies for the identification and extraction of rock-slope features such as discontinuities, material type, rockfall occurrence and deformation. Moreover, we have improved our knowledge of progressive rupture characterization thanks to several algorithms, including the computation of 3D deformation, the use of filtering techniques on permanently installed TLS, the use of rock slope failure analogies at different scales (laboratory simulations, monitoring at glacier fronts, etc.), and the modelling of the influence of external forces, such as precipitation, on the acceleration of the deformation rate. We have also been interested in the analysis of rock slope deformation prior to the occurrence of fragmental rockfalls and the interaction of this deformation with the spatial location of future events. In spite of these recent advances
Dorninger, Peter; Pfeifer, Norbert
2008-01-01
Three dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are either based on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling require, generally, a high degree of human interaction and most automated approaches described in literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931
A quantitative study of 3D-scanning frequency and Δd of tracking points on the tooth surface
Li, Hong; Lyu, Peijun; Sun, Yuchun; Wang, Yong; Liang, Xiaoyue
2015-01-01
Micro-movement of human jaws in the resting state might influence the accuracy of direct three-dimensional (3D) measurement. Providing a reference for the sampling frequency settings of intraoral scanning systems to overcome this influence is important. In this study, we measured micro-movement, or change in distance (Δd), as the change in position of a single tracking point from one sampling time point to another in five human subjects. Δd of tracking points on incisors at 7 sampling frequencies was judged against the clinical accuracy requirement to select proper sampling frequency settings. A curve was then fitted between the Δd median and the sampling frequency to predict the trend of Δd with increasing frequency. The differences in Δd among the subjects, and between upper and lower incisor feature points of the same subject, were analyzed by a non-parametric test (α = 0.05). Significant differences were noted among subjects and between the upper and lower jaws of the same subject (P < 0.01). Overall, Δd decreased with increasing frequency. When the frequency was 60 Hz, Δd nearly reached the clinical accuracy requirement. Frequencies higher than 60 Hz did not significantly decrease Δd further. PMID:26400112
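The abstract does not give the form of the fitted curve; assuming a power-law trend Δd = a·f^b purely for illustration, such a curve can be fitted by linear least squares in log-log space:

```python
import numpy as np

def fit_power_law(freq, dd_median):
    """Fit dd = a * f**b by linear least squares in log-log space.
    The power-law form is an assumption for illustration; the paper's
    actual curve equation is not stated in the abstract."""
    b, log_a = np.polyfit(np.log(freq), np.log(dd_median), 1)
    return np.exp(log_a), b
```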
Combination of TLS Point Clouds and 3D Data from Kinect v2 Sensor to Complete Indoor Models
NASA Astrophysics Data System (ADS)
Lachat, E.; Landes, T.; Grussenmeyer, P.
2016-06-01
The combination of data coming from multiple sensors is increasingly applied in remote sensing (multi-sensor imagery), but also in cultural heritage and robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide several benefits, for example a gain in time. The paper aims to analyze whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.
NASA Astrophysics Data System (ADS)
Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.
2016-06-01
We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
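In stochastic point-based rendering, opacity is typically governed by how many points are emitted per unit of projected surface area. The sketch below solves a standard coverage relation α = 1 − (1 − p)^k for the required point count; the authors' exact opacity formula may differ, so treat this as an assumed illustration of the idea:

```python
import numpy as np

def points_for_opacity(alpha, pixel_area, point_area, n_ensembles):
    """Number of points to emit per surface pixel so that, averaged over
    n_ensembles independent stochastic renderings, the expected pixel
    coverage equals the target opacity alpha (coverage-probability
    sketch; not necessarily the paper's formula)."""
    p = point_area / pixel_area          # one point covers the pixel w.p. p
    k = np.log(1.0 - alpha) / np.log(1.0 - p)   # alpha = 1 - (1 - p)**k
    return int(np.ceil(k * n_ensembles))
```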
NASA Astrophysics Data System (ADS)
Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.
2016-02-01
We revisit, both numerically and analytically, the finite-time blowup of the infinite-energy solution of 3D Euler equations of stagnation-point-type introduced by Gibbon et al. (1999). By employing the method of mapping to regular systems, presented in Bustamante (2011) and extended to the symmetry-plane case by Mulungye et al. (2015), we establish a curious property of this solution that was not observed in early studies: before but near singularity time, the blowup goes from a fast transient to a slower regime that is well resolved spectrally, even at mid-resolutions of $512^2.$ This late-time regime has an atypical spectrum: it is Gaussian rather than exponential in the wavenumbers. The analyticity-strip width decays to zero in a finite time, albeit so slowly that it remains well above the collocation-point scale for all simulation times $t < T^* - 10^{-9000}$, where $T^*$ is the singularity time. Reaching such a proximity to singularity time is not possible in the original temporal variable, because floating point double precision ($\\approx 10^{-16}$) creates a `machine-epsilon' barrier. Due to this limitation on the \\emph{original} independent variable, the mapped variables now provide an improved assessment of the relevant blowup quantities, crucially with acceptable accuracy at an unprecedented closeness to the singularity time: $T^*- t \\approx 10^{-140}.$
Point spread function of the optical needle super-oscillatory lens
Roy, Tapashree; Rogers, Edward T. F.; Yuan, Guanghui; Zheludev, Nikolay I.
2014-06-09
Super-oscillatory optical lenses are known to achieve sub-wavelength focusing. In this paper, we analyse the imaging capabilities of a super-oscillatory lens by studying its point spread function. We experimentally demonstrate that a super-oscillatory lens can generate a point spread function 24% smaller than that dictated by the diffraction limit and has an effective numerical aperture of 1.31 in air. The object-image linear displacement property of these lenses is also investigated.
NASA Astrophysics Data System (ADS)
Aijazi, A. K.; Malaterre, L.; Tazir, M. L.; Trassoudaine, L.; Checchin, P.
2016-06-01
This work presents a new method that automatically detects and analyzes surface defects such as corrosion spots of different shapes and sizes, on large ship hulls. In the proposed method several scans from different positions and viewing angles around the ship are registered together to form a complete 3D point cloud. The R, G, B values associated with each scan, obtained with the help of an integrated camera are converted into HSV space to separate out the illumination invariant color component from the intensity. Using this color component, different surface defects such as corrosion spots of different shapes and sizes are automatically detected, within a selected zone, using two different methods depending upon the level of corrosion/defects. The first method relies on a histogram based distribution whereas the second on adaptive thresholds. The detected corrosion spots are then analyzed and quantified to help better plan and estimate the cost of repair and maintenance. Results are evaluated on real data using different standard evaluation metrics to demonstrate the efficacy as well as the technical strength of the proposed method.
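As an illustrative sketch of the color-based detection step, per-point R, G, B values can be converted to HSV and thresholded on the illumination-invariant channels; the fixed hue range and saturation minimum below are hypothetical placeholders, whereas the paper derives thresholds from histograms or adaptively:

```python
import numpy as np

def detect_corrosion(rgb, hue_range=(0.0, 0.08), sat_min=0.4):
    """Flag corrosion-like points by color. rgb is an N x 3 array of
    floats in [0, 1]; returns a boolean mask. Thresholds are
    illustrative assumptions, not the paper's values."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    v = rgb.max(axis=1)
    c = v - rgb.min(axis=1)                       # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)
    # hue in turns, [0, 1); achromatic points (c == 0) get hue 0
    hp = np.where(c == 0, 0.0,
         np.where(v == r, (g - b) / np.maximum(c, 1e-12),
         np.where(v == g, 2.0 + (b - r) / np.maximum(c, 1e-12),
                          4.0 + (r - g) / np.maximum(c, 1e-12))))
    h = (hp / 6.0) % 1.0
    return (h >= hue_range[0]) & (h <= hue_range[1]) & (s >= sat_min)
```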
Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul
2014-01-01
Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment by superimposition became possible. 4-point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find factors influencing the superimposition error of cephalometric landmarks by the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4-point plane orientation system may produce a reorientation error that varies with the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou
2014-02-01
fixed angles to estimate crown projections, and (2) different regular volume formulas to simulate crown volume according to tree crown shape. Based on high-resolution 3D LiDAR point cloud data of individual trees, tree crown structure was reconstructed rapidly and with high accuracy, and the crown projection and volume of individual trees were extracted by this automatic, non-contact method, which can provide a reference for tree crown structure studies and is worth popularizing in the field of precision forestry.
An OpenGL-based Interface to 3D PowerPoint-like Presentations of OpenGL Projects
NASA Astrophysics Data System (ADS)
Mokhov, Serguei A.; Song, Miao
We present a multimedia 3D interface to PowerPoint-like presentations in OpenGL. Presentations of this kind are useful for demonstrating projects or conference talks with the results of 3D animation, effects, and other content alongside the presentation 'in situ', instead of switching between regular presentation software and the demo and back: the demo and the presentation can be one and the same, embedded together.
NASA Astrophysics Data System (ADS)
Meulien Ohlmann, Odile
2013-02-01
Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading through all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject? For whom?
Hubble Space Telescope Faint Object Camera calculated point-spread functions.
Lyon, R G; Dorband, J E; Hollis, J M
1997-03-10
A set of observed noisy Hubble Space Telescope Faint Object Camera point-spread functions is used to recover the combined Hubble and Faint Object Camera wave-front error. The low-spatial-frequency wave-front error is parameterized in terms of a set of 32 annular Zernike polynomials. The midlevel and higher spatial frequencies are parameterized in terms of a set of 891 polar-Fourier polynomials. The parameterized wave-front error is used to generate accurate calculated point-spread functions, both pre- and post-COSTAR (corrective optics space telescope axial replacement), suitable for image restoration at arbitrary wavelengths. We describe the phase-retrieval-based recovery process and the phase parameterization. Resultant calculated precorrection and postcorrection point-spread functions are shown along with an estimate of both pre- and post-COSTAR spherical aberration.
Scattering and the Point Spread Function of the New Generation Space Telescope
NASA Technical Reports Server (NTRS)
Schreur, Julian J.
1996-01-01
Preliminary design work on the New Generation Space Telescope (NGST) is currently under way. This telescope is envisioned as a lightweight, deployable Cassegrain reflector with an aperture of 8 meters and an effective focal length of 80 meters. It is to be folded into a small-diameter package for launch by an Atlas booster, and unfolded in orbit. The primary is to consist of an octagon with a hole at the center, and with eight segments arranged in a flower petal configuration about the octagon. The corners of the petal-shaped segments are to be trimmed so that the package will fit atop the Atlas booster. This mirror, along with its secondary, will focus the light from a point source into an image which is spread from a point by diffraction effects, figure errors, and scattering of light from the surface. The distribution of light in the image of a point source is called a point spread function (PSF). The obstruction of the incident light by the secondary mirror and its support structure, the trimmed corners of the petals, and the grooves between the segments all cause the diffraction pattern characterizing an ideal point spread function to be changed, with the trimmed corners causing the rings of the Airy pattern to become broken up, and the linear grooves causing diffraction spikes running radially away from the central spot, or Airy disk. Any figure errors the mirror segments may have, or any errors in aligning the petals with the central octagon, will also spread the light out from the ideal point spread function. A point spread function for a mirror the size of the NGST and an incident wavelength of 900 nm is considered. Most of the light is confined in a circle with a diameter of 0.05 arc seconds. The ring pattern ranges in intensity from 10^-2 near the center to 10^-6 near the edge of the plotted field, and can be clearly discerned in a log plot of the intensity. The total fraction of the light scattered from this point spread function is called
Roels, Joris; Aelterman, Jan; De Vylder, Jonas; Hiep Luong; Saeys, Yvan; Philips, Wilfried
2016-08-01
Microscopy is one of the most essential imaging techniques in life sciences. High-quality images are required in order to solve (potentially life-saving) biomedical research problems. Many microscopy techniques do not achieve sufficient resolution for these purposes, being limited by physical diffraction and hardware deficiencies. Electron microscopy addresses optical diffraction by measuring emitted or transmitted electrons instead of photons, yielding nanometer resolution. Despite pushing back the diffraction limit, blur should still be taken into account because of practical hardware imperfections and remaining electron diffraction. Deconvolution algorithms can remove some of the blur in post-processing but they depend on knowledge of the point-spread function (PSF) and should accurately regularize noise. Any errors in the estimated PSF or noise model will reduce their effectiveness. This paper proposes a new procedure to estimate the lateral component of the point spread function of a 3D scanning electron microscope more accurately. We also propose a Bayesian maximum a posteriori deconvolution algorithm with a non-local image prior which employs this PSF estimate and previously developed noise statistics. We demonstrate visual quality improvements and show that applying our method improves the quality of subsequent segmentation steps.
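The paper's actual method is a Bayesian MAP deconvolution with a non-local prior, which cannot be reconstructed from the abstract. As a deliberately simpler stand-in that still illustrates the abstract's core point (deconvolution quality hinges on the PSF estimate and a noise model), here is a minimal Wiener deconvolution sketch; the scalar `snr` noise model is an assumption:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Minimal Wiener deconvolution baseline (NOT the paper's MAP algorithm).
    psf must have the same shape as the image and be centered; snr is an
    assumed scalar signal-to-noise ratio acting as the regularizer."""
    H = np.fft.fft2(np.fft.ifftshift(psf))          # optical transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

With a perfect PSF and high SNR this nearly inverts the blur; an error in the estimated PSF degrades the result directly, which is the motivation for the paper's more careful PSF estimation.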
ERIC Educational Resources Information Center
Smith, Garon C.; Hossain, Md Mainul
2016-01-01
BufCap TOPOS is free software that generates 3-D topographical surfaces ("topos") for acid-base equilibrium studies. It portrays pH and buffer capacity behavior during titration and dilution procedures. Topo surfaces are created by plotting computed pH and buffer capacity values above a composition grid with volume of NaOH as the x axis…
ERIC Educational Resources Information Center
Smith, Garon C.; Hossain, Md Mainul; MacCarthy, Patrick
2014-01-01
3-D topographic surfaces ("topos") can be generated to visualize how pH behaves during titration and dilution procedures. The surfaces are constructed by plotting computed pH values above a composition grid with volume of base added in one direction and overall system dilution on the other. What emerge are surface features that…
The point spread function of the soft X-ray telescope aboard Yohkoh
NASA Technical Reports Server (NTRS)
Martens, Petrus C.; Acton, Loren W.; Lemen, James R.
1995-01-01
The point spread function of the SXT telescope aboard Yohkoh has been measured in flight configuration in three different X-ray lines at White Sands Missile Range. We have fitted these data with an elliptical generalization of the Moffat function. Our fitting method consists of chi-squared minimization in Fourier space, especially designed for matching sharply peaked functions. We find excellent fits, with a reduced chi-squared of order unity or less, for single-exposure point spread functions over most of the CCD. Near the edges of the CCD the fits are less accurate due to vignetting. From fitting results with summation of multiple exposures we find a systematic error in the fitting function of the order of 3% near the peak of the point spread function, which is close to the photon noise for typical SXT images in orbit. We find that the full width at half maximum and the fitting parameters vary significantly with CCD location. However, we also find that point spread functions measured at the same location are consistent with one another within the limit determined by photon noise. A 'best' analytical fit to the PSF as a function of position on the CCD is derived for use in SXT image enhancement routines. As a side result we have found that SXT can determine the location of point sources to about a quarter of a 2.54 arc sec pixel.
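An elliptical generalization of the Moffat function can be written compactly. The sketch below is not the authors' code; the parameter names and the rotation convention are assumptions, but the functional form (a power-law-winged peak, elongated along a rotated axis) matches the standard elliptical Moffat profile:

```python
import numpy as np

def elliptical_moffat(x, y, params, x0=0.0, y0=0.0):
    """Elliptical Moffat profile.
    params = (amp, sx, sy, theta, beta): peak amplitude, core widths along the
    two principal axes, position angle, and the Moffat wing exponent beta."""
    amp, sx, sy, theta, beta = params
    dx, dy = x - x0, y - y0
    # rotate into the ellipse's principal-axis frame
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    r2 = (u / sx) ** 2 + (v / sy) ** 2
    return amp * (1.0 + r2) ** (-beta)
```

Smaller `beta` gives heavier wings; as `beta` grows the profile approaches a Gaussian, which is why the Moffat form is preferred for sharply peaked PSFs with extended tails.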
Inks, T.L.; Agena, W.F.
2008-01-01
In February 2007, the Mt. Elbert Prospect stratigraphic test well, Milne Point, North Slope Alaska encountered thick methane gas hydrate intervals, as predicted by 3D seismic interpretation and modeling. Methane gas hydrate-saturated sediment was found in two intervals, totaling more than 100 ft., identified and mapped based on seismic character and wavelet modeling.
NASA Astrophysics Data System (ADS)
Destrez, Raphaël; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves
2015-04-01
Computer-aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.
Bae, Kwang-Ho
2009-01-01
Using three-dimensional point clouds from both simulated and real datasets from close-range and terrestrial laser scanners, the rotational and translational convergence regions of Geometric Primitive Iterative Closest Point (GP-ICP) are empirically evaluated. The results demonstrate that GP-ICP has a larger rotational convergence region than existing methods, e.g., the Iterative Closest Point (ICP).
NASA Astrophysics Data System (ADS)
Poręba, M.; Goulette, F.
2014-12-01
The registration of 3D point clouds collected from different scanner positions is necessary in order to avoid occlusions, ensure a full coverage of areas, and collect useful data for analyzing and documenting the surrounding environment. This procedure involves three main stages: 1) choosing appropriate features, which can be reliably extracted; 2) matching conjugate primitives; 3) estimating the transformation parameters. Currently, points and spheres are most frequently chosen as the registration features. However, due to limited point cloud resolution, proper identification and precise measurement of a common point within the overlapping laser data is almost impossible. One possible solution to this problem may be a registration process based on the Iterative Closest Point (ICP) algorithm or one of its variants. Alternatively, planar and linear feature-based registration techniques can also be applied. In this paper, we propose the use of line segments obtained from intersecting planes modelled within individual scans. Such primitives can be easily extracted even from low-density point clouds. Working with synthetic data, several existing line-based registration methods are evaluated according to their robustness to noise and the precision of the estimated transformation parameters. For the purpose of quantitative assessment, an accuracy criterion based on a modified Hausdorff distance is defined. Since the automated matching of segments is a challenging task that influences the correctness of the transformation parameters, a correspondence-finding algorithm is developed. The tests show that our matching algorithm provides a correct pairing with an accuracy of at least 99%, with about 8% of line pairs omitted.
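The abstract's accuracy criterion is based on a modified Hausdorff distance. The exact variant used in the paper is not given; a common generic formulation (Dubuisson-Jain style, applied here to point samples such as densely sampled line segments) can be sketched as:

```python
import numpy as np

def modified_hausdorff(A, B):
    """Dubuisson-Jain modified Hausdorff distance between two point sets.
    A, B: arrays of shape (n, 3) and (m, 3). Instead of the classic max of
    nearest-neighbor distances, it averages them in each direction, which is
    less sensitive to outliers."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise dists
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

For segment-to-segment comparison, each segment would first be discretized into sample points; that discretization step is an assumption on my part.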
Formation Dirac point and the topological surface states for HgCdTe-QW and mixed 3D HgCdTe TI
NASA Astrophysics Data System (ADS)
Marchewka, Michał
2016-12-01
In this paper the results of numerical calculations based on the finite difference method (FDM) for 2D and 3D TIs with and without uniaxial tensile strain for mixed Hg1-xCdxTe structures are presented. The numerical calculations were made using the 8×8 model for x from 0 up to 0.155 and for a wide range of thicknesses, from a few nm for the 2D case up to 150 nm for the 3D TI, as well as for different mismatches of the lattice constant and different barrier potentials in the case of the QW. For the investigated region of Cd composition (x value), the negative energy gap (Eg=Γ8-Γ6) in Hg1-xCdxTe is smaller than in pure HgTe, which, as it turns out, has a significant influence on the topological surface states (TSS) and the position of the Dirac point for the QW as well as for the 3D TI. The results show that the strained gap and the position of the Dirac point relative to Γ8 are functions of the Cd composition x in the case of the 3D TI, as is the critical width of the mixed Hg1-xCdxTe QW.
NASA Astrophysics Data System (ADS)
Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti
2016-04-01
Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud is not sufficiently dense. These limits can be overcome by deformation analysis that directly exploits the original 3D point clouds under certain assumptions about the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but has limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point cloud comparison algorithms (C2C, ICP, M3C2) bring additional information on the
3d-3d correspondence revisited
Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; ...
2016-04-21
In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.
STRONG GRAVITATIONAL LENS MODELING WITH SPATIALLY VARIANT POINT-SPREAD FUNCTIONS
Rogers, Adam; Fiege, Jason D.
2011-12-10
Astronomical instruments generally possess spatially variant point-spread functions, which determine the amount by which an image pixel is blurred as a function of position. Several techniques have been devised to handle this variability in the context of the standard image deconvolution problem. We have developed an iterative gravitational lens modeling code called Mirage that determines the parameters of pixelated source intensity distributions for a given lens model. We are able to include the effects of spatially variant point-spread functions using the iterative procedures in this lensing code. In this paper, we discuss the methods to include spatially variant blurring effects and test the results of the algorithm in the context of gravitational lens modeling problems.
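A spatially variant PSF means each pixel is blurred by a kernel that depends on its position. The Mirage code's iterative procedure is not described in enough detail to reproduce; the toy 1D sketch below (all names and the linear left-to-right kernel interpolation are my own illustrative assumptions) just demonstrates the defining property that no single convolution kernel applies everywhere:

```python
import numpy as np

def variant_blur_1d(signal, psf_left, psf_right):
    """Toy spatially variant blur: the kernel at each sample is a linear
    interpolation between a left-edge and right-edge kernel (hypothetical
    model; real instruments need a measured kernel field)."""
    n = len(signal)
    k = len(psf_left) // 2
    padded = np.pad(signal, k, mode="edge")
    out = np.empty(n)
    for i in range(n):
        t = i / (n - 1)                         # position-dependent weight
        psf = (1 - t) * psf_left + t * psf_right
        psf = psf / psf.sum()                   # keep flux conserved
        out[i] = padded[i:i + 2 * k + 1] @ psf[::-1]
    return out
```

Because the operator is still linear, it can be folded into iterative least-squares or regularized inversion schemes of the kind the paper describes, even though FFT-based convolution no longer applies directly.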
2-D and 3-D Heliospheric Imaging from LEO, L1 and L5: Instruments, Vantage Points, and Applications
NASA Astrophysics Data System (ADS)
DeForest, C. E.
2015-12-01
Heliospheric imaging has come of age scientifically, and multiple heliospheric imagers are either operating or being built to operate on scientific missions. Much study and effort has been put into the advantages of solar wind imaging for space weather prediction. For example, CME tracking (either in 3-D with polarization, or in an image plane from a vantage far from Earth) has the potential to greatly improve arrival-time predictions. Likewise, higher spatial and temporal resolution could provide critical clues about the important N/S component of the entrained magnetic field, by connecting signed surface magnetograms of the Sun to particular structures observed in the corona and, later, in the ICME. I will discuss the current state of understanding of polarized and/or high resolution heliospheric imaging as it relates to space weather forecasting, the relative advantages of an instrument at LEO, L1, or L5, and desiderata to exploit currently-validated and under-consideration techniques in an operational, prototype, or scientific next-generation solar wind imaging experiment.
NASA Astrophysics Data System (ADS)
Tian, X.; Choi, E.; Buck, W. R.
2015-12-01
The offset of faults and related topographic relief varies hugely at both continental rifts and mid-ocean ridges (MORs). In some areas fault offset is measured in tens of meters, while in places marked by core complexes it is measured in tens of kilometers. Variation in the magma supply is thought to control much of these differences. Magma supply is most usefully described by the ratio (M) between the rate of lithospheric extension accommodated by magmatic dike intrusion and that occurring via faulting. 2D models with different values of M successfully explain much of the observed cross-sectional structure seen at rifts and ridges. However, magma supply varies along the axis of extension, and the interactions between tectonics and magmatism are inevitably three-dimensional. We investigate the consequences of this along-axis variation in diking in terms of faulting patterns and the associated structures using a 3D parallel geodynamic modeling code, SNAC. Many observed 3D structural features are reproduced: e.g., abyssal hills, oceanic core complexes (OCCs), inward fault jumps, mass wasting, hourglass-shaped median valleys, and corrugation and mullion structures. An estimated average value of M = 0.65 is suggested as a boundary value separating abyssal-hill and OCC formation. A previous inconsistency in the M range for OCC formation between 2D model results (M = 0.3-0.5) and field observations (M < 0.3 or M > 0.5) is reconciled by the along-ridge coupling between different faulting regimes. We also propose asynchronous faulting-induced tensile failure as a new possible explanation for the corrugations seen on the surface of core complexes. For continental rifts, we will describe a suite of 2D and 3D model calculations with a range of initial lithospheric structures and values of M. In one set of the 2D models we limit the extensional tectonic force and show how this affects the maximum topographic relief produced across the rift. We are also interested in comparing models in
Lee, Larissa J.; Sadow, Cheryl A.; Russell, Anthony; Viswanathan, Akila N.
2009-11-01
Purpose: To compare high dose rate (HDR) point B dose to pelvic lymph node dose using three-dimensional-planned brachytherapy for cervical cancer. Methods and Materials: Patients with FIGO Stage IB-IIIB cervical cancer received 70 tandem HDR applications using CT-based treatment planning. The obturator, external, and internal iliac lymph nodes (LN) were contoured. Per-fraction (PF) and combined-fraction (CF) right (R), left (L), and bilateral (Bil) nodal doses were analyzed. Point B dose was compared with LN dose-volume histogram (DVH) parameters by paired t-test and Pearson correlation coefficients. Results: Mean PF and CF doses to point B were R 1.40 ± 0.14 Gy (CF: 7 Gy), L 1.43 ± 0.15 Gy (CF: 7.15 Gy), and Bil 1.41 ± 0.15 Gy (CF: 7.05 Gy). The correlation coefficients between point B and the D100, D90, D50, D2cc, D1cc, and D0.1cc LN doses were all less than 0.7. Only the D2cc to the obturator and the D0.1cc to the external iliac nodes were not significantly different from the point B dose. Significant differences between R and L nodal DVHs were seen, likely related to tandem deviation caused by irregular tumor anatomy. Conclusions: With HDR brachytherapy for cervical cancer, the per-fraction nodal dose approximates a dose equivalent to teletherapy. Point B is a poor surrogate for dose to specific nodal groups. Three-dimensionally defined nodal contours during brachytherapy provide a more accurate reflection of delivered dose and should be part of comprehensive planning of the total dose to the pelvic nodes, particularly when there is evidence of pathologic involvement.
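The study's correlation analysis (point B dose vs. DVH parameters, with r < 0.7 read as poor agreement) rests on the ordinary Pearson coefficient, which can be computed without any statistics package. A minimal sketch, with made-up illustrative numbers rather than the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired dose series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical example: point B doses vs. a nodal DVH metric (Gy), not study data
point_b = [1.40, 1.43, 1.38, 1.45, 1.41]
node_d2cc = [1.10, 1.52, 1.05, 1.60, 1.20]
r = pearson_r(point_b, node_d2cc)
```

A low r, as found for every DVH metric in the study, means point B tracks the actual nodal dose only loosely, which is the basis for the conclusion that it is a poor surrogate.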
DeLong, Stephen B.
2016-01-01
Point cloud data collected along a 500 meter portion of the 2014 South Napa Earthquake surface rupture near Cuttings Wharf Road, Napa, CA, USA. The data include 7 point cloud files (.laz). The files are named with the location and date of collection and either ALSM for airborne laser scanner data or TLS for terrestrial laser scanner data. The ALSM data were previously released but are included here because they have been precisely aligned with the TLS data as described in the processing section of this metadata.
NASA Astrophysics Data System (ADS)
Lukashenko, A. T.; Veselovsky, I. S.
2015-12-01
General principles of describing second- and higher-order null points of a potential magnetic field are formulated. The potential near a second-order null of the general form can be specified by a linear combination of four basic functions, the list of which is presented. Near second- and higher-order null points, field line equations often cannot be integrated analytically; however, in some cases it is possible to present a qualitative description of the geometry of the null vicinities by considering the behavior of field lines near rays outgoing from the null, at which the field is radial or equal to zero.
Nasehi Tehrani, J; Wang, J; Guo, X; Yang, Y
2014-06-01
Purpose: This study evaluated a new probabilistic non-rigid registration method called coherent point drift (CPD) for real-time 3D markerless registration of lung motion during radiotherapy. Method: The Dir-lab 4DCT image datasets (www.dir-lab.com) were used to create a 3D boundary element model of the lungs. In the first step, the 3D surfaces of the lungs in respiration phases T0 and T50 were segmented and divided into a finite number of linear triangular elements. Each triangle is a two-dimensional object with three vertices (each vertex has three degrees of freedom). One of the main features of lung motion is velocity coherence, so the vertices creating the lung mesh should also share the features and degrees of freedom of the lung structure; that is, vertices close to each other tend to move coherently. In the next step, we implemented coherent point drift to calculate the nonlinear displacement of vertices between different expiratory phases. Results: The method was applied to images of the 10 patients in the Dir-lab dataset. The normal distribution of vertices to the origin for each expiratory stage was calculated. The results show that the maximum registration error between different expiratory phases is less than 0.4 mm (0.38 mm SI, 0.33 mm AP, 0.29 mm RL). This method is reliable for calculating the displacement vectors and the degrees of freedom (DOFs) of the lung structure in radiotherapy. Conclusions: We evaluated a new 3D registration method for a distributed set of vertices inside the lung mesh. In this technique, the velocity coherence of lung motion is inserted as a penalty in the regularization function. The results indicate that high registration accuracy is achievable with CPD. This method is helpful for calculating displacement vectors and analyzing possible physiological and anatomical changes during treatment.
Dynamic topology and flux rope evolution during non-linear tearing of 3D null point current sheets
Wyper, P. F. Pontin, D. I.
2014-10-15
In this work, the dynamic magnetic field within a tearing-unstable three-dimensional current sheet about a magnetic null point is described in detail. We focus on the evolution of the magnetic null points and flux ropes that are formed during the tearing process. Generally, we find that both magnetic structures are created prolifically within the layer and are non-trivially related. We examine how nulls are created and annihilated during bifurcation processes, and describe how they evolve within the current layer. The type of null bifurcation first observed is associated with the formation of pairs of flux ropes within the current layer. We also find that new nulls form within these flux ropes, both following internal reconnection and as adjacent flux ropes interact. The flux ropes exhibit a complex evolution, driven by a combination of ideal kinking and their interaction with the outflow jets from the main layer. The finite size of the unstable layer also allows us to consider the wider effects of flux rope generation. We find that the unstable current layer acts as a source of torsional magnetohydrodynamic waves and dynamic braiding of magnetic fields. The implications of these results for several areas of heliophysics are discussed.
Li, Weiwei; Yuan, Fusong; Lv, Peijun; Wang, Yong; Sun, Yuchun
2015-01-01
Objectives To apply contact measurement and reference point system (RPS) alignment techniques to establish a method for 3D reconstruction of edentulous jaw models in centric relation and to quantitatively evaluate its accuracy. Methods Upper and lower edentulous jaw models were clinically prepared, and 10 pairs of resin cylinders of the same size were adhered to the axial surfaces of the upper and lower models. The occlusal bases and the upper and lower jaw models were installed in the centric relation position. A Faro Edge 1.8 m arm was used to directly obtain the center points of the base surfaces of the cylinders (contact method). An Activity 880 dental scanner was used to obtain 3D data of the cylinders, from which the center points were fitted (fitting method). Three pairs of center points were used to align the virtual model to centric relation. An observation coordinate system was interactively established. The straight-line distances in X (horizontal left/right), Y (horizontal anterior/posterior), and Z (vertical) between the remaining 7 pairs of center points derived from the contact method and the fitting method were measured and analyzed using a paired t-test. Results The differences in the straight-line distances of the remaining 7 pairs of center points between the two methods were X: 0.074 ± 0.107 mm, Y: 0.168 ± 0.176 mm, and Z: −0.003 ± 0.155 mm. The paired t-test gave p > 0.05 for X and Z and p < 0.05 for Y. Conclusion By using contact measurement and the reference point system alignment technique, highly accurate reconstruction of the vertical distance and centric relation of a digital edentulous jaw model can be achieved, which meets the design and manufacturing requirements of complete dentures. The error of the horizontal anterior/posterior jaw relation was relatively large. PMID:25659133
The point-spread function of fiber-coupled area detectors
Holton, James M.; Nielsen, Chris; Frankel, Kenneth A.
2012-01-01
The point-spread function (PSF) of a fiber-optic taper-coupled CCD area detector was measured over five decades of intensity using a 20 µm X-ray beam and ∼2000-fold averaging. The ‘tails’ of the PSF clearly revealed that it is neither Gaussian nor Lorentzian, but instead resembles the solid angle subtended by a pixel at a point source of light held a small distance (∼27 µm) above the pixel plane. This converges to an inverse cube law far from the beam impact point. Further analysis revealed that the tails are dominated by the fiber-optic taper, with negligible contribution from the phosphor, suggesting that the PSF of all fiber-coupled CCD-type detectors is best described as a Moffat function. PMID:23093762
NASA Astrophysics Data System (ADS)
Wang, Haimin; Liu, C.
2012-05-01
In recent studies by Pariat, Antiochos and DeVore (2009, 2010), fan-separatrix topology and magnetic reconnection at the null point were simulated and found to produce homologous jets. This motivates us to search for axisymmetric magnetic structures and associated flaring/jetting activity. Using high-resolution (~0.15" per pixel) and high-cadence (~15 s) H-alpha center/offband observations obtained from the recently digitized films of Big Bear Solar Observatory, we were able to identify five large circular flares with associated surges. All the events exhibit a central parasitic magnetic field surrounded by opposite polarity, forming a circular polarity inversion line (PIL). Consequently, a compact flare kernel at the center is surrounded by a circular ribbon, and together with the upward-ejecting dark surge, these seem to depict a dome-like magnetic structure. Very interestingly, (1) the circular ribbon brightens sequentially rather than simultaneously, (2) the central compact flare kernel shows obvious motion, and (3) a remote elongated, co-temporal flare ribbon at a region with the same polarity as the central parasitic site is seen in the series of four homologous events on 1991 March 17 and 18. The remote ribbon is 120" away from the jet location. Moreover, magnetic reconnection across the circular PIL is evident from the magnetic flux cancellation. These rarely observed homologous surges with circular as well as central and remote flare ribbons provide valuable evidence concerning the dynamics of magnetic reconnection in a null-point topology. This study is dedicated to Professor Hal Zirin, the founder of Big Bear Solar Observatory, who passed away on January 3, 2012.
NASA Astrophysics Data System (ADS)
Peronato, G.; Rey, E.; Andersen, M.
2016-10-01
The presence of vegetation can significantly affect the solar irradiation received on building surfaces. Due to the complex shape and seasonal variability of vegetation geometry, this topic has gained much attention from researchers. However, existing methods are limited to rooftops as they are based on 2.5D geometry and use simplified radiation algorithms based on view-sheds. This work contributes to overcoming some of these limitations, providing support for 3D geometry to include facades. Thanks to the use of ray-tracing-based simulations and detailed characterization of the 3D surfaces, we can also account for inter-reflections, which might have a significant impact on façade irradiation. In order to construct confidence intervals on our results, we modeled vegetation from LiDAR point clouds as 3D convex hulls, which provide the biggest volume and hence the most conservative obstruction scenario. The limits of the confidence intervals were characterized with some extreme scenarios (e.g. opaque trees and absence of trees). Results show that uncertainty can vary significantly depending on the characteristics of the urban area and the granularity of the analysis (sensor, building and group of buildings). We argue that this method can give us a better understanding of the uncertainties due to vegetation in the assessment of solar irradiation in urban environments, and therefore, the potential for the installation of solar energy systems.
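Modeling vegetation as the convex hull of its LiDAR points gives the largest (most obstruction-conservative) enclosing volume. The paper works with 3D hulls; as a dependency-free illustration of the same idea, the sketch below computes the 2D convex-hull footprint area of a tree's points with Andrew's monotone chain (a simplified stand-in, not the paper's pipeline):

```python
import numpy as np

def hull_area_2d(points):
    """Area of the 2D convex hull (Andrew's monotone chain) of a point set,
    e.g. a tree's LiDAR footprint projected to the ground plane. A 2D stand-in
    for the 3D hull volumes used for the conservative obstruction scenario."""
    pts = sorted(map(tuple, np.asarray(points, float)))

    def half_hull(seq):
        h = []
        for p in seq:
            # pop while the turn is clockwise or collinear
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h

    hull = half_hull(pts)[:-1] + half_hull(pts[::-1])[:-1]
    # shoelace formula for the polygon area
    area = 0.0
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```

In practice a library routine (e.g. a Qhull-based 3D hull) would supply the volume directly; the point is that the hull encloses every measured return, so treating it as opaque bounds the irradiation loss from below.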
NASA Astrophysics Data System (ADS)
Manousakis, J.; Zekkos, D.; Saroglou, F.; Clark, M.
2016-10-01
UAVs are expected to be particularly valuable to define topography for natural slopes that may be prone to geological hazards, such as landslides or rockfalls. UAV-enabled imagery and aerial mapping can lead to fast and accurate qualitative and quantitative results for photo documentation as well as basemap 3D analysis that can be used for geotechnical stability analyses. In this contribution, the case study of a rockfall near Ponti village that was triggered during the November 17th 2015 Mw 6.5 earthquake in Lefkada, Greece is presented with a focus on feature recognition and 3D terrain model development for use in rockfall hazard analysis. A significant advantage of the UAV was the ability to identify from aerial views the rockfall trajectory along the terrain, the accuracy of which is crucial to subsequent geotechnical back-analysis. Fast static GPS control points were measured for optimizing internal and external camera parameters and model georeferencing. Emphasis is given on an assessment of the error associated with the basemap when fewer and poorly distributed ground control points are available. Results indicate that spatial distribution and image occurrences of control points throughout the mapped area and image block is essential in order to produce accurate geospatial data with minimum distortions.
Measurement of Phased Array Point Spread Functions for Use with Beamforming
NASA Technical Reports Server (NTRS)
Bahr, Chris; Zawodny, Nikolas S.; Bertolucci, Brandon; Woolwine, Kyle; Liu, Fei; Li, Juan; Sheplak, Mark; Cattafesta, Louis
2011-01-01
Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region. The true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup and still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented seeks to investigate direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed for different conditions to evaluate features such as shift-invariance, shear layers and model presence. Results show that experimental measurements trend with theory with regard to source offset. The source shows expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.
NASA Astrophysics Data System (ADS)
Mighell, K. J.
2004-12-01
I describe the key features of my MATPHOT algorithm for accurate and precise stellar photometry and astrometry using discrete Point Spread Functions. A discrete Point Spread Function (PSF) is a sampled version of a continuous two-dimensional PSF. The shape information about the photon scattering pattern of a discrete PSF is typically encoded using a numerical table (matrix) or a FITS image file. The MATPHOT algorithm shifts discrete PSFs within an observational model using a 21-pixel-wide damped sinc function, and position partial derivatives are computed using a five-point numerical differentiation formula. The MATPHOT algorithm achieves accurate and precise stellar photometry and astrometry of undersampled CCD observations by using supersampled discrete PSFs that are sampled 2, 3, or more times more finely than the observational data. I have written a C-language computer program called MPD which is based on the current implementation of the MATPHOT algorithm; all source code and documentation for MPD and support software are freely available at the following website: http://www.noao.edu/staff/mighell/matphot . I demonstrate the use of MPD and present a detailed MATPHOT analysis of simulated James Webb Space Telescope observations which demonstrates that millipixel relative astrometry and millimag photometric accuracy are achievable with very complicated space-based discrete PSFs. This work was supported by a grant from the National Aeronautics and Space Administration (NASA), Interagency Order No. S-13811-G, which was awarded by the Applied Information Systems Research (AISR) Program of NASA's Science Mission Directorate.
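The two numerical ingredients named here, a 21-tap damped sinc interpolator for sub-pixel PSF shifts and a five-point difference formula for position derivatives, can be sketched briefly. This is not the MATPHOT/MPD source; in particular the Gaussian damping window and its scale are assumptions (MATPHOT's actual taper may differ):

```python
import numpy as np

def damped_sinc_kernel(shift, width=21, damp=3.25):
    """21-tap damped-sinc interpolation kernel for a sub-pixel shift (pixels).
    np.sinc is the normalized sin(pi*x)/(pi*x); the Gaussian damping scale
    `damp` is a hypothetical choice, not MATPHOT's documented taper."""
    n = np.arange(width) - width // 2
    x = n - shift
    return np.sinc(x) * np.exp(-((x / damp) ** 2))

def five_point_derivative(f, x, h=1e-3):
    """Five-point central-difference derivative, accurate to O(h^4)."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)
```

Shifting the supersampled PSF with such a kernel, then differentiating the model with respect to position, is what lets the fit reach sub-pixel (here, millipixel-level) astrometry on undersampled data.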
NASA Astrophysics Data System (ADS)
Vianna Baptista, M. L.
2013-07-01
Integrating different technologies and areas of expertise helps fill gaps when optimizing the documentation of complex buildings. Described below is the process used in the first part of a restoration project, the architectural survey of the Theatre Guaira Cultural Centre in Curitiba, Brazil. To reduce time in the field, the two-person survey team had to work, over three days, around the continuous artistic activities and the performers' intense schedule. Both technologies (high-definition laser scanning and close-range photogrammetry) were used to record all details in the least amount of time without disturbing the artists' rehearsals and performances. Laser scanning was ideal for recording the monumental stage structure with all of its existing platforms, light fixtures, scenery walls and curtains. Although scanned in high definition, parts of the exterior façades were also recorded using close-range photogrammetry. Tiny cracks in the marble plaques and mosaic tiles, not visible in the point clouds, could then be precisely documented in order to create the exterior façade textures and damage-mapping drawings. The combination of technologies and the expertise of the service providers, knowing how and what to document and what to deliver to the client, enabled maximum benefit to the subsequent restoration project.
Simulation of the shape from focus method using polychromatic point spread function
NASA Astrophysics Data System (ADS)
Hamarová, I.; Šmíd, P.; Horváth, P.
2016-12-01
The design of a model of a sensor based on the shape from focus method is presented. The model uses polychromatic point spread functions of a generalized lens aperture function and their convolution with an ideal image. The model approaches real measurement conditions and allows one to employ parameters of real components of the corresponding sensor, e.g. the spectrum of a light source, the dispersion function of a real imaging optical system and the spectral sensitivity of a real light-sensitive sensor. The model enables one to study the accuracy and reliability of the determination of an object's surface topography by means of the shape from focus method.
NASA Astrophysics Data System (ADS)
Harzhauser, Mathias; Djuricic, Ana; Mandic, Oleg; Dorninger, Peter; Nothegger, Clemens; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert
2015-04-01
Shell beds are key features in sedimentary records throughout the Phanerozoic. The interplay between burial rates and population productivity is reflected in distinct degrees of shelliness. Consequently, shell beds may provide information on the various physical processes that led to the accumulation and preservation of hard parts. Many shell beds pass through a complex history of formation, being shaped by more than one factor. In shallow marine settings, the composition of shell beds is often strongly influenced by winnowing, reworking and transport. These processes may cause considerable time averaging and the accumulation of specimens that lived thousands of years apart. In the best case, the environment remained stable during that time span and the mixing does not mask the overall composition. A major obstacle for the interpretation of shell beds, however, is the amalgamation of shell beds from several depositional units in a single concentration, as is typical for tempestites and tsunamites. Disentangling such mixed assemblages requires a deep understanding of the ecological requirements of the taxa involved - which is achievable for geologically young shell beds with living relatives - and a statistical approach to quantify the contributions of the various death assemblages. Furthermore, it requires an understanding of the sedimentary processes potentially involved in their formation. Here we present the first attempt to describe and decipher such a multi-phase shell bed based on a high-resolution digital surface model (1 mm) combined with ortho-photos at a resolution of 0.5 mm per pixel. Documenting the oyster reef requires precisely georeferenced data; owing to the high redundancy of the point cloud, an accuracy of a few mm was achieved. The shell accumulation covers an area of 400 m2 with thousands of specimens, which were excavated during a three-month campaign at Stetten in Lower Austria. Formed in an Early Miocene estuary of the Paratethys Sea it is mainly composed
Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran
2016-03-17
We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_{T}. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.
The Effects of Instrumental Elliptical Polarization on Stellar Point Spread Function Fine Structure
NASA Technical Reports Server (NTRS)
Carson, Joseph C.; Kern, Brian D.; Breckinridge, James B.; Trauger, John T.
2005-01-01
We present procedures and preliminary results from a study of the effects of instrumental polarization on the fine structure of the stellar point spread function (PSF). These effects are important to understand because the aberration caused by instrumental polarization on an otherwise diffraction-limited system will likely have severe consequences for extreme high contrast imaging systems such as NASA's planned Terrestrial Planet Finder (TPF) mission and the proposed NASA Eclipse mission. The report here, describing our efforts to examine these effects, includes two parts: 1) a numerical analysis of the effect of metallic reflection, with some polarization-specific retardation, on a spherical wavefront; 2) an experimental approach for observing this effect, along with some preliminary laboratory results. While the experimental phase of this study requires more fine-tuning to produce meaningful results, the numerical analysis indicates that the inclusion of polarization-specific phase effects (retardation) results in a point spread function (PSF) aberration more severe than the amplitude (reflectivity) effects previously reported in the literature.
NASA Astrophysics Data System (ADS)
Caulier, Yannick; Bernhard, Luc; Spinnler, Klaus
2011-05-01
This paper proposes a new type of color-coded light structure for the inspection of complex moving objects. The novelty of the method lies in the generation of free-form color patterns, permitting the projection of color structures adapted to the geometry of the surfaces to be characterized. The point correspondence determination algorithm consists of a stepwise procedure involving simple and computationally fast methods. The algorithm is therefore robust against the varying recording conditions that typically arise in real-time quality control environments and can be integrated for industrial inspection purposes. The proposed approach is validated and compared on the basis of different experiments concerning 3D surface reconstruction by projecting adapted spatial color-coded patterns. It is demonstrated that, for certain inspection requirements, the method permits coding more reference points than similar color-coded matrix methods.
NASA Astrophysics Data System (ADS)
Marchewka, Michał
2016-10-01
In this paper the results of numerical calculations for three-dimensional (3D) strained Hg1-xCdxTe layers with Cd composition x from 0.1 to 0.155 and different lattice-constant mismatches are presented. For the investigated range of Cd composition (x value) the negative energy gap (Eg = Γ8 - Γ6) in Hg1-xCdxTe is smaller than in pure HgTe which, as it turns out, has a significant influence on the topological surface states (TSS) and the position of the Dirac point. The numerical calculation, based on the finite difference method applied to the 8×8 kp model with in-plane tensile strain for a (001) growth-oriented structure, shows that a Dirac cone inside the induced insulating band gap can be obtained for nonzero Cd composition and a larger strain caused by a larger lattice mismatch than for the 3D HgTe TI. It was also shown how different Cd compositions move the Dirac cone from the valence band into the band gap. The presented results show that 75 nm wide 3D Hg1-xCdxTe structures with x ≈ 0.155 and 1.6% lattice mismatch make the system a true topological insulator, with a dispersion of the topological surface states similar to that obtained for the strained CdTe/HgTe QW.
NASA Astrophysics Data System (ADS)
Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Obladen, Thorsten; Sabri, Osama; Buell, Udalrich
2000-04-01
Individual region-of-interest atlas extraction consists of two main parts: T1-weighted MRI grayscale images are classified into brain tissue types (gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), scalp/bone (SB), background (BG)), followed by class image analysis to automatically define meaningful ROIs (e.g., cerebellum, cerebral lobes, etc.). The purpose of this algorithm is the automatic detection of training points for neural network-based classification of brain tissue types. One transaxial slice of the patient data set is analyzed. Background separation is done by simple region growing. A random generator extracts spatially uniformly distributed training points of class BG from that region. For WM training point extraction (TPE), the homogeneity operator is the most important. The most homogeneous voxels define the region for WM TPE. They are extracted by analyzing the cumulative histogram of the homogeneity operator response. Assuming a Gaussian gray value distribution in WM, a random number is used as a probabilistic threshold for TPE. Similarly, non-white-matter and non-background regions are analyzed for GM and CSF training points. For SB TPE, the distance from the BG region is an additional feature. Simulated and real 3D MRI images are analyzed, and error rates for TPE and classification are calculated.
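A minimal sketch of homogeneity-driven training-point selection follows; the gradient-based homogeneity measure and the top-20% cutoff are simple stand-ins for the operator and probabilistic threshold described in the abstract, not the paper's actual choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def homogeneity(image):
    # Stand-in homogeneity measure: negative local gradient magnitude,
    # so flat (homogeneous) regions score highest.
    gy, gx = np.gradient(image.astype(float))
    return -np.hypot(gx, gy)

def extract_training_points(image, mask, n_points, top_fraction=0.2):
    # Keep the most homogeneous voxels inside `mask`, found via the
    # cumulative histogram (quantile) of the homogeneity response,
    # then draw training points uniformly at random from that region.
    h = homogeneity(image)
    cutoff = np.quantile(h[mask], 1.0 - top_fraction)
    candidates = np.argwhere(mask & (h >= cutoff))
    idx = rng.choice(len(candidates), size=n_points, replace=False)
    return candidates[idx]
```

On a synthetic slice with a flat region next to a ramp, the selected points fall inside the flat region, mimicking WM candidate selection.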
Angular motion point spread function model considering aberrations and defocus effects.
Klapp, Iftach; Yitzhaky, Yitzhak
2006-08-01
When motion blur is considered, the optics point spread function (PSF) is conventionally assumed to be fixed, and therefore cascading of the motion optical transfer function (OTF) with the optics OTF is allowed. However, in angular motion conditions, the image is distorted by space-variant effects of wavefront aberrations, defocus, and motion blur. The proposed model considers these effects and formulates a combined space-variant PSF obtained from the angle-dependent optics PSF and the motion PSF that acts as a weighting function. Results of comparison of the new angular-motion-dependent PSF and the traditional PSF show significant differences. To simplify the proposed model, an efficient approximation is suggested and evaluated.
First Observation of the Point Spread Function of Optical Transition Radiation
NASA Astrophysics Data System (ADS)
Karataev, Pavel; Aryshev, Alexander; Boogert, Stewart; Howell, David; Terunuma, Nobuhiro; Urakawa, Junji
2011-10-01
We report the first experimental observation of the point spread function (PSF) of optical transition radiation (OTR), performed at the KEK Accelerator Test Facility extraction line. We have demonstrated that the PSF vertical polarization component has a central minimum with a two-lobe distribution. However, the distribution width varied significantly with wavelength. We assume that we observed a severe effect from spherical or chromatic aberrations which are not taken into account in any existing theoretical model. We believe that the result of this work will encourage theoreticians to continue developing the theory, as it is important for various transition radiation applications. The nonuniform distribution of the OTR PSF creates an opportunity for developing a submicrometer transverse beam size monitor.
NASA Astrophysics Data System (ADS)
Rasmussen, Andrew; Guyonnet, Augustin; Lage, Craig; Antilogus, Pierre; Astier, Pierre; Doherty, Peter; Gilmore, Kirk; Kotov, Ivan; Lupton, Robert; Nomerotski, Andrei; O'Connor, Paul; Stubbs, Christopher; Tyson, Anthony; Walter, Christopher
2016-08-01
We employ electrostatic conversion drift calculations to match CCD pixel signal covariances observed in flat field exposures acquired using candidate sensor devices for the LSST Camera. We thus constrain pixel geometry distortions present at the end of integration, based on the signal images recorded. We use available data from several operational voltage parameter settings to validate our understanding. Our primary goal is to optimize flux point spread function (FPSF) estimation quantitatively, and thereby minimize sensor-induced errors which may limit performance in precision astronomy applications. We consider alternative compensation scenarios that will take maximum advantage of our understanding of this underlying mechanism in data processing pipelines currently under development. To quantitatively capture the pixel response in high-contrast/high dynamic range operational extrema, we propose herein some straightforward laboratory tests that involve altering the time order of source illumination on sensors within individual test exposures. Hence the word hysteretic in the title of this paper.
Spatioangular Prefiltering for Multiview 3D Displays.
Ramachandra, Vikas; Hirakawa, Keigo; Zwicker, Matthias; Nguyen, Truong
2011-05-01
In this paper, we analyze the reproduction of light fields on multiview 3D displays. A three-way interaction between the input light field signal (which is often aliased), the joint spatioangular sampling grids of multiview 3D displays, and the inter-view light leakage in modern multiview 3D displays is characterized in the joint spatioangular frequency domain. Reconstruction of light fields by all physical 3D displays is prone to light leakage, which means that the reconstruction low-pass filter implemented by the display is too broad in the angular domain. As a result, 3D displays excessively attenuate angular frequencies. Our analysis shows that this reduces sharpness of the images shown in the 3D displays. In this paper, stereoscopic image recovery is recast as a problem of joint spatioangular signal reconstruction. The combination of the 3D display point spread function and the human visual system provides the narrow-band low-pass filter which removes spectral replicas in the reconstructed light field on the multiview display. The nonideality of this filter is corrected with the proposed prefiltering. The proposed light field reconstruction method performs light field antialiasing as well as angular sharpening to compensate for the nonideal response of the 3D display. The union-of-cosets approach, used earlier by others, is employed here to model the nonrectangular spatioangular sampling grids on a multiview display in a generic fashion. We confirm the effectiveness of our approach in simulation and in physical hardware, and demonstrate improvement over existing techniques.
Accurate Astrometry and Photometry of Saturated and Coronagraphic Point Spread Functions
Marois, C; Lafreniere, D; Macintosh, B; Doyon, R
2006-02-07
For ground-based adaptive optics point source imaging, differential atmospheric refraction and flexure introduce a small drift of the point spread function (PSF) with time, and seeing and sky transmission variations modify the PSF flux. These effects need to be corrected to properly combine the images and obtain optimal signal-to-noise ratios, accurate relative astrometry and photometry of detected companions as well as precise detection limits. Usually, one can easily correct for these effects by using the PSF core, but this is impossible when high dynamic range observing techniques are used, like coronagraphy with a non-transmissive occulting mask, or if the stellar PSF core is saturated. We present a new technique that can solve these issues by using off-axis satellite PSFs produced by a periodic amplitude or phase mask conjugated to a pupil plane. It will be shown that these satellite PSFs track precisely the PSF position, its Strehl ratio and its intensity and can thus be used to register and to flux normalize the PSF. This approach can be easily implemented in existing adaptive optics instruments and should be considered for future extreme adaptive optics coronagraph instruments and in high-contrast imaging space observatories.
Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems
NASA Astrophysics Data System (ADS)
Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.
2017-01-01
A brown dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Due to the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough that they appear as a single, unresolved object in images, according to the Rayleigh criterion. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that used model point spread functions (PSFs) to identify binary Trans-Neptunian Objects; we use this method to identify binary BD systems in the Hubble Space Telescope archive. The method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method that compares model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, their challenges and other possible uses in this poster.
NASA Astrophysics Data System (ADS)
Dmochowski, Jacek P.; Bikson, Marom; Parra, Lucas C.
2012-10-01
Rational development of transcranial current stimulation (tCS) requires solving the ‘forward problem’: the computation of the electric field distribution in the head resulting from the application of scalp currents. Derivation of forward models has represented a major effort in brain stimulation research, with model complexity ranging from spherical shells to individualized head models based on magnetic resonance imagery. Despite such effort, an easily accessible benchmark head model is greatly needed when individualized modeling is either undesired (to observe general population trends as opposed to individual differences) or unfeasible. Here, we derive a closed-form linear system which relates the applied current to the induced electric potential. It is shown that in the spherical harmonic (Fourier) domain, a simple scalar multiplication relates the current density on the scalp to the electric potential in the brain. Equivalently, the current density in the head follows as the spherical convolution between the scalp current distribution and the point spread function of the head, which we derive. Thus, if one knows the spherical harmonic representation of the scalp current (i.e. the electrode locations and current intensity to be employed), one can easily compute the resulting electric field at any point inside the head. Conversely, one may also readily determine the scalp current distribution required to generate an arbitrary electric field in the brain (the ‘backward problem’ in tCS). We demonstrate the simplicity and utility of the model with a series of characteristic curves which sweep across a variety of stimulation parameters: electrode size, depth of stimulation, head size and anode-cathode separation. Finally, theoretically optimal montages for targeting an infinitesimal point in the brain are shown.
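The scalar-multiplication property in the spherical-harmonic domain can be illustrated as follows. Note that `transfer_factor` is a placeholder degree-dependent gain for a homogeneous sphere, not the closed-form multi-shell head model derived in the paper; the radius and conductivity values are likewise assumptions.

```python
import numpy as np

def transfer_factor(l, r, R=9.2, sigma=0.33):
    # Placeholder degree-dependent gain: deeper points (smaller r) and
    # higher degrees l are attenuated. R in cm, sigma in S/m (assumed).
    # Physically the l = 0 term carries no net injected current.
    return (r / R) ** l / (sigma * (2 * l + 1))

def potential_coeffs(current_coeffs, r):
    # In the spherical-harmonic (Fourier) domain the forward model is a
    # scalar multiplication per degree l: no spatial convolution needed.
    return {(l, m): transfer_factor(l, r) * c
            for (l, m), c in current_coeffs.items()}
```

Given the harmonic coefficients of a scalp current pattern, the potential anywhere inside the head follows by scaling each coefficient, which is exactly why the PSF picture (spherical convolution) and the per-harmonic picture are equivalent.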
NASA Astrophysics Data System (ADS)
Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.
2010-06-01
Our group has concentrated on the development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images from a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view, and these objects may have very high contrast compared to the background; that is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
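Both analyses can be sketched with a toy imaging operator. The random 15 x 64 matrix here stands in for the geometry-derived system matrix, and ISTA is used as a generic l1-norm solver rather than the authors' specific algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy imaging operator: 15 detector measurements of a 64-voxel object.
A = rng.standard_normal((15, 64))

# SVD of the imaging operator: the count of significant singular values
# bounds the complexity of objects the system can reconstruct.
s = np.linalg.svd(A, compute_uv=False)
n_measurable = int(np.sum(s > 1e-10 * s[0]))

def ista(A, y, lam=0.1, n_iter=1000):
    # Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1,
    # i.e. an l1-norm reconstruction that prefers sparse images.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```

For a single point target, the l1 estimate lands much closer to the truth than the minimum-norm least-squares solution, mirroring the comparison reported in the abstract.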
NASA Astrophysics Data System (ADS)
Lan, Fei; Jiang, Minlin; Tao, Quan; Wei, Fanan; Li, Guangyong
2017-03-01
A Kelvin probe force microscopy (KPFM) image is sometimes difficult to interpret because it is a blurred representation of the true surface potential (SP) distribution of the material under test. The reason for the blurring is that KPFM relies on the detection of electrostatic force, which is a long-range force compared to other surface forces. Usually, the KPFM imaging model is described as the convolution of the true SP distribution of the sample with an intrinsic point spread function (PSF) of the measurement system. To restore the true SP signals from the blurred ones, the intrinsic PSF of the system is needed. In this work, we present a way to experimentally calibrate the PSF of the KPFM system. Taking the actual probe shape and experimental parameters into consideration, this calibration method leads to a more accurate PSF than the ones obtained from simulations. Moreover, a nonlinear reconstruction algorithm based on total variation (TV) regularization is applied to the KPFM measurement to reverse the blurring caused by the PSF during the KPFM imaging process; as a result, noise is reduced and the fidelity of the SP signals is improved.
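The convolution imaging model can be illustrated with a 1-D sketch. Wiener filtering is used here as a simpler linear stand-in for the paper's TV-regularized reconstruction, and the circular (periodic) boundary model is an assumption made for brevity.

```python
import numpy as np

def blur(sp_true, psf):
    # KPFM imaging model: the measured SP is the true SP distribution
    # convolved with the system PSF (circular convolution via FFT).
    return np.real(np.fft.ifft(np.fft.fft(sp_true) * np.fft.fft(psf)))

def wiener_restore(sp_meas, psf, nsr=1e-3):
    # Wiener deconvolution: a linear stand-in for TV regularization.
    # nsr is the assumed noise-to-signal power ratio.
    H = np.fft.fft(psf)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(sp_meas) * G))
```

Restoring a blurred potential step with the calibrated PSF brings the image substantially closer to the true SP distribution than the raw measurement.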
Quantifying intraocular scatter with near diffraction-limited double-pass point spread function
Zhao, Junlei; Xiao, Fei; Kang, Jian; Zhao, Haoxin; Dai, Yun; Zhang, Yudong
2016-01-01
Measurement of the double-pass (DP) point-spread function (PSF) provides an objective and non-invasive method for estimating intraocular scatter in the human eye. The objective scatter index (OSI), which is calculated from DP PSF images, is commonly used to quantify intraocular scatter. In this article, we simulated the effect of higher-order ocular aberrations on the OSI, and the results showed that higher-order ocular aberrations have a significant influence on the OSI. We then developed an adaptive optics DP PSF measurement system (AO-DPPMS) capable of correcting ocular aberrations up to eighth-order radial Zernike modes over a 6.0-mm pupil. Employing this system, we obtained DP PSF images of four subjects at the fovea. OSI values with aberrations corrected up to 2nd, 5th and 8th Zernike order were calculated, respectively, from the DP PSF images of the four subjects. The experimental results were consistent with the simulation, suggesting that compensation for higher-order ocular aberrations is necessary for accurate intraocular scatter estimation. PMID:27895998
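A sketch of the OSI computation from a DP PSF image, following the common double-pass definition (energy in a 12-20 arcmin annulus relative to energy within the central 1 arcmin); instrument-specific scaling constants are omitted, so treat this as illustrative rather than the paper's exact formula.

```python
import numpy as np

def objective_scatter_index(psf, pixel_arcmin):
    # OSI: ratio of DP PSF energy in a peripheral annulus (12-20 arcmin)
    # to the energy within the central 1 arcmin around the PSF peak.
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)
    y, x = np.indices(psf.shape)
    r = np.hypot(y - cy, x - cx) * pixel_arcmin   # radius in arcmin
    central = psf[r <= 1.0].sum()
    annulus = psf[(r >= 12.0) & (r <= 20.0)].sum()
    return annulus / central
```

Adding a broad scatter halo to a narrow PSF core raises the OSI, which is the behavior the index is designed to capture.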
Centroids computation and point spread function analysis for reverse Hartmann test
NASA Astrophysics Data System (ADS)
Zhao, Zhu; Hui, Mei; Liu, Ming; Dong, Liquan; Kong, Linqqin; Zhao, Yuejin
2017-03-01
This paper studies the point spread function (PSF) and centroid computation methods to improve the performance of the reverse Hartmann test (RHT) in poor conditions, such as defocus and background noise. In the RHT, we evaluate the PSF in terms of the Lommel function and classify it as a circle of confusion (CoC) rather than an Airy disk. Approximation of a CoC spot with a Gaussian or super-Gaussian profile to identify its centroid forms the basis of the centroiding algorithm. It is also effective for fringe patterns, where a fringe segment serves as a 'spot' with an infinite diameter in one direction. RHT experiments are conducted to test the fitting effects and centroiding performance of the methods with Gaussian and super-Gaussian approximations. The fitting results show that the super-Gaussian obtains more reasonable fitting effects. The super-Gaussian orders are only slightly larger than 2, which means that the CoC has a profile similar to an Airy disk in certain conditions. The results of the centroid computation demonstrate that, as the signal-to-noise ratio (SNR) falls, the centroid computed by the super-Gaussian method shows a smaller shift, and the shift grows at a slower pace. This implies that the super-Gaussian has better anti-noise capability in centroid computation.
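A sketch of the super-Gaussian centroiding step (SciPy is assumed, and the model form and starting values are illustrative; the paper's fit is over a 2-D spot, reduced here to a 1-D profile):

```python
import numpy as np
from scipy.optimize import curve_fit

def super_gaussian(x, a, x0, w, p):
    # Super-Gaussian spot profile; p = 2 reduces to an ordinary Gaussian,
    # and larger p gives a flatter top as for a defocused CoC spot.
    return a * np.exp(-(np.abs(x - x0) / w) ** p)

def fit_centroid(x, profile, p0=(1.0, 0.0, 5.0, 2.0)):
    # Identify the spot centroid x0 (and fitted order p) by least-squares
    # fitting a super-Gaussian to the measured intensity profile.
    popt, _ = curve_fit(super_gaussian, x, profile, p0=p0, maxfev=10000)
    return popt[1], popt[3]
```

On a noiseless synthetic spot the fit recovers the centroid and the super-Gaussian order essentially exactly; the paper's point is that under noise this fitted centroid drifts less than a plain Gaussian fit.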
NASA Astrophysics Data System (ADS)
Nakahara, Hisashi; Haney, Matthew M.
2015-07-01
Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artefacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green's functions. In particular, the PSF can be related to Green's function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.
Point spread function computation in normal incidence for rough optical surfaces
NASA Astrophysics Data System (ADS)
Tayabaly, Kashmira; Spiga, Daniele; Sironi, Giorgia; Canestrari, Rodolfo; Lavagna, Michele; Pareschi, Giovanni
2016-08-01
The point spread function (PSF) specifies the angular resolution of an optical system, a key parameter used to define the performance of most optics. A prediction of the system's PSF is therefore a powerful tool to assess the design and manufacturing requirements of complex optical systems. Currently, well-established ray-tracing routines based on geometrical optics are used for this purpose. However, those ray-tracing routines either lack real surface defect considerations (figure errors or micro-roughness) in their computation, or they include a scattering effect modeled separately that requires assumptions difficult to verify. Since there is an increasing demand for tighter angular resolution, surface finishing can drastically degrade the optical performance of a system, including optical telescope systems. A purely physical optics approach is more effective, as it remains valid regardless of the shape and size of the defects appearing on the optical surface. However, the computation, when performed in two-dimensional space, is time consuming since it requires processing a surface map with a few-micron resolution, which sometimes extends the propagation to multiple reflections. The computation is significantly simplified in the far-field configuration, as it involves only a sequence of Fourier transforms. We show how to account for measured surface defects and roughness in order to predict the performance of the optics in single reflection, which can be applied and validated for real case studies.
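In the far-field configuration the single-reflection computation reduces to a Fourier transform of the complex pupil. The sketch below assumes a square, fully illuminated pupil at normal incidence, where reflection doubles the surface height error in the wavefront phase; the wavelength and padding factor are illustrative.

```python
import numpy as np

def far_field_psf(surface_error, wavelength=500e-9, n_pad=4):
    # Far-field PSF of a mirror with a measured surface-error map (m):
    # |FFT of the pupil exp(i*phi)|^2, with phi = 4*pi*h/lambda because
    # a height error h is traversed twice on reflection.
    phi = 4 * np.pi * surface_error / wavelength
    n = surface_error.shape[0]
    padded = np.zeros((n_pad * n, n_pad * n), dtype=complex)
    padded[:n, :n] = np.exp(1j * phi)     # zero padding sets angular sampling
    field = np.fft.fftshift(np.fft.fft2(padded))
    psf = np.abs(field) ** 2
    return psf / psf.sum()
```

A perfectly smooth surface gives the diffraction-limited peak; adding random roughness (e.g. 30 nm rms) lowers the peak, i.e. reduces the Strehl ratio, without any separate scattering model.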
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2016-09-01
The Point Spread Function (PSF) indirectly encodes the wavefront aberrations of an optical system and therefore is a metric of the system performance. Analysis of the PSF properties is useful in the case of diffractive optics where the wavefront emerging from the exit pupil is not necessarily continuous and consequently not well represented by traditional wavefront error descriptors such as Zernike polynomials. The discontinuities in the wavefront from diffractive optics occur in cases where step heights in the element are not multiples of the illumination wavelength. Examples include binary or N-step structures, multifocal elements where two or more foci are intentionally created or cases where other wavelengths besides the design wavelength are used. Here, a technique for expanding the electric field amplitude of the PSF into a series of orthogonal functions is explored. The expansion coefficients provide insight into the diffraction efficiency and aberration content of diffractive optical elements. Furthermore, this technique is more broadly applicable to elements with a finite number of diffractive zones, as well as decentered patterns.
Nakahara, Hisashi; Haney, Matt
2015-01-01
Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.
Light Scattered from Polished Optical Surfaces: Wings of the Point Spread Function
NASA Technical Reports Server (NTRS)
Kenknight, C. E.
1984-01-01
Random figure errors from the polishing process plus particles on the main mirrors in a telescope cause an extended point spread function (PSF) declining approximately as the inverse square of the sine of the angle from a star, from about 100 micro-rad to a right angle. The decline in at least one case, and probably in general, proceeds as the inverse cube at smaller angles where the usual focal plane aperture radius is chosen. The photometric error due to misalignment by one Airy ring spacing with an aperture of n rings depends on the net variance in the figure. It is approximately 60/(n+1)^3 when using the data of Kormendy (1973). A typical value is 6 × 10^-5 per ring of misalignment with n = 100 rings. The encircled power may be modulated on a time scale of hours by parts per thousand in a wavelength-dependent manner due to relative humidity effects on mirror dust. The scattering according to an inverse power law is due to a random walk in aberration height caused by a multitude of facets and slope errors left by the polishing process. A deviation from such a law at grazing emergence may permit monitoring the dust effects.
Point spread function modeling and image restoration for cone-beam CT
NASA Astrophysics Data System (ADS)
Zhang, Hua; Huang, Kui-Dong; Shi, Yi-Kai; Xu, Zhe
2015-03-01
X-ray cone-beam computed tomography (CT) offers notably high efficiency and precision and is widely used in medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. To address the problems of projection-image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is proposed first. A general PSF model of cone-beam CT is established, from which the PSF under arbitrary scanning conditions can be calculated directly for projection-image restoration without additional measurement, greatly improving the convenience of applying cone-beam CT. Secondly, a projection-image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which makes the edge contours in projection images and slice images clearer after restoration while keeping the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods. Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Young Scientists Fund of National Natural Science Foundation of China (51105315), Natural Science Basic Research Program of Shaanxi Province of China (2013JM7003) and Northwestern Polytechnical University Foundation for Fundamental Research (JC20120226, 3102014KYJD022)
Characterizing the point spread function of retinal OCT devices with a model eye-based phantom.
Agrawal, Anant; Connors, Megan; Beylin, Alexander; Liang, Chia-Pin; Barton, David; Chen, Yu; Drezek, Rebekah A; Pfefer, T Joshua
2012-05-01
We have designed, fabricated, and tested a nanoparticle-embedded phantom (NEP) incorporated into a model eye in order to characterize the point spread function (PSF) of retinal optical coherence tomography (OCT) devices in three dimensions under realistic imaging conditions. The NEP comprises a sparse distribution of highly backscattering silica-gold nanoshells embedded in a transparent UV-curing epoxy. The commercially-available model eye replicates the key optical structures and focusing power of the human eye. We imaged the model eye-NEP combination with a research-grade spectral domain OCT system designed for in vivo retinal imaging and quantified the lateral and axial PSF dimensions across the field of view in the OCT images. We also imaged the model eye-NEP in a clinical OCT system. Subtle features in the PSF and its dimensions were consistent with independent measurements of lateral and axial resolution. This model eye-based phantom can provide retinal OCT device developers and users a means to rapidly, objectively, and consistently assess the PSF, a fundamental imaging performance metric.
Scale-space point spread function based framework to boost infrared target detection algorithms
NASA Astrophysics Data System (ADS)
Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan
2016-07-01
Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field. However, the lack of accurate target modeling limits the performance of this type of infrared small target detection algorithm. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism based on the matched filter is described in detail, and the effect of the point spread function (PSF) on the intensity and spatial distribution of the target pixels is clarified comprehensively. A new parametric model for small infrared targets is then developed based on the PSF of the imaging system, which can be considered a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. To show its performance, the proposed model is adopted in the Laplacian scale-space algorithm, a well-known method in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance than the Gaussian one and improves the overall performance of an IRST system. Quantitative analysis shows at least a 20% improvement in output SCR values compared with the Laplacian of Gaussian (LoG) algorithm.
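A minimal version of the baseline the paper improves on, Laplacian-of-Gaussian filtering of a scene containing one small bright target, can be sketched as follows. Scene size, target position and amplitude, and the filter scale are made-up values; the paper's framework replaces the Gaussian with a PSF-based model:

```python
import numpy as np

# Baseline Laplacian-of-Gaussian (LoG) detector for a small bright target.
def log_kernel(sigma, size):
    ax = np.arange(size) - size // 2
    X, Y = np.meshgrid(ax, ax)
    r2 = X**2 + Y**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()              # zero mean: flat background gives ~0

def filter2d(img, k):
    """FFT-based 'same' convolution with circular boundary (sketch-grade)."""
    K = np.zeros_like(img, dtype=float)
    kh, kw = k.shape
    K[:kh, :kw] = k
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(K)))

rng = np.random.default_rng(0)
n = 128
scene = 0.1 * rng.standard_normal((n, n))              # background clutter
yy, xx = np.mgrid[0:n, 0:n]
scene += 5.0 * np.exp(-((yy - 40)**2 + (xx - 80)**2) / (2 * 2.0**2))  # target

resp = filter2d(scene, log_kernel(2.0, 21))
ti, tj = np.unravel_index(np.argmin(resp), resp.shape)  # LoG minimum = target
```

A bright blob gives a strongly negative LoG response at its centre, so the detection statistic here is the filter minimum; the SCR gain quoted in the abstract compares this Gaussian-shaped filter against one matched to the true PSF.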
NASA Astrophysics Data System (ADS)
Benameur, S.; Mignotte, M.; Lavoie, F.
2012-03-01
In modern ultrasound imaging systems, spatial resolution is severely limited by the effects of the finite aperture and overall bandwidth of ultrasound transducers and by the non-negligible width of the transmitted ultrasound beams. This low spatial resolution remains the major limiting factor in the clinical usefulness of medical ultrasound images. In order to recover clinically important image details, which are often masked by this resolution limitation, an image restoration procedure should be applied. To this end, an estimate of the Point Spread Function (PSF) of the ultrasound imaging system is required. This paper introduces a novel, reliable, and fast Maximum Likelihood (ML) approach for recovering the PSF of an ultrasound imaging system. This PSF estimation method assumes as a constraint that the PSF is of known parametric form. Under this constraint, the parameter values of the associated Modulation Transfer Function (MTF) are efficiently estimated using a homomorphic filter, a denoising step, and an expectation-maximization (EM) based clustering algorithm. Given this PSF estimate, deconvolution can then be used to improve the spatial resolution of an ultrasound image and to obtain an estimate, independent of the properties of the imaging system, of the true tissue reflectivity function. The experiments reported in this paper demonstrate the efficiency and potential of this new estimation and blind-deconvolution approach.
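The homomorphic step can be sketched in one dimension: for y = h * x the log-magnitude spectra add, and since the MTF is spectrally smooth while the tissue term fluctuates rapidly, low-pass filtering the log-spectrum isolates an MTF estimate. The pulse shape and moving-average filter are toy choices, and the paper's denoising and EM-clustering refinements are omitted:

```python
import numpy as np

# Homomorphic MTF estimation sketch: log|Y(f)| = log|H(f)| + log|X(f)|,
# with |H| smooth in frequency and the tissue term rapidly fluctuating.
rng = np.random.default_rng(3)
n = 4096
x = rng.standard_normal(n)                       # white 'tissue reflectivity'
t = np.arange(-32, 33)
h = np.exp(-t**2 / (2 * 6.0**2)) * np.cos(0.8 * t)   # toy band-pass pulse
y = np.convolve(x, h, mode="same")               # simulated RF line

logY = np.log(np.abs(np.fft.rfft(y)) + 1e-12)
kernel = np.ones(101) / 101                      # moving-average low-pass
log_mtf_est = np.convolve(logY, kernel, mode="same")
mtf_est = np.exp(log_mtf_est)                    # smooth estimate of |H|*const
```

The estimate peaks at the pulse's centre frequency (0.8 rad/sample, i.e. near rfft bin 0.8·n/2π ≈ 521 here); in the paper a parametric MTF is then fitted to this smoothed spectrum.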
Point spread function and optical transfer function of a misaligned hypertelescope
NASA Astrophysics Data System (ADS)
Tcherniavski, Iouri
2011-03-01
Analytical expressions for an instantaneous spectral point spread function (PSF) and an instantaneous spectral optical transfer function of a misaligned hypertelescope without delay lines, and containing circular and/or annular collecting mirrors, are presented. The formulas are deduced on the basis of the Fresnel approach to the Kirchhoff diffraction theory. Numerical results, obtained for a system containing 60 identical annular mirrors, illustrate the pupil-densification properties of the hypertelescope and the influence of random spatial and angular positioning errors of the optical elements on the average PSF (APSF) and on the average modulation transfer function (AMTF), calculated for errors obeying specified probability distributions. The APSF and the AMTF make it possible to estimate the influence of the alignment errors on the resulting image quality in the hypertelescope image plane. This estimate can be used to derive the accuracy requirements for the mirror control system and for the optical-element alignment, as functions of the exit-pupil densification and the desired resolution of the hypertelescope. The notions of the PSF and the modulation transfer function for off-axis beams can be useful for evaluating a hypertelescope's field of view.
NASA Astrophysics Data System (ADS)
Patru, F.; Tarmoul, N.; Mourard, D.; Lardière, O.
2009-06-01
In the future, optical stellar interferometers will provide true images thanks to larger numbers of telescopes and advanced cophasing subsystems. These conditions are required to obtain sufficient resolution elements (resels) in the image and to provide direct images in the hypertelescope mode. It has already been shown that hypertelescopes provide snapshot images with a significant gain in sensitivity, without any loss of the useful field of view for direct imaging applications. This paper studies the properties of the point spread functions of future large arrays operating in the hypertelescope mode. Numerical simulations have been performed and criteria defined to study the image properties. It is shown that the choice of array configuration is a trade-off between the resolution, the halo level and the field of view. A regular array pattern optimizes the image quality (low halo level and maximum encircled energy in the central peak) but decreases the useful field of view. Moreover, a non-redundant array is less sensitive to the space aliasing effect than a redundant array.
NASA Astrophysics Data System (ADS)
Martin, Olivier A.; Correia, Carlos M.; Gendron, Eric; Rousset, Gerard; Gratadour, Damien; Vidal, Fabrice; Morris, Tim J.; Basden, Alastair G.; Myers, Richard M.; Neichel, Benoit; Fusco, Thierry
2016-10-01
In preparation for future multiobject spectrographs (MOS), one of whose major roles is to provide extensive statistical studies of surveyed high-redshift galaxies, the demonstrator CANARY has been designed to tackle the technical challenges of open-loop adaptive optics (AO) control with joint natural guide star and laser guide star tomography. We have developed a point spread function (PSF) reconstruction algorithm dedicated to multiobject adaptive optics systems that uses system telemetry to estimate the PSF potentially anywhere in the observed field, a prerequisite for post-processing AO-corrected observations in integral field spectroscopy. We show how to handle off-axis data to estimate the PSF using atmospheric tomography, and compare this to a classical approach that uses the on-axis residual phase from a truth sensor observing a bright natural source. We have reconstructed over 450 on-sky CANARY PSFs and obtain a bias/1-σ standard deviation (std) of 1.3/4.8 on the H-band Strehl ratio (SR), with 92.3% correlation between reconstructed and sky SR. For the full width at half maximum (FWHM), we obtain, respectively, 2.94 mas, 19.9 mas, and 88.3% for the bias, std, and correlation. The reference method achieves 0.4/3.5/95% on the SR and 2.71 mas/14.9 mas/92.5% on the FWHM for the bias/std/correlation.
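As a back-of-the-envelope link between residual wavefront error and the Strehl ratios discussed above, AO work commonly uses the extended Maréchal approximation SR ≈ exp(−σ²). This is a generic rule of thumb, not CANARY's actual reconstructor, and the 1650 nm wavelength below is an assumed stand-in for H band:

```python
import numpy as np

# Extended Marechal approximation: SR ~ exp(-sigma^2), with sigma the rms
# residual wavefront phase error in radians at the imaging wavelength.
def strehl_marechal(wfe_nm, wavelength_nm=1650.0):
    """Strehl ratio from rms wavefront error in nm (H band by default)."""
    sigma = 2.0 * np.pi * wfe_nm / wavelength_nm   # phase error [rad]
    return float(np.exp(-sigma**2))

sr_good = strehl_marechal(100.0)   # well-corrected residual -> SR ~ 0.87
sr_poor = strehl_marechal(300.0)   # poorly corrected residual -> SR ~ 0.27
```

PSF-reconstruction pipelines effectively refine this scalar relation into a full 2-D PSF by propagating the measured residual phase statistics through the imaging model.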
Multipath exploitation in through-wall radar imaging via point spread functions.
Setlur, Pawan; Alli, Giovanni; Nuzzo, Luigia
2013-12-01
Due to several sources of multipath in through-wall radar sensing, such as walls, floors, and ceilings, multipath ghosts can appear alongside the genuine targets in the synthetic aperture beamformed image. The multipath ghosts are false positives and therefore confusable with genuine targets. Here, we develop a multipath exploitation technique using point spread functions, which associates the multipath ghosts with their genuine targets and maps them back, thereby increasing the effective signal-to-clutter ratio (SCR) at the genuine target locations. To do so, we first develop a multipath model employing the Householder transformation, which permits modeling multiple reflections at multiple walls and allows for unconventional room/building geometries. Second, closed-form solutions for the multipath ghost locations assuming free-space propagation are derived. Third, a nonlinear least squares optimization is formulated and initialized with these free-space solutions to localize the multipath ghosts in through-wall radar sensing. The exploitation approach is general and does not require a priori assumptions on the number of targets. The free-space multipath ghost locations and the exploitation technique derived here may be used as-is for multipath exploitation in urban canyons via synthetic aperture radar. Analytical expressions quantifying the SCR gain after multipath exploitation are derived. The analysis is validated with electromagnetic (EM) finite-difference time-domain simulation results.
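The Householder step can be sketched directly: a flat wall acts as a mirror, so a ghost is the reflection of the true target through the wall plane, and applying the same map to the ghost recovers the target. The 2-D wall geometry and positions below are illustrative, not from the paper:

```python
import numpy as np

# Specular bounce off a flat wall as a Householder reflection:
# p' = (I - 2 n n^T) p + 2 d n reflects p through the plane n . x = d.
def mirror_through_plane(p, n, d):
    """Reflect point p through the plane n . x = d (n is normalized
    internally; d is the offset along the unit normal)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    H = np.eye(len(n)) - 2.0 * np.outer(n, n)     # Householder matrix
    return H @ np.asarray(p, dtype=float) + 2.0 * d * n

target = np.array([3.0, 2.0])
ghost = mirror_through_plane(target, n=[1.0, 0.0], d=5.0)      # wall at x = 5
recovered = mirror_through_plane(ghost, n=[1.0, 0.0], d=5.0)   # involution
```

Because the reflection is an involution, the same transform both predicts where a given target's ghost falls and maps an observed ghost back to its genuine target, which is the association step the abstract describes.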
Merwa, Robert; Scharfetter, Hermann
2007-07-01
Magnetic induction tomography (MIT) is a low-resolution imaging modality used for reconstructing the changes of the passive electrical properties in a target object. For an imaging system, it is very important to give forecasts about the image quality. Both the maximum resolution and the correctness of the location of the inhomogeneities are of major interest. Furthermore, the smallest object which can be detected for a certain noise level is a criterion for the diagnostic value of an image. The properties of an MIT image depend on the position inside the object, the conductivity distribution and of course on the location and the number of excitation coils and receiving coils. Quantitative statements cannot be made in general, but it is feasible to predict the image quality for a selected problem. For electrical impedance tomography (EIT), the theoretical limits of image quality have been studied carefully, and a comparable analysis for MIT is necessary. Thus, a simplified analysis of the resolution, dimensions and location of an inhomogeneity was carried out by means of an evaluation of the point spread function (PSF). In analogy to EIT, the PSF depends strongly on the location, showing the broadest distribution in the centre of the object. As the amount of regularization is increased to cope with increasing measurement noise, the PSF broadens and its centre shifts towards the borders of the object. The resolution is inversely proportional to the width of the PSF: it increases when moving from the centre towards the border of the object and decreases with increasing noise.
NASA Astrophysics Data System (ADS)
Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.
2002-12-01
Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated
Correction for collimator-detector response in SPECT using point spread function template.
Chun, Se Young; Fessler, Jeffrey A; Dewaraja, Yuni K
2013-02-01
Compensating for the collimator-detector response (CDR) in SPECT is important for accurate quantification. The CDR consists of both a geometric response and a septal penetration and collimator scatter response. The geometric response can be modeled analytically and is often used for modeling the whole CDR if the geometric response dominates. However, for radionuclides that emit medium or high-energy photons such as I-131, the septal penetration and collimator scatter response is significant and its modeling in the CDR correction is important for accurate quantification. There are two main methods for modeling the depth-dependent CDR so as to include both the geometric response and the septal penetration and collimator scatter response. One is to fit a Gaussian plus exponential function that is rotationally invariant to the measured point source response at several source-detector distances. However, a rotationally-invariant exponential function cannot represent the star-shaped septal penetration tails in detail. Another is to perform Monte-Carlo (MC) simulations to generate the depth-dependent point spread functions (PSFs) for all necessary distances. However, MC simulations, which require careful modeling of the SPECT detector components, can be challenging and accurate results may not be available for all of the different SPECT scanners in clinics. In this paper, we propose an alternative approach to CDR modeling. We use a Gaussian function plus a 2-D B-spline PSF template and fit the model to measurements of an I-131 point source at several distances. The proposed PSF-template-based approach is nearly non-parametric, captures the characteristics of the septal penetration tails, and minimizes the difference between the fitted and measured CDR at the distances of interest. The new model is applied to I-131 SPECT reconstructions of experimental phantom measurements, a patient study, and a MC patient simulation study employing the XCAT phantom. The proposed model
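The classical rotation-invariant alternative discussed above, a Gaussian core plus an exponential tail fitted to a measured point-source profile, can be sketched with the shape parameters held fixed (an assumption made here so the two amplitudes reduce to linear least squares). All numbers are synthetic, not I-131 measurements:

```python
import numpy as np

# Fit amplitudes of a Gaussian-plus-exponential radial CDR model to a noisy
# synthetic point-source profile. With sigma and mu fixed, the model is
# linear in the amplitudes and solvable by least squares.
rng = np.random.default_rng(1)
r = np.linspace(0.0, 40.0, 200)                       # radius, arbitrary units
true_profile = 1.0 * np.exp(-r**2 / (2 * 4.0**2)) + 0.05 * np.exp(-r / 12.0)
measured = true_profile + 0.002 * rng.standard_normal(r.size)

sigma, mu = 4.0, 1.0 / 12.0                           # assumed fixed widths
A = np.column_stack([np.exp(-r**2 / (2 * sigma**2)), np.exp(-mu * r)])
(amp_gauss, amp_exp), *_ = np.linalg.lstsq(A, measured, rcond=None)
```

The paper's point is that such a rotationally-invariant tail cannot represent the star-shaped septal-penetration pattern, which motivates replacing the exponential term with a 2-D B-spline template fitted to the measured PSF.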
NASA Technical Reports Server (NTRS)
Duggin, M. J.; Schoch, L. B.
1984-01-01
The point-spread function is an important factor in determining the nature of feature types from multispectral recorded radiance, particularly for heterogeneous scenes and for scenes imaged repetitively in order to provide thematic characterization by means of multitemporal signatures. To demonstrate the effect of the interaction of scene heterogeneity with the point spread function (PSF), a template was constructed from the line spread function (LSF) data for the Thematic Mapper photoflight model. The template was stepped in 0.25 (nominal) pixel increments in the scan-line direction across three scenes of differing heterogeneity. The sensor output was calculated from the computed scene radiance of each scene element falling between the contours of the PSF template, which was plotted on a movable mylar sheet and positioned at each location in turn.
Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.
2002-01-01
Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the classification accuracy lost to the residual impact of PSF effects was reduced to only 1.66% in overall accuracy, and the increase in the RMSE of estimated subpixel land cover proportions was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images: about 50% of the estimation errors were removed after applying the deconvolution method and aggregating the derived proportion images to twice their original pixel size. © 2002 Elsevier Science Inc. All rights reserved.
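The brightening/darkening effect described above is easy to reproduce with a toy normalized kernel standing in for the MODIS PSF; the scene and weights are illustrative:

```python
import numpy as np

# A normalized PSF pulls signal across pixel boundaries: dark pixels next to
# a bright object get brighter, and the bright object's edge pixels darken.
psf = np.array([[0.05, 0.1, 0.05],
                [0.1,  0.4, 0.1],
                [0.05, 0.1, 0.05]])          # toy 3x3 kernel, sums to 1.0

scene = np.full((9, 9), 10.0)                # dark background
scene[3:6, 3:6] = 100.0                      # bright 3x3 object

# 'same'-size convolution with edge replication, written out for clarity
pad = np.pad(scene, 1, mode="edge")
blurred = sum(psf[i, j] * pad[i:i + 9, j:j + 9]
              for i in range(3) for j in range(3))
```

Only the object's interior pixel keeps its original value; every edge pixel of the bright object loses signal to its dark neighbours, which is the per-pixel bias that the paper's adjusted-weight deconvolution then tries to undo.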
POINT-SPREAD FUNCTIONS FOR THE EXTREME-ULTRAVIOLET CHANNELS OF SDO/AIA TELESCOPES
Poduval, B.; DeForest, C. E.; Schmelz, J. T.; Pathak, S.
2013-03-10
We present the stray-light point-spread functions (PSFs) and their inverses we characterized for the Atmospheric Imaging Assembly (AIA) EUV telescopes on board the Solar Dynamics Observatory (SDO) spacecraft. The inverse kernels are approximate inverses under convolution. Convolving the original Level 1 images with them produces images with improved stray-light characteristics. We demonstrate the usefulness of these PSFs by applying them to two specific cases: photometry and differential emission measure (DEM) analysis. The PSFs consist of a narrow Gaussian core, a diffraction component, and a diffuse component represented by the sum of a Gaussian-truncated Lorentzian and a shoulder Gaussian. We determined the diffraction term using the measured geometry of the diffraction pattern identified in flare images and the theoretically computed intensities of the principal maxima of the first few diffraction orders. To determine the diffuse component, we fitted its parameterized model using iterative forward-modeling of the lunar interior in the SDO/AIA images from the 2011 March 4 lunar transit. We find that deconvolution significantly improves the contrast in dark features such as miniature coronal holes, though the effect was marginal in bright features. On a percentage-scattering basis, the PSFs for SDO/AIA are better by a factor of two than that of the EUV telescope on board the Transition Region And Coronal Explorer mission. A preliminary analysis suggests that deconvolution alone does not affect DEM analysis of small coronal loop segments with suitable background subtraction. We include the derived PSFs and their inverses as supplementary digital materials.
Neural network simulation of the atmospheric point spread function for the adjacency effect research
NASA Astrophysics Data System (ADS)
Ma, Xiaoshan; Wang, Haidong; Li, Ligang; Yang, Zhen; Meng, Xin
2016-10-01
The adjacency effect can be regarded as the convolution of the atmospheric point spread function (PSF) with the surface-leaving radiance. Monte Carlo simulation is a common method for computing the atmospheric PSF, but it yields no analytic expression, and meaningful results can only be obtained by statistical analysis of millions of samples. A backward Monte Carlo algorithm was employed to simulate photon emission and propagation in the atmosphere under different conditions. The PSF was determined by recording the number of photons received in fixed bins at different positions. A multilayer feed-forward neural network with a single hidden layer was designed to learn the relationship between the PSFs and the input condition parameters, and was trained with the back-propagation learning rule. Its input parameters comprised the atmospheric conditions, the spectral range, and the observing geometry; its outputs were the photon counts in the corresponding bins. Because a single network could not accommodate so many output units, the large network was divided into a collection of smaller ones, which can be run simultaneously on many workstations and/or PCs to speed up training. It is important to note that the PSFs simulated by the Monte Carlo technique at non-nadir viewing angles are more complicated than those at nadir, which complicates the design of the neural network. The results obtained show that the neural network approach can be very useful for computing the atmospheric PSF from the simulated data generated by the Monte Carlo method.
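A toy version of the Monte Carlo estimate that such a network is trained to replace might look like the following; the scattering distributions are illustrative stand-ins, not a real atmospheric model:

```python
import numpy as np

# Toy Monte Carlo PSF: launch photons, let each undergo one small-angle
# scattering event at a random altitude, and histogram where they land
# relative to the target pixel. Normalized counts approximate the PSF.
rng = np.random.default_rng(2)
n_photons = 100_000
altitude = rng.exponential(scale=1.0, size=n_photons)    # scatter height [km]
theta = rng.normal(scale=0.05, size=n_photons)           # small-angle scatter
phi = rng.uniform(0.0, 2.0 * np.pi, size=n_photons)      # azimuth

r = altitude * np.tan(np.abs(theta))                     # ground displacement
x, y = r * np.cos(phi), r * np.sin(phi)

counts, _, _ = np.histogram2d(x, y, bins=41, range=[[-1, 1], [-1, 1]])
psf = counts / counts.sum()                              # normalized PSF
```

The binned counts are exactly the kind of target the abstract's networks learn: given condition parameters (here the fixed scattering scales), predict the photon count per bin without rerunning the simulation.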
X-ray optical systems: from metrology to Point Spread Function
NASA Astrophysics Data System (ADS)
Spiga, Daniele; Raimondi, Lorenzo
2014-09-01
One of the problems often encountered in X-ray mirror manufacturing is setting proper manufacturing tolerances to guarantee an angular resolution - often expressed in terms of the Point Spread Function (PSF) - as needed by the specific science goal. To do this, we need an accurate metrological apparatus covering a very broad range of spatial frequencies, and an affordable method to compute the PSF from the metrology dataset. In past years, a wealth of methods, based on either geometrical optics or perturbation theory in the smooth-surface limit, have been proposed to treat long-period profile errors and high-frequency surface roughness, respectively. However, the separation between these spectral ranges is difficult to define exactly, and it is also unclear how to affordably combine the PSFs, computed with different methods in different spectral ranges, into a PSF expectation at a given X-ray energy. For this reason, we have proposed a method entirely based on the Huygens-Fresnel principle to compute the diffracted field of real Wolter-I optics, including measured defects over a wide range of spatial frequencies. Owing to the shallow angles at play, the computation can be simplified by restricting it to the longitudinal profiles, completely neglecting the effect of roundness errors. Other authors had already proposed similar approaches in the past, but only in the far-field approximation, so they could not be applied to Wolter-I optics, in which two reflections occur in sequence within a short range. The method we suggest is versatile: it can be applied to multiple-reflection systems, at any X-ray energy, and regardless of the nominal shape of the mirrors in the optical system. The method has been implemented in the WISE code, successfully used to explain the measured PSFs of multilayer-coated optics for astronomical use, and of a K-B optical system in use at the FERMI free-electron laser.
Backlund, Mikael P.; Joyner, Ryan; Weis, Karsten; Moerner, W. E.
2014-01-01
Single-particle tracking has been applied to study chromatin motion in live cells, revealing a wealth of dynamical behavior of the genomic material once believed to be relatively static throughout most of the cell cycle. Here we used the dual-color three-dimensional (3D) double-helix point spread function microscope to study the correlations of movement between two fluorescently labeled gene loci on either the same or different budding yeast chromosomes. We performed fast (10 Hz) 3D tracking of the two copies of the GAL locus in diploid cells in both activating and repressive conditions. As controls, we tracked pairs of loci along the same chromosome at various separations, as well as transcriptionally orthogonal genes on different chromosomes. We found that under repressive conditions, the GAL loci exhibited significantly higher velocity cross-correlations than they did under activating conditions. This relative increase has potentially important biological implications, as it might suggest coupling via shared silencing factors or association with decoupled machinery upon activation. We also found that on the time scale studied (∼0.1–30 s), the loci moved with significantly higher subdiffusive mean square displacement exponents than previously reported, which has implications for the application of polymer theory to chromatin motion in eukaryotes. PMID:25318676
NASA Astrophysics Data System (ADS)
Walsh, J. R.
2004-02-01
The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky (``spaxels'') onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread though the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly
The MATPHOT Algorithm for Digital Point Spread Function CCD Stellar Photometry
NASA Astrophysics Data System (ADS)
Mighell, Kenneth J.
Most CCD stellar photometric reduction packages use analytical functions to represent the stellar Point Spread Function (PSF). These PSF-fitting programs generally compute all the major partial derivatives of the observational model by differentiating the volume integral of the PSF over a pixel. Real-world PSFs are frequently very complicated and may not be exactly representable with any combination of analytical functions. Deviations of the real-world PSF from the analytical PSF are then generally stored in a residual matrix. Diffraction rings and spikes can provide a great deal of information about the position of a star, yet information about such common observational effects generally resides only in the residual matrix. Such useful information is generally not used in the PSF-fitting process except for the final step involving the determination of the chi-square goodness-of-fit between the CCD observation and the model where the intensity-scaled residual matrix is added to the mathematical model of the observation just before the goodness-of-fit is computed. I describe some of the key features of my MATPHOT algorithm for digital PSF-fitting CCD stellar photometry where the PSF is represented by a matrix of numbers. The mathematics of determining the partial derivatives of the observational model with respect to the x and y direction vectors is exactly the same with analytical or digital PSFs. The implementation methodology, however, is quite different. In the case of digital PSFs, the partial derivatives can be determined using numerical differentiation techniques on the digital PSFs. I compare the advantages and disadvantages with respect to traditional PSF-fitting algorithms based on analytical representations of the PSF. The MATPHOT algorithm is an ideal candidate for parallel processing. Instead of operating in the traditional single-processor mode of analyzing one pixel at a time, the MATPHOT algorithm can be written to operate on an image-plane basis
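The numerical differentiation of a digital PSF described above can be sketched directly. This is a hypothetical helper, not MATPHOT code; central differences with wrap-around edges are an assumption, acceptable when the PSF decays to near zero at the matrix borders:

```python
import numpy as np

def psf_partials(psf, dx=1.0):
    """Approximate the partial derivatives of a sampled (digital) PSF
    matrix with respect to the x and y direction vectors using central
    finite differences, standing in for the analytical derivatives used
    by classical PSF-fitting codes. Edges wrap via np.roll."""
    dpdx = (np.roll(psf, -1, axis=1) - np.roll(psf, 1, axis=1)) / (2 * dx)
    dpdy = (np.roll(psf, -1, axis=0) - np.roll(psf, 1, axis=0)) / (2 * dx)
    return dpdx, dpdy
```

For a symmetric Gaussian PSF the x-derivative is positive left of the peak, negative right of it, and zero at the center, which gives a quick correctness check.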
Angelie, E; Sappey-Marinier, D; Mallet, J; Bonmartin, A; Sau, J
2000-06-01
Magnetic resonance spectroscopic imaging is limited by a low signal-to-noise ratio, so a compromise between spatial resolution and examination time is needed in clinical applications. The reconstruction of a truncated signal introduces a Point Spread Function that considerably affects the spatial resolution. In order to reduce spatial contamination, three methods, applied after Fourier-transform image reconstruction and based on deconvolution or iterative techniques, are tested to decrease Point Spread Function contamination. A Gauss-Seidel (GS) algorithm is used for the iterative techniques, with and without a non-negative constraint (GS+). Convergence and noise-dependence studies of the GS algorithm have been carried out. The linear property of contamination was validated on a point-sample phantom. A significant decrease in contamination without broadening the spatial resolution was obtained with the GS+ method compared to conventional apodization. This post-processing method can provide contrast enhancement of clinical spectroscopic images without changes in acquisition time.
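A Gauss-Seidel iteration with a non-negative constraint amounts to a projected Gauss-Seidel solver. The following toy version (names and convergence settings are assumptions, not the paper's implementation) clips each component update at zero, which is the essence of the GS+ variant:

```python
import numpy as np

def gauss_seidel_nonneg(A, b, n_iter=200):
    """Projected Gauss-Seidel solver for A x = b: each in-place
    component update is clipped at zero, implementing a non-negative
    constraint. Convergence assumes A is diagonally dominant."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):
            # residual for equation i with the i-th unknown excluded
            r = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = max(r / A[i, i], 0.0)
    return x
```

When the unconstrained solution is already non-negative the projection is inactive and the method converges to the ordinary Gauss-Seidel answer; otherwise the offending components are pinned at zero.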
NASA Technical Reports Server (NTRS)
Salomonson, V. V.; Nickeson, J. E.; Bodechtel, J.; Zilger, J.
1988-01-01
Point-spread function (PSF) comparisons were made between the Modular Optoelectronic Multispectral Scanner (MOMS-01), the LANDSAT Thematic Mapper (TM) and the SPOT-HRV instruments, principally near Lake Nakuru, Kenya. The results, expressed in terms of the width of the point spread functions at the 50 percent power points as determined from the in-scene analysis, show that the TM has a PSF equal to or narrower than the MOMS-01 instrument (50 to 55 for the TM versus 50 to 68 for the MOMS). The SPOT estimates of the PSF range from 36 to 40. When the MOMS results are adjusted for differences in edge scanning as compared to the TM and SPOT, they are nearer 40 in the 575 to 625 nm band.
NASA Astrophysics Data System (ADS)
Belmonte, Aniceto M.; Comeron, Adolfo; Bara, Javier; Rubio, Juan A.; Fernandez, Estela; Menendez-Valdes, Pedro
1995-06-01
In planned intersatellite optical communication systems, the optical payload on board the geostationary satellite will be periodically pointed towards an Optical Ground Station. When the satellite-ground link is established, the turbulence-induced disturbances must be taken into account. The subject of this paper is to assess the statistics for the power fadings that result from the instantaneous point-spread function distortion. Results predicted by an approximate technique which considers the instantaneous point-spread function as a gaussian intensity distribution displaced from the focus due to the angle-of-arrival tilt are compared against results obtained from wavefront simulations produced by fractal generation techniques. The reduction in the cumulative probability of losses that can be obtained by spatial averaging using a multiaperture receiver is also assessed.
NASA Astrophysics Data System (ADS)
Belmonte, Aniceto M.; Comeron, Adolfo; Bara, Javier; Rubio, Juan A.; Fernandez, Estela; Menendez-Valdes, Pedro
1995-04-01
In planned intersatellite optical communication systems, the optical payload on board the geostationary satellite will be periodically pointed towards an Optical Ground Station. When the satellite-ground link is established, the turbulence-induced disturbances must be taken into account. The subject of this paper is to assess the statistics for the power fadings that result from the instantaneous point-spread function distortion. Results predicted by an approximate technique which considers the instantaneous point-spread function as a gaussian intensity distribution displaced from the focus due to the angle-of-arrival tilt are compared against results obtained from wavefront simulations produced by fractal generation techniques. The reduction in the cumulative probability of losses that can be obtained by spatial averaging using a multiaperture receiver is also assessed.
NASA Astrophysics Data System (ADS)
Clénet, Y.; Gendron, E.; Gratadour, D.; Rousset, G.; Vidal, F.
2015-11-01
Context. The science case studies and the optimized design of future adaptive-optics-fed extremely large telescope instruments require precise simulation of their adaptive optics systems, potentially over the whole field of view, whatever adaptive optics flavor the instruments are equipped with. Aims: We simulate the anisoplanatism effect on the extremely large telescope single conjugate adaptive optics point spread function, characterizing the expected degradation of the correction with off-axis distance in terms of the point spread function Strehl ratio and profile. Methods: Adaptive optics simulations at the scale of extremely large telescopes are challenging given the large parameter space to explore for the adaptive optics dimensioning and the large number of degrees of freedom at play for a given set of simulation parameters. To address this problem, we used three different simulation tools with increasing degrees of fidelity compared to a real adaptive optics system. The first is based on analytical formulae and allowed us to derive the Strehl ratio degradation with off-axis distance. The second is a Fourier-based code and provided us with both the Strehl ratio and the point spread function profile in the field. The last is an end-to-end code based on graphical processing unit technology and also provided us with the Strehl ratio and the point spread function profile in the field. Results: The three tools demonstrated fast execution times even at the extremely large telescope scale. Cross-checks between the different codes were performed and demonstrated the coherency of the results. In addition to the expected degradation of the adaptive optics performance with the field angle, we demonstrated that in the simulation conditions applicable to the E-ELT, the single conjugate adaptive optics point spread function remains topped with a coherent core even at off-axis distances as large as 60
NASA Technical Reports Server (NTRS)
Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.
1990-01-01
PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.
Pogosyan, Dmitry; Gay, Christophe; Pichon, Christophe
2009-10-15
The full moments expansion of the joint probability distribution of an isotropic random field, its gradient, and the invariants of its Hessian is presented in 2D and 3D. It allows explicit expressions for the Euler characteristic in N dimensions and computation of extrema counts as functions of the excursion-set threshold and the spectral parameter, as illustrated on model examples.
PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
NASA Astrophysics Data System (ADS)
Pletinckx, D.
2011-09-01
The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.
Lupini, Andrew R; de Jonge, Niels
2011-10-01
Aberration correction reduces the depth of field in scanning transmission electron microscopy (STEM) and thus allows three-dimensional (3D) imaging by depth sectioning. This imaging mode offers the potential for sub-Ångstrom lateral resolution and nanometer-scale depth sensitivity. For biological samples, which may be many microns across and where high lateral resolution may not always be needed, optimizing the depth resolution even at the expense of lateral resolution may be desired, aiming to image through thick specimens. Although there has been extensive work examining and optimizing the probe formation in two dimensions, there is less known about the probe shape along the optical axis. Here the probe shape is examined in three dimensions in an attempt to better understand the depth resolution in this mode. Examples are presented of how aberrations change the probe shape in three dimensions, and it is found that off-axial aberrations may need to be considered for focal series of large areas. It is shown that oversized or annular apertures theoretically improve the vertical resolution for 3D imaging of nanoparticles. When imaging nanoparticles of several nanometer size, regular STEM can thereby be optimized such that the vertical full-width at half-maximum approaches that of the aberration-corrected STEM with a standard aperture.
3D multifocus astigmatism and compressed sensing (3D MACS) based superresolution reconstruction
Huang, Jiaqing; Sun, Mingzhai; Gumpper, Kristyn; Chi, Yuejie; Ma, Jianjie
2015-01-01
Single molecule based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When the emitter density is low in each frame, each emitter can be localized with nanometer resolution. However, when the emitter density rises, causing significant overlapping, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate emitters at high density causes poor temporal resolution of localization-based superresolution techniques and significantly limits their application in 3D live cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that allows us to precisely locate the positions of emitters even when they significantly overlap in three dimensional space. Our platform involves a multi-focus system in combination with astigmatic optics and an ℓ1-Homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy, but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphic processing unit (GPU), which speeds up processing 10-fold compared with a central processing unit (CPU) implementation. We tested our method with both simulated data and experimental data of fluorescently labeled microtubules and were able to reconstruct a 3D microtubule image with 1000 frames (512×512) acquired within 20 seconds. PMID:25798314
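The 3D weighted-centroid step that follows debiasing reduces, in essence, to an intensity-weighted average of candidate grid positions. A minimal sketch of that reduction (an illustrative helper, not the authors' GPU code):

```python
import numpy as np

def weighted_centroid_3d(coords, weights):
    """3D weighted centroid: collapses a cluster of candidate emitter
    positions (e.g. from a discrete compressed-sensing grid) into one
    continuous localization. `coords` is (N, 3), `weights` is (N,)."""
    w = np.asarray(weights, dtype=float)
    c = np.asarray(coords, dtype=float)
    return (c * w[:, None]).sum(axis=0) / w.sum()
```

Because the result is a weight-averaged position rather than a grid node, it partially removes the discretization bias of the compressed-sensing grid.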
Martin, Michael K; Helm, Julie; Patyk, Kelly A
2015-06-15
We describe a method for de-identifying point location data used for disease spread modeling, allowing data custodians to share data with modeling experts without disclosing individual farm identities. The approach is implemented in an open-source software program that is described and evaluated here. The program allows a data custodian to select a level of de-identification based on the K-anonymity statistic. The program converts a file of true farm locations and attributes into a file appropriate for use in disease spread modeling, with the locations randomly modified to prevent re-identification based on location. Important epidemiological relationships such as clustering are preserved as much as possible, so that modeling results resemble those obtained with the true, identifiable data. The software implementation was verified by visual inspection and basic descriptive spatial analysis of the output. Performance is sufficient to allow de-identification of even large data sets on the desktop computers available to any data custodian.
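The basic idea, perturbing each location by a bounded random offset so that identities cannot be recovered while coarse spatial clustering survives, can be sketched as follows. The published tool instead derives the shift magnitude from a K-anonymity criterion, so this toy version (names and the uniform-offset choice are assumptions) is illustrative only:

```python
import random

def deidentify(points, max_shift):
    """Toy location de-identification: move each (x, y) farm location
    by a random offset of at most `max_shift` in each coordinate.
    Small shifts preserve broad clustering; large shifts prevent
    re-identification based on location."""
    out = []
    for x, y in points:
        out.append((x + random.uniform(-max_shift, max_shift),
                    y + random.uniform(-max_shift, max_shift)))
    return out
```

Choosing `max_shift` is the crux: it trades re-identification risk against the fidelity of spatial relationships that the disease spread model depends on.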
Identifying the starting point of a spreading process in complex networks.
Comin, Cesar Henrique; Costa, Luciano da Fontoura
2011-11-01
When dealing with the dissemination of epidemics, one important question that can be asked is the location where the contamination began. In this paper, we analyze three spreading schemes and propose and validate an effective methodology for the identification of the source nodes. The method is based on the calculation of the centrality of the nodes on the sampled network, expressed here by degree, betweenness, closeness, and eigenvector centrality. We show that the source node tends to have the highest measurement values. The potential of the methodology is illustrated with respect to three theoretical complex network models as well as a real-world network, the email network of the University Rovira i Virgili.
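The method boils down to computing node centralities on the sampled network and ranking nodes, the source tending to score highest. A sketch using closeness centrality on an unweighted graph (the adjacency-dict representation and helper names are assumptions; the paper also considers degree, betweenness, and eigenvector centrality):

```python
from collections import deque

def closeness(adj, s):
    """Closeness centrality of node s: (reachable nodes - 1) divided by
    the sum of shortest-path distances, via BFS on an unweighted graph
    given as {node: [neighbors]}."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

def likely_source(adj):
    """Rank nodes by closeness and return the top node, the candidate
    origin of the spreading process."""
    return max(adj, key=lambda n: closeness(adj, n))
```

On a star graph the hub is correctly singled out, and on a path graph the middle node wins, matching the intuition that the infection's origin sits centrally in the sampled contaminated subnetwork.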
Duan, Xinhui; Wang, Jia; Qu, Mingliang; Leng, Shuai; Liu, Yu; Krambeck, Amy; McCollough, Cynthia
2014-01-01
Purpose We propose a method to improve the accuracy of volume estimation of kidney stones from computerized tomography images. Materials and Methods The proposed method consisted of 2 steps. A threshold equal to the average of the computerized tomography number of the object and the background was first applied to determine full width at half maximum volume. Correction factors were then applied, which were precalculated based on a model of a sphere and a 3-dimensional Gaussian point spread function. The point spread function was measured in a computerized tomography scanner to represent the response of the scanner to a point-like object. Method accuracy was validated using 6 small cylindrical phantoms with 2 volumes of 21.87 and 99.9 mm3, and 3 attenuations, respectively, and 76 kidney stones with a volume range of 6.3 to 317.4 mm3. Volumes estimated by the proposed method were compared with full width at half maximum volumes. Results The proposed method was significantly more accurate than full width at half maximum volume (p <0.0001). The magnitude of improvement depended on stone volume with smaller stones benefiting more from the method. For kidney stones 10 to 20 mm3 in volume the average improvement in accuracy was the greatest at 19.6%. Conclusions The proposed method achieved significantly improved accuracy compared with threshold methods. This may lead to more accurate stone management. PMID:22819107
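The first step, full-width-at-half-maximum segmentation, can be sketched directly. The subsequent correction factors, precalculated from the sphere and 3D Gaussian PSF model, are omitted here, so this is only the uncorrected baseline that the paper improves upon:

```python
import numpy as np

def fwhm_volume(img, voxel_vol, background):
    """Full-width-at-half-maximum volume of a bright object in a CT
    volume: threshold at the mean of the object's peak CT number and
    the background, then sum the volumes of voxels above threshold."""
    thr = 0.5 * (img.max() + background)
    return np.count_nonzero(img > thr) * voxel_vol
```

For small stones this thresholded count is what systematically misestimates volume, which is why the correction step as a function of stone size matters most for stones in the 10 to 20 mm3 range.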
Nonparaxial Fourier propagation tool for aberration analysis and point spread function calculation
NASA Astrophysics Data System (ADS)
Cain, Stephen C.; Watts, Tatsuki
2016-08-01
This paper describes a Fourier propagator for computing the impulse response of an optical system, while including terms ignored in Fresnel and Fraunhofer calculations. The propagator includes a Rayleigh-Sommerfeld diffraction formula calculation from a distant point through the optical system to its image point predicted by geometric optics. The propagator then approximates the neighboring field points via the traditional binomial approximation of the Taylor series expansion around that field point. This technique results in a propagator that combines the speed of a Fourier transform operation with the accuracy of the Rayleigh-Sommerfeld diffraction formula calculation and extends Fourier optics to cases that are nonparaxial. The proposed propagator facilitates direct calculation of aberration coefficients, making it more versatile than the angular spectrum propagator. Bounds on the phase error introduced by the approximations are derived, which show that it should be more widely applicable than the Fresnel propagator. Guidance on how to sample the pupil and detector planes of a simulated imaging system is provided. This report concludes by showing examples of diffraction calculations for a laboratory setup and comparing them to measured diffraction patterns to demonstrate the utility of the propagator.
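For contrast with the proposed nonparaxial propagator, the standard angular-spectrum propagator it is compared against can be written in a few lines. This is a generic sketch, not the paper's method; sampling values in the usage check are arbitrary assumptions:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Standard angular-spectrum propagation of a sampled complex field
    over a distance z: FFT to spatial frequencies, multiply by the
    propagation phase, inverse FFT. Evanescent components (negative
    argument under the square root) are suppressed."""
    n = field.shape[0]                      # assumes a square grid
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))
```

Because the transfer function has unit modulus for propagating components, energy is conserved, and at z = 0 the field is returned unchanged; both properties make convenient sanity checks.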
NASA Technical Reports Server (NTRS)
Tschunko, H. F. A.
1983-01-01
Reference is made to a study by Tschunko (1979) in which it was discussed how apodization modifies the modulation transfer function for various central obstruction ratios. It is shown here how apodization, together with the central obstruction ratio, modifies the point spread function, which is the basic element for the comparison of imaging performance and for the derivation of energy integrals and other functions. At high apodization levels and lower central obstruction (less than 0.1), new extended radial zones are formed in the outer part of the central ring groups. These transmutations of the image functions are of more than theoretical interest, especially if the irradiance levels in the outer ring zones are to be compared to the background irradiance levels. Attention is then given to the energy distribution in point images generated by annular apertures apodized by various transmission functions. The total energy functions are derived; partial energy integrals are determined; and background irradiance functions are discussed.
NASA Astrophysics Data System (ADS)
Kühmstedt, Peter; Bräuer-Burchardt, Christian; Munkelt, Christoph; Heinze, Matthias; Palme, Martin; Schmidt, Ingo; Hintersehr, Josef; Notni, Gunther
2007-09-01
Here a new set-up of a 3D-scanning system for CAD/CAM in the dental industry is proposed. The system is designed for direct scanning of dental preparations within the mouth. The measuring process is based on a phase correlation technique in combination with fast fringe projection in a stereo arrangement. The novelty of the approach is characterized by the following features: A phase correlation between the phase values of the images of two cameras is used for the co-ordinate calculation. This works contrary to the usage of only phase values (phasogrammetry) or classical triangulation (phase values and camera image co-ordinate values) for the determination of the co-ordinates. The main advantage of the method is that the absolute value of the phase at each point does not directly determine the co-ordinate, so errors in the determination of the co-ordinates are prevented. Furthermore, using the epipolar geometry of the stereo-like arrangement, the phase unwrapping problem of fringe analysis can be solved. The endoscope-like measurement system contains one projection channel and two camera channels for illumination and observation of the object, respectively. The new system has a measurement field of nearly 25 mm × 15 mm. The user can measure two or three teeth at one time, so the system can be used for scanning anything from a single tooth up to bridge preparations. In the paper the first realization of the intraoral scanner is described.
ERIC Educational Resources Information Center
Hastings, S. K.
2002-01-01
Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)
Sapienza, Lucas Gomes; Flosi, Adriana; Aiza, Antonio; de Assis Pellizzon, Antonio Cassio; Chojniak, Rubens; Baiocchi, Glauco
2016-01-01
There is no consensus on the use of computed tomography in vaginal cuff brachytherapy (VCB) planning. The purpose of this study was to prospectively determine the reproducibility of point bladder dose parameters (DICRU and maximum dose), compared with volumetric-based parameters. Twenty-two patients who were treated with high-dose-rate (HDR) VCB underwent simulation by computed tomography (CT-scan) with a Foley catheter at standard tension (position A) and extra tension (position B). CT-scan determined the bladder ICRU dose point in both positions and compared the displacement and recorded dose. Volumetric parameters (D0.1cc, D1.0cc, D2.0cc, D4.0cc and D50%) and point dose parameters were compared. The average spatial shift in ICRU dose point in the vertical, longitudinal and lateral directions was 2.91 mm (range: 0.10–9.00), 12.04 mm (range: 4.50–24.50) and 2.65 mm (range: 0.60–8.80), respectively. The DICRU ratio for positions A and B was 1.64 (p < 0.001). Moreover, a decrease in Dmax was observed (p = 0.016). Tension level of the urinary catheter did not affect the volumetric parameters. Our data suggest that point parameters (DICRU and Dmax) are not reproducible and are not the ideal choice for dose reporting. PMID:27296459
Spatial distribution and spread of sheep biting lice, Bovicola ovis, from point infestations.
James, P J; Moon, R D
1999-03-15
The spatial distribution of chewing lice (Bovicola ovis) on their hosts was examined in Polypay and Columbia ewes initially artificially infested on the midside or the neck. Densities of lice were determined at 69 body sites in eight body regions at approximately monthly intervals for 2 years. In the second year, half of the ewes were mated and lice were counted at 26 body sites on the resulting lambs. Polypay ewes had higher densities of lice than Columbias at most inspections, but there was little effect of infestation point or mating on either the numbers or the distribution of lice. During periods of high louse numbers, densities were generally greatest on the sides or the back. Densities on the head were also high at times and peaked later than overall louse densities. Shearing markedly reduced density but increased the proportion of lice found on the neck, belly and low-leg sites. The distribution of lice on the lambs was similar to that on the ewes, except that fewer lice were found on the head. Comparisons of lice per part with the numbers of lice extracted from clipped patches indicated that a sheep with a wool-bearing area of 1 m2 and a mean count of one louse per 10 cm fleece parting carried approximately 2000 lice. At most times of the year, inspections for sheep lice should be concentrated on the sides and back, but in recently shorn sheep greater attention should be paid to the lower neck and ventral regions. Implications of the observed distributions of lice for the efficacy of chemical treatments are discussed.
NASA Astrophysics Data System (ADS)
Li, Shiyou
Magnetic reconnection is a process through which magnetic energy is converted into the kinetic and thermal energy of a plasma; it is responsible for many dynamic phenomena throughout the universe. Identifying the structure around the point at which the magnetic field lines break and subsequently reform, known as the magnetic null point, is crucial to improving our understanding of reconnection. Here we report the first observation of multiple magnetic null structures at the dayside magnetopause boundary and the high-latitude cusp region. The topological and dynamic properties of the nulls are revealed by high-resolution field, particle, and wave data. The observation is compared with recent OpenGGCM simulation results.
NASA Astrophysics Data System (ADS)
Hinojosa-Corona, A.; Nissen, E.; Limon-Tirado, J. F.; Arrowsmith, R.; Krishnan, A.; Saripalli, S.; Oskin, M. E.; Glennie, C. L.; Arregui, S. M.; Fletcher, J. M.; Teran, O. J.
2013-05-01
Aerial LiDAR surveys reconstruct the sinuosity of terrain relief with remarkable fidelity. In this research we explore the 3D deformation field at the surface after a large (M7.2) earthquake by comparing pre- to post-event aerial LiDAR point clouds. The April 4, 2010 earthquake produced a NW-SE surface rupture ~110 km long with right-lateral normal slip up to 3 m in magnitude over a very favorable target: the scarcely vegetated and unaltered desert mountain ranges of sierras El Mayor and Cucapah, in northern Baja California, close to the US-México border, a plate boundary region between the Pacific and North American plates. The pre-event LiDAR, with lower point density (0.013-0.033 pts m-2), required filtering and post-processing before comparison with the denser (9-18 pts m-2), more accurate post-event dataset. The 3D surface displacement field was determined using an adaptation of the Iterative Closest Point (ICP) algorithm, implemented in the open source Point Cloud Library (PCL). The LiDAR datasets are first split into a grid of windows, and for each one, ICP iteratively converges on the rigid body transformation (comprising translations and rotations) that best aligns the pre- to post-event points. Perturbing the pre- and post-event point clouds independently with synthetic right-lateral inverse displacements of known magnitude along a proposed fault, ICP recovered the synthetically introduced translations. Windows with dimensions of 100-200 m gave the best results for datasets with these densities. The simplified surface rupture, photo-interpreted and mapped in the field, delineates very well the vertical displacement patterns unveiled by ICP. The method revealed block rotations along the simplified surface rupture, some clockwise and others counter-clockwise. As ground truth, displacements from ICP have values similar to those measured in the field along the main rupture by Fletcher and collaborators. The vertical component was better estimated than the
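The windowed ICP comparison described above rests on a rigid-body (rotation plus translation) fit between matched point sets. A minimal sketch of that alignment step, assuming known correspondences (real ICP alternates this solve with nearest-neighbor matching, and the authors' implementation uses PCL); the function name and test values are illustrative:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (the Kabsch/Procrustes solve performed inside each ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# mimic the paper's validation: perturb a window of points by a known
# displacement and check that the alignment recovers it
rng = np.random.default_rng(0)
pre = rng.uniform(0.0, 100.0, (500, 3))       # "pre-event" window
true_t = np.array([1.5, -0.8, 0.3])           # known slip vector (illustrative)
post = pre + true_t                           # "post-event" window
R, t = rigid_align(pre, post)
print(t)  # recovers true_t; R is the identity
```

With real data, this solve is embedded in an ICP loop that re-estimates correspondences each iteration and is run independently per 100-200 m window.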
NASA Technical Reports Server (NTRS)
Lemen, J. R.; Claflin, E. S.; Brown, W. A.; Bruner, M. E.; Catura, R. C.
1989-01-01
A grazing incidence solar X-ray telescope, Soft X-ray Telescope (SXT), will be flown on the Solar-A satellite in 1991. Measurements have been conducted to determine the focal length, Point Spread Function (PSF), and effective area of the SXT mirror. The measurements were made with pinholes, knife edges, a CCD, and a proportional counter. The results show the 1/r character of the PSF, and indicate a half power diameter of 4.9 arcsec and an effective area of 1.33 sq cm at 13.3 A (0.93 keV). The mirror was found to provide a high contrast image with very little X-ray scattering.
Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D
2006-09-19
Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity by up to a factor of 3 at small separation over the algorithm previously used.
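The optimization described — building the reference PSF as the linear combination of available reference images that minimizes the residual within one subsection — reduces to an ordinary least-squares solve per subsection. A minimal sketch (generic numpy on an illustrative noiseless mixture; not the published pipeline):

```python
import numpy as np

def optimal_reference(target, refs):
    """Least-squares coefficients of the linear combination of reference
    images that minimizes the residual within one image subsection."""
    A = np.stack([r.ravel() for r in refs], axis=1)    # (n_pixels, n_refs)
    c, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return c, (A @ c).reshape(target.shape)

rng = np.random.default_rng(1)
refs = [rng.normal(size=(32, 32)) for _ in range(5)]   # reference subsections
c_true = np.array([0.5, 1.2, -0.3, 0.8, 0.1])
target = sum(w * r for w, r in zip(c_true, refs))      # noiseless mixture
c, model = optimal_reference(target, refs)
residual = target - model   # the subtracted frame searched for companions
print(c)  # recovers c_true
```

In practice the solve is repeated per subsection, and for ADI the reference set is vetted so the companion signal does not self-subtract.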
Gilles, Luc; Correia, Carlos; Véran, Jean-Pierre; Wang, Lianqi; Ellerbroek, Brent
2012-11-01
This paper discusses an innovative simulation-model-based approach for long-exposure atmospheric point spread function (PSF) reconstruction in the context of laser guide star (LGS) multiconjugate adaptive optics (MCAO). The approach is inspired by the classical scheme developed by Véran et al. [J. Opt. Soc. Am. A 14, 3057 (1997)] and Flicker et al. [Astron. Astrophys. 400, 1199 (2003)] and reconstructs the long-exposure optical transfer function (OTF), i.e., the Fourier-transformed PSF, as a product of separate long-exposure tip/tilt-removed and tip/tilt OTFs, each estimated by postprocessing system and simulation telemetry data. Sample enclosed-energy results assessing reconstruction accuracy are presented for the Thirty Meter Telescope LGS MCAO system currently under design and show that percent-level absolute and differential photometry over a 30 arcsec diameter field of view is achievable provided the simulation model faithfully represents the real system.
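The product-of-OTFs reconstruction relies on the convolution theorem: multiplying the tip/tilt-removed and tip/tilt OTFs in the Fourier domain is equivalent to convolving the corresponding PSFs. A minimal numerical check of that identity (generic numpy; array contents and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
psf_ttr = rng.random((8, 8)); psf_ttr /= psf_ttr.sum()  # tip/tilt-removed PSF
psf_tt = rng.random((8, 8)); psf_tt /= psf_tt.sum()     # tip/tilt kernel

# product of OTFs <-> circular convolution of PSFs
otf_product = np.fft.fft2(psf_ttr) * np.fft.fft2(psf_tt)
psf_long = np.real(np.fft.ifft2(otf_product))

# brute-force circular convolution for comparison
n = 8
direct = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                direct[i, j] += psf_ttr[k, l] * psf_tt[(i - k) % n, (j - l) % n]
print(np.allclose(psf_long, direct))  # True
```

The reconstruction itself estimates each factor OTF from telemetry; this sketch only verifies why the factorization can be done in the Fourier domain.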
Spong, Donald A
2016-06-20
AE3D solves for the shear Alfvén eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g., tokamak, reversed field pinch) or 3D (e.g., stellarator, helical reversed field pinch, tokamak with ripple). The equations solved are based on a reduced MHD model; sound wave coupling effects are not currently included.
NASA Astrophysics Data System (ADS)
Ratajczak, M.; Wężyk, P.
2015-12-01
Rapid development of terrestrial laser scanning (TLS) in recent years has resulted in its recognition and implementation in many industries, including forestry and nature conservation. The use of 3D TLS point clouds in the inventory of trees and stands, as well as in the determination of their biometric features (trunk diameter, tree height, crown base, trunk shape), tree and lumber size (volume of trees), is slowly becoming practice. In addition to measurement precision, the primary added value of TLS is the ability to automate the processing of 3D point clouds towards the extraction of selected features of trees and stands. The paper presents original software (GNOM) for the automatic measurement of selected tree features, based on point clouds obtained by the FARO terrestrial laser scanner. With the developed GNOM algorithms, the locations of tree trunks on a circular research plot were specified and measurements were performed, covering the DBH (at 1.3 m), further trunk diameters at different heights, the base of the tree crown, the volume of the tree trunk (sectional measurement method), and the tree crown. Research was performed in the territory of the Niepolomice Forest in an unmixed pine stand (Pinus sylvestris L.) on a circular plot with a radius of 18 m, within which there were 16 pine trees (14 of them were later cut down). The stand had a two-storey, even-aged structure (147 years old) and was devoid of undergrowth. Ground scanning was performed just before harvesting. The DBH of the 16 pine trees was determined fully automatically using the GNOM algorithm with an accuracy of +2.1% compared to the reference measurement by a DBH measurement device. The mean absolute measurement error in the point cloud, using the semi-automatic "PIXEL" (point-to-point) and "PIPE" (cylinder-fitting) methods in FARO Scene 5.x
NASA Astrophysics Data System (ADS)
Michoud, Clément; Baillifard, François; Harald Blikra, Lars; Derron, Marc-Henri; Jaboyedoff, Michel; Kristensen, Lene; Leva, Davide; Metzger, Richard; Rivolta, Carlo
2014-05-01
Terrestrial Laser Scanning and Ground-Based Radar Interferometry have changed our perception and interpretation of slope activities over the last 20 years and are now routinely used for monitoring and even early-warning purposes. Terrestrial LiDAR indeed allows modeling topography with very high point density, even in steep slopes, and extracting 3D displacements of rock masses by comparing successive datasets. GB-InSAR techniques are able to detect mm-scale displacements over large areas. Nevertheless, both techniques suffer from some limitations: the precision of LiDAR devices limits their ability to monitor very slow-moving landslides, while the resolution and the particular (azimuth/range) geometry of GB-InSAR data may complicate their interpretation. To overcome those limitations, tools were produced to truly combine the strong advantages of both techniques, by coupling high resolution geometrical data from terrestrial LiDAR or photogrammetry with high precision displacement time series from GB-InSAR. We thus developed a new exportation module in the processing chain of LiSAmobile (GB-InSAR) devices in order to wrap radar results from their particular geometry onto high resolution 3D point clouds with cm mean point spacing. Furthermore, we also added new importation and visualization functionalities to Coltop3D (software for geological interpretation of laser scanning data) to display those results in 3D and even analyze displacement time series. This new method has also been optimized to create as few and as small files as possible and for processing time. The advantages of coupling terrestrial LiDAR and GB-InSAR data are illustrated on the La Perraire instability, an active large rockslide involving frequent rockfalls and threatening inhabitants of the Val de Bagnes in the Swiss Alps. This rock mass, monitored by LiDAR and GPS since 2006, is large enough, and its long-term movements are big (up to 1.6 m in 6 years) and complex enough, to make
NASA Astrophysics Data System (ADS)
Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter
2015-09-01
Health monitoring of rotating structures such as wind turbines and helicopter rotors is generally performed using conventional sensors that provide a limited set of data at discrete locations near or on the hub. These sensors usually provide no data on the blades or inside them where failures might occur. Within this paper, an approach was used to extract the full-field dynamic strain on a wind turbine assembly subject to arbitrary loading conditions. A three-bladed wind turbine having 2.3-m long blades was placed in a semi-built-in boundary condition using a hub, a machining chuck, and a steel block. For three different test cases, the turbine was excited using (1) pluck testing, (2) random impacts on blades with three impact hammers, and (3) random excitation by a mechanical shaker. The response of the structure to the excitations was measured using three-dimensional point tracking. A pair of high-speed cameras was used to measure displacement of optical targets on the structure when the blades were vibrating. The measured displacements at discrete locations were expanded and applied to the finite element model of the structure to extract the full-field dynamic strain. The results of the paper show an excellent correlation between the strain predicted using the proposed approach and the strain measured with strain-gages for each of the three loading conditions. The approach used in this paper to predict the strain showed higher accuracy than the digital image correlation technique. The new expansion approach is able to extract dynamic strain all over the entire structure, even inside the structure beyond the line of sight of the measurement system. Because the method is based on a non-contacting measurement approach, it can be readily applied to a variety of structures having different boundary and operating conditions, including rotating blades.
Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya
2007-07-20
This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
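The PCA feature-encoding step can be sketched generically as projecting mean-centered face vectors onto the leading principal components (an illustration only; the distributed MATLAB/C++ code, its encoding options, and the FLDA stage are not reproduced here):

```python
import numpy as np

def pca_project(X, k):
    """Encode each row of X (one flattened face representation) by its
    coordinates on the top-k principal components of the training set."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:k].T

rng = np.random.default_rng(3)
# toy "faces": most variance along one known direction, plus small noise
base = rng.normal(size=(50, 1)) @ np.array([[3.0, 1.0, 0.5, 0.0]])
X = base + 0.1 * rng.normal(size=(50, 4))
feats = pca_project(X, 2)
print(feats.shape)  # (50, 2)
```

The resulting feature vectors are what a subsequent FLDA or matching stage would consume.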
NASA Astrophysics Data System (ADS)
Moore, Gregory F.
2009-05-01
This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.
NASA Astrophysics Data System (ADS)
Oldham, Mark
2015-01-01
Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.
NASA Technical Reports Server (NTRS)
Plaut, Jeffrey J.
1993-01-01
Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images make it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.
Moncayo, Roy; Rudisch, Ansgar; Kremser, Christian; Moncayo, Helga
2007-01-01
Background A conceptual model of lateral muscular tension in patients presenting thyroid-associated ophthalmopathy (TAO) has been recently described. Clinical improvement has been achieved by using acupuncture on points belonging to the so-called extraordinary meridians. The aim of this study was to characterize the anatomical structures related to these acupuncture points by means of 3D MRI image rendering relying on external markers. Methods The investigation was carried out on the index case patient of the lateral tension model. A licensed medical acupuncture practitioner located the following acupuncture points: 1) Yin qiao mai meridian (medial ankle): Kidney 3, Kidney 6, the plantar Kidney 6 (Nan jing description); 2) Yang qiao mai meridian (lateral ankle): Bladder 62, Bladder 59, Bladder 61, and the plantar Bladder 62 (Nan jing description); 3) Dai mai meridian (waist): Liver 13, Gall bladder 26, Gall bladder 27, Gall bladder 28, and Gall bladder 29. The points were marked by taping a nitro-glycerin capsule on the skin. Imaging was done on a Siemens Magnetom Avanto MR scanner using an array head and body coil. Mainly T1-weighted imaging sequences, as routinely used for patient exams, were used to obtain multi-slice images. The image data were rendered in 3D mode using dedicated software (Leonardo, Siemens). Results Points of the Dai mai meridian – at the level of the waist – corresponded to the obliquus externus abdominis and the obliquus internus abdominis. Points of the Yin qiao mai meridian – at the medial side of the ankle – corresponded to tendinous structures of the flexor digitorum longus as well as to muscular structures of the abductor hallucis on the foot sole. Points of the Yang qiao mai meridian – at the lateral side of the ankle – corresponded to tendinous structures of the peroneus brevis and the peroneus longus, to the lateral surface of the calcaneus, and, close to the foot sole, to the abductor digiti minimi. Conclusion This non
Schaefferkoetter, Joshua; Ouyang, Jinsong; Rakvongthai, Yothin; El Fakhri, Georges; Nappi, Carmela
2014-06-15
Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point spread function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered subset expectation maximization (OSEM) iterative methods including ordinary Poisson, alone and with PSF, TOF, and PSF+TOF. Channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved overall defect detection SNR by 8.6% compared to its non-TOF counterpart across all defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed a larger improvement for faint defects. The combination of TOF and PSF improved mean detection SNR compared to the non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For typical reconstruction protocols used in clinical practice, i.e., fewer than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded less detectability. For large numbers of iterations, TOF+PSF yields the best observer performance.
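The channelized Hotelling observer SNR used as the surrogate for human performance can be sketched as follows; the channel set, sample sizes, and covariance pooling here are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def cho_snr(present, absent, channels):
    """Channelized Hotelling observer SNR: sqrt(ds^T S^-1 ds), where ds is
    the mean channelized class difference and S the pooled channel covariance."""
    vp, va = present @ channels, absent @ channels
    ds = vp.mean(axis=0) - va.mean(axis=0)
    S = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    return float(np.sqrt(ds @ np.linalg.solve(S, ds)))

rng = np.random.default_rng(0)
n, npix = 5000, 16
channels = np.eye(npix)[:, :3]          # 3 toy channels (illustrative)
absent = rng.normal(size=(n, npix))     # defect-absent images
present = rng.normal(size=(n, npix))
present[:, 0] += 1.0                    # unit-amplitude "defect" in channel 0
snr = cho_snr(present, absent, channels)
print(snr)  # near the ideal value of 1.0 for this toy setup
```

Real CHO studies use rotationally symmetric frequency channels applied at the known defect location; only the Hotelling arithmetic is shown here.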
NASA Astrophysics Data System (ADS)
Sung, Hsin-Yueh; Yang, Sidney S.; Chang, Horng
2008-08-01
For mobile phone camera lenses, obtaining a clear image across object distances from infinity to close-up poses a new trade-off. We found that wave-front coding, applied to extend the depth of field, may solve this problem. By means of a cubic phase mask (CPM), the blurred point-spread function (PSF) is made substantially invariant to defocus. Thus, the ideal hyperfocal distance condition can be satisfied as long as the constant blurred image can eventually be recovered by simple digital signal processing. In this paper, we propose a different design method of a computational imaging lens for mobile phones with ideal depth of field based on PSF focus invariance. Because of the difficulty of comparing the similarity of different PSFs, we define a new metric, correlation, to evaluate and optimize PSF similarity. Besides, by adding an anti-symmetric free-form phase plate at the aperture stop and using the correlation and Strehl ratio as the two major optimization operands, we can obtain the optimum phase plate surface to achieve the required extended depth of field (EDoF). The resulting PSF on the focal plane is significantly invariant to object distance varying from infinity to 10 cm.
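The defocus invariance that cubic phase masks provide can be illustrated with a small Fourier-optics sketch: build a pupil with (or without) a cubic phase term plus a defocus term, take the PSF as the squared magnitude of its Fourier transform, and compare PSF correlations. All parameters (grid size, phase strengths, the correlation metric) are illustrative assumptions, not the paper's design values:

```python
import numpy as np

def psf(alpha, psi, n=128):
    """PSF of a square aperture with cubic phase alpha*(x^3 + y^3)
    and defocus psi*(x^2 + y^2), computed by Fourier optics."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    pupil = np.exp(1j * (alpha * (X**3 + Y**3) + psi * (X**2 + Y**2)))
    field = np.fft.fftshift(np.fft.fft2(pupil, s=(4 * n, 4 * n)))
    p = np.abs(field) ** 2
    return p / p.sum()

def similarity(a, b):
    """Normalized correlation between two PSFs (an assumed similarity metric)."""
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

clear = similarity(psf(0.0, 0.0), psf(0.0, 8.0))    # conventional lens
cpm = similarity(psf(60.0, 0.0), psf(60.0, 8.0))    # cubic phase mask
print(cpm > clear)  # CPM PSF is far less sensitive to defocus
```

The same correlation metric, evaluated across a set of defocus values, is the kind of quantity one would feed to a lens-design optimizer alongside the Strehl ratio.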
NASA Astrophysics Data System (ADS)
Abdu, M. A.; Kherani, E. A.; Batista, I. S.; Reinisch, B. W.; Sobral, J. H. A.
2014-01-01
A better understanding of the precursor conditions for the instability growth is very important for identifying the causes of day-to-day variability in equatorial spread F (ESF)/plasma bubble irregularity development. We investigate here the satellite trace (S-trace) in ionograms, a precursor to postsunset ESF occurrence, as observed by Digisondes operated at an equatorial site and two magnetically conjugate sites in Brazil during a 66 day observational campaign (Conjugate Point Equatorial Experiment 2002). The satellite traces first occur at the equatorial site, and subsequently, after a variable delay of approximately 20 to 50 min, they are observed nearly simultaneously over the two conjugate sites. The evening prereversal enhancement in the zonal electric field/vertical drift is found to control their development. Using a three-dimensional simulation code based on the collisional interchange instability mechanism, it is shown that the observed S-trace occurrence sequence is fully consistent with instability initiation over the equator, with the vertical growth of the field-aligned plasma depletion marked by latitudinal expansion of its extremities to the conjugate locations. The delay in the S-trace occurrence at the conjugate sites (a measure of the nonlinear growth of the instability for plasma depletion) is also controlled by the field-line-parallel (meridional) neutral wind. The relationship between the S-trace and the large-scale wave structure in the F layer, another widely known characterization of the precursor conditions for ESF development, is also clarified.
Okura, Yuki; Futamase, Toshifumi E-mail: tof@astr.tohoku.ac.jp
2012-04-01
We developed a new method (E-HOLICs) of estimating gravitational shear by adopting an elliptical weight function to measure background galaxy images in our previous paper. Following that paper, in which an isotropic point-spread function (PSF) correction was calculated, here we consider an anisotropic PSF correction in order to apply E-HOLICs to real data. As an example, E-HOLICs is applied to Subaru data of the massive and compact galaxy cluster A370 and is able to detect double peaks in the central region of the cluster, consistent with the analysis of strong lensing. We also study the systematic error in E-HOLICs using the STEP2 simulation. In particular, we consider the dependence of the shear estimation on the signal-to-noise ratio (S/N) of the background galaxies. Although E-HOLICs does improve the systematic error due to the ellipticity dependence, as shown in Paper I, a systematic error due to the S/N dependence remains; namely, E-HOLICs underestimates shear when background galaxies with low S/N are used. We discuss a possible improvement of the S/N dependence.
Navarro, R; Losada, M A
1995-11-01
A recent study has shown that the double-pass method provides a good estimate of the ocular modulation transfer function (MTF) but that it does not yield the phase transfer function (PTF) [J. Opt. Soc. Am. A 12, 195 (1995)]. Therefore, one cannot recover the true retinal point-spread function (PSF). We present a modification of the double-pass method to overcome this problem. The key is to break the symmetry between the two passes. By using an unexpanded Gaussian input beam, we produce a diffraction-limited PSF for the first pass. Then, by using a large exit pupil, we get an aberrated PSF for the second pass. The double-pass aerial image is the cross correlation of both PSFs, so that the Fourier transform of such an aerial image directly provides the true retinal PTF, up to the cutoff frequency of the effective (small), diffraction-limited entrance pupil. The resulting double-pass aerial image is a blurred version of the true retinal PSF. Thus it shows the effect not only of even symmetric aberrations but also of odd and irregular aberrations such as coma. We have explored two different ways to retrieve the true retinal PSF: (a) deblurring of the aerial image and (b) PSF reconstruction combining PTF data with the conventional double-pass MTF. We present promising initial results with both artificial and real eyes.
Point-spread function reconstruction in ground-based astronomy by l(1)-l(p) model.
Chan, Raymond H; Yuan, Xiaoming; Zhang, Wenxing
2012-11-01
In ground-based astronomy, images of objects in outer space are acquired via ground-based telescopes. However, the imaging system is generally interfered with by atmospheric turbulence, and hence the images so acquired are blurred with an unknown point-spread function (PSF). To restore the observed images, the wavefront of light at the telescope's aperture is utilized to derive the PSF. A model with Tikhonov regularization has been proposed to find the high-resolution phase gradients by solving a least-squares system. Here we propose the l(1)-l(p) (p=1, 2) model for reconstructing the phase gradients. This model can provide sharper edges in the gradients while removing noise. The minimization models can easily be solved by the Douglas-Rachford alternating direction method of multipliers, and the convergence rate is readily established. Numerical results are given to illustrate that the model can give better phase gradients and hence a more accurate PSF. As a result, the restored images are much more accurate when compared to the traditional Tikhonov regularization model.
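The alternating-direction splitting mentioned above can be illustrated on the simplest related problem, l1-regularized least squares: one step solves a linear system, one applies soft-thresholding (the proximal map of the l1 norm), and a dual update ties them together. This is a generic sketch, not the paper's l1-lp phase-gradient model:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_ls_admm(A, b, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1: a linear-system
    x-update, a soft-thresholding z-update, and a dual u-update."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft(x + u, lam / rho)
        u += x - z
    return z

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20); x_true[3], x_true[7] = 2.0, -1.5   # sparse signal
b = A @ x_true
x_hat = l1_ls_admm(A, b, lam=0.1)
print(x_hat[3], x_hat[7])  # close to 2.0 and -1.5; other entries near 0
```

The paper's model replaces the quadratic data term with an l1 or l2 fit on phase-gradient differences, but the splitting structure is the same.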
3D change detection - Approaches and applications
NASA Astrophysics Data System (ADS)
Qin, Rongjun; Tian, Jiaojiao; Reinartz, Peter
2016-12-01
Due to the unprecedented development of sensors, platforms and algorithms for 3D data acquisition and generation, 3D spaceborne, airborne and close-range data, in the form of image-based or Light Detection and Ranging (LiDAR) based point clouds, Digital Elevation Models (DEM) and 3D city models, have become more accessible than ever before. Change detection (CD) or time-series data analysis in 3D has gained great attention due to its capability of providing volumetric dynamics that facilitate more applications and provide more accurate results. The state-of-the-art CD reviews aim to provide a comprehensive synthesis and to simplify the taxonomy of traditional remote sensing CD techniques, which mainly sit within the boundary of 2D image/spectrum analysis, largely ignoring the particularities of the 3D aspects of the data. The inclusion of 3D data for change detection (termed 3D CD) not only provides a source with a different modality for analysis but also transcends the border of traditional top-view 2D pixel/object-based analysis to highly detailed, oblique-view or voxel-based geometric analysis. This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academic and industry researchers who seek solutions for detecting and analyzing the 3D dynamics of various objects of interest. We first describe the general considerations of 3D CD problems in different processing stages and identify CD types based on the information used, namely geometric comparison and geometric-spectral analysis. We then summarize relevant works and practices in urban, environmental, ecological and civil applications, etc. Given the broad spectrum of applications and the different types of 3D data, we discuss important issues in 3D CD methods. Finally, we present concluding remarks on the algorithmic aspects of 3D CD.
Kim, Hayeon; Beriwal, Sushil; Houser, Chris; Huq, M Saiful
2011-01-01
The purpose of this study was to analyze the dosimetric outcome of 3D image-guided high-dose-rate (HDR) brachytherapy planning for cervical cancer treatment and compare dose coverage of high-risk clinical target volume (HRCTV) to traditional Point A dose. Thirty-two patients with stage IA2-IIIB cervical cancer were treated using computed tomography/magnetic resonance imaging-based image-guided HDR brachytherapy (IGBT). Brachytherapy dose prescription was 5.0-6.0 Gy per fraction for a total 5 fractions. The HRCTV and organs at risk (OARs) were delineated following the GYN GEC/ESTRO guidelines. Total doses for HRCTV, OARs, Point A, and Point T from external beam radiotherapy and brachytherapy were summated and normalized to a biologically equivalent dose of 2 Gy per fraction (EQD2). The total planned D90 for HRCTV was 80-85 Gy, whereas the dose to 2 mL of bladder, rectum, and sigmoid was limited to 85 Gy, 75 Gy, and 75 Gy, respectively. The mean D90 and its standard deviation for HRCTV was 83.2 ± 4.3 Gy. This is significantly higher (p < 0.0001) than the mean value of the dose to Point A (78.6 ± 4.4 Gy). The dose levels of the OARs were within acceptable limits for most patients. The mean dose to 2 mL of bladder was 78.0 ± 6.2 Gy, whereas the mean dose to rectum and sigmoid were 57.2 ± 4.4 Gy and 66.9 ± 6.1 Gy, respectively. Image-based 3D brachytherapy provides adequate dose coverage to HRCTV, with acceptable dose to OARs in most patients. Dose to Point A was found to be significantly lower than the D90 for HRCTV calculated using the image-based technique. Paradigm shift from 2D point dose dosimetry to IGBT in HDR cervical cancer treatment needs advanced concept of evaluation in dosimetry with clinical outcome data about whether this approach improves local control and/or decreases toxicities.
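The EQD2 normalization used above follows the standard linear-quadratic conversion EQD2 = nd(d + α/β)/(2 + α/β) for n fractions of d Gy. A minimal sketch (the fraction scheme and α/β values below are illustrative, not patient data):

```python
def eqd2(n_fractions, dose_per_fraction, alpha_beta):
    """Equieffective dose in 2 Gy fractions under the linear-quadratic
    model: EQD2 = n*d*(d + a/b)/(2 + a/b), with a/b in Gy."""
    d = dose_per_fraction
    return n_fractions * d * (d + alpha_beta) / (2.0 + alpha_beta)

# illustrative HDR scheme: 5 fractions of 5.5 Gy
print(round(eqd2(5, 5.5, 10.0), 1))  # 35.5 -- tumor (alpha/beta = 10 Gy)
print(round(eqd2(5, 5.5, 3.0), 2))   # 46.75 -- late-responding OAR (alpha/beta = 3 Gy)
```

In the study, the brachytherapy EQD2 contributions are summed with the external-beam EQD2 before comparing against the HRCTV and OAR dose limits.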
Fast and Precise 3D Fluorophore Localization based on Gradient Fitting
Ma, Hongqiang; Xu, Jianquan; Jin, Jingyi; Gao, Ying; Lan, Li; Liu, Yang
2015-01-01
The astigmatism imaging approach has been widely used to encode the fluorophore’s 3D position in single-particle tracking and super-resolution localization microscopy. Here, we present a new high-speed localization algorithm based on gradient fitting to precisely decode the 3D subpixel position of the fluorophore. This algebraic algorithm determines the center of the fluorescent emitter by finding the position whose gradient direction distribution best fits the measured point spread function (PSF), and can retrieve the 3D subpixel position of the fluorophore in a single iteration. Through numerical simulation and experiments with mammalian cells, we demonstrate that our algorithm yields localization precision comparable to the traditional iterative Gaussian function fitting (GF) based method, while executing over two orders of magnitude faster. Our algorithm is a promising high-speed analysis method for 3D particle tracking and super-resolution localization microscopy. PMID:26390959
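The gradient-fitting idea — for a radially symmetric spot, every intensity-gradient line passes through the emitter center, so the center follows from one linear least-squares solve — can be sketched as follows (an illustrative 2D reimplementation, not the authors' code):

```python
import numpy as np

def gradient_fit_center(img):
    """Estimate the subpixel emitter center as the point minimizing the
    perpendicular distance to every local gradient line, weighted by
    gradient magnitude so strong, informative pixels dominate."""
    gy, gx = np.gradient(img.astype(float))      # d/dy first, then d/dx
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    w = np.hypot(gx, gy).ravel()                 # trust strong gradients
    # line through (x, y) along (gx, gy): gy*(cx - x) - gx*(cy - y) = 0
    A = np.column_stack([gy.ravel(), -gx.ravel()]) * w[:, None]
    b = (gy.ravel() * xs.ravel() - gx.ravel() * ys.ravel()) * w
    (cx, cy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy

# noise-free Gaussian spot at (10.3, 9.7) on a 21x21 pixel grid
yy, xx = np.mgrid[0:21, 0:21]
spot = np.exp(-((xx - 10.3) ** 2 + (yy - 9.7) ** 2) / (2 * 2.0 ** 2))
cx, cy = gradient_fit_center(spot)
```

A single linear solve replaces the iterative Gaussian fit, which is where the speed advantage comes from; the published 3D method additionally decodes z from the astigmatic ellipticity.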
Bhandare, N.
2014-06-01
Purpose: To estimate and compare the doses received by the obturator, external, and internal iliac lymph nodes and point B. Methods: CT-MR fused image sets of 15 patients, obtained for each of 5 fractions of HDR brachytherapy using a tandem-and-ring applicator, were used to generate treatment plans optimized to deliver a prescription dose to HRCTV-D90 and to minimize the doses to organs at risk (OARs). For each image set, the target volumes (GTV, HRCTV), OARs (bladder, rectum, sigmoid), and both left and right pelvic lymph nodes (obturator, external, and internal iliac) were delineated. Dose-volume histograms (DVHs) were generated for the pelvic nodal groups (left and right obturator groups, internal and external iliac chains). Per-fraction DVH parameters used for dose comparison included the dose to 100% of the volume (D100) and the doses received by 2 cc (D2cc), 1 cc (D1cc), and 0.1 cc (D0.1cc) of nodal volume. The dose to point B was compared with each DVH parameter using a two-sided t-test. Pearson correlations were determined to examine the relationship of the point B dose with the nodal DVH parameters. Results: FIGO clinical stage varied from IB1 to IIIB. The median pretreatment tumor diameter measured on MRI was 4.5 cm (2.7-6.4 cm). The median dose to bilateral point B was 1.20 ± 0.12 Gy, or 20% of the prescription dose. The correlation coefficients were all <0.60 for all nodal DVH parameters, indicating a low degree of correlation. Only the dose to 2 cc of the obturator nodes was not significantly different from the point B dose on the t-test. Conclusion: The dose to point B does not adequately represent the dose to any specific pelvic nodal group. When using image-guided 3D dose-volume-optimized treatment, nodal groups should be individually identified and delineated to obtain the doses received by the pelvic nodes.
NASA Astrophysics Data System (ADS)
Kuntzer, T.; Courbin, F.; Meylan, G.
2016-02-01
The next generation of space-based telescopes used for weak lensing surveys will require exquisite point spread function (PSF) determination. Previously negligible effects may become important in the reconstruction of the PSF, in part because of the improved spatial resolution. In this paper, we show that unresolved multiple star systems can affect the ellipticity and size of the PSF and that this effect is not cancelled even when using many stars in the reconstruction process. We estimate the error in the reconstruction of the PSF due to the binaries in the star sample both analytically and with image simulations for different PSFs and stellar populations. The simulations support our analytical finding that the error on the size of the PSF is a function of the multiple-star distribution and of the intrinsic value of the size of the PSF, i.e. if all stars were single. Similarly, the modification of each of the complex ellipticity components (e1,e2) depends on the distribution of multiple stars and on the intrinsic complex ellipticity. Using image simulations, we also show that the predicted error in the PSF shape is a theoretical limit that can be reached only if a large number of stars (up to thousands) are used together to build the PSF at any desired spatial position. For a lower number of stars, the PSF reconstruction is worse. Finally, we compute the effect of binarity for different stellar magnitudes and show that bright stars alter the PSF size and ellipticity more than faint stars. This may affect the design of PSF calibration strategies and the choice of the related calibration fields.
NASA Astrophysics Data System (ADS)
Okura, Yuki; Futamase, Toshifumi
2011-03-01
We develop a new method of estimating gravitational shear by adopting an elliptical weight function to measure background galaxy images. In doing so, we introduce the new concept of a "zero plane," which is an imaginary source plane where the shapes of all sources are perfect circles, and regard the intrinsic shear as the result of an imaginary lensing distortion. This makes the relation between the observed shear, intrinsic shear, and lensing distortion much simpler, and thus higher-order calculations are easier. The elliptical weight function allows us to measure the multipole moments of the shapes of background galaxies more precisely by weighting the brighter parts of the image more heavily, and to reduce the systematic error due to insufficient expansion of the weight function in the original approach of Kaiser et al. (KSB). Point-spread function (PSF) correction in the elliptically weighted higher-order lensing image characteristics (E-HOLICs) method becomes more complicated than in the KSB method. In this paper, we study isotropic PSF correction in detail. By adopting the lensing distortion as the ellipticity of the weight function, we are able to show that the shear estimation in the E-HOLICs method reduces to solving a polynomial in the absolute magnitude of the distortion. We compare the systematic errors between our approach and that of KSB using the Shear Testing Programme 2 simulation. It is confirmed that the KSB method overestimates the input shear for images with large ellipticities, and E-HOLICs correctly estimates the input shear even for such images. Anisotropic PSF correction and analysis of real data will be presented in a forthcoming paper.
MeV gamma-ray observation with a well-defined point spread function based on electron tracking
NASA Astrophysics Data System (ADS)
Takada, A.; Tanimori, T.; Kubo, H.; Mizumoto, T.; Mizumura, Y.; Komura, S.; Kishimoto, T.; Takemura, T.; Yoshikawa, K.; Nakamasu, Y.; Matsuoka, Y.; Oda, M.; Miyamoto, S.; Sonoda, S.; Tomono, D.; Miuchi, K.; Kurosawa, S.; Sawano, T.
2016-07-01
The field of MeV gamma-ray astronomy did not open up until recently owing to imaging difficulties. Compton telescopes and coded-aperture imaging cameras are used as conventional MeV gamma-ray telescopes; however, their observations are obstructed by a huge background, leading to uncertainty in the point spread function (PSF). Imaging with conventional MeV gamma-ray telescopes relies on optimization algorithms such as the ML-EM method, making it difficult to define a correct PSF, i.e., the uncertainty of a gamma-ray image on the celestial sphere. Recently, we defined and evaluated the PSF of an electron-tracking Compton camera (ETCC) and of a conventional Compton telescope, and thereby obtained an important result: the PSF strongly depends on the precision of the electron recoil direction (scatter plane deviation, SPD) and is not equal to the angular resolution measure (ARM). Now, we are constructing a 30 cm-cubic ETCC for a second balloon experiment, the Sub-MeV gamma-ray Imaging Loaded-on-balloon Experiment (SMILE-II). The current ETCC has an effective area of 1 cm2 at 300 keV, a PSF of 10° at FWHM for 662 keV, and a large field of view of 3 sr. We will upgrade this ETCC to have an effective area of several cm2 and a PSF of 5° using a CF4-based gas. Using the upgraded ETCC, our observation plan for SMILE-II is to map the electron-positron annihilation line and the 1.8 MeV line from 26Al. In this paper, we report on the current performance of the ETCC and on our observation plan.
Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier
2015-02-15
Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert
2010-01-01
Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner’s center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method’s correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV
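The continuous Gaussian-width model of the spatially varying PSF can be sketched as follows (the linear coefficients are placeholders; in the paper they come from least-squares fits to the point-source scans):

```python
import numpy as np

def psf_sigma(r_mm, a=2.0, b=0.05):
    """Gaussian PSF width as a continuous function of radial distance.
    Coefficients a and b are illustrative stand-ins for fitted values."""
    return a + b * r_mm

def psf_kernel_1d(sigma, half_width=10):
    """Normalized 1-D Gaussian kernel; a separable 3-D PSF is the outer
    product of three such kernels (one per axis)."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

k_center = psf_kernel_1d(psf_sigma(0.0))     # narrow PSF at scanner center
k_edge = psf_kernel_1d(psf_sigma(100.0))     # broader PSF 100 mm out
```

Generating the kernel on demand from `psf_sigma(r)` is what makes the correction spatially varying, in contrast to the single uniform kernel of SINV-PVC.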
NASA Technical Reports Server (NTRS)
1997-01-01
The two hills in the distance, approximately one to two kilometers away, have been dubbed the 'Twin Peaks' and are of great interest to Pathfinder scientists as objects of future study. 3D glasses are necessary to identify surface detail. The white areas on the left hill, called the 'Ski Run' by scientists, may have been formed by hydrologic processes.
The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.'
NASA Astrophysics Data System (ADS)
Fung, Y. C.
1995-05-01
This conference on physiology and function covers a wide range of subjects, including the vasculature and blood flow; the flow of gas, water, and blood in the lung; neurological structure and function; modeling; and the motion and mechanics of organs. Many technologies are discussed. I believe that the list would include a robotic photographer, to hold the optical equipment in a precisely controlled way to obtain the images for the user. Why are 3D images needed? They are needed to achieve certain objectives through measurements of some objects. For example, in order to improve performance in sports or the beauty of a person, we measure the form, dimensions, appearance, and movements.
NASA Technical Reports Server (NTRS)
1992-01-01
Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
Zapata-Rodríguez, Carlos J; Pastor, David; Miret, Juan J
2010-10-20
We derive a nonsingular, polarization-dependent, 3D impulse response that unambiguously provides the wave field scattered by a negative-refractive-index layered lens and distributed in its image volume. By means of a 3D Fourier transform, we introduce the generalized amplitude transfer function in order to gain deep insight into the resolution power of the optical element. In the near-field regime, fine details containing some depth information may be transmitted through the lens. We show that metamaterials with moderate absorption are appropriate for subwavelength resolution while retaining a limited degree of depth discrimination.
Interactive 3D Mars Visualization
NASA Technical Reports Server (NTRS)
Powell, Mark W.
2012-01-01
The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The toolset currently includes a tool for selecting a point of interest and a ruler tool for displaying the distance between, and positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.
Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael
2009-01-01
This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone, and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training, and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727 and DE018962. PMID:20816308
NASA Technical Reports Server (NTRS)
1997-01-01
An area of rocky terrain near the landing site of the Sagan Memorial Station can be seen in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.
Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.
3D imaging of semiconductor colloid nanocrystals: on the way to nanodiagnostics of track membranes
NASA Astrophysics Data System (ADS)
Kulyk, S. I.; Eremchev, I. Y.; Gorshelev, A. A.; Naumov, A. V.; Zagorsky, D. L.; Kotova, S. P.; Volostnikov, V. G.; Vorontsov, E. N.
2016-12-01
The work concerns the feasibility of 3D optical diagnostics of porous media with subdiffraction spatial resolution via epi-luminescence microscopy of single semiconductor colloidal nanocrystals (quantum dots, QDs) of CdSe/ZnS used as emitting labels/nanoprobes. Nanoprecise reconstruction of the axial coordinate is provided by the double-helix point spread function (DH-PSF) transformation technique. Results of QD localization in a polycarbonate track membrane (TM) are presented.
Principle and characteristics of 3D display based on random source constructive interference.
Li, Zhiyang
2014-07-14
The paper discusses the principle and characteristics of 3D display based on random source constructive interference (RSCI). The voxels of discrete 3D images are formed in the air via constructive interference of spherical light waves emitted by point light sources (PLSs) that are arranged at random positions to suppress high-order diffraction. The PLSs might be created by two liquid crystal panels sandwiched between two micro-lens arrays. The point spread function of the system reveals that it is able to reconstruct voxels with diffraction-limited resolution over a large field width and depth. The high resolution was confirmed by experiments. Theoretical analyses also show that the system could provide 3D image contrast and gray levels no less than those of the liquid crystal panels. Compared with 2D display, it needs only additional depth information, which amounts to only about a 30% data increment.
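The voxel-forming principle — spherical waves from randomly placed point sources, phased to interfere constructively at one target point — can be illustrated numerically (panel geometry, wavelength, and source count below are arbitrary assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 0.5e-6                          # visible wavelength [m]
k = 2 * np.pi / lam
n_src = 500
# point light sources scattered randomly over a 10 mm panel at z = 0;
# random placement suppresses the periodic high-order diffraction of a grid
src = np.column_stack([rng.uniform(-5e-3, 5e-3, n_src),
                       rng.uniform(-5e-3, 5e-3, n_src),
                       np.zeros(n_src)])
voxel = np.array([0.0, 0.0, 0.1])     # target voxel 10 cm in front

def intensity(p):
    """Coherent sum of spherical waves whose phases are chosen so that
    all waves arrive in phase at `voxel` (constructive interference)."""
    r = np.linalg.norm(src - p, axis=1)
    r_v = np.linalg.norm(src - voxel, axis=1)
    return np.abs(np.sum(np.exp(1j * k * (r - r_v)) / r)) ** 2

on_voxel = intensity(voxel)
off_voxel = intensity(voxel + np.array([1e-4, 0.0, 0.0]))   # 100 um away
```

At the voxel all N phasors add in phase (intensity ~ N²), while 100 µm away the phases are effectively random (intensity ~ N), which is why a bright voxel stands out in air.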
NASA Astrophysics Data System (ADS)
Raimondi, L.; Spiga, D.
2015-01-01
Context. The imaging sharpness of an X-ray telescope is chiefly determined by the optical quality of its focusing optics, which in turn mostly depends on the shape accuracy and the surface finishing of the grazing-incidence X-ray mirrors that compose the optical modules. To ensure the imaging performance during mirror manufacturing, a fundamental step is predicting the mirror point spread function (PSF) from the metrology of its surface. Traditionally, the PSF computation in X-rays is assumed to be different depending on whether the surface defects are classified as figure errors or roughness. This classical approach, however, requires setting a boundary between these two asymptotic regimes, which is not known a priori. Aims: The aim of this work is to overcome this limit by providing analytical formulae that are valid at any light wavelength, for computing the PSF of an X-ray mirror shell from the measured longitudinal profiles and the roughness power spectral density, without distinguishing spectral ranges with different treatments. Methods: The method we adopted is based on the Huygens-Fresnel principle for computing the diffracted intensity from measured or modeled profiles. In particular, we have simplified the computation of the surface integral to only one dimension, owing to the grazing incidence that reduces the influence of the azimuthal errors by orders of magnitude. The method can be extended to optical systems with an arbitrary number of reflections - in particular the Wolter-I, which is frequently used in X-ray astronomy - and can be used in both near- and far-field approximations. Finally, it accounts simultaneously for profile, roughness, and aperture diffraction. Results: We describe the formalism with which one can self-consistently compute the PSF of grazing-incidence mirrors, and we show some PSF simulations including the UV band, where aperture diffraction dominates the PSF, and hard X-rays, where X-ray scattering has a major impact.
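The core of the method above is a Huygens-Fresnel sum over surface samples, reduced to one dimension. A minimal far-field sketch for a clean 1-D aperture (wavelength, aperture size, and sampling are arbitrary illustrative choices, not values from the paper):

```python
import numpy as np

# Huygens-Fresnel in one dimension, far-field (Fraunhofer) regime:
# each aperture sample acts as a secondary source, and the image-plane
# field is the coherent sum of their phase factors.
lam = 500e-9                                   # UV/visible wavelength [m]
D = 1e-3                                       # aperture size [m]
k = 2 * np.pi / lam
xs = np.linspace(-D / 2, D / 2, 4000)          # secondary-source positions
thetas = np.linspace(-2e-3, 2e-3, 801)         # image-plane angles [rad]
# small-angle path difference: x * sin(theta) ~ x * theta
field = np.exp(-1j * k * np.outer(thetas, xs)).sum(axis=1)
psf = np.abs(field) ** 2
psf /= psf.max()                               # normalized diffraction PSF
# first null falls at theta = lam / D = 5e-4 rad, as expected for a slit
```

Figure errors and roughness would enter the same sum as an extra phase term `exp(2j * k * profile_error * sin(grazing_angle))` on each sample; the formulae in the paper handle that general case self-consistently.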
Caspi, S.; Helm, M.; Laslett, L.J.
1991-03-30
We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r, θ, z) inside the magnet bore. The same conductor geometry that is used to simulate line currents is also used in CAD, making modifications more readily available. It is our hope that the format used here for magnetic fields can serve not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.
Validation of image processing tools for 3-D fluorescence microscopy.
Dieterlen, Alain; Xu, Chengqi; Gramain, Marie-Pierre; Haeberlé, Olivier; Colicchio, Bruno; Cudel, Christophe; Jacquey, Serge; Ginglinger, Emanuelle; Jung, Georges; Jeandidier, Eric
2002-04-01
3-D optical fluorescence microscopy has become an efficient tool for volumetric investigation of living biological samples. Using the optical sectioning technique, a stack of 2-D images is obtained. However, due to the nature of the system optical transfer function and non-optimal experimental conditions, acquired raw data usually suffer from some distortions. In order to carry out biological analysis, the raw data have to be restored by deconvolution. Identifying the system by its point-spread function yields knowledge of the actual system and experimental parameters, which is necessary to restore the raw data; it is furthermore helpful for refining the experimental protocol. In order to facilitate the use of image processing techniques, a multi-platform-compatible software package called VIEW3D has been developed. It integrates a set of tools for the analysis of fluorescence images from 3-D wide-field or confocal microscopy. A number of regularisation parameters for data restoration are determined automatically. Common geometrical measurements and morphological descriptors of fluorescent sites are also implemented to facilitate the characterisation of biological samples. An example of this method concerning cytogenetics is presented.
NASA Technical Reports Server (NTRS)
2004-01-01
This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.
Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.
On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.
The image mosaic is about 6 centimeters (2.4 inches) across.
NASA Technical Reports Server (NTRS)
1997-01-01
Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.
Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.
Origin of chaos in 3-d Bohmian trajectories
NASA Astrophysics Data System (ADS)
Tzemos, Athanasios C.; Contopoulos, George; Efthymiopoulos, Christos
2016-11-01
We study the 3-d Bohmian trajectories of a quantum system of three harmonic oscillators. We focus on the mechanism responsible for the generation of chaotic trajectories. We demonstrate the existence of a 3-d analogue of the mechanism found in earlier studies of 2-d systems [1,2], based on moving 2-d 'nodal point-X-point complexes'. In the 3-d case, we observe a foliation of nodal point-X-point complexes, forming a '3-d structure of nodal and X-points'. Chaos is generated when the Bohmian trajectories are scattered at one or more close encounters with such a structure.
NASA Astrophysics Data System (ADS)
Käufl, Paul; Valentine, Andrew P.; Trampert, Jeannot
2016-08-01
Despite the ever-increasing availability of computational power, real-time source inversions based on physical modeling of wave propagation in realistic media remain challenging. We investigate how a nonlinear Bayesian approach based on pattern recognition and synthetic 3-D Green's functions can be used to rapidly invert strong-motion data for point source parameters by means of a case study for a fault system in the Los Angeles Basin. The probabilistic inverse mapping is represented in compact form by a neural network which yields probability distributions over source parameters. It can therefore be evaluated rapidly and with very moderate CPU and memory requirements. We present a simulated real-time inversion of data for the 2008 Mw 5.4 Chino Hills event. Initial estimates of epicentral location and magnitude are available ~14 s after origin time. The estimate can be refined as more data arrive: by ~40 s, fault strike and source depth can also be determined with relatively high certainty.
NASA Astrophysics Data System (ADS)
Jia, Peng; Cai, Dongmei; Wang, Dong
2014-11-01
A parallel blind deconvolution algorithm is presented. The algorithm incorporates constraints on the point spread function (PSF) derived from the physical imaging process. Additionally, in order to obtain an effective restored image, the fractal energy ratio is used as an evaluation criterion to estimate the quality of the image. The algorithm is parallelized at a fine-grained level to increase calculation speed. Results of numerical and real experiments indicate that the algorithm is effective.
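As a hedged illustration of the kind of deconvolution such an algorithm builds on, here is the classic 1-D Richardson-Lucy update (non-blind and serial; the paper's scheme adds PSF constraints, the fractal energy ratio criterion, and fine-grained parallelism):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Classic Richardson-Lucy iteration: multiplicative updates that keep
    the estimate non-negative and, away from edges, flux-preserving."""
    est = np.full_like(blurred, blurred.mean())   # flat initial estimate
    psf_mirror = psf[::-1]                        # correlation kernel
    for _ in range(n_iter):
        reblurred = np.convolve(est, psf, mode='same')
        ratio = blurred / np.maximum(reblurred, 1e-12)  # avoid divide-by-zero
        est = est * np.convolve(ratio, psf_mirror, mode='same')
    return est

# blur a point source and recover it
x = np.arange(-6, 7, dtype=float)
psf = np.exp(-x ** 2 / (2 * 1.5 ** 2))
psf /= psf.sum()
truth = np.zeros(101)
truth[50] = 1.0
blurred = np.convolve(truth, psf, mode='same')
restored = richardson_lucy(blurred, psf)
```

In a blind scheme the image and PSF updates alternate, with the PSF step projected onto the physically derived constraint set; each per-pixel multiply and each convolution is what lends itself to fine-grained parallelization.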
NASA Astrophysics Data System (ADS)
Hedan, Stéphen; Valle, Valéry; Cottron, Mario
2007-04-01
We propose to use an optical method to define the area of 3D effects and the transient zone near the crack tip during crack propagation in brittle materials (PMMA). For the experimental data, we measure the out-of-plane displacement field by interferometry on SEN (Single Edge Notch) specimens loaded in mode I with a constant loading σ. We compare the experimental out-of-plane displacements with a theoretical 2D solution and propose a 3D formulation. The 2D solution characterizes out-of-plane displacements in plane stress. The proposed 3D expression is based on works in the literature on the presence of 3D effects for stationary cracks. In our case, during crack propagation, the 3D effects are always present, but it is also necessary to take the transient effects into account. The presence of 3D and transient effects results in a progressive gap between the 2D solution and the 3D formulation as the crack tip is approached. Thus, by studying the deviation between the two expressions, we can determine the area of the 3D and transient effects zone as a function of the crack propagation velocity. Results are shown for one static test and two dynamic tests. The analysis of the results shows that the deviation zone between the two expressions is large and proportional to the crack propagation velocity. To cite this article: S. Hedan et al., C. R. Mecanique 335 (2007).
Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images
NASA Astrophysics Data System (ADS)
Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko
2008-03-01
The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to unphysiological kinematics of the knee implant. To assess the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. First, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Second, the user performs a rough preconfiguration of the prosthesis models so that the subsequent fine matching process has a reasonable starting point. An automated gradient-based fine matching process then determines the best absolute position and orientation: this iterative process changes each of the 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum of the matching function is reached. To examine the spread of the final registration solutions, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (purely manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine matching process).
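The iterative fine-matching step can be sketched as a greedy search over the 6 pose parameters. This is a minimal stand-in, assuming known point correspondences and a negative sum-of-squares score instead of the paper's image-gradient-based matching function; `fine_match` and its defaults are hypothetical names, not the authors' code.

```python
import numpy as np

def rot(rx, ry, rz):
    # Rotation matrix composed from three axis angles (radians).
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def fine_match(model, target, step=0.01, max_iter=200):
    # Greedy refinement of the 6 pose parameters (3 rotations, 3 translations):
    # nudge each parameter by +/-step and keep the change if the match improves.
    params = np.zeros(6)
    def score(p):
        moved = model @ rot(*p[:3]).T + p[3:]
        return -((moved - target) ** 2).sum()   # higher is better
    best = score(params)
    for _ in range(max_iter):
        improved = False
        for i in range(6):
            for delta in (step, -step):
                trial = params.copy()
                trial[i] += delta
                s = score(trial)
                if s > best:
                    best, params, improved = s, trial, True
        if not improved:
            step /= 2               # refine the search once no step helps
            if step < 1e-6:
                break
    return params
```

The halving of the step size mirrors the "minimal amount" refinement described above: coarse moves find the basin, ever-smaller moves settle on the maximum of the matching function.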
Auto convergence for stereoscopic 3D cameras
NASA Astrophysics Data System (ADS)
Zhang, Buyue; Kothandaraman, Sreenivas; Batur, Aziz Umit
2012-03-01
Viewing comfort is an important concern for 3-D capable consumer electronics such as 3-D cameras and TVs. Consumer-generated content is typically viewed at a close distance, which makes the vergence-accommodation conflict particularly pronounced, causing discomfort and eye fatigue. In this paper, we present a Stereo Auto Convergence (SAC) algorithm for consumer 3-D cameras that reduces the vergence-accommodation conflict on the 3-D display by adjusting the depth of the scene automatically. Our algorithm processes stereo video in real time and shifts each stereo frame horizontally by an appropriate amount to converge on the chosen object in that frame. The algorithm starts by estimating disparities between the left and right image pairs using correlations of the vertical projections of the image data. The estimated disparities are then analyzed by the algorithm to select a point of convergence. The current and target disparities of the chosen convergence point determine how much horizontal shift is needed. A disparity safety check is then performed to determine whether the maximum and minimum disparity limits would be exceeded after auto convergence; if so, further adjustments are made to satisfy the safety limits. Finally, the desired convergence is achieved by shifting the left and right frames accordingly. Our algorithm runs in real time at 30 fps on a TI OMAP4 processor. It was tested using an OMAP4 embedded prototype stereo 3-D camera and significantly improves 3-D viewing comfort.
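The disparity-estimation step, correlating vertical projections of the two views, can be sketched as below. This is a minimal reading of the abstract, not the authors' implementation; `estimate_disparity` and its `max_disp` default are hypothetical.

```python
import numpy as np

def estimate_disparity(left, right, max_disp=32):
    # Correlate the vertical (column-sum) projections of the two views
    # over a range of horizontal shifts; the best-scoring shift is the
    # dominant disparity between the stereo pair.
    pl = left.sum(axis=0).astype(float)
    pr = right.sum(axis=0).astype(float)
    pl -= pl.mean()
    pr -= pr.mean()
    best, best_score = 0, -np.inf
    for d in range(-max_disp, max_disp + 1):
        if d >= 0:
            a, b = pl[d:], pr[:len(pr) - d]
        else:
            a, b = pl[:d], pr[-d:]
        score = float(np.dot(a, b)) / max(len(a), 1)  # mean correlation at shift d
        if score > best_score:
            best_score, best = score, d
    return best
```

Collapsing each image to a 1-D projection before correlating is what makes the search cheap enough for 30 fps operation on an embedded processor.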
Massively parallel implementation of 3D-RISM calculation with volumetric 3D-FFT.
Maruyama, Yutaka; Yoshida, Norio; Tadano, Hiroto; Takahashi, Daisuke; Sato, Mitsuhisa; Hirata, Fumio
2014-07-05
A new three-dimensional reference interaction site model (3D-RISM) program for massively parallel machines, combined with the volumetric 3D fast Fourier transform (3D-FFT), was developed and tested on the RIKEN K supercomputer. The ordinary parallel 3D-RISM program has a limit on the degree of parallelization because of the limitations of the slab-type 3D-FFT; the volumetric 3D-FFT relieves this limitation drastically. We tested the 3D-RISM calculation on a large, fine calculation cell (2048³ grid points) on 16,384 nodes, each having eight CPU cores. The new 3D-RISM program achieved excellent parallel scalability on the RIKEN K supercomputer. As a benchmark application, we employed the program, combined with molecular dynamics simulation, to analyze the oligomerization process of a chymotrypsin inhibitor 2 mutant. The results demonstrate that the massively parallel 3D-RISM program is effective for analyzing the hydration properties of large biomolecular systems.
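The core operation the 3D-FFT parallelization distributes is periodic 3-D convolution computed in Fourier space. A serial numpy sketch of that operation (not the MPI decomposition itself, which is the paper's contribution):

```python
import numpy as np

def fft_convolve3d(a, b):
    # Periodic 3-D convolution via forward/backward 3D-FFTs -- exactly the
    # transforms that slab or volumetric decompositions spread across nodes.
    return np.real(np.fft.ifftn(np.fft.fftn(a) * np.fft.fftn(b)))
```

A slab decomposition assigns whole xy-planes to ranks, capping parallelism at N ranks for an N³ grid; a volumetric (pencil/block) decomposition splits along two or three axes, raising the cap toward N², which is why the volumetric 3D-FFT can scale to 16,384 nodes.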
Touil, Basma; Basarab, Adrian; Delachartre, Philippe; Bernard, Olivier; Friboulet, Denis
2010-03-01
This paper focuses on motion tracking in echocardiographic ultrasound images. The difficulty of this task is related to the fact that echographic image formation induces decorrelation between the underlying motion of tissue and the observed speckle motion. Since Meunier's seminal work, this phenomenon has been investigated in many simulation studies as part of speckle tracking or optical flow-based motion estimation techniques. Most of these studies modeled image formation using a linear convolution approach, where the system point-spread function (PSF) was spatially invariant and the probe geometry was linear. While these assumptions are valid over a small spatial area, they constitute an oversimplification when a complete image is considered. Indeed, echocardiographic acquisition geometry relies on sectorial probes, and the system PSF is not perfectly invariant, even if dynamic focusing is performed. This study investigated the influence of sectorial geometry and a spatially varying PSF on speckle tracking. This was done by simulating a typical 64-element cardiac probe operating at 3.5 MHz, using the simulation software Field II. This simulation first allowed quantification of the decorrelation induced by the system between two images when simple motion such as translation or incompressible deformation was applied. We then quantified the influence of decorrelation on speckle tracking accuracy using a conventional block matching (BM) algorithm and a bilinear deformable block matching (BDBM) algorithm. In echocardiography, motion estimation is usually performed on reconstructed images where the initial sectorial (i.e., polar) data are interpolated onto a Cartesian grid. We therefore studied the influence of sectorial acquisition geometry by performing block matching on Cartesian and polar data. Simulation results show that decorrelation is spatially variant and depends on the position of the region where motion takes place relative to the probe. Previous
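A conventional block-matching (BM) estimator of the kind benchmarked above can be sketched as an exhaustive sum-of-absolute-differences (SAD) search; the names and defaults here are illustrative, not Field II or the authors' code.

```python
import numpy as np

def block_match(ref, cur, top, left, bs=8, search=7):
    # Find the displacement of a bs x bs block between two frames by
    # exhaustively minimizing the sum of absolute differences (SAD).
    block = ref[top:top + bs, left:left + bs].astype(float)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bs > cur.shape[0] or x + bs > cur.shape[1]:
                continue                       # candidate falls outside the frame
            sad = np.abs(cur[y:y + bs, x:x + bs] - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

The deformable (BDBM) variant additionally warps the block bilinearly before matching, which is what lets it follow incompressible deformation rather than pure translation.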
Interior Reconstruction Using the 3d Hough Transform
NASA Astrophysics Data System (ADS)
Dumitru, R.-C.; Borrmann, D.; Nüchter, A.
2013-02-01
Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need arises to characterize and quantify complex environments in an automatic fashion, posing challenges for data analysis. This paper presents a system for 3D modeling that detects planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
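Plane detection with a 3-D Hough transform can be sketched by voting in a (θ, φ, ρ) accumulator. This is the textbook formulation of the transform the paper builds on, with hypothetical bin counts and resolutions; the paper's optimized variants are not reproduced here.

```python
import numpy as np

def hough_planes(points, n_theta=18, n_phi=36, rho_res=0.5, rho_max=20.0):
    # A plane is parametrized by a unit normal (theta, phi) and a signed
    # distance rho, with p . n = rho. Every point votes, in each (theta, phi)
    # cell, for the rho bin it implies; the strongest plane is the argmax.
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    n_rho = int(2 * rho_max / rho_res)
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=int)
    for p in points:
        for i, th in enumerate(thetas):
            for j, ph in enumerate(phis):
                n = np.array([np.sin(th) * np.cos(ph),
                              np.sin(th) * np.sin(ph),
                              np.cos(th)])
                k = int(round((p @ n + rho_max) / rho_res))
                if 0 <= k < n_rho:
                    acc[i, j, k] += 1
    i, j, k = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[i], phis[j], k * rho_res - rho_max
```

The cubic accumulator makes the cost per point proportional to the angular resolution, which is why practical implementations for large scans use randomized or hierarchical variants of this vote.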
NASA Astrophysics Data System (ADS)
Strasser, U.; Marke, T.
2010-05-01
This paper describes the spreadsheet-based point energy balance model ESCIMO.spread, which simulates the energy and mass balance as well as melt rates of a snow surface. The model makes use of hourly recordings of temperature, precipitation, wind speed, relative humidity, and global and longwave radiation. The effect of potential climate change on the seasonal evolution of the snow cover can be estimated by modifying the time series of observed temperature and precipitation by means of adjustable parameters. Model output is graphically visualized in hourly and daily diagrams. The results compare well with weekly measured snow water equivalent (SWE). The model is easily portable and adjustable, and runs particularly fast: hourly calculation of one winter season is instantaneous on a standard computer. ESCIMO.spread can be obtained from the authors on request (contact: ulrich.strasser@uni-graz.at).
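The hourly mass-balance bookkeeping such a model performs can be sketched as below. This is a deliberately simplified degree-hour stand-in with hypothetical `t_snow` and `melt_factor` values, not ESCIMO.spread's full energy balance, which also uses wind, humidity and radiation terms.

```python
def simulate_swe(hours, t_snow=0.5, melt_factor=0.15):
    # hours: iterable of (air_temp_C, precip_mm) records.
    # Precipitation accumulates as snow below t_snow; melt uses a degree-hour
    # factor (mm per degC per hour). Rain-on-snow is ignored in this sketch.
    swe, series = 0.0, []
    for temp, precip in hours:
        if temp < t_snow:
            swe += precip                       # snowfall adds to the pack
        melt = max(0.0, melt_factor * temp)     # potential melt this hour
        swe = max(0.0, swe - melt)              # the pack cannot go negative
        series.append(swe)
    return series
```

Running one winter of hourly records through a loop like this is trivially fast, which matches the paper's point that a full season recomputes instantaneously even inside a spreadsheet.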
NASA Astrophysics Data System (ADS)
Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco
2011-09-01
Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.
Spherical 3D isotropic wavelets
NASA Astrophysics Data System (ADS)
Lanusse, F.; Rassat, A.; Starck, J.-L.
2012-04-01
Context. Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the SFB decomposition of a 3D field, and to accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and, as a toy application, apply a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and find we can successfully remove noise without much loss to the large scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html
3D Elevation Program—Virtual USA in 3D
Lukas, Vicki; Stoker, J.M.
2016-04-14
The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.
Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content
NASA Astrophysics Data System (ADS)
Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio
2013-05-01
With the advent of autostereoscopic display techniques and increased demand for smartphones, there has been significant growth in mobile TV markets. The rapid growth in technical, economical, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices so that people have more opportunities to come into contact with 3D content anytime and anywhere. Even though mobile 3D technology is driving the current market growth, one important consideration remains for consistent development and growth of the display market: human factors linked to mobile 3D viewing should be taken into account before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study aims to investigate the effect of viewing distance on the human visual system during exposure to mobile 3D environments. Recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help protect viewers against undesirable 3D effects, and support gradual progress toward human-friendly mobile 3D viewing.
CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D
NASA Astrophysics Data System (ADS)
Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.
2015-08-01
Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. The process is prone to human error, and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitates and improves this process. Computer-aided systems and databases improve documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of acquiring up to 1 million points per second, providing a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a computer-aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under
None
2016-07-12
This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.
NASA Astrophysics Data System (ADS)
van Hecke, Martin; de Reus, Koen; Florijn, Bastiaan; Coulais, Corentin
2014-03-01
We present a class of elastic structures which exhibit collective buckling in 3D, and create these by a 3D printing/moulding technique. Our structures consist of a cubic lattice of anisotropic unit cells, and we show that their mechanical properties are programmable via the orientation of these unit cells.
2013-10-01
Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.
NASA Technical Reports Server (NTRS)
1977-01-01
A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. A primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers, has provided an effective means of analyzing the prospects for commercialization.
NASA Astrophysics Data System (ADS)
Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad
2009-02-01
In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.
Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A
2015-12-01
3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry: with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of CAD/CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.
3-D Perspective Pasadena, California
NASA Technical Reports Server (NTRS)
2000-01-01
This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Cañada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency
Tracking earthquake source evolution in 3-D
NASA Astrophysics Data System (ADS)
Kennett, B. L. N.; Gorbatov, A.; Spiliopoulos, S.
2014-08-01
Starting from the hypocentre, the point of initiation of seismic energy, we seek to estimate the subsequent trajectory of the points of emission of high-frequency energy in 3-D, which we term the `evocentres'. We track these evocentres as a function of time by energy stacking for putative points on a 3-D grid around the hypocentre that is expanded as time progresses, selecting the location of maximum energy release as a function of time. The spatial resolution in the neighbourhood of a target point can be simply estimated by spatial mapping using the properties of isochrons from the stations. The mapping of a seismogram segment to space is by inverse slowness, and thus more distant stations have a broader spatial contribution. As in hypocentral estimation, the inclusion of a wide azimuthal distribution of stations significantly enhances 3-D capability. We illustrate this approach to tracking source evolution in 3-D by considering two major earthquakes: the 2007 Mw 8.1 Solomon Islands event that ruptured across a plate boundary, and the 2013 Mw 8.3 event 610 km beneath the Sea of Okhotsk. In each case we are able to provide estimates of the evolution of high-frequency energy that tally well with alternative schemes, but also to provide information on the 3-D characteristics that is not available from backprojection from distant networks. We are able to demonstrate that the major characteristics of event rupture can be captured using just a few azimuthally distributed stations, which opens the opportunity for the approach to be used in a rapid mode immediately after a major event to provide guidance for, for example, tsunami warning for megathrust events.
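The energy-stacking search over a spatial grid can be sketched as follows; this is a minimal sketch assuming a constant velocity and precomputed envelope records, with all names hypothetical (the paper's scheme additionally expands the grid and repeats the search per time window).

```python
import numpy as np

def stack_energy(records, stations, grid, v=6.0, dt=0.1):
    # For each candidate grid point, align the station records on the
    # predicted traveltime (distance / velocity) and stack the energy at
    # that sample; the evocentre estimate is the point with the largest stack.
    best, best_e = None, -np.inf
    for g in grid:
        e = 0.0
        for sta, rec in zip(stations, records):
            k = int(round(np.linalg.norm(g - sta) / v / dt))
            if k < len(rec):
                e += rec[k]
        if e > best_e:
            best_e, best = e, g
    return best
```

Because each station contributes along an isochron of candidate points, azimuthal spread of the stations is what collapses the ambiguity to a single grid point, echoing the resolution argument above.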
Methods for comparing 3D surface attributes
NASA Astrophysics Data System (ADS)
Pang, Alex; Freeman, Adam
1996-03-01
A common task in data analysis is to compare two or more sets of data, statistics, presentations, etc. A predominant method in use is side-by-side visual comparison of images. While straightforward, it burdens the user with the task of discerning the differences between the two images. The user is further taxed when the images are of 3D scenes. This paper presents several methods for analyzing the extent, magnitude, and manner in which surfaces in 3D differ in their attributes. The surface geometries are assumed to be identical and only the surface attributes (color, texture, etc.) vary. As a case in point, we examine the differences obtained when a 3D scene is rendered progressively using radiosity with different form factor calculation methods. The comparison methods include extensions of simple methods such as mapping difference information to color or transparency, and more recent methods including the use of surface texture, perturbation, and adaptive placement of error glyphs.
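The simplest of these comparison methods, mapping difference information to color, can be sketched as a signed diverging colormap over per-vertex attributes. The function name and color choices are illustrative, not the paper's:

```python
import numpy as np

def difference_colors(attr_a, attr_b):
    # Map signed per-vertex attribute differences to a diverging colormap:
    # blue where B < A, red where B > A, white where the attributes agree.
    d = np.asarray(attr_b, float) - np.asarray(attr_a, float)
    m = np.abs(d).max() or 1.0            # guard against an all-zero difference
    t = d / m                             # normalized differences in [-1, 1]
    r = np.clip(1 + np.minimum(t, 0), 0, 1)   # red fades out for negative diffs
    b = np.clip(1 - np.maximum(t, 0), 0, 1)   # blue fades out for positive diffs
    g = np.minimum(r, b)                       # white at zero difference
    return np.stack([r, g, b], axis=-1)
```

Painting the shared geometry with these colors turns the side-by-side comparison into a single image in which the viewer reads extent and magnitude of the differences directly.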
A Hybrid 3D Indoor Space Model
NASA Astrophysics Data System (ADS)
Jamali, Ali; Rahman, Alias Abdul; Boguslawski, Pawel
2016-10-01
GIS integrates spatial information and spatial analysis. An important example of such integration is emergency response, which requires route planning inside and outside of a building. Route planning requires detailed information on the indoor and outdoor environment. Indoor navigation network models, including the Geometric Network Model (GNM), Navigable Space Model, sub-division model and regular-grid model, lack indoor data sources and abstraction methods. In this paper, a hybrid indoor space model is proposed. In the proposed method, 3D modeling of the indoor navigation network is based on surveying control points and is less dependent on the 3D geometrical building model. This research proposes a method of indoor space modeling for buildings that lack proper 2D/3D geometrical models or semantic and topological information. The proposed hybrid model consists of topological, geometrical and semantic spaces.
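Route planning over such a navigation network reduces to shortest-path search on a weighted graph; a stdlib sketch with hypothetical node names (the paper's model supplies the graph from surveyed control points, not shown here):

```python
import heapq

def shortest_route(graph, start, goal):
    # Dijkstra over an indoor navigation network: nodes are surveyed
    # control points, weighted edges are navigable connections.
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None, float("inf")
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]
```

Keeping the network topological (rooms, corridors, stairs as nodes and edges) is what lets the same search span indoor and outdoor segments for emergency response.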
Unassisted 3D camera calibration
NASA Astrophysics Data System (ADS)
Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.
2012-03-01
With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as by on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or a camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm's performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
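Once keypoints are matched, the in-plane part of the misalignment (roll, scale, and residual translation, whose vertical component is the disparity to remove) has a closed-form least-squares estimate. The complex-number similarity fit below is a standard sketch, not the authors' estimator, and does not recover pitch/yaw, which need a full epipolar model:

```python
import numpy as np

def estimate_similarity(pts_l, pts_r):
    # Fit r ~ c*l + t over matched keypoints, with c = scale * exp(i*roll),
    # by least squares on complex coordinates (x + i*y).
    zl = pts_l[:, 0] + 1j * pts_l[:, 1]
    zr = pts_r[:, 0] + 1j * pts_r[:, 1]
    ml, mr = zl.mean(), zr.mean()
    c = np.vdot(zl - ml, zr - mr) / np.vdot(zl - ml, zl - ml)
    t = mr - c * ml              # residual translation between the frames
    # t.imag is the vertical disparity the calibration aims to remove
    return abs(c), np.angle(c), (t.real, t.imag)
```

Because the fit is closed-form, it can run per frame, which is what makes discarding frames with erroneous or sparse matches cheap.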
2007-11-02
Final report for the US Defense Logistics Agency on DDFG-T2/P3: 3D Scan Systems Integration. Contract number SPO100-95-D-1014; contractor: Ohio University; delivery order #0001, "3D Scan Systems Integration"; report date: 5 Feb 98.
3D-GNOME: an integrated web service for structural modeling of the 3D genome.
Szalaj, Przemyslaw; Michalski, Paul J; Wróblewski, Przemysław; Tang, Zhonghui; Kadlof, Michal; Mazzocco, Giovanni; Ruan, Yijun; Plewczynski, Dariusz
2016-07-08
Recent advances in high-throughput chromosome conformation capture (3C) technology, such as Hi-C and ChIA-PET, have demonstrated the importance of 3D genome organization in development, cell differentiation and transcriptional regulation. There is now a widespread need for computational tools to generate and analyze 3D structural models from 3C data. Here we introduce our 3D GeNOme Modeling Engine (3D-GNOME), a web service which generates 3D structures from 3C data and provides tools to visually inspect and annotate the resulting structures, in addition to a variety of statistical plots and heatmaps which characterize the selected genomic region. Users submit a bedpe (paired-end BED format) file containing the locations and strengths of long-range contact points, and 3D-GNOME simulates the structure and provides a convenient user interface for further analysis. Alternatively, a user may generate structures using published ChIA-PET data for the GM12878 cell line by simply specifying a genomic region of interest. 3D-GNOME is freely available at http://3dgnome.cent.uw.edu.pl/.
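The bedpe input the service expects can be read with a few lines of parsing. This is a minimal sketch of the format (two genomic anchors plus an optional score column), not 3D-GNOME's actual loader:

```python
def parse_bedpe(lines):
    # Minimal bedpe reader: chrom1 start1 end1 chrom2 start2 end2 [name score].
    contacts = []
    for line in lines:
        if not line.strip() or line.startswith("#"):
            continue                          # skip blanks and header comments
        f = line.rstrip("\n").split("\t")
        a = (f[0], int(f[1]), int(f[2]))      # first anchor region
        b = (f[3], int(f[4]), int(f[5]))      # second anchor region
        score = float(f[7]) if len(f) > 7 else 1.0   # contact strength
        contacts.append((a, b, score))
    return contacts
```

Each `(anchor, anchor, score)` triple corresponds to one long-range contact whose strength constrains the simulated 3D structure.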
3D-GNOME: an integrated web service for structural modeling of the 3D genome
Szalaj, Przemyslaw; Michalski, Paul J.; Wróblewski, Przemysław; Tang, Zhonghui; Kadlof, Michal; Mazzocco, Giovanni; Ruan, Yijun; Plewczynski, Dariusz
2016-01-01
Recent advances in high-throughput chromosome conformation capture (3C) technology, such as Hi-C and ChIA-PET, have demonstrated the importance of 3D genome organization in development, cell differentiation and transcriptional regulation. There is now a widespread need for computational tools to generate and analyze 3D structural models from 3C data. Here we introduce our 3D GeNOme Modeling Engine (3D-GNOME), a web service which generates 3D structures from 3C data and provides tools to visually inspect and annotate the resulting structures, in addition to a variety of statistical plots and heatmaps which characterize the selected genomic region. Users submit a bedpe (paired-end BED format) file containing the locations and strengths of long-range contact points, and 3D-GNOME simulates the structure and provides a convenient user interface for further analysis. Alternatively, a user may generate structures using published ChIA-PET data for the GM12878 cell line by simply specifying a genomic region of interest. 3D-GNOME is freely available at http://3dgnome.cent.uw.edu.pl/. PMID:27185892
Superplot3d: an open source GUI tool for 3d trajectory visualisation and elementary processing.
Whitehorn, Luke J; Hawkes, Frances M; Dublon, Ian An
2013-09-30
When acquiring simple three-dimensional (3d) trajectory data it is common to accumulate large coordinate data sets. In order to examine the integrity and consistency of object tracking, it is often necessary to rapidly visualise these data. Ordinarily, to achieve this the user must either execute 3d plotting functions in a numerical computing environment or manually inspect data in two dimensions, plotting each individual axis. Superplot3d is an open source MATLAB script which takes tab-delimited Cartesian data points in the form x, y, z and time and generates an instant visualization of the object's trajectory in freely rotatable three dimensions. Whole trajectories may be instantly presented, allowing for rapid inspection. Executable from the MATLAB command line (or deployable as a compiled standalone application), superplot3d also provides simple GUI controls to obtain rudimentary trajectory information, allow specific visualization of trajectory sections, and perform elementary processing. Superplot3d thus provides a framework for non-programmers and programmers alike to recreate recently acquired 3d object trajectories in rotatable 3d space. It is intended, via a preference-driven menu, to be flexible and to work with output from multiple tracking software systems. Source code and accompanying GUIDE .fig files are provided for deployment and further development.
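Reading the tab-delimited x, y, z, time rows such a tool consumes, and computing a rudimentary trajectory statistic, takes only a few stdlib lines (Python here rather than MATLAB; the function names are illustrative):

```python
def load_trajectory(lines):
    # Parse tab-delimited "x<TAB>y<TAB>z<TAB>t" rows into float tuples.
    return [tuple(float(v) for v in ln.split("\t")) for ln in lines if ln.strip()]

def path_length(pts):
    # Total 3-D distance travelled along the trajectory, summed segment by segment.
    return sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2) ** 0.5
        for (x1, y1, z1, _), (x2, y2, z2, _) in zip(pts, pts[1:])
    )
```

This kind of quick summary is exactly the "rudimentary trajectory information" a GUI wrapper exposes alongside the rotatable 3d plot.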
PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
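A typical derived quantity PLOT3D computes from a solution file, pressure from the conserved flow variables, can be sketched as follows. This assumes an ideal gas with ratio of specific heats γ; the function name is illustrative, not PLOT3D's internal routine:

```python
import numpy as np

def pressure(q, gamma=1.4):
    # q holds PLOT3D-style solution variables on the grid:
    # density, x/y/z-momentum, and total (stagnation) energy per unit volume.
    rho, mx, my, mz, e = q
    kinetic = 0.5 * (mx**2 + my**2 + mz**2) / rho   # kinetic energy density
    return (gamma - 1.0) * (e - kinetic)            # ideal-gas relation
```

Most of PLOT3D's 74 functions are, like this one, pointwise combinations of the five stored variables, which is why a single solution file suffices to display dozens of different flow quantities.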
PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P. G.
1994-01-01
PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
Simon, Carl G; Yang, Yanyin; Dorsey, Shauna M; Ramalingam, Murugan; Chatterjee, Kaushik
2011-01-01
We have developed a combinatorial platform for fabricating tissue scaffold arrays that can be used for screening cell-material interactions. Traditional research involves preparing samples one at a time for characterization and testing. Combinatorial and high-throughput (CHT) methods lower the cost of research by reducing the amount of time and material required for experiments by combining many samples into miniaturized specimens. In order to help accelerate biomaterials research, many new CHT methods have been developed for screening cell-material interactions where materials are presented to cells as a 2D film or surface. However, biomaterials are frequently used to fabricate 3D scaffolds, cells exist in vivo in a 3D environment and cells cultured in a 3D environment in vitro typically behave more physiologically than those cultured on a 2D surface. Thus, we have developed a platform for fabricating tissue scaffold libraries where biomaterials can be presented to cells in a 3D format.
NASA Astrophysics Data System (ADS)
Lee-Elkin, Forest
2008-04-01
Three-dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data-adaptive solutions for 3D autofocus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation, which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes, allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-band radar data.
Combinatorial 3D Mechanical Metamaterials
NASA Astrophysics Data System (ADS)
Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin
2015-03-01
We present a class of elastic structures which exhibit 3D folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.
MAP3D: a media processor approach for high-end 3D graphics
NASA Astrophysics Data System (ADS)
Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris
1999-12-01
Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with high performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline--allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to successfully find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications in the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high performance 3D graphics.
Reproducibility of 3D chromatin configuration reconstructions
Segal, Mark R.; Xiong, Hao; Capurso, Daniel; Vazquez, Mariel; Arsuaga, Javier
2014-01-01
It is widely recognized that the three-dimensional (3D) architecture of eukaryotic chromatin plays an important role in processes such as gene regulation and cancer-driving gene fusions. Observing or inferring this 3D structure at even modest resolutions had been problematic, since genomes are highly condensed and traditional assays are coarse. However, recently devised high-throughput molecular techniques have changed this situation. Notably, the development of a suite of chromatin conformation capture (CCC) assays has enabled elicitation of contacts—spatially close chromosomal loci—which have provided insights into chromatin architecture. Most analysis of CCC data has focused on the contact level, with less effort directed toward obtaining 3D reconstructions and evaluating the accuracy and reproducibility thereof. While questions of accuracy must be addressed experimentally, questions of reproducibility can be addressed statistically—the purpose of this paper. We use a constrained optimization technique to reconstruct chromatin configurations for a number of closely related yeast datasets and assess reproducibility using four metrics that measure the distance between 3D configurations. The first of these, Procrustes fitting, measures configuration closeness after applying reflection, rotation, translation, and scaling-based alignment of the structures. The others base comparisons on the within-configuration inter-point distance matrix. Inferential results for these metrics rely on suitable permutation approaches. Results indicate that distance matrix-based approaches are preferable to Procrustes analysis, not because of the metrics per se but rather on account of the ability to customize permutation schemes to handle within-chromosome contiguity. It has recently been emphasized that constrained optimization approaches to 3D architecture reconstruction are prone to being trapped in local minima. Our methods of reproducibility assessment provide a
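The Procrustes fitting the abstract describes can be sketched in a few lines: center and normalize both point configurations, then use the singular values of their cross-covariance to obtain the residual after the optimal reflection, rotation, translation, and scaling. This is the standard ordinary-Procrustes statistic, not the authors' actual implementation.

```python
import numpy as np

def procrustes_distance(X, Y):
    """Residual between two point configurations after optimal translation,
    rotation/reflection, and scaling (ordinary Procrustes analysis).

    X, Y: (n, 3) arrays of corresponding points (e.g. reconstructed loci).
    Returns a value in [0, 1]; 0 means identical shapes.
    """
    # Center and unit-normalize both configurations
    X0 = X - X.mean(axis=0)
    Y0 = Y - Y.mean(axis=0)
    X0 /= np.linalg.norm(X0)
    Y0 /= np.linalg.norm(Y0)
    # Optimal rotation/reflection and scale come from the SVD
    # of the cross-covariance matrix
    U, s, Vt = np.linalg.svd(X0.T @ Y0)
    return 1.0 - s.sum() ** 2   # standard Procrustes statistic

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
B = 2.0 * A @ R.T + 5.0          # rotated, scaled, translated copy of A
d = procrustes_distance(A, B)    # ~0: shapes identical up to similarity
```

Because the metric quotients out similarity transforms, two reconstructions that differ only by pose or overall scale score as identical, which is exactly the property wanted when comparing independent reconstructions of the same chromatin configuration.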
PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
NASA Astrophysics Data System (ADS)
Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.
2014-08-01
In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing which makes it possible to obtain a solid object from a 3D model, realized with 3D modelling software. The final product is obtained through an additive process in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes that would be quite difficult to produce with dedicated conventional facilities. Because the print is built up layer by layer, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope that is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed afterwards. A further issue is the resolution of the printed material, which is expressed in terms of layers
Tilted planes in 3D image analysis
NASA Astrophysics Data System (ADS)
Pargas, Roy P.; Staples, Nancy J.; Malloy, Brian F.; Cantrell, Ken; Chhatriwala, Murtuza
1998-03-01
Reliable 3D whole-body scanners which output digitized 3D images of a complete human body are now commercially available. This paper describes a software package, called 3DM, being developed by researchers at Clemson University, which manipulates and extracts measurements from such images. The focus of this paper is on tilted planes, a 3DM tool which allows a user to define a plane through a scanned image, tilt it in any direction, and effectively define three disjoint regions on the image: the points on the plane and the points on either side of the plane. With tilted planes, the user can accurately take measurements required in applications such as apparel manufacturing. The user can manually segment the body rather precisely. Tilted planes assist the user in analyzing the form of the body and classifying the body in terms of body shape. Finally, tilted planes allow the user to eliminate extraneous and unwanted points often generated by a 3D scanner. This paper describes the user interface for tilted planes, the equations defining the plane as the user moves it through the scanned image, an overview of the algorithms, and the interaction of the tilted plane feature with other tools in 3DM.
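The three disjoint regions a tilted plane defines follow directly from the signed distance of each point to the plane. The sketch below illustrates the idea on a small point cloud; the function and region names are illustrative, not 3DM's actual API.

```python
import numpy as np

def partition_by_plane(points, anchor, normal, tol=1e-9):
    """Split a 3D point cloud into three regions relative to a plane.

    The plane is given by an anchor point and a normal vector, so the
    signed distance of a point p is dot(p - anchor, n) with n normalized.
    Points with |distance| <= tol count as "on" the plane; the rest fall
    on one side or the other.
    """
    pts = np.asarray(points, float)
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)            # unit normal
    d = (pts - anchor) @ n               # signed distances to the plane
    return {
        "above": pts[d > tol],
        "on":    pts[np.abs(d) <= tol],
        "below": pts[d < -tol],
    }

pts = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0], [0.0, 0.0, 0.0]])
regions = partition_by_plane(pts, anchor=np.zeros(3), normal=[0, 0, 1])
# one point above, one on, one below the z = 0 plane
```

Tilting the plane amounts to changing `normal` (and `anchor`), so the same signed-distance test supports any orientation the user chooses.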
PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters
NASA Astrophysics Data System (ADS)
Schild, Jonas; Seele, Sven; Masuch, Maic
2012-03-01
Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies have become attractive for movie theater operators, e.g. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.
NASA Technical Reports Server (NTRS)
2002-01-01
In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.
Finding Organized Structures in 3-D LADAR Data
2004-12-01
work exists also on how to extract planar and linear objects from scattered 3-D point clouds, see for example [5], [6]. Methods were even proposed to...of structure detection and segmentation from 3-D point clouds collected from a single sensor location or integrated from multiple locations. In [2...primitives to point clouds are difficult to use practically for large data sets containing multiple complex structures, in opposition to multiple planar
On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques
NASA Astrophysics Data System (ADS)
Blundell, Barry G.
2015-06-01
In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.
What is 3D good for? A review of human performance on stereoscopic 3D displays
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.
2012-06-01
This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully; and perhaps used only as is necessary to ensure good performance.
Precise 3D image alignment in micro-axial tomography.
Matula, P; Kozubek, M; Staier, F; Hausmann, M
2003-02-01
Micro (μ-)axial tomography is a challenging technique in microscopy which improves quantitative imaging especially in cytogenetic applications by means of defined sample rotation under the microscope objective. The advantage of micro-axial tomography is an effective improvement of the precision of distance measurements between point-like objects. Under certain circumstances, the effective (3D) resolution can be improved by optimized acquisition depending on subsequent, multi-perspective image recording of the same objects followed by reconstruction methods. This requires, however, a very precise alignment of the tilted views. We present a novel feature-based image alignment method with a precision better than the full width at half maximum of the point spread function. The features are the positions (centres of gravity) of all fluorescent objects observed in the images (e.g. cell nuclei, fluorescent signals inside cell nuclei, fluorescent beads, etc.). Thus, real alignment precision depends on the localization precision of these objects. The method automatically determines the corresponding objects in subsequently tilted perspectives using a weighted bipartite graph. The optimum transformation function is computed in a least squares manner based on the coordinates of the centres of gravity of the matched objects. The theoretically feasible precision of the method was calculated using computer-generated data and confirmed by tests on real image series obtained from data sets of 200 nm fluorescent nano-particles. The advantages of the proposed algorithm are its speed and accuracy, which means that if enough objects are included, the real alignment precision is better than the axial localization precision of a single object. The alignment precision can be assessed directly from the algorithm's output. Thus, the method can be applied not only for image alignment and object matching in tilted view series in order to reconstruct (3D) images, but also to validate the
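The final least-squares step the abstract describes — computing the optimum transformation from the coordinates of matched centres of gravity — can be sketched with the standard Kabsch/SVD method for rigid alignment. This is a generic illustration under the assumption that the bipartite matching has already paired the centroids; it is not the authors' actual implementation.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    matched object centroids src onto dst, via the Kabsch/SVD method.

    src, dst: (n, 3) arrays of already-matched centre-of-gravity
    coordinates.  Minimizes sum ||R @ src_i + t - dst_i||^2.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Correction factor guarantees a proper rotation (det R = +1, no reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(1)
beads = rng.standard_normal((20, 3))        # e.g. bead centroid coordinates
theta = 0.3
R_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(theta), -np.sin(theta)],
                   [0.0, np.sin(theta),  np.cos(theta)]])
tilted = beads @ R_true.T + np.array([0.1, -0.2, 0.3])   # a "tilted view"
R, t = rigid_align(beads, tilted)
err = np.abs(beads @ R.T + t - tilted).max()             # ≈ 0
```

As the abstract notes, with enough matched objects the averaging in this least-squares fit pushes the alignment precision below the localization precision of any single object.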
Van Goethem, Emeline; Guiet, Romain; Balor, Stéphanie; Charrière, Guillaume M; Poincloux, Renaud; Labrousse, Arnaud; Maridonneau-Parini, Isabelle; Le Cabec, Véronique
2011-01-01
Macrophage tissue infiltration is a critical step in the immune response against microorganisms and is also associated with disease progression in chronic inflammation and cancer. Macrophages are constitutively equipped with specialized structures called podosomes dedicated to extracellular matrix (ECM) degradation. We recently reported that these structures play a critical role in the trans-matrix mesenchymal migration mode, a protease-dependent mechanism. Podosome molecular components and their ECM-degrading activity have been extensively studied in two dimensions (2D), yet very little is known about their fate in three-dimensional (3D) environments. Therefore, the localization of podosome markers and proteolytic activity was carefully examined in human macrophages performing mesenchymal migration. Using our gelled collagen I 3D matrix model to obligate human macrophages to perform mesenchymal migration, classical podosome markers including talin, paxillin, vinculin, gelsolin and cortactin were found to accumulate at the tips of F-actin-rich cell protrusions together with β1 integrin and CD44 but not β2 integrin. Macrophage proteolytic activity was observed at podosome-like protrusion sites using confocal fluorescence microscopy and electron microscopy. The formation of migration tunnels by macrophages inside the matrix was accomplished by degradation, engulfment and mechanical compaction of the matrix. In addition, videomicroscopy revealed that the 3D F-actin-rich protrusions of migrating macrophages were as dynamic as their 2D counterparts. Overall, the characteristics of 3D podosomes resembled those of 2D podosome rosettes rather than those of individual podosomes. This observation was further supported by the appearance of 3D podosomes in fibroblasts expressing Hck, a master regulator of podosome rosettes in macrophages. In conclusion, human macrophage podosomes go 3D and take the shape of spherical podosome rosettes when the cells perform mesenchymal migration. This work
3D Printed Bionic Nanodevices.
Kong, Yong Lin; Gupta, Maneesh K; Johnson, Blake N; McAlpine, Michael C
2016-06-01
The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and 'living' platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with the
Multislice diffusion mapping for 3-D evolution of cerebral ischemia in a rat stroke model.
Reith, W; Hasegawa, Y; Latour, L L; Dardzinski, B J; Sotak, C H; Fisher, M
1995-01-01
Diffusion-weighted magnetic resonance imaging (DWI) can quantitatively demonstrate cerebral ischemia within minutes after the onset of ischemia. The use of a DWI echo-planar multislice technique in this study and the mapping of the apparent diffusion coefficient (ADC) of water, a reliable indicator of ischemic regions, allow for the detection of the three-dimensional (3-D) evolution of ischemia in a rat stroke model. We evaluated 13 time points from 5 to 180 minutes after occlusion of the middle cerebral artery (MCA) and monitored the 3-D spread of ischemia. Within 5 minutes after the onset of ischemia, regions with reduced ADC values occurred. The core of the lesion, with the lowest absolute ADC values, first appeared in the lateral caudoputamen and frontoparietal cortex, then spread to adjacent areas. The volume of ischemic tissue was 224 +/- 48.5 mm3 (mean +/- SEM) after 180 minutes, ranging from 92 to 320 mm3, and this correlated well with the corrected infarct volume at postmortem (194 +/- 23.1 mm3, r = 0.72, p < 0.05). This experiment demonstrated that 3-D multislice diffusion mapping can detect ischemic regions noninvasively 5 minutes after MCA occlusion and follow the development of ischemia. The distribution of changes in absolute ADC values within the ischemic region can be followed over time, giving important information about the evolution of focal ischemia.
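The ADC values reported above come from the standard monoexponential diffusion model; a minimal sketch of how a per-voxel ADC map is computed from a pair of acquisitions (the signal values below are illustrative, not the study's data):

```python
import numpy as np

def adc_map(s0, sb, b):
    """Apparent diffusion coefficient per voxel from the monoexponential model
    S_b = S_0 * exp(-b * ADC), i.e. ADC = -ln(S_b / S_0) / b (mm^2/s)."""
    return -np.log(np.asarray(sb, float) / np.asarray(s0, float)) / b

# Illustrative voxel signals at b = 1000 s/mm^2:
s0 = np.array([1000.0, 1000.0])   # unweighted (b = 0) signal
sb = np.array([480.0, 700.0])     # diffusion-weighted signal
adc = adc_map(s0, sb, 1000.0)     # less attenuation -> lower, ischemia-like ADC
```

Restricted diffusion in ischemic tissue reduces the signal attenuation, so the second voxel here has the lower ADC, as in the lesion cores described above.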
NASA Technical Reports Server (NTRS)
1997-01-01
Portions of the lander's deflated airbags and a petal are at the lower area of this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. The metallic object at lower right is part of the lander's low-gain antenna. This image is part of a 3D 'monster
3D Computations and Experiments
Couch, R; Faux, D; Goto, D; Nikkel, D
2004-04-05
This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.
Multi-resolution optical 3D sensor
NASA Astrophysics Data System (ADS)
Kühmstedt, Peter; Heinze, Matthias; Schmidt, Ingo; Breitbarth, Martin; Notni, Gunther
2007-06-01
A new multi-resolution, self-calibrating optical 3D measurement system based on the fringe projection technique, named "kolibri FLEX multi", is presented. It can be used to acquire the all-around shape of small to medium-sized objects. The basic measurement principle is the phasogrammetric approach /1,2,3/ in combination with the method of virtual landmarks for the merging of the 3D single views. The system consists of a minimum of two fringe projection sensors. The sensors are mounted on a rotation stage and illuminate the object from different directions. The measurement fields of the sensors can be chosen differently; here, as an example, 40 mm and 180 mm in diameter. During a measurement, the object is scanned with these two resolutions simultaneously. Using the method of virtual landmarks, both point clouds are calculated within the same world coordinate system, resulting in a common 3D point cloud. The final point cloud includes an overview of the object with low point density (wide field) and a region with high point density (focussed view) at the same time. The advantage of the new method is the possibility to measure with different resolutions at the same object region without any mechanical changes in the system or data post-processing. Typical system parameters are a measurement time of 2 min for 12 images and a measurement accuracy ranging from 3 μm to 10 μm. This flexibility makes the measurement system useful for a wide range of applications such as quality control, rapid prototyping, design and CAD/CAM, which will be shown in the paper.
NASA Astrophysics Data System (ADS)
Moser, Christophe; Delrot, Paul; Loterie, Damien; Morales Delgado, Edgar; Modestino, Miguel; Psaltis, Demetri
2016-03-01
3D printing, as a tool to generate complicated shapes from CAD files, on demand, with different materials from plastics to metals, is shortening product development cycles, enabling new design possibilities, and can provide a means to manufacture small volumes cost-effectively. There are many technologies for 3D printing, and the majority use light in the process. In one process (multi-jet modeling, polyjet, printoptical©), a printhead prints layers of ultraviolet-curable liquid plastic. Here, each nozzle deposits the material, which is then flooded by a UV curing lamp to harden it. In another process (stereolithography), a focused UV laser beam provides both the spatial localization and the photo-hardening of the resin. Similarly, laser sintering works with metal powders by locally melting the material point by point and layer by layer. When the laser delivers ultra-fast focused pulses, nonlinear effects polymerize the material with high spatial resolution. In these processes, light is either focused in one spot and the part is made by scanning it, or the light is expanded and covers a wide area for photopolymerization. Hence a fairly "simple" light field is used in both cases. Here, we give examples of how "complex light" brings an additional level of complexity to 3D printing.
Development of a 3D-AFM for true 3D measurements of nanostructures
NASA Astrophysics Data System (ADS)
Dai, Gaoliang; Häßler-Grohne, Wolfgang; Hüser, Dorothee; Wolff, Helmut; Danzebrink, Hans-Ulrich; Koenders, Ludger; Bosse, Harald
2011-09-01
The development of advanced lithography requires highly accurate 3D metrology methods for small line structures of both wafers and photomasks. Development of a new 3D atomic force microscopy (3D-AFM) with vertical and torsional oscillation modes is introduced in this paper. In its configuration, the AFM probe is oscillated using two piezo actuators driven at vertical and torsional resonance frequencies of the cantilever. In such a way, the AFM tip can probe the surface with a vertical and a lateral oscillation, offering high 3D probing sensitivity. In addition, a so-called vector approach probing (VAP) method has been applied. The sample is measured point-by-point using this method. At each probing point, the tip is approached towards the surface until the desired tip-sample interaction is detected and then immediately withdrawn from the surface. Compared to conventional AFMs, where the tip is kept continuously in interaction with the surface, the tip-sample interaction time using the VAP method is greatly reduced and consequently the tip wear is reduced. Preliminary experimental results show promising performance of the developed system. A measurement of a line structure of 800 nm height employing a super sharp AFM tip could be performed with a repeatability of its 3D profiles of better than 1 nm (p-v). A line structure of a Physikalisch-Technische Bundesanstalt photomask with a nominal width of 300 nm has been measured using a flared tip AFM probe. The repeatability of the middle CD values reaches 0.28 nm (1σ). A long-term stability investigation shows that the 3D-AFM has a high stability of better than 1 nm within 197 measurements taken over 30 h, which also confirms the very low tip wear.
ERIC Educational Resources Information Center
Mayshark, Robin K.
1991-01-01
Students explore three-dimensional properties by creating red and green wall decorations related to Christmas. Students examine why images seem to vibrate when red and green pieces are small and close together. Instructions to conduct the activity and construct 3-D glasses are given. (MDH)
3D Printing: Exploring Capabilities
ERIC Educational Resources Information Center
Samuels, Kyle; Flowers, Jim
2015-01-01
As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…
ERIC Educational Resources Information Center
Manos, Harry
2016-01-01
Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…
3D-printed microfluidic automation.
Au, Anthony K; Bhattacharjee, Nirveek; Horowitz, Lisa F; Chang, Tim C; Folch, Albert
2015-04-21
Microfluidic automation - the automated routing, dispensing, mixing, and/or separation of fluids through microchannels - generally remains a slowly-spreading technology because device fabrication requires sophisticated facilities and the technology's use demands expert operators. Integrating microfluidic automation in devices has involved specialized multi-layering and bonding approaches. Stereolithography is an assembly-free, 3D-printing technique that is emerging as an efficient alternative for rapid prototyping of biomedical devices. Here we describe fluidic valves and pumps that can be stereolithographically printed in optically-clear, biocompatible plastic and integrated within microfluidic devices at low cost. User-friendly fluid automation devices can be printed and used by non-engineers as a replacement for costly robotic pipettors or tedious manual pipetting. Engineers can manipulate the designs as digital modules into new devices of expanded functionality. Printing these devices only requires the digital file and electronic access to a printer.
Humphries, Benjamin; Zhang, Hansen; Sheng, Jiayi; Landaverde, Raphael; Herbordt, Martin C
2014-05-01
The 3D FFT is critical in many physical simulations and image processing applications. On FPGAs, however, the 3D FFT was thought to be inefficient relative to other methods such as convolution-based implementations of multi-grid. We find the opposite: a simple design, operating at a conservative frequency, takes 4 μs for 16³, 21 μs for 32³, and 215 μs for 64³ single-precision data points. The first two compare favorably with the 25 μs and 29 μs obtained running on a current Nvidia GPU. Of broader significance, this is a critical piece in implementing a large-scale FPGA-based MD engine: even a single FPGA is capable of keeping the FFT off the critical path for a large fraction of possible MD simulations.
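As a software cross-check of the transform sizes quoted above (a correctness sketch, of course, not a statement about FPGA or GPU timing), a single-precision 3D FFT round-trip in NumPy:

```python
import numpy as np

# Forward/inverse 3D FFT round-trip at the problem sizes quoted above.
rng = np.random.default_rng(1)
for n in (16, 32, 64):
    x = rng.random((n, n, n)).astype(np.float32)
    X = np.fft.fftn(x)                   # forward 3D FFT
    y = np.fft.ifftn(X).real             # inverse transform
    assert np.allclose(x, y, atol=1e-4)  # input recovered to single precision
```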
TACO3D. 3-D Finite Element Heat Transfer Code
Mason, W.E.
1992-03-04
TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.
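TACO3D itself is a closed finite-element code, but the implicit time integration it is limited to can be sketched in miniature; a backward-Euler step for 1D transient heat conduction with fixed-temperature (Dirichlet) ends, as an illustration only (the grid, material values, and function names below are made up):

```python
import numpy as np

def implicit_heat_step(T, alpha, dx, dt):
    """One backward-Euler (implicit) step of 1D heat conduction.

    Solves (I - dt * alpha * d^2/dx^2) T_new = T_old with the two end
    temperatures held fixed (Dirichlet boundary conditions)."""
    n = len(T)
    r = alpha * dt / dx**2
    A = np.zeros((n, n))
    A[0, 0] = A[-1, -1] = 1.0            # boundary rows keep end temperatures
    for i in range(1, n - 1):
        A[i, i - 1] = -r
        A[i, i] = 1.0 + 2.0 * r
        A[i, i + 1] = -r
    return np.linalg.solve(A, T)

# Transient run: hot left end, cold right end; implicit stepping is
# unconditionally stable, so large time steps are allowed.
T = np.zeros(11)
T[0], T[-1] = 100.0, 0.0
for _ in range(2000):
    T = implicit_heat_step(T, alpha=1e-4, dx=0.01, dt=1.0)
# The solution relaxes toward the linear steady-state profile 100 -> 0.
```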
Unit cell geometry of 3-D braided structures
NASA Technical Reports Server (NTRS)
Du, Guang-Wu; Ko, Frank K.
1993-01-01
The traditional approach used in modeling of composites reinforced by three-dimensional (3-D) braids is to assume a simple unit cell geometry of a 3-D braided structure with known fiber volume fraction and orientation. In this article, we first examine 3-D braiding methods in the light of braid structures, followed by the development of geometric models for 3-D braids using a unit cell approach. The unit cell geometry of 3-D braids is identified, and the relationship of structural parameters, such as yarn orientation angle and fiber volume fraction, with the key processing parameters is established. The limiting geometry has been computed by establishing the point at which yarns jam against each other. This makes it possible to identify the complete range of allowable geometric arrangements for 3-D braided preforms. The identified unit cell geometry can be translated to mechanical models which relate the geometrical properties of fabric preforms to the mechanical responses of composite systems.
3D Geomodeling of the Venezuelan Andes
NASA Astrophysics Data System (ADS)
Monod, B.; Dhont, D.; Hervouet, Y.; Backé, G.; Klarica, S.; Choy, J. E.
2010-12-01
The crustal structure of the Venezuelan Andes is investigated by means of a 3D geomodel. The method integrates surface structural data, remote sensing imagery, crustal-scale balanced cross-sections, earthquake locations and focal mechanism solutions to reconstruct fault surfaces at the scale of the mountain belt in a 3D environment. The model proves to be essential for understanding the basic processes of both the orogenic float and the tectonic escape involved in the Plio-Quaternary evolution of the orogen. The reconstruction of the Bocono and Valera faults reveals the 3D shape of the Trujillo block, whose geometry can be compared to a boat bow floating over a mid-crustal detachment horizon emerging at the Bocono-Valera triple junction. Motion of the Trujillo block is accompanied by generalized extension in the upper crust, accommodated by normal faults with listric geometries such as the Motatan, Momboy and Tuñame faults. Extension may be related to the lateral spreading of the upper crust, suggesting that gravity forces play an important role in the escape process.
Superfast 3D absolute shape measurement using five binary patterns
NASA Astrophysics Data System (ADS)
Hyun, Jae-Sang; Zhang, Song
2017-03-01
This paper presents a method that recovers high-quality 3D absolute coordinates point by point with only five binary patterns. Specifically, three dense binary dithered patterns are used to compute the wrapped phase, and the average intensity is combined with two additional binary patterns to determine the fringe order pixel by pixel in the phase domain. The wrapped phase is temporally unwrapped point by point by referring to the fringe order. We further developed a computational framework to reduce the impact of random noise due to dithering and defocusing. Since only five binary fringe patterns are required to recover one 3D frame, extremely high-speed 3D shape measurement can be achieved. For example, we developed a system that captures 2D images at 3333 Hz, and thus performs 3D shape measurement at 667 Hz.
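The wrapped-phase step above is the standard three-step phase-shifting computation; a sketch with synthetic sinusoidal fringes (the paper instead uses defocused binary dithered patterns, and determines fringe order from two extra patterns rather than by the 1D unwrapping used here for checking):

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Three-step phase shifting with shifts of -2π/3, 0, +2π/3.

    Returns the wrapped phase in (-π, π]."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic fringes: a linear phase ramp over two fringe periods.
phi = np.linspace(0.0, 4.0 * np.pi, 256)
a, b, d = 128.0, 100.0, 2.0 * np.pi / 3.0    # bias, modulation, phase shift
i1 = a + b * np.cos(phi - d)
i2 = a + b * np.cos(phi)
i3 = a + b * np.cos(phi + d)

pw = wrapped_phase(i1, i2, i3)               # wrapped into (-π, π]
unwrapped = np.unwrap(pw)                    # recovers the continuous ramp
```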
An Automatic Registration Algorithm for 3D Maxillofacial Model
NASA Astrophysics Data System (ADS)
Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng
2016-09-01
3D image registration aims at aligning two 3D data sets in a common coordinate system and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
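Steps (1)-(2) need feature libraries such as PCL, but the ICP refinement of step (3) can be sketched directly; a minimal point-to-point ICP with a closed-form (Kabsch/SVD) rigid update and brute-force nearest neighbours (an illustration of the technique, not the authors' implementation):

```python
import numpy as np

def kabsch(p, q):
    """Closed-form best-fit rotation R and translation t with R @ p_i + t ≈ q_i."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, qc - R @ pc

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: brute-force nearest neighbours + Kabsch update."""
    cur = src.copy()
    for _ in range(iters):
        nn = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = kabsch(cur, dst[nn])
        cur = cur @ R.T + t
    return kabsch(src, cur) + (cur,)

# Synthetic check: recover a small known rigid motion.
rng = np.random.default_rng(0)
src = rng.random((40, 3))
a = 0.03
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
dst = src @ Rz.T + np.array([0.02, -0.01, 0.03])
R, t, aligned = icp(src, dst)
```

ICP of this kind only refines: it needs a coarse alignment first, which is exactly why the paper precedes it with feature matching and SAC-IA.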
Forensic 3D scene reconstruction
NASA Astrophysics Data System (ADS)
Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.
2000-05-01
Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.
NASA Technical Reports Server (NTRS)
Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.
2013-01-01
Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand, excluding actuators, was $167, significantly lower than that of other robotic hands, which require more complex assembly processes.
van Geer, Erik; Molenbroek, Johan; Schreven, Sander; deVoogd-Claessen, Lenneke; Toussaint, Huib
2012-01-01
In competitive swimming, suits have become more important. These suits influence friction, pressure and wave drag. Friction drag is related to the surface properties, whereas both pressure and wave drag are greatly influenced by body shape. To find a relationship between body shape and drag, the anthropometry of several world-class female swimmers wearing different suits was accurately defined using a 3D scanner and traditional measuring methods. The 3D scans delivered more detailed information about the body shape. On the same day, the swimmers did performance tests in the water with the tested suits. Afterwards, the results of the performance tests and the differences found in body shape were analyzed to determine the deformation caused by a swimsuit and its effect on swimming performance. Although the amount of data is limited because of the few test subjects, there is an indication that the deformation of the body influences the swimming performance.
Forensic 3D Scene Reconstruction
LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.
1999-10-12
Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.
Belenkov, E. A. Ali-Pasha, V. A.
2011-01-15
The structures of clusters of several new carbon 3D-graphite phases have been calculated using molecular-mechanics methods. It is established that the 3D-graphite polytypes α1,1, α1,3, α1,5, α2,1, α2,3, α3,1, β1,2, β1,4, β1,6, β2,1, and β3,2 consist of sp²-hybridized atoms, have hexagonal unit cells, and differ in the structure of their layers and the order of their alternation. A possible way to synthesize the new carbon phases experimentally is proposed: the polymerization and carbonization of hydrocarbon molecules.
Posada, R.; Daul, Ch.; Wolf, D.; Aletti, P.
2007-01-01
Conformal radiotherapy (CRT) results in high-precision tumor volume irradiation. In fractionated radiotherapy (FRT), lesions are irradiated in several sessions so that healthy neighbouring tissues are better preserved than when treatment is carried out in one fraction. In the case of intracranial tumors, classical methods of patient positioning in the irradiation machine coordinate system are invasive and only allow for CRT in one irradiation session. This contribution presents a noninvasive positioning method representing a first step towards the combination of CRT and FRT. The 3D data used for the positioning are point clouds spread over the patient's head (CT data usually acquired during treatment) and points distributed over the patient's face, which are acquired with a structured-light sensor fixed in the therapy room. The geometrical transformation linking the coordinate systems of the diagnosis device (CT modality) and the 3D sensor of the therapy room (visible-light modality) is obtained by registering the surfaces represented by the two 3D point sets. The geometrical relationship between the coordinate systems of the 3D sensor and the irradiation machine is given by a calibration of the sensor position in the therapy room. The global transformation, computed from the two previous transformations, is sufficient to predict the tumor position in the irradiation machine coordinate system from only the corresponding position in the CT coordinate system. Results obtained for a phantom show that the mean positioning error of tumors at the treatment machine isocentre is 0.4 mm. Tests performed with human data proved that the registration algorithm is accurate (0.1 mm mean distance between homologous points) and robust, even under facial expression changes. PMID:18364992
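The chaining of coordinate systems described above (CT → sensor → machine) is a composition of homogeneous transforms; a small sketch with made-up registration and calibration matrices (all values illustrative):

```python
import numpy as np

def rigid(R, t):
    """4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def rot_z(a):
    """Rotation about the z axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Made-up transforms: surface registration (CT -> sensor) and
# room calibration (sensor -> machine).
T_sensor_ct = rigid(rot_z(0.1), np.array([5.0, -2.0, 1.0]))
T_machine_sensor = rigid(rot_z(-0.3), np.array([100.0, 40.0, 60.0]))

# The global transform predicts machine coordinates directly from CT coordinates.
T_machine_ct = T_machine_sensor @ T_sensor_ct

p_ct = np.array([10.0, 20.0, 30.0, 1.0])   # tumor position in CT (homogeneous)
p_machine = T_machine_ct @ p_ct
```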
[Real time 3D echocardiography]
NASA Technical Reports Server (NTRS)
Bauer, F.; Shiota, T.; Thomas, J. D.
2001-01-01
Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurements with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.
[Evaluation of Motion Sickness Induced by 3D Video Clips].
Matsuura, Yasuyuki; Takada, Hiroki
2016-01-01
The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic systems date back to 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. However, symptoms such as eye fatigue and 3D sickness have been concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D contents as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence. We used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. The time courses of these fixation distances and their distributions were compared between subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for the reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology.
Quasi-3D Algorithm in Multi-scale Modeling Framework
NASA Astrophysics Data System (ADS)
Jung, J.; Arakawa, A.
2008-12-01
As discussed in the companion paper by Arakawa and Jung, the Quasi-3D (Q3D) Multi-scale Modeling Framework (MMF) is a 4D estimation/prediction framework that combines a GCM with a 3D anelastic vector vorticity equation model (VVM) applied to a Q3D network of horizontal grid points. This paper presents an outline of the recently revised Q3D algorithm and a highlight of the results obtained by application of the algorithm to an idealized model setting. The Q3D network of grid points consists of two sets of grid-point arrays perpendicular to each other. For a scalar variable, for example, each set consists of three parallel rows of grid points. Principal and supplementary predictions are made on the central and the two adjacent rows, respectively. The supplementary prediction allows the principal prediction to be three-dimensional to at least second-order accuracy. To accommodate a higher-order accuracy and to make the supplementary predictions formally three-dimensional, a few rows of ghost points are added at each side of the array. Values at these ghost points are diagnostically determined by a combination of statistical estimation and extrapolation. The basic structure of the estimation algorithm is determined in view of the global stability of Q3D advection. The algorithm is calibrated using the statistics of past data at and near the intersections of the two sets of grid-point arrays. Since the CRM in the Q3D MMF extends beyond individual GCM boxes, the CRM can be a GCM by itself. However, it is better to couple the CRM with the GCM because (1) the CRM is a Q3D CRM based on a highly anisotropic network of grid points and (2) coupling with a GCM makes it more straightforward to inherit our experience with the conventional GCMs. In the coupled system we have selected, prediction of thermodynamic variables is almost entirely done by the Q3D CRM with no direct forcing by the GCM. The coupling of the dynamics between the two components is through mutual
GPU-Accelerated Denoising in 3D (GD3D)
2013-10-01
The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
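The parameter-sweep step can be sketched in a few lines. The box filter, test volume, and parameter grid below are illustrative stand-ins, not the software's actual filters or API:

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two volumes.
    return float(np.mean((a - b) ** 2))

def box_denoise(vol, radius):
    # Separable 3D box filter (a stand-in for the real bilateral/NLM filters).
    out = vol.astype(float)
    if radius == 0:
        return out
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    for ax in range(3):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), ax, out)
    return out

rng = np.random.default_rng(0)
clean = np.zeros((16, 16, 16))
clean[4:12, 4:12, 4:12] = 1.0                      # noiseless reference
noisy = clean + rng.normal(0.0, 0.3, clean.shape)  # simulated noisy volume

# Sweep the filter parameter and keep the value minimizing MSE vs the reference.
best = min(range(4), key=lambda r: mse(box_denoise(noisy, r), clean))
```

The same sweep structure applies to the real filters, with `radius` replaced by the bilateral or non-local-means parameter grid.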
Sequential assembly of 3D perfusable microfluidic hydrogels.
He, Jiankang; Zhu, Lin; Liu, Yaxiong; Li, Dichen; Jin, Zhongmin
2014-11-01
Bottom-up tissue engineering provides a promising way to recreate complex structural organizations of native organs in artificial constructs by assembling functional repeating modules. However, it is challenging for current bottom-up strategies to simultaneously produce a controllable and immediately perfusable microfluidic network in modularly assembled 3D constructs. Here we present a bottom-up strategy to produce perfusable microchannels in 3D hydrogels by sequentially assembling microfluidic modules. The effects of agarose-collagen composition on microchannel replication and 3D assembly of hydrogel modules were investigated. The unique property of predefined microchannels in transporting fluids within 3D assemblies was evaluated. Endothelial cells were incorporated into the microfluidic network of 3D hydrogels for dynamic culture in an in-house bioreactor system. The results indicated that the sequential assembly method could produce interconnected 3D predefined microfluidic networks in optimized agarose-collagen hydrogels, which were fully perfusable and successfully functioned as fluid pathways to facilitate the spreading of endothelial cells. We envision that the presented method could be potentially used to engineer 3D vascularized parenchymal constructs by encapsulating primary cells in bulk hydrogels and incorporating endothelial cells in predefined microchannels.
3-D model-based vehicle tracking.
Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J
2005-10-01
This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of the model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved extended Kalman filter (EKF) is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
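The point-to-line-segment distance at the heart of the similarity metric is a standard geometric primitive; a minimal sketch (not the authors' code) might look like:

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    # Distance from point p to the segment from a to b.
    p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
    ab = b - a
    denom = float(ab @ ab)
    # Project p onto the line through a and b, clamping t to the segment.
    t = 0.0 if denom == 0.0 else float(np.clip((p - a) @ ab / denom, 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab)))
```

Averaging this distance over detected image edge points against the projected 3-D model edges gives a pose-similarity score of the kind the paper evaluates.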
PLOT3D- DRAWING THREE DIMENSIONAL SURFACES
NASA Technical Reports Server (NTRS)
Canright, R. B.
1994-01-01
PLOT3D is a package of programs to draw three-dimensional surfaces of the form z = f(x,y). The function f and the boundary values for x and y are the input to PLOT3D. The surface thus defined may be drawn after arbitrary rotations. However, it is designed to draw only functions in rectangular coordinates expressed explicitly in the above form. It cannot, for example, draw a sphere. Output is by off-line incremental plotter or online microfilm recorder. This package, unlike other packages, will plot any function of the form z = f(x,y) and portrays continuous and bounded functions of two independent variables. With curve fitting, however, it can draw experimental data and pictures which cannot be expressed in the above form. The method used is division of the given x and y ranges into a uniform rectangular grid. The values of the supplied function at the grid points (x, y) are calculated and stored; this defines the surface. The surface is portrayed by connecting successive (y,z) points with straight-line segments for each x value on the grid and, in turn, connecting successive (x,z) points for each fixed y value on the grid. These lines are then projected by parallel projection onto the fixed yz-plane for plotting. This program has been implemented on the IBM 360/67 with on-line CDC microfilm recorder.
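The gridding-and-projection method described above can be sketched as follows; this is a schematic reconstruction for illustration, not the original IBM 360 code:

```python
import numpy as np

def wireframe_segments(f, xs, ys):
    # Evaluate z = f(x, y) on a uniform grid; this defines the surface.
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    Z = f(X, Y)
    segs = []
    # Connect successive (y, z) points for each fixed x value on the grid ...
    for i in range(len(xs)):
        for j in range(len(ys) - 1):
            segs.append(((X[i, j], Y[i, j], Z[i, j]),
                         (X[i, j + 1], Y[i, j + 1], Z[i, j + 1])))
    # ... and successive (x, z) points for each fixed y value.
    for j in range(len(ys)):
        for i in range(len(xs) - 1):
            segs.append(((X[i, j], Y[i, j], Z[i, j]),
                         (X[i + 1, j], Y[i + 1, j], Z[i + 1, j])))
    # Parallel projection onto the fixed yz-plane: drop the x coordinate.
    return [((p[1], p[2]), (q[1], q[2])) for p, q in segs]
```

A 3x3 grid yields 12 projected line segments (two per row and per column), ready for an incremental plotter.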
Fallon FORGE 3D Geologic Model
Doug Blankenship
2016-03-01
An x,y,z scattered data file for the 3D geologic model of the Fallon FORGE site. The model was created in Earthvision by Dynamic Graphics, Inc. and constructed with a grid spacing of 100 m. Geologic surfaces were extrapolated from the input data using a minimum tension gridding algorithm. The data file is tabular data in a text file, with lithology data associated with X,Y,Z grid points. All the relevant information is in the file header (the spatial reference, the projection, etc.). In addition, all the fields in the data file are identified in the header.
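Reading such a headered x,y,z lithology table is straightforward. The header format and field names below are hypothetical, since the actual file layout is only described in general terms:

```python
# Hypothetical layout: '#'-prefixed header lines, then a column-name row,
# then whitespace-delimited X Y Z Lithology records.
sample = """# Spatial reference: NAD83 / UTM zone 11N (assumed for this sketch)
# Grid spacing: 100 m
X Y Z Lithology
414000 4350000 1200 basalt
414100 4350000 1150 rhyolite
"""

def read_model(text):
    # Skip comment/header lines, use the first remaining row as column names.
    lines = [l for l in text.splitlines() if l and not l.startswith("#")]
    header, *rows = [l.split() for l in lines]
    return [dict(zip(header, r)) for r in rows]

points = read_model(sample)
```

Each record then maps column names to values, e.g. `points[0]["Lithology"]` gives the lithology at the first grid point.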
Okura, Yuki; Futamase, Toshifumi E-mail: tof@astr.tohoku.ac.jp
2014-09-10
Highly accurate weak lensing analysis is urgently required for planned cosmic shear observations. For this purpose we have eliminated various sources of systematic noise in the measurement. The point-spread function (PSF) effect is one of them. A perturbative approach for correcting the PSF effect on the observed image ellipticities has been previously employed. Here we propose a new non-perturbative approach for PSF correction that avoids the systematic error associated with the perturbative approach. The new method uses an artificial image for measuring shear which has the same ellipticity as the lensed image. This is done by re-smearing the observed galaxy images and observed star images (PSF) with an additional smearing function to obtain the original lensed galaxy images. We tested the new method with simple simulated objects that have Gaussian or Sérsic profiles smeared by a Gaussian PSF with sufficiently large size to neglect pixelization. Under the condition of no pixel noise, it is confirmed that the new method has no systematic error even if the PSF is large and has a high ellipticity.
Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun
2017-01-01
To address the inaccuracy in estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex set (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit agricultural crop visual interpretation. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important to estimate the PSF of the HR image by using multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images can be proven. In addition, the novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher quality of reconstructed images than that produced by the blind SR method and the bicubic interpolation method. PMID:28208837
Vazquez, Alberto L.; Fukuda, Mitsuhiro; Crowley, Justin C.; Kim, Seong-Gi
2014-01-01
Hemodynamic responses are commonly used to map brain activity; however, their spatial limits have remained unclear because of the lack of a well-defined and malleable spatial stimulus. To examine the properties of neural activity and hemodynamic responses, multiunit activity, local field potential, cerebral blood volume (CBV)-sensitive optical imaging, and laser Doppler flowmetry were measured from the somatosensory cortex of transgenic mice expressing Channelrhodopsin-2 in cortical Layer 5 pyramidal neurons. The magnitude and extent of neural and hemodynamic responses were modulated using different photo-stimulation parameters and compared with those induced by somatosensory stimulation. Photo-stimulation-evoked spiking activity across cortical layers was similar to forelimb stimulation, although their activity originated in different layers. Hemodynamic responses induced by forelimb- and photo-stimulation were similar in magnitude and shape, although the former were slightly larger in amplitude and wider in extent. Altogether, the neurovascular relationship differed between these two stimulation pathways, but photo-stimulation-evoked changes in neural and hemodynamic activities were linearly correlated. Hemodynamic point spread functions were estimated from the photo-stimulation data, and their full-width at half-maximum ranged between 103 and 175 µm. Therefore, submillimeter functional structures separated by a few hundred micrometers may be resolved using hemodynamic methods, such as optical imaging and functional magnetic resonance imaging. PMID:23761666
NASA Astrophysics Data System (ADS)
Wang, Jiayue; Shi, Jiaru; Huang, Wenhui; Tang, Chuanxiang
2017-02-01
Among microfocus X-ray tubes, the 1 MeV energy range has remained a "gray zone" despite its universal application in radiation therapy and non-destructive testing. One challenge in fabricating 1 MeV microfocus X-ray tubes is beam broadening inside metal anodes, which limits the minimum focal spot size a system can obtain. In particular, a complete understanding of the intrinsic broadening process, i.e., the point-spread function (PSF) of X-ray targets, is needed. In this paper, relationships between the PSF and beam energy, target thickness and electron incidence angle were investigated via Monte Carlo simulation. Focal spot limits for both transmission- and reflection-type tungsten targets at 0.5, 1 and 1.5 MeV were calculated, with target thicknesses ranging from 1 μm to 2 cm. Transmission-type targets with thickness less than 5 μm could achieve micrometer-scale spots, while reflection-type targets exhibited superiority for spots larger than 100 μm. In addition, by demonstrating the spot variation at off-normal incidence, the role of a unidirectional beam was explored in microfocus X-ray systems. We expect that these results can enable alternative designs to improve the focal spot limit of X-ray tubes and benefit accurate photon source modeling.
Ackermann, M.; Ajello, M.; Allafort, A.; Bechtol, K.; Bloom, E. D.; Borgland, A. W.; Bottacini, E.; Buehler, R.; Asano, K.; Atwood, W. B.; Baldini, L.; Bellazzini, R.; Bregeon, J.; Ballet, J.; Bastieri, D.; Bonamente, E.; Brandt, T. J.; Brigida, M.; Bruel, P.; et al.
2013-03-01
The Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to detect photons with energies from ≈20 MeV to >300 GeV. The pre-launch response functions of the LAT were determined through extensive Monte Carlo simulations and beam tests. The point-spread function (PSF) characterizing the angular distribution of reconstructed photons as a function of energy and geometry in the detector is determined here from two years of on-orbit data by examining the distributions of γ rays from pulsars and active galactic nuclei (AGNs). Above 3 GeV, the PSF is found to be broader than the pre-launch PSF. We checked for dependence of the PSF on the class of γ-ray source and observation epoch and found none. We also investigated several possible spatial models for pair-halo emission around BL Lac AGNs. We found no evidence for a component with spatial extension larger than the PSF and set upper limits on the amplitude of halo emission in stacked images of low- and high-redshift BL Lac AGNs and the TeV blazars 1ES0229+200 and 1ES0347-121.
NASA Astrophysics Data System (ADS)
Ambrosi, R. M.; Abbey, A. F.; Hutchinson, I. B.; Willingale, R.; Wells, A.; Short, A. D. T.; Campana, S.; Citterio, O.; Tagliaferri, G.; Burkert, W.; Brauninger, H.
2002-08-01
The optical components of the Swift X-ray telescope (XRT) are already developed items. They are the flight spare X-ray mirror from the JET-X/Spectrum-X program and an MOS CCD (CCD22) of the type currently operating in orbit as part of the EPIC focal plane camera on XMM-Newton (SPIE 4140 (2000) 64). The JET-X mirrors were first calibrated at the Max Planck Institute for Extraterrestrial Physics' (MPE) Panter facility, Garching, Germany in 1996 (SPIE 2805 (1996) 56; SPIE 3114 (1997) 392). Half-energy widths of 16 arcsec at 1.5 keV were confirmed for the two flight mirrors and the flight spare. The calibration of the flight spare was repeated at Panter in July 2000 in order to establish whether any changes had occurred during the 4 yr that the mirror had been in storage at the OAB, Milan, Italy. The results reported in this paper confirm that the resolution of the JET-X mirrors has remained stable over this storage period. In an extension of this test program, the flight spare EPIC camera was installed at the focus of the JET-X mirror to simulate the optical system of the Swift XRT. Tolerances in the mirror focal length and the on-axis and off-axis point spread functions were measured, and calibration data sets were used to obtain centroid positions of X-ray point sources. The results confirmed Swift's ability to determine the centroid positions of sources at 100 mCrab brightness to better than 1 arcsec and provided a calibration of the centroiding process as a function of source flux and off-axis angle. The presence of background events in the image frame introduced errors in the centroiding process, and this was accounted for by reducing the sampling area used for the centroiding algorithm.
3D Imaging with Holographic Tomography
NASA Astrophysics Data System (ADS)
Sheppard, Colin J. R.; Kou, Shan Shan
2010-04-01
There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range, where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, where rays are considered to propagate straight through the object. Another type of tomography, called 'diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has been active experimental research on reconstructing complex refractive index data using this approach recently. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From the Fourier optics and information transformation point of view, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography, by scanning the illumination in one direction only, takes on a form that we might call a 'peanut', compared to the case of object rotation, where a diabolo is formed, the peanut exhibiting significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions, the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case. This time, we
Light field display and 3D image reconstruction
NASA Astrophysics Data System (ADS)
Iwane, Toru
2016-06-01
Light field optics and its applications have become rather popular these days. With light field optics, or light field theory, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on your favorite point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is then characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate to the flat display on which the light field data are displayed.
3D Wilson cycle: structural inheritance and subduction polarity reversals
NASA Astrophysics Data System (ADS)
Beaussier, Stephane; Gerya, Taras; Burg, Jean-Pierre
2016-04-01
Many orogens display along-strike variations in their orogenic wedge geometry. For instance, the Alps are an example of lateral changes in subducting lithosphere polarity. High-resolution tomography has shown that the southeast-dipping European lithosphere is separated from the northeast-dipping Adriatic lithosphere by a narrow transition zone at about the "Judicarian" line (Kissling et al. 2006). The formation of such 3D variations remains conjectural. We investigate the conditions that can spontaneously induce such lithospheric structures, and intend to identify the main parameters controlling their formation and geometry. Using the 3D thermo-mechanical code I3ELVIS (Gerya and Yuen 2007), we modelled a Wilson cycle starting from a continental lithosphere in an extensional setting, resulting in continental breakup and oceanic spreading. At a later stage, divergence is gradually reversed to convergence, which induces subduction of the oceanic lithosphere formed during oceanic spreading. In this model, all lateral and longitudinal structures of the lithospheres are generated self-consistently and are consequences of the initial continental structure, tectono-magmatic inheritance, and material rheology. Our numerical simulations point out the control of the rheological parameters defining the brittle/plastic yielding conditions for the lithosphere. Formation of several domains of opposing subduction polarity is facilitated by wide and weak oceanic lithospheres. Furthermore, contrasts of strength between the continental and oceanic lithosphere, as well as the angle between the plate suture and the shortening direction, have a second-order effect on the lateral geometry of the subduction zone. In our numerical experiments, systematic lateral changes in subduction lithosphere polarity during subduction initiation form spontaneously, suggesting an intrinsic physical origin of this phenomenon. Further studies are necessary to understand why this feature, observed
High resolution 3D fluorescence tomography using ballistic photons
NASA Astrophysics Data System (ADS)
Zheng, Jie; Nouizi, Farouk; Cho, Jaedu; Kwong, Jessica; Gulsen, Gultekin
2015-03-01
We are developing a ballistic-photon based approach for improving the spatial resolution of fluorescence tomography using time-domain measurements. This approach uses early photon information contained in measured time-of-flight distributions originating from fluorescence emission. The time point spread functions (TPSF) from both excitation light and emission light are acquired with a gated single-photon avalanche detector (SPAD) and time-correlated single photon counting after a short laser pulse. To determine the ballistic photons for reconstruction, the lifetime of the fluorophore and the time gate from the excitation profiles will be used for calibration, and then the time gate of the fluorescence profile can be defined by a simple time convolution. By mimicking first-generation CT data acquisition, the source-detector pair will translate across and also rotate around the subject. The measurement from each source-detector position will be reshaped into a histogram that can be used by a simple back-projection algorithm in order to reconstruct high resolution fluorescence images. Finally, from these 2D sectioning slices, a 3D inclusion can be reconstructed accurately. To validate the approach, simulation of light transport is performed for biological tissue-like media with an embedded fluorescent inclusion by solving the diffusion equation with the Finite Element Method using COMSOL Multiphysics. The reconstruction results from simulation studies have confirmed that this approach drastically improves the spatial resolution of fluorescence tomography. Moreover, all the results have shown the feasibility of this technique for high resolution small animal imaging up to several centimeters.
3D Nanostructuring of Semiconductors
NASA Astrophysics Data System (ADS)
Blick, Robert
2000-03-01
Modern semiconductor technology makes it possible to machine devices on the nanometer scale. I will discuss the current limits of the fabrication processes, which enable the definition of single-electron transistors with dimensions down to 8 nm. In addition to conventional 2D patterning and structuring of semiconductors, I will demonstrate how to apply 3D nanostructuring techniques to build freely suspended single-crystal beams with lateral dimensions down to 20 nm. In transport measurements in the temperature range from 30 mK up to 100 K, these nano-crystals are characterized with regard to their electronic as well as their mechanical properties. Moreover, I will present possible applications of these devices.
NASA Technical Reports Server (NTRS)
2004-01-01
This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.
NASA Astrophysics Data System (ADS)
Manos, Harry
2016-03-01
Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.
NASA Technical Reports Server (NTRS)
2004-01-01
This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.
Love, Lonnie
2015-01-09
ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.
Microseismic network design assessment based on 3D ray tracing
NASA Astrophysics Data System (ADS)
Näsholm, Sven Peter; Wuestefeld, Andreas; Lubrano-Lavadera, Paul; Lang, Dominik; Kaschwich, Tina; Oye, Volker
2016-04-01
There is increasing demand for versatility in microseismic monitoring networks. In early projects, being able to locate any triggers was considered a success. These early successes led to a better understanding of how to extract value from microseismic results. Today operators, regulators, and service providers work closely together in order to find the optimum network design to meet various requirements. In the current study we demonstrate an integrated and streamlined network capability assessment approach. It is intended for use during the microseismic network design process prior to installation. The assessments are derived from 3D ray tracing between a grid of event points and the sensors. Three aspects are discussed: 1) magnitude of completeness, or detection limit; 2) event location accuracy; and 3) ground-motion hazard. The network capability parameters 1) and 2) are estimated at all hypothetical event locations and are presented in the form of maps for a given seismic sensor coordinate scenario. In addition, the ray tracing traveltimes permit estimation of the point-spread functions (PSFs) at the event grid points. PSFs are useful in assessing the resolution and focusing capability of the network for stacking-based event location and imaging methods. We estimate the performance for a hypothetical network case with 11 sensors. We consider the well-documented region around the San Andreas Fault Observatory at Depth (SAFOD) located north of Parkfield, California. The ray tracing is done through a detailed velocity model which covers a 26.2 by 21.2 km wide area around the SAFOD drill site with a resolution of 200 m for both the P- and S-wave velocities. Systematic network capability assessment for different sensor site scenarios prior to installation facilitates finding a final design which meets the survey objectives.
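As an illustration of the first capability map (detection limit), a toy magnitude-of-completeness grid can be computed from an assumed attenuation law. The station layout, noise level, and attenuation model here are invented for the sketch and are not those of the SAFOD study:

```python
import numpy as np

# Hypothetical 2D sketch: magnitude of completeness on an event grid, assuming
# a simple log-distance attenuation law and a fixed per-station noise floor.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # km
noise_mag = 0.5      # smallest magnitude detectable at 1 km distance (assumed)
min_stations = 3     # detections required to locate an event (assumed)

xs = ys = np.linspace(0.0, 10.0, 21)
mc = np.empty((len(xs), len(ys)))
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        d = np.hypot(*(stations - [x, y]).T)        # epicentral distances, km
        # Magnitude needed for detection at each station: noise + log10 spreading.
        needed = noise_mag + np.log10(np.maximum(d, 0.1))
        # Completeness: the min_stations-th smallest requirement at this point.
        mc[i, j] = np.sort(needed)[min_stations - 1]
```

A real assessment would replace the distance term with 3D ray-traced traveltimes and amplitudes through the velocity model, but the map-building loop has the same shape.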
NASA Technical Reports Server (NTRS)
Wilkerson, Gary W.; Pitalo, Stephen K.
1999-01-01
Different secondary mirror support towers were modeled on the CODE V optical design/analysis program for the NGST Optical Telescope Assembly (OTA) B. The vertices of the NGST OTA B primary and secondary mirrors were separated by close to 9.0 m. One type of tower consisted of a hollow cone 6.0 m long, 2.00 m in diameter at the base, and 0.704 m in diameter at its top. The base of the cone was considered attached to the primary's reaction structure through a hole in the primary. Extending up parallel to the optical axis from the top of this cone were eight blades (pyramidal struts) 3.0 m long. A cross section of each of these long blades was an isosceles triangle with a base of 0.010 m and a height of 0.100 m, with the sharpest part of each triangle pointing inward. The eight struts occurred every 45 deg. The other type of tower was purely a hexapod arrangement and had no blades or cones. The hexapod consisted simply of six very thin circular struts, leaving in pairs at 12:00, 4:00, and 8:00 at the primary and traversing to the outer edge of the back of the secondary mount. At this mount, two struts arrived at each of 10:00, 2:00, and 6:00. The struts were attached to the primary mirror in a ring 3.5 m in diameter. They reached the back of the secondary mount, a circle 0.704 m in diameter. Transmittance analyses at two levels were performed on the secondary mirror support towers. Detailed transmittance calculations were performed with the CODE V optical design/analysis program and were compared to nearly back-of-the-envelope transmittance calculations. Point spread function (PSF) calculations, including both diffraction and aberration effects, were performed on CODE V. As one goes out from the center of the blur (for a point source), the two types of support towers showed little difference between their PSF intensities until one reaches about the 3% level. Contours can be delineated on CODE V down to about 10^-8 times the peak intensity, fine
Positional Awareness Map 3D (PAM3D)
NASA Technical Reports Server (NTRS)
Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise
2012-01-01
The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.
Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C
2013-06-12
The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing.
3D Printable Graphene Composite
Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong
2015-01-01
In human history, both the Iron Age and the Silicon Age thrived only after mature mass-processing technologies were developed. Graphene is the most recent superior material, one that could potentially initiate another new material age. However, conventional processing methods fail to exploit it to its full extent and provide no link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal expansion coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial to keeping thermal stress minute during the printing process. PMID:26153673
Martian terrain & airbags - 3D
NASA Technical Reports Server (NTRS)
1997-01-01
Portions of the lander's deflated airbags and a petal are at lower left in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.
Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.
Martian terrain & airbags - 3D
NASA Technical Reports Server (NTRS)
1997-01-01
Portions of the lander's deflated airbags and a petal are at the lower area of this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.
3D structured illumination microscopy
NASA Astrophysics Data System (ADS)
Dougherty, William M.; Goodwin, Paul C.
2011-03-01
Three-dimensional structured illumination microscopy achieves double the lateral and axial resolution of wide-field microscopy, using conventional fluorescent dyes, proteins and sample preparation techniques. A three-dimensional interference-fringe pattern excites the fluorescence, filling in the "missing cone" of the wide field optical transfer function, thereby enabling axial (z) discrimination. The pattern acts as a spatial carrier frequency that mixes with the higher spatial frequency components of the image, which usually succumb to the diffraction limit. The fluorescence image encodes the high frequency content as a down-mixed, moiré-like pattern. A series of images is required, wherein the 3D pattern is shifted and rotated, providing down-mixed data for a system of linear equations. Super-resolution is obtained by solving these equations. The speed with which the image series can be obtained can be a problem for the microscopy of living cells. Challenges include pattern-switching speeds, optical efficiency, wavefront quality and fringe contrast, fringe pitch optimization, and polarization issues. We will review some recent developments in 3D-SIM hardware with the goal of super-resolved z-stacks of motile cells.
Lattice radial quantization: 3D Ising
NASA Astrophysics Data System (ADS)
Brower, R. C.; Fleming, G. T.; Neuberger, H.
2013-04-01
Lattice radial quantization is introduced as a nonperturbative method intended to numerically solve Euclidean conformal field theories that can be realized as fixed points of known Lagrangians. As an example, we employ a lattice shaped as a cylinder with a 2D icosahedral cross section to discretize dilatations in the 3D Ising model. Using the integer spacing of the anomalous dimensions of the first two descendants (l = 1, 2), we obtain an estimate for η = 0.034(10). We also observed small deviations from integer spacing for the 3rd descendant, which suggests that a further improvement of our radial lattice action will be required to guarantee conformal symmetry at the Wilson-Fisher fixed point in the continuum limit.
George Mesina; Joshua Hykes
2005-09-01
The RELAP5-3D source code is unstructured, with many interwoven logic flow paths. By restructuring the code, it becomes easier to read and understand, which reduces the time and money required for code development, debugging, and maintenance. A structured program is comprised of blocks of code with one entry and exit point and downward logic flow. IF tests and DO loops inherently create structured code, while GOTO statements are the main cause of unstructured code. FOR_STRUCT is a commercial software package that converts unstructured FORTRAN into structured programming; it was used to restructure individual subroutines. Primarily it transforms GOTO statements, arithmetic IF statements, and computed GOTO statements into IF-ELSEIF-ELSE tests and DO loops. The complexity of RELAP5-3D complicated the task. First, FOR_STRUCT cannot completely restructure all the complex coding contained in RELAP5-3D. An iterative approach of multiple FOR_STRUCT applications gave some additional improvements. Second, FOR_STRUCT cannot restructure FORTRAN 90 coding, and RELAP5-3D is partially written in FORTRAN 90. Unix scripts were written to pre-process subroutines into coding that FOR_STRUCT could handle and to post-process it back into FORTRAN 90. Finally, FOR_STRUCT cannot restructure the RELAP5-3D code that contains pre-compiler directives. Variations of a file were processed with different pre-compiler options switched on or off, ensuring that every block of code was restructured. The variations were then recombined to create a completely restructured source file. Unix scripts were written to perform these tasks, as well as to make some minor formatting improvements. In total, 447 files comprising some 180,000 lines of FORTRAN code were restructured. These showed a significant reduction in the number of logic jumps, as measured by the reduction in the number of GOTO statements and line labels. The average number of GOTO statements per subroutine
Skelton, Rosalind E.; Whitaker, Katherine E.; Momcheva, Ivelina G.; Van Dokkum, Pieter G.; Bezanson, Rachel; Leja, Joel; Nelson, Erica J.; Oesch, Pascal; Brammer, Gabriel B.; Labbé, Ivo; Franx, Marijn; Fumagalli, Mattia; Van der Wel, Arjen; Da Cunha, Elisabete; Maseda, Michael V.; Förster Schreiber, Natascha; Kriek, Mariska; Lundgren, Britt F.; Magee, Daniel; Marchesini, Danilo; and others
2014-10-01
The 3D-HST and CANDELS programs have provided WFC3 and ACS spectroscopy and photometry over ≈900 arcmin² in five fields: AEGIS, COSMOS, GOODS-North, GOODS-South, and the UKIDSS UDS field. All these fields have a wealth of publicly available imaging data sets in addition to the Hubble Space Telescope (HST) data, which makes it possible to construct the spectral energy distributions (SEDs) of objects over a wide wavelength range. In this paper we describe a photometric analysis of the CANDELS and 3D-HST HST imaging and the ancillary imaging data at wavelengths 0.3-8 μm. Objects were selected in the WFC3 near-IR bands, and their SEDs were determined by carefully taking the effects of the point-spread function in each observation into account. A total of 147 distinct imaging data sets were used in the analysis. The photometry is made available in the form of six catalogs: one for each field, as well as a master catalog containing all objects in the entire survey. We also provide derived data products: photometric redshifts, determined with the EAZY code, and stellar population parameters determined with the FAST code. We make all the imaging data that were used in the analysis available, including our reductions of the WFC3 imaging in all five fields. 3D-HST is a spectroscopic survey with the WFC3 and ACS grisms, and the photometric catalogs presented here constitute a necessary first step in the analysis of these grism data. All the data presented in this paper are available through the 3D-HST Web site (http://3dhst.research.yale.edu).
Experimental observation of 3-D, impulsive reconnection events in a laboratory plasma
Dorfman, S.; Ji, H.; Yamada, M.; Yoo, J.; Lawrence, E.; Myers, C.; Tharp, T. D.
2014-01-15
Fast, impulsive reconnection is commonly observed in laboratory, space, and astrophysical plasmas. In this work, impulsive, local, 3-D reconnection is identified for the first time in a laboratory current sheet. The two-fluid, impulsive reconnection events observed on the Magnetic Reconnection Experiment (MRX) [Yamada et al., Phys Plasmas 4, 1936 (1997)] cannot be explained by 2-D models and are therefore fundamentally three-dimensional. Several signatures of flux ropes are identified with these events; 3-D high current density regions with O-point structure form during a slow buildup period that precedes a fast disruption of the reconnecting current layer. The observed drop in the reconnection current and spike in the reconnection rate during the disruption are due to ejection of these flux ropes from the layer. Underscoring the 3-D nature of the events, strong out-of-plane gradients in both the density and reconnecting magnetic field are found to play a key role in this process. Electromagnetic fluctuations in the lower hybrid frequency range are observed to peak at the disruption time; however, they are not the key physics responsible for the impulsive phenomena observed. Important features of the disruption dynamics cannot be explained by an anomalous resistivity model. An important discrepancy in the layer width and force balance between the collisionless regime of MRX and kinetic simulations is also revisited. The wider layers observed in MRX may be due to the formation of flux ropes with a wide range of sizes; consistent with this hypothesis, flux rope signatures are observed down to the smallest scales resolved by the diagnostics. Finally, a 3-D two-fluid model is proposed to explain how the observed out-of-plane variation may lead to a localized region of enhanced reconnection that spreads in the direction of the out-of-plane electron flow, ejecting flux ropes from the layer in a 3-D manner.
3D Printing of Graphene Aerogels.
Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong
2016-04-06
3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm⁻³) 3D printed graphene aerogel exhibits superelasticity and high electrical conductivity.
A Simple Quality Assessment Index for Stereoscopic Images Based on 3D Gradient Magnitude
Wang, Shanshan; Shao, Feng; Li, Fucui; Yu, Mei; Jiang, Gangyi
2014-01-01
We present a simple quality assessment index for stereoscopic images based on 3D gradient magnitude. To be more specific, we construct a 3D volume from the stereoscopic images across different disparity spaces and calculate pointwise 3D gradient magnitude similarity (3D-GMS) along the horizontal, vertical, and viewpoint directions. Then, the quality score is obtained by averaging the 3D-GMS scores of all points in the 3D volume. Experimental results on four publicly available 3D image quality assessment databases demonstrate that, in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment. PMID:25133265
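The pooling described in the abstract, a pointwise gradient-magnitude similarity computed over the 3D volume and averaged into a single score, can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the stability constant `c` and the use of `np.gradient` for the three directional derivatives are assumptions.

```python
import numpy as np

def gms_3d(ref_vol, dist_vol, c=170.0):
    """Pointwise 3D gradient-magnitude similarity, averaged over the volume.

    ref_vol, dist_vol: 3D arrays built by stacking the stereoscopic images
    across disparity space. `c` is a stability constant (its value here is
    an assumption, not taken from the paper).
    """
    # Gradients along the vertical, horizontal, and viewpoint directions.
    gr = np.gradient(ref_vol.astype(float))
    gd = np.gradient(dist_vol.astype(float))
    # 3D gradient magnitudes of reference and distorted volumes.
    mr = np.sqrt(sum(g ** 2 for g in gr))
    md = np.sqrt(sum(g ** 2 for g in gd))
    # Pointwise similarity, then pooled by averaging (the 3D-GMS score).
    gms = (2.0 * mr * md + c) / (mr ** 2 + md ** 2 + c)
    return gms.mean()
```

Because each similarity term is bounded above by 1, identical volumes score exactly 1 and any distortion can only lower the score.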
Quasi 3D dispersion experiment
NASA Astrophysics Data System (ADS)
Bakucz, P.
2003-04-01
This paper studies the problem of tracer dispersion in a coloured fluid flowing through a two-phase 3D rough channel-system in a 40 cm × 40 cm plexi-container filled with homogeneous glass fractions and colourless fluid. The unstable interface between the driving coloured fluid and the colourless fluid develops viscous fingers with a fractal structure at high capillary number. Five two-dimensional fractal fronts have been observed at the same time using four cameras along the vertical side-walls and one camera located above the plexi-container. From these five fronts, the spatial concentration contours are determined using statistical models. The concentration contours are self-affine fractal curves with a fractal dimension D=2.19. This result is valid for dispersion at high Péclet numbers.
Sinclair, Michael B
2012-01-05
ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
Love, Lonnie
2016-11-02
ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy's Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a 'plug-n-play' laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.
NASA Technical Reports Server (NTRS)
2009-01-01
wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris is moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer, modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass., was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.
The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is emitted not at discrete wavelengths but in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.
This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.
High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these
Kunert, Wolfgang; Storz, Pirmin; Kirschniak, Andreas
2013-02-01
The authors are grateful for the interesting perspectives given by Buchs and colleagues in their letter to the editor entitled "3D Laparoscopy: A Step Toward Advanced Surgical Navigation." Shutter-based 3D video systems failed to become established in the operating room in the late 1990s. To strengthen the starting conditions of the new 3D technology using better monitors and high definition, the authors give suggestions for its practical use in the clinical routine. But first they list the characteristics of single-channeled and bichanneled 3D laparoscopes and describe stereoscopic terms such as "comfort zone," "stereoscopic window," and "near-point distance." The authors believe it would be helpful to have the 3D pioneers assemble and share their experiences with these suggestions. Although this letter discusses "laparoscopy," it would also be interesting to collect experiences from other surgical disciplines, especially when one is considering whether to opt for bi- or single-channeled optics.
3D object recognition based on local descriptors
NASA Astrophysics Data System (ADS)
Jakab, Marek; Benesova, Wanda; Racev, Marek
2015-01-01
In this paper, we propose an enhanced method of 3D object description and recognition based on local descriptors using the RGB image and depth information (D) acquired by a Kinect sensor. Our main contribution is an extension of the SIFT feature vector by 3D information derived from the depth map (SIFT-D). We also propose a novel local depth descriptor (DD) that includes a 3D description of the key point neighborhood. The 3D descriptor thus defined can then enter the decision-making process. Two different approaches have been proposed, tested, and evaluated in this paper. The first approach deals with an object recognition system using the original SIFT descriptor in combination with our novel 3D descriptor, where the proposed 3D descriptor is responsible for the pre-selection of the objects. The second approach demonstrates object recognition using an extension of the SIFT feature vector by the local depth description. In this paper, we present the results of two experiments for the evaluation of the proposed depth descriptors. The results show an improvement in accuracy of the recognition system that includes the 3D local description compared with the same system without it. Our experimental object recognition system operates in near real time.
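As a rough illustration of how a SIFT vector might be extended with a local depth description, the sketch below bins the depths in a keypoint's neighborhood relative to the keypoint depth and concatenates the normalized histogram onto the feature vector. The window radius, bin count, and clipping range are illustrative assumptions; this is not the descriptor defined in the paper.

```python
import numpy as np

def local_depth_descriptor(depth, x, y, r=8, bins=16):
    """Histogram of relative depths in the keypoint neighborhood.

    Depths in an r-pixel window are expressed relative to the keypoint
    depth and binned, giving a vector invariant to the absolute distance
    of the object. Parameters (r, bins, +/-1.0 clip range) are invented
    for illustration.
    """
    patch = depth[max(y - r, 0): y + r + 1, max(x - r, 0): x + r + 1]
    rel = np.clip(patch - depth[y, x], -1.0, 1.0)   # relative depth, clipped
    hist, _ = np.histogram(rel, bins=bins, range=(-1.0, 1.0))
    hist = hist.astype(float)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def sift_d(sift_vec, depth, x, y):
    """SIFT-D-style vector: SIFT descriptor concatenated with depth bins."""
    return np.concatenate([sift_vec, local_depth_descriptor(depth, x, y)])
```

A 128-dimensional SIFT vector would thus grow to 144 dimensions with the default 16 depth bins.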
Visual search is influenced by 3D spatial layout.
Finlayson, Nonie J; Grove, Philip M
2015-10-01
Many activities necessitate the deployment of attention to specific distances and directions in our three-dimensional (3D) environment. However, most research on how attention is deployed is conducted with two dimensional (2D) computer displays, leaving a large gap in our understanding about the deployment of attention in 3D space. We report how each of four parameters of 3D visual space influence visual search: 3D display volume, distance in depth, number of depth planes, and relative target position in depth. Using a search task, we find that visual search performance depends on 3D volume, relative target position in depth, and number of depth planes. Our results demonstrate an asymmetrical preference for targets in the front of a display unique to 3D search and show that arranging items into more depth planes reduces search efficiency. Consistent with research using 2D displays, we found slower response times to find targets in displays with larger 3D volumes compared with smaller 3D volumes. Finally, in contrast to the importance of target depth relative to other distractors, target depth relative to the fixation point did not affect response times or search efficiency.
Performance Evaluation of 3d Modeling Software for Uav Photogrammetry
NASA Astrophysics Data System (ADS)
Yanagi, H.; Chikatsu, H.
2016-06-01
UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAVs and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modeling software are black boxes. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motivation, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.
PSH3D fast Poisson solver for petascale DNS
NASA Astrophysics Data System (ADS)
Adams, Darren; Dodd, Michael; Ferrante, Antonino
2016-11-01
Direct numerical simulation (DNS) of high Reynolds number, Re ≥ O(10⁵), turbulent flows requires computational meshes of ≥ O(10¹²) grid points, and, thus, the use of petascale supercomputers. DNS often requires the solution of a Helmholtz (or Poisson) equation for pressure, which constitutes the bottleneck of the solver. We have developed a parallel solver of the Helmholtz equation in 3D, PSH3D. The numerical method underlying PSH3D combines a parallel 2D Fast Fourier transform in two spatial directions, and a parallel linear solver in the third direction. For computational meshes up to 8192³ grid points, our numerical results show that PSH3D scales up to at least 262k cores of Cray XT5 (Blue Waters). PSH3D has a peak performance 6× faster than 3D FFT-based methods when used with the 'partial-global' optimization, and for an 8192³ mesh solves the Poisson equation in 1 sec using 128k cores. Also, we have verified that the use of PSH3D with the 'partial-global' optimization in our DNS solver does not reduce the accuracy of the numerical solution of the incompressible Navier-Stokes equations.
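The hybrid structure described here, a 2D FFT over two periodic directions followed by an independent one-dimensional linear solve along the third, can be sketched serially for a second-order finite-difference Poisson problem. This is a minimal single-core illustration of the numerical structure, not the parallel PSH3D implementation; periodicity in x and y and homogeneous Dirichlet (p = 0) faces in z are assumptions made for the sketch.

```python
import numpy as np

def _thomas(sub, diag, sup, rhs):
    """Thomas algorithm for one tridiagonal system (complex-valued)."""
    n = diag.size
    cp = np.empty(n - 1, dtype=complex)
    dp = np.empty(n, dtype=complex)
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for k in range(1, n):
        m = diag[k] - sub[k - 1] * cp[k - 1]
        if k < n - 1:
            cp[k] = sup[k] / m
        dp[k] = (rhs[k] - sub[k - 1] * dp[k - 1]) / m
    x = np.empty(n, dtype=complex)
    x[-1] = dp[-1]
    for k in range(n - 2, -1, -1):
        x[k] = dp[k] - cp[k] * x[k + 1]
    return x

def poisson_3d(f, h):
    """Solve lap(p) = f: periodic in x/y (FFT), p = 0 beyond the z faces.

    2D FFT over the periodic directions turns the 3D problem into one
    independent tridiagonal system along z per (kx, ky) mode.
    """
    nx, ny, nz = f.shape
    fh = np.fft.fft2(f, axes=(0, 1))
    # Modified wavenumbers of the 3-point second-difference stencil.
    lam = lambda n: (2.0 * np.cos(2 * np.pi * np.fft.fftfreq(n)) - 2.0) / h**2
    lx, ly = lam(nx), lam(ny)
    off = np.full(nz - 1, 1.0 / h**2)          # z off-diagonals
    ph = np.empty_like(fh)
    for i in range(nx):
        for j in range(ny):
            diag = np.full(nz, -2.0 / h**2 + lx[i] + ly[j], dtype=complex)
            ph[i, j] = _thomas(off, diag, off, fh[i, j])
    return np.fft.ifft2(ph, axes=(0, 1)).real
```

In the parallel setting each (kx, ky) pencil is independent, which is what makes the third-direction solve a natural unit of distribution.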
A novel window based method for approximating the Hausdorff in 3D range imagery.
Koch, Mark William
2004-10-01
Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
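One way to obtain a linear-time approximation of the Hausdorff fraction is to rasterize the scene points into a voxel grid, dilate each occupied cell by a window of radius ⌈τ/cell⌉, and then test every model point with a single lookup. The sketch below follows that idea as a generic grid-based approximation; it is not the paper's exact windowed algorithm, and the cell size is a user-chosen accuracy/space trade-off.

```python
import numpy as np

def hausdorff_fraction(model, scene, tau, cell):
    """Approximate fraction of model points within distance tau of the scene.

    Scene points are binned into voxels of size `cell`; occupied voxels are
    dilated by a cubic window of radius ceil(tau / cell); a model point
    counts as matched if it lands in a dilated voxel. Cost is linear in the
    number of points, with the grid supplying the (approximate) distance test.
    """
    pts = np.vstack([model, scene])
    lo = pts.min(axis=0)
    r = int(np.ceil(tau / cell))                       # window radius in cells
    dims = np.ceil((pts.max(axis=0) - lo) / cell).astype(int) + 2 * r + 1
    grid = np.zeros(dims, dtype=bool)
    idx = ((scene - lo) / cell).astype(int) + r
    # Dilate each occupied cell by the window (cheap for small r).
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dz in range(-r, r + 1):
                grid[tuple((idx + np.array([dx, dy, dz])).T)] = True
    midx = ((model - lo) / cell).astype(int) + r
    return grid[tuple(midx.T)].mean()
```

The cubic window overestimates slightly at the corners (its reach is up to √3·τ there); a finer cell size tightens the approximation at the cost of memory.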
3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance
Dibildox, Gerardo; Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro
2014-09-15
Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regard to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
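At the core of GMM-based rigid point-set registration is a mixture-density match score that is optimized over the rigid transform. The sketch below implements such a score and a brute-force search over a single rotation axis; it deliberately omits the paper's orientation terms, bifurcation-point weighting, and full rigid (6-DOF) optimization, and the kernel width `sigma` is an assumed parameter.

```python
import numpy as np

def gmm_l2_cost(moving, fixed, sigma=1.0):
    """Negative Gaussian-mixture match score between two 3D point sets.

    Each `moving` point is treated as an isotropic Gaussian of width
    `sigma`; the score sums the mixture density evaluated at every `fixed`
    point. Lower cost means better alignment. (A simplified stand-in for
    probabilistic point-set registration, not the paper's full model.)
    """
    d2 = ((fixed[:, None, :] - moving[None, :, :]) ** 2).sum(-1)
    return -np.exp(-d2 / (2.0 * sigma**2)).sum()

def register_rotation_z(moving, fixed, sigma=1.0, steps=360):
    """Brute-force search for the z-axis rotation minimizing the GMM cost."""
    best_cost, best_angle = np.inf, 0.0
    for a in np.linspace(0.0, 2 * np.pi, steps, endpoint=False):
        c, s = np.cos(a), np.sin(a)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        cost = gmm_l2_cost(moving @ R.T, fixed, sigma)
        if cost < best_cost:
            best_cost, best_angle = cost, a
    return best_angle
```

In practice the cost would be minimized with a continuous optimizer over all six rigid parameters rather than a grid, but the smooth, differentiable score is what gives the approach its robustness to outliers.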
Generation and Comparison of Tls and SFM Based 3d Models of Solid Shapes in Hydromechanic Research
NASA Astrophysics Data System (ADS)
Zhang, R.; Schneider, D.; Strauß, B.
2016-06-01
The aim of a current study at the Institute of Hydraulic Engineering and Technical Hydromechanics at TU Dresden is to develop a new injection method for quick and economic sealing of dikes or dike bodies, based on a new synthetic material. To validate the technique, an artificial part of a sand dike was built in an experimental hall. The synthetic material was injected and then spread inside the dike. After the material was fully solidified, the surrounding sand was removed with an excavator. In this paper, two methods for the acquisition of a 3D point cloud of the remaining shapes, applying terrestrial laser scanning (TLS) and structure from motion (SfM) respectively, are described and compared. Using advanced software packages, a triangulated 3D model was generated, and subsequently the volumes of vertical sections of the shape were calculated. As the calculation of the volume revealed differences between the TLS and the SfM 3D model, a thorough qualitative comparison of the two models will be presented, as well as a detailed accuracy assessment. The main influence on the accuracy is generalisation in the case of gaps due to occlusions in the 3D point cloud. Therefore, improvements for the data acquisition with TLS and SfM for such kinds of objects are suggested in the paper.
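Computing the volume of a closed, consistently oriented triangle mesh, as done here for sections of the injected shape, reduces via the divergence theorem to summing the signed tetrahedra spanned by the origin and each face. A minimal sketch (a generic formula, not the workflow of the specific software packages used in the study):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently oriented triangle mesh.

    vertices: (n, 3) float array; faces: (m, 3) integer index array.
    Each face contributes the signed volume v0 . (v1 x v2) / 6 of the
    tetrahedron it spans with the origin; for a watertight mesh the signed
    sum equals the enclosed volume (up to overall orientation).
    """
    v = vertices[faces]                      # (m, 3 corners, 3 coords)
    signed = np.einsum('ij,ij->i', v[:, 0], np.cross(v[:, 1], v[:, 2]))
    return abs(signed.sum()) / 6.0
```

Because the contributions are signed, the formula is exact even for non-convex shapes, provided the surface is watertight and the face windings are consistent.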
Improving automated 3D reconstruction methods via vision metrology
NASA Astrophysics Data System (ADS)
Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart
2015-05-01
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
Park, Kwan Kyu; Khuri-Yakub, Butrus T
2013-09-01
In this paper, we present an airborne 3-D volumetric imaging system based on capacitive micromachined ultrasonic transducers (CMUTs). For this purpose we fabricated 89-kHz CMUTs in which each CMUT is made of a circular single-crystal silicon plate with a radius of 1 mm and a thickness of 20 μm, actuated by electrostatic force through a 20-μm vacuum gap. The measured transmit sensitivity at 300-V DC bias is 14.6 Pa/V and 24.2 Pa/V when excited by a 30-cycle burst and a continuous wave, respectively. The measured receive sensitivity at 300-V DC bias is 16.6 mV/Pa (-35.6 dB re 1 V/Pa) for a 30-cycle burst. A 26×26 2-D array was implemented by mechanically scanning a co-located transmitter and receiver using the classic synthetic aperture (CSA) method. The measurement of a 1.6λ-size target at a distance of 500 mm gave a lateral resolution of 3.17° and showed good agreement with the theoretical point spread function. 3-D images of two plates at distances of 350 mm and 400 mm were reconstructed to demonstrate the capability of the imaging system. This study experimentally demonstrates that a 2-D CMUT array can be used for practical 3-D imaging applications in air, such as a human-machine interface.
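The classic synthetic aperture reconstruction mentioned above amounts to delay-and-sum beamforming over the scanned transmit/receive positions. Below is a hedged sketch; all parameter names and the nearest-sample interpolation are simplifying assumptions:

```python
import numpy as np

def csa_image(traces, positions, pixels, c, fs):
    """Classic synthetic aperture (delay-and-sum) sketch.
    traces[i]  : pulse-echo A-scan recorded at scan position positions[i]
    pixels     : (N, 3) array of image points to reconstruct
    c, fs      : speed of sound and sampling rate (assumed parameters)."""
    img = np.zeros(len(pixels))
    for trace, pos in zip(traces, positions):
        # Round-trip delay for a co-located transmitter and receiver
        d = np.linalg.norm(pixels - pos, axis=1)
        idx = np.round(2.0 * d / c * fs).astype(int)
        valid = idx < len(trace)
        img[valid] += trace[idx[valid]]
    return img

# Toy check: with c=2, fs=1, an echo at sample 3 maps to a target at range 3
trace = np.zeros(10); trace[3] = 1.0
print(csa_image([trace], [np.zeros(3)],
                np.array([[3.0, 0, 0], [1.0, 0, 0]]), c=2.0, fs=1.0))
```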
Stereoscopic 3D-scene synthesis from a monocular camera with an electrically tunable lens
NASA Astrophysics Data System (ADS)
Alonso, Julia R.
2016-09-01
3D-scene acquisition and representation is important in many areas, ranging from medical imaging to visual entertainment applications. In this regard, optical image acquisition combined with post-capture processing algorithms enables the synthesis of images with novel viewpoints of a scene. This work presents a new method to reconstruct a pair of stereoscopic images of a 3D scene from a multi-focus image stack. A conventional monocular camera combined with an electrically tunable lens (ETL) is used for image acquisition. The captured visual information is reorganized using a piecewise-planar image formation model with a depth-variant point spread function (PSF), along with the known focusing distances at which the images of the stack were acquired. The consideration of a depth-variant PSF allows the method to be applied to strongly defocused multi-focus image stacks. Finally, post-capture perspective shifts, presenting each eye the corresponding viewpoint according to the disparity, are generated by simulating the displacement of a synthetic pinhole camera. The procedure is performed without estimating the depth map or segmenting the in-focus regions. Experimental results for both real and synthetic image data are provided and presented as anaglyphs, but the method could easily be implemented in 3D displays based on parallax barriers or polarized light.
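The per-layer parallax shift described above, where each eye's view is synthesized by displacing a virtual pinhole camera, can be sketched as follows. The compositing rule, parameter names, and integer-pixel disparities are all simplifying assumptions, not the paper's method:

```python
import numpy as np

def synthesize_view(layers, depths, baseline, focal_px, sign=+1):
    """Composite in-focus depth layers back-to-front after a per-layer
    horizontal parallax shift d = sign * baseline * focal_px / depth
    (pinhole-camera sketch; np.roll wraps at the border, a toy shortcut)."""
    view = np.zeros_like(layers[0])
    for layer, z in sorted(zip(layers, depths), key=lambda t: -t[1]):
        d = int(round(sign * baseline * focal_px / z))
        shifted = np.roll(layer, d, axis=1)
        view = np.where(shifted > 0, shifted, view)  # nearer layers overwrite
    return view

# Two 1x8 layers: far object at column 2 (depth 100), near at column 5 (depth 50)
far = np.zeros((1, 8)); far[0, 2] = 1.0
near = np.zeros((1, 8)); near[0, 5] = 2.0
print(synthesize_view([far, near], [100.0, 50.0], baseline=1.0, focal_px=100.0))
```

Flipping `sign` produces the opposite eye's viewpoint, giving the stereo pair.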
NASA Astrophysics Data System (ADS)
Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella
2015-09-01
Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions at different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces, compared to the original reconstructions, can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.
Visualization of 3-D tensor fields
NASA Technical Reports Server (NTRS)
Hesselink, L.
1996-01-01
Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.
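The degenerate points defined above, where two or more eigenvalues coincide, can be detected numerically from the sorted eigenvalue gaps of the symmetric tensor at each sample point. A minimal sketch (the tolerance and names are assumptions):

```python
import numpy as np

def is_degenerate(T, tol=1e-9):
    """A point of a symmetric second-order tensor field is degenerate when
    at least two eigenvalues coincide; check via sorted eigenvalue gaps."""
    w = np.sort(np.linalg.eigvalsh(T))
    return bool(np.min(np.diff(w)) < tol)

print(is_degenerate(np.diag([1.0, 1.0, 2.0])))  # → True
print(is_degenerate(np.diag([1.0, 2.0, 3.0])))  # → False
```

Scanning a sampled field with such a test locates the singularities around which the eigenvector topology is organized.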
3D electrohydrodynamic simulation of electrowetting displays
NASA Astrophysics Data System (ADS)
Hsieh, Wan-Lin; Lin, Chi-Hao; Lo, Kuo-Lung; Lee, Kuo-Chang; Cheng, Wei-Yuan; Chen, Kuo-Ching
2014-12-01
The fluid dynamic behavior within a pixel of an electrowetting display (EWD) is thoroughly investigated through a 3D simulation. By coupling the electrohydrodynamic (EHD) force deduced from the Maxwell stress tensor with the laminar phase field of the oil-water dual phase, the complete switch processes of an EWD, including the break-up and the electrowetting stages in the switch-on process (with voltage) and the oil spreading in the switch-off process (without voltage), are successfully simulated. By considering the factor of the change in the apparent contact angle at the contact line, the electro-optic performance obtained from the simulation is found to agree well with its corresponding experiment. The proposed model is used to parametrically predict the effect of interfacial (e.g. contact angle of grid) and geometric (e.g. oil thickness and pixel size) properties on the defects of an EWD, such as oil dewetting patterns, oil overflow, and oil non-recovery. With the help of the defect analysis, a highly stable EWD is both experimentally realized and numerically analyzed.
3D Hall MHD Reconnection Dynamics
NASA Astrophysics Data System (ADS)
Huba, J. D.; Rudakov, L.
2002-05-01
A 3D Hall MHD simulation code (VooDoo) has recently been developed at the Naval Research Laboratory. We present preliminary results of a fully 3D magnetic reconnection study using this code. The initial configuration of the plasma system is as follows. The ambient, reversed magnetic field is in the x-direction and is proportional to B0 tanh(y/Ly), where Ly is the scale length of the current sheet. Perturbation fields δBx and δBy are introduced to initiate the reconnection process. This initial configuration is similar to that used in the 2D GEM reconnection study; however, the perturbation fields are localized in the z-direction. We consider two cases: no guide field (Bz = 0) and a weak guide field (Bz = 0.1B0). We find that the reconnection process is not stationary in the z-direction but propagates in the B × ∇n direction, consistent with Hall drift physics. Hence, an asymmetric disruption of the current sheet ensues. The flow structure of the plasma in the vicinity of the X-point is complex. We find that the `neutral line' (i.e., along the z-direction) is not an ignorable coordinate and is not periodic in Hall MHD reconnection dynamics, two assumptions that are often made in reconnection studies. Research supported by NASA and ONR.
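The initial configuration described above can be written down directly. The grid extents, perturbation envelope, and amplitude `eps` below are illustrative assumptions; the abstract specifies only the tanh profile and a z-localized perturbation:

```python
import numpy as np

# Reversed-field current sheet: Bx = B0 * tanh(y / Ly)
B0, Ly, Lz, eps = 1.0, 0.5, 4.0, 0.1      # assumed values
y = np.linspace(-2.0, 2.0, 65)
z = np.linspace(-8.0, 8.0, 129)
Y, Z = np.meshgrid(y, z, indexing="ij")

Bx = B0 * np.tanh(Y / Ly)
# Perturbation localized in z (Gaussian envelope, an assumed functional form)
dBy = eps * B0 * np.exp(-(Y / Ly) ** 2) * np.exp(-(Z / Lz) ** 2)
```

The z-localization of `dBy` is what breaks translational symmetry along the neutral line, allowing the reconnection site itself to propagate in z.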
Greenberg, M.; Ebel, D.S.
2009-03-19
We present a nondestructive 3D system for analysis of whole Stardust tracks, using a combination of laser confocal scanning microscopy and synchrotron XRF. 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ~15 μm. It has been our goal to perform a total nondestructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of laser confocal scanning microscopy (LCSM) and X-ray fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 μm/pixel, without the use of oil-based lenses. A full textural analysis of track No. 82 is presented here, as well as analysis of 6 additional tracks contained within 3 keystones (No. 128, No. 129 and No. 140). We present a method of removing the axial distortion inherent in LCSM images by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information, while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analysis.
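Axial deblurring with a computed PSF is commonly done with Richardson-Lucy iterations. The sketch below is one standard choice, not necessarily the algorithm these authors used; it is numpy-only with circular boundaries, and assumes the PSF has been padded to the image size and centred in the array:

```python
import numpy as np

def richardson_lucy_3d(image, psf, iterations=20):
    """Minimal Richardson-Lucy deconvolution for a 3D stack.
    `psf` must have the same shape as `image` and be centred in the array."""
    psf = psf / psf.sum()
    otf = np.fft.fftn(np.fft.ifftshift(psf))     # move PSF centre to origin
    conv = lambda a, f: np.real(np.fft.ifftn(np.fft.fftn(a) * f))
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(iterations):
        blurred = conv(estimate, otf)
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= conv(ratio, np.conj(otf))    # adjoint = flipped PSF
    return estimate
```

With a synthetic PSF (as in the "computed point spread function" experiments mentioned above), blurring a point source and deconvolving should recover a sharp peak at the original location.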
NASA Astrophysics Data System (ADS)
Hermanns, Maria
The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.
3D multiplexed immunoplasmonics microscopy
NASA Astrophysics Data System (ADS)
Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel
2016-07-01
Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed
Crowdsourcing Based 3d Modeling
NASA Astrophysics Data System (ADS)
Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.
2016-06-01
Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable their use, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric software that produces a 3D model of the captured object. For this investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived with photogrammetric processing software, simply by using images from the community, without visiting the site.
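Selecting geotagged pictures of a given attraction reduces, at its simplest, to a great-circle distance filter on the photo geotags. The field names and radius below are assumptions for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def near_attraction(photos, lat, lon, radius_m=150):
    """Keep photos whose geotag lies within radius_m of the attraction."""
    return [p for p in photos if haversine_m(p["lat"], p["lon"], lat, lon) <= radius_m]
```

In practice the distance filter is only a pre-selection; content-based matching inside the photogrammetric software rejects the remaining unrelated images.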
NASA Astrophysics Data System (ADS)
Davis, A. B.
2015-12-01
Planetary atmospheres are made primarily of molecules, and their optical properties are well known. They scatter sunlight across the spectrum, but far more potently at shorter wavelengths. Consequently, they redden the Sun as it sets and, at the same time, endow the daytime sky with its characteristic blue hue. There are also microscopic atmospheric particulates that are equally omnipresent because they are small enough (up to tens of microns) to remain lofted for long periods of time. However, in contrast with molecules of the major gases, their concentrations are highly variable in space and time. Their optical properties are also far more interesting. These airborne particles are either solid, hence the word "aerosols", or liquid, most notably in the form of cloud droplets. Needless to say, both aerosols and clouds have major impacts on the balance of the Earth's climate system. Harder to understand, but nonetheless true, is that their climate impacts are much harder for Earth system modelers to assess than those of greenhouse gases such as CO2. That makes them prime targets of study by multiple approaches, including ground- and space-based remote sensing. To characterize aerosols and clouds quantitatively by optical remote sensing methods, either passive (sunlight-based) or active (laser-based), we need predictive capability for the signals recorded by sensors, whether ground-based, airborne, or carried by satellites. This in turn draws on the physical theory of "radiative transfer", which describes how light propagates and scatters in the molecular-and-particulate atmosphere. This is a challenge for remote sensing scientists. I will show why by simulating with simple means the point spread function or "PSF" of scattering particulate atmospheres with varying opacity, thus covering tabletop analogs of pristine air, the background aerosol, all the way to optically thick cloudy airmasses. I will also show PSF measurements of real clouds over New Mexico and
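The kind of simple PSF simulation described above can be approximated with a toy Monte-Carlo photon walk through a scattering slab. Isotropic scattering and the overshoot exit position are deliberate simplifications, and all names are assumptions:

```python
import numpy as np

def slab_psf_radii(tau, n_photons=2000, seed=0):
    """Toy Monte-Carlo sketch of a scattering slab's point spread function.
    Photons enter at the origin heading straight into a slab of optical
    thickness tau (lengths in mean free paths), scattering isotropically
    between exponential free paths. Returns lateral offsets of transmitted
    photons; their spread is a crude PSF width."""
    rng = np.random.default_rng(seed)
    radii = []
    for _ in range(n_photons):
        pos = np.zeros(3)
        mu = np.array([0.0, 0.0, 1.0])   # initial direction: into the slab
        while 0.0 <= pos[2] < tau:
            pos = pos + mu * rng.exponential(1.0)
            if pos[2] >= tau:            # transmitted: record lateral offset
                radii.append(float(np.hypot(pos[0], pos[1])))
            elif pos[2] >= 0.0:          # still inside: isotropic scatter
                u = rng.normal(size=3)
                mu = u / np.linalg.norm(u)
            # pos[2] < 0: photon reflected back out; the loop condition ends it
    return np.array(radii)
```

As expected from the talk's premise, the PSF broadens with opacity: the mean exit radius for tau = 5 (cloudy airmass analog) is markedly larger than for tau = 0.5 (background aerosol analog).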
Volumetric 3D Display System with Static Screen
NASA Technical Reports Server (NTRS)
Geng, Jason
2011-01-01
Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization based primarily on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each voxel of a displayed 3D image at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a full 360° view without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, leading to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined so that each can be illuminated by a light ray from a high-resolution digital micromirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous
Photogrammetric 3D reconstruction using mobile imaging
NASA Astrophysics Data System (ADS)
Fritsch, Dieter; Syll, Miguel
2015-03-01
In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms from photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM to estimate the pose of all photos by structure-from-motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
[3D emulation of epicardium dynamic mapping].
Lu, Jun; Yang, Cui-Wei; Fang, Zu-Xiang
2005-03-01
In order to realize epicardium dynamic mapping of the whole atria, 3-D graphics are drawn with OpenGL. Some source code is presented in the paper to explain how to produce, read, and manipulate the 3-D model data.
An interactive multiview 3D display system
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxing; Geng, Zheng; Zhang, Mei; Dong, Hui
2013-03-01
Progress in 3D display systems and user interaction technologies enables more effective visualization of 3D information, yielding a realistic representation of 3D objects and simplifying our understanding of complex 3D objects and the spatial relationships among them. In this paper, we describe an autostereoscopic multiview 3D display system with real-time user interaction capability. The design principle of this autostereoscopic multiview 3D display system is presented, together with the details of its hardware/software architecture. A prototype was built and tested based upon multiple projectors and a horizontal optically anisotropic display structure. Experimental results illustrate the effectiveness of this novel 3D display and user interaction system.
Laser Based 3D Volumetric Display System
1993-03-01
Soltan, P.; Trias, J.; Robinson, W.; Dahlke, W.
[Report documentation page garbled by OCR; recoverable fragment:] Laser-generated 3D volumetric images are displayed on a rotating double helix, where the 3D displays are computer controlled for group viewing with the naked eye. Cited work includes "A Real Time Autostereoscopic Multiplanar 3D Display System", Rodney Don Williams, Felix Garcia, Jr.