Science.gov

Sample records for 3d object detection

  1. Robust feature detection for 3D object recognition and matching

    NASA Astrophysics Data System (ADS)

    Pankanti, Sharath; Dorai, Chitra; Jain, Anil K.

    1993-06-01

    Salient surface features play a central role in tasks related to 3-D object recognition and matching. There is a large body of psychophysical evidence demonstrating the perceptual significance of surface features such as local minima of principal curvatures in the decomposition of objects into a hierarchy of parts. Many recognition strategies employed in machine vision also directly use features derived from surface properties for matching. Hence, it is important to develop techniques that detect surface features reliably. Our proposed scheme consists of (1) a preprocessing stage, (2) a feature detection stage, and (3) a feature integration stage. The preprocessing step selectively smoothes out noise in the depth data without degrading salient surface details and permits reliable local estimation of the surface features. The feature detection stage detects both edge-based and region-based features, of which many are derived from curvature estimates. The third stage is responsible for integrating the information provided by the individual feature detectors. This stage also completes the partial boundaries provided by the individual feature detectors, using proximity and continuity principles of Gestalt. All our algorithms use local support and, therefore, are inherently parallelizable. We demonstrate the efficacy and robustness of our approach by applying it to two diverse domains of applications: (1) segmentation of objects into volumetric primitives and (2) detection of salient contours on free-form surfaces. We have tested our algorithms on a number of real range images with varying degrees of noise and missing data due to self-occlusion. The preliminary results are very encouraging.
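
    A minimal sketch of the kind of curvature estimate that such region-based feature detectors rely on, assuming a dense, already-smoothed range image z sampled on a regular grid (the function name and grid spacing are illustrative, not the authors' code):

      import numpy as np

      def principal_curvatures(z, spacing=1.0):
          # First and second partial derivatives by finite differences
          # (np.gradient differentiates along rows first, then columns).
          zy, zx = np.gradient(z, spacing)
          zxy, zxx = np.gradient(zx, spacing)
          zyy, _ = np.gradient(zy, spacing)

          g = 1.0 + zx**2 + zy**2                      # 1 + |grad z|^2
          K = (zxx * zyy - zxy**2) / g**2              # Gaussian curvature
          H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy
               + (1 + zy**2) * zxx) / (2 * g**1.5)     # mean curvature

          disc = np.sqrt(np.maximum(H**2 - K, 0.0))
          return H + disc, H - disc                    # k1 >= k2 per pixel

    Local minima of the smaller principal curvature then serve as candidate part-boundary features, in the spirit of the decomposition the abstract describes.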

  2. 3-D Laser-Based Multiclass and Multiview Object Detection in Cluttered Indoor Scenes.

    PubMed

    Zhang, Xuesong; Zhuang, Yan; Hu, Huosheng; Wang, Wei

    2017-01-01

    This paper investigates the problem of multiclass and multiview 3-D object detection for service robots operating in cluttered indoor environments. A novel 3-D object detection system using laser point clouds is proposed to deal with cluttered indoor scenes with limited and imbalanced training data. Raw 3-D point clouds are first transformed to 2-D bearing angle images to reduce the computational cost, and then jointly trained multiple object detectors are deployed to perform the multiclass and multiview 3-D object detection. A reclassification technique is applied to each detected low-confidence bounding box to reduce false alarms. The RUS-SMOTEboost algorithm is used to train a group of independent binary classifiers with imbalanced training data. Dense histograms of oriented gradients and local binary pattern features are combined as the feature set for the reclassification task. Experimental results on the Dalian University of Technology (DUT) 3-D data set, taken from various office and household environments, show the validity and good performance of the proposed method.
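
    The bearing angle (BA) image used above has a standard closed form in the laser-scanning literature; a hedged sketch, assuming ranges is a 2-D array with one scan line per row and delta_phi the angular step between consecutive beams (all names are illustrative):

      import numpy as np

      def bearing_angle_image(ranges, delta_phi):
          # Angle between each laser beam and the segment joining two
          # consecutive measured points, mapped to [0, 255] so standard 2-D
          # detectors (HOG, LBP, sliding-window classifiers) can be applied.
          r0 = ranges[:, :-1]                     # rho_{i-1}
          r1 = ranges[:, 1:]                      # rho_i
          c = np.cos(delta_phi)
          num = r1 - r0 * c
          den = np.sqrt(r1**2 + r0**2 - 2.0 * r1 * r0 * c) + 1e-9
          ba = np.arccos(np.clip(num / den, -1.0, 1.0))
          return np.uint8(255 * ba / np.pi)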

  3. Pose detection of a 3D object using template matched filtering

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Díaz-Ramírez, Víctor H.

    2016-09-01

    The problem of 3D pose recognition of a rigid object is difficult to solve because the pose in 3D space can vary with multiple degrees of freedom. In this work, we propose an accurate method for 3D pose estimation based on template matched filtering. The proposed method utilizes a bank of space-variant filters which take into account different pose states of the target and the local statistical properties of the input scene. The target's location coordinates, orientation angles, and scale are estimated with high accuracy in the input scene. Experimental tests are performed on real and synthetic scenes. The proposed system yields good performance for 3D pose recognition in terms of detection efficiency and location and orientation errors.
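
    A hedged sketch of the basic matched-filter-bank idea (FFT correlation against one template per pose hypothesis); the paper's space-variant filters additionally adapt to local scene statistics, which is omitted here, and all names are illustrative:

      import numpy as np

      def best_pose(scene, templates):
          # templates: dict mapping a pose label (e.g. (yaw, pitch, scale))
          # to a 2-D template already padded to the scene size.
          S = np.fft.fft2(scene)
          best = (-np.inf, None, None)
          for pose, t in templates.items():
              # circular cross-correlation via the FFT
              corr = np.real(np.fft.ifft2(S * np.conj(np.fft.fft2(t))))
              loc = np.unravel_index(np.argmax(corr), corr.shape)
              if corr[loc] > best[0]:
                  best = (corr[loc], pose, loc)
          return best   # (peak value, pose label, (row, col) location)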

  4. 4Pi fluorescence detection and 3D particle localization with a single objective

    PubMed Central

    Schnitzbauer, J.; McGorty, R.; Huang, B.

    2013-01-01

    Coherent detection through two opposing objectives (4Pi configuration) substantially improves the precision of three-dimensional (3D) single-molecule localization along the axial direction, but suffers from instrument complexity and maintenance difficulty. To address these issues, we have realized 4Pi fluorescence detection by sandwiching the sample between the objective and a mirror, creating interference between the directly collected and mirror-reflected signals at the camera with a spatial light modulator. Multifocal imaging using this single-objective mirror-interference scheme offers an improvement in axial localization similar to that of the traditional 4Pi method. We have also devised several PSF engineering schemes to enable 3D localization with a single emitter image, offering better axial precision than normal single-objective localization methods such as astigmatic imaging. PMID:24105517

  5. Identification and Detection of Simple 3D Objects with Severely Blurred Vision

    PubMed Central

    Kallie, Christopher S.; Legge, Gordon E.; Yu, Deyue

    2012-01-01

    Purpose. Detecting and recognizing three-dimensional (3D) objects is an important component of the visual accessibility of public spaces for people with impaired vision. The present study investigated the impact of environmental factors and object properties on the recognition of objects by subjects who viewed physical objects with severely reduced acuity. Methods. The experiment was conducted in an indoor testing space. We examined detection and identification of simple convex objects by normally sighted subjects wearing diffusing goggles that reduced effective acuity to 20/900. We used psychophysical methods to examine the effect on performance of important environmental variables: viewing distance (from 10–24 feet, or 3.05–7.32 m) and illumination (overhead fluorescent and artificial window), and object variables: shape (boxes and cylinders), size (heights from 2–6 feet, or 0.61–1.83 m), and color (gray and white). Results. Object identification was significantly affected by distance, color, height, and shape, as well as interactions between illumination, color, and shape. A stepwise regression analysis showed that 64% of the variability in identification could be explained by object contrast values (58%) and object visual angle (6%). Conclusions. When acuity is severely limited, illumination, distance, color, height, and shape influence the identification and detection of simple 3D objects. These effects can be explained in large part by the impact of these variables on object contrast and visual angle. Basic design principles for improving object visibility are discussed. PMID:23111613

  6. Applying Mean-Shift - Clustering for 3D object detection in remote sensing data

    NASA Astrophysics Data System (ADS)

    Simon, Jürgen-Lorenz; Diederich, Malte; Troemel, Silke

    2013-04-01

    The timely warning and forecasting of high-impact weather events is crucial for life, safety and the economy. Therefore, the development and improvement of methods for the detection and nowcasting / short-term forecasting of these events is an ongoing research question. A new 3D object detection and tracking algorithm is presented. Within the project "object-based analysis and seamless prediction (OASE)" we address a better understanding and forecasting of convective events based on the synergetic use of remotely sensed data and new methods for detection, nowcasting, validation and assimilation. In order to gain advanced insight into the lifecycle of convective cells, we perform object detection on a new high-resolution 3D radar- and satellite-based composite and plan to track the detected objects over time, providing us with a model of the lifecycle. The insights into the lifecycle will be used to improve the prediction of convective events on the nowcasting time scale, and will also provide a new type of data to be assimilated into numerical weather models, thus seamlessly bridging the gap between nowcasting and NWP. The object identification (or clustering) is performed using a technique borrowed from computer vision, called mean-shift clustering. Mean-shift clustering works without many of the parameterizations or rigid threshold schemes employed by existing schemes (e.g., KONRAD, TITAN, Trace-3D), which limit tracking to fully matured convective cells of significant size and/or strength. Mean-shift performs without such limiting definitions, providing a wider scope for studying larger classes of phenomena and providing a vehicle for research into the object definition itself. Since the mean-shift clustering technique can be applied to many types of remote-sensing and model data for object detection, it is of general interest to the remote sensing and modeling communities. The focus of the presentation is the introduction of this technique and the results of its
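
    A minimal mean-shift sketch with a flat kernel, illustrating the clustering step described above; a production version would run on the weighted 3D composite grid and use an efficient neighbour search (all names are illustrative):

      import numpy as np

      def mean_shift(points, bandwidth, n_iter=50, tol=1e-3):
          # Shift each point to the mean of its neighbours within `bandwidth`
          # until convergence; points whose modes coincide form one cluster
          # (one detected 3-D object).
          modes = points.astype(float).copy()
          for _ in range(n_iter):
              shifted = np.empty_like(modes)
              for i, m in enumerate(modes):
                  nb = points[np.linalg.norm(points - m, axis=1) < bandwidth]
                  shifted[i] = nb.mean(axis=0) if len(nb) else m
              done = np.abs(shifted - modes).max() < tol
              modes = shifted
              if done:
                  break
          # Merge modes that converged to (almost) the same location.
          centers, labels = [], np.empty(len(points), dtype=int)
          for i, m in enumerate(modes):
              for j, c in enumerate(centers):
                  if np.linalg.norm(m - c) < bandwidth / 2:
                      labels[i] = j
                      break
              else:
                  centers.append(m)
                  labels[i] = len(centers) - 1
          return np.array(centers), labels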

  7. Detection and Purging of Specular Reflective and Transparent Object Influences in 3d Range Measurements

    NASA Astrophysics Data System (ADS)

    Koch, R.; May, S.; Nüchter, A.

    2017-02-01

    3D laser scanners are favoured sensors for mapping in mobile service robotics for indoor and outdoor applications, since they deliver precise measurements over a wide scanning range. The resulting maps are detailed since they have a high resolution. Based on these maps, robots navigate through rough terrain and fulfil advanced manipulation and inspection tasks. In the case of specular reflective and transparent objects, e.g., mirrors, windows, and shiny metals, the laser measurements are corrupted. Depending on the type of object and the incident angle of the incoming laser beam, three results are possible: a measurement point on the object plane, a measurement behind the object plane, or a measurement of a reflected object. It is important to detect such situations to be able to handle these corrupted points. This paper describes why it is difficult to distinguish between specular reflective and transparent surfaces. It presents a 3D-Reflection-Pre-Filter approach to identify specular reflective and transparent objects in the point clouds of a multi-echo laser scanner. Furthermore, it filters point clouds from the influence of such objects and extracts the object properties for further investigation. Reflective objects are identified based on an Iterative-Closest-Point algorithm. Object surfaces and points behind surfaces are masked according to their location. Finally, the processed point cloud is forwarded to a mapping module, and the object surface corners and the surface type are broadcast. Four experiments demonstrate the usability of the 3D-Reflection-Pre-Filter: the first was carried out in an empty room containing a mirror, the second in a stairway containing a glass door, the third in an empty room containing two mirrors, and the fourth in an office room containing a mirror. This paper demonstrates that for single scans the detection of specular reflective and transparent objects in 3D is possible. It

  8. Knowledge guided object detection and identification in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Karmacharya, A.; Boochs, F.; Tietz, B.

    2015-05-01

    Modern instruments like laser scanners and 3D cameras, or image-based techniques like structure from motion, produce huge point clouds as the basis for further object analysis. This has considerably changed the way data are compiled, away from selective, manually guided processes towards automatic, computer-supported strategies. However, there is still a long way to go to achieve the quality and robustness of manual processes, as the data sets are mostly very complex. Existing strategies for 3D data processing, object detection and reconstruction rely heavily on either data-driven or model-driven approaches. These approaches are limited by their strong dependence on the nature of the data and by their inability to handle deviations, and the lack of means to integrate other data or information between the processing steps exposes their limitations further. This restricts the approaches to a strict, predefined strategy and does not allow deviation when new, unexpected situations arise. We propose a solution that introduces intelligence into the processing activities through the use of semantics. The solution binds the objects, along with other related knowledge domains, to the numerical processing to facilitate the detection of geometries, and then uses experts' inference rules to annotate them. The solution was tested within the prototypical application of the research project "Wissensbasierte Detektion von Objekten in Punktwolken für Anwendungen im Ingenieurbereich (WiDOP)". The flexibility of the solution is demonstrated through two entirely different use case scenarios: Deutsche Bahn (the German railway system) for the outdoor scenario and Fraport (Frankfurt Airport) for the indoor scenario. Apart from the difference in their environments, they provide different conditions which the solution needs to consider: while the locations of the objects at Fraport were known in advance, those in the DB scenario were not known at the beginning.

  9. 220GHz wideband 3D imaging radar for concealed object detection technology development and phenomenology studies

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Macfarlane, David G.; Bryllert, Tomas

    2016-05-01

    We present a 220 GHz 3D imaging `Pathfinder' radar developed within the EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) which has been built to address two objectives: (i) to de-risk the radar hardware development and (ii) to enable the collection of phenomenology data with ~1 cm3 volumetric resolution. The radar combines a DDS-based chirp generator and self-mixing multiplier technology to achieve a 30 GHz bandwidth chirp with such high linearity that the raw point response is close to ideal and only requires minor nonlinearity compensation. The single transceiver is focused with a 30 cm lens mounted on a gimbal to acquire 3D volumetric images of static test targets and materials.
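
    The quoted ~1 cm3 volumetric resolution is consistent with the chirp bandwidth via the usual radar range-resolution relation; a back-of-envelope check (values taken from the abstract):

      c = 3.0e8                 # speed of light, m/s
      B = 30.0e9                # chirp bandwidth, Hz
      delta_r = c / (2 * B)
      print(delta_r)            # 0.005 m, i.e. 5 mm range resolution
      # Together with a roughly centimetre-scale focused spot from the 30 cm
      # lens, this supports voxels on the order of 1 cm^3.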

  10. Implicit Shape Models for Object Detection in 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Velizhev, A.; Shapovalov, R.; Schindler, K.

    2012-07-01

    We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor that is more robust to point density variation and to uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against the state-of-the-art method and obtain significant improvements in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans of 150,000 m2 of urban area in total.
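
    For reference, a minimal version of the unmodified spin-image descriptor that the improved variant builds on, computed at an oriented point (p, n) from its neighbours; the bin size and image size are illustrative assumptions, not the paper's settings:

      import numpy as np

      def spin_image(p, n, neighbors, bin_size=0.05, image_size=16):
          # Each neighbour x maps to (alpha, beta) = (radial distance from the
          # normal axis, signed height along the normal) and is accumulated
          # into a 2-D histogram that is invariant to rotation about n.
          d = neighbors - p
          beta = d @ n
          alpha = np.sqrt(np.maximum((d**2).sum(1) - beta**2, 0.0))
          a_idx = (alpha / bin_size).astype(int)
          b_idx = ((beta / bin_size) + image_size // 2).astype(int)
          keep = (a_idx < image_size) & (b_idx >= 0) & (b_idx < image_size)
          img = np.zeros((image_size, image_size))
          np.add.at(img, (b_idx[keep], a_idx[keep]), 1.0)
          return img / max(img.sum(), 1.0)   # normalised descriptor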

  11. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S.

    2014-10-01

    Millimeter (mm) and sub-mm wavelengths, or the terahertz (THz) band, have several properties that motivate their use in imaging for security applications such as the recognition of hidden objects, dangerous materials and aerosols, imaging through walls as in hostage situations, and imaging in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low for practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detectors (GDDs), that can be used as very sensitive THz radiation detectors. Using them, we designed and constructed a Focal Plane Array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. It is shown here that, using THz waves, even concealed weapons made of dielectric material can be detected; an example is an image of a knife concealed inside a leather bag and under heavy clothing. Three-dimensional imaging using radar methods can enhance those images, since it allows the isolation of the concealed objects from the body and from environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal, eliminating the requirement for expensive mixers, sources, and Low Noise Amplifiers (LNAs). We expanded the ability of the FPA so that we are able to obtain recognizable 2-dimensional THz images in real time. We show here that THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown to be capable of imaging objects at distances that allow standoff detection of suspicious objects and humans.

  12. 3D change detection - Approaches and applications

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Tian, Jiaojiao; Reinartz, Peter

    2016-12-01

    Due to the unprecedented development of sensors, platforms and algorithms for 3D data acquisition and generation, 3D spaceborne, airborne and close-range data, in the form of image-based or Light Detection and Ranging (LiDAR) based point clouds, Digital Elevation Models (DEM) and 3D city models, have become more accessible than ever before. Change detection (CD) or time-series data analysis in 3D has gained great attention due to its capability of providing volumetric dynamics, which facilitates more applications and more accurate results. State-of-the-art CD reviews aim to provide a comprehensive synthesis and to simplify the taxonomy of traditional remote sensing CD techniques, which mainly sit within the boundary of 2D image/spectrum analysis, largely ignoring the particularities of the 3D aspects of the data. The inclusion of 3D data for change detection (termed 3D CD) not only provides a source with a different modality for analysis, but also transcends the border of traditional top-view 2D pixel/object-based analysis towards highly detailed, oblique-view or voxel-based geometric analysis. This paper reviews recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academic and industry researchers who seek solutions for detecting and analyzing the 3D dynamics of various objects of interest. We first describe the general considerations of 3D CD problems at different processing stages and identify CD types based on the information used, namely geometric comparison and geometric-spectral analysis. We then summarize relevant works and practices in urban, environmental, ecological and civil applications, among others. Given the broad spectrum of applications and the different types of 3D data, we discuss important issues in 3D CD methods. Finally, we present concluding remarks on the algorithmic aspects of 3D CD.

  13. Recognizing 3D Object Using Photometric Invariant.

    DTIC Science & Technology

    1995-02-01

    model and the data space coordinates, using centroid invariance of corresponding groups of feature positions. Tests are given to show the stability and... ...recognizing 3D objects. In our testing, it took only 0.2 seconds to derive corresponding positions in the model and the image for natural pictures.

  14. Watermarking 3D Objects for Verification

    DTIC Science & Technology

    1999-01-01

    signal (audio/image/video) processing and steganography fields, and even newer to the computer graphics community. Inherently, digital watermarking of... Many view digital watermarking as a potential solution for copyright protection of valuable digital materials like CD-quality audio, publication... watermark. The object can be an image, an audio clip, a video clip, or a 3D model. Some papers discuss watermarking other forms of multimedia data

  15. Representation and classification of 3-D objects.

    PubMed

    Csakany, P; Wallace, A M

    2003-01-01

    This paper addresses the problem of generic object classification from three-dimensional depth or meshed data. First, surface patches are segmented on the basis of differential geometry and quadratic surface fitting. These are represented by a modified Gaussian image that includes the well-known shape index. Learning is an interactive process in which a human teacher indicates corresponding patches, but the formation of generic classes is unaided. Classification of unknown objects is based on the measurement of similarities between feature sets of the objects and the generic classes. The process is demonstrated on a group of three-dimensional (3-D) objects built from both CAD and laser-scanned depth data.

  16. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new, non-conventional optical 3D laser imaging. Non-conventional optical imaging exploits the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and of low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process, such as resolution, the camouflage scenario, noise impact and degree of lacunarity.

  17. Laser embedding electronics on 3D printed objects

    NASA Astrophysics Data System (ADS)

    Kirleis, Matthew A.; Simonson, Duane; Charipar, Nicholas A.; Kim, Heungsoo; Charipar, Kristin M.; Auyeung, Ray C. Y.; Mathews, Scott A.; Piqué, Alberto

    2014-03-01

    Additive manufacturing techniques such as 3D printing are able to generate reproductions of a part in free space without the use of molds; however, the objects produced lack electrical functionality from an applications perspective. At the same time, techniques such as inkjet and laser direct-write (LDW) can be used to print electronic components and connections onto already existing objects, but are not capable of generating a full object on their own. The approach missing to date is the combination of 3D printing processes with direct-write of electronic circuits. Among the numerous direct write techniques available, LDW offers unique advantages and capabilities given its compatibility with a wide range of materials, surface chemistries and surface morphologies. The Naval Research Laboratory (NRL) has developed various LDW processes ranging from the non-phase transformative direct printing of complex suspensions or inks to lase-and-place for embedding entire semiconductor devices. These processes have been demonstrated in digital manufacturing of a wide variety of microelectronic elements ranging from circuit components such as electrical interconnects and passives to antennas, sensors, actuators and power sources. At NRL we are investigating the combination of LDW with 3D printing to demonstrate the digital fabrication of functional parts, such as 3D circuits. Merging these techniques will make possible the development of a new generation of structures capable of detecting, processing, communicating and interacting with their surroundings in ways never imagined before. This paper shows the latest results achieved at NRL in this area, describing the various approaches developed for generating 3D printed electronics with LDW.

  18. Optical 3D imaging and visualization of concealed objects

    NASA Astrophysics Data System (ADS)

    Berginc, G.; Bellet, J.-B.; Berechet, I.; Berechet, S.

    2016-09-01

    This paper gives new insights into optical 3D imagery. We explore the advantages of laser imagery to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging and surveillance because of its ability to identify tumors or concealed objects. We consider the problem of 3D reconstruction based upon 2D angle-dependent laser images. The objective of this new 3D laser imaging is to provide users with a complete 3D reconstruction of objects from available 2D data that are limited in number. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different meshed objects of the scene of interest, or from experimental 2D laser images. We show that combining the Radon transform on 2D laser images with the Maximum Intensity Projection can generate 3D views of the considered scene from which we can extract the 3D concealed object in real time. With different original numerical and experimental examples, we investigate the effects of the input contrasts and show the robustness and stability of the method. We have developed a new patented method of 3D laser imaging based on three-dimensional reflective tomographic reconstruction algorithms and an associated visualization method. In this paper we present the global 3D reconstruction and visualization procedures.

  19. 3D object recognition based on local descriptors

    NASA Astrophysics Data System (ADS)

    Jakab, Marek; Benesova, Wanda; Racev, Marek

    2015-01-01

    In this paper, we propose an enhanced method of 3D object description and recognition based on local descriptors using the RGB image and depth information (D) acquired by a Kinect sensor. Our main contribution is an extension of the SIFT feature vector by 3D information derived from the depth map (SIFT-D). We also propose a novel local depth descriptor (DD) that includes a 3D description of the key point neighborhood. The 3D descriptor thus defined can then enter the decision-making process. Two different approaches have been proposed, tested and evaluated in this paper. The first approach deals with an object recognition system using the original SIFT descriptor in combination with our proposed 3D descriptor, where the 3D descriptor is responsible for the pre-selection of the objects. The second approach demonstrates object recognition using an extension of the SIFT feature vector by the local depth description. We present the results of two experiments for the evaluation of the proposed depth descriptors. The results show an improvement in the accuracy of the recognition system that includes the 3D local description compared with the same system without it. Our experimental object recognition system works in near real time.
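
    A hedged illustration of the general idea of extending a 2-D feature vector with a local depth description; the bin count, depth range and normalization here are assumptions for the sketch, not the paper's exact SIFT-D or DD layout:

      import numpy as np

      def extend_with_depth(sift_desc, depth_patch, n_bins=16):
          # Append a normalised histogram of keypoint-relative depth (metres)
          # to a 128-D SIFT vector, giving a combined appearance+depth descriptor.
          d = depth_patch[np.isfinite(depth_patch)]   # drop invalid depth readings
          d = d - np.median(d)                        # depth-offset invariance
          hist, _ = np.histogram(d, bins=n_bins, range=(-0.2, 0.2))
          hist = hist / max(hist.sum(), 1)
          return np.concatenate([sift_desc, hist])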

  20. Detailed 3D representations for object recognition and modeling.

    PubMed

    Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad

    2013-11-01

    Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.

  1. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
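
    A hedged sketch of the cueing stage only: given the degree-of-match map and the pixel locations of its unambiguous local maxima, score each thumbnail and sort in descending figure of merit (the thumbnail size and the max-based scoring rule are assumptions, not the paper's exact definition):

      import numpy as np

      def rank_thumbnails(match_map, maxima_rc, thumb=256):
          # maxima_rc: (row, col) coordinates of unambiguous local maxima in
          # the degree-of-match map.  Each thumbnail's figure of merit is taken
          # here as the strongest maximum falling inside it.
          scores = {}
          for r, c in maxima_rc:
              key = (r // thumb, c // thumb)          # which thumbnail tile
              scores[key] = max(scores.get(key, 0.0), match_map[r, c])
          # Descending figure of merit -> order of presentation to the analyst.
          return sorted(scores.items(), key=lambda kv: -kv[1])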

  2. 3-D object-oriented image analysis of geophysical data

    NASA Astrophysics Data System (ADS)

    Fadel, I.; Kerle, N.; van der Meijde, M.

    2014-07-01

    Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth's surface are inverted into a 2-D or 3-D subsurface spatial distribution of the physical property. Interpreting these models in terms of structural objects related to physical processes requires a priori knowledge and expert analysis, which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective, knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge of objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and the objects were fully retrieved. The real-model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions, in which the boundaries become fuzzy, the object extents become unclear and the model characteristics vary with depth due to the different physical conditions. As expected, the 3-D histogram of the real data was

  3. An Evaluative Review of Simulated Dynamic Smart 3d Objects

    NASA Astrophysics Data System (ADS)

    Romeijn, H.; Sheth, F.; Pettit, C. J.

    2012-07-01

    Three-dimensional (3D) modelling of plants can be an asset for creating agriculture-based visualisation products. The continuum of 3D plant models ranges from static to dynamic objects, the latter also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages for applications in agricultural research, particularly in simulating plant behaviour and the influence of external environmental factors. Approaches to 3D plant visualisation range from plants rendered with photographed, billboarded images to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model the physical reactions of plants to external factors, and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of currently available plant-based object simulation programs, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, the features and use of these programs, and the possible opportunities for deploying them to create smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.

  4. 3D object hiding using three-dimensional ptychography

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Wang, Zhibo; Li, Tuo; Pan, An; Wang, Yali; Shi, Yishi

    2016-09-01

    We present a novel technique for 3D object hiding by applying three-dimensional ptychography. Compared with 3D information hiding based on holography, the proposed ptychography-based hiding technique is easier to implement, because the reference beam and high-precision interferometric optical setup are not required. The acquisition of the 3D object and the ptychographic encoding process are performed optically. Owing to the introduction of probe keys, the security of the ptychography-based hiding system is significantly enhanced. A series of experiments and simulations demonstrate the feasibility and imperceptibility of the proposed method.

  5. 3D dimeron as a stable topological object

    NASA Astrophysics Data System (ADS)

    Yang, Shijie; Liu, Yongkai

    2015-03-01

    Searching for novel topological objects is always an intriguing task for scientists in various fields. We study a new three-dimensional (3D) topological structure, called the 3D dimeron, in trapped two-component Bose-Einstein condensates. The 3D dimeron differs from the conventional 3D skyrmion in that the condensates host two interlocked vortex rings. We demonstrate that the vortex rings are connected by a singular string and that the complex constitutes a vortex molecule. The stability is investigated by numerically evolving the Gross-Pitaevskii equations with a coherent Rabi coupling between the two components. Alternatively, we find that the stable 3D dimeron can be generated naturally from a vortex-free Gaussian wave packet by incorporating a synthetic non-Abelian gauge potential into the condensates. This work is supported by the NSF of China under Grant No. 11374036 and the National 973 program under Grant No. 2012CB821403.

  6. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems at millimeter waves are required for applications in medicine, communications, homeland security, and space technology, because there is no known ionization hazard for biological tissue and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning system with a single detector. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image, and the value of the IF frequency yields the range information at each pixel. This enables 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
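
    The per-pixel range follows from the standard FMCW (chirp radar) relation between the beat (IF) frequency and range; a minimal sketch with illustrative chirp parameters (not the system's actual values):

      c = 3.0e8                    # speed of light, m/s
      B = 3.0e9                    # chirp bandwidth, Hz (illustrative)
      T = 1.0e-3                   # chirp duration, s (illustrative)

      def range_from_if(f_if):
          # Standard FMCW relation: the beat frequency measured at a pixel
          # is proportional to the target range, R = c * f_if * T / (2 * B).
          return c * f_if * T / (2.0 * B)

      print(range_from_if(200e3))  # a 200 kHz beat -> 10.0 m with these parameters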

  7. Embedding objects during 3D printing to add new functionalities.

    PubMed

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing allows objects to be integrated and embedded during printing, and FDM-based 3D printed devices do not typically require any post-processing or finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films, with and without an embedded porous membrane, and optical devices with embedded Corning® Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Like typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed, yet optical transparency is highly desirable in any fluidic device; integrated glass cover slips or polystyrene films provide a perfect optically transparent window for observation and visualization. In addition, they also provide a compatible, flat, smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous-perfusion cell culture or biocatalytic synthesis, without the need for any post-print device assembly and finishing. The 3D printed devices with embedded Corning® Fibrance™ Light-Diffusing Fiber would have uses in display, illumination, or other optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  8. 3D object recognition in TOF data sets

    NASA Astrophysics Data System (ADS)

    Hess, Holger; Albrecht, Martin; Grothof, Markus; Hussmann, Stephan; Oikonomidis, Nikolaos; Schwarte, Rudolf

    2003-08-01

    In recent years, 3D vision systems based on the Time-Of-Flight (TOF) principle have gained more importance than Stereo Vision (SV). TOF offers direct depth-data acquisition, whereas SV requires a great amount of computational power to obtain a comparable 3D data set. Due to the enormous progress in TOF techniques, 3D cameras can nowadays be manufactured and used for many practical applications. Hence there is great demand for new, accurate algorithms for 3D object recognition and classification. This paper presents a new strategy and algorithm designed for fast and solid object classification. A challenging example, the accurate classification of a (half-) sphere, demonstrates the performance of the developed algorithm. Finally, the transition from a general model of the system to specific applications such as Intelligent Airbag Control and Robot Assistance in Surgery is introduced. The paper concludes with current research results in the above-mentioned fields.

  9. Measuring the Visual Salience of 3D Printed Objects.

    PubMed

    Wang, Xi; Lindlbauer, David; Lessig, Christian; Maertens, Marianne; Alexa, Marc

    2016-01-01

    To investigate human viewing behavior on physical realizations of 3D objects, the authors use an eye tracker with scene camera and fiducial markers on 3D objects to gather fixations on the presented stimuli. They use this data to validate assumptions regarding visual saliency that so far have experimentally only been analyzed for flat stimuli. They provide a way to compare fixation sequences from different subjects and developed a model for generating test sequences of fixations unrelated to the stimuli. Their results suggest that human observers agree in their fixations for the same object under similar viewing conditions. They also developed a simple procedure to validate computational models for visual saliency of 3D objects and found that popular models of mesh saliency based on center surround patterns fail to predict fixations.

  10. A 3-D measurement system using object-oriented FORTH

    SciTech Connect

    Butterfield, K.B.

    1989-01-01

    Discussed is a system for storing 3-D measurements of points that relates the coordinate system of the measurement device to the global coordinate system. The program described here used object-oriented FORTH to store the measured points as sons of the measuring device location. Conversion of local coordinates to absolute coordinates is performed by passing messages to the point objects. Modifications to the object-oriented FORTH system are also described. 1 ref.

  11. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).

  12. Segmentation of 3D objects using live wire

    NASA Astrophysics Data System (ADS)

    Falcao, Alexandre X.; Udupa, Jayaram K.

    1997-04-01

    We have been developing user-steered image segmentation methods for situations that require considerable user assistance in object definition. In such situations, our segmentation methods aim (1) to provide the user effective control over the segmentation process while it is being executed and (2) to minimize the total user time required in the process. In the past, we have presented two paradigms, referred to as live wire and live lane, for segmenting 3D/4D object boundaries in a slice-by-slice fashion. In this paper, we introduce a 3D extension of the live wire approach which can further reduce the time spent by the user in the segmentation process. In 2D live wire, given a slice, for two specified points (pixel vertices) on the boundary of the object, the best boundary segment (as a set of oriented pixel edges) is the minimum-cost path between the two points. This segment is found via dynamic programming in real time as the user anchors the first point and moves the cursor to indicate the second point. A complete 2D boundary in this slice is identified as a set of consecutive boundary segments forming a 'closed,' 'connected,' 'oriented' contour. The strategy of the 3D extension is that users first specify contours via live-wiring on a few orthogonal slices. If these slices are selected strategically, then we have a sufficient number of points on the 3D boundary of the object to do live-wiring automatically on all axial slices of the 3D scene. Based on several validation studies involving segmentation of the bones of the foot in MR images, we found that the 3D extension of live wire is statistically significantly (p less than 0.0001) more repeatable and 2 - 6 times faster (p less than 0.01) than the 2D live wire method, and 3 - 15 times faster than manual tracing.
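
    The boundary-segment search described above is a shortest-path problem; a minimal sketch using Dijkstra's algorithm on a plain non-negative cost image (the published cost function, built from gradient and orientation terms, is richer than the single cost grid assumed here):

      import heapq
      import numpy as np

      def live_wire_path(cost, start, goal):
          # Minimum-cost 4-connected path between two pixel positions on a
          # non-negative cost image (low cost = likely object boundary).
          rows, cols = cost.shape
          dist = np.full((rows, cols), np.inf)
          prev = {}
          dist[start] = cost[start]
          heap = [(cost[start], start)]
          while heap:
              d, (r, c) = heapq.heappop(heap)
              if (r, c) == goal:
                  break
              if d > dist[r, c]:
                  continue
              for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                  if 0 <= nr < rows and 0 <= nc < cols:
                      nd = d + cost[nr, nc]
                      if nd < dist[nr, nc]:
                          dist[nr, nc] = nd
                          prev[(nr, nc)] = (r, c)
                          heapq.heappush(heap, (nd, (nr, nc)))
          # Walk the predecessor chain back from the goal to the anchor point.
          path, node = [], goal
          while node != start:
              path.append(node)
              node = prev[node]
          path.append(start)
          return path[::-1]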

  13. 3-d interpolation in object perception: evidence from an objective performance paradigm.

    PubMed

    Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana

    2005-06-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates, for a given edge, which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units.

  14. Objective breast symmetry evaluation using 3-D surface imaging.

    PubMed

    Eder, Maximilian; Waldenfels, Fee V; Swobodnik, Alexandra; Klöppel, Markus; Pape, Ann-Kathrin; Schuster, Tibor; Raith, Stefan; Kitzler, Elena; Papadopulos, Nikolaos A; Machens, Hans-Günther; Kovacs, Laszlo

    2012-04-01

    This study develops an objective breast symmetry evaluation using 3-D surface imaging (Konica-Minolta V910(®) scanner) by superimposing the mirrored left breast over the right and objectively determining the mean 3-D contour difference between the 2 breast surfaces. 3 observers analyzed the evaluation protocol precision using 2 dummy models (n = 60), 10 test subjects (n = 300), clinically tested it on 30 patients (n = 900) and compared it to established 2-D measurements on 23 breast reconstructive patients using the BCCT.core software (n = 690). Mean 3-D evaluation precision, expressed as the coefficient of variation (VC), was 3.54 ± 0.18 for all human subjects without significant intra- and inter-observer differences (p > 0.05). The 3-D breast symmetry evaluation is observer independent, significantly more precise (p < 0.001) than the BCCT.core software (VC = 6.92 ± 0.88) and may play a part in an objective surgical outcome analysis after incorporation into clinical practice.
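
    The core of the symmetry measure can be sketched as mirroring the left-breast surface, coarsely aligning it, and taking the mean nearest-point distance to the right-breast surface; the scanner-specific registration pipeline is simplified away and all names are illustrative:

      import numpy as np
      from scipy.spatial import cKDTree

      def symmetry_error(left_pts, right_pts):
          # left_pts, right_pts: N x 3 surface point sets in a patient-centred
          # frame with the x axis pointing left-right (units as acquired, e.g. mm).
          mirrored = left_pts * np.array([-1.0, 1.0, 1.0])   # reflect across x = 0
          mirrored = mirrored - mirrored.mean(0) + right_pts.mean(0)  # coarse align
          d, _ = cKDTree(right_pts).query(mirrored)          # nearest-point distances
          return d.mean()                                    # mean 3-D contour difference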

  15. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
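
    The first grouping step (isolating above-ground object points) reduces to thresholding the normalized DSM, i.e. the DSM minus the DEM; a minimal sketch (the published algorithms add tree/building separation, boundary tracing, squaring and roof construction, which are omitted here):

      import numpy as np

      def above_ground_mask(dsm, dem, min_height=2.0):
          # Object candidates are cells whose surface height exceeds the bare
          # terrain by more than `min_height` metres (the normalised DSM).
          ndsm = dsm - dem
          return ndsm > min_height

      # Connected regions of the mask are then grouped into candidate objects,
      # e.g. with a flood-fill / connected-component labelling step.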

  16. Automatic detection of artifacts in converted S3D video

    NASA Astrophysics Data System (ADS)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.

  17. Augmented Reality vs Virtual Reality for 3D Object Manipulation.

    PubMed

    Krichenbauer, Max; Yamamoto, Goshiro; Taketomi, Takafumi; Sandor, Christian; Kato, Hirokazu

    2017-01-25

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance, measured as task completion time, on a 9-degrees-of-freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion time in AR over VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5% on average compared to AR (p < 0.024). Surprisingly, a similar effect occurred when using a mouse: users were about 17.3% slower in VR than in AR (p < 0.04). Mouse and 3D input device produced similar task completion times in each condition (AR or VR) respectively. We further found no differences in reported comfort.

  18. 3-D object recognition using 2-D views.

    PubMed

    Li, Wenjing; Bebis, George; Bourbakis, Nikolaos G

    2008-11-01

    We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional, space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and we use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). The likelihood

  19. 3D Object Recognition: Symmetry and Virtual Views

    DTIC Science & Technology

    1992-12-01

    A.I. Memo No. 1409 / C.B.C.L. Paper No. 76, December 1992, MIT Artificial Intelligence Laboratory and Center for Biological and Computational Learning. Only report front-matter fragments are recoverable from this record; no abstract text is available.

  20. Research on 3-D device for infrared temperature detection

    NASA Astrophysics Data System (ADS)

    Chen, Shuxin; Jiang, Shaohua; Hou, Jie; Chen, Shuwang

    2007-12-01

    In many fields it is important to measure temperature at several positions and in several directions at the same time, yet few instruments can currently do so. To implement such measurement in three dimensions, an experimental table for infrared temperature detection is designed; it integrates detection, control, and monitoring. The infrared device on the table detects and measures temperature in real time, and a three-dimensional motorized positioning unit lets the user adjust the detection distance. The mechanical displacement bar is driven by a circuit operated with a control button, and the infrared temperature sensor is mounted on the bar so that it moves along with it. Because the temperature detection is contactless, the device can detect small objects and their tiny temperature variations, which cannot be detected by a thermometer or an electronic contact sensor. By means of the 3-D parallel motion control, the device can measure temperature in several directions, and from the measured values a 3-D temperature distribution curve can be plotted. With this detection device, the temperature of special objects can be measured, such as live anatomical animal preparations, small sensors, and objects requiring non-destructive inspection.

  1. The Visual Priming of Motion-Defined 3D Objects

    PubMed Central

    Jiang, Xiong; Jiang, Yang

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation on perceptual priming of shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a “cloudy” SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a “cloudy” SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when preceded by a static shape or a semantic word that defined a different object shape. These results suggested that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus—but not a static image or a semantic stimulus—that defined the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed. PMID:26658496

  2. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities, L-MSDA and L-MSDSM, are combined together, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.

  3. Simultaneous perimeter measurement for 3D object with a binocular stereo vision measurement system

    NASA Astrophysics Data System (ADS)

    Peng, Zhao; Guo-Qiang, Ni

    2010-04-01

    A scheme for simultaneously measuring the surface boundary perimeters of multiple three-dimensional (3D) objects is proposed. This scheme consists of three steps. First, a binocular stereo vision measurement system with two CCD cameras is devised to obtain the two images of the detected objects' 3D surface boundaries. Second, two geodesic active contours are applied to converge to the objects' contour edges simultaneously in the two CCD images to perform the stereo matching. Finally, the multiple spatial contours are reconstructed using cubic B-spline curve interpolation. The true contour length of every spatial contour is computed as the true boundary perimeter of every 3D object. An experiment measuring the bent-surface perimeters of four 3D objects indicates that the scheme's measurement repetition error is as low as 0.7 mm.
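
    As an illustration of the final step only, the following minimal sketch (hypothetical function and variable names; it assumes ordered, reconstructed 3D boundary points are already available as a NumPy array) interpolates a spatial contour with a cubic B-spline and sums segment lengths to obtain the perimeter:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def contour_perimeter(points_3d, samples=2000):
    """Estimate the length of a 3D contour by cubic B-spline interpolation.

    points_3d : (N, 3) array of ordered boundary points; for a closed contour,
    repeat the first point at the end before calling this function.
    """
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    tck, _ = splprep([x, y, z], s=0, k=3)          # interpolating cubic B-spline
    u = np.linspace(0.0, 1.0, samples)
    xs, ys, zs = splev(u, tck)
    pts = np.column_stack([xs, ys, zs])
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

# Example: a circle of radius 10 mm (true perimeter ~62.83 mm).
theta = np.linspace(0.0, 2.0 * np.pi, 51)          # closed: last point equals first
circle = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta), np.zeros_like(theta)])
print(contour_perimeter(circle))
```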

  4. Divided attention limits perception of 3-D object shapes.

    PubMed

    Scharff, Alec; Palmer, John; Moore, Cathleen M

    2013-02-12

    Can one perceive multiple object shapes at once? We tested two benchmark models of object shape perception under divided attention: an unlimited-capacity and a fixed-capacity model. Under unlimited-capacity models, shapes are analyzed independently and in parallel. Under fixed-capacity models, shapes are processed at a fixed rate (as in a serial model). To distinguish these models, we compared conditions in which observers were presented with simultaneous or sequential presentations of a fixed number of objects (The extended simultaneous-sequential method: Scharff, Palmer, & Moore, 2011a, 2011b). We used novel physical objects as stimuli, minimizing the role of semantic categorization in the task. Observers searched for a specific object among similar objects. We ensured that non-shape stimulus properties such as color and texture could not be used to complete the task. Unpredictable viewing angles were used to preclude image-matching strategies. The results rejected unlimited-capacity models for object shape perception and were consistent with the predictions of a fixed-capacity model. In contrast, a task that required observers to recognize 2-D shapes with predictable viewing angles yielded an unlimited capacity result. Further experiments ruled out alternative explanations for the capacity limit, leading us to conclude that there is a fixed-capacity limit on the ability to perceive 3-D object shapes.

  5. Image segmentation to inspect 3-D object sizes

    NASA Astrophysics Data System (ADS)

    Hsu, Jui-Pin; Fuh, Chiou-Shann

    1996-01-01

    Object size inspection is an important task with various applications in computer vision; for example, automatic control of stone-breaking machines performs better if the sizes of the stones to be broken can be predicted. An algorithm is proposed for image segmentation in size inspection of almost-round stones with high or low texture. Although our experiments focus on stones, the algorithm can be applied to other 3-D objects. We use one fixed camera and four light sources at four different positions, one at a time, to take four images. We then compute the image differences and binarize them to extract edges. We explain, step by step, the photographing, the edge extraction, the noise removal, and the edge gap filling. Experimental results are presented.
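
    A minimal sketch of the differencing-and-binarization idea is shown below; the names are hypothetical, and the four grayscale images taken under the different light positions are assumed to be pre-registered NumPy arrays:

```python
import numpy as np

def edge_map_from_light_differences(images, threshold=30):
    """Extract candidate edges by differencing images taken under different light positions.

    images : list of 2D uint8 grayscale arrays, one per light source position.
    Returns a binary edge/shadow-boundary map.
    """
    imgs = [img.astype(np.int16) for img in images]
    edge = np.zeros(imgs[0].shape, dtype=bool)
    # Pairwise absolute differences respond strongly where cast shadows move,
    # i.e. near 3-D surface discontinuities of the stones.
    for i in range(len(imgs)):
        for j in range(i + 1, len(imgs)):
            diff = np.abs(imgs[i] - imgs[j])
            edge |= diff > threshold   # binarize each difference image
    return edge
```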

  6. Fully automatic 3D digitization of unknown objects

    NASA Astrophysics Data System (ADS)

    Rozenwald, Gabriel F.; Seulin, Ralph; Fougerolle, Yohan D.

    2010-01-01

    This paper presents a complete system for 3D digitization of objects assuming no prior knowledge of their shape. The proposed methodology is applied to a digitization cell composed of a fringe projection scanner head, a robotic arm with 6 degrees of freedom (DoF), and a turntable. A two-step approach is used to automatically guide the scanning process. The first step uses the concept of Mass Vector Chains (MVC) to perform an initial scanning. The second step directs the scanner to the remaining holes of the model. Post-processing of the data is also addressed. Tests with real objects were performed, and results on digitization time and number of views are provided along with the estimated surface coverage.

  7. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
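
    As a rough sketch of the underlying idea (not the authors' implementation), each pixel's gradient direction and each projected model edge normal can be represented as a unit complex number, and a phase-similarity score can then be evaluated over all model translations with one FFT-based cross-correlation; the names and the exact similarity definition below are illustrative assumptions:

```python
import numpy as np

def phase_similarity_map(image, model_mask_normals):
    """Correlate image gradient phase with model edge-normal phase over all shifts.

    image              : 2D float array (overhead image).
    model_mask_normals : 2D complex array, exp(1j*phi) on projected model edge
                         pixels (phi = edge-normal angle), 0 elsewhere.
    Returns a real-valued match surface; peaks indicate candidate object positions.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # Unit phasors at pixels with a non-negligible gradient.
    phase = np.exp(1j * np.arctan2(gy, gx)) * (mag > 1e-3)

    # Cross-correlation via FFT: at each shift, sums cos(phi_image - phi_model)
    # over the model edge pixels.
    F_img = np.fft.fft2(phase)
    F_mod = np.fft.fft2(model_mask_normals, s=image.shape)
    corr = np.fft.ifft2(F_img * np.conj(F_mod))
    return np.real(corr)
```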

  8. Additive manufacturing. Continuous liquid interface production of 3D objects.

    PubMed

    Tumbleston, John R; Shirvanyants, David; Ermoshkin, Nikita; Janusziewicz, Rima; Johnson, Ashley R; Kelly, David; Chen, Kai; Pinschmidt, Robert; Rolland, Jason P; Ermoshkin, Alexander; Samulski, Edward T; DeSimone, Joseph M

    2015-03-20

    Additive manufacturing processes such as 3D printing use time-consuming, stepwise layer-by-layer approaches to object fabrication. We demonstrate the continuous generation of monolithic polymeric parts up to tens of centimeters in size with feature resolution below 100 micrometers. Continuous liquid interface production is achieved with an oxygen-permeable window below the ultraviolet image projection plane, which creates a "dead zone" (persistent liquid interface) where photopolymerization is inhibited between the window and the polymerizing part. We delineate critical control parameters and show that complex solid parts can be drawn out of the resin at rates of hundreds of millimeters per hour. These print speeds allow parts to be produced in minutes instead of hours.

  9. Optical 3D sensor for large objects in industrial application

    NASA Astrophysics Data System (ADS)

    Kuhmstedt, Peter; Heinze, Matthias; Himmelreich, Michael; Brauer-Burchardt, Christian; Brakhage, Peter; Notni, Gunther

    2005-06-01

    A new self-calibrating optical 3D measurement system using the fringe projection technique, named "kolibri 1500", is presented. It can be utilised to acquire the all-around shape of large objects. The basic measuring principle is the phasogrammetric approach introduced by the authors /1, 2/. The "kolibri 1500" consists of a stationary system with a translation unit for handling of objects. Automatic whole-body measurement is achieved by using sensor head rotation and a changeable object position, which can be done completely under computer control. Multi-view measurement is realised by using the concept of virtual reference points. In this way no matching procedures or markers are necessary for the registration of the different images. This makes the system very flexible for realising different measurement tasks. Furthermore, due to the self-calibrating principle, mechanical alterations are compensated. Typical parameters of the system are: the measurement volume extends from 400 mm up to 1500 mm maximum length, the measurement time is between 2 min for 12 images and 20 min for 36 images, and the measurement accuracy is below 50 μm. This flexibility makes the measurement system useful for a wide range of applications such as quality control, rapid prototyping, design and CAD/CAM, which will be shown in the paper.

  10. 3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana

    2005-01-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…

  11. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  12. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.

  13. Prediction models from CAD models of 3D objects

    NASA Astrophysics Data System (ADS)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.

  14. Speckle size of light scattered from 3D rough objects.

    PubMed

    Zhang, Geng; Wu, Zhensen; Li, Yanhui

    2012-02-13

    From the scalar Helmholtz integral relation and by a coordinate-system transformation, this paper first derives the far-zone speckle field in the observation plane perpendicular to the scattering direction for an arbitrarily shaped conducting rough object illuminated by a plane wave, and then derives the spatial correlation function of the speckle intensity to obtain the speckle size. Specific expressions for the size of speckle backscattered from spheres, cylinders, and cones are obtained, showing that the speckle size along a given direction in the observation plane is proportional to the incident wavelength and to the distance between the object and the observation plane, and inversely proportional to the maximal illuminated dimension of the object parallel to that direction. In addition, rough objects of different shapes produce speckle patterns of different shapes. The investigation of speckle size presented here will be useful for studying the statistical properties of speckle from complicated rough objects and for speckle imaging in target detection and identification.
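
    As a rough quantitative illustration of the stated proportionalities (the exact prefactor depends on the object shape and on the derivation in the paper, so this is a scaling relation only), the mean speckle size along a direction x in the observation plane behaves as

```latex
s_x \;\propto\; \frac{\lambda \, z}{L_x}
```

    where lambda is the incident wavelength, z the distance between the object and the observation plane, and L_x the maximal illuminated dimension of the object parallel to that direction.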

  15. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    PubMed

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently modern imaging techniques such as 3D surface scanning and radiological methods (computer tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expertises are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya') and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image.

  16. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction.

    PubMed

    Sierra, Heidy; Brooks, Dana; DiMarzio, Charles

    2010-01-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.

  17. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction

    NASA Astrophysics Data System (ADS)

    Sierra, Heidy; Brooks, Dana; Dimarzio, Charles

    2010-07-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.

  18. Learning 3D Object Templates by Quantizing Geometry and Appearance Spaces.

    PubMed

    Hu, Wenze; Zhu, Song-Chun

    2015-06-01

    While 3D object-centered shape-based models are appealing in comparison with 2D viewer-centered appearance-based models for their lower model complexities and potentially better view generalizabilities, the learning and inference of 3D models has been much less studied in the recent literature due to two factors: i) the enormous complexities of 3D shapes in geometric space; and ii) the gap between 3D shapes and their appearances in images. This paper aims at tackling the two problems by studying an And-Or Tree (AoT) representation that consists of two parts: i) a geometry-AoT quantizing the geometry space, i.e. the possible compositions of 3D volumetric parts and 2D surfaces within the volumes; and ii) an appearance-AoT quantizing the appearance space, i.e. the appearance variations of those shapes in different views. In this AoT, an And-node decomposes an entity into constituent parts, and an Or-node represents alternative ways of decomposition. Thus it can express a combinatorial number of geometry and appearance configurations through small dictionaries of 3D shape primitives and 2D image primitives. In the quantized space, the problem of learning a 3D object template is transformed to a structure search problem which can be efficiently solved in a dynamic programming algorithm by maximizing the information gain. We focus on learning 3D car templates from the AoT and collect a new car dataset featuring more diverse views. The learned car templates integrate both the shape-based model and the appearance-based model to combine the benefits of both. In experiments, we show three aspects: 1) the AoT is more efficient than the frequently used octree method in space representation; 2) the learned 3D car template matches state-of-the-art performance on car detection and pose estimation in a public multi-view car dataset; and 3) in our new dataset, the learned 3D template solves the joint task of simultaneous object detection, pose/view estimation, and part

  19. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, user and state recognition of the reconstructed 3D human representation compared to animated computer avatars.

  20. 3D genome structure modeling by Lorentzian objective function.

    PubMed

    Trieu, Tuan; Cheng, Jianlin

    2016-11-29

    The 3D structure of the genome plays a vital role in biological processes such as gene interaction, gene regulation, DNA replication and genome methylation. Advanced chromosomal conformation capture techniques, such as Hi-C and tethered conformation capture, can generate chromosomal contact data that can be used to computationally reconstruct 3D structures of the genome. We developed a novel restraint-based method that is capable of reconstructing 3D genome structures utilizing both intra- and inter-chromosomal contact data. Our method was robust to noise and performed well in comparison with a panel of existing methods on a controlled simulated data set. On a real Hi-C data set of the human genome, our method produced chromosome and genome structures that are consistent with 3D FISH data and known knowledge about the human chromosome and genome, such as chromosome territories and the clustering of small chromosomes in the nucleus center, with the exception of chromosome 18. The tool and experimental data are available at https://missouri.box.com/v/LorDG.
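
    As a hedged illustration of a Lorentzian-style objective for restraint-based structure fitting (an illustration of the general idea only, not the LorDG implementation; names are hypothetical), each restraint contributes a bounded term, so badly violated, noisy contacts cannot dominate the score:

```python
import numpy as np

def lorentzian_objective(coords, pairs, target_dists, scale=1.0):
    """Lorentzian-style agreement score between a 3D structure and distance restraints.

    coords       : (N, 3) array of 3D coordinates of genomic loci.
    pairs        : (M, 2) integer array of restrained locus index pairs.
    target_dists : (M,) array of target distances derived from Hi-C contact counts.
    Larger is better; each term is bounded, so outlier restraints cannot dominate.
    """
    d = np.linalg.norm(coords[pairs[:, 0]] - coords[pairs[:, 1]], axis=1)
    residual = d - target_dists
    return float(np.sum(scale**2 / (scale**2 + residual**2)))

# A gradient-based optimizer (e.g. scipy.optimize.minimize on the negative score)
# could then adjust `coords` to satisfy both intra- and inter-chromosomal restraints.
```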

  1. Polygonal Shapes Detection in 3d Models of Complex Architectures

    NASA Astrophysics Data System (ADS)

    Benciolini, G. B.; Vitti, A.

    2015-02-01

    A sequential application of two global models defined in a variational framework is proposed for the detection of polygonal shapes in 3D models of complex architectures. As a first step, the procedure uses the Mumford and Shah (1989) 1st-order variational model in dimension two (gridded height data are processed). In the Mumford-Shah model an auxiliary function detects the sharp changes, i.e., the discontinuities, of a piecewise smooth approximation of the data. The Mumford-Shah model requires the global minimization of a specific functional to simultaneously produce both the smooth approximation and its discontinuities. In the proposed procedure, the edges of the smooth approximation, derived by a specific processing of the auxiliary function, are then processed using the Blake and Zisserman (1987) 2nd-order variational model in dimension one (edges are processed in the plane). This second step makes it possible to describe the edges of an object by a piecewise almost-linear approximation of the input edges and to detect sharp changes of the first derivative of the edges, so as to detect corners. The Mumford-Shah variational model is used in two dimensions, accepting the original data as primary input. The Blake-Zisserman variational model is used in one dimension for refining the description of the edges. The selection, among all the boundaries detected by the Mumford-Shah model, of those that present a shape close to a polygon is performed by considering only the boundaries for which the Blake-Zisserman model identified discontinuities in their first derivative. The outputs of the procedure are hence shapes, coming from 3D geometric data, that can be considered as polygons. The procedure is suitable for, but not limited to, the detection of objects such as footprints of polygonal buildings, building facade boundaries, or window contours. The procedure is applied to a height model of the building of the Engineering

  2. 3D shape shearography with integrated structured light projection for strain inspection of curved objects

    NASA Astrophysics Data System (ADS)

    Anisimov, Andrei G.; Groves, Roger M.

    2015-05-01

    Shearography (speckle pattern shearing interferometry) is a non-destructive testing technique that provides full-field surface strain characterization. Although real-life objects, especially in aerospace, transport, or cultural heritage, are not flat (e.g. aircraft leading edges or sculptures), their inspection with shearography is of interest for both hidden defect detection and material characterization. Accurate strain measurement of a highly curved or free-form surface needs to be performed by combining in-line object shape measurement with processing of shearography data in 3D. Previous research has not provided a general solution. This research is devoted to the practical questions of 3D shape shearography system development for surface strain characterization of curved objects. The complete procedure of calibration and data processing of a 3D shape shearography system with an integrated structured light projector is presented. This includes an estimation of the actual shear distance and a sensitivity matrix correction within the system field of view. For the experimental part a 3D shape shearography system prototype was developed. It employs three spatially-distributed shearing cameras, with Michelson interferometers acting as the shearing devices, one illumination laser source and a structured light projector. The developed system performance was evaluated with a previously reported cylinder specimen (length 400 mm, external diameter 190 mm) loaded by internal pressure. Further steps for the 3D shape shearography prototype and the technique development are also proposed.

  3. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Mostafavi, Mir Abolfazl; Wang, Chen

    2016-06-01

    Topological relations are fundamental for the qualitative description, querying, and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging, and they cannot be applied directly to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by a Boundary Representation (B-Rep) model in R3. We propose a new dimension-extended 9-intersection model to represent the basic relations among components of a complex object, including disjoint, meet, and intersect. The last element of the 3x3 matrix records the details of connection through the common parts of two regions and the intersection line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of the topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify the topological relations of planar segments of a point cloud automatically.

  4. Holographic imaging of 3D objects on dichromated polymer systems

    NASA Astrophysics Data System (ADS)

    Lemelin, Guylain; Jourdain, Anne; Manivannan, Gurusamy; Lessard, Roger A.

    1996-01-01

    Conventional volume transmission holograms of a 3D scene were recorded on dichromated poly(acrylic acid) (DCPAA) films under 488 nm light. The holographic characterization and the quality of reconstruction have been studied by varying influencing parameters such as the concentrations of dichromate and electron donor and the molecular weight of the polymer matrix. Ammonium and potassium dichromate have been employed to sensitize the poly(acrylic acid) matrix. The recorded hologram can be efficiently reconstructed either with red light or with low energy in the blue region, without any post-thermal or chemical processing.

  5. Object-oriented urban 3D spatial data model organization method

    NASA Astrophysics Data System (ADS)

    Li, Jing-wen; Li, Wen-qing; Lv, Nan; Su, Tao

    2015-12-01

    This paper combines a 3D data model with an object-oriented organization method and puts forward an object-oriented 3D data model. The model allows a city 3D model, together with its logical semantic expression, to be built quickly, and it solves the problems of representing the same location with multiple properties and the same property at multiple locations in city 3D spatial information. The spatial object structures of point, line, polygon, and body are designed for a city 3D spatial database, providing a new approach to the modeling and organizational management of city 3D GIS.
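
    A toy sketch (hypothetical class names) of the point/line/polygon/body object structure described above, in which a location's geometry is decoupled from its properties so that one location can carry several properties and one property can apply to several objects:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Point3D:
    x: float
    y: float
    z: float

@dataclass
class SpatialObject:
    # One location may carry many properties; one property may apply to many objects.
    properties: Dict[str, str] = field(default_factory=dict)

@dataclass
class Line3D(SpatialObject):
    vertices: List[Point3D] = field(default_factory=list)

@dataclass
class Polygon3D(SpatialObject):
    boundary: List[Point3D] = field(default_factory=list)

@dataclass
class Body3D(SpatialObject):
    faces: List[Polygon3D] = field(default_factory=list)

# Example: a building body whose footprint polygon carries two properties.
footprint = Polygon3D(properties={"land_use": "residential", "owner": "city"},
                      boundary=[Point3D(0, 0, 0), Point3D(10, 0, 0),
                                Point3D(10, 8, 0), Point3D(0, 8, 0)])
building = Body3D(properties={"name": "Block A"}, faces=[footprint])
```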

  6. A Taxonomy of 3D Occluded Objects Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh

    2016-03-01

    The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and the accompanying increase in Internet usage requires object recognition for certain applications, particularly for occluded objects. However, occlusion is still an unhandled issue that entangles the relations between feature points extracted from an image, and research is ongoing to develop efficient techniques and easy-to-use algorithms that would help users source images and overcome the problems and issues caused by occlusion. The aim of this research is to review algorithms for recognizing occluded objects and to identify their pros and cons in solving the occlusion problem, in which features extracted from an occluded object are used to distinguish it from other co-existing objects, and to determine new techniques that can differentiate the occluded fragments and sections inside an image.

  7. Saliency detection for videos using 3D FFT local spectra

    NASA Astrophysics Data System (ADS)

    Long, Zhiling; AlRegib, Ghassan

    2015-03-01

    Bottom-up spatio-temporal saliency detection identifies perceptually important regions of interest in video sequences. The center-surround model proves to be useful for visual saliency detection. In this work, we explore using 3D FFT local spectra as features for saliency detection within the center-surround framework. We develop a spectral location based decomposition scheme to divide a 3D FFT cube into two components, one related to temporal changes and the other related to spatial changes. Temporal saliency and spatial saliency are detected separately using features derived from each spectral component through a simple center-surround comparison method. The two detection results are then combined to yield a saliency map. We apply the same detection algorithm to different color channels (YIQ) and incorporate the results into the final saliency determination. The proposed technique is tested with the public CRCNS database. Both visual and numerical evaluations verify the promising performance of our technique.
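
    A compact sketch of the center-surround comparison using 3D FFT magnitude spectra of local spatio-temporal cubes is given below; the function names are hypothetical, and the actual decomposition into temporal and spatial spectral components described in the paper is more involved than shown here:

```python
import numpy as np

def spec_feature(cube):
    """Normalized 3D FFT magnitude spectrum of a (t, y, x) video cube."""
    spec = np.abs(np.fft.fftn(cube)).ravel()
    return spec / (np.linalg.norm(spec) + 1e-8)

def center_surround_saliency(video, t, y, x, size=8):
    """Saliency at (t, y, x) as the dissimilarity between a center cube's spectrum
    and the average spectrum of its spatial neighbours (same cube size).
    Assumes the location is far enough from the video borders."""
    h = size // 2
    center = spec_feature(video[t-h:t+h, y-h:y+h, x-h:x+h])
    offsets = [(-size, 0), (size, 0), (0, -size), (0, size)]
    surround = [spec_feature(video[t-h:t+h, y+dy-h:y+dy+h, x+dx-h:x+dx+h])
                for dy, dx in offsets]
    surround = np.mean(surround, axis=0)
    # Higher dissimilarity (1 - cosine similarity) means more salient.
    return 1.0 - float(np.dot(center, surround))
```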

  8. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  9. 3D surface configuration modulates 2D symmetry detection.

    PubMed

    Chen, Chien-Chung; Sio, Lok-Teng

    2015-02-01

    We investigated whether three-dimensional (3D) information in a scene can affect symmetry detection. The stimuli were random dot patterns with 15% dot density. We measured the coherence threshold, or the proportion of dots that were the mirror reflection of the other dots in the other half of the image about a central vertical axis, at 75% accuracy with a 2AFC paradigm under various 3D configurations produced by the disparity between the left and right eye images. The results showed that symmetry detection was difficult when the corresponding dots across the symmetry axis were on different frontoparallel or inclined planes. However, this effect was not due to a difference in distance, as the observers could detect symmetry on a slanted surface, where the depth of the two sides of the symmetric axis was different. The threshold was reduced for a hinge configuration where the join of two slanted surfaces coincided with the axis of symmetry. Our result suggests that the detection of two-dimensional (2D) symmetry patterns is subject to the 3D configuration of the scene; and that coplanarity across the symmetry axis and consistency between the 2D pattern and 3D structure are important factors for symmetry detection.

  10. 3-D Object Pose Determination Using Complex EGI

    DTIC Science & Technology

    1990-10-01

    Only figure captions and table-of-contents fragments are recoverable from this record: the Complex EGI (CEGI) approach discretizes the normal direction space into 240 cells using a tessellated pentakis dodecahedron, and composite objects are used for testing; no abstract text is available.

  11. Recognition of 3D objects for autonomous mobile robot's navigation in automated shipbuilding

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Cho, Hyungsuck

    2007-10-01

    Nowadays many parts of the shipbuilding process are automated, but the painting process is not, because of the difficulty of automated on-line painting quality measurement, the harsh painting environment, and the difficulty of robot navigation. However, painting automation is necessary because it can provide consistent painting film thickness. Furthermore, autonomous mobile robots are strongly required for flexible painting work. The main problem of autonomous mobile robot navigation is that there are many obstacles that are not represented in the CAD data. To overcome this problem, obstacle detection and recognition are necessary to avoid obstacles and to paint effectively. Many object recognition algorithms have been studied to date; in particular, 2D object recognition methods using intensity images have been widely studied. However, in our case no environmental illumination exists, so these methods cannot be used. Instead, 3D range data must be used, but the problems with 3D range data are the high computational cost and the long recognition time caused by huge databases. In this paper, we propose a 3D object recognition algorithm based on PCA (Principal Component Analysis) and NN (Neural Network). The novelty of the algorithm is that the measured 3D range data are transformed into intensity information, after which the PCA and NN algorithms are applied to the transformed intensity information; this reduces the processing time and makes the data easier to handle, which are the main drawbacks of previous 3D object recognition approaches. A set of experimental results is shown to verify the effectiveness of the proposed algorithm.
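
    A hedged sketch of the general pipeline (range image mapped to an intensity-like image, PCA for dimensionality reduction, then a neural-network classifier) follows; scikit-learn is used purely for illustration and the data are synthetic placeholders, so this is not the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def range_to_intensity(range_image, max_range=10.0):
    """Map a range (depth) image to a normalized intensity-like image in [0, 1]."""
    return np.clip(range_image, 0.0, max_range) / max_range

# Hypothetical training data: flattened intensity images of known obstacle classes.
rng = np.random.default_rng(0)
X_train = rng.random((200, 32 * 32))          # placeholder for real sensor data
y_train = rng.integers(0, 4, size=200)        # placeholder class labels

# PCA compresses the image vectors before the neural-network classifier.
model = make_pipeline(PCA(n_components=20),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000))
model.fit(X_train, y_train)
print(model.predict(X_train[:5]))
```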

  12. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
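
    A minimal sketch of the height-histogram ground estimation is shown below (hypothetical names; the Gibbs-Markov random field refinement used in the paper is omitted):

```python
import numpy as np

def estimate_ground_range(heights, bin_size=0.1, band=0.3):
    """Estimate the ground height range from a 1D array of point heights.

    The ground is assumed to produce the dominant peak in the height histogram;
    a band around that peak is returned as the ground height range.
    """
    bins = np.arange(heights.min(), heights.max() + bin_size, bin_size)
    hist, edges = np.histogram(heights, bins=bins)
    peak = np.argmax(hist)
    ground_h = 0.5 * (edges[peak] + edges[peak + 1])
    return ground_h - band, ground_h + band

def segment_non_ground(points):
    """Label points outside the estimated ground height range as non-ground.

    points : (N, 3) array of voxel/point coordinates with height in column 2.
    """
    lo, hi = estimate_ground_range(points[:, 2])
    return (points[:, 2] < lo) | (points[:, 2] > hi)
```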

  13. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454

  14. New neural-networks-based 3D object recognition system

    NASA Astrophysics Data System (ADS)

    Abolmaesumi, Purang; Jahed, M.

    1997-09-01

    Three-dimensional object recognition has always been one of the challenging fields in computer vision. Ullman and Basri (1991) proposed that this task can be done by using a database of 2-D views of the objects. The main problem in their proposed system is that the corresponding points must be known in order to interpolate the views. In addition, their system requires a supervisor to decide which class the presented view belongs to. In this paper, we propose a new momentum-Fourier descriptor that is invariant to scale, translation, and rotation. This descriptor provides the input feature vectors to our proposed system. By using the Dystal network, we show that the objects can be classified with over 95% precision. We have used this system to classify objects such as cubes, cones, spheres, tori, and cylinders. Because of the nature of the Dystal network, the system reaches its stable point after a single presentation of a view. The system can also group similar views into a single class (e.g., for the cube, the system generated 9 different classes for 50 different input views), which can be used to select an optimum database of training views. The system is also robust to noise and deformed views.

  15. Aerial obstacle detection with 3-D mobile devices.

    PubMed

    Sáez, Juan Manuel; Escolano, Francisco; Lozano, Miguel Angel

    2015-01-01

    In this paper, we present a novel approach for aerial obstacle detection (e.g., branches or awnings) using a 3-D smartphone in the context of assistance for visually impaired (VI) people. This kind of obstacle is especially challenging because it cannot be detected by the walking stick or the guide dog. The algorithm captures the 3-D data of the scene through stereo vision. To our knowledge, this is the first work that presents a technology able to obtain real 3-D measures with smartphones in real time. The orientation sensors of the device (magnetometer and accelerometer) are used to approximate the walking direction of the user, in order to look for obstacles only in that direction. The obtained 3-D data are compressed and then linearized for detecting the potential obstacles. Potential obstacles are tracked in order to accumulate enough evidence to alert the user only when a real obstacle is found. In the experimental section, we show the results of the algorithm in several situations using real data and with the help of VI users.

  16. Recognizing 3-D Objects Using 2-D Images

    DTIC Science & Technology

    1993-05-01

    Only report front-matter and table-of-contents fragments are recoverable from this record (Office of Naval Research contracts N00014-91-J-4038 and N00014-85-K-0124, Army contract DACA76-85-C-0010); no abstract text is available.

  17. A primitive-based 3D object recognition system

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    An intermediate-level knowledge-based system for decomposing segmented data into three-dimensional primitives was developed to create an approximate three-dimensional description of the real world scene from a single two-dimensional perspective view. A knowledge-based approach was also developed for high-level primitive-based matching of three-dimensional objects. Both the intermediate-level decomposition and the high-level interpretation are based on the structural and relational matching; moreover, they are implemented in a frame-based environment.

  18. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. The reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
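
    For readers unfamiliar with ART, a minimal Kaczmarz-style iteration in generic notation is sketched below; it is not the simulator's C++ code:

```python
import numpy as np

def art_reconstruct(A, b, n_iters=10, relax=0.5):
    """Algebraic Reconstruction Technique (Kaczmarz iterations).

    A : (M, N) system matrix mapping the N-voxel image x to projection measurements.
    b : (M,) measured projection data.
    Returns an estimate of the image x.
    """
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)   # squared row norms
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            # Project x towards the hyperplane defined by the i-th measurement.
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```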

  19. Towards real-time change detection in videos based on existing 3D models

    NASA Astrophysics Data System (ADS)

    Ruf, Boitumelo; Schuchert, Tobias

    2016-10-01

    Image-based change detection is of great importance for security applications, such as surveillance and reconnaissance, in order to find new, modified or removed objects. Such change detection can generally be performed by co-registration and comparison of two or more images. However, existing 3D objects, such as buildings, may lead to parallax artifacts in case of inaccurate or missing 3D information, which may distort the results in the image comparison process, especially when the images are acquired from aerial platforms like small unmanned aerial vehicles (UAVs). Furthermore, considering only intensity information may lead to failures in detection of changes in the 3D structure of objects. To overcome this problem, we present an approach that uses Structure-from-Motion (SfM) to compute depth information, with which a 3D change detection can be performed against an existing 3D model. Our approach is capable of performing the change detection in real time. We use the input frames with the corresponding camera poses to compute dense depth maps by an image-based depth estimation algorithm. Additionally, we synthesize a second set of depth maps by rendering the existing 3D model from the same camera poses as those of the image-based depth maps. The actual change detection is performed by comparing the two sets of depth maps with each other. Our method is evaluated on synthetic test data with corresponding ground truth as well as on real image test data.
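
    A minimal sketch of the per-pixel comparison between the image-based depth map and the depth map rendered from the existing 3D model is given below; the threshold and names are illustrative assumptions:

```python
import numpy as np

def depth_change_mask(depth_estimated, depth_rendered, rel_threshold=0.05):
    """Flag pixels whose estimated depth deviates from the model-rendered depth.

    depth_estimated : 2D array from image-based (SfM) depth estimation.
    depth_rendered  : 2D array rendered from the existing 3D model at the same pose.
    A relative threshold makes the test less sensitive to absolute scene depth.
    """
    valid = (depth_estimated > 0) & (depth_rendered > 0)
    rel_diff = np.zeros(depth_estimated.shape, dtype=float)
    rel_diff[valid] = np.abs(depth_estimated[valid] - depth_rendered[valid]) / depth_rendered[valid]
    return valid & (rel_diff > rel_threshold)
```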

  20. Blind robust watermarking schemes for copyright protection of 3D mesh objects.

    PubMed

    Zafeiriou, Stefanos; Tefas, Anastasios; Pitas, Ioannis

    2005-01-01

    In this paper, two novel methods suitable for blind 3D mesh object watermarking applications are proposed. The first method is robust against 3D rotation, translation, and uniform scaling. The second one is robust against both geometric and mesh simplification attacks. A pseudorandom watermarking signal is cast in the 3D mesh object by deforming its vertices geometrically, without altering the vertex topology. Prior to watermark embedding and detection, the object is rotated and translated so that its center of mass and its principal component coincide with the origin and the z-axis of the Cartesian coordinate system. This geometrical transformation ensures watermark robustness to translation and rotation. Robustness to uniform scaling is achieved by restricting the vertex deformations to occur only along the r coordinate of the corresponding (r, theta, phi) spherical coordinate system. In the first method, a set of vertices that correspond to specific angles theta is used for watermark embedding. In the second method, the samples of the watermark sequence are embedded in a set of vertices that correspond to a range of angles in the theta domain in order to achieve robustness against mesh simplifications. Experimental results indicate the ability of the proposed method to deal with the aforementioned attacks.
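
    A simplified sketch of the r-only embedding idea follows; it omits the PCA-based pose normalization and the theta-range grouping of the actual methods, and the names and embedding strength are assumptions:

```python
import numpy as np

def embed_watermark_radial(vertices, watermark, strength=0.002):
    """Perturb only the radial coordinate r of mesh vertices to embed a watermark.

    vertices  : (N, 3) array, assumed already centered and aligned with the z-axis.
    watermark : (N,) array of +/-1 pseudorandom watermark samples, one per vertex.
    Restricting the deformation to r keeps the embedding compatible with
    detection on normalized radii, i.e. robust to uniform scaling.
    """
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))
    phi = np.arctan2(y, x)
    r_marked = r * (1.0 + strength * watermark)
    # Convert back to Cartesian coordinates with the same (theta, phi).
    return np.column_stack([r_marked * np.sin(theta) * np.cos(phi),
                            r_marked * np.sin(theta) * np.sin(phi),
                            r_marked * np.cos(theta)])
```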

  1. OB3D, a new set of 3D objects available for research: a web-based study

    PubMed Central

    Buffat, Stéphane; Chastres, Véronique; Bichot, Alain; Rider, Delphine; Benmussa, Frédéric; Lorenceau, Jean

    2014-01-01

    Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available sets, which cannot always be modified and adapted to meet the specific goals of each study. We here present a new set of 3D scans of real objects available on-line as ASCII files, OB3D. These files are lists of dots, each defined by a triplet of spatial coordinates and its normal, allowing simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lateral Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc. PMID:25339920

  2. Reducing Non-Uniqueness in Satellite Gravity Inversion using 3D Object Oriented Image Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2013-12-01

    Non-uniqueness of satellite gravity interpretation has usually been reduced by using a priori information from various sources, e.g. seismic tomography models. The reduction in non-uniqueness has been based on velocity-density conversion formulas or on user interpretation of 3D subsurface structures (objects) in seismic tomography models. However, these processes introduce additional uncertainty through the conversion relations, due to their dependency on other physical parameters such as temperature and pressure, or through bias in the interpretation due to user choices and experience. In this research, a new methodology is introduced to extract the 3D subsurface structures from 3D geophysical data using a state-of-the-art 3D Object Oriented Image Analysis (OOA) technique. 3D OOA is tested using a set of synthetic models that simulate the real situation in the study area of this research. Then, 3D OOA is used to extract 3D subsurface objects from a real 3D seismic tomography model. The extracted 3D objects are used to reconstruct a forward model, and its response is compared with the measured satellite gravity. Finally, the result of the forward modelling, based on the extracted 3D objects, is used to constrain the inversion process of satellite gravity data. Through this work, a new object-based approach is introduced to interpret and extract the 3D subsurface objects from 3D geophysical data. This can be used to constrain modelling and inversion of potential field data using the 3D subsurface structures extracted from other methods. In summary, a new approach is introduced to constrain inversion of satellite gravity measurements and enhance interpretation capabilities.

  3. Combining depth and gray images for fast 3D object recognition

    NASA Astrophysics Data System (ADS)

    Pan, Wang; Zhu, Feng; Hao, Yingming

    2016-10-01

    Reliable and stable visual perception systems are needed for humanoid robotic assistants to perform complex grasping and manipulation tasks; recognition of the object and its precise 6D pose is required. This paper addresses the challenge of detecting and positioning a textureless known object by estimating its complete 6D pose in cluttered scenes. A 3D perception system is proposed which can robustly recognize CAD models in cluttered scenes for the purpose of grasping with a mobile manipulator. Our approach uses a powerful combination of two different camera technologies, Time-Of-Flight (TOF) and RGB, to segment the scene and extract objects. The depth image and the gray image are combined to recognize instances of a 3D object in the world and estimate their 3D poses. The full pose estimation process is based on depth image segmentation and an efficient shape-based matching. First, the depth image is used to separate the supporting plane of the objects from the cluttered background; thus, cluttered backgrounds are circumvented and the search space is drastically reduced. A hierarchical model based on the geometry of an a priori CAD model of the object is generated in an offline stage. Then, using this hierarchical model, we perform shape-based matching in the 2D gray images. Finally, we validate the proposed method in a number of experiments. The results show that utilizing depth and gray images together can meet the demands of a time-critical application and significantly reduce the error rate of object recognition.
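
    The first processing step, separating the supporting plane from the clutter in the depth data, can be illustrated with a generic RANSAC plane fit; the snippet below is a minimal sketch under assumed parameter values, not the authors' implementation.

        import numpy as np

        def remove_support_plane(points, n_iters=500, dist_thresh=0.005, seed=0):
            """Fit the dominant plane in a depth-derived point cloud with RANSAC and
            drop its inliers, so only the objects standing on the supporting surface
            remain. Parameter values are illustrative, not from the paper."""
            rng = np.random.default_rng(seed)
            best_inliers = np.zeros(len(points), dtype=bool)
            for _ in range(n_iters):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(n)
                if norm < 1e-9:
                    continue                       # degenerate sample, skip
                n = n / norm
                dist = np.abs((points - p0) @ n)   # point-to-plane distances
                inliers = dist < dist_thresh
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            return points[~best_inliers]           # candidate object points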

  4. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the acquired data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and the potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.

  5. Enhanced Visual-Attention Model for Perceptually Improved 3D Object Modeling in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Chagnon-Forget, Maude; Rouhafzay, Ghazal; Cretu, Ana-Maria; Bouchard, Stéphane

    2016-12-01

    Three-dimensional object modeling and interactive virtual environment applications require accurate, but compact object models that ensure real-time rendering capabilities. In this context, the paper proposes a 3D modeling framework employing visual attention characteristics in order to obtain compact models that are more adapted to human visual capabilities. An enhanced computational visual attention model with additional saliency channels, such as curvature, symmetry, contrast and entropy, is initially employed to detect points of interest over the surface of a 3D object. The impact of the use of these supplementary channels is experimentally evaluated. The regions identified as salient by the visual attention model are preserved in a selectively-simplified model obtained using an adapted version of the QSlim algorithm. The resulting model is characterized by a higher density of points in the salient regions, therefore ensuring a higher perceived quality, while at the same time ensuring a less complex and more compact representation for the object. The quality of the resulting models is compared with the performance of other interest point detectors incorporated in a similar manner in the simplification algorithm. The proposed solution results overall in higher quality models, especially at lower resolutions. As an example of application, the selectively-densified models are included in a continuous multiple level of detail (LOD) modeling framework, in which an original neural-network solution selects the appropriate size and resolution of an object.

  6. Fast and flexible 3D object recognition solutions for machine vision applications

    NASA Astrophysics Data System (ADS)

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
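
    A least-squares plane fit is one of the primitive best-fitting operations such a recognizer relies on; the sketch below shows a generic SVD-based version under the assumption of a roughly planar 3D point patch (the paper's own fitting routines are not reproduced).

        import numpy as np

        def best_fit_plane(points):
            """Least-squares plane fit to a 3D point set.

            Returns (centroid, unit normal): the plane passes through the centroid
            and its normal is the right singular vector belonging to the smallest
            singular value."""
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            normal = vt[-1]
            return centroid, normal

        # Residuals of the fit indicate how well a planar primitive explains the patch:
        # residuals = (points - centroid) @ normal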

  7. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the positions of local minima at four different scales and produces a final map of the positions of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.

  8. Whole versus Part Presentations of the Interactive 3D Graphics Learning Objects

    ERIC Educational Resources Information Center

    Azmy, Nabil Gad; Ismaeel, Dina Ahmed

    2010-01-01

    The purpose of this study is to present an analysis of how the structure and design of the Interactive 3D Graphics Learning Objects can be effective and efficient in terms of Performance, Time on task, and Learning Efficiency. The study explored two treatments, namely whole versus Part Presentations of the Interactive 3D Graphics Learning Objects,…

  9. 3D objects enlargement technique using an optical system and multiple SLMs for electronic holography.

    PubMed

    Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori; Oi, Ryutaro; Kurita, Taiichiro

    2012-09-10

    One problem in electronic holography, caused by the display performance of spatial light modulators (SLMs), is that the size of reconstructed 3D objects is small. Although methods for increasing the size using multiple SLMs have been considered, they typically suffered either from parts of the 3D objects being missing as a result of the gaps between adjacent SLMs, or from loss of the vertical parallax. This paper proposes a method of resolving this problem by placing an optical system containing a lens array and other components in front of multiple SLMs. We used an optical system and 9 SLMs to construct a device equivalent to an SLM with approximately 74,600,000 pixels and used this to reconstruct 3D objects with both horizontal and vertical parallax, with an image size of 63 mm, without losing any part of the 3D objects.

  10. Non-destructive 3D shape measurement of transparent and black objects with thermal fringes

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Rößler, Conrad; Dietrich, Patrick; Heist, Stefan; Kühmstedt, Peter; Notni, Gunther

    2016-05-01

    Fringe projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. Typically, fringe sequences in the visible wavelength range (VIS) are projected onto the surfaces of objects to be measured and are observed by two cameras in a stereo vision setup. The reconstruction is done by finding corresponding pixels in both cameras followed by triangulation. Problems can occur if the properties of some materials disturb the measurements. If the objects are transparent, translucent, reflective, or strongly absorbing in the VIS range, the projected patterns cannot be recorded properly. To overcome these challenges, we present a new alternative approach in the infrared (IR) region of the electromagnetic spectrum. For this purpose, two long-wavelength infrared (LWIR) cameras (7.5 - 13 μm) are used to detect the emitted heat radiation from surfaces which is induced by a pattern projection unit driven by a CO2 laser (10.6 μm). Thus, materials like glass or black objects, e.g. carbon fiber materials, can be measured non-destructively without the need for any additional painting. We will demonstrate the basic principles of this heat pattern approach and show two types of 3D systems based on a freeform mirror and a GOBO wheel (GOes Before Optics) projector unit.

  11. 3D facial landmark detection under large yaw and expression variations.

    PubMed

    Perakis, Panagiotis; Passalis, Georgios; Theoharis, Theoharis; Kakadiaris, Ioannis A

    2013-07-01

    A 3D landmark detection method for 3D facial scans is presented and thoroughly evaluated. The main contribution of the presented method is the automatic and pose-invariant detection of landmarks on 3D facial scans under large yaw variations (that often result in missing facial data), and its robustness against large facial expressions. Three-dimensional information is exploited by using 3D local shape descriptors to extract candidate landmark points. The shape descriptors include the shape index, a continuous map of principal curvature values of a 3D object's surface, and spin images, local descriptors of the object's 3D point distribution. The candidate landmarks are identified and labeled by matching them with a Facial Landmark Model (FLM) of facial anatomical landmarks. The presented method is extensively evaluated against a variety of 3D facial databases and achieves state-of-the-art accuracy (4.5-6.3 mm mean landmark localization error), considerably outperforming previous methods, even when tested with the most challenging data.
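
    The shape index mentioned above has a standard closed form in terms of the principal curvatures; the sketch below uses a common [0, 1] variant, which may differ in sign convention or scaling from the exact definition used in the paper.

        import numpy as np

        def shape_index(k1, k2):
            """Shape index from principal curvatures, mapped to [0, 1].
            k1 and k2 are arrays of principal curvatures (order handled internally)."""
            kmax, kmin = np.maximum(k1, k2), np.minimum(k1, k2)
            si = 0.5 - (1.0 / np.pi) * np.arctan2(kmax + kmin, kmax - kmin)
            # under this convention: ~0 for cap-like (convex) regions,
            # ~0.5 for saddles, ~1 for cup-like (concave) regions
            return si

        # Candidate landmarks (e.g. nose tip, eye corners) are points whose shape
        # index falls in the range expected for that landmark type.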

  12. 3D video analysis of the novel object recognition test in rats.

    PubMed

    Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao

    2014-10-01

    The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. A systematic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity.

  13. Stratification approach for 3-D euclidean reconstruction of nonrigid objects from uncalibrated image sequences.

    PubMed

    Wang, Guanghui; Wu, Q M Jonathan

    2008-02-01

    This paper addresses the problem of 3-D reconstruction of nonrigid objects from uncalibrated image sequences. Under the assumption of an affine camera and that the nonrigid object is composed of a rigid part and a deformation part, we propose a stratification approach to recover the structure of nonrigid objects by first reconstructing the structure in affine space and then upgrading it to Euclidean space. The novelty and main features of the method lie in several aspects. First, we propose a deformation weight constraint to the problem and prove the invariability between the recovered structure and shape bases under this constraint. The constraint was not observed by previous studies. Second, we propose a constrained power factorization algorithm to recover the deformation structure in affine space. The algorithm overcomes some limitations of a previous singular-value-decomposition-based method. It can even work with missing data in the tracking matrix. Third, we propose to separate the rigid features from the deformation ones in 3-D affine space, which makes the detection more accurate and robust. The stratification matrix is estimated from the rigid features, which may relax the influence of large tracking errors in the deformation part. Extensive experiments on synthetic data and real sequences validate the proposed method and show improvements over existing solutions.

  14. 3-D Object Recognition Using Combined Overhead And Robot Eye-In-Hand Vision System

    NASA Astrophysics Data System (ADS)

    Luc, Ren C.; Lin, Min-Hsiung

    1987-10-01

    A new approach for recognizing 3-D objects using a combined overhead and eye-in-hand vision system is presented. A novel eye-in-hand vision system using a fiber-optic image array is described. The significance of this approach is the fast and accurate recognition of 3-D object information compared to traditional stereo image processing. For the recognition of 3-D objects, the overhead vision system takes a 2-D top-view image and the eye-in-hand vision system takes side-view images orthogonal to the top-view image plane. We have developed and demonstrated a unique approach to integrating this 2-D information into a 3-D representation, based on a new approach called "3-D Volumetric Description from 2-D Orthogonal Projections". The Unimate PUMA 560 and TRAPIX 5500 real-time image processor have been used to test the success of the entire system.
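
    The idea of a volumetric description from 2-D orthogonal projections can be illustrated by intersecting the two silhouettes in a voxel grid; the sketch below is a toy version with assumed axis conventions, not the paper's actual representation.

        import numpy as np

        def volume_from_orthogonal_views(top_mask, side_mask):
            """Toy silhouette intersection: a voxel is kept only if it projects into
            the object silhouette in both the top view (x-y plane) and one side view
            (x-z plane).

            top_mask  : (ny, nx) boolean silhouette seen from above
            side_mask : (nz, nx) boolean silhouette seen from the side
            returns   : (nz, ny, nx) boolean occupancy grid"""
            ny, nx = top_mask.shape
            nz, nx2 = side_mask.shape
            assert nx == nx2, "views must share the x axis in this toy setup"
            # broadcast the two silhouettes into the common voxel grid and intersect
            vol = top_mask[np.newaxis, :, :] & side_mask[:, np.newaxis, :]
            return vol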

  15. Visual Short-Term Memory Benefit for Objects on Different 3-D Surfaces

    ERIC Educational Resources Information Center

    Xu, Yaoda; Nakayama, Ken

    2007-01-01

    Visual short-term memory (VSTM) plays an important role in visual cognition. Although objects are located on different 3-dimensional (3-D) surfaces in the real world, how VSTM capacity may be influenced by the presence of multiple 3-D surfaces has never been examined. By manipulating binocular disparities of visual displays, the authors found that…

  16. Detecting 3d Non-Abelian Anyons via Adiabatic Cooling

    NASA Astrophysics Data System (ADS)

    Yamamoto, Seiji; Freedman, Michael; Yang, Kun

    2011-03-01

    Majorana fermions lie at the heart of a number of recent developments in condensed matter physics. One important application is the realization of non-abelian statistics and consequently a foundation for topological quantum computation. Theoretical propositions for Majorana systems abound, but experimental detection has proven challenging. Most attempts involve interferometry, but the degeneracy of the anyon state can be leveraged to produce a cooling effect, as previously shown in 2d. We apply this method of anyon detection to the 3d anyon model of Teo and Kane. Like the Fu-Kane model, this involves a hybrid system of topological insulator (TI) and superconductor (SC). The Majorana modes are localized to anisotropic hedgehogs in the order parameter which appear at the TI-SC interface. The effective model bears some resemblance to the non-Abelian Higgs model with scalar coupling as studied, for example, by Jackiw and Rebbi. In order to make concrete estimates relevant to experiments, we use parameters appropriate to Ca-doped Bi2Se3 as the topological insulator and Cu-doped Bi2Se3 as the superconductor. We find a temperature window in the milli-Kelvin regime where the presence of 3d non-abelian anyons will lead to an observable cooling effect.

  17. Electro-holography display using computer generated hologram of 3D objects based on projection spectra

    NASA Astrophysics Data System (ADS)

    Huang, Sujuan; Wang, Duocheng; He, Chao

    2012-11-01

    A new method of synthesizing a computer-generated hologram of three-dimensional (3D) objects from their projection images is proposed. A series of projection images of the 3D objects is recorded with one-dimensional azimuth scanning. According to the principles of the paraboloid of revolution in 3D Fourier space and the 3D central slice theorem, spectral information of the 3D objects can be gathered from their projection images. Considering the quantization error in the horizontal and vertical directions, the spectral information from each projection image is efficiently extracted in double-circle and four-circle shapes to enhance the utilization of the projection spectra. Then the spectral information of the 3D objects from all projection images is encoded into a computer-generated hologram based on the Fourier transform using conjugate-symmetric extension. The hologram includes the 3D information of the objects. Experimental results for numerical reconstruction of the CGH at different distances validate the proposed method and show its good performance. Electro-holographic reconstruction can be realized by using an electronically addressed reflective liquid-crystal display (LCD) spatial light modulator. The CGH from the computer is loaded onto the LCD. By illuminating the LCD with a reference light from a laser source, the amplitude and phase information included in the CGH is reconstructed through the diffraction of the light modulated by the LCD.

  18. Recognizing Objects in 3D Point Clouds with Multi-Scale Local Features

    PubMed Central

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-01-01

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms. PMID:25517694

  19. Recognizing objects in 3D point clouds with multi-scale local features.

    PubMed

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-12-15

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms.

  20. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval, because of the highly discriminative property of 3-D objects and their multi-view representation. State-of-the-art methods depend heavily on their own camera array settings for capturing views of a 3-D object and use a complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. In order to move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, which means views can be captured from any direction without any camera array restriction. Views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. In our proposed method, the HMM is used in a twofold way: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval is performed by combining the query model with the HMM. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme shows better performance than existing methods.

  1. Synthesis and display of dynamic holographic 3D scenes with real-world objects.

    PubMed

    Paturzo, Melania; Memmolo, Pasquale; Finizio, Andrea; Näsänen, Risto; Naughton, Thomas J; Ferraro, Pietro

    2010-04-26

    A 3D scene is synthesized combining multiple optically recorded digital holograms of different objects. The novel idea consists of compositing moving 3D objects in a dynamic 3D scene using a process that is analogous to stop-motion video. However, in this case the movie has the exciting attribute that it can be displayed and observed in 3D. We show that 3D dynamic scenes can be projected as an alternative to the complicated and heavy computations needed to generate realistic-looking computer generated holograms. The key tool for creating the dynamic action is based on a new concept that consists of a spatial, adaptive transformation of digital holograms of real-world objects, allowing full control in the manipulation of the object's position and size in a 3D volume with very high depth-of-focus. A pilot experiment to evaluate how viewers perceive depth in a conventional single-view display of the dynamic 3D scene has been performed.

  2. The 3D scanner prototype utilize object profile imaging using line laser and octave software

    NASA Astrophysics Data System (ADS)

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

    A three-dimensional scanner, or 3D scanner, is a device for reconstructing a real object in digital form on a computer. 3D scanning is a technology under active development, especially in developed countries, and current advanced 3D scanner devices are very expensive. This study presents a simple prototype of a 3D scanner with a very low investment cost. The 3D scanner prototype consists of a webcam, a rotating desk system controlled by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects having the same radius from their center point (object pivot). Scanning is performed by imaging the object profile illuminated by the line laser, which is captured by the camera and processed by a computer (image processing) using Octave software. At each image acquisition, the scanned object on the rotating desk is rotated by a fixed angle, so that after one full turn multiple images covering all sides are obtained. The profile is then extracted from all images in order to obtain the digital object dimensions. The digital dimensions are calibrated against a length standard, called a gage block. The overall dimensions are then digitally reconstructed into a three-dimensional object. Validation of the reconstruction against the original object dimensions is expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
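
    The per-image processing step, extracting the laser line from each camera frame, might look like the following sketch; the peak-per-column strategy and the threshold are illustrative assumptions rather than the Octave code used in the study.

        import numpy as np

        def laser_profile(image, min_intensity=50):
            """Per-column laser line extraction for a line-laser scanner: in each
            image column, take the row of the brightest pixel as the laser position.
            Sub-pixel refinement and calibration to millimetres are omitted; the
            threshold value is illustrative."""
            rows = np.argmax(image, axis=0).astype(float)
            peak = image.max(axis=0)
            rows[peak < min_intensity] = np.nan   # column has no visible laser line
            return rows                           # one laser row coordinate per column

        # Repeating this for every rotation step of the turntable and converting the
        # (column, row, angle) triplets to cylindrical coordinates yields the 3D profile.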

  3. Electrophysiological evidence of separate pathways for the perception of depth and 3D objects.

    PubMed

    Gao, Feng; Cao, Bihua; Cao, Yunfei; Li, Fuhong; Li, Hong

    2015-05-01

    Previous studies have investigated the neural mechanism of 3D perception, but the neural distinction between 3D-objects and depth processing remains unclear. In the present study, participants viewed three types of graphics (planar graphics, perspective drawings, and 3D objects) while event-related potentials (ERP) were recorded. The ERP results revealed the following: (1) 3D objects elicited a larger and delayed N1 component than the other two types of stimuli; (2) during the P2 time window, significant differences between 3D objects and the perspective drawings were found mainly over a group of electrode sites in the left lateral occipital region; and (3) during the N2 complex, differences between planar graphics and perspective drawings were found over a group of electrode sites in the right hemisphere, whereas differences between perspective drawings and 3D objects were observed at another group of electrode sites in the left hemisphere. These findings support the claim that depth processing and object identification might be processed by separate pathways and at different latencies.

  4. CLIP: similarity searching of 3D databases using clique detection.

    PubMed

    Rhodes, Nicholas; Willett, Peter; Calvet, Alain; Dunbar, James B; Humblet, Christine

    2003-01-01

    This paper describes a program for 3D similarity searching, called CLIP (for Candidate Ligand Identification Program), that uses the Bron-Kerbosch clique detection algorithm to find those structures in a file that have large substructures in common with a target structure. Structures are characterized by the geometric arrangement of pharmacophore points, and the similarity between two structures is calculated using modifications of the Simpson and Tanimoto association coefficients. This modification takes into account the fact that a distance tolerance is required to ensure that pairs of interatomic distances can be regarded as equivalent during the clique-construction stage of the matching algorithm. Experiments with HIV assay data demonstrate the effectiveness and the efficiency of this approach to virtual screening.
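
    The Simpson and Tanimoto association coefficients referred to above have simple forms when expressed in terms of the common clique size; the sketch below shows the unmodified coefficients (the distance-tolerance modification described in the abstract is omitted).

        def tanimoto(n_a, n_b, n_common):
            """Tanimoto-style similarity for 3D similarity searching: n_a and n_b are
            the numbers of pharmacophore points in the target and database structures,
            n_common is the size of the largest common clique found by matching."""
            return n_common / (n_a + n_b - n_common)

        def simpson(n_a, n_b, n_common):
            """Simpson-style similarity: common clique size relative to the smaller
            structure."""
            return n_common / min(n_a, n_b)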

  5. Depth representation of moving 3-D objects in apparent-motion path.

    PubMed

    Hidaka, Souta; Kawachi, Yousuke; Gyoba, Jiro

    2008-01-01

    Apparent motion is perceived when two objects are presented alternately at different positions. The internal representations of apparently moving objects are formed in an apparent-motion path which lacks physical inputs. We investigated the depth information contained in the representation of 3-D moving objects in an apparent-motion path. We examined how probe objects, briefly placed in the motion path, affected the perceived smoothness of apparent motion. The probe objects were either 3-D objects, defined by shading or by disparity (convex/concave), or 2-D (flat) objects, while the moving objects were convex/concave objects. We found that flat probe objects induced a significantly smoother motion perception than concave probe objects only in the case of convex moving objects. However, convex probe objects did not lead to smoother motion as the flat objects did, although the convex probe objects contained the same depth information as the moving objects. Moreover, the difference between probe objects was reduced when the moving objects were concave. These counterintuitive results were consistent across conditions in which both depth cues were used. The results suggest that internal representations contain incomplete depth information that is intermediate between that of 2-D and 3-D objects.

  6. Plane-based optimization for 3D object reconstruction from single line drawings.

    PubMed

    Liu, Jianzhuang; Cao, Liangliang; Li, Zhenguo; Tang, Xiaoou

    2008-02-01

    In previous optimization-based methods of 3D planar-faced object reconstruction from single 2D line drawings, the missing depths of the vertices of a line drawing (and other parameters in some methods) are used as the variables of the objective functions. A 3D object with planar faces is derived by finding values for these variables that minimize the objective functions. These methods work well for simple objects with a small number N of variables. As N grows, however, it is very difficult for them to find the expected objects. This is because, with nonlinear objective functions in a space of large dimension N, the search for optimal solutions can easily get trapped in local minima. In this paper, we use the parameters of the planes that pass through the planar faces of an object as the variables of the objective function. This leads to a set of linear constraints on the planes of the object, resulting in a much lower dimensional nullspace where optimization is easier to achieve. We prove that the dimension of this nullspace is exactly equal to the minimum number of vertex depths which define the 3D object. Since a practical line drawing is usually not an exact projection of a 3D object, we expand the nullspace to a larger space based on the singular value decomposition of the projection matrix of the line drawing. In this space, robust 3D reconstruction can be achieved. Compared with the two most closely related methods, our method not only can reconstruct more complex 3D objects from 2D line drawings, but is also computationally more efficient.

  7. Programming self assembly by designing the 3D shape of floating objects

    NASA Astrophysics Data System (ADS)

    Poty, Martin; Lagubeau, Guillaume; Lumay, Geoffroy; Vandewalle, Nicolas

    2014-11-01

    Self-assembly of floating particles driven by capillary forces at a liquid-air interface leads to the formation of two-dimensional structures. Using a 3D printer, millimeter-scale objects are produced. Their 3D shape is chosen in order to create capillary multipoles. The capillary interactions between these components can be either attractive or repulsive depending on the local deformations along the liquid-air interface. In order to understand how the shape of an object deforms the interface, we developed an original profilometry method. The measurements show that specific structures can be programmed by selecting the 3D branched shapes.

  8. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two clues considered in this system. The system basically consists of three processes: the low level process to extract image features, the middle level process to establish the correspondence in the stereo (spatial) and motion (temporal) modalities, and the high level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.
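
    The final step, computing 3-D coordinates from established stereo correspondences, is commonly realized with linear (DLT) triangulation; the sketch below is a generic version under the assumption of known 3x4 projection matrices, not necessarily the formulation used in this work.

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear triangulation of one corner point from two calibrated views.

            P1, P2 : 3x4 camera projection matrices
            x1, x2 : (u, v) pixel coordinates of the matched corner in each image
            returns: 3D point in world coordinates"""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]     # dehomogenize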

  9. Optimal Local Searching for Fast and Robust Textureless 3D Object Tracking in Highly Cluttered Backgrounds.

    PubMed

    Seo, Byung-Kuk; Park, Jong-Il; Hinterstoisser, Stefan; Ilic, Slobodan

    2013-06-13

    Edge-based tracking is a fast and plausible approach for textureless 3D object tracking, but its robustness is still very challenging in highly cluttered backgrounds due to numerous local minima. To overcome this problem, we propose a novel method for fast and robust textureless 3D object tracking in highly cluttered backgrounds. The proposed method is based on optimal local searching of 3D-2D correspondences between a known 3D object model and 2D scene edges in an image with heavy background clutter. In our searching scheme, searching regions are partitioned into three levels (interior, contour, and exterior) with respect to the previous object region, and confident searching directions are determined by evaluating candidates of correspondences on their region levels; thus, the correspondences are searched among likely candidates in only the confident directions instead of searching through all candidates. To ensure the confident searching direction, we also adopt the region appearance, which is efficiently modeled on a newly defined local space (called a searching bundle). Experimental results and performance evaluations demonstrate that our method fully supports fast and robust textureless 3D object tracking even in highly cluttered backgrounds.

  10. Optimal local searching for fast and robust textureless 3D object tracking in highly cluttered backgrounds.

    PubMed

    Seo, Byung-Kuk; Park, Hanhoon; Park, Jong-Il; Hinterstoisser, Stefan; Ilic, Slobodan

    2014-01-01

    Edge-based tracking is a fast and plausible approach for textureless 3D object tracking, but its robustness is still very challenging in highly cluttered backgrounds due to numerous local minima. To overcome this problem, we propose a novel method for fast and robust textureless 3D object tracking in highly cluttered backgrounds. The proposed method is based on optimal local searching of 3D-2D correspondences between a known 3D object model and 2D scene edges in an image with heavy background clutter. In our searching scheme, searching regions are partitioned into three levels (interior, contour, and exterior) with respect to the previous object region, and confident searching directions are determined by evaluating candidates of correspondences on their region levels; thus, the correspondences are searched among likely candidates in only the confident directions instead of searching through all candidates. To ensure the confident searching direction, we also adopt the region appearance, which is efficiently modeled on a newly defined local space (called a searching bundle). Experimental results and performance evaluations demonstrate that our method fully supports fast and robust textureless 3D object tracking even in highly cluttered backgrounds.

  11. Shaping functional nano-objects by 3D confined supramolecular assembly.

    PubMed

    Deng, Renhua; Liang, Fuxin; Li, Weikun; Liu, Shanqin; Liang, Ruijing; Cai, Mingle; Yang, Zhenzhong; Zhu, Jintao

    2013-12-20

    Nano-objects are generated through 3D confined supramolecular assembly, followed by sequential disintegration by rupturing the hydrogen bonding. The shape of the nano-objects is tunable, ranging from nano-disc and nano-cup to nano-toroid. The nano-objects are pH-responsive. Functional materials, for example inorganic or metal nanoparticles, are easily complexed onto the external surface to extend both the composition and the microstructure of the nano-objects.

  12. 3D metamaterial absorber for attomole molecular detection (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tanaka, Takuo; Ishikawa, Atsushi

    2016-09-01

    A 3D metamaterial absorber was used for a background-suppressed surface-enhanced molecular detection technique. By utilizing the resonant coupling between the plasmonic modes of a metamaterial absorber and the infrared (IR) vibrational modes of a self-assembled monolayer (SAM), attomole-level molecular sensitivity was experimentally demonstrated. IR absorption spectroscopy of molecular vibrations is of importance in chemical, materials, and medical science, since it provides essential information about molecular structure, composition, and orientation. In vibrational spectroscopic techniques, in addition to the weak signals from the molecules, a strong background degrades the signal-to-noise ratio, and suppression of the background is crucial for further improvement of the sensitivity. Here, we demonstrate low-background resonant surface-enhanced IR absorption (SEIRA) using a metamaterial IR absorber that offers significant background suppression as well as plasmonic enhancement. The fabricated metamaterial consisted of a 1D array of Au micro-ribbons on a thick Au film separated by a transparent gap layer made of MgF2. The surface structures were designed to exhibit an anomalous IR absorption at 3000 cm-1, which spectrally overlapped with the C-H stretching vibrational modes. 16-Mercaptohexadecanoic acid (16-MHDA) was used as a test molecule, which formed a 2-nm thick SAM with its thiol head-group chemisorbed on the Au surface. In the FTIR measurements, the symmetric and asymmetric C-H stretching modes were clearly observed as reflection peaks within the broad plasmonic absorption of the metamaterial.

  13. Detection of Curved Robots using 3D Ultrasound.

    PubMed

    Ren, Hongliang; Vasilyev, Nikolay V; Dupont, Pierre E

    2011-09-25

    Three-dimensional ultrasound can be an effective imaging modality for image-guided interventions since it enables visualization of both the instruments and the tissue. For robotic applications, its realtime frame rates create the potential for image-based instrument tracking and servoing. These capabilities can enable improved instrument visualization, compensation for tissue motion as well as surgical task automation. Continuum robots, whose shape comprises a smooth curve along their length, are well suited for minimally invasive procedures. Existing techniques for ultrasound tracking, however, are limited to straight, laparoscopic-type instruments and thus are not applicable to continuum robot tracking. Toward the goal of developing tracking algorithms for continuum robots, this paper presents a method for detecting a robot comprised of a single constant curvature in a 3D ultrasound volume. Computational efficiency is achieved by decomposing the six-dimensional circle estimation problem into two sequential three-dimensional estimation problems. Simulation and experiment are used to evaluate the proposed method.

  14. Detection of Disease Symptoms on Hyperspectral 3d Plant Models

    NASA Astrophysics Data System (ADS)

    Roscher, Ribana; Behmann, Jan; Mahlein, Anne-Katrin; Dupuis, Jan; Kuhlmann, Heiner; Plümer, Lutz

    2016-06-01

    We analyze the benefit of combining hyperspectral image information with 3D geometry information for the detection of Cercospora leaf spot disease symptoms on sugar beet plants. Besides the commonly used one-class Support Vector Machines, we utilize an unsupervised sparse representation-based approach with a group sparsity prior. Geometry information is incorporated by representing each sample of interest with an inclination-sorted dictionary, which can be seen as a 1D topographic dictionary. We compare this approach with a sparse representation-based approach without geometry information and with One-Class Support Vector Machines. One-Class Support Vector Machines are applied to hyperspectral data without geometry information as well as to hyperspectral images with additional pixelwise inclination information. Our results show a gain in accuracy when using geometry information alongside spectral information, regardless of the approach used. However, the two methods place different demands on the data when applied to new test data sets: One-Class Support Vector Machines require full inclination information on test and training data, whereas the topographic dictionary approach only needs spectral information for the reconstruction of test data once the dictionary has been built from spectra with inclination.
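
    A minimal sketch of the One-Class SVM variant that uses pixelwise inclination alongside the spectrum, based on the scikit-learn OneClassSVM; the feature layout and hyperparameters are illustrative assumptions.

        import numpy as np
        from sklearn.svm import OneClassSVM

        def fit_healthy_model(healthy_spectra, healthy_inclination, nu=0.05):
            """Each pixel is represented by its reflectance spectrum concatenated
            with its inclination angle; the model is trained on healthy tissue only."""
            X = np.hstack([healthy_spectra, healthy_inclination[:, None]])
            return OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(X)

        def detect_symptoms(model, spectra, inclination):
            """Pixels predicted as -1 (outliers with respect to the healthy training
            data) are flagged as candidate disease symptoms."""
            X = np.hstack([spectra, inclination[:, None]])
            return model.predict(X) == -1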

  15. Lossy to lossless object-based coding of 3-D MRI data.

    PubMed

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting steps scheme allows mapping integers to integers, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any quality up to lossless. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region of interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performance. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
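
    The integer-to-integer lifting idea can be illustrated with a one-dimensional LeGall 5/3 lifting step; the sketch below uses periodic boundary extension and is not the paper's actual 3-D filter bank.

        import numpy as np

        def legall53_forward(x):
            """One level of the LeGall 5/3 integer lifting transform along one axis.
            x is a 1-D integer signal of even length; returns (approximation, detail).
            Because predict and update only use the other branch, the transform maps
            integers to integers and is exactly invertible (lossless)."""
            x = np.asarray(x, dtype=np.int64)
            even, odd = x[0::2].copy(), x[1::2].copy()
            # predict step: detail coefficients
            odd -= np.floor((even + np.roll(even, -1)) / 2).astype(np.int64)
            # update step: approximation coefficients
            even += np.floor((odd + np.roll(odd, 1) + 2) / 4).astype(np.int64)
            return even, odd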

  16. Surveillance, detection, and 3D infrared tracking of bullets, rockets, mortars, and artillery

    NASA Astrophysics Data System (ADS)

    Leslie, Daniel H.; Hyman, Howard; Moore, Fritz; Squire, Mark D.

    2001-09-01

    We describe test results using the FIRST (Fast InfraRed Sniper Tracker) to detect, track, and range to bullets in flight for determining the location of the bullet launch point. The technology developed for the FIRST system can be used to provide detection and accurate 3D track data for other small threat objects including rockets, mortars, and artillery in addition to bullets. We discuss the radiometry and detection range for these objects, and discuss the trade-offs involved in design of the very fast optical system for acquisition, tracking, and ranging of these targets.

  17. High-purity 3D nano-objects grown by focused-electron-beam induced deposition

    NASA Astrophysics Data System (ADS)

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M.; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ~50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core-shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  18. 3D-Web-GIS RFID location sensing system for construction objects.

    PubMed

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.
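
    A toy version of a hybrid Simulated Annealing plus gradient descent location search, in the spirit of the algorithm described above; the cost function (squared range residuals to known anchor positions) and all parameter values are illustrative assumptions, not the paper's formulation.

        import numpy as np

        def locate(anchors, dists, n_sa=2000, T0=1.0, lr=0.05, n_gd=200, seed=0):
            """Estimate a 3D position from measured distances `dists` to known
            `anchors` (shape (n, 3)). SA explores and stabilizes the search,
            gradient descent then refines the best candidate to reduce errors."""
            rng = np.random.default_rng(seed)

            def cost(p):
                return np.sum((np.linalg.norm(anchors - p, axis=1) - dists) ** 2)

            # Simulated Annealing: global, stochastic search
            p = anchors.mean(axis=0)
            best, best_c = p.copy(), cost(p)
            for k in range(n_sa):
                T = T0 * (1.0 - k / n_sa) + 1e-6
                cand = p + rng.normal(scale=T, size=3)
                dc = cost(cand) - cost(p)
                if dc < 0 or rng.random() < np.exp(-dc / T):
                    p = cand
                    if cost(p) < best_c:
                        best, best_c = p.copy(), cost(p)

            # Gradient descent: local refinement of the SA solution
            p = best
            for _ in range(n_gd):
                d = np.maximum(np.linalg.norm(anchors - p, axis=1), 1e-9)
                grad = np.sum(2 * (d - dists)[:, None] * (p - anchors) / d[:, None], axis=0)
                p -= lr * grad
            return p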

  19. Identification of superficial defects in reconstructed 3D objects using phase-shifting fringe projection

    NASA Astrophysics Data System (ADS)

    Madrigal, Carlos A.; Restrepo, Alejandro; Branch, John W.

    2016-09-01

    3D reconstruction of small objects is used in applications such as surface analysis, forensic analysis and tissue reconstruction in medicine. In this paper, we propose a strategy for the 3D reconstruction of small objects and the identification of some superficial defects. We applied a technique of structured light pattern projection, specifically sinusoidal fringes, together with a phase unwrapping algorithm. A CMOS camera was used to capture images and a DLP digital light projector for synchronous projection of the sinusoidal pattern onto the objects. We implemented a calibration process based on a 2D flat pattern, from which the intrinsic and extrinsic parameters of the camera and the DLP were determined. Experimental tests were performed on samples of artificial teeth, coal particles, welding defects and surfaces tested with Vickers indentation. Areas of less than 5 cm were studied. The objects were reconstructed in 3D with densities of about one million points per sample. In addition, the steps of 3D description, identification of primitives, training and classification were implemented to recognize defects such as holes, cracks, rough textures and bumps. We found that pattern recognition strategies are useful when surface quality supervision has enough points to evaluate the defective region, since the identification of defects in small objects is a demanding visual inspection task.
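
    The phase-shifting step mentioned above is commonly implemented with the four-step formula; the sketch below assumes four fringe images shifted by pi/2, which may differ from the number of shifts used in the study.

        import numpy as np

        def wrapped_phase(i1, i2, i3, i4):
            """Wrapped phase from four sinusoidal fringe images shifted by pi/2 each:
                phi = atan2(I4 - I2, I1 - I3)
            Phase unwrapping and phase-to-height calibration come afterwards."""
            return np.arctan2(i4 - i2, i1 - i3)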

  20. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.

  1. Contribution of 3-D electrical resistivity tomography for landmines detection

    NASA Astrophysics Data System (ADS)

    Metwaly, M.; El-Qady, G.; Matsushima, J.; Szalai, S.; Al-Arifi, N. S. N.; Taha, A.

    2008-12-01

    Landmines are inexpensive weapons widely used in conflict-affected areas in many countries worldwide. The two main types are metallic and non-metallic (mostly plastic) landmines. They are most commonly investigated by magnetic, ground penetrating radar (GPR), and metal detector (MD) techniques. These geophysical techniques, however, have significant limitations in resolving non-metallic landmines and in settings where the host materials are conductive. In this work, the 3-D electric resistivity tomography (ERT) technique is evaluated as an alternative and/or confirmatory detection system for both landmine types, buried in different soil conditions and at different depths. This can be achieved using a capacitive resistivity imaging system, which does not need direct contact with the ground surface. Synthetic models for each case have been introduced using metallic and non-metallic bodies buried in wet and dry environments. The inversion results using the L1 norm least-squares optimization method tend to produce robust blocky models of the landmine body. The dipole axial and dipole equatorial arrays tend to have the most favorable geometry when applying dynamic capacitive electrodes, and they show significant signal strength for data sets with up to 5% noise. Increasing the burial depth relative to the electrode spacing, as well as the noise percentage in the resistivity data, is crucial in resolving the landmines in different environments. A landmine with dimensions and burial depth of one electrode separation unit is overestimated, while the spatial resolution decreases as the burial depth and noise percentage increase.

  2. A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision

    NASA Astrophysics Data System (ADS)

    Saini, Deepika; Kumar, Sanjeev

    2015-12-01

    In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from an arbitrary pair of stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton: it is used as a seed for recovering the shape of the 3D object, and the extracted boundary is used for terminating the growing process. A NURBS-skeleton is used to extract the skeleton in both views. The affine invariant property of convex hulls is used to establish the correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object. A sphere centered at a skeleton point, of radius equal to the minimum distance to the boundary, is tangential to the boundary, and filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented in order to check the applicability and validity of the proposed algorithm.
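
    A minimal sketch of the sphere-filling growing process described above, assuming point-sampled skeleton and boundary and a voxel-grid query set; names and data layout are illustrative.

        import numpy as np

        def fill_from_skeleton(skeleton_pts, boundary_pts, grid_pts):
            """Each skeleton point gets a radius equal to its smallest distance to the
            object boundary (the distance field); the union of the resulting spheres
            approximates the object. `grid_pts` is the set of candidate 3D points
            (e.g. a voxel grid) to be labelled inside/outside."""
            radii = np.array([np.min(np.linalg.norm(boundary_pts - s, axis=1))
                              for s in skeleton_pts])
            inside = np.zeros(len(grid_pts), dtype=bool)
            for s, r in zip(skeleton_pts, radii):
                inside |= np.linalg.norm(grid_pts - s, axis=1) <= r
            return inside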

  3. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots.

  4. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed Central

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees’ flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006

  5. Recognition by Humans and Pigeons of Novel Views of 3-D Objects and Their Photographs

    ERIC Educational Resources Information Center

    Friedman, Alinda; Spetch, Marcia L.; Ferrey, Anne

    2005-01-01

    Humans and pigeons were trained to discriminate between 2 views of actual 3-D objects or their photographs. They were tested on novel views that were either within the closest rotational distance between the training views (interpolated) or outside of that range (extrapolated). When training views were 60° apart, pigeons, but not humans,…

  6. 3D-modeling of deformed halite hopper crystals by Object Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Leitner, Christoph; Hofmann, Peter; Marschallinger, Robert

    2014-12-01

    Object Based Image Analysis (OBIA) is an established method for analyzing multiscale and multidimensional imagery in a range of disciplines. In the present study this method was used for the 3D reconstruction of halite hopper crystals in a mudrock sample, based on Computed Tomography data. To quantitatively assess the reliability of OBIA results, they were benchmarked against a corresponding "gold standard", a reference 3D model of the halite crystals that was derived by manual expert digitization of the CT images. For accuracy assessment, classical per-scene statistics were extended to per-object statistics. The strength of OBIA was to recognize all objects similar to halite hopper crystals and in particular to eliminate cracks. Using a support vector machine (SVM) classifier on top of OBIA, unsuitable objects like halite crystal clusters, polyhalite-coated crystals and spherical halite crystals were effectively dismissed, but simultaneously the number of well-shaped halites was reduced.

  7. Automatic pole-like object modeling via 3D part-based analysis of point cloud

    NASA Astrophysics Data System (ADS)

    He, Liu; Yang, Haoxiang; Huang, Yuchun

    2016-10-01

    Pole-like objects, including trees, lampposts and traffic signs, are an indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point clouds of roadside urban areas are increasingly used in 3D digital city modeling. Based on the property that different pole-like objects have varied canopy parts but similar trunk parts, this paper proposes a 3D part-based shape analysis to robustly extract, identify and model pole-like objects. The proposed method includes 3D clustering and recognition of trunks, voxel growing, and part-based 3D modeling. After preprocessing, a trunk center is identified as a point that is a local density peak and has the largest minimum inter-cluster distance. Starting from the trunk centers, the remaining points are iteratively clustered to the same center as their nearest point of higher density. To eliminate noisy points, the cluster borders are refined by trimming boundary outliers. Candidate trunks are then extracted from the clustering results in three orthogonal planes by shape analysis. Voxel growing recovers the complete pole-like objects regardless of overlap. Finally, the trunk, branch and crown parts are analyzed to obtain seven feature parameters, which are used to model the three parts separately and assemble them into a single 3D model. The proposed method is tested on a VLS-based point cloud of Wuhan University, China, which includes many kinds of trees, lampposts and other pole-like posts under different occlusions and overlap. Experimental results show that the proposed method can extract the exact attributes and model the roadside pole-like objects efficiently.
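
    A rough sketch of the trunk-centre idea described above (a density-peak style clustering): centres are points of high local density whose nearest denser neighbour is far away, and every remaining point joins the cluster of its nearest higher-density point. Parameter names and the cut-off kernel are our assumptions, not the authors' exact formulation.

      import numpy as np
      from scipy.spatial.distance import cdist

      def density_peak_clusters(points, dc=0.5, n_centers=10):
          d = cdist(points, points)
          rho = (d < dc).sum(axis=1)                      # local density (cut-off kernel)

          # delta: distance to the nearest point of higher density
          delta = np.full(len(points), d.max())
          nearest_denser = np.arange(len(points))
          for i in range(len(points)):
              higher = np.where(rho > rho[i])[0]
              if higher.size:
                  j = higher[np.argmin(d[i, higher])]
                  delta[i], nearest_denser[i] = d[i, j], j

          centers = np.argsort(rho * delta)[-n_centers:]  # candidate trunk centres
          labels = np.full(len(points), -1)
          labels[centers] = np.arange(n_centers)
          for i in np.argsort(-rho):                      # assign in order of decreasing density
              if labels[i] == -1:
                  labels[i] = labels[nearest_denser[i]]
          return centers, labels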

  8. 3D models automatic reconstruction of selected close range objects. (Polish Title: Automatyczna rekonstrukcja modeli 3D małych obiektów bliskiego zasiegu)

    NASA Astrophysics Data System (ADS)

    Zaweiska, D.

    2013-12-01

    Reconstruction of three-dimensional, realistic models of objects from digital images has been a research topic in many areas of science for many years. This development is stimulated by recently emerged technologies and tools, such as digital photography, laser scanners, increased equipment efficiency and the Internet. The objective of this paper is to present results of automatic modeling of selected close-range objects, using digital photographs acquired with a Hasselblad H4D50 camera. The author's software tool, which performs the successive stages of 3D model creation, was used for the calculations. The modeling process is presented as a complete workflow that starts from image acquisition and ends with the creation of a photorealistic 3D model in the same software environment. Experiments were performed for selected close-range objects, with an appropriately arranged image geometry forming a ring around the measured object. The area-based matching (CC/LSM) method and the RANSAC algorithm, with the use of tensor calculus, were utilized for automatic matching of points detected with the SUSAN algorithm. Surface reconstruction is one of the important stages of 3D modeling. Reconstruction of precise surfaces from an unorganized cloud of points, acquired from automatic processing of digital images, is a difficult task which has not been definitively solved. Creation of polygonal models that can meet high requirements for modeling and visualization is required in many applications. The polygonal method is usually the best way to represent measurement results precisely and, at the same time, to achieve an optimal description of the surface. Three algorithms were tested: the volumetric method (VCG), the Poisson method and the ball-pivoting method. These methods are mostly applied to modeling of uniform grids of points. Results of the experiments proved that incorrect

  9. Blind Search of Faint Moving Objects in 3D Data Sets

    DTIC Science & Technology

    2013-09-01

    …using a simulated object signature superimposed on measured background, and show that the limiting magnitude can be improved by up to 6 visual…magnitudes. A quasi blind search algorithm that identifies the streak of photons, assuming no prior knowledge of orbital information, will be discussed

  10. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

    An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  11. Special Section: New Ways to Detect Colon Cancer 3-D virtual screening now being used

    MedlinePlus

    … showcases a 3-D image generated by the virtual colonoscopy software he invented with a team of …

  12. Ex vivo evaluation of new 2D and 3D dental radiographic technology for detecting caries

    PubMed Central

    Tyndall, Donald; Mol, André; Everett, Eric T; Bangdiwala, Ananta

    2016-01-01

    Objectives: Proximal dental caries remains a prevalent disease with only modest detection rates by current diagnostic systems. Many new systems are available without controlled validation of diagnostic efficacy. The objective of this study was to evaluate the diagnostic efficacy of three potentially promising new imaging systems. Methods: This study evaluated the caries detection efficacy of Schick 33 (Sirona Dental, Salzburg, Austria) intraoral digital detector images employing an advanced sharpening filter, Planmeca ProMax® (Planmeca Inc., Helsinki, Finland) extraoral “panoramic bitewing” images and Sirona Orthophos XG3D (Sirona Dental) CBCT images with advanced artefact reduction. Conventional photostimulable phosphor images served as the control modality. An ex vivo study design using extracted human teeth, ten expert observers and micro-CT ground truth was employed. Results: Receiver operating characteristic analysis indicated similar diagnostic efficacy of all systems (ANOVA p > 0.05). The sensitivity of the Schick 33 images (0.48) was significantly lower than the other modalities (0.53–0.62). The specificity of the Planmeca images (0.86) was significantly lower than Schick 33 (0.96) and XG3D (0.97). The XG3D showed significantly better cavitation detection sensitivity (0.62) than the other modalities (0.48–0.57). Conclusions: The Schick 33 images demonstrated reduced caries sensitivity, whereas the Planmeca panoramic bitewing images demonstrated reduced specificity. XG3D with artefact reduction demonstrated elevated sensitivity and specificity for caries detection, improved depth accuracy and substantially improved cavitation detection. Care must be taken to recognize potential false-positive caries lesions with Planmeca panoramic bitewing images. Use of CBCT for caries detection must be carefully balanced with the presence of metal artefacts, time commitment, financial cost and radiation dose. PMID:26670605
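
    For readers unfamiliar with the reported figures, the sketch below shows, under our own assumptions about data layout, how sensitivity, specificity and ROC area can be computed from observer confidence scores against a micro-CT ground truth; it is not the authors' analysis code.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      def caries_metrics(ground_truth, scores, threshold=3):
          """ground_truth: 1 = lesion present; scores: observer confidence ratings (e.g. 1-5)."""
          truth = np.asarray(ground_truth).astype(bool)
          calls = np.asarray(scores) >= threshold
          sensitivity = (calls & truth).sum() / truth.sum()
          specificity = (~calls & ~truth).sum() / (~truth).sum()
          auc = roc_auc_score(truth, scores)
          return sensitivity, specificity, auc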

  13. Combining scale-space and similarity-based aspect graphs for fast 3D object recognition.

    PubMed

    Ulrich, Markus; Wiedemann, Christian; Steger, Carsten

    2012-10-01

    This paper describes an approach for recognizing instances of a 3D object in a single camera image and for determining their 3D poses. A hierarchical model is generated solely based on the geometry information of a 3D CAD model of the object. The approach does not rely on texture or reflectance information of the object's surface, making it useful for a wide range of industrial and robotic applications, e.g., bin-picking. A hierarchical view-based approach that addresses typical problems of previous methods is applied: It handles true perspective, is robust to noise, occlusions, and clutter to an extent that is sufficient for many practical applications, and is invariant to contrast changes. For the generation of this hierarchical model, a new model image generation technique by which scale-space effects can be taken into account is presented. The necessary object views are derived using a similarity-based aspect graph. The high robustness of an exhaustive search is combined with an efficient hierarchical search. The 3D pose is refined by using a least-squares adjustment that minimizes geometric distances in the image, yielding a position accuracy of up to 0.12 percent with respect to the object distance, and an orientation accuracy of up to 0.35 degree in our tests. The recognition time is largely independent of the complexity of the object, but depends mainly on the range of poses within which the object may appear in front of the camera. For efficiency reasons, the approach allows the restriction of the pose range depending on the application. Typical runtimes are in the range of a few hundred ms.

  14. The effect of background and illumination on color identification of real, 3D objects

    PubMed Central

    Allred, Sarah R.; Olkkonen, Maria

    2013-01-01

    For the surface reflectance of an object to be a useful cue to object identity, judgments of its color should remain stable across changes in the object's environment. In 2D scenes, there is general consensus that color judgments are much more stable across illumination changes than background changes. Here we investigate whether these findings generalize to real 3D objects. Observers made color matches to cubes as we independently varied both the illumination impinging on the cube and the 3D background of the cube. As in 2D scenes, we found relatively high but imperfect stability of color judgments under an illuminant shift. In contrast to 2D scenes, we found that background had little effect on average color judgments. In addition, variability of color judgments was increased by an illuminant shift and decreased by embedding the cube within a background. Taken together, these results suggest that in real 3D scenes with ample cues to object segregation, the addition of a background may improve stability of color identification. PMID:24273521

  15. A stroboscopic structured illumination system used in dynamic 3D visualization of high-speed motion object

    NASA Astrophysics Data System (ADS)

    Su, Xianyu; Zhang, Qican; Li, Yong; Xiang, Liqun; Cao, Yiping; Chen, Wenjing

    2005-04-01

    A stroboscopic structured illumination system, which can be used to measure the 3D shape and deformation of high-speed moving objects, is proposed and verified by experiments. The system presented in this paper can automatically detect the position of a high-speed moving object and synchronously trigger the LED flash, which projects a structured optical field onto the surface of the moving object, and the imaging system, which acquires an image of the deformed fringe pattern; alternatively, it can generate a software-defined signal to synchronously control the LED and the imaging system. We experimented on a household electric fan and successfully acquired a series of instantaneous, sharp and clear images of the rotating blades, reconstructing their 3D shapes at different rotation speeds.

  16. Appearance learning for 3D pose detection of a satellite at close-range

    NASA Astrophysics Data System (ADS)

    Oumer, Nassir W.; Kriegel, Simon; Ali, Haider; Reinartz, Peter

    2017-03-01

    In this paper we present learning-based 3D detection of a highly challenging specular object exposed to direct sunlight at very close range. Object detection is one of the most important areas of image processing and can also be used to initialize local visual tracking methods. While object detection in 3D space is generally a difficult problem, it poses further difficulties when the object is specular and exposed to direct sunlight, as in a space environment. Our solution to such a problem relies on appearance learning of a real satellite mock-up based on vector quantization and a vocabulary tree. Our method, implemented on a standard computer (CPU), exploits a full perspective projection model and provides near real-time 3D pose detection of a satellite for close-range approach and manipulation. The time-consuming parts of the training (feature description, building the vocabulary tree and indexing, depth buffering and back-projection) are performed offline, while fast image retrieval and 3D-2D registration are performed online. In contrast, state-of-the-art image-based 3D pose detection methods are slower on a CPU or assume a weak-perspective camera projection model. In our case the dimension of the satellite is larger than the distance to the camera, so the weak-perspective assumption does not hold. To evaluate the proposed method, the appearance of a full-scale mock-up of the rear part of the TerraSAR-X satellite is trained under various illumination conditions and camera views. The training images are captured with a camera mounted on a six-degrees-of-freedom robot, which enables positioning the camera in a desired view sampled over a sphere. Views that are not within the workspace of the robot are interpolated using image-based rendering. Moreover, we generate ground-truth poses to verify the accuracy of the detection algorithm. The achieved results are robust and accurate even under noise due to specular reflection
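
    A simplified sketch of the retrieval idea: descriptors from the training views are vector-quantised into visual words (a flat codebook below stands in for the hierarchical vocabulary tree) and a query image votes for the training views that share its words. Class and parameter names are illustrative, not the authors' implementation.

      import numpy as np
      from collections import defaultdict
      from sklearn.cluster import KMeans

      class VisualWordIndex:
          def __init__(self, n_words=256):
              self.quantiser = KMeans(n_clusters=n_words, n_init=4, random_state=0)
              self.inverted = defaultdict(lambda: defaultdict(int))   # word -> view -> count

          def fit(self, descriptors_per_view):
              all_desc = np.vstack(list(descriptors_per_view.values()))
              self.quantiser.fit(all_desc)
              for view_id, desc in descriptors_per_view.items():
                  for w in self.quantiser.predict(desc):
                      self.inverted[w][view_id] += 1

          def query(self, descriptors, top_k=5):
              votes = defaultdict(int)
              for w in self.quantiser.predict(descriptors):
                  for view_id, count in self.inverted[w].items():
                      votes[view_id] += count
              return sorted(votes, key=votes.get, reverse=True)[:top_k]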

  17. Learning the 3-D structure of objects from 2-D views depends on shape, not format

    PubMed Central

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-01-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  18. Cryo-EM structure of a 3D DNA-origami object

    PubMed Central

    Bai, Xiao-chen; Martin, Thomas G.; Scheres, Sjors H. W.; Dietz, Hendrik

    2012-01-01

    A key goal for nanotechnology is to design synthetic objects that may ultimately achieve functionalities known today only from natural macromolecular complexes. Molecular self-assembly with DNA has shown potential for creating user-defined 3D scaffolds, but the level of attainable positional accuracy has been unclear. Here we report the cryo-EM structure and a full pseudoatomic model of a discrete DNA object that is almost twice the size of a prokaryotic ribosome. The structure provides a variety of stable, previously undescribed DNA topologies for future use in nanotechnology and experimental evidence that discrete 3D DNA scaffolds allow the positioning of user-defined structural motifs with an accuracy that is similar to that observed in natural macromolecules. Thereby, our results indicate an attractive route to fabricate nanoscale devices that achieve complex functionalities by DNA-templated design steered by structural feedback. PMID:23169645

  19. A novel iterative computation algorithm for Kinoform of 3D object

    NASA Astrophysics Data System (ADS)

    Jiang, Xiao-yu; Chuang, Pei; Wang, Xi; Zong, Yantao

    2012-11-01

    A novel method for computing the kinoform of a 3D object, based on the traditional iterative Fourier transform algorithm (IFTA), is proposed in this paper. A kinoform is a special kind of computer-generated hologram (CGH) with very high diffraction efficiency, since it modulates only the phase of the illuminating light and suffers no cross-interference from a conjugate image. The traditional IFTA assumes that the reconstructed image lies at infinity (in the Fraunhofer diffraction region) and ignores the depth of the 3D object, so it can only compute two-dimensional kinoforms. The algorithm proposed in this paper divides the three-dimensional object into several object planes along the depth direction and treats every object plane as a target image; the iterative computation is then carried out between one input plane (the kinoform) and multiple output planes (the reconstructed images). A spatial phase factor is added to the iterative process to represent the depth of the 3D object, so the reconstructed images lie in the Fresnel diffraction region. An optical reconstruction experiment with a kinoform computed by this method is realized on a liquid-crystal-on-silicon (LCoS) spatial light modulator (SLM). The mean square error (MSE) and structural similarity (SSIM) between the original and reconstructed images are used to evaluate the method. The experimental results show that the algorithm is fast and that the resulting kinoform can reconstruct the object in different planes with high precision under plane-wave illumination; the reconstructed images convey a three-dimensional visual effect. Finally, the influence of the spacing and occlusion between different object planes on the reconstructed image is also discussed.
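
    A hedged sketch of the multi-plane iterative loop described above, in the spirit of a Gerchberg-Saxton style IFTA extended with a per-plane Fresnel phase factor. The single-FFT Fresnel propagation model and all parameter values are our assumptions, not the authors' code.

      import numpy as np

      def fresnel_transfer(shape, wavelength, z, pitch):
          """Transfer function carrying the depth of one object plane."""
          ny, nx = shape
          fy = np.fft.fftfreq(ny, d=pitch)[:, None]
          fx = np.fft.fftfreq(nx, d=pitch)[None, :]
          return np.exp(-1j * np.pi * wavelength * z * (fx**2 + fy**2))

      def multiplane_ifta(targets, depths, wavelength=532e-9, pitch=8e-6, iters=50):
          """targets: list of 2-D amplitude images (NumPy arrays), one per object plane."""
          kinoform = np.exp(1j * 2 * np.pi * np.random.rand(*targets[0].shape))
          for _ in range(iters):
              field = np.zeros_like(kinoform)
              for target, z in zip(targets, depths):
                  h = fresnel_transfer(target.shape, wavelength, z, pitch)
                  out = np.fft.ifft2(np.fft.fft2(kinoform) * h)         # propagate to plane z
                  out = target * np.exp(1j * np.angle(out))             # impose target amplitude
                  field += np.fft.ifft2(np.fft.fft2(out) * np.conj(h))  # propagate back
              kinoform = np.exp(1j * np.angle(field))                   # phase-only constraint
          return np.angle(kinoform)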

  20. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured with a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and a final accuracy of 1 mm.

  1. Artificial neural networks and model-based recognition of 3-D objects from 2-D images

    NASA Astrophysics Data System (ADS)

    Chao, Chih-Ho; Dhawan, Atam P.

    1992-09-01

    A computer vision system is developed for 3-D object recognition using artificial neural networks and a knowledge-based top-down feedback analysis system. This computer vision system can adequately analyze an incomplete edge map provided by a low-level processor for 3-D representation and recognition using key features. The key features are selected using a priority assignment and then used in an artificial neural network for matching with model key features. The result of such matching is utilized in generating the model-driven top-down feedback analysis. From the incomplete edge map we try to pick a candidate pattern utilizing the key feature priority assignment. The highest priority is given for the most connected node and associated features. The features are space invariant structures and sets of orientation for edge primitives. These features are now mapped into real numbers. A Hopfield network is then applied with two levels of matching to reduce the search time. The first match is to choose the class of possible model, the second match is then to find the model closest to the data patterns. This model is then rotated in 3-D to find the best match with the incomplete edge patterns and to provide the additional features in 3-D. In the case of multiple objects, a dynamically interconnected search strategy is designed to recognize objects using one pattern at a time. This strategy is also useful in recognizing occluded objects. The experimental results presented show the capability and effectiveness of this system.

  2. Pedestrian and car detection and classification for unmanned ground vehicle using 3D lidar and monocular camera

    NASA Astrophysics Data System (ADS)

    Cho, Kuk; Baeg, Seung-Ho; Lee, Kimin; Lee, Hae Seok; Park, SangDeok

    2011-05-01

    This paper describes an object detection and classification method for an Unmanned Ground Vehicle (UGV) using a range sensor and an image sensor. The range sensor is a 3D Light Detection And Ranging (LIDAR) sensor and the image sensor is a monocular camera. For safe driving of the UGV, pedestrians and cars should be detected along the vehicle's route. Detection and classification techniques based only on a camera have an inherent problem: the algorithm must extract features and compare them against the full input image, which contains a great deal of information about both objects and environment, making the classification decision difficult. Reliable classification requires isolating a single object region in the image. In this paper, we introduce a newly developed 3D LIDAR sensor and apply a method that fuses 3D LIDAR data and camera data. The 3D LIDAR sensor, developed by the LG Innotek Consortium in Korea and named KIDAR-B25, detects objects, determines each object's Region of Interest (ROI) based on 3D information, and sends it to the camera image for classification. In the 3D LIDAR domain, we detect breakpoints using a Kalman filter and then form clusters using a line-segment method to determine each object's ROI. In the image domain, we extract feature data from the ROI using Haar-like features; the ROI is finally classified as a pedestrian or a car by an AdaBoost algorithm trained on a labeled database. To verify the system, we evaluated its performance mounted on a ground vehicle through field tests in an urban area.
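
    A condensed, hypothetical sketch of the fusion pipeline: LIDAR returns are clustered into candidate objects, each cluster's 3D bounding box is projected through a calibration matrix P into the image to form an ROI, and features from the ROI feed a boosted classifier. DBSCAN and scikit-learn's AdaBoost stand in here for the paper's clustering and Haar-like/AdaBoost stages.

      import numpy as np
      from sklearn.cluster import DBSCAN
      from sklearn.ensemble import AdaBoostClassifier

      def lidar_rois(points_xyz, P, eps=0.5, min_samples=20):
          """Cluster LIDAR returns and project each cluster's bounding box into pixel space."""
          labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz)
          rois = []
          for k in set(labels) - {-1}:
              c = points_xyz[labels == k]
              corners = np.array([[x, y, z, 1.0]
                                  for x in (c[:, 0].min(), c[:, 0].max())
                                  for y in (c[:, 1].min(), c[:, 1].max())
                                  for z in (c[:, 2].min(), c[:, 2].max())])
              uv = P @ corners.T
              uv = (uv[:2] / uv[2]).T                        # pixel coordinates of the box corners
              rois.append((uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()))
          return rois

      # classification stage: features extracted from each ROI feed a boosted classifier
      clf = AdaBoostClassifier(n_estimators=200)
      # clf.fit(roi_features, roi_labels)   # e.g. 0 = background, 1 = pedestrian, 2 = car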

  3. Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation.

    PubMed

    Matei, Bogdan; Shan, Ying; Sawhney, Harpreet S; Tan, Yi; Kumar, Rakesh; Huber, Daniel; Hebert, Martial

    2006-07-01

    We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves a sublinear complexity on the number of models, maintaining at the same time a high degree of performance for real 3D sensed data that is acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations from the scene into the whole model database using the Locality Sensitive Hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and the errors of the LSH algorithm. The indexed models are selected based on the MAP rule using posterior probability of the models estimated in the joint 3D-signature space. Experiments with real 3D data employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query.
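
    A toy sketch of the indexing step only, using random-hyperplane LSH so that a query descriptor is compared against just the models sharing its hash bucket; the geometric verification and MAP model selection described above are not reproduced, and the class below is illustrative rather than the authors' implementation.

      import numpy as np
      from collections import defaultdict

      class DescriptorLSH:
          def __init__(self, dim, n_bits=16, seed=0):
              rng = np.random.default_rng(seed)
              self.planes = rng.normal(size=(n_bits, dim))   # random hyperplanes
              self.buckets = defaultdict(list)

          def _key(self, x):
              return tuple((self.planes @ x > 0).astype(int))

          def add(self, descriptor, model_id):
              self.buckets[self._key(descriptor)].append((descriptor, model_id))

          def candidates(self, query):
              """Approximate nearest-neighbour candidates for one scene descriptor."""
              return self.buckets.get(self._key(query), [])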

  4. Statistical and neural network classifiers in model-based 3-D object recognition

    NASA Astrophysics Data System (ADS)

    Newton, Scott C.; Nutter, Brian S.; Mitra, Sunanda

    1991-02-01

    For autonomous machines equipped with vision capabilities operating in a controlled environment, 3-D model-based object identification methodologies will in general solve rigid-body recognition problems. In an uncontrolled environment, however, several factors pose difficulties for correct identification. We have addressed the problem of 3-D object recognition using a number of methods, including neural network classifiers and a Bayesian-like classifier, for matching image data with model-projection-derived data [1, 2]. The neural network classifiers began operation as simple feature-vector classifiers; however, unmodelled signal behavior was learned from additional samples, yielding a great improvement in classification rates. The model analysis drastically shortened the training time of both classification systems. In an environment where signal behavior is not accurately modelled, two separate forms of learning give the systems the ability to update estimates of this behavior, provided sufficient samples are available to learn the new information. Given sufficient information and a well-controlled environment, identification of 3-D objects from a limited number of classes is indeed possible.

  5. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Astrophysics Data System (ADS)

    Nandhakumar, N.; Smith, Philip W.

    1993-12-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  6. Detectability limitations with 3-D point reconstruction algorithms using digital radiography

    SciTech Connect

    Lindgren, Erik

    2015-03-31

    The estimated impact of pore clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires less inspection time than X-ray computed tomography, and in some cases works better with planar geometries. However, the increase in prior assumptions about the object and the defects increases the intrinsic uncertainty of the resulting nondestructive evaluation output. In this paper, the uncertainty arising when detecting pore defect clusters with point reconstruction algorithms is quantified using simulations. The simulation model is compared and mapped to experimental data. The main issue with the uncertainty is the possible masking (zero detectability) of smaller defects in the vicinity of a slightly larger defect. In addition, the uncertainty is explored in connection with the expected effects on component fatigue life and for different amounts of prior object-defect assumptions.

  7. 3D and multispectral imaging for subcutaneous veins detection.

    PubMed

    Paquit, Vincent C; Tobin, Kenneth W; Price, Jeffery R; Mèriaudeau, Fabrice

    2009-07-06

    The first and perhaps most important phase of a surgical procedure is the insertion of an intravenous (IV) catheter. Currently, this is performed manually by trained personnel. In some visions of future operating rooms, however, this process is to be replaced by an automated system. Experiments to determine the best NIR wavelengths to optimize vein contrast across physiological differences, such as skin tone and/or the presence of hair on the arm or wrist surface, are presented. For illumination, our system is composed of a mercury arc lamp coupled to a 10 nm band-pass spectrometer. A structured lighting system is also coupled to our multispectral system in order to provide 3D information about the patient's arm orientation. Images of each patient's arm are captured under every possible combination of illuminants, and the optimal combination of wavelengths that maximizes vein contrast for a given subject is determined using linear discriminant analysis.
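
    A hedged sketch of the analysis step: linear discriminant analysis applied to per-pixel intensities at the candidate NIR wavelengths, scoring how well each wavelength pair separates vein from non-vein pixels. The data layout and the pairwise scoring are our assumptions, not the authors' exact procedure.

      import numpy as np
      from itertools import combinations
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def best_wavelength_pair(intensities, labels, wavelengths):
          """intensities: (n_pixels, n_wavelengths); labels: 1 = vein pixel, 0 = skin pixel."""
          best = None
          for i, j in combinations(range(len(wavelengths)), 2):
              X = intensities[:, [i, j]]
              score = LinearDiscriminantAnalysis().fit(X, labels).score(X, labels)
              if best is None or score > best[0]:
                  best = (score, wavelengths[i], wavelengths[j])
          return best   # (separability score, wavelength 1, wavelength 2)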

  8. Searching surface orientation of microscopic objects for accurate 3D shape recovery.

    PubMed

    Shim, Seong-O; Mahmood, Muhammad Tariq; Choi, Tae-Sun

    2012-05-01

    In this article, we propose a new shape from focus (SFF) method to estimate 3D shape of microscopic objects using surface orientation cue of each object patch. Most of the SFF algorithms compute the focus value of a pixel from the information of neighboring pixels lying on the same image frame based on an assumption that the small object patch corresponding to the small neighborhood of a pixel is a plane parallel to the focal plane. However, this assumption fails in the optics with limited depth of field where the neighboring pixels of an image have different degree of focus. To overcome this problem, we try to search the surface orientation of the small object patch corresponding to each pixel in the image sequence. Searching of the surface orientation is done indirectly by principal component analysis. Then, the focus value of each pixel is computed from the neighboring pixels lying on the surface perpendicular to the corresponding surface orientation. Experimental results on synthetic and real microscopic objects show that the proposed method produces more accurate 3D shape in comparison to the existing techniques.
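
    For context, the sketch below shows a conventional frame-wise focus measure (sum of modified Laplacian) of the kind SFF methods build on; the paper's contribution, gathering each pixel's neighbourhood along the estimated surface orientation rather than within a single frame, is not reproduced here.

      import numpy as np
      from scipy.ndimage import convolve, uniform_filter

      def sum_modified_laplacian(image, window=9):
          img = image.astype(float)
          lx = np.abs(convolve(img, np.array([[-1.0, 2.0, -1.0]])))       # |d2/dx2|
          ly = np.abs(convolve(img, np.array([[-1.0], [2.0], [-1.0]])))   # |d2/dy2|
          return uniform_filter(lx + ly, size=window)                     # local focus measure

      def depth_from_focus(image_stack, window=9):
          """Pick, per pixel, the frame index with the highest focus measure."""
          focus = np.stack([sum_modified_laplacian(f, window) for f in image_stack])
          return np.argmax(focus, axis=0)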

  9. Comparison of simulated and experimental 3D laser images using a GmAPD array: application to long range detection

    NASA Astrophysics Data System (ADS)

    Coyac, Antoine; Riviere, Nicolas; Hespel, Laurent; Briottet, Xavier

    2016-05-01

    In this paper, we show the feasibility and the benefit of using a Geiger-mode Avalanche Photo-Diode (GmAPD) array for long range detection, up to several kilometers. A simulation of a Geiger detection sensor is described, which is part of our end-to-end laser simulator, to generate simulated 3D laser images from synthetic scenes. The resulting 3D point clouds have been compared to experimental acquisitions, performed with our GmAPD 3D camera on similar scenarios. An operational case of long range detection is presented: a copper cable outstretched above the ground, 1 kilometer away from the experimental system and with a horizontal line-of-sight (LOS). The detection of such a small object from long distance observation strongly suggests that GmAPD focal plane arrays could be easily used for real-time 3D mapping or surveillance applications from airborne platforms, with good spatial and temporal resolutions.

  10. Twin-beam real-time position estimation of micro-objects in 3D

    NASA Astrophysics Data System (ADS)

    Gurtner, Martin; Zemánek, Jiří

    2016-12-01

    Various optical methods for measuring positions of micro-objects in 3D have been reported in the literature. Nevertheless, the majority of them are not suitable for real-time operation, which is needed, for example, for feedback position control. In this paper, we present a method for real-time estimation of the position of micro-objects in 3D; the method is based on twin-beam illumination and requires only a very simple hardware setup whose essential part is a standard image sensor without any lens. The performance of the proposed method is tested during a micro-manipulation task in which the estimated position served as feedback for the controller. The experiments show that the estimate is accurate to within  ∼3 μm in the lateral position and  ∼7 μm in the axial distance with a refresh rate of 10 Hz. Although the experiments are done using spherical objects, the presented method could be modified to handle non-spherical objects as well.

  11. Fast 3-d tomographic microwave imaging for breast cancer detection.

    PubMed

    Grzegorczyk, Tomasz M; Meaney, Paul M; Kaufman, Peter A; diFlorio-Alexander, Roberta M; Paulsen, Keith D

    2012-08-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring.

  12. Microchip-Based Electrochemical Detection using a 3-D Printed Wall-Jet Electrode Device

    PubMed Central

    Munshi, Akash S.; Martin, R. Scott

    2016-01-01

    Three dimensional (3-D) printing technology has evolved dramatically in the last few years, offering the capability of printing objects with a variety of materials. Printing microfluidic devices using this technology offers various advantages such as ease and uniformity of fabrication, file sharing between laboratories, and increased device-to-device reproducibility. One unique aspect of this technology, when used with electrochemical detection, is the ability to produce a microfluidic device as one unit while also allowing the reuse of the device and electrode for multiple analyses. Here we present an alternate electrode configuration for microfluidic devices, a wall-jet electrode (WJE) approach, created by 3-D printing. Using microchip-based flow injection analysis, we compared the WJE design with the conventionally used thin-layer electrode (TLE) design. It was found that the optimized WJE system enhances analytical performance (as compared to the TLE design), with improvements in sensitivity and the limit of detection. Experiments were conducted using two working electrodes – 500 μm platinum and 1 mm glassy carbon. Using the 500 μm platinum electrode the calibration sensitivity was 16 times higher for the WJE device (as compared to the TLE design). In addition, use of the 1 mm glassy carbon electrode led to a limit of detection of 500 nM for catechol, as compared to 6 μM for the TLE device. Finally, to demonstrate the versatility and applicability of the 3-D printed WJE approach, the device was used as an inexpensive electrochemical detector for HPLC. The number of theoretical plates was comparable to the use of commercially available UV and MS detectors, with the WJE device being inexpensive to utilize. These results show that 3-D printing can be a powerful tool to fabricate reusable and integrated microfluidic detectors in configurations that are not easily achieved with more traditional lithographic methods. PMID:26649363

  13. Microchip-based electrochemical detection using a 3-D printed wall-jet electrode device.

    PubMed

    Munshi, Akash S; Martin, R Scott

    2016-02-07

    Three dimensional (3-D) printing technology has evolved dramatically in the last few years, offering the capability of printing objects with a variety of materials. Printing microfluidic devices using this technology offers various advantages such as ease and uniformity of fabrication, file sharing between laboratories, and increased device-to-device reproducibility. One unique aspect of this technology, when used with electrochemical detection, is the ability to produce a microfluidic device as one unit while also allowing the reuse of the device and electrode for multiple analyses. Here we present an alternate electrode configuration for microfluidic devices, a wall-jet electrode (WJE) approach, created by 3-D printing. Using microchip-based flow injection analysis, we compared the WJE design with the conventionally used thin-layer electrode (TLE) design. It was found that the optimized WJE system enhances analytical performance (as compared to the TLE design), with improvements in sensitivity and the limit of detection. Experiments were conducted using two working electrodes - 500 μm platinum and 1 mm glassy carbon. Using the 500 μm platinum electrode the calibration sensitivity was 16 times higher for the WJE device (as compared to the TLE design). In addition, use of the 1 mm glassy carbon electrode led to limit of detection of 500 nM for catechol, as compared to 6 μM for the TLE device. Finally, to demonstrate the versatility and applicability of the 3-D printed WJE approach, the device was used as an inexpensive electrochemical detector for HPLC. The number of theoretical plates was comparable to the use of commercially available UV and MS detectors, with the WJE device being inexpensive to utilize. These results show that 3-D-printing can be a powerful tool to fabricate reusable and integrated microfluidic detectors in configurations that are not easily achieved with more traditional lithographic methods.

  14. Prototyping a Sensor Enabled 3d Citymodel on Geospatial Managed Objects

    NASA Astrophysics Data System (ADS)

    Kjems, E.; Kolář, J.

    2013-09-01

    One of the major development efforts within the GI Science domain points at sensor-based information and, in general, the use of real-time information coming from geographically referenced features. At the same time, 3D city models are mostly justified as objects for visualization purposes rather than as the foundation of a geographic data representation of the world. The combination of 3D city models and real-time information-based systems, though, can provide a whole new setup for data fusion within an urban environment and deliver time-critical information that preserves our limited resources in the most sustainable way. Using 3D models with consistent object definitions gives us the possibility to avoid troublesome abstractions of reality and to design even complex urban systems fusing information from various data sources. Such systems are difficult to design with the traditional software development approach based on major software packages and conventional data exchange: the data stream varies from urban domain to urban domain and from system to system, which is why it is almost impossible to design one complete system that handles all conceivable cases, now and in the future, within a single constrained software design. On several occasions we have advocated a new and advanced formulation of real-world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council, in which the GMO concept has been applied in various situations on various running platforms of an urban system. The paper focuses on user experiences and interfaces rather than core technical and developmental issues. The project focused primarily on prototyping rather than production implementations, although the results concerning applicability are quite clear.

  15. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in automated image-based modelling (IBM) techniques, especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still-image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one is faced with three main problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images decreases the processing time and yields a reliable textured 3D model comparable with models produced by still imaging. Two experiments, modelling a building and a monument, are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Depending on the object complexity and the video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
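
    A rough sketch, under our own assumptions, of the frame-reduction idea: every video frame is scored for sharpness (variance of the Laplacian) and the sharpest frame in each short window is kept, so that blurred frames are dropped while neighbouring keyframes still overlap. The coverage analysis used in the paper is not modelled here.

      import cv2

      def select_keyframes(video_path, step=15):
          cap = cv2.VideoCapture(video_path)
          frames, sharpness = [], []
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              sharpness.append(cv2.Laplacian(gray, cv2.CV_64F).var())   # blur score
              frames.append(frame)
          cap.release()

          keep = []
          for start in range(0, len(frames), step):
              window = range(start, min(start + step, len(frames)))
              keep.append(max(window, key=lambda i: sharpness[i]))      # sharpest frame per window
          return [frames[i] for i in keep]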

  16. Robust Detection of Round Shaped Pits Lying on 3D Meshes: Application to Impact Crater Recognition

    NASA Astrophysics Data System (ADS)

    Schmidt, Martin-Pierre; Muscato, Jennifer; Viseur, Sophie; Jorda, Laurent; Bouley, Sylvain; Mari, Jean-Luc

    2015-04-01

    Most celestial bodies display impacts of collisions with asteroids and meteoroids. These traces are called craters. The possibility of observing and identifying these craters and their characteristics (radius, depth and morphology) is the only method available to measure the age of different units at the surface of the body, which in turn allows one to constrain its conditions of formation. Interplanetary space probes always carry at least one imaging instrument on board. The visible images of the target are used to reconstruct high-resolution 3D models of its surface, as a cloud of points in the case of multi-image dense stereo, or as a triangular mesh in the case of stereo and shape-from-shading. The goal of this work is to develop a methodology to automatically detect the craters lying on these 3D models. The robust extraction of feature areas on surface objects embedded in 3D, like circular pits, is a challenging problem. Classical approaches generally rely on image processing and template matching on a 2D flat projection of the 3D object (i.e., a high-resolution photograph). In this work, we propose a full-3D method that mainly relies on curvature analysis. Mean and Gaussian curvatures are estimated on the surface. They are used to label vertices that belong to concave parts corresponding to specific pits on the surface. The surface is thus transformed into a binary map distinguishing potential crater features from other types of features. Centers are located in the targeted surface regions, corresponding to potential crater features. Concentric rings are then built around the found centers. They consist of closed circular lines exclusively composed of edges of the initial mesh. The first built ring represents the nearest vertex neighborhood of the found center. The ring is then optimally expanded using a circularity constraint and the curvature values of the ring vertices. This method has been tested on a 3D model of the asteroid Lutetia observed by the ROSETTA (ESA
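
    A minimal sketch of the vertex-labelling stage, assuming the trimesh library: mean and Gaussian curvature are estimated at every vertex, and vertices in concave, bowl-shaped regions (positive Gaussian and negative mean curvature under an outward-normal convention) are flagged as potential crater interiors. The thresholds and the subsequent ring-growing stage are not reproduced.

      import numpy as np
      import trimesh
      from trimesh.curvature import (discrete_gaussian_curvature_measure,
                                     discrete_mean_curvature_measure)

      def candidate_pit_vertices(mesh: trimesh.Trimesh, radius=1.0):
          verts = mesh.vertices
          gauss = discrete_gaussian_curvature_measure(mesh, verts, radius)
          mean = discrete_mean_curvature_measure(mesh, verts, radius)
          return np.where((gauss > 0) & (mean < 0))[0]   # indices of concave-pit vertices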

  17. Using rotation for steerable needle detection in 3D color-Doppler ultrasound images.

    PubMed

    Mignon, Paul; Poignet, Philippe; Troccaz, Jocelyne

    2015-08-01

    This paper demonstrates a new way to detect needles in 3D color-Doppler volumes of biological tissues. It uses rotation to generate vibrations of a needle using an existing robotic brachytherapy system. The results of our detection for color-Doppler and B-Mode ultrasound are compared to a needle location reference given by robot odometry and robot ultrasound calibration. Average errors between detection and reference are 5.8 mm on needle tip for B-Mode images and 2.17 mm for color-Doppler images. These results show that color-Doppler imaging leads to more robust needle detection in noisy environment with poor needle visibility or when needle interacts with other objects.

  18. Combining laser scan and photogrammetry for 3D object modeling using a single digital camera

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Zhang, Hong; Zhang, Xiangwei

    2009-07-01

    In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitized by reverse engineering through 3D scanning methods. Laser scanning and photogrammetry are the two main methods used. For laser scanning, a video camera and a laser source are necessary, while for photogrammetry a high-resolution digital still camera is indispensable. In some 3D modeling tasks, the two methods are integrated to get satisfactory results. Although much research has been done on how to combine the results of the two methods, no work has been reported on designing an integrated device at low cost. In this paper, a new 3D scanning system combining laser scanning and photogrammetry using a single consumer digital camera is proposed. Many consumer digital cameras, such as the Canon EOS 5D Mark II, offer still-photo recording at more than 10 megapixels and full 1080p HD movie recording, so an integrated scanning system can be designed around such a camera. A square plate covered with coded marks is used to place the 3D objects, and two straight wooden rulers, also covered with coded marks, can be laid on the plate freely. In the photogrammetry module, the coded marks on the plate define a world coordinate system and serve as a control network to calibrate the camera, and the planes of the two rulers can also be determined. The feature points of the object and a rough volume representation from the silhouettes are obtained in this module. In the laser scanning module, a hand-held line laser is used to scan the object, and the two straight rulers serve as reference planes to determine the position of the laser. The laser scan yields a dense point cloud which can be aligned automatically through the calibrated camera parameters. The final complete digital model is obtained through a new patchwise energy functional method by fusing the feature points, the rough volume and the dense point cloud. The design

  19. VIEWNET: a neural architecture for learning to recognize 3D objects from multiple 2D views

    NASA Astrophysics Data System (ADS)

    Grossberg, Stephen; Bradski, Gary

    1994-10-01

    A self-organizing neural network is developed for recognition of 3-D objects from sequences of their 2-D views. Called VIEWNET because it uses view information encoded with networks, the model processes 2-D views of 3-D objects using the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and removes noise from the images. A log-polar transform is taken with respect to the centroid of the resulting figure and then re-centered to achieve 2-D scale and rotation invariance. The invariant images are coarse coded to further reduce noise, reduce foreshortening effects, and increase generalization. These compressed codes are input into a supervised learning system based on the Fuzzy ARTMAP algorithm which learns 2-D view categories. Evidence from sequences of 2-D view categories is stored in a working memory. Voting based on the unordered set of stored categories determines object recognition. Recognition is studied with noisy and clean images using slow and fast learning. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 2-D views of aircraft with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view category and of up to 98.5% correct with three 2-D view categories.
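
    A short sketch of the scale- and rotation-invariance step, using OpenCV's log-polar remapping about the figure centroid (scale changes become shifts along the radial axis, rotations become shifts along the angular axis); the CORT-X 2 filtering, coarse coding and Fuzzy ARTMAP stages are not reproduced here.

      import cv2
      import numpy as np

      def logpolar_about_centroid(binary_figure, out_size=(128, 128)):
          ys, xs = np.nonzero(binary_figure)
          center = (float(xs.mean()), float(ys.mean()))        # figure centroid (x, y)
          max_radius = float(max(binary_figure.shape))
          return cv2.warpPolar(binary_figure.astype(np.float32), out_size, center,
                               max_radius, cv2.WARP_POLAR_LOG)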

  20. High resolution 3D insider detection and tracking.

    SciTech Connect

    Nelson, Cynthia Lee

    2003-09-01

    Vulnerability analysis studies show that one of the worst threats against a facility is that of an active insider during an emergency evacuation. When a criticality or other emergency alarm occurs, employees immediately proceed along evacuation routes to designated areas. Procedures are then implemented to account for all material, classified parts, etc. The 3-Dimensional Video Motion Detection (3DVMD) technology could be used to detect and track possible insider activities during alarm situations, as just described, as well as during normal operating conditions. The 3DVMD technology uses multiple cameras to create 3-dimensional detection volumes or zones. Movement throughout detection zones is tracked and high-level information, such as the number of people and their direction of motion, is extracted. In the described alarm scenario, deviances of evacuation procedures taken by an individual could be immediately detected and relayed to a central alarm station. The insider could be tracked and any protected items removed from the area could be flagged. The 3DVMD technology could also be used to monitor such items as machines that are used to build classified parts. During an alarm, detections could be made if items were removed from the machine. Overall, the use of 3DVMD technology during emergency evacuations would help to prevent the loss of classified items and would speed recovery from emergency situations. Further security could also be added by analyzing tracked behavior (motion) as it corresponds to predicted behavior, e.g., behavior corresponding with the execution of required procedures. This information would be valuable for detecting a possible insider not only during emergency situations, but also during times of normal operation.

  1. Detecting particles flowing through interdigitated 3D microelectrodes.

    PubMed

    Bianchi, Elena; Rollo, Enrica; Kilchenmann, Samuel; Bellati, Francesco M; Accastelli, Enrico; Guiducci, Carlotta

    2012-01-01

    Counting cells in a large microchannel remains challenging and is particularly critical for in vitro assays, such as cell adhesion assays. This paper addresses this issue by presenting the development of interdigitated three-dimensional electrodes, which are fabricated around passivated pillar-shaped silicon microstructures, to detect particles in a flow. The arrays of micropillars occupy the entire channel height and detect the passage of the particle through their gaps by monitoring changes in the electrical resistance. Impedance measurements were employed in order to characterize the electrical equivalent model of the system and to detect the passage of particles in real-time. Three different geometrical micropillar configurations were evaluated and numerical simulations that supported the experimental activity were used to characterize the sensitive volume in the channel. Moreover, the signal-to-noise ratio related to the passage of a single particle through an array was plotted as a function of the dimension and number of micropillars.

  2. Automated detection of planes in 3-D point clouds using fast Hough transforms

    NASA Astrophysics Data System (ADS)

    Ogundana, Olatokunbo O.; Coggrave, C. Russell; Burguete, Richard L.; Huntley, Jonathan M.

    2011-05-01

    Calibration of 3-D optical sensors often involves the use of calibration artifacts consisting of geometric features, such as 2 or more planes or spheres of known separation. In order to reduce data processing time and minimize user input during calibration, the respective features of the calibration artifact need to be automatically detected and labeled from the measured point clouds. The Hough transform (HT), which is a well-known method for line detection based on foot-of-normal parameterization, has been extended to plane detection in 3-D space. However, the typically sparse intermediate 3-D Hough accumulator space leads to excessive memory storage requirements. A 3-D HT method based on voting in an optimized sparse 3-D matrix model and efficient peak detection in Hough space is described. An alternative 1-D HT is also investigated for rapid detection of nominally parallel planes. Examples of the performance of these methods using simulated and experimental shape data are presented.
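
    As an editorial illustration only (not taken from the paper), the 1-D Hough variant mentioned above can be sketched in a few lines: when the common normal direction of nominally parallel planes is known, each point votes for its signed distance along that normal, and peaks in the 1-D accumulator give the plane offsets. The synthetic data, bin size and vote threshold below are placeholders.

        # Minimal sketch of a 1-D Hough-style vote for nominally parallel planes.
        import numpy as np

        def detect_parallel_planes(points, normal, bin_size=0.5, min_votes=50):
            """points: (N, 3) array; normal: (3,) assumed common plane normal."""
            normal = normal / np.linalg.norm(normal)
            d = points @ normal                        # signed distance of each point along the normal
            bins = np.arange(d.min(), d.max() + bin_size, bin_size)
            votes, edges = np.histogram(d, bins=bins)  # 1-D Hough accumulator
            peaks = np.where(votes >= min_votes)[0]    # crude peak detection by thresholding
            return [(edges[i] + 0.5 * bin_size, int(votes[i])) for i in peaks]

        # Synthetic example: two parallel planes 10 units apart, plus noise.
        rng = np.random.default_rng(0)
        plane1 = np.c_[rng.uniform(0, 50, (500, 2)), rng.normal(0.0, 0.05, 500)]
        plane2 = np.c_[rng.uniform(0, 50, (500, 2)), rng.normal(10.0, 0.05, 500)]
        print(detect_parallel_planes(np.vstack([plane1, plane2]), np.array([0.0, 0.0, 1.0])))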

  3. Detecting method of subjects' 3D positions and experimental advanced camera control system

    NASA Astrophysics Data System (ADS)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a detecting method of subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.

  4. Detecting falls with 3D range camera in ambient assisted living applications: a preliminary study.

    PubMed

    Leone, Alessandro; Diraco, Giovanni; Siciliano, Pietro

    2011-07-01

    In recent years several world-wide ambient assisted living (AAL) programs have been activated in order to improve the quality of life of older people, and to strengthen the industrial base through the use of information and communication technologies. An important issue is extending the time that older people can live in their home environment, by increasing their autonomy and helping them to carry out activities of daily living (ADLs). Research in the automatic detection of falls has received a lot of attention, with the object of enhancing safety, emergency response and independence of the elderly, at the same time comparing the social and economic costs related to fall accidents. In this work, an algorithmic framework to detect falls by using a 3D time-of-flight vision technology is presented. The proposed system presented complementary working requirements with respect to traditional worn and non-worn fall-detection devices. The vision system used a state-of-the-art 3D range camera for elderly movement measurement and detection of critical events, such as falls. The depth images provided by the active sensor allowed reliable segmentation and tracking of elderly movements, by using well-established imaging methods. Moreover, the range camera provided 3D metric information in all illumination conditions (even night vision), allowing the overcoming of some typical limitations of passive vision (shadows, camouflage, occlusions, brightness fluctuations, perspective ambiguity). A self-calibration algorithm guarantees different setup mountings of the range camera by non-technical users. A large dataset of simulated fall events and ADLs in real dwellings was collected and the proposed fall-detection system demonstrated high performance in terms of sensitivity and specificity.

  5. Fruit bruise detection based on 3D meshes and machine learning technologies

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Zhang, Ping

    2016-05-01

    This paper studies bruise detection in apples using 3-D imaging. Bruise detection based on 3-D imaging overcomes many limitations of bruise detection based on 2-D imaging, such as low accuracy and sensitivity to lighting conditions. In this paper, apple bruise detection is divided into two parts: feature extraction and classification. For feature extraction, we use a framework that can directly extract local binary patterns from mesh data. For classification, we study support vector machines. Bruise detection using 3-D imaging is compared with bruise detection using 2-D imaging. 10-fold cross validation is used to evaluate the performance of the two systems. Experimental results show that bruise detection using 3-D imaging can achieve better classification accuracy than bruise detection based on 2-D imaging.
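
    A hedged sketch of the classification stage described above: a support vector machine evaluated with 10-fold cross-validation on pre-extracted local binary pattern (LBP) histograms. Feature extraction from the 3-D meshes is not shown, and the file names are hypothetical placeholders, not artifacts of the study.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        features = np.load("features.npy")   # (n_samples, n_lbp_bins) LBP histograms, one row per apple
        labels = np.load("labels.npy")       # 1 = bruised, 0 = sound

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        scores = cross_val_score(clf, features, labels, cv=10)   # 10-fold cross-validation
        print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")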

  6. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis such as an auto-stereoscopic functionality, but compression of huge input data remains a problem. Therefore, efficient 3D data compression is extremely important in the system, and problems of low temporal consistency and viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between the current block to be coded and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required for signaling the decoder to conduct the same process. To evaluate the coding performance, we have implemented the proposed method into the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results have demonstrated that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software) discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit-saving, and it increased further when evaluated on synthesized views of virtual viewpoints.
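
    The core idea of mean-depth compensation can be illustrated with a toy sketch (this is an interpretation for illustration, not the JMVC implementation): the mean depth difference between the current block and its motion-compensated reference is removed before the residual is formed, so only that scalar offset and the residual need to reach the decoder.

        import numpy as np

        def depth_compensated_residual(current_block, reference_block):
            # Remove the block-wise mean depth offset before computing the residual.
            mean_offset = int(np.round(current_block.mean() - reference_block.mean()))
            prediction = np.clip(reference_block.astype(np.int16) + mean_offset, 0, 255)
            residual = current_block.astype(np.int16) - prediction
            return residual, mean_offset

        def reconstruct(reference_block, mean_offset, residual):
            # Decoder side: rebuild the block from the reference, the offset and the residual.
            prediction = np.clip(reference_block.astype(np.int16) + mean_offset, 0, 255)
            return np.clip(prediction + residual, 0, 255).astype(np.uint8)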

  7. 3D Imaging with a Single-Aperture 3-mm Objective Lens: Concept, Fabrication and Test

    NASA Technical Reports Server (NTRS)

    Korniski, Ron; Bae, Sam Y.; Shearn, Mike; Manohara, Harish; Shahinian, Hrayr

    2011-01-01

    There are many advantages to minimally invasive surgery (MIS). An endoscope is the optical system of choice by the surgeon for MIS. The smaller the incision or opening made to perform the surgery, the smaller the optical system needed. For minimally invasive neurological and skull base surgeries the openings are typically 10-mm in diameter (dime sized) or less. The largest outside diameter (OD) endoscope used is 4mm. A significant drawback to endoscopic MIS is that it only provides a monocular view of the surgical site thereby lacking depth information for the surgeon. A stereo view would provide the surgeon instantaneous depth information of the surroundings within the field of view, a significant advantage especially during brain surgery. Providing 3D imaging in an endoscopic objective lens system presents significant challenges because of the tight packaging constraints. This paper presents a promising new technique for endoscopic 3D imaging that uses a single lens system with complementary multi-bandpass filters (CMBFs), and describes the proof-of-concept demonstrations performed to date validating the technique. These demonstrations of the technique have utilized many commercial off-the-shelf (COTS) components including the ones used in the endoscope objective.

  8. 3D-FFT for Signature Detection in LWIR Images

    SciTech Connect

    Medvick, Patricia A.; Lind, Michael A.; Mackey, Patrick S.; Nuffer, Lisa L.; Foote, Harlan P.

    2007-11-20

    Improvements in analysis detection exploitation are possible by applying whitened matched filtering within the Fourier domain to hyperspectral data cubes. We describe an implementation of a Three-Dimensional Fast Fourier Transform Whitened Matched Filter (3DFFTMF) approach and, using several example sets of Long-Wave Infrared (LWIR) data cubes, compare the results with those from standard Whitened Matched Filter (WMF) techniques. Since the variability in shape of gaseous plumes precludes the use of spatial conformation in the matched filtering, the 3DFFTMF results were similar to those of two other WMF methods. Including a spatial low-pass filter within the Fourier space can improve signal-to-noise ratios and therefore improve the detection limit by facilitating the mitigation of high-frequency clutter. The improvement only occurs if the low-pass filter diameter is smaller than the plume diameter.
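
    For orientation, the standard per-pixel whitened matched filter that the 3DFFTMF is compared against can be sketched as follows (a simplified illustration under assumed array shapes; the Fourier-domain 3-D extension over the full cube is not shown).

        import numpy as np

        def whitened_matched_filter(cube, target):
            """cube: (rows, cols, bands) LWIR data cube; target: (bands,) target signature."""
            pixels = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
            mu = pixels.mean(axis=0)                   # background mean spectrum
            cov = np.cov(pixels, rowvar=False)         # background covariance
            cov_inv = np.linalg.pinv(cov)              # pseudo-inverse for numerical safety
            s = target - mu
            denom = np.sqrt(s @ cov_inv @ s)
            scores = (pixels - mu) @ cov_inv @ s / denom
            return scores.reshape(cube.shape[:2])      # per-pixel detection score map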

  9. An optimal sensing strategy for recognition and localization of 3-D natural quadric objects

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Hahn, Hernsoo

    1991-01-01

    An optimal sensing strategy for an optical proximity sensor system engaged in the recognition and localization of 3-D natural quadric objects is presented. The optimal sensing strategy consists of the selection of an optimal beam orientation and the determination of an optimal probing plane that compose an optimal data collection operation known as an optimal probing. The decision of an optimal probing is based on the measure of discrimination power of a cluster of surfaces on a multiple interpretation image (MII), where the measure of discrimination power is defined in terms of a utility function computing the expected number of interpretations that can be pruned out by a probing. An object representation suitable for active sensing based on a surface description vector (SDV) distribution graph and hierarchical tables is presented. Experimental results are shown.

  10. ROOT OO model to render multi-level 3-D geometrical objects via an OpenGL

    NASA Astrophysics Data System (ADS)

    Brun, Rene; Fine, Valeri; Rademakers, Fons

    2001-08-01

    This paper presents a set of C++ low-level classes to render 3D objects within ROOT-based frameworks. This allows the development of a set of viewers with different properties, from which the user can choose to render one and the same 3D objects.

  11. Active learning in the lecture theatre using 3D printed objects

    PubMed Central

    Smith, David P.

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme’s active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student. PMID:27366318

  12. Active learning in the lecture theatre using 3D printed objects.

    PubMed

    Smith, David P

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme's active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student.

  13. Laser Scanning for 3D Object Characterization: Infrastructure for Exploration and Analysis of Vegetation Signatures

    NASA Astrophysics Data System (ADS)

    Koenig, K.; Höfle, B.

    2012-04-01

    Mapping and characterization of the three-dimensional nature of vegetation is increasingly gaining in importance. Deeper insight is required for e.g. forest management, biodiversity assessment, habitat analysis, precision agriculture, renewable energy production or the analysis of interaction between biosphere and atmosphere. However, the potential of 3D vegetation characterization has not been fully exploited so far and new technologies are needed. Laser scanning has evolved into the state-of-the-art technology for highly accurate 3D data acquisition. Several studies have by now indicated the high value of 3D vegetation description using laser data. The laser sensors provide a detailed geometric representation (geometric information) of scanned objects as well as a full profile of the laser energy that was scattered back to the sensor (radiometric information). In order to exploit the full potential of these datasets, profound knowledge of laser scanning technology for data acquisition, geoinformation technology for data analysis and the object of interest (e.g. vegetation) for data interpretation has to be combined. A signature database is a collection of signatures of reference vegetation objects acquired under known conditions and sensor parameters and can be used to improve information extraction from unclassified vegetation datasets. Different vegetation elements (leaves, branches, etc.) at different heights above ground with different geometric composition contribute to the overall description (i.e. signature) of the scanned object. The developed tools allow analyzing tree objects according to single features (e.g. echo width and signal amplitude) and to any relation of features and derived statistical values (e.g. ratio of laser point attributes). For example, a single backscatter cross section value does not allow for tree species determination, whereas the average echo width per tree segment can give good estimates. Statistical values and/or distributions (e.g. Gaussian

  14. Correlative Nanoscale 3D Imaging of Structure and Composition in Extended Objects

    PubMed Central

    Xu, Feng; Helfen, Lukas; Suhonen, Heikki; Elgrabli, Dan; Bayat, Sam; Reischig, Péter; Baumbach, Tilo; Cloetens, Peter

    2012-01-01

    Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome these limitations, we present a non-destructive and multiple-contrast imaging technique, using principles of X-ray laminography, thus generalizing tomography towards laterally extended objects. We retain advantages that are usually restricted to 2D microscopic imaging, such as scanning of large areas and subsequent zooming-in towards a region of interest at the highest possible resolution. Our technique permits correlating the 3D structure and the elemental distribution yielding a high sensitivity to variations of the electron density via coherent imaging and to local trace element quantification through X-ray fluorescence. We demonstrate the method by imaging a lithographic nanostructure and an aluminum alloy. Analyzing a biological system, we visualize in lung tissue the subcellular response to toxic stress after exposure to nanotubes. We show that most of the nanotubes are trapped inside alveolar macrophages, while a small portion of the nanotubes has crossed the barrier to the cellular space of the alveolar wall. In general, our method is non-destructive and can be combined with different sample environmental or loading conditions. We therefore anticipate that correlative X-ray nano-laminography will enable a variety of in situ and in operando 3D studies. PMID:23185554

  15. A 3D Interactive Multi-object Segmentation Tool using Local Robust Statistics Driven Active Contours

    PubMed Central

    Gao, Yi; Kikinis, Ron; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen

    2012-01-01

    Extracting anatomically and functionally significant structures is one of the important tasks for both the theoretical study of medical image analysis and the clinical and practical community. In the past, much work has been dedicated only to algorithmic development. Nevertheless, for clinical end users, a well-designed algorithm with interactive software is necessary for it to be utilized in their daily work. Furthermore, the software should preferably be open source in order to be used and validated not only by the authors but also by the entire community. Therefore, the contribution of the present work is twofold: first, we propose a new robust-statistics-based conformal metric and the conformal area driven multiple active contour framework, to simultaneously extract multiple targets from MR and CT medical imagery in 3D. Second, an open source graphically interactive 3D segmentation tool based on the aforementioned contour evolution is implemented and is publicly available for end users on multiple platforms. In using this software for the segmentation task, the process is initiated by user-drawn strokes (seeds) in the target region of the image. Then, local robust statistics are used to describe the object features, and such features are learned adaptively from the seeds under a non-parametric estimation scheme. Subsequently, several active contours evolve simultaneously with their interactions being motivated by the principles of action and reaction. This not only guarantees mutual exclusiveness among the contours, but also no longer relies upon the assumption that the multiple objects fill the entire image domain, which was tacitly or explicitly assumed in many previous works. In doing so, the contours interact and converge to equilibrium at the desired positions of the desired multiple objects. Furthermore, with the aim of not only validating the algorithm and the software, but also demonstrating how the tool is to be used, we

  16. Efficient data exchange: Integrating a vector GIS with an object-oriented, 3-D visualization system

    SciTech Connect

    Kuiper, J.; Ayers, A.; Johnson, R.; Tolbert-Smith, M.

    1996-03-01

    A common problem encountered in Geographic Information System (GIS) modeling is the exchange of data between different software packages to best utilize the unique features of each package. This paper describes a project to integrate two systems through efficient data exchange. The first is a widely used GIS based on a relational data model. This system has a broad set of data input, processing, and output capabilities, but lacks three-dimensional (3-D) visualization and certain modeling functions. The second system is a specialized object-oriented package designed for 3-D visualization and modeling. Although this second system is useful for subsurface modeling and hazardous waste site characterization, it does not provide many of the, capabilities of a complete GIS. The system-integration project resulted in an easy-to-use program to transfer information between the systems, making many of the more complex conversion issues transparent to the user. The strengths of both systems are accessible, allowing the scientist more time to focus on analysis. This paper details the capabilities of the two systems, explains the technical issues associated with data exchange and how they were solved, and outlines an example analysis project that used the integrated systems.

  17. Automatic detection of karstic sinkholes in seismic 3D images using circular Hough transform

    NASA Astrophysics Data System (ADS)

    Heydari Parchkoohi, Mostafa; Keshavarz Farajkhah, Nasser; Salimi Delshad, Meysam

    2015-10-01

    More than 30% of hydrocarbon reservoirs are reported in carbonates that mostly include evidence of fractures and karstification. Generally, the detection of karstic sinkholes prognosticates good-quality hydrocarbon reservoirs, where looser sediments fill the holes penetrating hard limestone and the overburden pressure on the infill sediments is mostly tolerated by the sturdier surrounding structure. They are also useful for the detection of erosional surfaces in seismic stratigraphic studies and imply a possible relative sea-level fall at the time of their establishment. Karstic sinkholes are identified straightforwardly by using seismic geometric attributes (e.g. coherency, curvature) in which lateral variations are much more emphasized with respect to the original 3D seismic image. Then, seismic interpreters rely on their visual skills and experience in detecting roughly round objects in seismic attribute maps. In this paper, we introduce an image processing workflow to enhance selective edges in seismic attribute volumes stemming from karstic sinkholes and finally locate them in a high quality 3D seismic image by using the circular Hough transform. Afterwards, we present a case study from an on-shore oilfield in southwest Iran, in which the proposed algorithm is applied and karstic sinkholes are traced.
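
    A hedged sketch of the final detection step: a circular Hough transform applied to a single time/depth slice of an edge-enhanced seismic attribute (e.g. coherency) to locate roughly round sinkhole signatures. The file name, blur kernel and radius range are illustrative placeholders, not values from the paper.

        import cv2
        import numpy as np

        attribute_slice = cv2.imread("coherency_slice.png", cv2.IMREAD_GRAYSCALE)
        blurred = cv2.GaussianBlur(attribute_slice, (5, 5), 1.5)   # suppress speckle before voting

        circles = cv2.HoughCircles(
            blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
            param1=100, param2=30, minRadius=5, maxRadius=40)

        if circles is not None:
            for x, y, r in np.round(circles[0]).astype(int):
                print(f"candidate sinkhole at ({x}, {y}), radius {r} samples")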

  18. Characteristics of eye movements in 3-D object learning: comparison between within-modal and cross-modal object recognition.

    PubMed

    Ueda, Yoshiyuki; Saiki, Jun

    2012-01-01

    Recent studies have indicated that the object representation acquired during visual learning depends on the encoding modality during the test phase. However, the nature of the differences between within-modal learning (eg visual learning-visual recognition) and cross-modal learning (eg visual learning-haptic recognition) remains unknown. To address this issue, we utilised eye movement data and investigated object learning strategies during the learning phase of a cross-modal object recognition experiment. Observers informed of the test modality studied an unfamiliar visually presented 3-D object. Quantitative analyses showed that recognition performance was consistent regardless of rotation in the cross-modal condition, but was reduced when objects were rotated in the within-modal condition. In addition, eye movements during learning significantly differed between within-modal and cross-modal learning. Fixations were more diffused for cross-modal learning than in within-modal learning. Moreover, over the course of the trial, fixation durations became longer in cross-modal learning than in within-modal learning. These results suggest that the object learning strategies employed during the learning phase differ according to the modality of the test phase, and that this difference leads to different recognition performances.

  19. Automatic 3D pulmonary nodule detection in CT images: A survey.

    PubMed

    Valente, Igor Rafael S; Cortez, Paulo César; Neto, Edson Cavalcanti; Soares, José Marques; de Albuquerque, Victor Hugo C; Tavares, João Manuel R S

    2016-02-01

    This work presents a systematic review of techniques for the 3D automatic detection of pulmonary nodules in computerized-tomography (CT) images. Its main goals are to analyze the latest technology being used for the development of computational diagnostic tools to assist in the acquisition, storage and, mainly, processing and analysis of the biomedical data. Also, this work identifies the progress made, so far, evaluates the challenges to be overcome and provides an analysis of future prospects. As far as the authors know, this is the first time that a review is devoted exclusively to automated 3D techniques for the detection of pulmonary nodules from lung CT images, which makes this work of noteworthy value. The research covered the published works in the Web of Science, PubMed, Science Direct and IEEEXplore up to December 2014. Each work found that referred to automated 3D segmentation of the lungs was individually analyzed to identify its objective, methodology and results. Based on the analysis of the selected works, several studies were seen to be useful for the construction of medical diagnostic aid tools. However, there are certain aspects that still require attention such as increasing algorithm sensitivity, reducing the number of false positives, improving and optimizing the algorithm detection of different kinds of nodules with different sizes and shapes and, finally, the ability to integrate with the Electronic Medical Record Systems and Picture Archiving and Communication Systems. Based on this analysis, we can say that further research is needed to develop current techniques and that new algorithms are needed to overcome the identified drawbacks.

  20. Performance of a neural-network-based 3-D object recognition system

    NASA Astrophysics Data System (ADS)

    Rak, Steven J.; Kolodzy, Paul J.

    1991-08-01

    Object recognition in laser radar sensor imagery is a challenging application of neural networks. The task involves recognition of objects at a variety of distances and aspects with significant levels of sensor noise. These variables are related to sensor parameters such as sensor signal strength and angular resolution, as well as object range and viewing aspect. The effect of these parameters on a fixed recognition system based on log-polar mapped features and an unsupervised neural network classifier is investigated. This work is an attempt to quantify the design parameters of a laser radar measurement system with respect to classifying and/or identifying objects by the shape of their silhouettes. Experiments with vehicle silhouettes rotated through a 90 deg view angle from broadside to head-on ('out-of-plane' rotation) have been used to quantify the performance of a log-polar map/neural-network based 3-D object recognition system. These experiments investigated several key issues such as category stability, category memory compression, image fidelity, and viewing aspect. Initial results indicate a compression from 720 possible categories (8 vehicles X 90 out-of-plane rotations) to a classifier memory with approximately 30 stable recognition categories. These results parallel the human experience of studying an object from several viewing angles yet recognizing it through a wide range of viewing angles. Results are presented illustrating category formation for an eight-vehicle dataset as a function of several sensor parameters. These include: (1) sensor noise, as a function of carrier-to-noise ratio; (2) pixels on the vehicle, related to angular resolution and target range; and (3) viewing aspect, as related to sensor-to-platform depression angle. This work contributes to the formation of a three-dimensional object recognition system.
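
    An illustrative sketch (assumptions: OpenCV, a binary silhouette image with hypothetical file name) of the log-polar mapping stage used by such systems: the silhouette is resampled about its centroid so that scale changes and in-plane rotations become simple shifts before the map is fed to a classifier.

        import cv2
        import numpy as np

        silhouette = cv2.imread("vehicle_silhouette.png", cv2.IMREAD_GRAYSCALE)
        m = cv2.moments(silhouette, binaryImage=True)
        centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])     # centre the map on the object

        log_polar = cv2.warpPolar(
            silhouette, dsize=(128, 128), center=centroid,
            maxRadius=min(silhouette.shape) / 2, flags=cv2.WARP_POLAR_LOG)
        # In the log-polar map, scaling of the silhouette appears as a horizontal shift
        # and in-plane rotation as a vertical shift, which simplifies invariant matching.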

  1. 3D detection of obstacle distribution in walking guide system for the blind

    NASA Astrophysics Data System (ADS)

    Yoon, Myoung-Jong; Yu, Kee-Ho

    2007-12-01

    In this paper, the concept of a walking guide system with a tactile display is introduced, and experiments on 3-D obstacle detection and tactile perception are carried out and analyzed. An algorithm for 3-D obstacle detection and a method for mapping the generated obstacle map onto the tactile display device of the walking guide system are proposed. An experiment on the 3-D detection of obstacle positions using ultrasonic sensors is performed and evaluated. Some design guidelines for a tactile display device that can display the obstacle distribution are discussed.

  2. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    PubMed

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic 3D watertight surfaces of different irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust, simple and can be readily used in reconstructing the watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, followed by dense, 3D point clouds of the objects. These image-obtained point clouds are then used for the generation of photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the Ball-pivoting algorithm. Different control parameters of these algorithms are identified, which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of a 3D surface from point clouds of different density are studied. It is shown that the reconstructed surface quality of Poisson reconstruction depends significantly on the Samples per node (SN) parameter, with greater SN values resulting in better-quality surfaces. Also, the quality of the 3D surface generated using the Ball-pivoting algorithm is found to be highly dependent upon the Clustering radius and Angle threshold values. The results obtained from this study give readers a valuable insight into the effects of the different control parameters on the reconstructed surface quality.
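
    A hedged sketch, using the Open3D library, of the two surface reconstruction algorithms compared in the study. The input file, Poisson octree depth and ball radii are illustrative placeholders, not the parameter values from the paper, and the study's own Samples-per-node setting is not exposed here.

        import open3d as o3d

        pcd = o3d.io.read_point_cloud("object_scan.ply")
        pcd.estimate_normals()                       # both methods need oriented normals

        # Poisson reconstruction: a higher octree depth resolves finer detail.
        poisson_mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=9)

        # Ball-pivoting: the result depends strongly on the chosen ball radii.
        radii = o3d.utility.DoubleVector([0.005, 0.01, 0.02])
        bpa_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)

        o3d.io.write_triangle_mesh("poisson_mesh.ply", poisson_mesh)
        o3d.io.write_triangle_mesh("bpa_mesh.ply", bpa_mesh)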

  3. Laser-assisted direct manufacturing of functionally graded 3D objects

    NASA Astrophysics Data System (ADS)

    Iakovlev, A.; Trunova, E.; Grevey, Dominique; Smurov, Igor

    2003-09-01

    Coaxial powder injection into a laser beam was applied for the laser-assisted direct manufacturing of 3D functionally graded (FG) objects. The powders of Stainless Steel 316L and Stellite grade 12 were applied. The following laser sources were used: (1) quasi-cw CO2 Rofin Sinar laser with 120 μm focal spot diameter and (2) pulsed-periodic Nd:YAG (HAAS HL 304P) with 200 μm focal spot diameter. The objects were fabricated layer-by-layer in the form of "walls", having the thickness of about 200 μm for CO2 laser and 300 μm for Nd:YAG laser. SEM analysis was applied for the FG objects fabricated by CO2 laser, yielding wall elements distribution in vertical direction. It was found that microhardness distribution is fully correlated with the components distribution. The compositional gradient can be smooth or sharp. Periodic multi-layered structures can be obtained as well. Minimal thickness of a layer with the fixed composition (for cw CO2 laser) is about 50 μm. Minimal thickness of a graded material zone, i.e. zone with composition variation from pure stainless steel to pure stellite is about 30 μm.

  4. An Effective 3D Shape Descriptor for Object Recognition with RGB-D Sensors

    PubMed Central

    Liu, Zhong; Zhao, Changchen; Wu, Xingming; Chen, Weihai

    2017-01-01

    RGB-D sensors have been widely used in various areas of computer vision and graphics. A good descriptor will effectively improve the performance of operation. This article further analyzes the recognition performance of shape features extracted from multi-modality source data using RGB-D sensors. A hybrid shape descriptor is proposed as a representation of objects for recognition. We first extracted five 2D shape features from contour-based images and five 3D shape features over point cloud data to capture the global and local shape characteristics of an object. The recognition performance was tested for category recognition and instance recognition. Experimental results show that the proposed shape descriptor outperforms several common global-to-global shape descriptors and is comparable to some partial-to-global shape descriptors that achieved the best accuracies in category and instance recognition. Contribution of partial features and computational complexity were also analyzed. The results indicate that the proposed shape features are strong cues for object recognition and can be combined with other features to boost accuracy. PMID:28245553

  5. 3D change detection in staggered voxels model for robotic sensing and navigation

    NASA Astrophysics Data System (ADS)

    Liu, Ruixu; Hampshire, Brandon; Asari, Vijayan K.

    2016-05-01

    3D scene change detection is a challenging problem in robotic sensing and navigation. There are several unpredictable aspects in performing scene change detection. A change detection method which can support various applications under varying environmental conditions is proposed. Point cloud models are acquired from an RGB-D sensor, which provides the required color and depth information. Change detection is performed on the robot-view point cloud model. A bilateral filter smooths the surface and fills holes while preserving edge details in the depth image. Registration of the point cloud model is implemented using the Random Sample Consensus (RANSAC) algorithm, with surface normals used in a preliminary stage to estimate the ground and walls. After preprocessing the data, we create a point voxel model which labels each voxel as surface or free space. We then create a color model which assigns each occupied voxel the mean color of all points falling within it. Preliminary changes are detected by an XOR subtraction on the point voxel model. Next, the eight neighbors of each changed center voxel are examined; if they are neither all 'changed' voxels nor all 'unchanged' voxels, a histogram over location and the hue color channel is estimated. The experimental evaluations performed to assess the capability of our algorithm show promising results for change detection, indicating all the changed objects with a very low false alarm rate.
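
    A simplified sketch of the voxel-level change test described above (an illustration, not the authors' code): two boolean occupancy grids are compared with XOR, and a changed voxel is only flagged for refinement when its neighbourhood is mixed, which is where the paper's histogram check would then be applied. The 3x3x3 neighbourhood used here is an assumption.

        import numpy as np

        def preliminary_changes(occ_ref, occ_cur):
            """occ_ref, occ_cur: boolean (X, Y, Z) voxel occupancy grids of two scans."""
            changed = np.logical_xor(occ_ref, occ_cur)
            needs_refinement = np.zeros_like(changed)
            xs, ys, zs = np.nonzero(changed)
            for x, y, z in zip(xs, ys, zs):
                nb = changed[max(x-1, 0):x+2, max(y-1, 0):y+2, max(z-1, 0):z+2]
                if 0 < nb.sum() < nb.size:        # mixed neighbourhood -> ambiguous voxel
                    needs_refinement[x, y, z] = True
            return changed, needs_refinement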

  6. Polarization imaging of a 3D object by use of on-axis phase-shifting digital holography.

    PubMed

    Nomura, Takanori; Javidi, Bahram; Murata, Shinji; Nitanai, Eiji; Numata, Takuhisa

    2007-03-01

    A polarimetric imaging method of a 3D object by use of on-axis phase-shifting digital holography is presented. The polarimetric image results from a combination of two kinds of holographic imaging using orthogonal polarized reference waves. Experimental demonstration of a 3D polarimetric imaging is presented.

  7. A modern approach to storing of 3D geometry of objects in machine engineering industry

    NASA Astrophysics Data System (ADS)

    Sokolova, E. A.; Aslanov, G. A.; Sokolov, A. A.

    2017-02-01

    3D graphics is a branch of computer graphics that has absorbed much from vector and raster computer graphics. It is used in interior design projects, architectural projects, advertising, educational computer programs, movies, visual images of parts and products in engineering, etc. 3D computer graphics allows one to create 3D scenes along with simulated lighting conditions and chosen viewpoints.

  8. Vectorial seismic modeling for 3D objects by the classical solution

    NASA Astrophysics Data System (ADS)

    Ávila-Carrera, R.; Sánchez-Sesma, F. J.; Rodríguez-Castellanos, A.; Ortiz-Alemán, C.

    2010-09-01

    The analytic benchmark solution for the scattering and diffraction of elastic P- and S-waves by a single spherical obstacle is presented in a condensed format. Our aim is to divulge to the scientific community this classical solution, which is not widely known, for constructing a direct seismic model for 3D objects. Some of the benchmark papers are frequently plagued by misprints and none offers results on the transient response. The treatment of the vectorial case appears to be incipient in the literature. The classical solution is a superposition of incident and diffracted fields. Plane P- or S-waves are assumed. They are expressed as expansions of spherical wave functions which are tested against exact results. The diffracted field by the obstacle is calculated from the analytical enforcing of boundary conditions at the scatterer-matrix interface. The spherical obstacle is a cavity, an elastic inclusion or a fluid-filled body. A complete set of wave functions is used in terms of Bessel and Hankel radial functions. Legendre and trigonometric functions are used for the angular coordinates. In order to provide information to calibrate and approximate the seismic modeling for real objects, results are shown in the time and frequency domains. Diffracted displacement amplitudes versus normalized frequency and radiation patterns for various scatterer-matrix properties are reported. To study propagation features that may be useful to geophysicists and engineers, synthetic seismograms for some relevant cases are computed.

  9. Laser Fabrication of Affective 3D Objects with 1/f Fluctuation

    NASA Astrophysics Data System (ADS)

    Maekawa, Katsuhiro; Nishii, Tomohiro; Hayashi, Terutake; Akabane, Hideo; Agu, Masahiro

    The present paper describes the application of Kansei Engineering to the physical design of engineering products as well as its realization by laser sintering. We have investigated the affective information that might be included in three-dimensional objects such as a ceramic bowl for the tea ceremony. First, an X-ray CT apparatus is utilized to retrieve surface data from the teabowl, and then a frequency analysis is carried out after noise has been filtered. The surface fluctuation is characterized by a power spectrum that is in inverse proportion to the wave number f in circumference. Second, we consider how to realize the surface with a 1/f fluctuation on a computer screen using a 3D CAD model. The fluctuation is applied to a reference shape assuming that the outer surface has a spiral flow line on which unevenness is superimposed. Finally, the selective laser sintering method has been applied to the fabrication of 1/f fluctuation objects. Nylon powder is sintered layer by layer using a CO2 laser to form an artificial teabowl with complicated surface contours.
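
    As an editorial illustration only (not the authors' CAD pipeline), the key property can be demonstrated numerically: a circumferential radius fluctuation whose power spectrum falls off as 1/f with the angular wave number, superimposed on a circular reference profile. The sample count, reference radius and fluctuation amplitude are assumptions.

        import numpy as np

        n = 1024                                     # samples around the circumference
        k = np.arange(1, n // 2 + 1)                 # angular wave numbers
        rng = np.random.default_rng(42)
        phases = rng.uniform(0, 2 * np.pi, k.size)
        amplitudes = 1.0 / np.sqrt(k)                # power ~ 1/k  =>  amplitude ~ 1/sqrt(k)

        theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
        fluctuation = sum(a * np.cos(kk * theta + p)
                          for a, kk, p in zip(amplitudes, k, phases))
        radius = 60.0 + 0.5 * fluctuation / np.abs(fluctuation).max()   # reference profile + scaled 1/f term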

  10. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    PubMed

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-05-05

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes a good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.

  11. Software for Building Models of 3D Objects via the Internet

    NASA Technical Reports Server (NTRS)

    Schramer, Tim; Jensen, Jeff

    2003-01-01

    The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.

  12. 3D lidar imaging for detecting and understanding plant responses and canopy structure.

    PubMed

    Omasa, Kenji; Hosoi, Fumiki; Konishi, Atsumi

    2007-01-01

    Understanding and diagnosing plant responses to stress will benefit greatly from three-dimensional (3D) measurement and analysis of plant properties because plant responses are strongly related to their 3D structures. Light detection and ranging (lidar) has recently emerged as a powerful tool for direct 3D measurement of plant structure. Here the use of 3D lidar imaging to estimate plant properties such as canopy height, canopy structure, carbon stock, and species is demonstrated, and plant growth and shape responses are assessed by reviewing the development of lidar systems and their applications from the leaf level to canopy remote sensing. In addition, the recent creation of accurate 3D lidar images combined with natural colour, chlorophyll fluorescence, photochemical reflectance index, and leaf temperature images is demonstrated, thereby providing information on responses of pigments, photosynthesis, transpiration, stomatal opening, and shape to environmental stresses; these data can be integrated with 3D images of the plants using computer graphics techniques. Future lidar applications that provide more accurate dynamic estimation of various plant properties should improve our understanding of plant responses to stress and of interactions between plants and their environment. Moreover, combining 3D lidar with other passive and active imaging techniques will potentially improve the accuracy of airborne and satellite remote sensing, and make it possible to analyse 3D information on ecophysiological responses and levels of various substances in agricultural and ecological applications and in observations of the global biosphere.

  13. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    PubMed Central

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object’s surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand’s fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes a good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID

  14. Object detection with single camera stereo

    NASA Astrophysics Data System (ADS)

    McBride, J.; Snorrason, M.; Eaton, R.; Checka, N.; Reiter, A.; Foil, G.; Stevens, M. R.

    2006-05-01

    Many fielded mobile robot systems have demonstrated the importance of directly estimating the 3D shape of objects in the robot's vicinity. The most mature solutions available today use active laser scanning or stereo camera pairs, but both approaches require specialized and expensive sensors. In prior publications, we have demonstrated the generation of stereo images from a single very low-cost camera using structure from motion (SFM) techniques. In this paper we demonstrate the practical usage of single-camera stereo in real-world mobile robot applications. Stereo imagery tends to produce incomplete 3D shape reconstructions of man-made objects because of smooth/glary regions that defeat stereo matching algorithms. We demonstrate robust object detection despite such incompleteness through matching of simple parameterized geometric models. Results are presented where parked cars are detected, and then recognized via license plate recognition, all in real time by a robot traveling through a parking lot.
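
    A hedged sketch of the single-camera stereo idea using structure from motion: two frames taken from slightly different positions of one moving camera are related by an essential matrix, and matched features are triangulated into a sparse, up-to-scale 3D reconstruction. The intrinsic matrix K, the image files and the detector settings are assumptions for illustration, not values from the paper.

        import cv2
        import numpy as np

        K = np.array([[700.0, 0.0, 320.0],
                      [0.0, 700.0, 240.0],
                      [0.0, 0.0, 1.0]])              # assumed camera intrinsics

        img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

        pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        points3d = (pts4d[:3] / pts4d[3]).T          # sparse 3D points, up to scale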

  15. Real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events

    NASA Astrophysics Data System (ADS)

    Javidi, Bahram; Yeom, Seokwon; Moon, Inkyu; Daneshpanah, Mehdi

    2006-05-01

    In this paper, we present an overview of three-dimensional (3D) optical imaging techniques for real-time automated sensing, visualization, and recognition of dynamic biological microorganisms. Real-time sensing and 3D reconstruction of dynamic biological microscopic objects can be performed by single-exposure on-line (SEOL) digital holographic microscopy. A coherent 3D microscope-based interferometer is constructed to record digital holograms of dynamic microbiological events. Complex amplitude 3D images of the biological microorganisms are computationally reconstructed at different depths by digital signal processing. Bayesian segmentation algorithms are applied to identify regions of interest for further processing. A number of pattern recognition approaches are addressed to identify and recognize the microorganisms. One uses the 3D morphology of the microorganisms by analyzing 3D geometrical shapes composed of magnitude and phase. Segmentation, feature extraction, graph matching, feature selection, and training and decision rules are used to recognize the biological microorganisms. In a different approach, a 3D technique is used that is tolerant to the varying shapes of the non-rigid biological microorganisms. After segmentation, a number of sampling patches are arbitrarily extracted from the complex amplitudes of the reconstructed 3D biological microorganism. These patches are processed using a number of cost functions and statistical inference theory for the equality of means and equality of variances between the sampling segments. Also, we discuss the possibility of employing computational integral imaging for 3D sensing, visualization, and recognition of biological microorganisms illuminated under incoherent light. Experimental results with several biological microorganisms are presented to illustrate detection, segmentation, and identification of microbiological events.

  16. Dielectric Spectroscopic Detection of Early Failures in 3-D Integrated Circuits

    PubMed Central

    Okoro, C. A.; Ahn, Jung-Joon; You, Lin; Kopanski, Joseph J.

    2015-01-01

    The commercial introduction of three-dimensional integrated circuits (3D-ICs) has been hindered by reliability challenges, such as stress-related failures, resistivity changes, and unexplained early failures. In this paper, we discuss a new RF-based metrology, based on dielectric spectroscopy, for detecting and characterizing electrically active defects in fully integrated 3D devices. These defects are traceable to the chemistry of the isolation dielectrics used in the through-silicon via (TSV) construction. We show that these defects may be responsible for some of the unexplained early reliability failures observed in TSV-enabled 3D devices. PMID:26664695

  17. Laser Transfer of Metals and Metal Alloys for Digital Microfabrication of 3D Objects.

    PubMed

    Zenou, Michael; Sa'ar, Amir; Kotler, Zvi

    2015-09-02

    3D copper logos printed on epoxy glass laminates are demonstrated. The structures are printed using laser transfer of molten metal microdroplets. The example in the image shows letters of 50 µm width, with each letter being taller than the last, from a height of 40 µm ('s') to 190 µm ('l'). The scanning microscopy image is taken at a tilt, and the topographic image was taken using interferometric 3D microscopy, to show the effective control of this technique.

  18. True-3D accentuating of grids and streets in urban topographic maps enhances human object location memory.

    PubMed

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2015-01-01

    Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies of vision research provided initial evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored concerning spatial memory tasks and cartography. This paper reports the results of two empirical studies that investigate whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1:10,000) further improves human object location memory performance. The memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for an improvement of the cognitive representation of learned cartographic information.

  19. True-3D Accentuating of Grids and Streets in Urban Topographic Maps Enhances Human Object Location Memory

    PubMed Central

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2015-01-01

    Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies in vision research have provided first evidence that 3D stereoscopic content attracts more attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored for spatial memory tasks in cartography. This paper reports the results of two empirical studies that investigate whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1:10,000) further improves human object location memory performance. Memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for improving the cognitive representation of learned cartographic information. PMID:25679208

  20. Error analysis and system implementation for structured light stereo vision 3D geometric detection in large scale condition

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Xuping; Wang, Jiaqi; Zhang, Yixin; Wang, Shun; Zhu, Fan

    2012-11-01

    Stereo-vision-based 3D metrology is an effective approach for the 3D geometric detection of relatively large-scale objects. In this paper, we present a dedicated image capture system, which uses a CMOS sensor with an embedded LVDS interface and a CAN bus to ensure synchronous triggering and exposure. We carried out an error analysis for structured-light vision measurement under large-scale conditions, based on which we built and tested the system prototype both indoors and in the field. The results show that the system is well suited for large-scale metrology applications.

  1. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention, involuntarily motivated by the affective mechanism, can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it shows high information transfer rates, it takes users only a few minutes to learn to control the BCI system, and few electrodes are required to obtain brainwave signals reliable enough to capture users' intention. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.

  2. Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data

    NASA Astrophysics Data System (ADS)

    Anirudh, Rushil; Thiagarajan, Jayaraman J.; Bremer, Timo; Kim, Hyojin

    2016-03-01

    Early detection of lung nodules is currently one of the most effective ways to predict and treat lung cancer. As a result, the past decade has seen a lot of focus on computer aided diagnosis (CAD) of lung nodules, whose goal is to efficiently detect and segment lung nodules and classify them as being benign or malignant. Effective detection of such nodules remains a challenge due to their arbitrariness in shape, size and texture. In this paper, we propose to employ 3D convolutional neural networks (CNN) to learn highly discriminative features for nodule detection in lieu of hand-engineered ones such as geometric shape or texture. While 3D CNNs are promising tools to model the spatio-temporal statistics of data, they are limited by their need for detailed 3D labels, which can be prohibitively expensive when compared to obtaining 2D labels. Existing CAD methods rely on obtaining detailed lung nodule labels to train models, which is also unrealistic and time-consuming. To alleviate this challenge, we propose a solution wherein the expert needs to provide only a point label, i.e., the central pixel of the nodule, and its largest expected size. We use unsupervised segmentation to grow out a 3D region, which is used to train the CNN. Using experiments on the SPIE-LUNGx dataset, we show that the network trained using these weak labels can produce reasonably low false positive rates with a high sensitivity, even in the absence of accurate 3D labels.
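
    The weak-label step described above (growing a 3D region around a single point label, capped by the expected nodule size) can be illustrated with a minimal intensity-based region-growing sketch. The seed position, intensity tolerance and radius below are hypothetical placeholders, not values from the paper, and the paper's actual unsupervised segmentation may differ.

      # Minimal sketch: grow a 3D region around a point label to build a weak training mask.
      # Seed location, intensity tolerance and size cap are illustrative assumptions.
      import numpy as np
      from scipy import ndimage

      def grow_region(volume, seed, tol=0.15, max_radius=10):
          """Grow a connected region around `seed` (z, y, x) by intensity similarity."""
          seed_val = volume[seed]
          # Voxels whose intensity is close to the seed intensity
          candidate = np.abs(volume - seed_val) <= tol
          # Restrict growth to a ball of the expected nodule size around the seed
          zz, yy, xx = np.indices(volume.shape)
          dist = np.sqrt((zz - seed[0])**2 + (yy - seed[1])**2 + (xx - seed[2])**2)
          candidate &= dist <= max_radius
          # Keep only the connected component that contains the seed
          labels, _ = ndimage.label(candidate)
          return labels == labels[seed]

      # Toy example: a bright blob in a noisy volume
      vol = np.random.rand(32, 32, 32) * 0.2
      vol[12:18, 12:18, 12:18] = 0.9
      mask = grow_region(vol, seed=(15, 15, 15), tol=0.2, max_radius=8)
      print(mask.sum(), "voxels in the weak label mask")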

  3. Computerized lung nodule detection using 3D feature extraction and learning based algorithms.

    PubMed

    Ozekes, Serhat; Osman, Onur

    2010-04-01

    In this paper, a Computer Aided Detection (CAD) system based on three-dimensional (3D) feature extraction is introduced to detect lung nodules. First, an eight-directional search was applied in order to extract regions of interest (ROIs). Then, 3D feature extraction was performed, which includes 3D connected component labeling, straightness calculation, thickness calculation, determining the middle slice, vertical and horizontal width calculation, regularity calculation, and calculation of vertical and horizontal black pixel ratios. To make a decision for each ROI, feed-forward neural networks (NN), support vector machines (SVM), naive Bayes (NB) and logistic regression (LR) methods were used. These methods were trained and tested via k-fold cross validation, and the results were compared. To test the performance of the proposed system, 11 cases taken from the Lung Image Database Consortium (LIDC) dataset were used. ROC curves are given for all methods, and 100% detection sensitivity was reached with all methods except naive Bayes.
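
    The decision stage described above (several classifiers trained and compared with k-fold cross-validation) can be sketched generically with scikit-learn; the feature matrix below is a synthetic placeholder, since the 3D ROI features themselves (straightness, thickness, widths, black-pixel ratios) are not reproduced here.

      # Sketch of the decision stage: compare NN, SVM, NB and LR on ROI feature vectors
      # with k-fold cross-validation. Features and labels are synthetic placeholders.
      import numpy as np
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import SVC
      from sklearn.naive_bayes import GaussianNB
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 9))          # 9 features per ROI (placeholder)
      y = rng.integers(0, 2, size=200)       # 1 = nodule, 0 = non-nodule

      classifiers = {
          "NN":  MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
          "SVM": SVC(kernel="rbf"),
          "NB":  GaussianNB(),
          "LR":  LogisticRegression(max_iter=1000),
      }
      cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
      for name, clf in classifiers.items():
          scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
          print(f"{name}: mean AUC = {scores.mean():.3f}")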

  4. Flying triangulation--an optical 3D sensor for the motion-robust acquisition of complex objects.

    PubMed

    Ettl, Svenja; Arold, Oliver; Yang, Zheng; Häusler, Gerd

    2012-01-10

    Three-dimensional (3D) shape acquisition is difficult if an all-around measurement of an object is desired or if a relative motion between object and sensor is unavoidable. An optical sensor principle is presented-we call it "flying triangulation"-that enables a motion-robust acquisition of 3D surface topography. It combines a simple handheld sensor with sophisticated registration algorithms. An easy acquisition of complex objects is possible-just by freely hand-guiding the sensor around the object. Real-time feedback of the sequential measurement results enables a comfortable handling for the user. No tracking is necessary. In contrast to most other eligible sensors, the presented sensor generates 3D data from each single camera image.

  5. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  6. 3D modeling of architectural objects from video data obtained with the fixed focal length lens geometry

    NASA Astrophysics Data System (ADS)

    Deliś, Paulina; Kędzierski, Michał; Fryśkowska, Anna; Wilińska, Michalina

    2013-12-01

    The article describes the process of creating 3D models of architectural objects on the basis of video images, which had been acquired by a Sony NEX-VG10E fixed focal length video camera. It was assumed that, based on video and Terrestrial Laser Scanning data, it is possible to develop 3D models of architectural objects. The acquisition of video data was preceded by the calibration of the video camera. The process of creating 3D models from video data involves the following steps: video frame selection for the orientation process, orientation of video frames using points with known coordinates from Terrestrial Laser Scanning (TLS), and generation of a TIN model using automatic matching methods. The objects were also measured with a pulsed laser scanner, the Leica ScanStation 2. The created 3D models of architectural objects were compared with 3D models of the same objects for which the self-calibration bundle adjustment process was performed. For this purpose, PhotoModeler software was used. In order to assess the accuracy of the developed 3D models of architectural objects, points with known coordinates from Terrestrial Laser Scanning were used. To assess the accuracy, a shortest-distance method was used. The accuracy analysis showed that 3D models generated from video images differ by about 0.06 to 0.13 m from the TLS data. [Polish abstract, translated:] The article describes the process of developing 3D models of architectural objects on the basis of video images acquired with a Sony NEX-VG10E video camera with a fixed-focal-length lens. It was assumed that, based on video data and terrestrial laser scanning (TLS) data, it is possible to develop 3D models of architectural objects. The acquisition of the video data was preceded by calibration of the video camera. The mathematical model of the camera was based on the perspective projection. The process of developing 3D models from video data consisted of the following steps: selection of video frames for the orientation process, orientation of the video frames on
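
    The "shortest distance" accuracy assessment mentioned above can be sketched as a nearest-neighbour query from each model vertex to the TLS reference cloud; the point arrays below are random placeholders for the real video-derived model and TLS data.

      # Sketch of a shortest-distance accuracy check: for each vertex of the video-based
      # 3D model, find the nearest TLS point and report distance statistics.
      import numpy as np
      from scipy.spatial import cKDTree

      model_pts = np.random.rand(5000, 3) * 10.0   # vertices of the video-derived model [m]
      tls_pts = model_pts + np.random.normal(scale=0.08, size=model_pts.shape)  # reference cloud

      tree = cKDTree(tls_pts)
      dist, _ = tree.query(model_pts, k=1)         # shortest distance per model vertex
      print(f"mean deviation: {dist.mean():.3f} m, RMS: {np.sqrt((dist**2).mean()):.3f} m")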

  7. Comparison of 2D and 3D Displays and Sensor Fusion for Threat Detection, Surveillance, and Telepresence

    DTIC Science & Technology

    2003-05-19

    Camouflaged threats are compared on a two-dimensional (2D) display and a three-dimensional (3D) display. A 3D display is compared alongside a 2D ... technologies that take advantage of 3D and sensor fusion will be discussed.

  8. 3D ultrasound volume stitching using phase symmetry and harris corner detection for orthopaedic applications

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Hacihaliloglu, Ilker; Abugharbieh, Rafeef

    2010-03-01

    Stitching of volumes obtained from three-dimensional (3D) ultrasound (US) scanners improves visualization of anatomy in many clinical applications. Fast but accurate volume registration remains the key challenge in this area. We propose a volume stitching method based on efficient registration of 3D US volumes obtained from a tracked US probe. Since the volumes, after adjusting for probe motion, are coarsely registered, we obtain salient correspondence points in the central slices of these volumes. This is done by first removing artifacts in the US slices using intensity-invariant local phase image processing and then applying the Harris corner detection algorithm. Fast sub-volume registration on a small neighborhood around the points then gives fast, accurate 3D registration parameters. The method has been tested on 3D US scans of phantom and real human radius and pelvis bones and a phantom human fetus. The method has also been compared to volumetric registration, as well as feature-based registration using 3D-SIFT. Quantitative results show an average post-registration error of 0.33 mm, which is comparable to volumetric registration accuracy (0.31 mm) and much better than 3D-SIFT based registration, which failed to register the volumes. The proposed method was also much faster than volumetric registration (~4.5 seconds versus 83 seconds).
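
    The correspondence step described above (Harris corners on the central slices) can be sketched on a single 2D slice; the phase-based artifact removal used in the paper is not reproduced, and the input image is a synthetic placeholder.

      # Sketch: detect Harris corners on a central slice to seed local sub-volume registration.
      import numpy as np
      from skimage.feature import corner_harris, corner_peaks

      slice_img = np.zeros((128, 128))
      slice_img[40:90, 40:90] = 1.0                    # a bright block gives four corners
      slice_img += 0.05 * np.random.rand(128, 128)

      response = corner_harris(slice_img, sigma=2)
      corners = corner_peaks(response, min_distance=5, threshold_rel=0.1)
      print(corners)                                   # (row, col) correspondence candidates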

  9. Integrating Online and Offline 3D Deep Learning for Automated Polyp Detection in Colonoscopy Videos.

    PubMed

    Yu, Lequan; Chen, Hao; Dou, Qi; Qin, Jing; Heng, Pheng Ann

    2016-12-07

    Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way for colorectal cancer (CRC) prevention and diagnosis. Traditional manual screening is time-consuming, operator-dependent and error-prone; hence, an automated detection approach is highly demanded in clinical practice. However, automated polyp detection is very challenging due to high intra-class variations in polyp size, color, shape and texture, and low inter-class variations between polyps and hard mimics. In this paper, we propose a novel offline and online 3D deep learning integration framework that leverages a 3D fully convolutional network (3D-FCN) to tackle this challenging problem. Compared with previous methods employing hand-crafted features or 2D CNNs, the 3D-FCN is capable of learning more representative spatio-temporal features from colonoscopy videos, and hence has more powerful discrimination capability. More importantly, we propose a novel online learning scheme to deal with the problem of limited training data by harnessing the specific information of an input video in the learning process. We integrate offline and online learning to effectively reduce the number of false positives generated by the offline network and further improve the detection performance. Extensive experiments on the dataset of the MICCAI 2015 Challenge on Polyp Detection demonstrated the better performance of our method when compared with other competitors.

  10. A new approach for salt dome detection using a 3D multidirectional edge detector

    NASA Astrophysics Data System (ADS)

    Amin, Asjad; Deriche, Mohamed

    2015-09-01

    Accurate salt dome detection from 3D seismic data is crucial to different seismic data analysis applications. We present a new edge-based approach for salt dome detection in migrated 3D seismic data. The proposed algorithm overcomes the drawbacks of existing edge-based techniques, which only consider edges in the x (crossline) and y (inline) directions in 2D data and the x (crossline), y (inline), and z (time) directions in 3D data. The algorithm works by combining 3D gradient maps computed along diagonal directions with those computed in the x, y, and z directions to accurately detect the boundaries of salt regions. The combination of x, y, and z directions and diagonal edges ensures that the proposed algorithm works well even if the dips along the salt boundary are represented only by weak reflectors. Contrary to other edge- and texture-based salt dome detection techniques, the proposed algorithm is independent of the amplitude variations in seismic data. We tested the proposed algorithm on the publicly available Netherlands offshore F3 block. The results suggest that the proposed algorithm can detect salt bodies with higher accuracy than existing gradient-based and texture-based techniques used separately. More importantly, the proposed approach is shown to be computationally efficient, allowing for real-time implementation and deployment.
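
    The idea of combining gradients along the axis directions with diagonal gradients can be illustrated with a short NumPy sketch; the direction set, finite-difference scheme and equal weighting are illustrative choices, not the paper's exact formulation.

      # Sketch of a multidirectional 3D edge map: finite differences along axis and
      # diagonal directions, combined into one gradient-magnitude volume.
      import numpy as np

      def directional_gradient(vol, direction):
          """Finite difference of `vol` along an integer direction vector (dz, dy, dx)."""
          shifted = np.roll(vol, shift=tuple(-d for d in direction), axis=(0, 1, 2))
          return shifted - vol

      directions = [
          (1, 0, 0), (0, 1, 0), (0, 0, 1),             # inline / crossline / time
          (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1),  # diagonal directions
      ]

      seismic = np.random.rand(64, 64, 64)             # placeholder for a migrated 3D volume
      edge_map = np.sqrt(sum(directional_gradient(seismic, d) ** 2 for d in directions))
      print(edge_map.shape, float(edge_map.max()))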

  11. Spherical blurred shape model for 3-D object and pose recognition: quantitative analysis and HCI applications in smart environments.

    PubMed

    Lopes, Oscar; Reyes, Miguel; Escalera, Sergio; Gonzàlez, Jordi

    2014-12-01

    The use of depth maps is of increasing interest after the advent of cheap multisensor devices based on structured light, such as Kinect. In this context, there is a strong need for powerful 3-D shape descriptors able to generate rich object representations. Although several 3-D descriptors have already been proposed in the literature, the search for discriminative and computationally efficient descriptors is still an open issue. In this paper, we propose a novel point cloud descriptor called the spherical blurred shape model (SBSM) that successfully encodes the structure density and local variabilities of an object based on shape voxel distances and a neighborhood propagation strategy. The proposed SBSM is proven to be rotation and scale invariant, robust to noise and occlusions, highly discriminative for multiple categories of complex objects like the human hand, and computationally efficient, since the SBSM complexity is linear in the number of object voxels. Experimental evaluation on public multiclass depth object data, 3-D facial expression data, and a novel hand pose data set shows significant performance improvements in relation to state-of-the-art approaches. Moreover, the effectiveness of the proposal is also proved for object spotting in 3-D scenes and for real-time automatic hand pose recognition in human-computer interaction scenarios.
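
    As a toy illustration of distance-based voxel encoding (not the SBSM algorithm itself), a radial-density descriptor can be computed by histogramming occupied-voxel distances from the object centroid over concentric shells; the shell count and the test object are arbitrary.

      # Much-simplified radial-density descriptor: shell occupancy of voxel distances
      # from the centroid. Illustrates distance-based voxel encoding only; it is NOT SBSM.
      import numpy as np

      def radial_density_descriptor(voxels, n_shells=8):
          coords = np.argwhere(voxels)                 # occupied voxel coordinates
          centroid = coords.mean(axis=0)
          dist = np.linalg.norm(coords - centroid, axis=1)
          hist, _ = np.histogram(dist, bins=n_shells, range=(0, dist.max() + 1e-9))
          return hist / hist.sum()                     # scale-normalized shell occupancy

      obj = np.zeros((32, 32, 32), dtype=bool)
      obj[8:24, 8:24, 8:24] = True                     # placeholder voxelized object
      print(radial_density_descriptor(obj))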

  12. Detection of complement activation using monoclonal antibodies against C3d

    PubMed Central

    Thurman, Joshua M.; Kulik, Liudmila; Orth, Heather; Wong, Maria; Renner, Brandon; Sargsyan, Siranush A.; Mitchell, Lynne M.; Hourcade, Dennis E.; Hannan, Jonathan P.; Kovacs, James M.; Coughlin, Beth; Woodell, Alex S.; Pickering, Matthew C.; Rohrer, Bärbel; Holers, V. Michael

    2013-01-01

    During complement activation the C3 protein is cleaved, and C3 activation fragments are covalently fixed to tissues. Tissue-bound C3 fragments are a durable biomarker of tissue inflammation, and these fragments have been exploited as addressable binding ligands for targeted therapeutics and diagnostic agents. We have generated cross-reactive murine monoclonal antibodies against human and mouse C3d, the final C3 degradation fragment generated during complement activation. We developed 3 monoclonal antibodies (3d8b, 3d9a, and 3d29) that preferentially bind to the iC3b, C3dg, and C3d fragments in solution, but do not bind to intact C3 or C3b. The same 3 clones also bind to tissue-bound C3 activation fragments when injected systemically. Using mouse models of renal and ocular disease, we confirmed that, following systemic injection, the antibodies accumulated at sites of C3 fragment deposition within the glomerulus, the renal tubulointerstitium, and the posterior pole of the eye. To detect antibodies bound within the eye, we used optical imaging and observed accumulation of the antibodies within retinal lesions in a model of choroidal neovascularization (CNV). Our results demonstrate that imaging methods that use these antibodies may provide a sensitive means of detecting and monitoring complement activation–associated tissue inflammation. PMID:23619360

  13. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density, directional, continuous 3D motion images can be displayed easily with only one spatial light modulator. Using the holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS-camera-based image acquisition platform was built to feed the display engine, which can capture full 360-degree continuous images of the sample at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  14. Decoupling Object Detection and Categorization

    ERIC Educational Resources Information Center

    Mack, Michael L.; Palmeri, Thomas J.

    2010-01-01

    We investigated whether there exists a behavioral dependency between object detection and categorization. Previous work (Grill-Spector & Kanwisher, 2005) suggests that object detection and basic-level categorization may be the very same perceptual mechanism: As objects are parsed from the background they are categorized at the basic level. In…

  15. Influence of limited random-phase of objects on the image quality of 3D holographic display

    NASA Astrophysics Data System (ADS)

    Ma, He; Liu, Juan; Yang, Minqiang; Li, Xin; Xue, Gaolei; Wang, Yongtian

    2017-02-01

    A limited-random-phase time-average method is proposed to suppress the speckle noise of three-dimensional (3D) holographic display. The initial phase and the range of the random phase are studied, as well as their influence on the optical quality of the reconstructed images, and the appropriate initial phase ranges on object surfaces are obtained. Numerical simulations and optical experiments with 2D and 3D reconstructed images are performed, showing that objects with a limited phase range can suppress the speckle noise in reconstructed images effectively. The method is expected to achieve high-quality reconstructed 2D or 3D images in the future because of its effectiveness and simplicity.
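
    The limited-random-phase, time-average idea can be illustrated with a toy Fourier-hologram simulation: a 2D object is given a random phase whose range is capped at phi_max, the hologram is cropped to a finite aperture (which is what produces speckle here), and the reconstructed intensities of several realizations are averaged. This is a numerical toy model, not the authors' optical configuration; all sizes and the phase range are illustrative.

      # Toy model: limited random phase + finite hologram aperture + time averaging.
      import numpy as np

      def reconstruct_once(amplitude, phi_max, rng, aperture=64):
          phase = rng.uniform(0.0, phi_max, size=amplitude.shape)   # limited random phase
          field = amplitude * np.exp(1j * phase)
          hologram = np.fft.fftshift(np.fft.fft2(field))            # Fourier-transform hologram
          # Finite aperture: keep only the central samples (this introduces speckle)
          mask = np.zeros_like(hologram)
          c, h = amplitude.shape[0] // 2, aperture // 2
          mask[c - h:c + h, c - h:c + h] = 1.0
          recon = np.fft.ifft2(np.fft.ifftshift(hologram * mask))
          return np.abs(recon) ** 2

      rng = np.random.default_rng(1)
      obj = np.zeros((128, 128))
      obj[48:80, 48:80] = 1.0                                       # placeholder object

      avg = np.mean([reconstruct_once(obj, phi_max=np.pi / 4, rng=rng) for _ in range(20)], axis=0)
      contrast = avg[48:80, 48:80].std() / avg[48:80, 48:80].mean()
      print(f"speckle contrast inside the object: {contrast:.3f}")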

  16. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays.

    PubMed

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-11-30

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128-element PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speed and high resolution when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor, as well as by achieving an optimal position response from the sensor, since the definition of the 3D object profile clearly depends on the correct and accurate position response of each detector as well as on the size of the PSD array.

  17. DIRECT DETECTION OF THE HELICAL MAGNETIC FIELD GEOMETRY FROM 3D RECONSTRUCTION OF PROMINENCE KNOT TRAJECTORIES

    SciTech Connect

    Zapiór, Maciej; Martinez-Gómez, David

    2016-02-01

    Based on the data collected by the Vacuum Tower Telescope, located at the Teide Observatory in the Canary Islands, we analyzed the three-dimensional (3D) motion of so-called knots in a solar prominence of 2014 June 9. Trajectories of seven knots were reconstructed, giving information on the 3D geometry of the magnetic field. Helical motion was detected. From the equipartition principle, we estimated the lower limit of the magnetic field in the prominence to be ≈1–3 G, and from Ampère's law the lower limit of the electric current to be ≈1.2 × 10⁹ A.

  18. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects.

    PubMed

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-08-20

    In some application fields, such as underwater archaeology or marine biology, there is a need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environments. In this work we have compared in water two 3D imaging techniques, based on active and passive approaches, respectively, and on whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real-world applications, but it allowed us to conduct a preliminary analysis of the performance of the two techniques and to understand their capability to acquire 3D points in the presence of turbidity. The performance has been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms.

  19. A Comparative Analysis between Active and Passive Techniques for Underwater 3D Reconstruction of Close-Range Objects

    PubMed Central

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-01-01

    In some application fields, such as underwater archaeology or marine biology, there is a need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environments. In this work we have compared in water two 3D imaging techniques, based on active and passive approaches, respectively, and on whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real-world applications, but it allowed us to conduct a preliminary analysis of the performance of the two techniques and to understand their capability to acquire 3D points in the presence of turbidity. The performance has been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms. PMID:23966193

  20. High sensitivity plasmonic biosensor based on nanoimprinted quasi 3D nanosquares for cell detection

    NASA Astrophysics Data System (ADS)

    Zhu, Shuyan; Li, Hualin; Yang, Mengsu; Pang, Stella W.

    2016-07-01

    Quasi three-dimensional (3D) plasmonic nanostructures consisting of Au nanosquares on top of SU-8 nanopillars and Au nanoholes on the bottom were developed and fabricated using nanoimprint lithography with simultaneous thermal and UV exposure. These 3D plasmonic nanostructures were used to detect the cell concentration of lung cancer A549 cells, retinal pigment epithelial (RPE) cells, and breast cancer MCF-7 cells. Nanoimprint technology has the advantage of producing highly uniform plasmonic nanostructures for such biosensors. Multiple resonance modes were observed in these quasi 3D plasmonic nanostructures. The hybrid coupling of localized surface plasmon resonances and Fabry-Perot cavity modes in the quasi 3D nanostructures resulted in a high sensitivity of 496 nm per refractive index unit. The plasmonic resonance peak wavelength and sensitivity could be tuned by varying the Au thickness. Resonance peak shifts for different cells at the same concentration were distinct due to their different cell area and confluency. The cell concentration detection limit covered a large range of 5 × 10² to 1 × 10⁷ cells ml⁻¹ with these new plasmonic nanostructures. They also provide a large resonance peak shift of 51 nm for as little as 0.08 cells mm⁻² of RPE cells for high-sensitivity cell detection.

  1. Development of 3D interactive visual objects using the Scripps Institution of Oceanography's Visualization Center

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.

    2003-12-01

    Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be

  2. Time Lapse of World’s Largest 3-D Printed Object

    SciTech Connect

    2016-08-29

    Researchers at the MDF have 3D-printed a large-scale trim tool for a Boeing 777X, the world's largest twin-engine jet airliner. The additively manufactured tool was printed on the Big Area Additive Manufacturing, or BAAM, machine over a 30-hour period. The team used a thermoplastic pellet composed of 80% ABS plastic and 20% carbon fiber from a local material supplier. In preliminary testing, the tool has been shown to decrease the time, labor, cost, and errors associated with traditional manufacturing techniques and to increase energy savings; it will undergo further long-term testing.

  3. Infrared Time Lapse of World’s Largest 3D-Printed Object

    SciTech Connect

    2016-08-29

    Researchers at Oak Ridge National Laboratory have 3D-printed a large-scale trim tool for a Boeing 777X, the world's largest twin-engine jet airliner. The additively manufactured tool was printed on the Big Area Additive Manufacturing, or BAAM, machine over a 30-hour period. The team used a thermoplastic pellet composed of 80% ABS plastic and 20% carbon fiber from a local material supplier. In preliminary testing, the tool has been shown to decrease the time, labor, cost, and errors associated with traditional manufacturing techniques and to increase energy savings; it will undergo further long-term testing.

  4. Remote listening and passive acoustic detection in a 3-D environment

    NASA Astrophysics Data System (ADS)

    Barnhill, Colin

    Teleconferencing environments are a necessity in business, education and personal communication. They allow for the communication of information to remote locations without the need for travel and the time and expense that travel requires. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources and to recreate it for a group of people is an unsolved problem. This work focuses on new methods of multiple-source 3-D sound environment capture and on applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to examine not only the time and frequency characteristics of an audio signal but also its spatial characteristics. In this way, a spherical harmonic transform is analogous to a Fourier transform in that a Fourier transform takes a signal into the frequency domain and a spherical harmonic transform takes a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations. Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and
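
    A minimal numerical sketch of the spherical harmonic decomposition (SHD) step: pressure samples at known microphone directions are projected onto spherical harmonics up to a fixed order by least squares. The array geometry and test field below are placeholders, and the rigid-sphere radial terms used by real spherical arrays are omitted.

      # Least-squares SHD of a sampled sound field (toy geometry, no radial filters).
      import numpy as np
      from scipy.special import sph_harm

      def shd_matrix(azimuth, colatitude, order):
          """Columns = Y_n^m evaluated at each sample direction, for n <= order."""
          cols = [sph_harm(m, n, azimuth, colatitude)
                  for n in range(order + 1) for m in range(-n, n + 1)]
          return np.stack(cols, axis=1)

      rng = np.random.default_rng(0)
      n_mics, order = 32, 3
      az = rng.uniform(0, 2 * np.pi, n_mics)           # microphone azimuths
      col = np.arccos(rng.uniform(-1, 1, n_mics))      # microphone colatitudes

      # Synthetic field: a single spherical harmonic component plus a little noise
      pressure = sph_harm(1, 2, az, col) + 0.01 * rng.normal(size=n_mics)

      Y = shd_matrix(az, col, order)
      coeffs, *_ = np.linalg.lstsq(Y, pressure, rcond=None)
      print(np.round(np.abs(coeffs), 2))               # the (n=2, m=1) coefficient dominates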

  5. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    NASA Astrophysics Data System (ADS)

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become more and more important in recent years. Up to now, there are several well-established methods which already yield impressive results. However, even under quite common conditions such as object movement or complex shapes, most methods become unsatisfactory. Thus, 3D shape acquisition is still a difficult and non-trivial task. We present our measurement principle "Flying Triangulation", which enables a motion-robust 3D acquisition of complex-shaped object surfaces with a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole sensor zoo for different object sizes is presented. In conclusion, an overview of current and future fields of investigation is given.

  6. Simultaneous high-speed 3D flame front detection and tomographic PIV

    NASA Astrophysics Data System (ADS)

    Ebi, Dominik; Clemens, Noel T.

    2016-03-01

    A technique capable of detecting the instantaneous, time-resolved, 3D flame topography is successfully demonstrated in a lean-premixed swirl flame undergoing flashback. A simultaneous measurement of the volumetric velocity field is possible without the need for additional hardware. Droplets which vaporize in the preheat zone of the flame serve as the marker for the flame front. The droplets are illuminated with a laser and imaged from four different views, followed by a tomographic reconstruction to obtain the volumetric particle field. Void regions in the reconstructed particle field, which correspond to regions of burnt gas, are detected with a series of image processing steps. The interface separating the void region from regions filled with particles is defined as the flame surface. The velocity field in the unburnt gas is measured using tomographic PIV. The resulting data include the simultaneous 3D flame front and 3D volumetric velocity field at 5 kHz. The technique is applied to a lean-premixed (ϕ = 0.8), swirling methane-air flame and validated against simultaneously acquired planar measurements. The mean error associated with the reconstructed 3D flame topography is about 0.4 mm, which is smaller than the flame thickness under the studied conditions. The mean error associated with the volumetric velocity field is about 0.2 m s⁻¹.

  7. Harmonic filters for 3D multichannel data: rotation invariant detection of mitoses in colorectal cancer.

    PubMed

    Schlachter, Matthias; Reisert, Marco; Herz, Corinna; Schlürmann, Fabienne; Lassmann, Silke; Werner, Martin; Burkhardt, Hans; Ronneberger, Olaf

    2010-08-01

    In this paper, we present a novel approach for trainable, rotation-invariant detection of complex structures in 3D microscopic multichannel data using a nonlinear filter approach. The basic idea of our approach is to compute local features in a window around each 3D position and map these features, by means of a nonlinear mapping, onto new local harmonic descriptors of the local window. These local harmonic descriptors are then combined in a linear way to form the output of the filter. The optimal combination of the computed local harmonic descriptors is determined in a previous training step, which allows the filter to be adapted to an arbitrary structure depending on the problem at hand. Our approach is not limited to scalar-valued images and can also be used for vector-valued (multichannel) images such as gradient vector flow fields. We present realizations of a scalar-valued and a vector-valued multichannel filter. Our proposed algorithm was quantitatively evaluated on colorectal cancer cell lines (cells grown under controlled conditions), on which we successfully detected complex 3D mitotic structures. For a qualitative evaluation, we tested our algorithms on human 3D tissue samples of colorectal cancer. We compare our results with a steerable filter approach as well as a morphology-based approach.

  8. Tailoring bulk mechanical properties of 3D printed objects of polylactic acid varying internal micro-architecture

    NASA Astrophysics Data System (ADS)

    Malinauskas, Mangirdas; Skliutas, Edvinas; Jonušauskas, Linas; Mizeras, Deividas; Šešok, Andžela; Piskarskas, Algis

    2015-05-01

    Herein we present 3D printing (3DP) fabrication of structures having an internal microarchitecture and the characterization of their mechanical properties. Depending on the material, geometry and fill factor, the mechanical performance of the manufactured objects can be tailored from "hard" to "soft." In this work we employ a low-cost fused filament fabrication 3D printer enabling point-by-point structuring of poly(lactic acid) (PLA) with ~400 µm feature spatial resolution. The chosen architectures are defined as woodpiles (BCC, FCC and 60 deg rotating). The period is chosen to be 1200 µm, corresponding to 800 µm pores. The structural quality of the produced objects is characterized using a scanning electron microscope, and their mechanical properties, such as flexural modulus, elastic modulus and stiffness, are measured experimentally using a universal TIRAtest2300 machine. Within the limitations of the study carried out, we show that the mechanical properties of 3D printed objects can be tuned by at least a factor of three by changing only the woodpile geometry arrangement, while keeping the same filling factor and periodicity of the logs. Additionally, we demonstrate custom 3D printed µ-fluidic elements which can serve as cheap, biocompatible and environmentally biodegradable platforms for integrated Lab-On-Chip (LOC) devices.

  9. Influence of georeference for saturated excess overland flow modelling using 3D volumetric soft geo-objects

    NASA Astrophysics Data System (ADS)

    Izham, Mohamad Yusoff; Muhamad Uznir, Ujang; Alias, Abdul Rahman; Ayob, Katimon; Wan Ruslan, Ismail

    2011-04-01

    Existing 2D data structures are often insufficient for analysing the dynamics of saturation excess overland flow (SEOF) within a basin. Moreover, all stream networks and soil surface structures in GIS must be preserved within appropriate projection plane fitting techniques, known as georeferencing. Including the 3D volumetric structure of the current soft geo-object simulation model would be a substantial step towards representing 3D soft geo-objects of SEOF dynamically within a basin by visualising saturated flow and overland flow volume. This research attempts to visualise the influence of the georeference system on the dynamics of overland flow coverage and total overland flow volume generated from the SEOF process using the VSG data structure. The data structure is driven by Green-Ampt methods and the Topographic Wetness Index (TWI). VSGs are analysed by focusing on spatial object preservation techniques of the conformal-based Malaysian Rectified Skew Orthomorphic (MRSO) and the equidistant-based Cassini-Soldner projection planes under the existing geodetic Malaysian Revised Triangulation 1948 (MRT48) datum and the newly implemented Geocentric Datum for Malaysia (GDM2000). The simulated results visualise the deformation of SEOF coverage under different georeference systems via their projection planes, which yield dissimilar computed SEOF areas and overland flow volumes. The integration of georeferencing, 3D GIS and the saturation excess mechanism provides unifying evidence towards successful landslide and flood disaster management by envisioning the streamflow generating process (mainly SEOF) in a 3D environment.

  10. 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events

    NASA Technical Reports Server (NTRS)

    Brown, Richard; Navard, Andrew; Spruce, Joseph

    2010-01-01

    An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch in reference to the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually non-referenceable between frames due to rotation/translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately portray depth perception of damaged areas or movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show. Multiple sequential image pairs can be used
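
    The co-registration step described above (control points between frames driving a transformation that re-projects one image onto the other) can be sketched with a least-squares affine estimate; the control points and image below are placeholders, and the WallView workflow itself is not reproduced.

      # Sketch: estimate an affine transform from control points and re-project frame B onto frame A.
      import numpy as np
      from skimage import data
      from skimage.transform import estimate_transform, warp

      # Control points: (col, row) locations of the same features in frame A and frame B
      pts_a = np.array([[30, 40], [200, 50], [60, 220], [230, 240]], dtype=float)
      pts_b = pts_a @ np.array([[0.98, 0.05], [-0.05, 0.98]]) + np.array([6.0, -4.0])

      tform = estimate_transform("affine", src=pts_b, dst=pts_a)   # maps frame B -> frame A
      frame_b = data.camera() / 255.0                              # placeholder for an ET image
      frame_b_registered = warp(frame_b, inverse_map=tform.inverse)
      print(tform.params)                                          # 3x3 transformation matrix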

  11. Neural coding of 3D features of objects for hand action in the parietal cortex of the monkey.

    PubMed Central

    Sakata, H; Taira, M; Kusunoki, M; Murata, A; Tanaka, Y; Tsutsui, K

    1998-01-01

    In our previous studies of hand manipulation task-related neurons, we found many neurons of the parietal association cortex which responded to the sight of three-dimensional (3D) objects. Most of the task-related neurons in the AIP area (the lateral bank of the anterior intraparietal sulcus) were visually responsive, and half of them responded to objects for manipulation. Most of these neurons were selective for the 3D features of the objects. More recently, we have found binocular visual neurons in the lateral bank of the caudal intraparietal sulcus (c-IPS area) that preferentially respond to a luminous bar or plate at a particular orientation in space. We studied the responses of axis-orientation selective (AOS) neurons and surface-orientation selective (SOS) neurons in this area with stimuli presented on a 3D computer graphics display. The AOS neurons showed a stronger response to elongated stimuli and showed tuning to the orientation of the longitudinal axis. Many of them preferred a stimulus tilted in depth and appeared to be sensitive to orientation disparity and/or width disparity. The SOS neurons showed a stronger response to a flat than to an elongated stimulus and showed tuning to the 3D orientation of the surface. Their responses increased with the width or length of the stimulus. A considerable number of SOS neurons responded to a square in a random dot stereogram and were tuned to orientation in depth, suggesting their sensitivity to the gradient of disparity. We also found several SOS neurons that responded to a square with tilted or slanted contours, suggesting their sensitivity to orientation disparity and/or width disparity. Area c-IPS is likely to send visual signals about the 3D features of an object to area AIP for the visual guidance of hand actions. PMID:9770229

  12. Detection of micromechanical deformation under rigid body displacement using twin-pulsed 3D digital holography

    NASA Astrophysics Data System (ADS)

    Perez-Lopez, Carlos; Hernandez-Montes, Maria del Socorro; Mendoza-Santoyo, Fernando

    2005-02-01

    Twin-pulsed digital holography in its 3D setup is used to recover exclusively the micro-mechanical deformation of an object. The test object is allowed to undergo rigid body movements such as rotation and translation, with the result that the fringe patterns contain information on both the latter and the object deformation, a feature that may significantly modify the interpretation of the results. Experimental results from a flat metal plate subject to micro stress and a displacement in the x-z plane are presented to demonstrate that, using this optical method, it is possible to recover exclusively the contribution of the micro stress.

  13. Detection of inhomogeneities in a metal cylinder using ESPI and 3D pulsed digital holography

    NASA Astrophysics Data System (ADS)

    Saucedo-Anaya, Tonatiuh; Mendoza Santoyo, Fernando; Perez-Lopez, Carlos; de la Torre Ibarra, Manuel

    2004-06-01

    ESPI and 3D pulsed digital holography have been applied to detect inhomogeneities inside a metal cylinder. A shaker was employed to produce a mechanical wave that propagates through the inner structure of the cylinder in such a way that it generates vibrational resonant modes on the cylinder surface. An out-of-plane sensitive ESPI optical configuration was used to detect the vibrational resonant modes. A 3D multi-pulse digital holography system was used to obtain quantitative deformation data of the dynamically moving cylinder. The local decrease in structural stiffness inside the cylinder due to an inhomogeneity produces an asymmetry in the resonant mode shape. Results show that the inhomogeneity produces an asymmetry in its vibrational resonant modes. The method may be reliably used to study and compare data from inside homogeneous and inhomogeneous solid materials.

  14. VIRO 3D: fast three-dimensional full-body scanning for humans and other living objects

    NASA Astrophysics Data System (ADS)

    Stein, Norbert; Minge, Bernhard

    1998-03-01

    The development of a family of partial and whole-body scanners provides a complete technology for fully three-dimensional and contact-free scans of human bodies or other living objects within seconds. This paper gives insight into the design and the functional principles of the whole-body scanner VIRO 3D, which operates on the basis of the laser split-beam method. The arrangement of up to 24 camera/laser combinations, dividing the area into different camera fields, and an all-around sensor configuration travelling in the vertical direction allow a complete 360-degree scan of an object within 6-20 seconds. Due to a special calibration process, the different sensors are matched and the measured data are combined. Up to 10 million 3D measuring points with a resolution of approximately 1 mm are processed in all coordinate axes to generate a 3D model. By means of high-performance processors in combination with real-time image processing chips, the image data from almost any number of sensors can be recorded and evaluated synchronously in video real time. VIRO 3D scanning systems have already been successfully implemented in various applications and will open up new perspectives in different other fields, ranging from industry, orthopaedic medicine, and plastic surgery to art and photography.

  15. A HIGHLY COLLIMATED WATER MASER BIPOLAR OUTFLOW IN THE CEPHEUS A HW3d MASSIVE YOUNG STELLAR OBJECT

    SciTech Connect

    Chibueze, James O.; Imai, Hiroshi; Tafoya, Daniel; Omodaka, Toshihiro; Chong, Sze-Ning; Kameya, Osamu; Hirota, Tomoya; Torrelles, Jose M.

    2012-04-01

    We present the results of multi-epoch very long baseline interferometry (VLBI) water (H₂O) maser observations carried out with the VLBI Exploration of Radio Astrometry toward the Cepheus A HW3d object. We measured for the first time relative proper motions of the H₂O maser features, whose spatio-kinematics traces a compact bipolar outflow. This outflow looks highly collimated and expanding through ≈280 AU (400 mas) at a mean velocity of ≈21 km s⁻¹ (≈6 mas yr⁻¹) without taking into account the turbulent central maser cluster. The opening angle of the outflow is estimated to be ≈30°. The dynamical timescale of the outflow is estimated to be ≈100 years. Our results provide strong support that HW3d harbors an internal massive young star, and the observed outflow could be tracing a very early phase of star formation. We also have analyzed Very Large Array archive data of 1.3 cm continuum emission obtained in 1995 and 2006 toward Cepheus A. The comparative result of the HW3d continuum emission suggests the possibility of the existence of distinct young stellar objects in HW3d and/or strong variability in one of their radio continuum emission components.

  16. Detecting and estimating errors in 3D restoration methods using analog models.

    NASA Astrophysics Data System (ADS)

    José Ramón, Ma; Pueyo, Emilio L.; Briz, José Luis

    2015-04-01

    Some geological scenarios may be important for a number of socio-economic reasons, such as water or energy resources, but the available underground information is often limited, scarce and heterogeneous. A truly 3D reconstruction, which is still necessary during the decision-making process, may have important social and economic implications. For this reason, restoration methods were developed. By honoring some geometric or mechanical laws, they help build a reliable image of the subsurface. Pioneering methods were first applied in 2D (balanced and restored cross-sections) during the sixties and seventies. Later on, owing to improvements in computational capabilities, they were extended to 3D. Currently, there are several academic and commercial restoration solutions: Unfold by the Université de Grenoble, Move by Midland Valley Exploration, Kine3D (on gOcad code) by Paradigm, and Dynel3D by igeoss-Schlumberger. We have developed our own restoration method, Pmag3Drest (IGME-Universidad de Zaragoza), which is designed to tackle complex geometrical scenarios using paleomagnetic vectors as a pseudo-3D indicator of deformation. However, all these methods have limitations based on the assumptions they need to establish. For this reason, detecting and estimating uncertainty in 3D restoration methods is of key importance for trusting the reconstructions. Checking the reliability and the internal consistency of every method, as well as comparing the results among restoration tools, is a critical issue never tackled so far because of the impossibility of testing the results in Nature. To overcome this problem we have developed a technique using analog models. We built complex geometric models inspired by real cases of superposed and/or conical folding at laboratory scale. The stratigraphic volumes were modeled using EVA (ethylene vinyl acetate) sheets. Their rheology (tensile and tear strength, elongation, density, etc.) and thickness can be chosen among a large number of values

  17. SERS Active Nanobiosensor Functionalized by Self-Assembled 3D Nickel Nanonetworks for Glutathione Detection.

    PubMed

    Chinnakkannu Vijayakumar, Sivaprasad; Venkatakrishnan, Krishnan; Tan, Bo

    2017-02-15

    We introduce a "non-noble metal" based SERS active nanobiosensor using a self-assembled 3D hybrid nickel nanonetwork. A tunable biomolecule detector fabricated by a bottom-up approach was functionalized using a multiphoton ionization energy mechanism to create a self-assembled 3D hybrid nickel nanonetwork. The nanonetwork was tested for SERS detection of crystal violet (CV) and glutathione (GSH) at two excitation wavelengths, 532 and 785 nm. The results reveal indiscernible peaks with a limit of detection (LOD) of 1 picomolar (pM) concentration. An enhancement factor (EF) of 9.3 × 10(8) was achieved for the chemical molecule CV and 1.8 × 10(9) for the biomolecule GSH, which are the highest reported values so far. The two results, one being the CV molecule proved that nickel nanonetwork is indeed SERS active and the second being the GSH biomolecule detection at both 532 and 785 nm, confirm that the nanonetwork is a biosensor which has potential for both in vivo and in vitro sensing. In addition, the selectivity and versatility of this biosensor is examined with biomolecules such as l-Cysteine, l-Methionine, and sensing GSH in cell culture medium which mimics the complex biological environment. The functionalized self-assembled 3D hybrid nickel nanonetwork exhibits electromagnetic and charge transfer based SERS activation mechanisms.

  18. Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.

    2015-12-01

    Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy guidance, regional anesthesia and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for an automated and reliable extraction of needle-like structures in a 3D ultrasound volume. We study and identify the best combination of the Gabor wavelet frequencies. High precision in detecting the needle voxels leads to a robust and accurate localization of the needle for intervention support. Evaluation in several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of the needle voxel detection from 0.23 to 0.32 at a high recall rate of 0.75 (a 40% gain), and better robustness and confidence were confirmed in the practical experiments.
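
    The multi-frequency Gabor feature extraction can be sketched on a single 2D slice: each (frequency, orientation) pair yields a response magnitude used as a per-voxel feature. The frequencies, angles and test image below are illustrative choices, not the combination identified in the paper.

      # Sketch of a multi-resolution Gabor feature bank on one slice of the US volume.
      import numpy as np
      from skimage.filters import gabor

      slice_img = np.zeros((128, 128))
      slice_img[60:64, :] = 1.0                        # a bright line as a crude needle stand-in
      slice_img += 0.05 * np.random.rand(128, 128)

      features = []
      for frequency in (0.05, 0.1, 0.2):               # multi-resolution frequencies
          for theta in np.linspace(0, np.pi, 4, endpoint=False):
              real, imag = gabor(slice_img, frequency=frequency, theta=theta)
              features.append(np.sqrt(real**2 + imag**2))

      feature_stack = np.stack(features, axis=-1)      # per-pixel Gabor feature vector
      print(feature_stack.shape)                       # (128, 128, 12)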

  19. 3D model-based detection and tracking for space autonomous and uncooperative rendezvous

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Zhang, Yueqiang; Liu, Haibo

    2015-10-01

    In order to navigate fully using a vision sensor, a 3D edge-model-based detection and tracking technique was developed. First, we propose a target detection strategy over a sequence of several images from the 3D model to initialize the tracking. The overall purpose of this approach is to robustly match each image with the model views of the target. We therefore designed a line segment detection and matching method based on multi-scale space technology. Experiments on real images showed that our method is highly robust under various image changes. Second, we propose a method based on a 3D particle filter (PF) coupled with M-estimation to track the target and estimate its pose efficiently. In the proposed approach, a similarity observation model was designed according to a new distance function for line segments. Then, based on the tracking results of the PF, the pose is optimized using M-estimation. Experiments indicate that the proposed method can effectively track and accurately estimate the pose of a freely moving target in an unconstrained environment.
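    The following is a generic particle-filter skeleton for pose tracking, included only to illustrate the predict-update-resample cycle mentioned above; the paper's actual observation model (the line-segment distance function) is replaced by a placeholder likelihood.

```python
# Generic particle-filter skeleton for 6-DoF pose tracking (illustration only).
# The paper's observation model compares projected model line segments with
# detected image segments; here `likelihood` is a placeholder stub.
import numpy as np

def particle_filter_step(particles, weights, likelihood, motion_noise=0.01):
    """One predict-update-resample cycle. particles: (N, 6) pose vectors."""
    n = len(particles)
    # Predict: random-walk motion model
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)
    # Update: weight each particle by the image likelihood of its pose
    weights = weights * np.array([likelihood(p) for p in particles])
    weights /= weights.sum()
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# A pose estimate (e.g. the weighted mean) can then be refined by M-estimation.
```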

  20. Accurate Automatic Detection of Densely Distributed Cell Nuclei in 3D Space

    PubMed Central

    Tokunaga, Terumasa; Kanamori, Manami; Teramoto, Takayuki; Jang, Moon Sun; Kuge, Sayuri; Ishihara, Takeshi; Yoshida, Ryo; Iino, Yuichi

    2016-01-01

    To measure the activity of neurons using whole-brain activity imaging, precise detection of each neuron or its nucleus is required. In the head region of the nematode C. elegans, the neuronal cell bodies are distributed densely in three-dimensional (3D) space. However, no existing computational methods of image analysis can separate them with sufficient accuracy. Here we propose a highly accurate segmentation method based on the curvatures of the iso-intensity surfaces. To obtain accurate positions of nuclei, we also developed a new procedure for least squares fitting with a Gaussian mixture model. Combining these methods enables accurate detection of densely distributed cell nuclei in a 3D space. The proposed method was implemented as a graphical user interface program that allows visualization and correction of the results of automatic detection. Additionally, the proposed method was applied to time-lapse 3D calcium imaging data, and most of the nuclei in the images were successfully tracked and measured. PMID:27271939
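    As an illustration of the mixture-fitting step, the sketch below fits a Gaussian mixture to candidate nucleus voxel coordinates with scikit-learn's EM-based estimator; the paper uses its own least-squares fitting procedure, so this is only a stand-in.

```python
# Rough illustration: fitting a Gaussian mixture to candidate nucleus voxels to
# obtain sub-voxel nucleus centers. Shown with scikit-learn's EM-based estimator
# as a stand-in for the paper's least-squares fitting.
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_nucleus_centers(candidate_xyz, n_nuclei):
    """candidate_xyz: (N, 3) coordinates of voxels above an intensity threshold."""
    gmm = GaussianMixture(n_components=n_nuclei, covariance_type="full",
                          n_init=3, random_state=0)
    gmm.fit(candidate_xyz)
    return gmm.means_        # (n_nuclei, 3) estimated nucleus positions
```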

  1. Visualizing 3D objects from 2D cross sectional images displayed in-situ versus ex-situ.

    PubMed

    Wu, Bing; Klatzky, Roberta L; Stetten, George

    2010-03-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to visualize an object posed in 3D space. Participants used a hand-held tool to reveal a virtual rod as a sequence of cross-sectional images, which were displayed either directly in the space of exploration (in-situ) or displaced to a remote screen (ex-situ). They manipulated a response stylus to match the virtual rod's pitch (vertical slant), yaw (horizontal slant), or both. Consistent with the hypothesis that spatial colocation of image and source object facilitates mental visualization, we found that although single dimensions of slant were judged accurately with both displays, judging pitch and yaw simultaneously produced differences in systematic error between in-situ and ex-situ displays. Ex-situ imaging also exhibited errors such that the magnitude of the response was approximately correct but the direction was reversed. Regression analysis indicated that the in-situ judgments were primarily based on spatiotemporal visualization, while the ex-situ judgments relied on an ad hoc, screen-based heuristic. These findings suggest that in-situ displays may be useful in clinical practice by reducing error and facilitating the ability of radiologists to visualize 3D anatomy from cross sectional images.

  2. The effects of surface gloss and roughness on color constancy for real 3-D objects.

    PubMed

    Granzier, Jeroen J M; Vergne, Romain; Gegenfurtner, Karl R

    2014-02-21

    Color constancy denotes the phenomenon that the appearance of an object remains fairly stable under changes in illumination and background color. Most of what we know about color constancy comes from experiments using flat, matte surfaces placed on a single plane under diffuse illumination simulated on a computer monitor. Here we investigate whether material properties (glossiness and roughness) have an effect on color constancy for real objects. Subjects matched the color and brightness of cylinders (painted red, green, or blue) illuminated by simulated daylight (D65) or by a reddish light with a Munsell color book illuminated by a tungsten lamp. The cylinders were either glossy or matte and either smooth or rough. The object was placed in front of a black background or a colored checkerboard. We found that color constancy was significantly higher for the glossy objects compared to the matte objects, and higher for the smooth objects compared to the rough objects. This was independent of the background. We conclude that material properties like glossiness and roughness can have significant effects on color constancy.

  3. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  4. Detecting Exoplanets using Bayesian Object Detection

    NASA Astrophysics Data System (ADS)

    Feroz, Farhan

    2015-08-01

    Detecting objects from noisy data-sets is common practice in astrophysics. Object detection presents a particular challenge in terms of statistical inference, not only because of its multi-modal nature but also because it combines parameter estimation (for characterizing objects) with model selection (for quantifying the detection). Bayesian inference provides a mathematically rigorous solution to this problem by calculating marginal posterior probabilities of models with different numbers of sources, but the use of this method in astrophysics has been hampered by the computational cost of evaluating the Bayesian evidence. Nonetheless, Bayesian model selection has the potential to improve the interpretation of existing observational data. I will discuss several Bayesian approaches to object detection problems, both in terms of their theoretical framework and the practical details of carrying out the computation. I will also describe some recent applications of these methods in the detection of exoplanets.
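    For reference, the standard model-selection quantities the abstract alludes to are the Bayesian evidence of each model and the posterior odds between competing models:

```latex
% Standard Bayesian model-selection quantities: the evidence Z_k of model M_k
% (the normalizing integral whose cost is discussed above) and the posterior
% odds between two models, e.g. "one source" versus "no source".
\[
  Z_k \;=\; P(D \mid M_k) \;=\; \int \mathcal{L}_k(\theta)\,\pi_k(\theta)\,\mathrm{d}\theta ,
  \qquad
  \frac{P(M_1 \mid D)}{P(M_0 \mid D)} \;=\; \frac{Z_1}{Z_0}\,\frac{P(M_1)}{P(M_0)} .
\]
```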

  5. Recognition of 3-D symmetric objects from range images in automated assembly tasks

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    A new technique is presented for the three-dimensional recognition of symmetric objects from range images. Beginning from the implicit representation of quadrics, a set of ten coefficients is determined for symmetric objects such as spheres, cones, cylinders, ellipsoids, and parallelepipeds. Instead of fitting these ten coefficients to smooth surface patches in the traditional way of estimating curvatures, a new approach based on two-dimensional geometry is used. For each symmetric object, a unique set of two-dimensional curves is obtained from the various angles at which the object is intersected by a plane. Using the same ten coefficients obtained earlier and the discriminant method, each of these curves is classified as a parabola, circle, ellipse, or hyperbola. Each symmetric object is found to possess a unique set of these two-dimensional curves by which it can be differentiated from the others. It is shown that instead of using the three-dimensional discriminant, which involves evaluating the rank of its matrix, it is sufficient to use the two-dimensional discriminant, which requires only three arithmetic operations.
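    The two-dimensional discriminant test mentioned above can be written in a few lines; the sketch below classifies a cross-section curve Ax² + Bxy + Cy² + Dx + Ey + F = 0 from the sign of B² − 4AC (degenerate conics are ignored for simplicity).

```python
# Classification of a planar cross-section curve Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
# by the sign of its discriminant B^2 - 4AC (the "three arithmetic operations"
# mentioned above). Degenerate conics are ignored in this simple illustration.
def classify_conic(A, B, C, tol=1e-12):
    disc = B * B - 4.0 * A * C        # the two-dimensional discriminant
    if disc < -tol:
        return "circle" if (abs(A - C) < tol and abs(B) < tol) else "ellipse"
    if disc > tol:
        return "hyperbola"
    return "parabola"

# Example: classify_conic(1.0, 0.0, 1.0) -> 'circle'; classify_conic(1.0, 0.0, -1.0) -> 'hyperbola'
```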

  6. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate the position with the precision necessary for a gripping robot to pick it up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, in this paper exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  7. Low-cost impact detection and location for automated inspections of 3D metallic based structures.

    PubMed

    Morón, Carlos; Portilla, Marina P; Somolinos, José A; Morales, Rafael

    2015-05-28

    This paper describes a new low-cost means to detect and locate mechanical impacts (collisions) on a 3D metal-based structure. We employ the simple and reasonable hypothesis that the use of a homogeneous material will allow certain details of the impact to be automatically determined by measuring the time delays of acoustic wave propagation throughout the 3D structure. The location of strategic piezoelectric sensors on the structure and an electronic-computerized system has allowed us to determine the instant and position at which the impact is produced. The proposed automatic system allows us to fully integrate impact point detection and the task of inspecting the point or zone at which this impact occurs. What is more, the proposed method can be easily integrated into a robot-based inspection system capable of moving over 3D metallic structures, thus avoiding (or minimizing) the need for direct human intervention. Experimental results are provided to show the effectiveness of the proposed approach.
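    A minimal sketch of the underlying time-delay measurement is shown below: the lag between two piezoelectric signals is estimated by cross-correlation (the sampling rate is an assumed value); delays from several sensor pairs, together with the wave speed in the metal, then constrain the impact point.

```python
# Illustrative estimate of the arrival-time lag between two piezoelectric
# sensor signals via cross-correlation; the sampling rate is an assumed value.
import numpy as np

def time_lag(sig_a, sig_b, fs=1_000_000):
    """Return the lag (in seconds) that best aligns sig_b with sig_a (1-D arrays at fs Hz)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_b) - 1)
    return lag_samples / fs

# Lags between several sensor pairs, combined with the acoustic wave speed in
# the metal, localize the impact point by multilateration on the structure.
```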

  8. Low-Cost Impact Detection and Location for Automated Inspections of 3D Metallic Based Structures

    PubMed Central

    Morón, Carlos; Portilla, Marina P.; Somolinos, José A.; Morales, Rafael

    2015-01-01

    This paper describes a new low-cost means to detect and locate mechanical impacts (collisions) on a 3D metal-based structure. We employ the simple and reasonable hypothesis that the use of a homogeneous material will allow certain details of the impact to be automatically determined by measuring the time delays of acoustic wave propagation throughout the 3D structure. The location of strategic piezoelectric sensors on the structure and an electronic-computerized system has allowed us to determine the instant and position at which the impact is produced. The proposed automatic system allows us to fully integrate impact point detection and the task of inspecting the point or zone at which this impact occurs. What is more, the proposed method can be easily integrated into a robot-based inspection system capable of moving over 3D metallic structures, thus avoiding (or minimizing) the need for direct human intervention. Experimental results are provided to show the effectiveness of the proposed approach. PMID:26029951

  9. PLANETARY NEBULAE DETECTED IN THE SPITZER SPACE TELESCOPE GLIMPSE 3D LEGACY SURVEY

    SciTech Connect

    Zhang Yong; Hsia, Chih-Hao; Kwok, Sun E-mail: xiazh@hku.hk

    2012-01-20

    We used the data from the Spitzer Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) to investigate the mid-infrared (MIR) properties of planetary nebulae (PNs) and PN candidates. In previous studies of GLIMPSE I and II data, we have shown that these MIR data are very useful in distinguishing PNs from other emission-line objects. In the present paper, we focus on the PNs in the field of the GLIMPSE 3D survey, which has a more extensive latitude coverage. We found a total of 90 Macquarie-AAO-Strasbourg (MASH) and MASH II PNs and 101 known PNs to have visible MIR counterparts in the GLIMPSE 3D survey area. The images and photometry of these PNs are presented. Combining the derived IRAC photometry at 3.6, 4.5, 5.8, and 8.0 μm with the existing photometric measurements from other infrared catalogs, we are able to construct spectral energy distributions (SEDs) of these PNs. Among the most notable objects in this survey is the PN M1-41, whose GLIMPSE 3D image reveals a large bipolar structure of more than 3 arcmin in extent.

  10. Microwave and camera sensor fusion for the shape extraction of metallic 3D space objects

    NASA Technical Reports Server (NTRS)

    Shaw, Scott W.; Defigueiredo, Rui J. P.; Krishen, Kumar

    1989-01-01

    The vacuum of space presents special problems for optical image sensors. Metallic objects in this environment can produce intense specular reflections and deep shadows. By combining the polarized RCS with an incomplete camera image, it has become possible to better determine the shape of some simple three-dimensional objects. The radar data are used in an iterative procedure that generates successive approximations to the target shape by minimizing the error between computed scattering cross-sections and the observed radar returns. Favorable results have been obtained for simulations and experiments reconstructing plates, ellipsoids, and arbitrary surfaces.

  11. Robust Curb Detection with Fusion of 3D-Lidar and Camera Data

    PubMed Central

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-01-01

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
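    The sketch below is a toy dynamic-programming pass in the spirit of the Markov-chain consistency model described above: per-row curb candidates, given as (column, score) pairs, are linked into one smooth path; the smoothness weight is an assumed parameter.

```python
# Toy dynamic-programming sketch of linking per-row curb candidates into one
# smooth path, in the spirit of the Markov-chain consistency model described
# above. candidates[r] holds (column, score) pairs detected in image row r.
import numpy as np

def link_curb_path(candidates, smooth_weight=0.5):
    n_rows = len(candidates)
    best = [np.array([s for _, s in candidates[0]], dtype=float)]
    back = []
    for r in range(1, n_rows):
        cols_prev = np.array([c for c, _ in candidates[r - 1]], dtype=float)
        scores, pointers = [], []
        for col, sc in candidates[r]:
            # Transition score: best predecessor minus a column-jump penalty
            trans = best[r - 1] - smooth_weight * np.abs(cols_prev - col)
            pointers.append(int(np.argmax(trans)))
            scores.append(sc + trans.max())
        best.append(np.array(scores))
        back.append(pointers)
    # Backtrack the highest-scoring path from the last row to the first
    path = [int(np.argmax(best[-1]))]
    for r in range(n_rows - 2, -1, -1):
        path.append(back[r][path[-1]])
    path.reverse()
    return [candidates[r][i][0] for r, i in enumerate(path)]   # one column per row
```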

  12. Robust curb detection with fusion of 3D-Lidar and camera data.

    PubMed

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-05-21

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.

  13. Robust volumetric change detection using mutual information with 3D fractals

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; Akbari, Morris; Henning, Ronda; Pokorny, John

    2014-06-01

    We discuss a robust method for quantifying change of multi-temporal remote sensing point data in the presence of affine registration errors. Three dimensional image processing algorithms can be used to extract and model an electronic module, consisting of a self-contained assembly of electronic components and circuitry, using an ultrasound scanning sensor. Mutual information (MI) is an effective measure of change. We propose a multi-resolution 3D fractal algorithm which is a novel extension to MI or regional mutual information (RMI). Our method is called fractal mutual information (FMI). This extension efficiently takes neighborhood fractal patterns of corresponding voxels (3D pixels) into account. The goal of this system is to quantify the change in a module due to tampering and provide a method for quantitative and qualitative change detection and analysis.
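    For context, a basic mutual-information measure between two co-registered images can be computed from their joint histogram as below; the fractal-neighborhood extension (FMI) proposed in the paper is not reproduced here, and the bin count is arbitrary.

```python
# Basic mutual information between two co-registered images via a joint
# histogram. The fractal-neighborhood extension (FMI) described above is not
# reproduced here; the bin count is an arbitrary choice.
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)      # marginal of image B
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```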

  14. 3D MEMS sensor for application on earthquakes early detection and Nowcast

    NASA Astrophysics Data System (ADS)

    Wu, Jerry; Liang, Jing; Szu, Harold

    2016-05-01

    This paper presents a 3D microelectromechanical systems (MEMS) sensor system to quickly and reliably identify the precursors that precede an earthquake. When a precursor is detected and is expected to be followed by a major earthquake, the sensor system analyzes the signal and determines the magnitude of the earthquake. This newly proposed 3D MEMS sensor can provide P-wave, S-wave, and surface-wave measurements along with timing information to a data processing unit. The output data are processed and filtered continuously by a set of built-in programmable digital signal processing (DSP) filters in order to remove noise and other disturbances and to determine an earthquake pattern. Our goal is to reliably initiate an alarm before the arrival of the destructive waves.

  15. Demonstration of an Ultrasonic Method for 3-D Visualization of Shallow Buried Underwater Objects

    DTIC Science & Technology

    2011-07-01

    with the X-Y positioning system attached. It is composed of an X-Y gantry system operated by underwater servo motors controlled by the operator's...user interface errors there are in the software. The test was set up by placing the system over a tank of water containing known objects (Figure 4). The...Requirements Evaluation of all the user interface controls and outputs 3.4.3 Success Criteria 100% error free, all identified bugs have been

  16. A roadmap to global illumination in 3D scenes: solutions for GPU object recognition applications

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Díaz-Ramírez, Victor H.; Tapia, Juan J.

    2014-09-01

    Light interaction with matter is of remarkable complexity. Adequate modeling of global illumination has been a vastly studied topic since the beginning of computer graphics and is still an unsolved problem. The rendering equation for global illumination is based on the refraction and reflection of light interacting with matter within an environment. This physical process has high computational complexity when implemented on a digital computer. The appearance of an object depends on light interactions with the surface of the material, such as emission, scattering, and absorption. Several image-synthesis methods have been used to realistically render the appearance of light incident on an object. Recent global illumination algorithms employ mathematical models and computational strategies that improve the efficiency of the simulation. This work presents a review of the state of the art of global illumination algorithms and focuses on the efficiency of a computational implementation on a graphics processing unit. A reliable system is developed to simulate realistic scenes in the context of real-time object recognition under different lighting conditions. Computer simulation results are presented and discussed in terms of discrimination capability and robustness to additive noise when considering several lighting-model reflections and multiple light sources.
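    The rendering equation referred to above is, in its standard form, the statement that outgoing radiance equals emitted radiance plus incident radiance weighted by the BRDF over the hemisphere:

```latex
% The rendering equation in its standard form: outgoing radiance at surface
% point x in direction w_o equals emitted radiance plus incident radiance
% weighted by the BRDF f_r over the hemisphere Omega around the normal n.
\[
  L_o(\mathbf{x}, \omega_o) \;=\; L_e(\mathbf{x}, \omega_o)
  \;+\; \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
        L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\,\mathrm{d}\omega_i .
\]
```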

  17. Nonthreshold-based event detection for 3d environment monitoring in sensor networks

    SciTech Connect

    Li, M.; Liu, Y.H.; Chen, L.

    2008-12-15

    Event detection is a crucial task for wireless sensor network applications, especially environment monitoring. Existing approaches for event detection are mainly based on some predefined threshold values and, thus, are often inaccurate and incapable of capturing complex events. For example, in coal mine monitoring scenarios, gas leakage or water osmosis can hardly be described by the overrun of specified attribute thresholds but some complex pattern in the full-scale view of the environmental data. To address this issue, we propose a nonthreshold-based approach for the real 3D sensor monitoring environment. We employ energy-efficient methods to collect a time series of data maps from the sensor network and detect complex events through matching the gathered data to spatiotemporal data patterns. Finally, we conduct trace-driven simulations to prove the efficacy and efficiency of this approach on detecting events of complex phenomena from real-life records.

  18. Surface classification and detection of latent fingerprints based on 3D surface texture parameters

    NASA Astrophysics Data System (ADS)

    Gruhn, Stefan; Fischer, Robert; Vielhauer, Claus

    2012-06-01

    In the field of latent fingerprint detection in crime scene forensics, the classification of surfaces is important. A new method for the scientific analysis of image-based information for forensic science has been investigated in recent years. Our image acquisition is based on a Chromatic White Light (CWL) sensor with a lateral resolution of up to 2 μm. The FRT-MicroProf 200 CWL 600 measurement device used is able to capture high-resolution intensity and topography images in an optical and contact-less way. In prior work, we suggested using 2D surface texture parameters to classify various materials, which was a novel approach in criminalistic forensics, combining knowledge of surface appearance with a chromatic white light sensor. A meaningful and useful classification of different crime-scene-specific surfaces does not yet exist. In this work, we extend such considerations by using fourteen 3D surface texture parameters, the 'Birmingham 14'. In our experiment we define these surface texture parameters, use them to classify ten different materials in this test set-up, and create specific material classes. First experiments further show that some surface texture parameters are sensitive enough to separate fingerprints from carrier surfaces. So far, the use of surface roughness is mainly known within the framework of material quality control. The analysis and classification of the captured 3D topography images from crime scenes is important for adaptive preprocessing that depends on the surface texture; such adaptive preprocessing is necessary for precise detection because of the wide variety of surface textures. We perform a preliminary study on the use of these 3D surface texture parameters as features for fingerprint detection. In combination with a reference sample, we show that surface texture parameters can indicate the presence of a fingerprint and can serve as a feature in latent fingerprint detection.
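    A few standard areal texture parameters (Sa, Sq, Ssk, Sku) of the kind discussed above can be computed directly from a topography height map, as sketched below; the full 'Birmingham 14' parameter set is not reproduced here.

```python
# A few standard areal surface-texture parameters (Sa, Sq, Ssk, Sku) computed
# from a topography height map, as simple examples of the kind of features
# discussed above; the full 'Birmingham 14' set is not reproduced here.
import numpy as np

def areal_texture_params(height_map):
    z = height_map - np.mean(height_map)      # deviations from the mean plane
    sq = np.sqrt(np.mean(z ** 2))             # RMS height
    return {
        "Sa": np.mean(np.abs(z)),             # arithmetic mean height
        "Sq": sq,
        "Ssk": np.mean(z ** 3) / sq ** 3,     # skewness of the height distribution
        "Sku": np.mean(z ** 4) / sq ** 4,     # kurtosis of the height distribution
    }
```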

  19. Object Detection under Noisy Condition

    NASA Astrophysics Data System (ADS)

    Halkarnikar, P. P.; Khandagle, H. P.; Talbar, S. N.; Vasambekar, P. N.

    2010-11-01

    Identifying moving objects from a video sequence is a fundamental and critical task in many computer-vision applications. Such automatic object detection software has many applications in surveillance, autonomous navigation and robotics. A common approach is to perform background subtraction, which identifies moving objects from portions of a video sequence. Such software works well under normal conditions but tends to give false alarms when tested in real-life conditions. These situations arise due to fog, smoke, glare, etc.; they are termed noisy conditions, and objects must be detected under them. In this paper we create noise by adding standard Gaussian noise to clean video and compare the response of the detection system at various noise levels.
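    A minimal sketch of the noise model described above, assuming 8-bit frames: Gaussian noise of a chosen standard deviation is added to a clean frame before it is fed to the detector.

```python
# Simple illustration of the noise model described above: additive Gaussian
# noise applied to a clean 8-bit frame at a chosen standard deviation.
import numpy as np

def add_gaussian_noise(frame, sigma):
    """frame: uint8 image array; sigma: noise standard deviation in gray levels."""
    noisy = frame.astype(np.float32) + np.random.normal(0.0, sigma, frame.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# e.g. evaluate the detector on add_gaussian_noise(frame, s) for s in (5, 15, 30)
```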

  20. 3D profile measurements of objects by using zero order Generalized Morse Wavelet

    NASA Astrophysics Data System (ADS)

    Kocahan, Özlem; Durmuş, Çağla; Elmas, Merve Naz; Coşkun, Emre; Tiryaki, Erhan; Özder, Serhat

    2017-02-01

    Generalized Morse wavelets are proposed to evaluate the phase information from a projected fringe pattern with a spatial carrier frequency in the x direction. The height profile of the object is determined through the phase-change distribution using the phase of the continuous wavelet transform. The phase distribution is extracted from the optical fringe pattern by choosing the zero-order Generalized Morse Wavelet (GMW) as the mother wavelet. In this study, the standard fringe projection technique is used to obtain the images. Experimental results for the GMW phase method are compared with those of the Morlet and Paul wavelet transforms.

  1. Strain-Initialized Robust Bone Surface Detection in 3-D Ultrasound.

    PubMed

    Hussain, Mohammad Arafat; Hodgson, Antony J; Abugharbieh, Rafeef

    2017-03-01

    Three-dimensional ultrasound has been increasingly considered as a safe radiation-free alternative to radiation-based fluoroscopic imaging for surgical guidance during computer-assisted orthopedic interventions, but because ultrasound images contain significant artifacts, it is challenging to automatically extract bone surfaces from these images. We propose an effective way to extract 3-D bone surfaces using a surface growing approach that is seeded from 2-D bone contours. The initial 2-D bone contours are estimated from a combination of ultrasound strain images and envelope power images. Novel features of the proposed method include: (i) improvement of a previously reported 2-D strain imaging-based bone segmentation method by incorporation of a depth-dependent cumulative power of the envelope into the elastographic data; (ii) incorporation of an echo decorrelation measure-based weight to fuse the strain and envelope maps; (iii) use of local statistics of the bone surface candidate points to detect the presence of any bone discontinuity; and (iv) an extension of our 2-D bone contour into a 3-D bone surface by use of an effective surface growing approach. Our new method produced average improvements in the mean absolute error of 18% and 23%, respectively, on 2-D and 3-D experimental phantom data, compared with those of two state-of-the-art bone segmentation methods. Validation on 2-D and 3-D clinical in vivo data also reveals, respectively, an average improvement in the mean absolute fitting error of 55% and an 18-fold improvement in the computation time.

  2. Early detection of liver fibrosis in rats using 3-D ultrasound Nakagami imaging: a feasibility evaluation.

    PubMed

    Ho, Ming-Chih; Tsui, Po-Hsiang; Lee, Yu-Hsin; Chen, Yung-Sheng; Chen, Chiung-Nien; Lin, Jen-Jen; Chang, Chien-Cheng

    2014-09-01

    We investigated the feasibility of using 3-D ultrasound Nakagami imaging to detect the early stages of liver fibrosis in rats. Fibrosis was induced in livers of rats (n = 60) by intraperitoneal injection of 0.5% dimethylnitrosamine (DMN). Group 1 was the control group, and rats in groups 2-6 received DMN injections for 1-5 weeks, respectively. Each rat was sacrificed to perform 3-D ultrasound scanning of the liver in vitro using a single-element transducer of 6.5 MHz. The 3-D raw data acquired at a sampling rate of 50 MHz were used to construct 3-D Nakagami images. The liver specimen was further used for histologic analysis with hematoxylin and eosin and Masson staining to score the degree of liver fibrosis. The results indicate that the Metavir scores of the hematoxylin and eosin-stained sections in Groups 1-4 were 0 (defined as early liver fibrosis in this study), and those in groups 5 and 6 ranged from 1 to 2 and 2 to 3, respectively. To quantify the degree of early liver fibrosis, the histologic sections with Masson stain were analyzed to calculate the number of fiber-related blue pixels. The number of blue pixels increased from (2.36 ± 0.79) × 10⁴ (group 1) to (7.68 ± 2.62) × 10⁴ (group 4) after DMN injections for 3 weeks, indicating that early stages of liver fibrosis were successfully induced in rats. The Nakagami parameter increased from 0.36 ± 0.02 (group 1) to 0.55 ± 0.03 (group 4), with increasing numbers of blue pixels in the Masson-stained sections (p-value < 0.05, t-test). We concluded that 3-D Nakagami imaging has potential in the early detection of liver fibrosis in rats and may serve as an image-based pathologic model to visually track fibrosis formation and growth.
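    For reference, the Nakagami shape parameter mapped in Nakagami imaging can be estimated from the envelope data in a local window with the standard moment estimator m = (E[R²])² / Var(R²), as sketched below (an illustration only, not the authors' processing chain).

```python
# Moment-based estimate of the Nakagami shape parameter m from the ultrasound
# envelope R within a local window: m = (E[R^2])^2 / Var(R^2). Illustration only.
import numpy as np

def nakagami_m(envelope):
    r2 = envelope.astype(np.float64) ** 2
    return np.mean(r2) ** 2 / np.var(r2)
```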

  3. Automated kidney detection for 3D ultrasound using scan line searching

    NASA Astrophysics Data System (ADS)

    Noll, Matthias; Nadolny, Anne; Wesarg, Stefan

    2016-04-01

    Ultrasound (U/S) is a fast and inexpensive imaging modality that is used for the examination of various anatomical structures, e.g. the kidneys. One important task for automatic organ tracking or computer-aided diagnosis is the identification of the organ region. During this process, exact information about the transducer location and orientation is usually unavailable, which renders the implementation of such automatic methods exceedingly challenging. In this work we introduce a new automatic method for the detection of the kidney in 3D U/S images. This novel technique analyses the U/S image data along virtual scan lines, searching for the characteristic texture changes that occur when entering and leaving the symmetric tissue regions of the renal cortex. A subsequent feature accumulation along a second scan direction produces a 2D heat map of renal cortex candidates, from which the kidney location is extracted in two steps. First, the strongest candidate as well as its counterpart are extracted by heat-map intensity ranking and renal cortex size analysis. This process exploits the heat-map gap caused by the renal pelvis region; substituting renal pelvis detection with this combined cortex tissue feature increases the detection robustness. In contrast to model-based methods that generate characteristic pattern matches, our method is simpler and therefore faster. An evaluation performed on 61 3D U/S data sets showed that in the 55 cases with no or only minor shadowing, the kidney location could be correctly identified.

  4. Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)

    NASA Astrophysics Data System (ADS)

    Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram

    2014-03-01

    Recent studies have demonstrated that applying Automated Breast Ultrasound in addition to mammography in women with dense breasts can lead to additional detection of small, early stage breast cancers which are occult in corresponding mammograms. In this paper, we proposed a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from Automated Breast Ultrasound Systems. The nipple location is a valuable landmark to report the position of possible abnormalities in a breast or to guide image registration. To detect the nipple location, all images were normalized. Subsequently, features have been extracted in a multi scale approach and classification experiments were performed using a gentle boost classifier to identify the nipple location. The method was applied on a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (Anterior-Posterior) views and in 79% of the other views.

  5. Calculations of Arctic ozone chemistry using objectively analyzed data in a 3-D CTM

    NASA Technical Reports Server (NTRS)

    Kaminski, J. W.; Mcconnell, J. C.; Sandilands, J. W.

    1994-01-01

    A three-dimensional chemical transport model (CTM) (Kaminski, 1992) has been used to study the evolution of the Arctic ozone during the winter of 1992. The continuity equation has been solved using a spectral method with Rhomboidal 15 (R15) truncation and leap-frog time stepping. Six-hourly meteorological fields from the Canadian Meteorological Center global objective analysis routines run at T79 were degraded to the model resolution. In addition, they were interpolated to the model time grid and were used to drive the model from the surface to 10 mb. In the model, processing of Cl(x) occurred over Arctic latitudes but some of the initial products were still present by mid-January. Also, the large amounts of ClO formed in the model in early January were converted to ClNO3. The results suggest that the model resolution may be insufficient to resolve the details of the Arctic transport during this time period. In particular, the wind field does not move the ClO(x) 'cloud' to the south over Europe as seen in the MLS measurements.

  6. Detection of a concealed object

    DOEpatents

    Keller, Paul E.; Hall, Thomas E.; McMakin, Douglas L.

    2008-04-29

    Disclosed are systems, methods, devices, and apparatus to determine if a clothed individual is carrying a suspicious, concealed object. This determination includes establishing data corresponding to an image of the individual through interrogation with electromagnetic radiation in the 200 MHz to 1 THz range. In one form, image data corresponding to intensity of reflected radiation and differential depth of the reflecting surface is received and processed to detect the suspicious, concealed object.

  7. Detection of a concealed object

    DOEpatents

    Keller, Paul E [Richland, WA; Hall, Thomas E [Kennewick, WA; McMakin, Douglas L [Richland, WA

    2010-11-16

    Disclosed are systems, methods, devices, and apparatus to determine if a clothed individual is carrying a suspicious, concealed object. This determination includes establishing data corresponding to an image of the individual through interrogation with electromagnetic radiation in the 200 MHz to 1 THz range. In one form, image data corresponding to intensity of reflected radiation and differential depth of the reflecting surface is received and processed to detect the suspicious, concealed object.

  8. Biocompatible 3D SERS substrate for trace detection of amino acids and melamine.

    PubMed

    Satheeshkumar, Elumalai; Karuppaiya, Palaniyandi; Sivashanmugan, Kundan; Chao, Wei-Ting; Tsay, Hsin-Sheng; Yoshimura, Masahiro

    2017-03-21

    A novel, low-cost and biocompatible three-dimensional (3D) substrate for surface-enhanced Raman spectroscopy (SERS) is fabricated using gold nanoparticles (AuNPs) loaded on cellulose paper for the detection of amino acids and melamine. Dysosma pleiantha rhizome (Dp-Rhi) capped AuNPs (Dp-Rhi_AuNPs) were prepared in situ using an aqueous extract of Dp-Rhi; the in situ functionalization of Dp-Rhi on the AuNP surface was verified by Fourier transform infrared spectroscopy, and zeta potential analysis shows a negative surface charge (-18.4 mV), confirming the presence of Dp-Rhi on the AuNPs. The biocompatibility of Dp-Rhi_AuNPs was also examined via the viability of FaDu cells using an MTS assay and compared to a control group. The SERS performance of the AuNPs@cellulose paper substrates was systematically demonstrated and examined at different excitation wavelengths (532, 632.8 and 785 nm lasers), and the as-prepared 3D substrates provided an enhancement factor approaching 7 orders of magnitude compared with conventional Raman intensity, using para-nitrothiophenol (p-NTP), para-aminothiophenol (p-ATP) and para-mercaptobenzoic acid (p-MBA) as probe molecules. The strong electromagnetic effect generated at the interface of the AuNPs and the pre-treated roughened cellulose paper was also investigated by simulation, indicating the formation of possible Raman hot-spot zones in the fiber-like microstructure of cellulose paper decorated with AuNPs. Notably, under optimized conditions the as-prepared 3D AuNPs@cellulose paper is highly sensitive in the SERS detection of aqueous tyrosine (10⁻¹⁰ M) and melamine (10⁻⁹ M).

  9. Segmentation of complex objects with non-spherical topologies from volumetric medical images using 3D livewire

    NASA Astrophysics Data System (ADS)

    Poon, Kelvin; Hamarneh, Ghassan; Abugharbieh, Rafeef

    2007-03-01

    Segmentation of 3D data is one of the most challenging tasks in medical image analysis. While reliable automatic methods are typically preferred, their success is often hindered by poor image quality and significant variations in anatomy. Recent years have thus seen an increasing interest in the development of semi-automated segmentation methods that combine computational tools with intuitive, minimal user interaction. In an earlier work, we introduced a highly-automated technique for medical image segmentation, where a 3D extension of the traditional 2D Livewire was proposed. In this paper, we present an enhanced and more powerful 3D Livewire-based segmentation approach with new features designed to primarily enable the handling of complex object topologies that are common in biological structures. The point ordering algorithm we proposed earlier, which automatically pairs up seedpoints in 3D, is improved in this work such that multiple sets of points are allowed to simultaneously exist. Point sets can now be automatically merged and split to accommodate for the presence of concavities, protrusions, and non-spherical topologies. The robustness of the method is further improved by extending the 'turtle algorithm', presented earlier, by using a turtle-path pruning step. Tests on both synthetic and real medical images demonstrate the efficiency, reproducibility, accuracy, and robustness of the proposed approach. Among the examples illustrated is the segmentation of the left and right ventricles from a T1-weighted MRI scan, where an average task time reduction of 84.7% was achieved when compared to a user performing 2D Livewire segmentation on every slice.

  10. Benchmarking of state-of-the-art needle detection algorithms in 3D ultrasound data volumes

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; de With, Peter H. N.; Korsten, Hendrikus H. M.; Mihajlovic, Nenad

    2015-03-01

    Ultrasound-guided needle interventions are widely practiced in medical diagnostics and therapy, e.g. for biopsy guidance, regional anesthesia or brachytherapy. Needle guidance using 2D ultrasound can be very challenging due to the poor needle visibility and the limited field of view. Since 3D ultrasound transducers are becoming more widely used, needle guidance can be improved and simplified with appropriate computer-aided analyses. In this paper, we compare two state-of-the-art 3D needle detection techniques: a technique based on line filtering from the literature and a system employing the Gabor transformation. Both algorithms utilize supervised classification to pre-select candidate needle voxels in the volume and then fit a model of the needle to the selected voxels. The major differences between the two approaches lie in the extraction of the feature vectors for classification and the selection of the fitting criterion. We evaluate the performance of the two techniques using manually annotated ground truth in several ex-vivo situations of different complexities, containing three different needle types at various insertion angles. This extensive evaluation provides a better understanding of the limitations and advantages of each technique under different acquisition conditions, leading to the development of improved techniques for more reliable and accurate localization. Benchmarking shows that the Gabor features are better at distinguishing the needle voxels in all datasets. Moreover, the complete processing chain of the Gabor-based method outperforms the line filtering in accuracy and stability of the detection results.

  11. Reference Frames and 3-D Shape Perception of Pictured Objects: On Verticality and Viewpoint-From-Above

    PubMed Central

    van Doorn, Andrea J.; Wagemans, Johan

    2016-01-01

    Research on the influence of reference frames has generally focused on visual phenomena such as the oblique effect, the subjective visual vertical, the perceptual upright, and ambiguous figures. Another line of research concerns mental rotation studies in which participants had to discriminate between familiar or previously seen 2-D figures or pictures of 3-D objects and their rotated versions. In the present study, we disentangled the influence of the environmental and the viewer-centered reference frame, as classically done, by comparing the performances obtained in various picture and participant orientations. However, this time, the performance is the pictorial relief: the probed 3-D shape percept of the depicted object reconstructed from the local attitude settings of the participant. Comparisons between the pictorial reliefs based on different picture and participant orientations led to two major findings. First, in general, the pictorial reliefs were highly similar if the orientation of the depicted object was vertical with regard to the environmental or the viewer-centered reference frame. Second, a viewpoint-from-above interpretation could almost completely account for the shears occurring between the pictorial reliefs. More specifically, the shears could largely be considered as combinations of slants generated from the viewpoint-from-above, which was determined by the environmental as well as by the viewer-centered reference frame. PMID:27433329

  12. Object Detection using the Kinect

    DTIC Science & Technology

    2012-03-01

    ARL-TN-0474, March 2012. Jason Owens, Vehicle Technology Directorate, ARL. Approved for public release.

  13. Parallel phase-shifting digital holography and its application to high-speed 3D imaging of dynamic object

    NASA Astrophysics Data System (ADS)

    Awatsuji, Yasuhiro; Xia, Peng; Wang, Yexin; Matoba, Osamu

    2016-03-01

    Digital holography is a technique for the 3D measurement of objects. The technique uses an image sensor to record an interference fringe image containing the complex amplitude of the object, and numerically reconstructs that complex amplitude by computer. Parallel phase-shifting digital holography is capable of accurate 3D measurement of dynamic objects, because it can reconstruct the complex amplitude of the object, with no undesired images superimposed, from a single hologram. The undesired images are the non-diffraction wave and the conjugate image associated with holography. In parallel phase-shifting digital holography, a hologram whose reference-wave phase is shifted spatially and periodically every other pixel is recorded, so that the complex amplitude of the object is obtained in a single-shot exposure. The recorded hologram is decomposed into the multiple holograms required for phase-shifting digital holography, and the complex amplitude of the object, free from the undesired images, is reconstructed from them. To validate parallel phase-shifting digital holography, a high-speed parallel phase-shifting digital holography system was constructed. The system consists of a Mach-Zehnder interferometer, a continuous-wave laser, and a high-speed polarization imaging camera. A phase motion picture of dynamic air flow sprayed from a nozzle was recorded at 180,000 frames per second (FPS) by the system. A phase motion picture of air flow induced by discharge between two electrodes was also recorded at 1,000,000 FPS when a high voltage was applied between the electrodes.
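    A hedged sketch of the reconstruction step is given below, assuming a conventional four-step scheme in which the reference phases 0, π/2, π and 3π/2 are multiplexed over 2×2 pixel cells of the single recorded hologram; the actual pixel arrangement depends on the polarization camera used.

```python
# Hedged sketch of four-step phase-shifting reconstruction, assuming the four
# reference phases 0, pi/2, pi, 3*pi/2 are multiplexed over 2x2 pixel cells of a
# single hologram (the exact pixel arrangement depends on the camera used).
import numpy as np

def reconstruct_phase(hologram):
    """hologram: 2-D array whose 2x2 cells carry the four phase-shifted samples."""
    i1 = hologram[0::2, 0::2]   # reference phase 0
    i2 = hologram[0::2, 1::2]   # reference phase pi/2
    i3 = hologram[1::2, 0::2]   # reference phase pi
    i4 = hologram[1::2, 1::2]   # reference phase 3*pi/2
    # Standard four-step formula: complex amplitude ~ (i1 - i3) + j*(i4 - i2)
    return np.arctan2(i4 - i2, i1 - i3)
```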

  14. 3D polypyrrole structures as a sensing material for glucose detection

    NASA Astrophysics Data System (ADS)

    Cysewska, Karolina; Szymańska, Magdalena; Jasiński, Piotr

    2016-11-01

    In this work, 3D polypyrrole (PPy) structures are proposed as a sensing material for glucose detection. Polypyrrole was electrochemically polymerized on a platinum screen-printed electrode from an aqueous solution of lithium perchlorate and pyrrole. The growth mechanism of these PPy structures was studied by ex-situ scanning electron microscopy. Preliminary studies show that the PPy film studied here is a good candidate as a sensing material for a glucose biosensor. It exhibits very high sensitivity (28.5 mA·mM⁻¹·cm⁻²) and can work without any additional dopants, mediators or enzymes. It was also shown that glucose detection depends on the PPy morphology. The same PPy material was immobilized with the glucose oxidase enzyme; such a material exhibited a higher signal response, but lost its stability very quickly.

  15. 3D Ag/ZnO hybrids for sensitive surface-enhanced Raman scattering detection

    NASA Astrophysics Data System (ADS)

    Huang, Chenyue; Xu, Chunxiang; Lu, Junfeng; Li, Zhaohui; Tian, Zhengshan

    2016-03-01

    To combine the surface plasmon resonance of the metal with local field enhancement at the metal/semiconductor interface, Ag nanoparticles (NPs) were assembled on a ZnO nanorod array grown hydrothermally on carbon fibers. This three-dimensional (3D) surface-enhanced Raman scattering (SERS) substrate is used for the sensitive detection of organic pollutants, with advantages such as facile synthesis, short detection time and low cost. The hybrid substrate exhibited high sensitivity to phenol red at a concentration as low as 1 × 10⁻⁹ M and an enhancement factor of 3.18 × 10⁹. Moreover, the ZnO nanostructures decorated with Ag NPs demonstrated a self-cleaning function under UV irradiation via photocatalytic degradation of the analyte molecules. The fabrication process of the materials and sensors, the optimization of the SERS behavior for different-sized Ag NPs, and the mechanisms of SERS enhancement and recovery are presented with a detailed discussion.

  16. Diffuse reflectance optical topography: location of inclusions in 3D and detectability limits

    PubMed Central

    Carbone, N. A.; Baez, G. R.; García, H. A.; Waks Serra, M. V.; Di Rocco, H. O.; Iriarte, D. I.; Pomarico, J. A.; Grosenick, D.; Macdonald, R.

    2014-01-01

    In the present contribution we investigate the images of CW diffusely reflected light for a point-like source, registered by a CCD camera imaging a turbid medium containing an absorbing lesion. We show that detection of μa variations (absorption anomalies) is achieved if images are normalized to background intensity. A theoretical analysis based on the diffusion approximation is presented to investigate the sensitivity and the limitations of our proposal and a novel procedure to find the location of the inclusions in 3D is given and tested. An analysis of the noise and its influence on the detection capabilities of our proposal is provided. Experimental results on phantoms are also given, supporting the proposed approach. PMID:24876999

  17. Workflows and the Role of Images for Virtual 3d Reconstruction of no Longer Extant Historic Objects

    NASA Astrophysics Data System (ADS)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance as tools for the research and visualization of no longer extant historic objects during the last decade. Within such reconstruction processes, visual media assumes several important roles: as the most important sources especially for a reconstruction of no longer extant objects, as a tool for communication and cooperation within the production process, as well as for a communication and visualization of results. While there are many discourses about theoretical issues of depiction as sources and as visualization outcomes of such projects, there is no systematic research about the importance of depiction during a 3D reconstruction process and based on empirical findings. Moreover, from a methodological perspective, it would be necessary to understand which role visual media plays during the production process and how it is affected by disciplinary boundaries and challenges specific to historic topics. Research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from social sciences to gain a grounded view of how production processes would take place in practice and which functions and roles images would play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the field of humanities has been completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects. Only a small number of projects focused on structures that no longer or never existed physically. Especially that type of project seems to be interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination the authors of this paper applied a qualitative content analysis for a sample of 26 previously

  18. Research on conflict detection algorithm in 3D visualization environment of urban rail transit line

    NASA Astrophysics Data System (ADS)

    Wang, Li; Xiong, Jing; You, Kuokuo

    2017-03-01

    In this paper, a collision detection method is introduced and applied to three-dimensional modeling of underground buildings and urban rail lines, enabling rapid extraction of the buildings that conflict with the track area in a 3D visualization environment. According to the characteristics of the buildings, they are modeled with CSG and B-rep representations. Based on these modeling characteristics, this paper proposes to use an AABB bounding volume hierarchy for a first, coarse conflict test to improve detection efficiency, followed by a fast triangle-triangle intersection test to determine whether a building actually collides with the track area. With this algorithm, buildings colliding with the influence area of the track line can be extracted quickly, helping to design the line, choose the best route and calculate the cost of land acquisition in the three-dimensional visualization environment.
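    The coarse AABB test used in the first conflict-detection pass amounts to checking interval overlap on each axis, as in the minimal sketch below.

```python
# Minimal axis-aligned bounding box (AABB) overlap test of the kind used for the
# coarse collision pass described above; a box is given as (min_xyz, max_xyz).
import numpy as np

def aabb_overlap(box_a, box_b):
    """box: tuple of two length-3 arrays (min corner, max corner)."""
    min_a, max_a = np.asarray(box_a[0]), np.asarray(box_a[1])
    min_b, max_b = np.asarray(box_b[0]), np.asarray(box_b[1])
    # Boxes intersect iff their extents overlap on every axis
    return bool(np.all(max_a >= min_b) and np.all(max_b >= min_a))

# Only pairs passing this cheap test proceed to the exact triangle-triangle check.
```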

  19. Application of image stitching in rail abrasion 3D online detection

    NASA Astrophysics Data System (ADS)

    Lee, Jinlong; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke; Luo, Lin

    2016-09-01

    PMP (phase measuring profilometry) is an excellent 3D online measurement method because of its high precision, but its measuring range is limited. Since a rail is far longer than this limit, image stitching must be used to extend it. In this paper, based on the improved Stoilov algorithm, the rail shape is reconstructed in three dimensions and the abrasion is detected in combination with image stitching. Two schemes are investigated: (1) image stitching is first applied to the deformed fringe patterns and a longer rail section is then reconstructed with the Stoilov algorithm; (2) the three-dimensional reconstruction of the two fringe patterns is performed first, and the reconstructed images are then stitched into a longer rail. The improved Stoilov algorithm, based on a statistical approach, and the stitching algorithm are analyzed. A 3D Peaks function is simulated to verify the two methods, the three-dimensional rail shape is then recovered with both methods, and the rail abrasion is measured with a relative precision better than 0.1%, which is much higher than that of traditional methods such as linear laser scanning.

  20. Early detection of skin cancer via terahertz spectral profiling and 3D imaging.

    PubMed

    Rahman, Anis; Rahman, Aunik K; Rao, Babar

    2016-08-15

    Terahertz scanning reflectometry, terahertz 3D imaging and terahertz time-domain spectroscopy have been used to identify features in human skin biopsy samples diagnosed with basal cell carcinoma (BCC) and to compare them with healthy skin samples. The 3D images show that the healthy skin samples exhibit a regular cellular pattern while the BCC skin samples lack a regular cell pattern. The skin is a highly layered organ; this is evident from a scan through the thickness of the healthy skin samples, where the reflected intensity of the terahertz beam exhibits fluctuations originating from the different skin layers. Compared to the healthy skin samples, the BCC samples' profiles exhibit significantly diminished layer definition, indicating a lack of cellular order. In addition, terahertz time-domain spectroscopy reveals significant and quantifiable differences between the healthy and BCC skin samples. Thus, a combination of three different terahertz techniques constitutes a conclusive route for detecting the BCC condition at the cellular level relative to healthy skin.

  1. 3D Seismic Flexure Analysis for Subsurface Fault Detection and Fracture Characterization

    NASA Astrophysics Data System (ADS)

    Di, Haibin; Gao, Dengliang

    2017-03-01

    Seismic flexure is a new geometric attribute with the potential of delineating subtle faults and fractures from three-dimensional (3D) seismic surveys, especially those overlooked by the popular discontinuity and curvature attributes. Although the concept of flexure and its related algorithms have been published in the literature, the attribute has not been sufficiently applied to subsurface fault detection and fracture characterization. This paper provides a comprehensive study of the flexure attribute, including its definition, computation, as well as geologic implications for evaluating the fundamental fracture properties that are essential to fracture characterization and network modeling in the subsurface, through applications to the fractured reservoir at Teapot Dome, Wyoming (USA). Specifically, flexure measures the third-order variation of the geometry of a seismic reflector and is dependent on the measuring direction in 3D space; among all possible directions, flexure is considered most useful when extracted perpendicular to the orientation of dominant deformation; and flexure offers new insights into qualitative/quantitative fracture characterization, with its magnitude indicating the intensity of faulting and fracturing, its azimuth defining the orientation of most-likely fracture trends, and its sign differentiating the sense of displacement of faults and fractures.

  2. Application for 3d Scene Understanding in Detecting Discharge of Domestic Waste Along Complex Urban Rivers

    NASA Astrophysics Data System (ADS)

    Ninsalam, Y.; Qin, R.; Rekittke, J.

    2016-06-01

    In our study we use 3D scene understanding to detect the discharge of domestic solid waste along an urban river. Solid waste found along the Ciliwung River in the neighbourhoods of Bukit Duri and Kampung Melayu may be attributed to households. This is in part due to inadequate municipal waste infrastructure and services, which have caused those living along the river to rely upon it for waste disposal. However, there has been little research to understand the prevalence of household waste along the river. Our aim is to develop a methodology that deploys a low-cost sensor to identify point-source discharge of solid waste using image classification methods. To demonstrate this we describe the following five-step method: 1) a strip of GoPro images is captured photogrammetrically and processed for dense point cloud generation; 2) a depth map for each image is generated through backward projection of the point clouds; 3) a supervised image classification method based on a Random Forest classifier is applied to the view-dependent red, green, blue and depth (RGB-D) data; 4) point discharge locations of solid waste are then mapped by projecting the classified images onto the 3D point clouds; 5) the landscape elements are classified into five types: vegetation, human settlement, soil, water and solid waste. While this work is still ongoing, the initial results demonstrate that it is possible to perform quantitative studies that may help reveal and estimate the amount of waste present along the river bank.
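
    A minimal sketch of the per-pixel classification step (step 3) with a Random Forest on RGB-D features is given below; the arrays, five-class labels and forest parameters are illustrative assumptions, not the authors' configuration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Illustrative stand-ins for labelled training pixels: per-pixel RGB-D features
        # (red, green, blue, depth) and one of five assumed landscape classes.
        rng = np.random.default_rng(0)
        X_train = rng.random((5000, 4))               # 5000 labelled pixels x 4 features
        y_train = rng.integers(0, 5, size=5000)       # 0..4: vegetation, settlement, soil, water, waste

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train, y_train)

        # Classify every pixel of a new 480 x 640 RGB-D image.
        h, w = 480, 640
        X_test = rng.random((h * w, 4))
        label_map = clf.predict(X_test).reshape(h, w)
        print(np.bincount(label_map.ravel(), minlength=5))  # pixel count per class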

  3. 3D Seismic Flexure Analysis for Subsurface Fault Detection and Fracture Characterization

    NASA Astrophysics Data System (ADS)

    Di, Haibin; Gao, Dengliang

    2016-10-01

    Seismic flexure is a new geometric attribute with the potential of delineating subtle faults and fractures from three-dimensional (3D) seismic surveys, especially those overlooked by the popular discontinuity and curvature attributes. Although the concept of flexure and its related algorithms have been published in the literature, the attribute has not been sufficiently applied to subsurface fault detection and fracture characterization. This paper provides a comprehensive study of the flexure attribute, including its definition, computation, as well as geologic implications for evaluating the fundamental fracture properties that are essential to fracture characterization and network modeling in the subsurface, through applications to the fractured reservoir at Teapot Dome, Wyoming (USA). Specifically, flexure measures the third-order variation of the geometry of a seismic reflector and is dependent on the measuring direction in 3D space; among all possible directions, flexure is considered most useful when extracted perpendicular to the orientation of dominant deformation; and flexure offers new insights into qualitative/quantitative fracture characterization, with its magnitude indicating the intensity of faulting and fracturing, its azimuth defining the orientation of most-likely fracture trends, and its sign differentiating the sense of displacement of faults and fractures.

  4. Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection

    NASA Astrophysics Data System (ADS)

    Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.

    2016-06-01

    In recent years, indoor modelling and navigation has become a research topic of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair users, building crisis management such as fire protection, and augmented reality for gaming, tourism or the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of indoor spaces, including the position and geometry of openings such as windows and doors and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles during route planning and using these to readapt the routes according to the real state of the indoor space captured by the laser scanner.
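
    One way to picture the obstacle-detection step (a sketch under assumed parameters, not the authors' algorithm) is to rasterize the point cloud into a 2D occupancy grid at walking height and treat occupied cells as non-traversable for the route planner.

        import numpy as np

        def occupancy_grid(points, cell=0.1, z_min=0.1, z_max=1.9):
            """Mark grid cells containing points within an assumed walking-height band."""
            band = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
            ij = np.floor(band[:, :2] / cell).astype(int)
            ij -= ij.min(axis=0)                    # shift to non-negative indices
            grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
            grid[ij[:, 0], ij[:, 1]] = True         # occupied cells block the route
            return grid

        # Toy indoor point cloud: floor points plus a box-shaped obstacle.
        rng = np.random.default_rng(1)
        floor = np.c_[rng.random((2000, 2)) * 10.0, np.zeros(2000)]
        box = np.c_[3 + rng.random((500, 2)), 0.5 * np.ones(500)]
        grid = occupancy_grid(np.vstack([floor, box]))
        print(grid.shape, int(grid.sum()), "occupied cells")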

  5. Construction of 3D micropatterned surfaces with wormlike and superhydrophilic PEG brushes to detect dysfunctional cells.

    PubMed

    Hou, Jianwen; Shi, Qiang; Ye, Wei; Fan, Qunfu; Shi, Hengchong; Wong, Shing-Chung; Xu, Xiaodong; Yin, Jinghua

    2014-12-10

    Detection of dysfunctional and apoptotic cells plays an important role in clinical diagnosis and therapy. To develop a portable and user-friendly platform for the detection of dysfunctional and aging cells, we present a facile method to construct 3D patterns on the surface of styrene-b-(ethylene-co-butylene)-b-styrene elastomer (SEBS) with poly(ethylene glycol) (PEG) brushes. Normal red blood cells (RBCs) and lysed RBCs (dysfunctional cells) are used as model cells. The strategy is based on the fact that poly(ethylene glycol) brushes tend to interact with phosphatidylserine, which resides in the inner leaflet of normal cell membranes but becomes exposed in abnormal or apoptotic cell membranes. We demonstrate that varied patterned surfaces can be obtained by selectively patterning atom transfer radical polymerization (ATRP) initiators on the SEBS surface via an aqueous-based method and growing PEG brushes through surface-initiated ATRP. The relatively high initiator density and polymerization temperature facilitate the formation of high-density PEG brushes, which gives the brushes a wormlike morphology and a superhydrophilic property; the adhesion behavior of dysfunctional cells on the patterned surfaces is completely different from the well-defined arrays formed by normal cells, providing a facile method to detect dysfunctional cells effectively. The PEG-patterned surfaces are also applicable to the detection of apoptotic HeLa cells. The simplicity and easy handling of the described technique show its potential for application in microdiagnostic devices.

  6. Tubular Enhanced Geodesic Active Contours for Continuum Robot Detection using 3D Ultrasound.

    PubMed

    Ren, Hongliang; Dupont, Pierre E

    2012-01-01

    Three dimensional ultrasound is a promising imaging modality for minimally invasive robotic surgery. As the robots are typically metallic, they interact strongly with the sound waves in ways that are not modeled by the ultrasound system's signal processing algorithms. Consequently, they produce substantial imaging artifacts that can make image guidance difficult, even for experienced surgeons. This paper introduces a new approach for detecting curved continuum robots in 3D ultrasound images. The proposed approach combines geodesic active contours with a speed function that is based on enhancing the "tubularity" of the continuum robot. In particular, it takes advantage of the known robot diameter along its length. It also takes advantage of the fact that the robot surface facing the ultrasound probe provides the most accurate image. This method, termed Tubular Enhanced Geodesic Active Contours (TEGAC), is demonstrated through ex vivo intracardiac experiments to offer superior performance compared to conventional active contours.

  7. Detecting Distance between Injected Microspheres and Target Tumor via 3D Reconstruction of Tissue Sections

    SciTech Connect

    Carson, James P.; Kuprat, Andrew P.; Colby, Sean M.; Davis, Cassi A.; Basciano, Christopher; Greene, Kevin; Feo, John T.; Kennedy, Andrew

    2012-08-28

    One treatment increasing in use for solid tumors in the liver is radioembolization via the delivery of 90Y microspheres to the vascular bed within or near the location of the tumor. It is desirable as part of the treatment for the microspheres to embed preferentially in or near the tumor. This work details an approach for analyzing the deposition of microspheres with respect to the location of the tumor. The approach used is based upon thin-slice serial sectioning of the tissue sample, followed by high resolution imaging, microsphere detection, and 3-D reconstruction of the tumor surface. Distance from the microspheres to the tumor was calculated using a fast deterministic point inclusion method.
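
    A minimal sketch of the microsphere-to-tumor distance computation (assumed data structures, not the authors' code): given detected microsphere centroids and the vertices of a reconstructed tumor surface, nearest-distance queries can be answered with a k-d tree. Querying surface vertices rather than triangles is a simplification; it slightly overestimates the true point-to-surface distance.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(2)

        # Hypothetical reconstructed tumor surface vertices (mm) and microsphere centroids.
        tumor_vertices = rng.normal(size=(20000, 3)) * 5.0
        microspheres = rng.normal(size=(300, 3)) * 8.0

        tree = cKDTree(tumor_vertices)
        dist, _ = tree.query(microspheres)          # nearest surface vertex per sphere
        print("median distance to tumor surface: %.2f mm" % np.median(dist))
        print("fraction within 1 mm: %.2f" % float(np.mean(dist < 1.0)))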

  8. Automatic Dent-landmark detection in 3-D CBCT dental volumes.

    PubMed

    Cheng, Erkang; Chen, Jinwu; Yang, Jie; Deng, Huiyang; Wu, Yi; Megalooikonomou, Vasileios; Gable, Bryce; Ling, Haibin

    2011-01-01

    Orthodontic craniometric landmarks provide critical information in oral and maxillofacial imaging diagnosis and treatment planning. The Dent-landmark, defined as the odontoid process of the epistropheus, is one of the key landmarks used to construct the midsagittal reference plane. In this paper, we propose a learning-based approach to automatically detect the Dent-landmark in 3D cone-beam computed tomography (CBCT) dental data. Specifically, a detector is learned using a random forest with sampled context features. Furthermore, we use a spatial prior to build a constrained search space rather than searching the full three-dimensional space. The proposed method has been evaluated on a dataset containing 73 CBCT dental volumes and yields promising results.

  9. Development and Evaluation of Roadside/Obstacle Detection Method Using 3D Scanned Data Processing

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hiroshi; Ishii, Yoshinori; Yamazaki, Katsuyuki

    In this paper, we report the development of a snowblower support system that can safely navigate snowblowers, even during a whiteout, by combining a very accurate GPS system, the so-called RTK-GPS, with a unique and highly accurate map of roadsides and obstacles on roads. The new techniques particularly emphasized in this paper are ways to detect accurate geographical positions of roadsides and obstacles by utilizing and analyzing 3D laser-scanned data, which has recently become available. The experiment has shown that the map created by these methods together with RTK-GPS can sufficiently navigate snowblowers, whereby a secure and pleasant social environment can be achieved in the snowy areas of Japan. In addition, the proposed methods are expected to be useful for other systems, such as the quick development of a highly accurate road map, the safe navigation of a wheelchair, and so on.

  10. Detection accuracy of condylar bony defects in Promax 3D cone beam CT images scanned with different protocols

    PubMed Central

    Zhang, Z-L; Cheng, J-G; Li, G; Shi, X-Q; Zhang, J-Z; Zhang, Z-Y; Ma, X-C

    2013-01-01

    Objectives: To investigate and compare the detection accuracy of bony defects on the condylar surface of the temporomandibular joint (TMJ) in cone beam CT (CBCT) images scanned with standard and large view protocols on the same machine. Methods: 21 dry human skulls with 42 TMJs were scanned with the large view and standard view protocols of the CBCT scanner Promax 3D (Planmeca, Helsinki, Finland). Seven observers evaluated all the images for the presence or absence of defects on the surface of the condyle. Using the macroscopic examination of condylar defects as the gold standard, receiver operating characteristic (ROC) analysis was performed. Results: Macroscopic examination revealed that, of the 42 condyles, 18 were normal and 24 had a defect on the condylar surface. Areas under the ROC curves for the large view and the standard view group of CBCT images were 0.739 and 0.720, respectively, and no significant difference was found between the two groups of images (p = 0.902). Neither the interobserver nor the intraobserver variability was significant. Conclusions: The two scanning protocols provided by the CBCT scanner Promax 3D were reliable and comparable for the detection of condylar defects. PMID:23420852

  11. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects

    NASA Astrophysics Data System (ADS)

    Ye, Zhou; Nain, Amrinder S.; Behkam, Bahareh

    2016-06-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by the limited adaptability of state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high-throughput (>10^-7 m^2 s^-1) and large-area fabrication of sub-50 nm to several-micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible, to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ~ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features.
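
    The reported scaling b^1.5 ~ D can be checked on measurements with a simple log-log fit; the numbers below are hypothetical values chosen only to illustrate the calculation, not data from the paper.

        import numpy as np

        # Hypothetical fiber diameters D (nm) and measured feature sizes b (nm).
        D = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
        b = np.array([22.0, 34.0, 55.0, 86.0, 138.0])

        # If b**1.5 is proportional to D, then log b = (1/1.5) log D + const, i.e. slope ~ 0.667.
        slope, intercept = np.polyfit(np.log(D), np.log(b), 1)
        print("fitted exponent d(log b)/d(log D) = %.3f (expected ~0.667)" % slope)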

  12. Detecting Genetic Association of Common Human Facial Morphological Variation Using High Density 3D Image Registration

    PubMed Central

    Hu, Sile; Zhou, Hang; Guo, Jing; Jin, Li; Tang, Kun

    2013-01-01

    Human facial morphology is a combination of many complex traits. Little is known about the genetic basis of common facial morphological variation. Existing association studies have largely used simple landmark-distances as surrogates for the complex morphological phenotypes of the face. However, this can result in decreased statistical power and unclear inference of shape changes. In this study, we applied a new image registration approach that automatically identified the salient landmarks and aligned the sample faces using high density pixel points. Based on this high density registration, three different phenotype data schemes were used to test the association between the common facial morphological variation and 10 candidate SNPs, and their performances were compared. The first scheme used traditional landmark-distances; the second relied on the geometric analysis of 15 landmarks and the third used geometric analysis of a dense registration of ∼30,000 3D points. We found that the two geometric approaches were highly consistent in their detection of morphological changes. The geometric method using dense registration further demonstrated superiority in the fine inference of shape changes and 3D face modeling. Several candidate SNPs showed potential associations with different facial features. In particular, one SNP, a known risk factor of non-syndromic cleft lips/palates, rs642961 in the IRF6 gene, was validated to strongly predict normal lip shape variation in female Han Chinese. This study further demonstrated that dense face registration may substantially improve the detection and characterization of genetic association in common facial variation. PMID:24339768

  13. Multi-hole seismic modeling in 3-D space and cross-hole seismic tomography analysis for boulder detection

    NASA Astrophysics Data System (ADS)

    Cheng, Fei; Liu, Jiangping; Wang, Jing; Zong, Yuquan; Yu, Mingyu

    2016-11-01

    A boulder, a common geological feature in south China, is the remnant of a granite body that has been unevenly weathered. Undetected boulders could adversely impact the schedule and safety of subway construction when the tunnel boring machine (TBM) method is used. Therefore, boulder detection has always been a key issue that must be solved before construction. Cross-hole seismic tomography is a high-resolution technique capable of boulder detection; however, the method can only solve for velocity in a 2-D slice between two wells, and the size and central position of the boulder are generally difficult to obtain accurately. In this paper, the authors conduct a multi-hole wave field simulation and characteristic analysis of a boulder model based on 3-D elastic wave staggered-grid finite difference theory, as well as a 2-D imaging analysis based on first-arrival travel time. The results indicate that (1) full wave field records can be obtained from multi-hole seismic wave simulations. The simulation results show the seismic wave propagation pattern around cross-hole high-velocity spherical geological bodies in detail and can serve as a basis for the wave field analysis. (2) When a cross-hole seismic section cuts through the boulder, the proposed method provides satisfactory cross-hole tomography results; however, when the section is closely positioned to the boulder, such a high-velocity object in 3-D space influences the surrounding wave field. The received diffracted wave interferes with the primary wave, and in consequence the picked first-arrival travel time is not derived from the profile, which results in a false appearance of high-velocity geological features. Finally, the results of the 2-D analysis in 3-D modeling space are compared with a physical model test with respect to the effect of a high-velocity body on the seismic tomographic measurements.

  14. Shape and motion reconstruction from 3D-to-1D orthographically projected data via object-image relations.

    PubMed

    Ferrara, Matthew; Arnold, Gregory; Stuff, Mark

    2009-10-01

    This paper describes an invariant-based shape- and motion reconstruction algorithm for 3D-to-1D orthographically projected range data taken from unknown viewpoints. The algorithm exploits the object-image relation that arises in echo-based range data and represents a simplification and unification of previous work in the literature. Unlike one proposed approach, this method does not require uniqueness constraints, which makes its algorithmic form independent of the translation removal process (centroid removal, range alignment, etc.). The new algorithm, which simultaneously incorporates every projection and does not use an initialization in the optimization process, requires fewer calculations and is more straightforward than the previous approach. Additionally, the new algorithm is shown to be the natural extension of the approach developed by Tomasi and Kanade for 3D-to-2D orthographically projected data and is applied to a realistic inverse synthetic aperture radar imaging scenario, as well as experiments with varying amounts of aperture diversity and noise.

  15. Computer-aided detection of masses in digital tomosynthesis mammography: combination of 3D and 2D detection information

    NASA Astrophysics Data System (ADS)

    Chan, Heang-Ping; Wei, Jun; Zhang, Yiheng; Moore, Richard H.; Kopans, Daniel B.; Hadjiiski, Lubomir; Sahiner, Berkman; Roubidoux, Marilyn A.; Helvie, Mark A.

    2007-03-01

    We are developing a computer-aided detection (CAD) system for masses on digital breast tomosynthesis mammograms (DBTs). The CAD system includes two parallel processes. In the first process, mass detection and feature analysis are performed in the reconstructed 3D DBT volume. A mass likelihood score is estimated for each mass candidate using a linear discriminant (LDA) classifier. In the second process, mass detection and feature analysis are applied to the individual projection view (PV) images. A mass likelihood score is estimated for each mass candidate using another LDA classifier. The mass likelihood images derived from the PVs are back-projected to the breast volume to estimate the 3D spatial distribution of the mass likelihood scores. The mass likelihood scores estimated by the two processes at the corresponding 3D location are then merged and evaluated using FROC analysis. In this preliminary study, a data set of 52 DBT cases acquired with a GE prototype system at the Massachusetts General Hospital was used. The LDA classifiers with stepwise feature selection were designed with leave-one-case-out resampling. In an FROC analysis, the CAD system for detection in the DBT volume alone achieved test sensitivities of 80% and 90% at an average FP rate of 1.6 and 3.0 per breast, respectively. In comparison, the average FP rates of the combined system were 1.2 and 2.3 per breast, respectively, at the same sensitivities. The combined system is a promising approach to improving mass detection on DBTs.
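
    A toy sketch of the score-merging idea (the classifiers, features and merge rule here are illustrative assumptions, not the system described above): likelihood scores from the 3D-volume process and from the back-projected projection-view process are combined for corresponding mass candidates.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(3)

        # Hypothetical feature vectors for the same mass candidates from the two processes.
        X_3d, X_pv = rng.random((200, 6)), rng.random((200, 4))
        y = rng.integers(0, 2, size=200)           # 1 = true mass, 0 = false positive

        lda_3d = LinearDiscriminantAnalysis().fit(X_3d, y)
        lda_pv = LinearDiscriminantAnalysis().fit(X_pv, y)

        score_3d = lda_3d.decision_function(X_3d)  # per-candidate likelihood scores
        score_pv = lda_pv.decision_function(X_pv)

        merged = 0.5 * (score_3d + score_pv)       # simple average as an assumed merge rule
        print(merged[:5])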

  16. Hepatic 3D spheroid models for the detection and study of compounds with cholestatic liability

    PubMed Central

    Hendriks, Delilah F. G.; Fredriksson Puigvert, Lisa; Messner, Simon; Mortiz, Wolfgang; Ingelman-Sundberg, Magnus

    2016-01-01

    Drug-induced cholestasis (DIC) is poorly understood and its preclinical prediction is mainly limited to assessing the compound’s potential to inhibit the bile salt export pump (BSEP). Here, we evaluated two 3D spheroid models, one from primary human hepatocytes (PHH) and one from HepaRG cells, for the detection of compounds with cholestatic liability. By repeatedly co-exposing both models to a set of compounds with different mechanisms of hepatotoxicity and a non-toxic concentrated bile acid (BA) mixture for 8 days we observed a selective synergistic toxicity of compounds known to cause cholestatic or mixed cholestatic/hepatocellular toxicity and the BA mixture compared to exposure to the compounds alone, a phenomenon that was more pronounced after extending the exposure time to 14 days. In contrast, no such synergism was observed after both 8 and 14 days of exposure to the BA mixture for compounds that cause non-cholestatic hepatotoxicity. Mechanisms behind the toxicity of the cholestatic compound chlorpromazine were accurately detected in both spheroid models, including intracellular BA accumulation, inhibition of ABCB11 expression and disruption of the F-actin cytoskeleton. Furthermore, the observed synergistic toxicity of chlorpromazine and BA was associated with increased oxidative stress and modulation of death receptor signalling. Combined, our results demonstrate that the hepatic spheroid models presented here can be used to detect and study compounds with cholestatic liability. PMID:27759057

  17. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution

    PubMed Central

    Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.

    2016-01-01

    We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet. PMID:27375939

  18. Acquiring multi-viewpoint image of 3D object for integral imaging using synthetic aperture phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Jeong, Min-Ok; Kim, Nam; Park, Jae-Hyeung; Jeon, Seok-Hee; Gil, Sang-Keun

    2009-02-01

    We propose a method for generating elemental images for integral imaging, an auto-stereoscopic three-dimensional display technique, using phase-shifting digital holography. Phase-shifting digital holography is a way of recording the digital hologram by changing the phase of the reference beam and extracting the complex field of the object beam. Since all 3D information is captured by phase-shifting digital holography, the elemental images for any lens-array specification can be generated from a single phase-shifting digital hologram. We expanded the viewing angle of the generated elemental images by using a synthetic aperture phase-shifting digital hologram. The principle of the proposed method is verified experimentally.
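
    For context only, a standard textbook relation (not necessarily the exact configuration used above): with four interferograms recorded at reference-beam phase shifts of 0, pi/2, pi and 3pi/2, the complex object field at the hologram plane can be estimated as in the sketch below, where the toy object wave and unit reference wave are assumptions.

        import numpy as np

        def complex_field_from_four_steps(I0, I90, I180, I270):
            """Standard four-step phase-shifting estimate of the object wave (up to a constant)."""
            return (I0 - I180) + 1j * (I90 - I270)

        # Toy interferograms from a known object wave and a unit-amplitude reference wave.
        rng = np.random.default_rng(4)
        obj = rng.random((64, 64)) * np.exp(1j * 2 * np.pi * rng.random((64, 64)))
        frames = [np.abs(obj + np.exp(1j * phi)) ** 2
                  for phi in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
        recovered = complex_field_from_four_steps(*frames) / 4.0
        print("max amplitude error:", float(np.abs(np.abs(recovered) - np.abs(obj)).max()))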

  19. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution.

    PubMed

    Meddens, Marjolein B M; Liu, Sheng; Finnegan, Patrick S; Edwards, Thayne L; James, Conrad D; Lidke, Keith A

    2016-06-01

    We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.

  20. A novel Hessian based algorithm for rat kidney glomerulus detection in 3D MRI

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Wu, Teresa; Bennett, Kevin M.

    2015-03-01

    The glomeruli of the kidney perform the key role of blood filtration, and the number of glomeruli in a kidney is correlated with susceptibility to chronic kidney disease and chronic cardiovascular disease. This motivates the development of new technology using magnetic resonance imaging (MRI) to measure the number of glomeruli and nephrons in vivo. However, there is currently a lack of computationally efficient techniques to perform fast, reliable and accurate counts of glomeruli in MR images due to issues inherent in MRI, such as acquisition noise, partial volume effects (the mixture of several tissue signals in a voxel) and bias field (spatial intensity inhomogeneity). Such challenges are particularly severe because the glomeruli are very small (in our case, an MRI image contains ~16 million voxels and each glomerulus spans only 8-20 voxels) and the number of glomeruli is very large. To address this, we have developed an efficient Hessian-based Difference of Gaussians (HDoG) detector to identify the glomeruli in 3D rat MR images. The image is first smoothed via DoG, followed by the Hessian process to pre-segment and delineate the boundaries of the glomerulus candidates. This then provides a basis for extracting regional features used in an unsupervised clustering algorithm, which completes the segmentation by removing false identifications that occurred in the pre-segmentation. The experimental results show that the Hessian-based DoG detector has the potential to automatically detect glomeruli from MRI in 3D, enabling new measurements of renal microstructure and pathology in preclinical and clinical studies.
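
    A rough sketch of a Hessian-based DoG blob detector on a 3D volume (assumed scales, thresholds and toy data, not the published HDoG parameters): smooth with a difference of Gaussians, then keep voxels whose Hessian has all negative eigenvalues, i.e. a bright blob-like response.

        import numpy as np
        from scipy import ndimage

        def hdog_candidates(vol, sigma1=1.0, sigma2=2.0, threshold=0.02):
            """Return a boolean mask of bright blob-like voxels in a 3D volume."""
            dog = ndimage.gaussian_filter(vol, sigma1) - ndimage.gaussian_filter(vol, sigma2)
            # Second derivatives of the smoothed volume (full 3x3 Hessian per voxel).
            grads = np.gradient(dog)
            hess = np.array([[np.gradient(g, axis=j) for j in range(3)] for g in grads])
            hess = np.moveaxis(hess, (0, 1), (-2, -1))           # shape (z, y, x, 3, 3)
            eig = np.linalg.eigvalsh(hess)
            return np.all(eig < 0, axis=-1) & (dog > threshold)  # negative curvature + contrast

        # Toy volume with two bright spheres on a weak noisy background.
        zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
        vol = np.exp(-((xx - 12)**2 + (yy - 12)**2 + (zz - 20)**2) / 8.0)
        vol += np.exp(-((xx - 28)**2 + (yy - 30)**2 + (zz - 20)**2) / 8.0)
        vol += 0.01 * np.random.default_rng(5).random(vol.shape)
        labels, n = ndimage.label(hdog_candidates(vol))
        print("candidate blobs:", n)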

  1. Neural network system for 3-D object recognition and pose estimation from a single arbitrary 2-D view

    NASA Astrophysics Data System (ADS)

    Khotanzad, Alireza R.; Liou, James H.

    1992-09-01

    In this paper, a robust and fast system for recognition as well as pose estimation of a 3-D object from a single 2-D perspective of it taken from an arbitrary viewpoint is developed. The approach is invariant to the location, orientation, and scale of the object in the perspective. The silhouette of the object in the 2-D perspective is first normalized with respect to location and scale. A set of rotation-invariant features derived from complex and orthogonal pseudo-Zernike moments of the image are then extracted. The next stage includes a bank of multilayer feed-forward neural networks (NNs), each of which classifies the extracted features. The training set for these nets consists of perspective views of each object taken from several different viewing angles. The NNs in the bank differ in the size of their hidden layers as well as their initial conditions but receive the same input. The classification decisions of all the nets are combined through a majority voting scheme. It is shown that this collective decision making yields better results compared to a single NN operating alone. After the object is classified, two of its pose parameters, namely the elevation and aspect angles, are estimated by another module of NNs in a two-stage process. The first stage identifies the likely region of space from which the object is being viewed. In the second stage, an NN estimator for the identified region is used to compute the pose angles. Extensive experimental studies involving clean and noisy images of seven military ground vehicles are carried out. The performance is compared to two other traditional methods, namely a nearest neighbor rule and a binary decision tree classifier, and it is shown that our approach has major advantages over them.

  2. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on the development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
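
    A toy illustration of the singular value analysis (the imaging operator below is a random stand-in for the true system matrix, and the noise floor is assumed): the number of singular values above the noise level bounds the complexity of objects the sparse detector array can resolve.

        import numpy as np

        rng = np.random.default_rng(6)

        # Hypothetical linear imaging operator: 15 detectors x 64 time samples mapping an
        # 8 x 8 x 8 voxel grid of initial pressure to the measured signals.
        H = rng.normal(size=(15 * 64, 8 ** 3))
        s = np.linalg.svd(H, compute_uv=False)

        noise_level = 0.05 * s[0]                  # assumed relative noise floor
        n_measurable = int(np.sum(s > noise_level))
        print("measurable singular vectors:", n_measurable, "of", len(s))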

  3. 3-D Stent Detection in Intravascular OCT Using a Bayesian Network and Graph Search

    PubMed Central

    Wang, Zhao; Jenkins, Michael W.; Linderman, George C.; Bezerra, Hiram G.; Fujino, Yusuke; Costa, Marco A.; Wilson, David L.

    2015-01-01

    Worldwide, many hundreds of thousands of stents are implanted each year to revascularize occlusions in coronary arteries. Intravascular optical coherence tomography (OCT) is an important emerging imaging technique, which has the resolution and contrast necessary to quantitatively analyze stent deployment and tissue coverage following stent implantation. Automation is needed, as it currently takes up to 16 hours to manually analyze the hundreds of images and thousands of stent struts from a single pullback. For automated strut detection, we used image formation physics and machine learning via a Bayesian network, and 3-D knowledge of stent structure via graph search. Graph search was done on en face projections using minimum spanning tree algorithms. Depths of all struts in a pullback were simultaneously determined using graph cut. To assess the method, we employed the largest validation data set used so far, involving more than 8,000 clinical images from 103 pullbacks from 72 patients. Automated strut detection achieved a 0.91±0.04 recall and 0.84±0.08 precision. Performance was robust in images of varying quality. This method can improve the workflow for analysis of stent clinical trial data, and can potentially be used in the clinic to facilitate real-time stent analysis and visualization, aiding stent implantation. PMID:25751863
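
    A small sketch of the graph-search idea on an en face projection (the coordinates and connectivity are assumptions; the published method additionally uses a Bayesian strut detector and graph cut, which are not reproduced here): candidate strut positions are linked by a minimum spanning tree to recover the wire-like stent pattern.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from scipy.sparse.csgraph import minimum_spanning_tree

        # Hypothetical (angle, frame) coordinates of strut candidates on the en face view.
        rng = np.random.default_rng(7)
        candidates = np.c_[rng.uniform(0, 360, 40), rng.uniform(0, 50, 40)]

        # Fully connected graph weighted by pairwise distance, then its minimum spanning tree.
        dist = squareform(pdist(candidates))
        mst = minimum_spanning_tree(dist)

        edges = np.transpose(mst.nonzero())
        print("MST edges:", len(edges), "total length: %.1f" % mst.sum())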

  4. Automated White Matter Hyperintensity Detection in Multiple Sclerosis Using 3D T2 FLAIR

    PubMed Central

    Zhong, Yi; Wang, Ying; Kang, Yan; Haacke, E. Mark

    2014-01-01

    White matter hyperintensities (WMH) seen on T2WI are a hallmark of multiple sclerosis (MS), as they indicate inflammation associated with the disease. Automatic detection of WMH can be valuable in diagnosing and monitoring treatment effectiveness. T2 fluid attenuated inversion recovery (FLAIR) MR images provide good contrast between the lesions and other tissue; however, the signal intensity of gray matter is close to that of the lesions in FLAIR images, which may cause more false positives in the segmentation result. We developed and evaluated a tool for automated WMH detection using only high resolution 3D T2 FLAIR MR images. We use a high spatial frequency suppression method to reduce the gray matter signal intensity. We evaluated our method in 26 MS patients and 26 age-matched healthy controls. The data from the automated algorithm showed good agreement with that from the manual segmentation. The linear correlation between these two approaches in comparing WMH volumes was found to be Y = 1.04X + 1.74 (R^2 = 0.96). The automated algorithm estimates the number, volume, and category of WMH. PMID:25136355
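
    The spatial-frequency suppression step can be pictured with the generic FFT-domain sketch below; the Gaussian attenuation and cutoff are assumptions for illustration, since the authors' exact filter is not specified in the abstract.

        import numpy as np

        def suppress_high_frequencies(img, cutoff=0.08):
            """Attenuate high spatial frequencies of a 2D slice with a Gaussian low-pass (assumed cutoff)."""
            F = np.fft.fftshift(np.fft.fft2(img))
            fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
            fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
            FY, FX = np.meshgrid(fy, fx, indexing="ij")
            lowpass = np.exp(-(FX ** 2 + FY ** 2) / (2 * cutoff ** 2))
            return np.real(np.fft.ifft2(np.fft.ifftshift(F * lowpass)))

        slice_ = np.random.default_rng(8).random((256, 256))  # stand-in FLAIR slice
        filtered = suppress_high_frequencies(slice_)
        print(float(slice_.std()), float(filtered.std()))     # variation drops after suppression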

  5. Label-free optical detection of cells grown in 3D silicon microstructures.

    PubMed

    Merlo, Sabina; Carpignano, Francesca; Silva, Gloria; Aredia, Francesca; Scovassi, A Ivana; Mazzini, Giuliano; Surdo, Salvatore; Barillaro, Giuseppe

    2013-08-21

    We demonstrate high aspect-ratio photonic crystals that could serve as three-dimensional (3D) microincubators for cell culture and also provide label-free optical detection of the cells. The investigated microstructures, fabricated by electrochemical micromachining of standard silicon wafers, consist of periodic arrays of silicon walls separated by narrow deeply etched air-gaps (50 μm high and 5 μm wide) and feature the typical spectral properties of photonic crystals in the wavelength range 1.0-1.7 μm: their spectral reflectivity is characterized by wavelength regions where reflectivity is high (photonic bandgaps), separated by narrow wavelength regions where reflectivity is very low. In this work, we show that the presence of cells, grown inside the gaps, strongly affects light propagation across the photonic crystal and, therefore, its spectral reflectivity. Exploiting a label-free optical detection method, based on a fiberoptic setup, we are able to probe the extension of cells adherent to the vertical silicon walls with a non-invasive direct testing. In particular, the intensity ratio at two wavelengths is the experimental parameter that can be well correlated to the cell spreading on the silicon wall inside the gaps.

  6. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  7. Topomorphologic Separation of Fused Isointensity Objects via Multiscale Opening: Separating Arteries and Veins in 3-D Pulmonary CT

    PubMed Central

    Gao, Zhiyun; Alford, Sara K.; Sonka, Milan; Hoffman, Eric A.

    2015-01-01

    A novel multiscale topomorphologic approach for opening of two isointensity objects fused at different locations and scales is presented and applied to separating arterial and venous trees in 3-D pulmonary multidetector X-ray computed tomography (CT) images. Initialized with seeds, the two isointensity objects (arteries and veins) grow iteratively while maintaining their spatial exclusiveness and eventually form two mutually disjoint objects at convergence. The method is intended to solve the following two fundamental challenges: how to find local size of morphological operators and how to trace continuity of locally separated regions. These challenges are met by combining fuzzy distance transform (FDT), a morphologic feature with a topologic fuzzy connectivity, and a new morphological reconstruction step to iteratively open finer and finer details starting at large scales and progressing toward smaller scales. The method employs efficient user intervention at locations where local morphological separability assumption does not hold due to imaging ambiguities or any other reason. The approach has been validated on mathematically generated tubular objects and applied to clinical pulmonary noncontrast CT data for separating arteries and veins. The tradeoff between accuracy and the required user intervention for the method has been quantitatively examined by comparing with manual outlining. The experimental study, based on a blind seed selection strategy, has demonstrated that above 95% accuracy may be achieved using 25–40 seeds for each of arteries and veins. Our method is very promising for semiautomated separation of arteries and veins in pulmonary CT images even when there is no object-specific intensity variation at conjoining locations. PMID:20199919

  8. Computer-aided detection of lung nodules: false positive reduction using a 3D gradient field method

    NASA Astrophysics Data System (ADS)

    Ge, Zhanyu; Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Wei, Jun; Bogot, Naama; Cascade, Philip N.; Kazerooni, Ella A.; Zhou, Chuan

    2004-05-01

    We are developing a computer-aided detection system to aid radiologists in diagnosing lung cancer in thoracic computed tomographic (CT) images. The purpose of this study was to improve the false-positive (FP) reduction stage of our algorithm by developing and incorporating a gradient field technique. This technique extracts 3D shape information from the gray-scale values within a volume of interest. The gradient field feature values are higher for spherical objects, and lower for elongated and irregularly-shaped objects. A data set of 55 thin CT scans from 40 patients was used to evaluate the usefulness of the gradient field technique. After initial nodule candidate detection and rule-based first stage FP reduction, there were 3487 FP and 65 true positive (TP) objects in our data set. Linear discriminant classifiers with and without the gradient field feature were designed for the second stage FP reduction. The accuracy of these classifiers was evaluated using the area Az under the receiver operating characteristic (ROC) curve. The Az values were 0.93 and 0.91 with and without the gradient field feature, respectively. The improvement with the gradient field feature was statistically significant (p=0.01).
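
    One way to see why a gradient-field feature favors spherical nodules (a toy sketch with an assumed feature definition, not the published one): inside a bright sphere the intensity gradients point toward the centre, so the mean alignment between each voxel's gradient and the direction to the object centroid is high, while elongated or irregular objects score lower.

        import numpy as np

        def gradient_field_score(vol):
            """Mean alignment of voxel gradients with the direction toward the intensity centroid."""
            gz, gy, gx = np.gradient(vol.astype(float))
            coords = np.indices(vol.shape).astype(float)
            centroid = (coords * vol).sum(axis=(1, 2, 3)) / vol.sum()
            to_center = centroid[:, None, None, None] - coords
            g = np.stack([gz, gy, gx])
            cosine = (g * to_center).sum(axis=0) / (
                np.linalg.norm(g, axis=0) * np.linalg.norm(to_center, axis=0) + 1e-9)
            weights = np.linalg.norm(g, axis=0)      # weight by gradient magnitude
            return float((cosine * weights).sum() / (weights.sum() + 1e-9))

        zz, yy, xx = np.mgrid[0:32, 0:32, 0:32]
        sphere = np.exp(-((xx - 16)**2 + (yy - 16)**2 + (zz - 16)**2) / 20.0)
        rod = np.exp(-((yy - 16)**2 + (zz - 16)**2) / 20.0)  # elongated along x
        print("sphere score: %.2f  rod score: %.2f" % (gradient_field_score(sphere),
                                                       gradient_field_score(rod)))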

  9. Detection of bone erosions in early rheumatoid arthritis: 3D ultrasonography versus computed tomography.

    PubMed

    Peluso, G; Bosello, S L; Gremese, E; Mirone, L; Di Gregorio, F; Di Molfetta, V; Pirronti, T; Ferraccioli, G

    2015-07-01

    Three-dimensional (3D) volumetric ultrasonography (US) is an interesting tool that could improve the traditional approach to musculoskeletal US in rheumatology, due to its virtual operator independence and reduced examination time. The aim of this study was to investigate the performance of 3DUS in the detection of bone erosions in hand and wrist joints of early rheumatoid arthritis (ERA) patients, with computed tomography (CT) as the reference method. Twenty ERA patients without erosions on standard radiography of hands and wrists underwent 3DUS and CT evaluation of eleven joints: radiocarpal, intercarpal, ulnocarpal, second to fifth metacarpo-phalangeal (MCP), and second to fifth proximal interphalangeal (PIP) joints of dominant hand. Eleven (55.0%) patients were erosive with CT and ten of them were erosive also at 3DUS evaluation. In five patients, 3DUS identified cortical breaks that were not erosions at CT evaluation. Considering CT as the gold standard to identify erosive patients, the 3DUS sensitivity, specificity, PPV, and NPV were 0.9, 0.55, 0.71, and 0.83, respectively. A total of 32 erosions were detected with CT, 15 of them were also observed at the same sites with 3DUS, whereas 17 were not seen on 3DUS evaluation. The majority of these 3DUS false-negative erosions were in the wrist joints. Furthermore, 18 erosions recorded by 3DUS were false positive. The majority of these 3DUS false-positive erosions were located at PIP joints. This study underlines the limits of 3DUS in detecting individual bone erosion, mostly at the wrist, despite the good sensitivity in identifying erosive patients.

  10. [Detection of oculomotor nerve compression by 3D-FIESTA MRI in a patient with pituitary apoplexy and diabetes mellitus].

    PubMed

    Yamauchi, Takahiro; Kitai, Ryuhei; Neishi, Hiroyuki; Tsunetoshi, Kenzo; Matsuda, Ken; Arishima, Hidetaka; Kodera, Toshiaki; Arai, Yoshikazu; Takeuchi, Hiroaki; Kikuta, Ken-ichiro

    2014-02-01

    We report the usefulness of 3D-FIESTA magnetic resonance imaging (MRI) for the detection of oculomotor nerve compression in a case of pituitary apoplexy. A 69-year-old man with diabetes mellitus presented with complete left-side blepharoptosis. Computed tomography of the brain showed an intrasellar mass with hemorrhage. MRI demonstrated a pituitary adenoma with a cyst extending toward the left cavernous sinus, which was diagnosed as pituitary apoplexy. 3D-FIESTA revealed that the left oculomotor nerve was compressed by the cyst. He underwent trans-sphenoidal tumor resection 5 days after hospitalization. Post-operative 3D-FIESTA MRI revealed decreased compression of the left oculomotor nerve by the cyst. His left oculomotor palsy recovered completely within a few months. Oculomotor nerve palsy can occur due to various diseases, and 3D-FIESTA MRI is useful for the detection of oculomotor nerve compression, especially in the setting of parasellar lesions.

  11. Boosted Random Ferns for Object Detection.

    PubMed

    Villamizar, Michael; Andrade-Cetto, Juan; Sanfeliu, Alberto; Moreno-Noguer, Francesc

    2017-03-01

    In this paper we introduce Boosted Random Ferns (BRFs) to rapidly build discriminative classifiers for learning and detecting object categories. At the core of our approach we use standard random ferns, but we introduce four main innovations that let us bring ferns from the instance to the category level and still retain efficiency. First, we define binary features in the histogram-of-oriented-gradients domain (as opposed to the intensity domain), allowing for a better representation of intra-class variability. Second, both the positions where ferns are evaluated within the sliding window and the locations of the binary features for each fern are not chosen completely at random; instead, we use a boosting strategy to pick the most discriminative combination of them. This is further enhanced by our third contribution, which is to adapt the boosting strategy to enable sharing of binary features among different ferns, yielding high recognition rates at a low computational cost. Finally, we show that training can be performed online, for sequentially arriving images. Overall, the resulting classifier can be trained very efficiently, densely evaluated for all image locations in about 0.1 seconds, and provides detection rates similar to competing approaches that require expensive and significantly slower processing. We demonstrate the effectiveness of our approach through thorough experimentation on publicly available datasets, in which we compare against the state of the art, for tasks of both 2D detection and 3D multi-view estimation.

  12. Orienting of visuo-spatial attention in complex 3D space: Search and detection

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2015-01-01

    The ability to detect changes in the environment is necessary for appropriate interactions with the external world. Changes in the background go more unnoticed than foreground changes, possibly because attention prioritizes processing of foreground/near stimuli. Here, we investigated the detectability of foreground and background changes within natural scenes and the influence of stereoscopic depth cues on this. Using a flicker paradigm, we alternated a pair of images that were either exactly the same or differed by one single element (i.e., a color change of one object in the scene). The participants were asked to find the change, which occurred either in a foreground or a background object, while viewing the stimuli either with binocular and monocular cues (bmC) or monocular cues only (mC). The behavioral results showed faster and more accurate detection of foreground changes and overall better performance in bmC than mC conditions. The imaging results highlighted the involvement of fronto-parietal attention-controlling networks during active search and target detection. These attention networks did not show any differential effect as a function of the presence/absence of binocular cues or the detection of foreground/background changes. By contrast, the lateral occipital cortex showed greater activation for detections in the foreground compared with the background, while area V3A showed a main effect of bmC vs. mC, specifically during search. These findings indicate that visual search with binocular cues does not impose any specific requirement on attention-controlling fronto-parietal networks, while the enhanced detection of front/near objects in the bmC condition reflects bottom-up sensory processes in visual cortex. PMID:25691253

  13. A New Methodology for 3D Target Detection in Automotive Radar Applications

    PubMed Central

    Baselice, Fabio; Ferraioli, Giampaolo; Lukin, Sergyi; Matuozzo, Gianfranco; Pascazio, Vito; Schirinzi, Gilda

    2016-01-01

    Today there is a growing interest in automotive sensor monitoring systems. One of the main challenges is to make them an effective and valuable aid in dangerous situations, improving transportation safety. The main limitation of visual aid systems is that they do not produce accurate results in critical visibility conditions, such as in presence of rain, fog or smoke. Radar systems can greatly help in overcoming such limitations. In particular, imaging radar is gaining interest in the framework of Driver Assistance Systems (DAS). In this manuscript, a new methodology able to reconstruct the 3D imaged scene and to detect the presence of multiple targets within each line of sight is proposed. The technique is based on the use of Compressive Sensing (CS) theory and produces the estimation of multiple targets for each line of sight, their range distance and their reflectivities. Moreover, a fast approach for 2D focus based on the FFT algorithm is proposed. After the description of the proposed methodology, different simulated case studies are reported in order to evaluate the performances of the proposed approach. PMID:27136558
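
    A compact sketch of the compressive-sensing recovery step, using a generic iterative soft-thresholding (ISTA) solver on a made-up range profile; the sensing matrix, sparsity level and regularization weight are assumptions, not the paper's model. Each line of sight is modelled as a sparse vector of reflectivities recovered from a few measurements by l1-regularized least squares.

        import numpy as np

        def ista(A, y, lam=0.1, n_iter=500):
            """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x - A.T @ (A @ x - y) / L      # gradient step
                x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
            return x

        rng = np.random.default_rng(9)
        n_bins, n_meas = 200, 40                   # range bins per line of sight, measurements
        A = rng.normal(size=(n_meas, n_bins)) / np.sqrt(n_meas)
        x_true = np.zeros(n_bins)
        x_true[[30, 95, 160]] = [1.0, 0.6, 0.8]    # three targets on this line of sight
        y = A @ x_true + 0.01 * rng.normal(size=n_meas)

        x_hat = ista(A, y, lam=0.05)
        print("detected range bins:", np.nonzero(np.abs(x_hat) > 0.2)[0])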

  14. A New Methodology for 3D Target Detection in Automotive Radar Applications.

    PubMed

    Baselice, Fabio; Ferraioli, Giampaolo; Lukin, Sergyi; Matuozzo, Gianfranco; Pascazio, Vito; Schirinzi, Gilda

    2016-04-29

    Today there is a growing interest in automotive sensor monitoring systems. One of the main challenges is to make them an effective and valuable aid in dangerous situations, improving transportation safety. The main limitation of visual aid systems is that they do not produce accurate results in critical visibility conditions, such as in presence of rain, fog or smoke. Radar systems can greatly help in overcoming such limitations. In particular, imaging radar is gaining interest in the framework of Driver Assistance Systems (DAS). In this manuscript, a new methodology able to reconstruct the 3D imaged scene and to detect the presence of multiple targets within each line of sight is proposed. The technique is based on the use of Compressive Sensing (CS) theory and produces the estimation of multiple targets for each line of sight, their range distance and their reflectivities. Moreover, a fast approach for 2D focus based on the FFT algorithm is proposed. After the description of the proposed methodology, different simulated case studies are reported in order to evaluate the performances of the proposed approach.

  15. Detection of 3D tree root systems using high resolution ground penetration radar

    NASA Astrophysics Data System (ADS)

    Altdorff, D.; Honds, M.; Botschek, J.; Van Der Kruk, J.

    2014-12-01

    demonstrated approach is a promising tool for semi-linear root detection, whereas advanced 3D processing and migration is needed for more complicated root structures.

  16. 3D shape and eccentricity measurements of fast rotating rough objects by two mutually tilted interference fringe systems

    NASA Astrophysics Data System (ADS)

    Czarske, J. W.; Kuschmierz, R.; Günther, P.

    2013-06-01

    Precise measurement of the distance, eccentricity and 3D shape of fast-moving objects such as turning parts in lathes, gear shafts, magnetic bearings, camshafts, crankshafts and rotors of vacuum pumps is an important task, but it is also a big challenge, since contactless precise measurement techniques are required. Optical techniques are well suited for distance measurements of non-moving surfaces; however, measurements of laterally fast-moving surfaces are still challenging. For such tasks the laser Doppler distance sensor technique was invented at TU Dresden some years ago. This technique is realized with two mutually tilted interference fringe systems, where the distance is coded in the phase difference between the generated interference signals. However, due to the speckle effect, different random envelopes and phase jumps of the interference signals occur, which disturb the estimation of the phase difference between the interference signals. In this paper, we report on a recent breakthrough in the measurement uncertainty budget. By matching the illumination and receiving optics, the measurement uncertainty of the displacement and distance can be reduced by about one order of magnitude. For displacement measurements of a recurring rough surface, a standard deviation of 110 nm was attained at lateral velocities of 5 m/s. From the additionally measured lateral velocity and the rotational speed, the two-dimensional shape of rotating objects is calculated. The three-dimensional shape can be obtained by employing a line camera. Since the measurement uncertainty of the displacement, vibration, distance, eccentricity, and shape is nearly independent of the lateral surface velocity, this technique is predestined for fast-rotating objects. In particular, it can be used advantageously for the quality control of workpieces inside a lathe towards the reduction of process tolerances, installation times and

  17. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    PubMed Central

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-01-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the “non-progressing” and “progressing” glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection. PMID:25606299

  18. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    NASA Astrophysics Data System (ADS)

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-03-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if left untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that rely on the retinal nerve fiber layer (RNFL) thickness measurements provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.

  19. Objective 3D surface evaluation of intracranial electrophysiologic correlates of cerebral glucose metabolic abnormalities in children with focal epilepsy.

    PubMed

    Jeong, Jeong-Won; Asano, Eishi; Kumar Pilli, Vinod; Nakai, Yasuo; Chugani, Harry T; Juhász, Csaba

    2017-03-21

    To determine the spatial relationship between 2-deoxy-2[(18) F]fluoro-D-glucose (FDG) metabolic and intracranial electrophysiological abnormalities in children undergoing two-stage epilepsy surgery, statistical parametric mapping (SPM) was used to correlate hypo- and hypermetabolic cortical regions with ictal and interictal electrocorticography (ECoG) changes mapped onto the brain surface. Preoperative FDG-PET scans of 37 children with intractable epilepsy (31 with non-localizing MRI) were compared with age-matched pseudo-normal pediatric control PET data. Hypo-/hypermetabolic maps were transformed to 3D-MRI brain surface to compare the locations of metabolic changes with electrode coordinates of the ECoG-defined seizure onset zone (SOZ) and interictal spiking. While hypometabolic clusters showed a good agreement with the SOZ on the lobar level (sensitivity/specificity = 0.74/0.64), detailed surface-distance analysis demonstrated that large portions of ECoG-defined SOZ and interictal spiking area were located at least 3 cm beyond hypometabolic regions with the same statistical threshold (sensitivity/specificity = 0.18-0.25/0.94-0.90 for overlap 3-cm distance); for a lower threshold, sensitivity for SOZ at 3 cm increased to 0.39 with a modest compromise of specificity. Performance of FDG-PET SPM was slightly better in children with smaller as compared with widespread SOZ. The results demonstrate that SPM utilizing age-matched pseudocontrols can reliably detect the lobe of seizure onset. However, the spatial mismatch between metabolic and EEG epileptiform abnormalities indicates that a more complete SOZ detection could be achieved by extending intracranial electrode coverage at least 3 cm beyond the metabolic abnormality. Considering that the extent of feasible electrode coverage is limited, localization information from other modalities is particularly important to optimize grid coverage in cases of large hypometabolic cortex. Hum Brain Mapp, 2017. © 2017

  20. Single cell detection using 3D magnetic rolled-up structures.

    PubMed

    Ger, Tzong-Rong; Huang, Hao-Ting; Huang, Chen-Yu; Lai, Mei-Feng

    2013-11-07

    A 3D rolled-up structure made of a SiO2 layer and a fishbone-like magnetic thin film was proposed here as a biosensor. The magnetoresistance (MR) measurement results of the sensor suggest that the presence of the stray field, which is induced by the magnetic nanoparticles, significantly increased the switching field. Comparing the performance of the 2D sensor and 3D sensor designed in this study, the response in switching field variation was 12.14% in the 2D sensor and 62.55% in the 3D sensor. The response in MR ratio variation was 4.55% in the 2D sensor and 82.32% in the 3D sensor. In addition, the design of the 3D sensor structure also helped to attract and trap a single magnetic cell due to its stronger stray field compared with the 2D structure. The 3D magnetic biosensor designed here can provide important information for future biochip research and applications.

  1. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  2. Automatic Building Damage Detection Method Using High-Resolution Remote Sensing Images and 3d GIS Model

    NASA Astrophysics Data System (ADS)

    Tu, Jihui; Sui, Haigang; Feng, Wenqing; Song, Zhina

    2016-06-01

    In this paper, a novel approach for building damage detection is proposed using high-resolution remote sensing images and 3D GIS model data. Traditional building damage detection methods focus on detecting buildings damaged by earthquakes, but little attention has been paid to analyzing the various damage types (e.g., slightly damaged, severely damaged and totally collapsed). We therefore detect the different damage types using both 2D and 3D features of the scene, because the real world we live in is a 3D space. The proposed method first geometrically corrects the post-disaster remote sensing image using the 3D GIS model or RPC parameters, and then detects the different damage types using the change in height and area between the pre- and post-disaster data together with the texture features of the post-disaster image. The results, evaluated on a selected study site of the Beichuan earthquake ruins, Sichuan, show that this method is feasible and effective for building damage detection. They also show that the proposed method is easily applicable and well suited for rapid damage assessment after natural disasters.

  3. Automatic organ localizations on 3D CT images by using majority-voting of multiple 2D detections based on local binary patterns and Haar-like features

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Yamaguchi, Shoutarou; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2013-02-01

    This paper describes an approach for fast and automatic localization of different inner organ regions on 3D CT scans. The proposed approach combines object detection and majority voting to achieve robust and quick organ localization. The basic idea of the proposed method is to detect a number of 2D partial appearances of a 3D target region on CT images from multiple body directions, on multiple image scales and using multiple feature spaces, and then to vote all 2D detection results back into the 3D image space to statistically decide one 3D bounding rectangle of the target organ. Ensemble learning was used to train the multiple 2D detectors based on template matching in local binary pattern and Haar-like feature spaces. A collaborative voting was used to decide the corner coordinates of the 3D bounding rectangle of the target organ region based on the coordinate histograms of the detection results in three body directions. Since the architecture of the proposed method (multiple independent detections connected to a majority voting) naturally fits the parallel computing paradigm and multi-core CPU hardware, the proposed algorithm easily achieves high computational efficiency for organ localization on a whole-body CT scan using general-purpose computers. We applied this approach to the localization of 12 kinds of major organ regions independently on 1,300 torso CT scans. In our experiments, we randomly selected 300 CT scans (with human-indicated organ and tissue locations) for training, and then applied the proposed approach with the training results to localize each of the target regions on the other 1,000 CT scans for performance testing. The experimental results demonstrated the potential of the proposed approach to automatically locate different kinds of organs on whole-body CT scans.
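    As a rough illustration of how 2D detections from three body directions could be voted into the corner coordinates of a 3D bounding rectangle, a sketch follows; the detection tuple layout and the half-peak support rule are assumptions made for the example, not details from the paper.

      import numpy as np

      def vote_3d_box(axial, coronal, sagittal, shape):
          # axial: (z, x0, y0, x1, y1); coronal: (y, x0, z0, x1, z1); sagittal: (x, y0, z0, y1, z1)
          # shape: (z, y, x) size of the CT volume.
          hist = {axis: np.zeros(dim) for axis, dim in zip("zyx", shape)}
          for z, x0, y0, x1, y1 in axial:
              hist["z"][z] += 1; hist["x"][x0:x1] += 1; hist["y"][y0:y1] += 1
          for y, x0, z0, x1, z1 in coronal:
              hist["y"][y] += 1; hist["x"][x0:x1] += 1; hist["z"][z0:z1] += 1
          for x, y0, z0, y1, z1 in sagittal:
              hist["x"][x] += 1; hist["y"][y0:y1] += 1; hist["z"][z0:z1] += 1
          box = {}
          for axis, h in hist.items():
              # keep the coordinate range supported by at least half the peak vote count
              support = np.where(h >= 0.5 * h.max())[0]
              box[axis] = (int(support.min()), int(support.max()))
          return box   # {'z': (lo, hi), 'y': (lo, hi), 'x': (lo, hi)}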

  4. Ultra-wide-band 3D microwave imaging scanner for the detection of concealed weapons

    NASA Astrophysics Data System (ADS)

    Rezgui, Nacer-Ddine; Andrews, David A.; Bowring, Nicholas J.

    2015-10-01

    The threat of concealed weapons, explosives and contraband in footwear, bags and suitcases has led to the development of new devices which can be deployed for security screening. To address known deficiencies of metal detectors and x-rays, a UWB 3D microwave imaging scanning apparatus, using FMCW stepped frequency in the K and Q bands and a planar scanning geometry based on an x-y stage, has been developed to screen suspicious luggage and footwear. To obtain microwave images of the concealed weapons, the targets are placed above the platform and the single transceiver horn antenna attached to the x-y stage is moved mechanically to perform a raster scan, creating a 2D synthetic aperture array. The S11 reflection signal of the transmitted frequency sweep from the target is acquired by a VNA in synchronism with each position step. To enhance the raw data, filter out clutter and noise, and obtain the 2D and 3D microwave images of the concealed weapons or explosives, data processing techniques are applied to the acquired signals. These techniques include background subtraction, Inverse Fast Fourier Transform (IFFT), thresholding, filtering by gating and windowing, and deconvolution with the transfer function of the system using a reference target. To focus the 3D reconstructed microwave image of the target in range and across the x-y aperture without using focusing elements, 3D Synthetic Aperture Radar (SAR) techniques are applied to the post-processed data. The K and Q bands, between 15 and 40 GHz, show good transmission through clothing and dielectric materials found in luggage and footwear. A description of the system, the algorithms and some results with replica guns, together with a comparison of microwave images obtained by IFFT, 2D and 3D SAR techniques, are presented.
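    The background subtraction and IFFT steps can be sketched as follows for a single antenna position; this is a generic stepped-frequency range-profile computation under the usual c/(2·Δf) range-scaling assumption, not the authors' full processing chain (gating, deconvolution and SAR focusing are omitted).

      import numpy as np

      def range_profile(s11_target, s11_background, df_hz):
          # s11_target / s11_background: complex S11 sweeps with and without the target.
          # df_hz: frequency step of the stepped-frequency sweep in Hz.
          c = 3.0e8
          s = s11_target - s11_background          # remove antenna mismatch and static clutter
          s = s * np.hanning(len(s))               # window to reduce range sidelobes
          profile = np.fft.ifft(s)                 # frequency domain -> down-range (time) domain
          ranges = np.arange(len(s)) * c / (2.0 * df_hz * len(s))   # metres per range bin
          return ranges, np.abs(profile)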

  5. Increased sensitivity of 3D-Well enzyme-linked immunosorbent assay (ELISA) for infectious disease detection using 3D-printing fabrication technology.

    PubMed

    Singh, Harpal; Shimojima, Masayuki; Fukushi, Shuetsu; Le Van, An; Sugamata, Masami; Yang, Ming

    2015-01-01

    Enzyme-linked immunosorbent assay (ELISA)-based diagnostics are considered the gold standard for demonstrating various immunological reactions, including the measurement of antibody responses to infectious diseases and support of pathogen identification, with application potential in infectious disease outbreaks and in individual patients' treatment and clinical care. The rapid prototyping of ELISA-based diagnostics using available 3D printing technologies provides an opportunity for further exploration of this platform in immunodetection systems. In this study, a '3D-Well' was designed and fabricated using available 3D printing platforms to have more than 4 times the surface area for protein adsorption compared to that of 96-well plates. The ease and rapidity of the design-development-feedback cycle offered by 3D printing platforms allowed its rapid assessment, in which a chemical etching process was used to make the surface hydrophilic, followed by validation of the diagnostic performance of ELISA for infectious disease without modifying current laboratory practices for ELISA. The higher sensitivity of the 3D-Well (3-fold higher) compared to the 96-well ELISA provides potential for the expansion of this technology towards miniaturization platforms to reduce the time and the volume of reagents and samples needed for laboratory or field diagnosis of infectious diseases, including applications in other disciplines.

  6. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they have been modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time can be robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different level of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2] , 3D object classes [3] , Pascal3D+ [4] , Pascal VOC 2007 [5] , EPFL multi-view cars[6] ).

  7. Nodule Detection in a Lung Region that's Segmented with Using Genetic Cellular Neural Networks and 3D Template Matching with Fuzzy Rule Based Thresholding

    PubMed Central

    Osman, Onur; Ucan, Osman N.

    2008-01-01

    Objective: The purpose of this study was to develop a new method for automated lung nodule detection in serial-section CT images using the characteristics of the 3D appearance of nodules that distinguish them from vessels. Materials and Methods: Lung nodules were detected in four steps. First, to reduce the number of regions of interest (ROIs) and the computation time, the lung regions of the CTs were segmented using Genetic Cellular Neural Networks (G-CNN). Then, for each lung region, ROIs were specified using an 8-directional search; +1 or -1 values were assigned to each voxel. The 3D ROI image was obtained by combining all the 2-dimensional (2D) ROI images. A 3D template was created to find nodule-like structures in the 3D ROI image. Convolution of the 3D ROI image with the proposed template strengthens shapes that are similar to the template and weakens the other ones. Finally, fuzzy rule based thresholding was applied and the ROIs were found. To test the system's efficiency, we used 16 cases with a total of 425 slices, taken from the Lung Image Database Consortium (LIDC) dataset. Results: The computer aided diagnosis (CAD) system achieved 100% sensitivity with 13.375 FPs per case when the nodule thickness was greater than or equal to 5.625 mm. Conclusion: Our results indicate that the detection performance of our algorithm is satisfactory, and this may well improve the performance of computer-aided detection of lung nodules. PMID:18253070
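    A simplified sketch of the 3D template-matching step, convolving the +1/-1 ROI volume with a nodule-like template and keeping high responses, is shown below; the spherical template and the fixed threshold stand in for the paper's template design and fuzzy rule based thresholding.

      import numpy as np
      from scipy.ndimage import convolve

      def spherical_template(radius):
          r = int(radius)
          z, y, x = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
          return (z**2 + y**2 + x**2 <= radius**2).astype(np.float32)   # binary sphere

      def nodule_candidates(roi_volume, radius, threshold):
          # roi_volume: 3D array of +1/-1 values produced by the directional ROI search.
          template = spherical_template(radius)
          response = convolve(roi_volume.astype(np.float32), template,
                              mode="constant", cval=0.0)
          return np.argwhere(response >= threshold)   # (z, y, x) candidate voxels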

  8. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
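    The agreement measures quoted above can be reproduced with a short sketch; the Dice score and a one-sided mean absolute surface distance are standard definitions and may differ in detail (e.g. symmetrization) from the evaluation used in the study.

      import numpy as np
      from scipy.ndimage import binary_erosion, distance_transform_edt

      def dice_score(auto_mask, manual_mask):
          intersection = np.logical_and(auto_mask, manual_mask).sum()
          return 2.0 * intersection / (auto_mask.sum() + manual_mask.sum())

      def mean_surface_distance(auto_mask, manual_mask, spacing_mm):
          auto_surf = auto_mask & ~binary_erosion(auto_mask)       # boundary voxels
          manual_surf = manual_mask & ~binary_erosion(manual_mask)
          # distance (mm) from every voxel to the nearest manual surface voxel
          dist = distance_transform_edt(~manual_surf, sampling=spacing_mm)
          return dist[auto_surf].mean()                            # one-sided mean distance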

  9. Automatic feature detection for 3D surface reconstruction from HDTV endoscopic videos

    NASA Astrophysics Data System (ADS)

    Groch, Anja; Baumhauer, Matthias; Meinzer, Hans-Peter; Maier-Hein, Lena

    2010-02-01

    A growing number of applications in the field of computer-assisted laparoscopic interventions depend on accurate and fast 3D surface acquisition. The most commonly applied methods for 3D reconstruction of organ surfaces from 2D endoscopic images involve establishment of correspondences in image pairs to allow for computation of 3D point coordinates via triangulation. The popular feature-based approach for correspondence search applies a feature descriptor to compute high-dimensional feature vectors describing the characteristics of selected image points. Correspondences are established between image points with similar feature vectors. In a previous study, the performance of a large set of state-of-the art descriptors for the use in minimally invasive surgery was assessed. However, standard Phase Alternating Line (PAL) endoscopic images were utilized for this purpose. In this paper, we apply some of the best performing feature descriptors to in-vivo PAL endoscopic images as well as to High Definition Television (HDTV) endoscopic images of the same scene and show that the quality of the correspondences can be increased significantly when using high resolution images.
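    A minimal feature-based correspondence search between two endoscopic frames might look like the sketch below; ORB is used only as a readily available stand-in for the descriptors evaluated in the study, and the parameter values are assumptions.

      import cv2

      def match_pair(img_left, img_right, max_matches=500):
          orb = cv2.ORB_create(nfeatures=2000)
          kp1, des1 = orb.detectAndCompute(img_left, None)
          kp2, des2 = orb.detectAndCompute(img_right, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
          pts1 = [kp1[m.queryIdx].pt for m in matches]
          pts2 = [kp2[m.trainIdx].pt for m in matches]
          return pts1, pts2   # pixel correspondences, ready for triangulation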

  10. Flow integration transform: detecting shapes in matrix-array 3D ultrasound data

    NASA Astrophysics Data System (ADS)

    Stetten, George D.; Caines, Michael; von Ramm, Olaf T.

    1995-03-01

    Matrix-array ultrasound produces real-time 3D images of the heart, by employing a square array of transducers to steer the ultrasound beam in three dimensions electronically with no moving parts. Other 3D modalities such as MR, MUGA, and CT require the use of gated studies, which combine many cardiac cycles to produce a single average cycle. Three- dimensional ultrasound eliminates this restriction, in theory permitting the continuous measurement of cardiac ventricular volume, which we call the volumetricardiogram. Towards implementing the volumetricardiogram, we have developed the flow integration transform (FIT), which operates on a 2D slice within the volumetric ultrasound data. The 3D ultrasound machine's scan converter produces a set of such slices in real time, at any desired location and orientation, to which the FIT may then be applied. Although lacking rotational or scale invariance, the FIT is designed to operate in dedicated hardware where an entire transform could be completed within a few microseconds with present integrated circuit technology. This speed would permit the application of a large battery of test shapes, or the evolution of the test shape to converge on that of the actual target.

  11. Pore detection in Computed Tomography (CT) soil 3D images using singularity map analysis

    NASA Astrophysics Data System (ADS)

    Sotoca, Juan J. Martin; Tarquis, Ana M.; Saa Requejo, Antonio; Grau, Juan B.

    2016-04-01

    X-ray Computed Tomography (CT) images have significantly helped the study of the internal soil structure. This technique has two main advantages: 1) it is a non-invasive technique, i.e., it doesn't modify the internal soil structure, and 2) it provides good resolution. The major disadvantage is that these images sometimes have low contrast at the solid/pore interface. One of the main problems in analyzing soil structure through CT images is segmenting them into solid and pore space. To do so, different segmentation techniques are at our disposal, mainly based on thresholding methods in which global or local thresholds are calculated to separate pore space from solid space. The aim of this presentation is to develop the fractal approach to soil structure using "singularity maps" and the "Concentration-Area (CA) method". We establish an analogy between mineralization processes in ore deposits and morphogenesis processes in soils. Resulting from this analogy, a new 3D segmentation method is proposed, the "3D Singularity-CA" method. A comparison with traditional 3D segmentation methods is performed to show the main differences among them.

  12. Pre-impact fall detection system using dynamic threshold and 3D bounding box

    NASA Astrophysics Data System (ADS)

    Otanasap, Nuth; Boonbrahm, Poonpong

    2017-02-01

    Fall prevention and detection systems have to overcome many challenges in order to be efficient. Some of the difficult problems in vision-based systems are obtrusion, occlusion and overlay. Other associated issues are privacy, cost, noise, computational complexity and the definition of threshold values. Estimating human motion with vision-based methods usually involves partial overlay, caused by the viewing direction between objects or body parts and the camera, and these issues have to be taken into consideration. This paper proposes a dynamic-threshold and bounding-box posture analysis method with a multiple-Kinect camera setup for human posture analysis and fall detection. The proposed work uses only two Kinect cameras for acquiring distributed values and differentiating normal activities from falls. If the peak value of the head velocity is greater than the dynamic threshold value, bounding-box posture analysis is used to confirm fall occurrence. Furthermore, information captured by multiple Kinects placed at right angles addresses the skeleton overlay problem of a single Kinect. This work contributes the fusion of multiple Kinect-based skeletons with dynamic-threshold and bounding-box posture analysis, which has not been reported so far.
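    The first stage of such a detector, comparing head velocity against a dynamic threshold before the bounding-box analysis is triggered, can be sketched as follows; the adaptation rule and the numeric values are illustrative assumptions rather than the authors' formulation.

      import numpy as np

      def detect_fall_candidate(head_xyz, timestamps, base_threshold=1.5, window=30):
          # head_xyz: (N, 3) head-joint positions in metres from the Kinect skeleton.
          # timestamps: (N,) frame times in seconds; base_threshold: assumed m/s baseline.
          velocity = np.linalg.norm(np.diff(head_xyz, axis=0), axis=1) / np.diff(timestamps)
          for i in range(len(velocity)):
              recent = velocity[max(0, i - window):i + 1]
              dynamic_threshold = base_threshold + recent.mean() + recent.std()
              if velocity[i] > dynamic_threshold:
                  return i + 1   # frame at which bounding-box posture analysis should run
          return None            # no fall candidate detected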

  13. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer-aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and the occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.

  14. Nickel/cobalt oxide-decorated 3D graphene nanocomposite electrode for enhanced electrochemical detection of urea.

    PubMed

    Nguyen, Nhi Sa; Das, Gautam; Yoon, Hyon Hee

    2016-03-15

    A NiCo2O4 bimetallic electro-catalyst was synthesized on three-dimensional graphene (3D graphene) for the non-enzymatic detection of urea. The structural and morphological properties of the NiCo2O4/3D graphene nanocomposite were characterized by X-ray diffraction, Raman spectroscopy, and scanning electron microscopy. The NiCo2O4/3D graphene was deposited on an indium tin oxide (ITO) glass to fabricate a highly sensitive urea sensor. The electrochemical properties of the prepared electrode were studied by cyclic voltammetry. A high sensitivity of 166 μA mM⁻¹ cm⁻² was obtained for the NiCo2O4/3D graphene/ITO sensor. The sensor exhibited a linear range of 0.06-0.30 mM (R² = 0.998) and a fast response time of approximately 1.0 s with a detection limit of 5.0 µM. Additionally, the sensor exhibited high stability with a sensitivity decrease of only 5.5% after four months of storage in ambient conditions. The urea sensor demonstrates feasibility for urea analysis in urine samples.
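    The figures of merit reported for such amperometric sensors can be derived from a calibration curve as in the sketch below; this is a generic 3-sigma treatment, and the variable names and blank-noise input are assumptions, not the authors' procedure.

      import numpy as np

      def calibrate(conc_mM, current_uA, electrode_area_cm2, blank_sd_uA):
          c = np.asarray(conc_mM, float)
          i = np.asarray(current_uA, float)
          slope, intercept = np.polyfit(c, i, 1)              # linear fit over the linear range
          fitted = slope * c + intercept
          r_squared = 1.0 - np.sum((i - fitted) ** 2) / np.sum((i - i.mean()) ** 2)
          sensitivity = slope / electrode_area_cm2            # uA mM^-1 cm^-2
          detection_limit_mM = 3.0 * blank_sd_uA / slope      # 3-sigma criterion
          return sensitivity, r_squared, detection_limit_mM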

  15. EM modelling of arbitrary shaped anisotropic dielectric objects using an efficient 3D leapfrog scheme on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Gansen, A.; Hachemi, M. El; Belouettar, S.; Hassan, O.; Morgan, K.

    2016-09-01

    The standard Yee algorithm is widely used in computational electromagnetics because of its simplicity and divergence free nature. A generalization of the classical Yee scheme to 3D unstructured meshes is adopted, based on the use of a Delaunay primal mesh and its high quality Voronoi dual. This allows the problem of accuracy losses, which are normally associated with the use of the standard Yee scheme and a staircased representation of curved material interfaces, to be circumvented. The 3D dual mesh leapfrog-scheme which is presented has the ability to model both electric and magnetic anisotropic lossy materials. This approach enables the modelling of problems, of current practical interest, involving structured composites and metamaterials.
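    For contrast with the unstructured dual-mesh scheme described above, the staggered leapfrog time-stepping of the classical structured Yee algorithm can be illustrated in one dimension; this sketch (vacuum, normalized units) is purely pedagogical and is not the paper's method.

      import numpy as np

      def yee_1d(n_cells=200, n_steps=500, courant=0.5):
          e = np.zeros(n_cells + 1)     # E lives on integer grid points
          h = np.zeros(n_cells)         # H lives on half-integer points
          for step in range(n_steps):
              h += courant * np.diff(e)                        # advance H to t + dt/2
              e[1:-1] += courant * np.diff(h)                  # advance E to t + dt
              e[n_cells // 2] += np.exp(-((step - 30) / 10.0) ** 2)   # soft Gaussian source
          return e, h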

  16. SAR Object Change Detection Study.

    DTIC Science & Technology

    1980-03-01

    This study assesses the applicability of three region-based change-detection methods when applied to synthetic aperture radar (SAR) imagery. The algorithms developed were applied to synthetic aperture radar image data furnished by RADC; some preprocessing of all images was required.

  17. A method of 3D reconstruction via ISAR Sequences based on scattering centers association for space rigid object

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zou, Jiangwei; Xu, Shiyou; Tian, Biao; Chen, Zengping

    2014-10-01

    In this paper, the effect of orbital motion on the scattering centers' trajectories is analyzed and introduced into scattering center association as a constraint. A screening method for feature points is presented to analyze the false points in the reconstructed result and the wrong associations that lead to these false points. Iterating between the 3D reconstruction and the association result further improves the precision of the final reconstructed result. Simulation data show the validity of the algorithm.

  18. Multi-Frame Object Detection

    DTIC Science & Technology

    2012-09-01

    …was originally developed by Intel, but is now supported by Willow Garage and Itseez [13]. The library includes many applications including facial… We apply two-dimensional features to each frame of a multi-frame sample in the same locations and sum the results, as shown in the following equation and… Dataset: We created a synthetic dataset that depicts a circular object undergoing a diagonal back-and-forth motion. 200 sequences, each spanning three

  19. 3D micro-XRF for cultural heritage objects: new analysis strategies for the investigation of the Dead Sea Scrolls.

    PubMed

    Mantouvalou, Ioanna; Wolff, Timo; Hahn, Oliver; Rabin, Ira; Lühl, Lars; Pagels, Marcel; Malzer, Wolfgang; Kanngiesser, Birgit

    2011-08-15

    A combination of 3D micro X-ray fluorescence spectroscopy (3D micro-XRF) and micro-XRF was utilized for the investigation of a small collection of highly heterogeneous, partly degraded Dead Sea Scroll parchment samples from known excavation sites. The quantitative combination of the two techniques proves to be suitable for the identification of reliable marker elements which may be used for classification and provenance studies. With 3D micro-XRF, the three-dimensional nature, i.e. the depth-resolved elemental composition as well as density variations, of the samples was investigated and bromine could be identified as a suitable marker element. It is shown through a comparison of quantitative and semiquantitative values for the bromine content derived using both techniques that, for elements which are homogeneously distributed in the sample matrix, quantification with micro-XRF using a one-layer model is feasible. Thus, the possibility for routine provenance studies using portable micro-XRF instrumentation on a vast amount of samples, even on site, is obtained through this work.

  20. Left ventricular endocardial surface detection based on real-time 3D echocardiographic data

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.

    2001-01-01

    OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged by 2D echocardiography with off-line reconstruction and by RT3DE. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows navigation into the reconstructed volume and display of any section of the volume.

  1. Detection of ancient morphology and potential hydrocarbon traps using 3-D seismic data and attribute analysis

    SciTech Connect

    Heggland, R.

    1995-12-31

    This paper presents the use of seismic attributes on 3D data to reveal Tertiary and Cretaceous geological features in Norwegian block 9/2. Some of the features would hardly be possible to map using only 2D seismic data. The method which involves a precise interpretation of horizons, attribute analysis and manipulation of colour displays, may be useful when studying morphology, faults and hydrocarbon traps. The interval of interest in this study was from 0 to 1.5 s TWT. Horizontal displays (timeslices and attribute maps), seemed to highlight very nicely geological features such as shallow channels, fractures, karst topography and faults. The attributes used for mapping these features were amplitude, total reflection energy (a volume or time interval attribute), dip and azimuth. The choice of colour scale and manipulation of colour displays were also critical for the results. The data examples clearly demonstrate how it is possible to achieve a very detailed mapping of geological features using 3D seismic data and attribute analysis. The results of this study were useful for the understanding of hydrocarbon migration paths and hydrocarbon traps.

  2. A collaborative computing framework of cloud network and WBSN applied to fall detection and 3-D motion reconstruction.

    PubMed

    Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh

    2014-03-01

    As cloud computing and wireless body sensor network technologies gradually mature, ubiquitous healthcare services can prevent accidents instantly and effectively, as well as provide relevant information to reduce related processing time and cost. This study proposes a co-processing intermediary framework that integrates cloud and wireless body sensor networks, mainly applied to fall detection and 3-D motion reconstruction. The main focuses of this study include distributed computing and resource allocation for processing sensing data over the computing architecture, network conditions and performance evaluation. Through this framework, the transmission and computing time of sensing data are reduced to enhance the overall performance of the fall event detection and 3-D motion reconstruction services.

  3. 3D Mapping of plasma effective areas via detection of cancer cell damage induced by atmospheric pressure plasma jets

    NASA Astrophysics Data System (ADS)

    Han, Xu; Liu, Yueing; Stack, M. Sharon; Ptasinska, Sylwia

    2014-12-01

    In the present study, a nitrogen atmospheric pressure plasma jet (APPJ) was used for irradiation of oral cancer cells. Since cancer cells are very susceptible to plasma treatment, they can be used as a tool for detection of APPJ-effective areas, which extended much further than the visible part of the APPJ. An immunofluorescence assay was used for DNA damage identification, visualization and quantification. Thus, the effective damage area and damage level were determined and plotted as 3D images.

  4. Mii School: New 3D Technologies Applied in Education to Detect Drug Abuses and Bullying in Adolescents

    NASA Astrophysics Data System (ADS)

    Carmona, José Alberto; Espínola, Moisés; Cangas, Adolfo J.; Iribarne, Luis

    Mii School is a 3D school simulator developed with Blender and used by psychology researchers for the detection of drug abuse, bullying and mental disorders in adolescents. The school simulator is an interactive video game in which the players, in this case the students, have to choose, across 17 simulated scenes, the options that best define their personalities. In this paper we present a description of its technical characteristics and the first results obtained in a real school.

  5. Dynamic 3D MR Visualization and Detection of Upper Airway Obstruction during Sleep using Region Growing Segmentation

    PubMed Central

    Kim, Yoon-Chul; Khoo, Michael C.K.; Davidson Ward, Sally L.; Nayak, Krishna S.

    2016-01-01

    Goal We demonstrate a novel and robust approach for visualization of upper airway dynamics and detection of obstructive events from dynamic 3D magnetic resonance imaging (MRI) scans of the pharyngeal airway. Methods This approach uses 3D region growing, where the operator selects a region of interest that includes the pharyngeal airway, places two seeds in the patent airway, and determines a threshold for the first frame. Results This approach required 5 sec/frame of CPU time compared to 10 min/frame of operator time for manual segmentation. It compared well with manual segmentation, resulting in Dice Coefficients of 0.84 to 0.94, whereas the Dice Coefficients for two manual segmentations by the same observer were 0.89 to 0.97. It was also able to automatically detect 83% of collapse events. Conclusion Use of this simple semi-automated segmentation approach improves the workflow of novel dynamic MRI studies of the pharyngeal airway and enables visualization and detection of obstructive events. Significance Obstructive sleep apnea is a significant public health issue affecting 4-9% of adults and 2% of children. Recently, 3D dynamic MRI of the upper airway has been demonstrated during natural sleep, with sufficient spatio-temporal resolution to non-invasively study patterns of airway obstruction in young adults with OSA. This work makes it practical to analyze these long scans and visualize important factors in an MRI sleep study, such as the time, site, and extent of airway collapse. PMID:26258929
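    The seeded, thresholded region growing described above can be sketched as a breadth-first flood fill in 3D; the assumption here is that the airway is darker than the threshold, and the details (connectivity, per-frame threshold propagation) are simplified relative to the published workflow.

      import numpy as np
      from collections import deque

      def region_grow_3d(volume, seeds, threshold):
          # volume: one 3D MRI frame; seeds: list of (z, y, x) voxels placed in the patent airway.
          mask = np.zeros(volume.shape, dtype=bool)
          queue = deque(s for s in seeds if volume[s] <= threshold)
          for s in queue:
              mask[s] = True
          offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
          while queue:
              z, y, x = queue.popleft()
              for dz, dy, dx in offsets:           # visit 6-connected neighbours
                  nz, ny, nx = z + dz, y + dy, x + dx
                  if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                          and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                          and volume[nz, ny, nx] <= threshold):
                      mask[nz, ny, nx] = True
                      queue.append((nz, ny, nx))
          return mask   # binary airway segmentation for this frame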

  6. Optical full-depth refocusing of 3-D objects based on subdivided-elemental images and local periodic δ-functions in integral imaging.

    PubMed

    Ai, Ling-Yu; Dong, Xiao-Bin; Jang, Jae-Young; Kim, Eun-Soo

    2016-05-16

    We propose a new approach for optical refocusing of three-dimensional (3-D) objects at their real depth without a pickup-range limitation, based on subdivided elemental image arrays (sub-EIAs) and local periodic δ-function arrays (L-PDFAs). The EIA captured from 3-D objects located outside the pickup range is divided into a number of sub-EIAs depending on the object distance from the lens array. Then, by convolving these sub-EIAs with each L-PDFA, whose spatial period corresponds to the specific object's depth and whose size is matched to that of the sub-EIA, arrays of spatially-filtered sub-EIAs (SF-sub-EIAs) for each object depth can be uniquely extracted. From these arrays of SF-sub-EIAs, 3-D objects can be optically reconstructed to be refocused at their real depth. The operational principle of the proposed method is analyzed based on ray optics. In addition, to confirm the feasibility of the proposed method in practical applications, experiments with test objects are carried out and the results are comparatively discussed with those of the conventional method.

  7. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    PubMed Central

    Park, Jae Byung; Lee, Seung Hun; Lee, Il Jae

    2009-01-01

    In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle to carry the huge steel plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For the lug pose acquisition, four laser lines are projected on both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination changes, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: the top view alignment and the side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor. PMID:22400007
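    A rough sketch of the line-extraction stage (thresholding followed by a Hough transform) is given below; Canny edge detection is used here as a stand-in for the thinning step, and all parameter values are assumptions rather than those of the published sensor.

      import cv2
      import numpy as np

      def detect_laser_lines(image_gray, intensity_threshold=200):
          # keep only bright laser pixels, then extract line segments
          _, binary = cv2.threshold(image_gray, intensity_threshold, 255, cv2.THRESH_BINARY)
          edges = cv2.Canny(binary, 50, 150)
          lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                                  minLineLength=40, maxLineGap=10)
          return [] if lines is None else [tuple(l[0]) for l in lines]   # (x1, y1, x2, y2)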

  8. Low-Cost 3D Printers Enable High-Quality and Automated Sample Preparation and Molecular Detection.

    PubMed

    Chan, Kamfai; Coen, Mauricio; Hardick, Justin; Gaydos, Charlotte A; Wong, Kah-Yat; Smith, Clayton; Wilson, Scott A; Vayugundla, Siva Praneeth; Wong, Season

    2016-01-01

    Most molecular diagnostic assays require upfront sample preparation steps to isolate the target's nucleic acids, followed by its amplification and detection using various nucleic acid amplification techniques. Because molecular diagnostic methods are generally rather difficult to perform manually without highly trained users, automated and integrated systems are highly desirable but too costly for use at point-of-care or low-resource settings. Here, we showcase the development of a low-cost and rapid nucleic acid isolation and amplification platform by modifying entry-level 3D printers that cost between $400 and $750. Our modifications consisted of replacing the extruder with a tip-comb attachment that houses magnets to conduct magnetic particle-based nucleic acid extraction. We then programmed the 3D printer to conduct motions that can perform high-quality extraction protocols. Up to 12 samples can be processed simultaneously in under 13 minutes and the efficiency of nucleic acid isolation matches well against gold-standard spin-column-based extraction technology. Additionally, we used the 3D printer's heated bed to supply heat to perform water bath-based polymerase chain reactions (PCRs). Using another attachment to hold PCR tubes, the 3D printer was programmed to automate the process of shuttling PCR tubes between water baths. By eliminating the temperature ramping needed in most commercial thermal cyclers, the run time of a 35-cycle PCR protocol was shortened by 33%. This article demonstrates that for applications in resource-limited settings, expensive nucleic acid extraction devices and thermal cyclers that are used in many central laboratories can be potentially replaced by a device modified from inexpensive entry-level 3D printers.
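    To give a flavour of how an entry-level printer can be scripted for liquid handling, a sketch follows; it assumes a Marlin-style firmware reachable over a serial port and entirely hypothetical well coordinates, dwell times and temperatures, and it does not reproduce the published extraction or PCR protocol.

      import serial
      import time

      WELL_ROW_Y = {"lysis": 40.0, "wash_1": 60.0, "wash_2": 80.0, "elution": 100.0}  # hypothetical layout

      def send(port, gcode):
          port.write((gcode + "\n").encode())   # one G-code command per line
          port.readline()                       # wait for the firmware acknowledgement

      def run_extraction(device="/dev/ttyUSB0"):
          with serial.Serial(device, 115200, timeout=10) as port:
              time.sleep(2)                     # let the controller finish resetting
              send(port, "G28")                 # home all axes
              send(port, "M140 S65")            # pre-heat the bed used as a water bath
              for step, y in WELL_ROW_Y.items():
                  send(port, "G1 Z40 F1200")            # lift the magnetic tip-comb
                  send(port, "G1 Y%.1f F3000" % y)      # move over the next well row
                  send(port, "G1 Z5 F600")              # lower the comb into the wells
                  time.sleep(60)                        # hypothetical binding/mixing dwell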

  9. Low-Cost 3D Printers Enable High-Quality and Automated Sample Preparation and Molecular Detection

    PubMed Central

    Chan, Kamfai; Coen, Mauricio; Hardick, Justin; Gaydos, Charlotte A.; Wong, Kah-Yat; Smith, Clayton; Wilson, Scott A.; Vayugundla, Siva Praneeth; Wong, Season

    2016-01-01

    Most molecular diagnostic assays require upfront sample preparation steps to isolate the target’s nucleic acids, followed by its amplification and detection using various nucleic acid amplification techniques. Because molecular diagnostic methods are generally rather difficult to perform manually without highly trained users, automated and integrated systems are highly desirable but too costly for use at point-of-care or low-resource settings. Here, we showcase the development of a low-cost and rapid nucleic acid isolation and amplification platform by modifying entry-level 3D printers that cost between $400 and $750. Our modifications consisted of replacing the extruder with a tip-comb attachment that houses magnets to conduct magnetic particle-based nucleic acid extraction. We then programmed the 3D printer to conduct motions that can perform high-quality extraction protocols. Up to 12 samples can be processed simultaneously in under 13 minutes and the efficiency of nucleic acid isolation matches well against gold-standard spin-column-based extraction technology. Additionally, we used the 3D printer’s heated bed to supply heat to perform water bath-based polymerase chain reactions (PCRs). Using another attachment to hold PCR tubes, the 3D printer was programmed to automate the process of shuttling PCR tubes between water baths. By eliminating the temperature ramping needed in most commercial thermal cyclers, the run time of a 35-cycle PCR protocol was shortened by 33%. This article demonstrates that for applications in resource-limited settings, expensive nucleic acid extraction devices and thermal cyclers that are used in many central laboratories can be potentially replaced by a device modified from inexpensive entry-level 3D printers. PMID:27362424

  10. 3D Detection, Quantification and Correlation of Slope Failures with Geologic Structure in the Mont Blanc massif

    NASA Astrophysics Data System (ADS)

    Allan, Mark; Dunning, Stuart; Lim, Michael; Woodward, John

    2016-04-01

    A thorough understanding of supply from landslides and knowledge of their spatial distribution is of fundamental importance to high-mountain sediment budgets. Advances in 3D data acquisition techniques are heralding new opportunities to create high-resolution topographic models to aid our understanding of landscape change through time. In this study, we use a Structure-from-Motion Multi-View Stereo (SfM-MVS) approach to detect and quantify slope failures at selected sites in the Mont Blanc massif. Past and present glaciations along with its topographical characteristics have resulted in a high rate of geomorphological activity within the range. Data for SfM-MVS processing were captured across variable temporal scales to examine short-term (daily), seasonal and annual change from terrestrial, Unmanned Aerial Vehicle (UAV) and helicopter perspectives. Variable spatial scales were also examined ranging from small focussed slopes (~0.01 km2) to large valley-scale surveys (~3 km2). Alignment and registration were conducted using a series of Ground Control Points (GCPs) across the surveyed slope at various heights and slope aspects. GCPs were also used to optimise data and reduce non-linear distortions. 3D differencing was performed using a multiscale model-to-model comparison algorithm (M3C2) which uses variable thresholding across each slope based on local surface roughness and model alignment quality. Detected change was correlated with local slope structure and 3D discontinuity analysis was undertaken using a plane-detection and clustering approach (DSE). Computation of joint spacing was performed using the classified data and normal distances. Structural analysis allowed us to assign a Slope Mass Rating (SMR) and assess the stability of each slope relative to the detected change and determine likely failure modes. We demonstrate an entirely 3D workflow which preserves the complexity of alpine slope topography to compute volumetric loss using a variable threshold. A
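    A crude stand-in for the point-cloud differencing step is sketched below; a nearest-neighbour cloud-to-cloud distance is far simpler than the M3C2 algorithm used in the study (which measures change along local surface normals with a spatially variable threshold), but it illustrates the basic comparison of two survey epochs.

      import numpy as np
      from scipy.spatial import cKDTree

      def cloud_change(reference_xyz, compare_xyz, detection_threshold):
          # reference_xyz, compare_xyz: (N, 3) and (M, 3) arrays of survey points in metres.
          tree = cKDTree(reference_xyz)
          distances, _ = tree.query(compare_xyz, k=1)     # nearest-neighbour distances
          changed = distances > detection_threshold       # flag points exceeding the threshold
          return distances, changed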

  11. 3D parallel-detection microwave tomography for clinical breast imaging

    NASA Astrophysics Data System (ADS)

    Epstein, N. R.; Meaney, P. M.; Paulsen, K. D.

    2014-12-01

    A biomedical microwave tomography system with 3D-imaging capabilities has been constructed and translated to the clinic. Updates to the hardware and reconfiguration of the electronic-network layouts in a more compartmentalized construct have streamlined system packaging. Upgrades to the data acquisition and microwave components have increased data-acquisition speeds and improved system performance. By incorporating analog-to-digital boards that accommodate the linear amplification and dynamic-range coverage our system requires, a complete set of data (for a fixed array position at a single frequency) is now acquired in 5.8 s. Replacement of key components (e.g., switches and power dividers) by devices with improved operational bandwidths has enhanced system response over a wider frequency range. High-integrity, low-power signals are routinely measured down to -130 dBm for frequencies ranging from 500 to 2300 MHz. Adequate inter-channel isolation has been maintained, and a dynamic range >110 dB has been achieved for the full operating frequency range (500-2900 MHz). For our primary band of interest, the associated measurement deviations are less than 0.33% and 0.5° for signal amplitude and phase values, respectively. A modified monopole antenna array (composed of two interwoven eight-element sub-arrays), in conjunction with an updated motion-control system capable of independently moving the sub-arrays to various in-plane and cross-plane positions within the illumination chamber, has been configured in the new design for full volumetric data acquisition. Signal-to-noise ratios (SNRs) are more than adequate for all transmit/receive antenna pairs over the full frequency range and for the variety of in-plane and cross-plane configurations. For proximal receivers, in-plane SNRs greater than 80 dB are observed up to 2900 MHz, while cross-plane SNRs greater than 80 dB are seen for 6 cm sub-array spacing (for frequencies up to 1500 MHz). We demonstrate accurate recovery

  12. 3D parallel-detection microwave tomography for clinical breast imaging.

    PubMed

    Epstein, N R; Meaney, P M; Paulsen, K D

    2014-12-01

    A biomedical microwave tomography system with 3D-imaging capabilities has been constructed and translated to the clinic. Updates to the hardware and reconfiguration of the electronic-network layouts in a more compartmentalized construct have streamlined system packaging. Upgrades to the data acquisition and microwave components have increased data-acquisition speeds and improved system performance. By incorporating analog-to-digital boards that accommodate the linear amplification and dynamic-range coverage our system requires, a complete set of data (for a fixed array position at a single frequency) is now acquired in 5.8 s. Replacement of key components (e.g., switches and power dividers) by devices with improved operational bandwidths has enhanced system response over a wider frequency range. High-integrity, low-power signals are routinely measured down to -130 dBm for frequencies ranging from 500 to 2300 MHz. Adequate inter-channel isolation has been maintained, and a dynamic range >110 dB has been achieved for the full operating frequency range (500-2900 MHz). For our primary band of interest, the associated measurement deviations are less than 0.33% and 0.5° for signal amplitude and phase values, respectively. A modified monopole antenna array (composed of two interwoven eight-element sub-arrays), in conjunction with an updated motion-control system capable of independently moving the sub-arrays to various in-plane and cross-plane positions within the illumination chamber, has been configured in the new design for full volumetric data acquisition. Signal-to-noise ratios (SNRs) are more than adequate for all transmit/receive antenna pairs over the full frequency range and for the variety of in-plane and cross-plane configurations. For proximal receivers, in-plane SNRs greater than 80 dB are observed up to 2900 MHz, while cross-plane SNRs greater than 80 dB are seen for 6 cm sub-array spacing (for frequencies up to 1500 MHz). We demonstrate accurate recovery

  13. 3D parallel-detection microwave tomography for clinical breast imaging

    SciTech Connect

    Epstein, N. R.; Meaney, P. M.; Paulsen, K. D.

    2014-12-15

    A biomedical microwave tomography system with 3D-imaging capabilities has been constructed and translated to the clinic. Updates to the hardware and reconfiguration of the electronic-network layouts in a more compartmentalized construct have streamlined system packaging. Upgrades to the data acquisition and microwave components have increased data-acquisition speeds and improved system performance. By incorporating analog-to-digital boards that accommodate the linear amplification and dynamic-range coverage our system requires, a complete set of data (for a fixed array position at a single frequency) is now acquired in 5.8 s. Replacement of key components (e.g., switches and power dividers) by devices with improved operational bandwidths has enhanced system response over a wider frequency range. High-integrity, low-power signals are routinely measured down to −130 dBm for frequencies ranging from 500 to 2300 MHz. Adequate inter-channel isolation has been maintained, and a dynamic range >110 dB has been achieved for the full operating frequency range (500–2900 MHz). For our primary band of interest, the associated measurement deviations are less than 0.33% and 0.5° for signal amplitude and phase values, respectively. A modified monopole antenna array (composed of two interwoven eight-element sub-arrays), in conjunction with an updated motion-control system capable of independently moving the sub-arrays to various in-plane and cross-plane positions within the illumination chamber, has been configured in the new design for full volumetric data acquisition. Signal-to-noise ratios (SNRs) are more than adequate for all transmit/receive antenna pairs over the full frequency range and for the variety of in-plane and cross-plane configurations. For proximal receivers, in-plane SNRs greater than 80 dB are observed up to 2900 MHz, while cross-plane SNRs greater than 80 dB are seen for 6 cm sub-array spacing (for frequencies up to 1500 MHz). We demonstrate accurate

  14. 3D parallel-detection microwave tomography for clinical breast imaging

    PubMed Central

    Meaney, P. M.; Paulsen, K. D.

    2014-01-01

    A biomedical microwave tomography system with 3D-imaging capabilities has been constructed and translated to the clinic. Updates to the hardware and reconfiguration of the electronic-network layouts in a more compartmentalized construct have streamlined system packaging. Upgrades to the data acquisition and microwave components have increased data-acquisition speeds and improved system performance. By incorporating analog-to-digital boards that accommodate the linear amplification and dynamic-range coverage our system requires, a complete set of data (for a fixed array position at a single frequency) is now acquired in 5.8 s. Replacement of key components (e.g., switches and power dividers) by devices with improved operational bandwidths has enhanced system response over a wider frequency range. High-integrity, low-power signals are routinely measured down to −130 dBm for frequencies ranging from 500 to 2300 MHz. Adequate inter-channel isolation has been maintained, and a dynamic range >110 dB has been achieved for the full operating frequency range (500–2900 MHz). For our primary band of interest, the associated measurement deviations are less than 0.33% and 0.5° for signal amplitude and phase values, respectively. A modified monopole antenna array (composed of two interwoven eight-element sub-arrays), in conjunction with an updated motion-control system capable of independently moving the sub-arrays to various in-plane and cross-plane positions within the illumination chamber, has been configured in the new design for full volumetric data acquisition. Signal-to-noise ratios (SNRs) are more than adequate for all transmit/receive antenna pairs over the full frequency range and for the variety of in-plane and cross-plane configurations. For proximal receivers, in-plane SNRs greater than 80 dB are observed up to 2900 MHz, while cross-plane SNRs greater than 80 dB are seen for 6 cm sub-array spacing (for frequencies up to 1500 MHz). We demonstrate accurate

  15. Use of 3-D magnetic resonance electrical impedance tomography in detecting human cerebral stroke: a simulation study*

    PubMed Central

    Gao, Nuo; Zhu, Shan-an; He, Bin

    2005-01-01

    We have developed a new three dimensional (3-D) conductivity imaging approach and have used it to detect human brain conductivity changes corresponding to acute cerebral stroke. The proposed Magnetic Resonance Electrical Impedance Tomography (MREIT) approach is based on the J-Substitution algorithm and is expanded to imaging 3-D subject conductivity distribution changes. Computer simulation studies have been conducted to evaluate the present MREIT imaging approach. Simulations of both types of cerebral stroke, hemorrhagic stroke and ischemic stroke, were performed on a four-sphere head model. Simulation results showed that the correlation coefficient (CC) and relative error (RE) between target and estimated conductivity distributions were 0.9245±0.0068 and 8.9997%±0.0084% for hemorrhagic stroke, and 0.6748±0.0197 and 8.8986%±0.0089% for ischemic stroke, when the SNR (signal-to-noise ratio) of the added GWN (Gaussian white noise) was 40. The convergence characteristic was also evaluated according to the changes of CC and RE with different iteration numbers. The CC increases and the RE decreases monotonically with an increasing number of iterations. The present simulation results show the feasibility of the proposed 3-D MREIT approach in hemorrhagic and ischemic stroke detection and suggest that the method may become a useful alternative in clinical diagnosis of acute cerebral stroke in humans. PMID:15822161
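    The two scores used to evaluate the reconstructions can be computed as in the following sketch; the relative-error definition shown (normalized L2 norm, in percent) is a common convention and may differ from the exact formula used in the paper.

      import numpy as np

      def cc_and_re(target_sigma, estimated_sigma):
          t = target_sigma.ravel()
          e = estimated_sigma.ravel()
          cc = np.corrcoef(t, e)[0, 1]                              # correlation coefficient
          re = 100.0 * np.linalg.norm(t - e) / np.linalg.norm(t)    # relative error (%)
          return cc, re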

  16. 3D Face Model Dataset: Automatic Detection of Facial Expressions and Emotions for Educational Environments

    ERIC Educational Resources Information Center

    Chickerur, Satyadhyan; Joshi, Kartik

    2015-01-01

    Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using the facial expressions of students in an educational environment is useful in providing insight into the effectiveness of tutoring…

  17. 3D metal-organic framework as highly efficient biosensing platform for ultrasensitive and rapid detection of bisphenol A.

    PubMed

    Wang, Xue; Lu, Xianbo; Wu, Lidong; Chen, Jiping

    2015-03-15

    Bisphenol A (BPA), which commonly occurs in everyday plastic products, is one of the most important endocrine-disrupting chemicals. In this work, a copper-centered metal-organic framework (Cu-MOF) was synthesized and characterized by SEM, TEM, XRD, FTIR and electrochemical methods. The resultant Cu-MOF was explored as a robust electrochemical biosensing platform, with tyrosinase (Tyr) as a model enzyme, for ultrasensitive and rapid detection of BPA. The Cu-MOF provides a 3D structure with a large specific surface area, which is beneficial for enzyme and BPA absorption and thus improves the sensitivity of the biosensor. Furthermore, Cu-MOF acts as a novel sorbent that increases the BPA concentration available to react with tyrosinase, through π-π stacking interactions between BPA and Cu-MOF. The Tyr biosensor exhibited a high sensitivity of 0.2242 A M⁻¹ for BPA, a wide linear range from 5.0×10⁻⁸ to 3.0×10⁻⁶ mol L⁻¹, and a low detection limit of 13 nmol L⁻¹. The response time for detection of BPA is less than 11 s. The proposed method was successfully applied to rapid and selective detection of BPA in plastic products with satisfactory results; the recoveries were in the range of 94.0-101.6% for practical applications. With these remarkable advantages, MOF-based 3D structures show great promise as robust biosensing platforms for ultrasensitive and rapid detection of BPA.

  18. Azo-Based Iridium(III) Complexes as Multicolor Phosphorescent Probes to Detect Hypoxia in 3D Multicellular Tumor Spheroids

    NASA Astrophysics Data System (ADS)

    Sun, Lingli; Li, Guanying; Chen, Xiang; Chen, Yu; Jin, Chengzhi; Ji, Liangnian; Chao, Hui

    2015-10-01

    Hypoxia is an important characteristic of malignant solid tumors and is considered as a possible causative factor for serious resistance to chemo- and radiotherapy. The exploration of novel fluorescent probes capable of detecting hypoxia in solid tumors will aid tumor diagnosis and treatment. In this study, we reported the design and synthesis of a series of “off-on” phosphorescence probes for hypoxia detection in adherent and three-dimensional multicellular spheroid models. All of the iridium(III) complexes incorporate an azo group as an azo-reductase reactive moiety to detect hypoxia. Reduction of non-phosphorescent probes Ir1-Ir8 by reductases under hypoxic conditions resulted in the generation of highly phosphorescent corresponding amines for detection of hypoxic regions. Moreover, these probes can penetrate into 3D multicellular spheroids over 100 μm and image the hypoxic regions. Most importantly, these probes display a high selectivity for the detection of hypoxia in 2D cells and 3D multicellular spheroids.

  19. Combining a wavelet transform with a channelized Hotelling observer for tumor detection in 3D PET oncology imaging

    NASA Astrophysics Data System (ADS)

    Lartizien, Carole; Tomei, Sandrine; Maxim, Voichita; Odet, Christophe

    2007-03-01

    This study evaluates new observer models for 3D whole-body Positron Emission Tomography (PET) imaging based on a wavelet sub-band decomposition and compares them with the classical constant-Q CHO model. Our final goal is to develop an original method that performs guided detection of abnormal activity foci in PET oncology imaging based on these new observer models. This computer-aided diagnostic method would be of great benefit to clinicians for diagnosis and to biologists for large-scale screening of rodent populations in molecular imaging. Method: We have previously shown good correlation of the channelized Hotelling observer (CHO) using a constant-Q model with human observer performance for 3D PET oncology imaging. We propose an alternate method that combines a CHO observer with a wavelet sub-band decomposition of the image and compare it to the standard CHO implementation. This method performs an undecimated transform using a biorthogonal B-spline 4/4 wavelet basis to extract the feature set for input to the Hotelling observer. This work is based on simulated 3D PET images of an extended MCAT phantom with randomly located lesions. We compare three evaluation criteria: classification performance using the signal-to-noise ratio (SNR), computational efficiency, and the visual quality of the derived 3D maps of the decision variable λ. The SNR is estimated on a series of test images for a variable number of training images for both observers. Results: Results show that the maximum SNR is higher with the constant-Q CHO observer, especially for targets located in the liver, and that it is reached with a smaller number of training images. However, preliminary analysis indicates that the visual quality of the 3D maps of the decision variable λ is higher with the wavelet-based CHO and that the computation time to derive a 3D λ-map is about 350 times shorter than for the standard CHO. This suggests that the wavelet-CHO observer is a good candidate for use in our guided
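
    For reference, the detection SNR of a channelized Hotelling observer of the kind compared here is typically computed from channelized signal-present and signal-absent samples. A schematic NumPy sketch under the usual CHO definitions (the channel matrix could hold wavelet sub-bands or constant-Q channels; this is not code from the study):

        import numpy as np

        def cho_snr(channels, present, absent):
            """channels: (n_pixels, n_channels) channel matrix;
            present/absent: (n_images, n_pixels) signal-present and
            signal-absent image samples."""
            vp = present @ channels            # channel outputs, signal present
            va = absent @ channels             # channel outputs, signal absent
            dv = vp.mean(axis=0) - va.mean(axis=0)
            # Average class covariance in channel space
            s = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
            w = np.linalg.solve(s, dv)         # Hotelling template in channel space
            t_p, t_a = vp @ w, va @ w          # decision variables
            snr = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))
            return snr, w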

  20. Longitudinal correlation of 3D OCT to detect early stage erosion in bovine enamel.

    PubMed

    Aden, Abdirahman; Anderson, Paul; Burnett, Gary R; Lynch, Richard J M; Tomlins, Peter H

    2017-02-01

    Erosive tissue-loss in dental enamel is of significant clinical concern because the net loss of enamel is irreversible; initial erosion, however, is reversible. Micro-hardness testing is a standard method for measuring initial erosion, but its invasive nature has led to the investigation of alternative measurement techniques. Optical coherence tomography (OCT) is an attractive alternative because of its ability to non-invasively image three-dimensional volumes. In this study, a four-dimensional OCT system is used to longitudinally measure bovine enamel undergoing a continuous erosive challenge. A new method of analyzing 3D OCT volumes is introduced that compares intensity projections of the specimen surface by calculating the slope of a linear regression line between corresponding pixel intensities and the associated correlation coefficient. The OCT correlation measurements are compared to micro-hardness data and found to exhibit a linear relationship. The results show that this method is a sensitive technique for the investigation of the formation of early stage erosive lesions.
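
    The correlation analysis described here reduces to a linear regression between corresponding pixel intensities of a baseline and a follow-up surface projection. A minimal sketch, assuming the two projections are already co-registered 2-D arrays (variable names are illustrative, not from the paper):

        import numpy as np

        def projection_correlation(baseline, followup):
            """Regression slope and Pearson correlation coefficient between
            corresponding pixels of two enamel-surface intensity projections."""
            x = np.asarray(baseline, dtype=float).ravel()
            y = np.asarray(followup, dtype=float).ravel()
            slope, intercept = np.polyfit(x, y, 1)   # least-squares line y = a*x + b
            r = np.corrcoef(x, y)[0, 1]
            return slope, r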

  1. Longitudinal correlation of 3D OCT to detect early stage erosion in bovine enamel

    PubMed Central

    Aden, Abdirahman; Anderson, Paul; Burnett, Gary R.; Lynch, Richard J. M.; Tomlins, Peter H.

    2017-01-01

    Erosive tissue-loss in dental enamel is of significant clinical concern because the net loss of enamel is irreversible; initial erosion, however, is reversible. Micro-hardness testing is a standard method for measuring initial erosion, but its invasive nature has led to the investigation of alternative measurement techniques. Optical coherence tomography (OCT) is an attractive alternative because of its ability to non-invasively image three-dimensional volumes. In this study, a four-dimensional OCT system is used to longitudinally measure bovine enamel undergoing a continuous erosive challenge. A new method of analyzing 3D OCT volumes is introduced that compares intensity projections of the specimen surface by calculating the slope of a linear regression line between corresponding pixel intensities and the associated correlation coefficient. The OCT correlation measurements are compared to micro-hardness data and found to exhibit a linear relationship. The results show that this method is a sensitive technique for the investigation of the formation of early stage erosive lesions. PMID:28270996

  2. Detecting ground moving objects using panoramic system

    NASA Astrophysics Data System (ADS)

    Xu, Fuyuan; Gu, Guohua; Wang, Jing

    2015-09-01

    Moving object detection is an essential task in many computer vision and video processing applications. In this paper, a method for detecting moving objects with a panoramic system is proposed. It can detect ground moving objects while the camera is rotating, and is therefore called moving object detection in rotation (MODIR). MODIR enlarges the detection area and increases the flexibility of the panoramic system. When the camera rotates, both the background and the moving objects move in the image. In contrast to traditional methods, the aim of MODIR is to segment out the independently moving entities according to the motion in the video, whether or not the imaging platform is moving. Firstly, the corresponding relations between images captured from two different views are derived from multi-view geometry, and the moving objects and the stationary background are distinguished using these relations. Secondly, a moving object detection framework based on multiple frames is established, which reduces the impact of image matching errors and cumulative errors on the detection result. In the experiments, an evaluation metric is used to compare the performance of MODIR with traditional methods, and a large number of videos captured by the panoramic system are processed with MODIR to demonstrate its good performance in practice.
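
    One common way to realize the "corresponding relations between views" on a rotating camera is to estimate a homography between consecutive frames, warp one onto the other, and difference them so that only independently moving objects remain. A hedged OpenCV sketch of that idea (not the authors' exact pipeline; the feature detector and threshold are assumptions):

        import cv2
        import numpy as np

        def moving_object_mask(prev_gray, curr_gray, diff_thresh=25):
            """Compensate camera rotation with a homography, then difference frames."""
            orb = cv2.ORB_create(1000)
            k1, d1 = orb.detectAndCompute(prev_gray, None)
            k2, d2 = orb.detectAndCompute(curr_gray, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(d1, d2)
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            h, w = curr_gray.shape
            warped = cv2.warpPerspective(prev_gray, H, (w, h))   # align the background
            diff = cv2.absdiff(curr_gray, warped)
            _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            return mask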

  3. Improved moving object detection and tracking method

    NASA Astrophysics Data System (ADS)

    Li, Zhanli; Yang, Fang; Li, Hong-an

    2016-07-01

    At present, detection and tracking of moving objects in video are used in many fields. To address the limitations of the traditional adjacent-frame difference method, this paper presents a three-frame difference method for object detection. For moving object tracking, a method combining a Kalman filter with Mean-Shift is proposed, in which the prediction step of the Kalman filter overcomes the weakness of Mean-Shift in selecting the initial position of the candidate object. Experimental results show that the proposed detection and tracking method is simple, precise, and produces good results.
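
    The three-frame difference referred to above typically ANDs the differences of the frame pairs (t-1, t) and (t, t+1), which suppresses the ghosting left behind by simple adjacent-frame differencing. A minimal OpenCV sketch under that common formulation (the threshold value is an assumption):

        import cv2

        def three_frame_difference(f_prev, f_curr, f_next, thresh=25):
            """Binary motion mask from three consecutive grayscale frames."""
            d1 = cv2.absdiff(f_curr, f_prev)
            d2 = cv2.absdiff(f_next, f_curr)
            _, b1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
            _, b2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
            mask = cv2.bitwise_and(b1, b2)    # keep pixels that moved in both pairs
            return cv2.medianBlur(mask, 5)    # light cleanup of isolated noise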

  4. A robust and efficient approach to detect 3D rectal tubes from CT colonography

    SciTech Connect

    Yang Xiaoyun; Slabaugh, Greg

    2011-11-15

    Purpose: The rectal tube (RT) is a common source of false positives (FPs) in computer-aided detection (CAD) systems for CT colonography. A robust and efficient detection of the RT can improve CAD performance by eliminating such "obvious" FPs and increasing radiologists' confidence in CAD. Methods: In this paper, we present a novel and robust bottom-up approach to detect the RT. Probabilistic models, trained using kernel density estimation on simple low-level features, are employed to rank and select the most likely RT candidate on each axial slice. Then, a shape model, robustly estimated using random sample consensus (RANSAC), infers the global RT path from the selected local detections. Subimages around the RT path are projected into a subspace formed from training subimages of the RT. A quadratic discriminant analysis (QDA) provides a classification of a subimage as RT or non-RT based on the projection. Finally, a bottom-top clustering method is proposed to merge the classification predictions together to locate the tip position of the RT. Results: Our method is validated using a diverse database, including data from five hospitals. On testing data with 21 patients (42 volumes), 99.5% of annotated RT paths were successfully detected. Evaluated with CAD, 98.4% of FPs caused by the RT were detected and removed without any loss of sensitivity. Conclusions: The proposed method demonstrates a high detection rate of the RT path, and when tested in a CAD system, reduces FPs caused by the RT without loss of sensitivity.
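
    The RANSAC step fits a global tube path through the per-slice candidate detections while rejecting outliers. A generic sketch of RANSAC line fitting on candidate centroids, purely to illustrate the idea (the paper's shape model and tolerances are not reproduced here):

        import numpy as np

        def ransac_line_3d(points, n_iter=500, inlier_tol=2.0, seed=None):
            """Fit a 3-D line to candidate centroids while ignoring outliers.
            points: (N, 3) array of per-slice candidate positions."""
            rng = np.random.default_rng(seed)
            pts = np.asarray(points, dtype=float)
            best_inliers = np.zeros(len(pts), dtype=bool)
            for _ in range(n_iter):
                p1, p2 = pts[rng.choice(len(pts), 2, replace=False)]
                d = p2 - p1
                norm = np.linalg.norm(d)
                if norm < 1e-9:
                    continue
                d /= norm
                # Distance of every point to the candidate line through p1 with direction d
                diff = pts - p1
                dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
                inliers = dist < inlier_tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            return best_inliers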

  5. A Robust Method to Detect Zero Velocity for Improved 3D Personal Navigation Using Inertial Sensors

    PubMed Central

    Xu, Zhengyi; Wei, Jianming; Zhang, Bo; Yang, Weijun

    2015-01-01

    This paper proposes a robust zero velocity (ZV) detector algorithm to accurately calculate stationary periods in a gait cycle. The proposed algorithm adopts an effective gait cycle segmentation method and introduces a Bayesian network (BN) model, based on the measurements of inertial sensors and kinesiology knowledge, to infer the ZV period. During the detected ZV period, an Extended Kalman Filter (EKF) is used to estimate the error states and calibrate the position error. The experiments reveal that the removal rate of ZV false detections by the proposed method increases by 80% compared with the traditional method at high walking speeds. Furthermore, based on the detected ZV, the Personal Inertial Navigation System (PINS) algorithm aided by the EKF performs better, especially in the altitude estimate. PMID:25831086
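
    The "traditional method" that such Bayesian detectors are compared against usually thresholds a windowed statistic of the accelerometer and gyroscope magnitudes. A minimal sketch of that baseline detector (window size and tolerances are illustrative values, not the paper's):

        import numpy as np

        GRAVITY = 9.81  # m/s^2

        def zero_velocity_mask(acc, gyro, win=15, acc_tol=0.5, gyro_tol=0.5):
            """acc, gyro: (N, 3) specific force [m/s^2] and angular rate [rad/s].
            Returns a boolean mask marking samples judged to be stationary."""
            acc_dev = np.abs(np.linalg.norm(acc, axis=1) - GRAVITY)
            gyro_mag = np.linalg.norm(gyro, axis=1)
            raw = (acc_dev < acc_tol) & (gyro_mag < gyro_tol)
            # Require the whole window around a sample to be quiet
            counts = np.convolve(raw.astype(float), np.ones(win), mode="same")
            return counts >= win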

  6. A surface-based 3-D dendritic spine detection approach from confocal microscopy images.

    PubMed

    Li, Qing; Deng, Zhigang

    2012-03-01

    Determining the relationship between dendritic spine morphology and its functional properties is a fundamental challenge in neurobiology research. In particular, how to accurately and automatically analyse meaningful structural information from a large microscopy image data set is far from being resolved. As pointed out in the existing literature, one remaining challenge in spine detection and segmentation is how to automatically separate touching spines. In this paper, based on various global and local geometric features of the dendrite structure, we propose a novel approach to detect and segment neuronal spines and, in particular, a breaking-down and stitching-up algorithm to accurately separate touching spines. Extensive performance comparisons show that our approach is more accurate and robust than two state-of-the-art spine detection and segmentation algorithms.

  7. Buried object detection in GPR images

    DOEpatents

    Paglieroni, David W; Chambers, David H; Bond, Steven W; Beer, W. Reginald

    2014-04-29

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.

  8. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a whole chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that builds on the spatial/temporal experience of the holographic visual. She will present some of the tools and techniques used for this learning, their advantages and disadvantages, as well as programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared with the 1990s, the holographic concept is spreading across the scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do the seen and the unseen interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the invisible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subjects? For whom?

  9. GPR Detection and 3D Mapping of Lateral Macropores II. Riparian Application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The morphology and prevalence of 1-10 cm diameter macropores in forested riparian wetland buffers is largely unknown despite their importance as a source of preferential nutrient delivery to stream channels. Here, we validated in situ procedures for detecting and mapping the three-dimensional struct...

  10. A robust automated method to detect stent struts in 3D intravascular optical coherence tomographic image sequences

    NASA Astrophysics Data System (ADS)

    Wang, A.; Eggermont, J.; Dekker, N.; Garcia-Garcia, H. M.; Pawar, R.; Reiber, J. H. C.; Dijkstra, J.

    2012-03-01

    Intravascular optical coherence tomography (IVOCT) provides very high resolution cross-sectional image sequences of vessels. It has been rapidly accepted for stent implantation and its follow up evaluation. Given the large amount of stent struts in a single image sequence, only automated detection methods are feasible. In this paper, we present an automated stent strut detection technique which requires neither lumen nor vessel wall segmentation. To detect strut-pixel candidates, both global intensity histograms and local intensity profiles of the raw polar images are used. Gaussian smoothing is applied followed by specified Prewitt compass filters to detect the trailing shadow of each strut. The shadow edge positions assist the strut-pixel candidates clustering. In the end, a 3D guide wire filter is applied to remove the guide wire from the detection results. For validation, two experts marked 6738 struts in 1021 frames in 10 IVOCT image sequences from a one-year follow up study. The struts were labeled as malapposed, apposed or covered together with the image quality (high, medium, low). The inter-observer agreement was 96%. The algorithm was validated for different combinations of strut status and image quality. Compared to the manual results, 93% of the struts were correctly detected by the new method. For each combination, the lowest accuracy was 88%, which shows the robustness towards different situations. The presented method can detect struts automatically regardless of the strut status or the image quality, which can be used for quantitative measurement, 3D reconstruction and visualization of the implanted stents.

  11. 3D element imaging using NSECT for the detection of renal cancer: a simulation study in MCNP

    NASA Astrophysics Data System (ADS)

    Viana, R. S.; Agasthya, G. A.; Yoriyaz, H.; Kapadia, A. J.

    2013-09-01

    This work describes a simulation study investigating the application of neutron stimulated emission computed tomography (NSECT) for noninvasive 3D imaging of renal cancer in vivo. Using MCNP5 simulations, we describe a method of diagnosing renal cancer in the body by mapping the 3D distribution of elements present in tumors using the NSECT technique. A human phantom containing the kidneys and other major organs was modeled in MCNP5. The element composition of each organ was based on values reported in literature. The two kidneys were modeled to contain elements reported in renal cell carcinoma (RCC) and healthy kidney tissue. Simulated NSECT scans were executed to determine the 3D element distribution of the phantom body. Elements specific to RCC and healthy kidney tissue were then analyzed to identify the locations of the diseased and healthy kidneys and generate tomographic images of the tumor. The extent of the RCC lesion inside the kidney was determined using 3D volume rendering. A similar procedure was used to generate images of each individual organ in the body. Six isotopes were studied in this work: ³²S, ¹²C, ²³Na, ¹⁴N, ³¹P and ³⁹K. The results demonstrated that through a single NSECT scan performed in vivo, it is possible to identify the location of the kidneys and other organs within the body, determine the extent of the tumor within the organ, and to quantify the differences between cancer and healthy tissue-related isotopes with p ≤ 0.05. All of the images demonstrated appropriate concentration changes between the organs, with some discrepancy observed in ³¹P, ³⁹K and ²³Na. The discrepancies were likely due to the low concentration of the elements in the tissue that were below the current detection sensitivity of the NSECT technique.

  12. Optometric Measurements Predict Performance but not Comfort on a Virtual Object Placement Task with a Stereoscopic 3D Display

    DTIC Science & Technology

    2014-09-16

    Keywords: virtual environment, depth perception. Distribution A: Approved for public release; distribution unlimited (88ABW cleared 9/9/2013). …precision placement of a virtual object in depth at the same location as a target object. Subjective discomfort was assessed using the Simulator Sickness

  13. Development and validation of a 64 channel front end ASIC for 3D directional detection for MIMAC

    NASA Astrophysics Data System (ADS)

    Richer, J. P.; Bourrion, O.; Bosson, G.; Guillaudin, O.; Mayet, F.; Santos, D.

    2011-11-01

    A front end ASIC has been designed to equip the μTPC prototype developed for the MIMAC project, which requires 3D reconstruction of low energy particle tracks in order to perform directional detection of galactic Dark Matter. Each ASIC is able to monitor 64 strips of pixels and provides the "Time Over Threshold" information for each of them. These 64 digital outputs, sampled at a rate of 50 MHz, can be transferred at 400 MHz over eight LVDS serial links. Eight ASICs were validated on a prototype with 2 × 256 strips of pixels.

  14. Image detection of inner wall surface of holes in metal sheets through polarization using a 3D TV monitor

    NASA Astrophysics Data System (ADS)

    Suzuki, Takamasa; Nakano, Katsunori; Muramatsu, Shogo; Oitate, Toshiro

    2012-11-01

    We propose an effective technique for optically detecting images of the inner wall surface of a hole (hereafter referred to as the hole-surface) using the polarization property of a 3D television (TV) monitor. The polarized light emitted by the TV monitor illuminates the hole-surfaces present in the test target placed on the screen of the monitor. When the polarizer placed in front of the camera lens is adjusted such that the camera captures a dark image for the transmitted light, only the highlighted hole-surfaces are visible in the captured image.

  15. Interpretation of Magnetic Anomalies in Salihli (Turkey) Geothermal Area Using 3-D Inversion and Edge Detection Techniques

    NASA Astrophysics Data System (ADS)

    Timur, Emre

    2016-04-01

    There are numerous geophysical methods used to investigate geothermal areas. The major purpose of this magnetic survey is to locate the boundaries of the active hydrothermal system in the south of the Gediz Graben in Salihli (Manisa, Turkey). The presence of the hydrothermal system had already been inferred from surface evidence of hydrothermal activity and from drillings. Firstly, 3-D prismatic models were investigated theoretically, and edge detection methods were used together with an iterative inversion method to define the boundaries and the parameters of the structure. In the first step of the application, it was necessary to convert the total field anomaly into a pseudo-gravity anomaly map. Then the geometric boundaries of the structures were determined by applying MATLAB-based software with 3 different edge detection algorithms. The exact locations of the structures were obtained by using these boundary coordinates as initial geometric parameters in the inversion process. In addition to these methods, reduction to the pole and horizontal gradient methods were applied to the data to obtain more information about the location and shape of the possible reservoir. As a result, the edge detection methods were found to be successful, both on theoretical data sets and in the field, for delineating the boundaries of the possible geothermal reservoir structure. The depth of the geothermal reservoir was determined to be 2.4 km from the 3-D inversion and 2.1 km from the power spectrum method.

  16. 3D photonic crystal-based biosensor functionalized with quantum dot-based aptamer for thrombin detection

    NASA Astrophysics Data System (ADS)

    Lim, Chae Young; Choi, Eunpyo; Park, Youngkyu; Park, Jungyul

    2013-05-01

    In this paper, we propose a new technique for protein detection based on the enhancement of quantum dot (Qdot) emission guided by 3D photonic crystal (PC) structures. The sensor is designed so that its emitted light is recovered when the chemical antibody (aptamer), conjugated with a guard DNA (g-DNA) labelled with a quencher (Black FQ), hybridizes with the target protein. In detail, we synthesize a Qdot-aptamer complex and immobilize it on the PC surface. Next, the Qdot-aptamer complex is hybridized with the g-DNA labelled with the quencher, which quenches the fluorescence intensity of the Qdot-aptamer. In the presence of the target protein (thrombin), the Qdot-aptamer complex preferentially forms a thrombin-aptamer complex; this releases the Black FQ-g-DNA and the quenched light intensity recovers to the original high Qdot intensity. The intensity recovery varies quantitatively with the concentration of the target protein. The proposed sensor shows much higher detection sensitivity than a general fluorescence detection scheme functionalized on a flat surface, because of the light-guiding effect of the 3D photonic crystal structures.

  17. A comparison of moving object detection methods for real-time moving object detection

    NASA Astrophysics Data System (ADS)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed across the globe for moving object detection, but it is very difficult to find one that works in all situations and with different types of video. The purpose of this paper is to evaluate existing moving object detection methods that can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time moving object detection, and most of the methods that do run in real time are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The work is based on an evaluation of these four moving object detection methods using two (2) different sets of cameras and two (2) different scenes. The methods have been implemented using MATLAB and the results are compared in terms of completeness of the detected objects, noise, sensitivity to lighting changes, processing time, etc. After comparison, it is observed that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which also implies that it can be implemented for real-time moving object detection.
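
    Of the four families compared above, the Gaussian mixture model is the one most readily available off the shelf. A short OpenCV sketch showing how such a baseline is typically run frame by frame (the parameter values are assumptions, not those used in the paper):

        import cv2

        def run_mog2(video_path, history=200, var_threshold=16):
            """Gaussian-mixture background subtraction over a video file."""
            cap = cv2.VideoCapture(video_path)
            subtractor = cv2.createBackgroundSubtractorMOG2(
                history=history, varThreshold=var_threshold, detectShadows=False)
            masks = []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                fg = subtractor.apply(frame)            # 0 background, 255 foreground
                fg = cv2.morphologyEx(
                    fg, cv2.MORPH_OPEN,
                    cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
                masks.append(fg)
            cap.release()
            return masks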

  18. Transforming clinical imaging and 3D data for virtual reality learning objects: HTML5 and mobile devices implementation.

    PubMed

    Trelease, Robert B; Nieder, Gary L

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android tablets. This article describes complementary methods for creating comparable, multiplatform VR learning objects in the new HTML5 standard format, circumventing platform-specific limitations imposed by the QuickTime VR multimedia file format. Multiple types or "dimensions" of anatomical information can be embedded in such learning objects, supporting different kinds of online learning applications, including interactive atlases, examination questions, and complex, multi-structure presentations. Such HTML5 VR learning objects are usable on new mobile devices that do not support QuickTime VR, as well as on personal computers. Furthermore, HTML5 VR learning objects can be embedded in "ebook" document files, supporting the development of new types of electronic textbooks on mobile devices that are increasingly popular and self-adopted for mobile learning.

  19. Target detect system in 3D using vision apply on plant reproduction by tissue culture

    NASA Astrophysics Data System (ADS)

    Vazquez Rueda, Martin G.; Hahn, Federico

    2001-03-01

    This paper presents preliminary results for a three-dimensional system that uses machine vision to manipulate plants in a tissue culture process. The system estimates the position of a plant in the work area: it first calculates the position and sends this information to the mechanical system, then recalculates the position and, if necessary, repositions the mechanical system, using a neural network to improve the localization of the plant. The system relies only on vision to sense position, with a control loop that uses a neural network to detect the target and position the mechanical system; the results are compared with those of an open-loop system.

  20. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    DOE PAGES

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. Thus, this information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  1. Detecting small anatomical change with 3D serial MR subtraction images

    NASA Astrophysics Data System (ADS)

    Holden, Mark; Denton, Erica R. E.; Jarosz, J. M.; Cox, T. C.; Studholme, Colin; Hawkes, David J.; Hill, Derek L.

    1999-05-01

    Spoiled gradient echo volume MR scans were obtained from 5 growth hormone (GH) patients and 6 normal controls. The patients were scanned before treatment and after 3 and 6 months of GH therapy. The controls were scanned at similar intervals. A calibration phantom was scanned on the same day as each subject. The phantom images were registered with a 9 degree of freedom algorithm to measure scaling errors due to changes in scanner calibration. The second and third images were each registered with a 6 degree of freedom algorithm to the first (baseline) image by maximizing normalized mutual information, and transformed, with and without scaling error correction, using sinc interpolation. Each registered and transformed image had the baseline image subtracted to generate a difference image. Two neuro-radiologists were trained to detect structural change with difference images containing synthetic misregistration and scale changes. They carried out a blinded assessment of anatomical change for the unregistered; aligned and subtracted; and scale corrected, aligned and subtracted images. The results show a significant improvement in the detection of structural change and inter-observer agreement when aligned and subtracted images were used instead of unregistered ones. The structural change corresponded to an increase in brain: CSF ratio.
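
    The registration criterion used here, normalized mutual information, can be written down compactly from the joint intensity histogram of the two images. A schematic NumPy sketch of the NMI measure (the bin count is an assumption, and this is only the similarity measure, not the registration algorithm itself):

        import numpy as np

        def normalized_mutual_information(img_a, img_b, bins=64):
            """NMI = (H(A) + H(B)) / H(A, B), from the joint intensity histogram."""
            a = np.asarray(img_a, dtype=float).ravel()
            b = np.asarray(img_b, dtype=float).ravel()
            joint, _, _ = np.histogram2d(a, b, bins=bins)
            p_ab = joint / joint.sum()
            p_a = p_ab.sum(axis=1)          # marginal of image A
            p_b = p_ab.sum(axis=0)          # marginal of image B
            def entropy(p):
                p = p[p > 0]
                return -np.sum(p * np.log(p))
            return (entropy(p_a) + entropy(p_b)) / entropy(p_ab.ravel())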

  2. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    SciTech Connect

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. Thus, this information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  3. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  4. 3D-Modeling of deformed halite hopper crystals: Object based image analysis and support vector machine, a first evaluation

    NASA Astrophysics Data System (ADS)

    Leitner, Christoph; Hofmann, Peter; Marschallinger, Robert

    2014-05-01

    Halite hopper crystals are thought to develop by displacive growth in unconsolidated mud (Gornitz & Schreiber, 1984). The Alpine Haselgebirge, but also e.g. the salt deposits of the Rhine graben (mined at the beginning of the 20th century), comprise hopper crystals with shapes of cuboids, parallelepipeds and rhombohedrons (Görgey, 1912). They evidently deformed under oriented stress, which earlier work attempted to reconstruct with respect to the sedimentary layering (Leitner et al., 2013). In the present work, deformed halite hopper crystals embedded in mudrock were reconstructed automatically. Object-based image analysis (OBIA) has previously been used successfully on 2D images in remote sensing; the present study is the first to use the method for the reconstruction of three-dimensional geological objects. First, a reference (gold standard) was created manually by redrawing the contours of the halite crystals on each HRXCT scanning slice. Then, for OBIA, the computer program eCognition was used and a rule set was developed for the automated reconstruction. The strength of OBIA was its ability to recognize all objects similar to halite hopper crystals and, in particular, to eliminate cracks. In a second step, all objects unsuitable for a structural deformation analysis (clusters, polyhalite-coated crystals and spherical halites) were dismissed using a support vector machine (SVM), which simultaneously and drastically reduced the number of halites: from 184 OBIA objects, 67 well-shaped crystals remained, close to the 52 pre-selected objects. To assess the accuracy of the automated reconstruction, the results before and after the SVM step were compared with the reference, i.e. the gold standard, and state-of-the-art per-scene statistics were extended to per-object statistics. Görgey R (1912) Zur Kenntnis der Kalisalzlager von Wittelsheim im Ober-Elsaß. Tschermaks Mineral Petrogr Mitt 31:339-468. Gornitz VM, Schreiber BC (1981) Displacive halite hoppers from the dead sea

  5. Generating object proposals for improved object detection in aerial images

    NASA Astrophysics Data System (ADS)

    Sommer, Lars W.; Schuchert, Tobias; Beyerer, Jürgen

    2016-10-01

    Screening of aerial images covering large areas is important for many applications such as surveillance, tracing or rescue tasks. To reduce the workload of image analysts, an automatic detection of candidate objects is required. In general, object detection is performed by applying classifiers or a cascade of classifiers within a sliding window algorithm. However, the huge number of windows to classify, especially in case of multiple object scales, makes these approaches computationally expensive. To overcome this challenge, we reduce the number of candidate windows by generating so called object proposals. Object proposals are a set of candidate regions in an image that are likely to contain an object. We apply the Selective Search approach that has been broadly used as proposals method for detectors like R-CNN or Fast R-CNN. Therefore, a set of small regions is generated by initial segmentation followed by hierarchical grouping of the initial regions to generate proposals at different scales. To reduce the computational costs of the original approach, which consists of 80 combinations of segmentation settings and grouping strategies, we only apply the most appropriate combination. Therefore, we analyze the impact of varying segmentation settings, different merging strategies, and various colour spaces by calculating the recall with regard to the number of object proposals and the intersection over union between generated proposals and ground truth annotations. As aerial images differ considerably from datasets that are typically used for exploring object proposals methods, in particular in object size and the image fraction occupied by an object, we further adapt the Selective Search algorithm to aerial images by replacing the random order of generated proposals by a weighted order based on the object proposal size and integrate a termination criterion for the merging strategies. Finally, the adapted approach is compared to the original Selective Search algorithm

  6. Detection of patient's bed statuses in 3D using a Microsoft Kinect.

    PubMed

    Li, Yun; Berkowitz, Lyle; Noskin, Gary; Mehrotra, Sanjay

    2014-01-01

    Patients spend the vast majority of their hospital stay in an unmonitored bed where various mobility factors can impact patient safety and quality. Specifically, bed positioning and a patient's related mobility in that bed can have a profound impact on risks such as pneumonias, blood clots, bed ulcers and falls. This issue has been exacerbated as the nurse-per-bed (NPB) ratio has decreased in recent years. To help assess these risks, it is critical to monitor a hospital bed's positional status (BPS). Two bed positional statuses, bed height (BH) and bed chair angle (BCA), are of critical interests for bed monitoring. In this paper, we develop a bed positional status detection system using a single Microsoft Kinect. Experimental results show that we are able to achieve 94.5% and 93.0% overall accuracy of the estimated BCA and BH in a simulated patient's room environment.

  7. Investigation on viewing direction dependent detectability in a reconstructed 3D volume for a cone beam CT system

    NASA Astrophysics Data System (ADS)

    Park, Junhan; Lee, Changwoo; Baek, Jongduk

    2015-03-01

    In medical imaging systems, several factors (e.g., reconstruction algorithm, noise structure, target size, contrast, etc.) affect the detection performance and need to be considered for object detection. In a cone beam CT system, FDK reconstruction produces different noise structures in axial and coronal slices, and thus we analyzed the direction-dependent detectability of objects using the detection SNR of a Channelized Hotelling observer. To calculate the detection SNR, a difference-of-Gaussian channel model with 10 channels was implemented, and 20 sphere objects with different radii (i.e., 0.25 mm to 5 mm, equally spaced by 0.25 mm), reconstructed by the FDK algorithm, were used as object templates. Covariance matrices in the axial and coronal directions were estimated from 3000 reconstructed noise volumes, and then the SNR ratio between the axial and coronal directions was calculated; the corresponding 2D noise power spectrum was also calculated. The results show that as the object size increases, the SNR ratio decreases, falling below 1 when the object radius exceeds 2.5 mm. This is because the axial (coronal) noise power is higher in the high (low) frequency band, and therefore the detectability of a small (large) object is higher in coronal (axial) images. Our results indicate that it is more beneficial to use coronal slices to improve the detectability of a small object in a cone beam CT system.

  8. CAD scheme for detection of intracranial aneurysms in MRA based on 3D analysis of vessel skeletons and enhanced aneurysms

    NASA Astrophysics Data System (ADS)

    Arimura, Hidetaka; Li, Qiang; Korogi, Yukunori; Hirai, Toshinori; Yamashita, Yasuyuki; Katsuragawa, Shigehiko; Ikeda, Ryuji; Doi, Kunio

    2005-04-01

    We have developed a computer-aided diagnostic (CAD) scheme for detection of unruptured intracranial aneurysms in magnetic resonance angiography (MRA) based on findings of short branches in vessel skeletons, and a three-dimensional (3D) selective enhancement filter for dots (aneurysms). Fifty-three cases with 61 unruptured aneurysms and 62 non-aneurysm cases were tested in this study. The isotropic 3D MRA images with 400 x 400 x 128 voxels (a voxel size of 0.5 mm) were processed by use of the dot enhancement filter. The initial candidates were identified not only on the dot-enhanced images by use of a multiple gray-level thresholding technique, but also on the vessel skeletons by finding short branches on parent skeletons, which can indicate a high likelihood of small aneurysms. All candidates were classified into four categories of candidates according to effective diameter and local structure of the vessel skeleton. In each category, a number of false positives were removed by use of two rule-based schemes and by linear discriminant analysis on localized image features related to gray level and morphology. Our CAD scheme achieved a sensitivity of 97% with 5.0 false positives per patient by use of a leave-one-out-by-patient test method. This CAD system may be useful in assisting radiologists in the detection of small intracranial aneurysms as well as medium-size aneurysms in MRA.

  9. High-efficiency microarray of 3-D carbon MEMS electrodes for pathogen detection systems

    NASA Astrophysics Data System (ADS)

    Kassegne, Sam; Wondimu, Berhanu; Majzoub, Mohammad; Shin, Jiae

    2008-11-01

    Molecular diagnostic applications for pathogen detection require the ability to separate pathogens such as bacteria and viruses from a biological sample of blood or saliva. Over the past several years, conventional two-dimensional active microarrays have been used with success for the manipulation of biomolecules including DNA. However, they have a major drawback: an inability to process the relatively large-volume samples useful in infectious disease diagnostics applications. This paper presents an active microarray of three-dimensional carbon electrodes that exploits electrokinetic forces for transport, accumulation, and hybridization of charged biomolecules, with the added advantage of large-volume capability. Tall 3-dimensional carbon microelectrode posts are fabricated using C-MEMS (Carbon MEMS) technology, which is emerging as a very exciting research area since carbon has fascinating physical, chemical, mechanical and electrical properties in addition to its low cost. The chip fabricated using C-MEMS technology is packaged, and its efficiency in the separation and accumulation of charged particles is established by manipulating negatively charged 2 μm polycarboxylate beads in 50 mM histidine buffer.

  10. Applications of neural networks to landmark detection in 3-D surface data

    NASA Astrophysics Data System (ADS)

    Arndt, Craig M.

    1992-09-01

    The problem of identifying key landmarks in 3-dimensional surface data is of considerable interest in solving a number of difficult real-world tasks, including object recognition and image processing. The specific problem that we address in this research is to identify the specific landmarks (anatomical) in human surface data. This is a complex task, currently performed visually by an expert human operator. In order to replace these human operators and increase reliability of the data acquisition, we need to develop a computer algorithm which will utilize the interrelations between the 3-dimensional data to identify the landmarks of interest. The current presentation describes a method for designing, implementing, training, and testing a custom architecture neural network which will perform the landmark identification task. We discuss the performance of the net in relationship to human performance on the same task and how this net has been integrated with other AI and traditional programming methods to produce a powerful analysis tool for computer anthropometry.

  11. 3D multi-object segmentation of cardiac MSCT imaging by using a multi-agent approach.

    PubMed

    Fleureau, Julien; Garreau, Mireille; Boulmier, Dominique; Hernández, Alfredo

    2007-01-01

    We propose a new technique for general purpose, semi-interactive and multi-object segmentation in N-dimensional images, applied to the extraction of cardiac structures in MultiSlice Computed Tomography (MSCT) imaging. The proposed approach makes use of a multi-agent scheme combined with a supervised classification methodology allowing the introduction of a priori information and presenting fast computing times. The multi-agent system is organised around a communicating agent which manages a population of situated agents which segment the image through cooperative and competitive interactions. The proposed technique has been tested on several patient data sets. Some typical results are finally presented and discussed.

  12. 3D Multi-Object Segmentation of Cardiac MSCT Imaging by using a Multi-Agent Approach

    PubMed Central

    Fleureau, Julien; Garreau, Mireille; Boulmier, Dominique; Hernandez, Alfredo

    2007-01-01

    We propose a new technique for general purpose, semi-interactive and multi-object segmentation in N-dimensional images, applied to the extraction of cardiac structures in MultiSlice Computed Tomography (MSCT) imaging. The proposed approach makes use of a multi-agent scheme combined with a supervised classification methodology allowing the introduction of a priori information and presenting fast computing times. The multi-agent system is organised around a communicating agent which manages a population of situated agents which segment the image through cooperative and competitive interactions. The proposed technique has been tested on several patient data sets. Some typical results are finally presented and discussed. PMID:18003382

  13. Computer-aided laser-optoelectronic OPTEL 3D measurement systems of complex-shaped object geometry

    NASA Astrophysics Data System (ADS)

    Galiulin, Ravil M.; Galiulin, Rishat M.; Bakirov, J. M.; Bogdanov, D. R.; Shulupin, C. O.; Khamitov, D. H.; Khabibullin, M. G.; Pavlov, A. F.; Ryabov, M. S.; Yamaliev, K. N.

    1996-03-01

    Technical characteristics, advantages and applications of the automated optoelectronic measuring systems designed at the Regional Interuniversity Optoelectronic Systems Laboratory ('OPTEL') of Ufa State Aviation Technical University are presented. The suggested range of systems is the result of long-term scientific research, design work and industrial introduction. The systems can be applied in industrial development and research for high-precision measurement of the geometrical parameters of objects in aerospace, robotics and other fields where fast, non-contact measurement of complex-shaped objects made of various materials, including brittle and plastic articles, is required.

  14. A 3-D ultrasound imaging robotic system to detect and quantify lower limb arterial stenoses: in vivo feasibility.

    PubMed

    Janvier, Marie-Ange; Merouche, Samir; Allard, Louise; Soulez, Gilles; Cloutier, Guy

    2014-01-01

    The degree of stenosis is the most common criterion used to assess the severity of lower limb peripheral arterial disease. Two-dimensional ultrasound (US) imaging is the first-line diagnostic method for investigating lesions, but it cannot render a 3-D map of the entire lower limb vascular tree required for therapy planning. We propose a prototype 3-D US imaging robotic system that can potentially reconstruct arteries from the iliac in the lower abdomen down to the popliteal behind the knee. A realistic multi-modal vascular phantom was first conceptualized to evaluate the system's performance. Geometric accuracies were assessed in surface reconstruction and cross-sectional area in comparison to computed tomography angiography (CTA). A mean surface map error of 0.55 mm was recorded for 3-D US vessel representations, and cross-sectional lumen areas were congruent with CTA geometry. In the phantom study, stenotic lesions were properly localized and severe stenoses up to 98.3% were evaluated with -3.6 to 11.8% errors. The feasibility of the in vivo system in reconstructing the normal femoral artery segment of a volunteer and detecting stenoses on a femoral segment of a patient was also investigated and compared with that of CTA. Together, these results encourage future developments to increase the robot's potential to adequately represent lower limb vessels and clinically evaluate stenotic lesions for therapy planning and recurrent non-invasive and non-ionizing follow-up examinations.
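
    The severity figures quoted above follow from the standard area-reduction definition of the degree of stenosis. A small sketch of that calculation from cross-sectional lumen areas extracted along the reconstructed vessel (variable names are illustrative; diameter-based definitions also exist and the study may report either):

        import numpy as np

        def degree_of_stenosis(lumen_areas_mm2, reference_area_mm2):
            """Percent area reduction at the tightest point of the segment."""
            areas = np.asarray(lumen_areas_mm2, dtype=float)
            return 100.0 * (1.0 - areas.min() / reference_area_mm2)

        # Example: a 0.05 mm^2 residual lumen in a 3.0 mm^2 reference segment
        # gives a stenosis of about 98.3%, the most severe case reported above.
        print(degree_of_stenosis([2.9, 1.4, 0.05, 2.2], 3.0))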

  15. Detection of latent fingerprints using high-resolution 3D confocal microscopy in non-planar acquisition scenarios

    NASA Astrophysics Data System (ADS)

    Kirst, Stefan; Vielhauer, Claus

    2015-03-01

    In digitized forensics the support of investigators in any manner is one of the main goals. Using conservative lifting methods, the detection of traces is done manually. For non-destructive contactless methods, the necessity for detecting traces is obvious for further biometric analysis. High-resolution 3D confocal laser scanning microscopy (CLSM) grants the possibility of a detection-by-segmentation approach with improved detection results. Optimal scan results with CLSM are achieved on surfaces orthogonal to the sensor, which is not always possible due to environmental circumstances or the surface's shape. This introduces additional noise, outliers and a lack of contrast, making the detection of traces even harder. Prior work showed the possibility of determining angle-independent classification models for the detection of latent fingerprints (LFP). Enhancing this approach, we introduce a larger feature space containing a variety of statistical, roughness, color, edge-directivity, histogram, Gabor, gradient and Tamura features based on raw data and gray-level co-occurrence matrices (GLCM) using high-resolution data. Our test set consists of eight different surfaces for the detection of LFP at four different acquisition angles, with a total of 1920 single scans. For each surface and angles in steps of 10, we capture samples from five donors to introduce variance through a variety of sweat compositions and application influences such as pressure or differences in ridge thickness. By analyzing the present test set with our approach, we intend to determine angle- and substrate-dependent classification models to determine optimal surface-specific acquisition setups, and also classification models for a general detection purpose for both angles and substrates. The results on overall models with classification rates up to 75.15% (kappa 0.50) already show a positive tendency regarding the usability of the proposed methods for LFP detection on varying surfaces in non

  16. a Uav Based 3-D Positioning Framework for Detecting Locations of Buried Persons in Collapsed Disaster Area

    NASA Astrophysics Data System (ADS)

    Moon, H.; Kim, C.; Lee, W.

    2016-06-01

    Indoor positioning theories based on wireless communication techniques such as Wi-Fi, beacons, UWB and Bluetooth have been widely developed across the world. These techniques mainly focus on detecting the spatial location of customers using fixed wireless APs and unique tags in indoor environments. Existing equipment and techniques that use ultrasound or sound to detect buried persons and determine their survival status can cause secondary damage to the collapsed debris and endanger rescuers, and checking for buried persons can take considerable time. Collapsed disaster sites should be treated as both outdoor and indoor environments, because empty spaces exist under the collapsed debris. To detect buried persons in these empty spaces, we collect Wi-Fi signals from their mobile phones. The Wi-Fi signal basically yields a 2-D location; however, since buried persons also have a Z value corresponding to burial depth, we additionally collect barometer sensor data from their mobile phones to measure Z values under the prevailing weather conditions. For quick access to the disaster area, a drone (UAV: unmanned aerial vehicle) equipped with a wireless detection module was introduced. Using this framework, this study aims to provide rescuers with effective rescue information by calculating the 3-D locations of buried persons based on the fusion of wireless and barometer sensor data.
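
    The burial depth (Z value) mentioned above is usually derived from the phone's barometric pressure via the standard hypsometric relation, referenced to a pressure reading at a known height near the site. A hedged sketch of that conversion (the constants are standard-atmosphere values; this is not the authors' calibration procedure):

        import math

        def pressure_to_relative_height(p_hpa, p_ref_hpa, temp_c=15.0):
            """Height difference (m) of a buried phone relative to a reference
            barometer, from the hypsometric equation."""
            R = 287.05      # specific gas constant of dry air, J/(kg*K)
            g = 9.80665     # gravitational acceleration, m/s^2
            t_k = temp_c + 273.15
            return (R * t_k / g) * math.log(p_ref_hpa / p_hpa)

        # A phone reading 1013.60 hPa against a surface reference of 1013.25 hPa
        # returns about -2.9 m, i.e. roughly 2.9 m below the reference sensor.
        print(pressure_to_relative_height(1013.60, 1013.25))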

  17. A Model-Based 3D Template Matching Technique for Pose Acquisition of an Uncooperative Space Object

    PubMed Central

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2015-01-01

    This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is presented. PMID:25785309

  18. A model-based 3D template matching technique for pose acquisition of an uncooperative space object.

    PubMed

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2015-03-16

    This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced, aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable for initializing the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is given.
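
    To make the template-matching idea concrete, the minimal Python sketch below scores a set of candidate pose templates against a point cloud by the mean nearest-neighbour distance and keeps the best-fitting pose. The box-shaped model, the yaw-only pose grid and the noise level are placeholder assumptions; they do not reproduce the on-line template database or the LIDAR model of the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        def score_template(template_pts, scene_pts):
            """Mean nearest-neighbour distance from template points to the scene cloud (lower is better)."""
            dists, _ = cKDTree(scene_pts).query(template_pts)
            return dists.mean()

        def acquire_pose(model_pts, scene_pts, yaw_candidates_deg):
            """Brute-force pose acquisition over a 1-D yaw grid (a stand-in for a full template database)."""
            centred = scene_pts - scene_pts.mean(axis=0)          # crude translation initialisation
            best = None
            for yaw in np.deg2rad(yaw_candidates_deg):
                c, s = np.cos(yaw), np.sin(yaw)
                rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
                err = score_template(model_pts @ rot.T, centred)  # rotate the model into this candidate pose
                if best is None or err < best[1]:
                    best = (np.degrees(yaw), err)
            return best  # (yaw in degrees, residual error)

        # hypothetical data: an elongated box-like target and a noisy, rotated observation of it
        rng = np.random.default_rng(0)
        model = np.column_stack([rng.uniform(-2.0, 2.0, 500),
                                 rng.uniform(-1.0, 1.0, 500),
                                 rng.uniform(-0.5, 0.5, 500)])
        true_rot = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])   # 90 degree yaw
        scene = model @ true_rot.T + rng.normal(scale=0.02, size=model.shape)
        print(acquire_pose(model, scene, range(0, 360, 10)))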

  19. Automated detection of retinal cell nuclei in 3D micro-CT images of zebrafish using support vector machine classification

    NASA Astrophysics Data System (ADS)

    Ding, Yifu; Tavolara, Thomas; Cheng, Keith

    2016-03-01

    Our group is developing a method to examine biological specimens in cellular detail using synchrotron micro-CT. The method can acquire 3D images of tissue at micrometer-scale resolutions, allowing individual cell types to be visualized in the context of the entire specimen. For model organism research, this tool will enable the rapid characterization of tissue architecture and cellular morphology from every organ system. This characterization is critical for proposed and ongoing "phenome" projects that aim to phenotype whole-organism mutants and diseased tissues from different organisms, including humans. With the envisioned collection of hundreds to thousands of images for a phenome project, it is important to develop quantitative image analysis tools for the automated scoring of organism phenotypes across organ systems. Here we present a first step towards that goal, demonstrating the use of support vector machines (SVMs) in detecting retinal cell nuclei in 3D images of wild-type zebrafish. In addition, we apply the SVM classifier to a mutant zebrafish to examine whether SVMs can be used to capture phenotypic differences in these images. The long-term goal of this work is to allow cellular and tissue morphology to be characterized quantitatively for many organ systems, at the level of the whole organism.
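
    The classification step described above can be sketched with scikit-learn: small 3D intensity patches are flattened into feature vectors and an SVM separates nuclei from background. The patch size, the synthetic blob-like "nuclei" and the labels below are illustrative assumptions, not the descriptors or data used in the study.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)

        def make_patches(n, is_nucleus):
            """Hypothetical 7x7x7 micro-CT patches; nuclei are modelled as brighter central blobs."""
            patches = rng.normal(loc=0.0, scale=1.0, size=(n, 7, 7, 7))
            if is_nucleus:
                patches[:, 2:5, 2:5, 2:5] += 2.0   # brighter centre mimics a stained nucleus
            return patches.reshape(n, -1)

        X = np.vstack([make_patches(200, True), make_patches(200, False)])
        y = np.array([1] * 200 + [0] * 200)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

        clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))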

  20. Fabrication of photometric dip-strip test systems for detection of beta(1-->3)-D-glucan using crude beta(1-->3)-D-glucanase from sprouts of Vigna aconitifolia.

    PubMed

    Bagal-Kestwal, Dipali; Kestwal, Rakesh Mohan; Chiang, Been Huang

    2009-04-15

    Efforts have been made to fabricate enzyme dip-strip test systems for detecting beta(1-->3)-D-glucan. Beta(1-->3)-D-glucanase from sprouts of Vigna aconitifolia (commonly known as moth bean; 8-day-old sprouts) with high specific activity (244 U mg(-1)) was co-entrapped with glucose oxidase (GOD) in different combinations of composite polymer matrices of agarose (A), gelatin (G), polyvinyl alcohol (PVA) and corn flour (CF). The enzyme-immobilized membranes were checked for immobilization yield, pH and temperature optima, swelling index, thermal, operational and storage stability, and morphology by scanning electron microscopy. The 3% A-2% CF-8% G composite matrix was chosen for fabricating enzyme dip-strip systems for the detection of beta-glucan by spectrophotometry using the DNSA method (System-I) and the AAP method (System-II). Dip-strip Systems I and II showed linear dynamic ranges for glucan concentrations of 100 to 500 microg mL(-1) and 10 to 50 microg mL(-1), with contact times of 10 and 5 min, respectively. The LODs of Systems I and II were found to be 65 microg mL(-1) and 10 microg mL(-1), respectively. Hence, System-II was employed for analyzing beta(1-->3)-D-glucan contents in various pharmaceutical samples. It was found that, without any sample pre-treatment, the percent error of detection was less than 5%.

  1. Object-based 3D geomodel with multiple constraints for early Pliocene fan delta in the south of Lake Albert Basin, Uganda

    NASA Astrophysics Data System (ADS)

    Wei, Xu; Lei, Fang; Xinye, Zhang; Pengfei, Wang; Xiaoli, Yang; Xipu, Yang; Jun, Liu

    2017-01-01

    The early Pliocene fan delta complex developed in the south of the Lake Albert Basin, which is located at the northern end of the western branch of the East African Rift System. The stratigraphy of this succession is composed of distributary channels, overbank deposits, mouthbars and lacustrine shales. Limited by the poor seismic quality and few wells, it is challenging to delineate the distribution area and patterns of the reservoir sands. Sedimentary forward simulation and basin analogues were applied to analyze the spatial distribution of the facies configuration, and a conceptual sedimentary model was then constructed by combining core, heavy mineral and palynology evidence. A 3D geological model of a 120 m thick stratigraphic succession was built using well logs and seismic surfaces based on the established sedimentary model. The facies modeling followed a hierarchical object-based approach conditioned to multiple trend constraints such as channel intensity, channel azimuth and channel width. Lacustrine shales were modeled as the background facies and were then eroded in turn by distributary channels, overbank deposits and mouthbars, respectively. At the same time, a body facies parameter was created to indicate the connectivity of the reservoir sands. The resultant 3D facies distributions showed that the distributary channels flowed from the eastern bounding fault to the western flank, overbank deposits adhered to the fringes of the channels, and mouthbars were located at the ends of the channels. Furthermore, porosity and permeability were modeled using sequential Gaussian simulation (SGS) honoring core observations and petrophysical interpretation results. Although the poor seismic data do not provide enough information on the fan delta sand distribution, a truly representative 3D geomodel can still be achieved. This paper highlights the integration of various data and the comprehensive steps of building a consistent, representative 3D geocellular fan delta model used for numerical simulation studies and field

  2. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, for example fluid dynamics in microfluidic devices, bacterial taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% with a false-detection rate of only 1%.
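
    A stripped-down version of the ring-finding step can be sketched with scikit-image's circle Hough transform; the synthetic image below stands in for a defocused-particle diffraction ring, and the binarisation threshold and radius search range are arbitrary assumptions rather than calibrated values from the paper.

        import numpy as np
        from skimage.draw import circle_perimeter
        from skimage.transform import hough_circle, hough_circle_peaks

        # synthetic frame with one noisy ring (placeholder for an out-of-focus fluorescent particle)
        img = np.zeros((128, 128), dtype=float)
        rr, cc = circle_perimeter(60, 70, 25)
        img[rr, cc] = 1.0
        img += np.random.default_rng(0).normal(scale=0.05, size=img.shape)

        radii = np.arange(15, 40)                        # assumed search range for ring radii
        accumulator = hough_circle(img > 0.5, radii)     # vote in (centre, radius) parameter space
        _, cx, cy, rad = hough_circle_peaks(accumulator, radii, total_num_peaks=1)
        print("detected centre (row, col):", (cy[0], cx[0]), "radius:", rad[0])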

  3. A 3D microfluidic chip for electrochemical detection of hydrolysed nucleic bases by a modified glassy carbon electrode.

    PubMed

    Vlachova, Jana; Tmejova, Katerina; Kopel, Pavel; Korabik, Maria; Zitka, Jan; Hynek, David; Kynicky, Jindrich; Adam, Vojtech; Kizek, Rene

    2015-01-22

    Modification of carbon materials, especially graphene-based materials, has wide applications in electrochemical detection, such as electrochemical lab-on-chip devices. A glassy carbon electrode (GCE) modified with chemically altered graphene oxide was used as the working electrode (glassy carbon modified by graphene oxide with sulphur-containing compounds and Nafion) for the detection of nucleobases in hydrolysed samples (HCl pH = 2.9, 100 °C, 1 h, neutralization by NaOH). It was found that the modification, especially with trithiocyanuric acid, increased the sensitivity of detection in comparison with the pure GCE. All processes were finally implemented in a microfluidic chip formed with a 3D printer by fused deposition modelling technology. As the material for chip fabrication, acrylonitrile butadiene styrene was chosen because of its mechanical and chemical stability. The chip contained one chamber for the hydrolysis of the nucleic acid and another for the electrochemical detection by the modified GCE. This chamber was fabricated to allow for replacement of the GCE.

  4. A 3D Microfluidic Chip for Electrochemical Detection of Hydrolysed Nucleic Bases by a Modified Glassy Carbon Electrode

    PubMed Central

    Vlachova, Jana; Tmejova, Katerina; Kopel, Pavel; Korabik, Maria; Zitka, Jan; Hynek, David; Kynicky, Jindrich; Adam, Vojtech; Kizek, Rene

    2015-01-01

    Modification of carbon materials, especially graphene-based materials, has wide applications in electrochemical detection, such as electrochemical lab-on-chip devices. A glassy carbon electrode (GCE) modified with chemically altered graphene oxide was used as the working electrode (glassy carbon modified by graphene oxide with sulphur-containing compounds and Nafion) for the detection of nucleobases in hydrolysed samples (HCl pH = 2.9, 100 °C, 1 h, neutralization by NaOH). It was found that the modification, especially with trithiocyanuric acid, increased the sensitivity of detection in comparison with the pure GCE. All processes were finally implemented in a microfluidic chip formed with a 3D printer by fused deposition modelling technology. As the material for chip fabrication, acrylonitrile butadiene styrene was chosen because of its mechanical and chemical stability. The chip contained one chamber for the hydrolysis of the nucleic acid and another for the electrochemical detection by the modified GCE. This chamber was fabricated to allow for replacement of the GCE. PMID:25621613

  5. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    PubMed Central

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, for example fluid dynamics in microfluidic devices, bacterial taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% with a false-detection rate of only 1%. PMID:26329642

  6. Detectability of Oort Cloud Objects Using Kepler

    NASA Astrophysics Data System (ADS)

    Ofek, Eran O.; Nakar, Ehud

    2010-03-01

    The size distribution and total mass of objects in the Oort Cloud have important implications to the theory of planet formation, including the properties of, and the processes taking place in the early solar system. We discuss the potential of space missions, such as Kepler and CoRoT, designed to discover transiting exoplanets, to detect Oort Cloud, Kuiper Belt, and main belt objects by occultations of background stars. Relying on published dynamical estimates of the content of the Oort Cloud, we find that Kepler's main program is expected to detect between 0 and ~100 occultation events by deca-kilometer-sized Oort Cloud objects. The occultation rate depends on the mass of the Oort Cloud, the distance to its "inner edge," and the size distribution of its objects. In contrast, Kepler is unlikely to find occultations by Kuiper Belt or main belt asteroids, mainly due to the fact that it is observing a high ecliptic latitude field. Occultations by solar system objects will appear as a photometric deviation in a single measurement, implying that the information regarding the timescale and light-curve shape of each event is lost. We present statistical methods that have the potential to verify the authenticity of occultation events by solar system objects, to estimate the distance to the occulting population, and to constrain their size distribution. Our results are useful for planning of future space-based exoplanet searches in a way that will maximize the probability of detecting solar system objects, without hampering the main science goals.

  7. A 3D clustering approach for point clouds to detect and quantify changes at a rock glacier front

    NASA Astrophysics Data System (ADS)

    Micheletti, Natan; Tonini, Marj; Lane, Stuart N.

    2016-04-01

    Terrestrial Laser Scanners (TLS) are extensively used in geomorphology to remotely sense landforms and surfaces of any type and to derive digital elevation models (DEMs). Modern devices are able to collect many millions of points, so that working on the resulting dataset is often troublesome in terms of computational effort. Indeed, it is not unusual that raw point clouds are filtered prior to DEM creation, so that only a subset of points is retained and the interpolation process becomes less of a burden. Whilst this procedure is in many cases necessary, it entails a considerable loss of valuable information. First, and even without eliminating points, the common interpolation of points to a regular grid causes a loss of potentially useful detail. Second, it inevitably causes the transition from 3D information to only 2.5D data, where each (x,y) pair must have a unique z-value. Vector-based DEMs (e.g. triangulated irregular networks) partially mitigate these issues, but still require a set of parameters to be set and impose a considerable burden in terms of calculation and storage. For the reasons above, being able to perform geomorphological research directly on point clouds would be profitable. Here, we propose an approach to identify erosion and deposition patterns on a very active rock glacier front in the Swiss Alps to monitor sediment dynamics. The general aim is to set up a semiautomatic method to isolate mass movements using 3D-feature identification directly from LiDAR data. An ultra-long range LiDAR RIEGL VZ-6000 scanner was employed to acquire point clouds during three consecutive summers. In order to isolate single clusters of erosion and deposition we applied the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, previously successfully employed by Tonini and Abellan (2014) in a similar case for rockfall detection. DBSCAN requires two input parameters, strongly influencing the number, shape and size of the detected clusters: the minimum number of
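
    The clustering step can be illustrated with scikit-learn's DBSCAN applied directly to the 3D points flagged as changed between two epochs; the synthetic points and the eps/min_samples values below are placeholder assumptions, not the parameters calibrated in the study.

        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(2)
        # hypothetical "change" points: two compact erosion/deposition patches plus scattered noise
        patch_a = rng.normal(loc=[2.0, 5.0, 0.3], scale=0.2, size=(300, 3))
        patch_b = rng.normal(loc=[8.0, 1.0, -0.4], scale=0.3, size=(200, 3))
        outliers = rng.uniform(low=[0.0, 0.0, -1.0], high=[10.0, 10.0, 1.0], size=(100, 3))
        points = np.vstack([patch_a, patch_b, outliers])

        labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)   # label -1 marks noise
        for lab in sorted(set(labels)):
            name = "noise" if lab == -1 else f"cluster {lab}"
            print(name, ":", int(np.sum(labels == lab)), "points")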

  8. About Non-Line-Of-Sight Satellite Detection and Exclusion in a 3D Map-Aided Localization Algorithm

    PubMed Central

    Peyraud, Sébastien; Bétaille, David; Renault, Stéphane; Ortiz, Miguel; Mougel, Florian; Meizel, Dominique; Peyret, François

    2013-01-01

    Reliable GPS positioning in city environments is a key issue: signals are prone to multipath, and satellite geometry is poor in many streets. Using a 3D urban model to forecast satellite visibility in urban contexts in order to improve GPS localization is the main topic of the present article. The core of this method is the processing of a virtual image that detects and eliminates possibly faulty measurements. This image is generated using the position estimated a priori by the navigation process itself, under road constraints. This position is then updated by measurements to line-of-sight satellites only. This closed-loop real-time processing has shown promising first full-scale test results. PMID:23344379
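
    A much-simplified version of the satellite exclusion test can be written as a comparison between each satellite's elevation and the building-skyline elevation along its azimuth, as seen from the a-priori position; the skyline profile and satellite list below are invented for illustration and stand in for the virtual image rendered from the 3D city model.

        # hypothetical skyline: building elevation mask (degrees) sampled every 10 degrees of azimuth
        skyline_deg = [35, 40, 42, 38, 20, 10, 5, 5, 8, 15, 25, 30,
                       45, 50, 48, 40, 30, 22, 18, 12, 10, 15, 20, 28,
                       33, 36, 38, 40, 41, 39, 37, 36, 35, 34, 34, 35]

        def is_line_of_sight(azimuth_deg, elevation_deg):
            """Keep a satellite only if it clears the building mask along its azimuth."""
            mask = skyline_deg[int(azimuth_deg % 360) // 10]
            return elevation_deg > mask

        # hypothetical satellites as (PRN, azimuth in degrees, elevation in degrees)
        satellites = [(5, 42.0, 55.0), (12, 130.0, 25.0), (17, 251.0, 60.0), (23, 310.0, 18.0)]
        kept = [prn for prn, az, el in satellites if is_line_of_sight(az, el)]
        print("satellites used in the position update:", kept)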

  9. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, namely the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. The traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can therefore be added, including PDAs, smart phones, Tablet PCs, portable gaming consoles, and Pocket PCs.

  10. Multimodal photoacoustic and optical coherence tomography scanner using an all optical detection scheme for 3D morphological skin imaging.

    PubMed

    Zhang, Edward Z; Povazay, Boris; Laufer, Jan; Alex, Aneesh; Hofer, Bernd; Pedley, Barbara; Glittenberg, Carl; Treeby, Bradley; Cox, Ben; Beard, Paul; Drexler, Wolfgang

    2011-08-01

    A noninvasive, multimodal photoacoustic and optical coherence tomography (PAT/OCT) scanner for three-dimensional (3D) in vivo skin imaging is described. The system employs an integrated, all-optical detection scheme for both modalities in backward mode utilizing a shared 2D optical scanner with a field-of-view of ~13 × 13 mm(2). The photoacoustic waves were detected using a Fabry Perot polymer film ultrasound sensor placed on the surface of the skin. The sensor is transparent in the spectral range 590-1200 nm. This permits the photoacoustic excitation beam (670-680 nm) and the OCT probe beam (1050 nm) to be transmitted through the sensor head and into the underlying tissue, thus providing a backward-mode imaging configuration. The respective OCT and PAT axial resolutions were 8 and 20 µm and the lateral resolutions were 18 and 50-100 µm. The system provides greater penetration depth than previous combined PA/OCT devices due to the longer wavelength of the OCT beam (1050 nm rather than 829-870 nm) and by operating in the tomographic rather than the optical-resolution mode of photoacoustic imaging. Three-dimensional in vivo images of the vasculature and the surrounding tissue micro-morphology in murine and human skin were acquired. These studies demonstrated the complementary contrast and tissue information provided by each modality for high-resolution 3D imaging of vascular structures to depths of up to 5 mm. Potential applications include characterizing skin conditions such as tumors, vascular lesions, soft tissue damage such as burns and wounds, inflammatory conditions such as dermatitis and other superficial tissue abnormalities.

  11. Breast cancers detected in only one of two arms of a tomosynthesis (3D-mammography) population screening trial (STORM-2).

    PubMed

    Bernardi, Daniela; Houssami, Nehmat

    2017-04-01

    The prospective 'screening with tomosynthesis or standard mammography-2 (STORM-2)' trial compared mammography screen-reading strategies and showed that both integrated 2D/3D-mammography and 2D-synthetic/3D-mammography detected significantly more breast cancers than 2D-mammography alone. This short report describes 13 (of 90) cancers detected in only one of the two parallel double-reading arms implemented in STORM-2. Amongst this subset of cases, the majority were invasive cancers ≤16 mm, mostly depicted as irregular masses or distortions. Furthermore, most were detected at 3D-mammography only and predominantly by one reader from the double-reading pairs, highlighting that 3D-mammography may enable detection of cancers that are challenging to perceive at routine screening.

  12. Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes

    PubMed Central

    Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning

    2015-01-01

    This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. The visual saliency scheme utilizing both color and depth cues is proposed to arouse the interests of the machine system for detecting unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from MRF are further refined by merging the labeled objects, which are spatially connected and have high correlation between color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. The experiments of object detection and manipulation performed on a mobile manipulator validate its effectiveness and practicability in robotic applications. PMID:26343656

  13. Contour based object detection using part bundles

    PubMed Central

    Lu, ChengEn; Adluru, Nagesh; Ling, Haibin; Zhu, Guangxi; Latecki, Longin Jan

    2016-01-01

    In this paper we propose a novel framework for contour-based object detection in cluttered environments. Given a contour model for a class of objects, it is first decomposed into fragments hierarchically. Then, we group these fragments into part bundles, where a part bundle can contain overlapping fragments. Given a new image with a set of edge fragments, we develop an efficient voting method using local shape similarity between part bundles and edge fragments that generates high-quality candidate part configurations. We then use global shape similarity between the part configurations and the model contour to find the optimal configuration. Furthermore, we show that appearance information can be used to improve detection for objects with distinctive texture when the model contour does not sufficiently capture the deformation of the objects.

  14. Diffusion Background Model for Moving Objects Detection

    NASA Astrophysics Data System (ADS)

    Vishnyakov, B. V.; Sidyakin, S. V.; Vizilter, Y. V.

    2015-05-01

    In this paper, we propose a new approach for moving object detection in video surveillance systems. It is based on the construction of regression diffusion maps for the image sequence. This approach is completely different from state-of-the-art approaches. We show that the motion analysis method, based on diffusion maps, allows objects that move at different speeds, or even stop for a short while, to be uniformly detected. We show that the proposed model is comparable to the most popular modern background models. We also show several ways of speeding up the diffusion maps algorithm itself.

  15. Automatic detection of lung nodules in CT datasets based on stable 3D mass-spring models.

    PubMed

    Cascio, D; Magro, R; Fauci, F; Iacomi, M; Raso, G

    2012-11-01

    We propose a computer-aided detection (CAD) system which can detect small-sized (from 3 mm) pulmonary nodules in spiral CT scans. A pulmonary nodule is a small lesion in the lungs, round-shaped (parenchymal nodule) or worm-shaped (juxtapleural nodule). Both kinds of lesions have a radio-density greater than the lung parenchyma, thus appearing white on the images. Lung nodules might indicate lung cancer, and their early-stage detection arguably improves the patient survival rate. CT is considered to be the most accurate imaging modality for nodule detection. However, the large amount of data per examination makes the full analysis difficult, leading to omission of nodules by the radiologist. We developed an advanced computerized method for the automatic detection of internal and juxtapleural nodules on low-dose and thin-slice lung CT scans. This method consists of an initial selection of a nodule candidate list, the segmentation of each candidate nodule and the classification of the features computed for each segmented nodule candidate. The presented CAD system is aimed at reducing the number of omissions and decreasing the radiologist's scan examination time. Our system locates both internal and juxtapleural nodules with the same scheme. For a correct volume segmentation of the lung parenchyma, the system uses a Region Growing (RG) algorithm and an opening process for including the juxtapleural nodules. The segmentation and the extraction of the suspected nodular lesions from CT images by a lung CAD system constitutes a hard task. In order to solve this key problem, we use a new Stable 3D Mass-Spring Model (MSM) combined with a spline curves reconstruction process. Our model represents concurrently the characteristic gray value range, the directed contour information as well as shape knowledge, which leads to a much more robust and efficient segmentation process. For distinguishing the real nodules among nodule candidates, an additional classification step is applied
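
    The lung-volume segmentation mentioned above relies on region growing; the sketch below shows the basic flood-fill-style variant on a 2D slice, growing from a seed while pixels stay within an intensity tolerance. The synthetic slice, the seed and the tolerance are illustrative assumptions, not the parameters of the described CAD system.

        from collections import deque

        import numpy as np

        def region_grow(image, seed, tol):
            """Grow a connected region from `seed`, accepting 4-neighbours whose intensity
            differs from the seed value by less than `tol`."""
            h, w = image.shape
            seed_val = float(image[seed])
            mask = np.zeros((h, w), dtype=bool)
            mask[seed] = True
            queue = deque([seed])
            while queue:
                r, c = queue.popleft()
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                            and abs(float(image[nr, nc]) - seed_val) < tol:
                        mask[nr, nc] = True
                        queue.append((nr, nc))
            return mask

        # hypothetical CT-like slice: air (-1000 HU) on the left, soft tissue (0 HU) on the right
        slice_hu = np.full((64, 64), -1000.0)
        slice_hu[:, 40:] = 0.0
        lung_mask = region_grow(slice_hu, seed=(32, 10), tol=300.0)
        print("segmented pixels:", int(lung_mask.sum()))   # grows over the air region only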

  16. Detecting and Analyzing Corrosion Spots on the Hull of Large Marine Vessels Using Colored 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Aijazi, A. K.; Malaterre, L.; Tazir, M. L.; Trassoudaine, L.; Checchin, P.

    2016-06-01

    This work presents a new method that automatically detects and analyzes surface defects, such as corrosion spots of different shapes and sizes, on large ship hulls. In the proposed method, several scans from different positions and viewing angles around the ship are registered together to form a complete 3D point cloud. The R, G, B values associated with each scan, obtained with the help of an integrated camera, are converted into HSV space to separate the illumination-invariant color component from the intensity. Using this color component, different surface defects such as corrosion spots of different shapes and sizes are automatically detected, within a selected zone, using two different methods depending upon the level of corrosion/defects. The first method relies on a histogram-based distribution, whereas the second relies on adaptive thresholds. The detected corrosion spots are then analyzed and quantified to help better plan and estimate the cost of repair and maintenance. Results are evaluated on real data using different standard evaluation metrics to demonstrate the efficacy as well as the technical strength of the proposed method.
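
    The colour-based step can be illustrated by converting per-point R, G, B values to HSV and keeping points whose hue falls in a reddish-brown band typical of rust; the point colours and the hue/saturation limits below are arbitrary assumptions, not the calibrated thresholds of the paper.

        import colorsys

        import numpy as np

        def corrosion_mask(rgb, hue_range=(0.02, 0.12), min_saturation=0.35):
            """Flag points whose illumination-invariant hue lies in an assumed rust band."""
            hsv = np.array([colorsys.rgb_to_hsv(*p) for p in rgb])    # columns: H, S, V in [0, 1]
            in_hue = (hsv[:, 0] >= hue_range[0]) & (hsv[:, 0] <= hue_range[1])
            saturated = hsv[:, 1] >= min_saturation
            return in_hue & saturated

        # hypothetical per-point colours from the registered, colourised scan (values in [0, 1])
        colors = np.array([
            [0.55, 0.30, 0.15],   # rusty brown      -> expected hit
            [0.60, 0.60, 0.62],   # grey hull paint
            [0.70, 0.42, 0.20],   # orange corrosion -> expected hit
            [0.10, 0.25, 0.55],   # blue antifouling paint
        ])
        print(corrosion_mask(colors))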

  17. 3-D visualisation of palaeoseismic trench stratigraphy and trench logging using terrestrial remote sensing and GPR - combining techniques towards an objective multiparametric interpretation

    NASA Astrophysics Data System (ADS)

    Schneiderwind, S.; Mason, J.; Wiatr, T.; Papanikolaou, I.; Reicherter, K.

    2015-09-01

    Two normal faults on the Island of Crete and mainland Greece were studied to create and test an innovative workflow to make palaeoseismic trench logging more objective, and to visualise the sedimentary architecture within the trench wall in 3-D. This is achieved by combining classical palaeoseismic trenching techniques with multispectral approaches. A conventional trench log was first compared to the results of an iso-cluster analysis of a true colour photomosaic representing the spectrum of visible light. Passive data collection disadvantages (e.g. illumination) were addressed by complementing the dataset with an active near-infrared backscatter signal image from t-LiDAR measurements. The multispectral analysis shows that distinct layers can be identified, and it compares well with the conventional trench log. According to this, a distinction of adjacent stratigraphic units was enabled by their particular multispectral composition signature. Based on the trench log, a 3-D interpretation of GPR data collected on the vertical trench wall was then possible. This is highly beneficial for measuring representative layer thicknesses, displacements and geometries at depth within the trench wall. Thus, misinterpretation due to cutting effects is minimised. Sedimentary feature geometries related to earthquake magnitude can be used to improve the accuracy of seismic hazard assessments. Therefore, this manuscript combines multiparametric approaches and shows: (i) how a 3-D visualisation of palaeoseismic trench stratigraphy and logging can be accomplished by combining t-LiDAR and GPR techniques, and (ii) how a multispectral digital analysis can offer additional advantages and a higher objectivity in the interpretation of palaeoseismic and stratigraphic information. The multispectral datasets are stored, allowing unbiased input for future (re-)investigations.
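
    The iso-cluster analysis of the photomosaic is essentially unsupervised pixel clustering; a rough stand-in using scikit-learn's KMeans on stacked colour and near-infrared channels is sketched below. The synthetic channels and the choice of five clusters are assumptions for illustration only.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)
        height, width = 60, 80
        # hypothetical trench-wall channels: red, green, blue from the photomosaic plus t-LiDAR NIR backscatter
        red = rng.random((height, width))
        green = rng.random((height, width))
        blue = rng.random((height, width))
        nir = rng.random((height, width))

        pixels = np.stack([red, green, blue, nir], axis=-1).reshape(-1, 4)
        labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
        unit_map = labels.reshape(height, width)      # each label is a candidate stratigraphic unit
        print("pixels per cluster:", np.bincount(labels))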

  18. Small object detection neurons in female hoverflies.

    PubMed

    Nordström, Karin; O'Carroll, David C

    2006-05-22

    While predators such as dragonflies are dependent on visual detection of moving prey, social interactions make conspecific detection equally important for many non-predatory insects. Specialized 'acute zones' associated with target detection have evolved in several insect groups and are a prominent male-specific feature in many dipteran flies. The physiology of target-selective neurons associated with these specialized eye regions has previously been described only from male flies. We show here that female hoverflies (Eristalis tenax) have several classes of neurons within the third optic ganglion (lobula) capable of detecting moving objects smaller than 1 degree. These neurons have frontal receptive fields covering a large part of the ipsilateral world and are tuned to a broad range of target speeds and sizes. This could make them suitable for detecting targets under a range of natural conditions, such as those required during predator avoidance or conspecific interactions.

  19. Experimental and numerical investigation of the 3D SPECT photon detection kernel for non-uniform attenuating media

    NASA Astrophysics Data System (ADS)

    Riauka, Terence A.; Hooper, H. Richard; Gortel, Zbigniew W.

    1996-07-01

    Experimental tests for non-uniform attenuating media are performed to validate theoretical expressions for the photon detection kernel, obtained from a recently proposed analytical theory of photon propagation and detection for SPECT. The theoretical multi-dimensional integral expressions for the photon detection kernel, which are computed numerically, describe the probability that a photon emitted from a given source voxel will trigger detection of a photon at a particular projection pixel. The experiments were performed using a cylindrical water-filled phantom with large cylindrical air-filled inserts to simulate inhomogeneity of the medium. A point-like, a short thin cylindrical and a large cylindrical radiation source were placed at various positions within the phantom. The values numerically calculated from the theoretical kernel expressions are in very good agreement with the experimentally measured data. The significance of Compton-scattered photons in planar image formation is discussed and highlighted by these results. Using both experimental measurements and the calculated values obtained from the theory, the kernel's size is investigated. This is done by determining the square pixel neighbourhood of the gamma camera that must be connected to a particular radiation source voxel to account for a specific fraction of all counts recorded at all camera pixels. It is shown that the kernel's size is primarily dependent upon the source position and the properties of the attenuating medium through Compton scattering events, with 3D depth-dependent collimator resolution playing an important but secondary role, at least for imaging situations involving parallel hole collimation. By considering small point-like sources within a non-uniform elliptical phantom, approximating the human thorax, it is demonstrated

  20. Combined robotic-aided gait training and 3D gait analysis provide objective treatment and assessment of gait in children and adolescents with Acquired Hemiplegia.

    PubMed

    Molteni, Erika; Beretta, Elena; Altomonte, Daniele; Formica, Francesca; Strazzer, Sandra

    2015-08-01

    To evaluate the feasibility of a fully objective rehabilitation and assessment process of gait abilities in children suffering from Acquired Hemiplegia (AH), we studied the combined employment of robotic-aided gait training (RAGT) and 3D Gait Analysis (GA). A group of 12 patients with AH underwent 20 sessions of RAGT in addition to traditional manual physical therapy (PT). All the patients were evaluated before and after the training using the Gross Motor Function Measures (GMFM), the Functional Assessment Questionnaire (FAQ), and the 6-Minute Walk Test. They also received GA before and after RAGT+PT. Finally, results were compared with those obtained from a control group of 3 AH children who underwent PT only. After the training, the GMFM and FAQ showed significant improvement in patients receiving RAGT+PT. GA highlighted significant improvement in stance symmetry and step length of the affected limb. Moreover, pelvic tilt increased, and hip kinematics on the sagittal plane revealed a statistically significant increase in the range of motion during hip flexion-extension. Our data suggest that the combined program RAGT+PT induces improvements in functional activities and gait pattern in children with AH, and it demonstrates that the combined employment of RAGT and 3D-GA ensures a fully objective rehabilitative program.

  1. Hip2Norm: an object-oriented cross-platform program for 3D analysis of hip joint morphology using 2D pelvic radiographs.

    PubMed

    Zheng, G; Tannast, M; Anderegg, C; Siebenrock, K A; Langlotz, F

    2007-07-01

    We developed an object-oriented cross-platform program to perform three-dimensional (3D) analysis of hip joint morphology using two-dimensional (2D) anteroposterior (AP) pelvic radiographs. Landmarks extracted from 2D AP pelvic radiographs and optionally an additional lateral pelvic X-ray were combined with a cone beam projection model to reconstruct 3D hip joints. Since individual pelvic orientation can vary considerably, a method for standardizing pelvic orientation was implemented to determine the absolute tilt/rotation. The evaluation of anatomically morphologic differences was achieved by reconstructing the projected acetabular rim and the measured hip parameters as if obtained in a standardized neutral orientation. The program has been successfully used to interactively objectify acetabular version in hips with femoro-acetabular impingement or developmental dysplasia. Hip2Norm is written in the object-oriented programming language C++ using the cross-platform software Qt (TrollTech, Oslo, Norway) for the graphical user interface (GUI) and is transportable to any platform.

  2. Objective Method for Pain Detection/Diagnosis

    DTIC Science & Technology

    2013-11-01

    Principal Investigator: Bor-rong Chen, PhD. Report type: Final; dates covered: 15 May 2013 - 15 November 2013. Title: Objective Method for Pain Detection/Diagnosis. Reported work items include the design of the MoPASS system architecture, MoPASS sensor platform development, streaming firmware, MoPASS PC software, and the MoPASS desktop application.

  3. Neutron detection and characterization for non-proliferation applications using 3D computer optical memories [Use of 3D optical computer memory for radiation detectors/dosimeters. Final progress report

    SciTech Connect

    Gary W. Phillips

    2000-12-20

    We have investigated 3-dimensional optical random access memory (3D-ORAM) materials for the detection and characterization of charged particles or neutrons, by detecting the tracks left by the recoil charged particles produced by the neutrons. We have characterized the response of these materials to protons, alpha particles and carbon-12 nuclei as functions of dose and energy. We have observed individual tracks using scanning electron microscopy and atomic force microscopy. We are investigating the use of neural net analysis to characterize energetic neutron fields from their track structure in these materials.

  4. Modeling and simulation of a 3D-CMOS silicon photodetector for low-intensity light detection

    NASA Astrophysics Data System (ADS)

    Sabri Alirezaei, Iman; Burte, Edmund P.

    2016-03-01

    This paper presents the design and simulation of a novel high-performance 3D silicon photodetector for low-intensity light detection at room temperature (300 K). The photodetector is modeled, inspired by general MEMS fabrication, as a 3D structure in the silicon substrate using a bulk micromachining process, based on complementary metal-oxide semiconductor (CMOS) technology. The design includes a vertical n+/p junction as an optical window for lateral illumination. The simulation is carried out using COMSOL Multiphysics relying on theoretical and physical concepts, and the results are then assessed by numerical analysis with the SILVACO (Atlas) device simulator. Light is modeled as a monochromatic beam with a wavelength of 633 nm placed 1 μm from the optical window. The simulation is performed under reverse-bias dc voltage in the steady state. We present photocurrent-voltage (Iph-V) characteristics under different light intensities (2-10 mW/cm2) and dark current-voltage (Id-V) characteristics. Comparative studies of the sensitivity dependence on the dopant concentration in the substrate, acting as an intrinsic region, are carried out using two different p-type silicon substrates with doping concentrations of 1 × 10^15 cm^-3 and 4 × 10^12 cm^-3. Moreover, the sensitivity is evaluated with respect to the active substrate thickness. The simulation results confirm that high optical sensitivity of the photodetector with low dark current can be realized in this model.

  5. Detecting objects in radiographs for homeland security

    NASA Astrophysics Data System (ADS)

    Prasad, Lakshman; Snyder, Hans

    2005-05-01

    We present a general scheme for segmenting a radiographic image into polygons that correspond to visual features. This decomposition provides a vectorized representation that is a high-level description of the image. The polygons correspond to objects or object parts present in the image. This characterization of radiographs allows the direct application of several shape recognition algorithms to identify objects. In this paper we describe the use of constrained Delaunay triangulations as a uniform foundational tool to achieve multiple visual tasks, namely image segmentation, shape decomposition, and parts-based shape matching. Shape decomposition yields parts that serve as tokens representing local shape characteristics. Parts-based shape matching enables the recognition of objects in the presence of occlusions, which commonly occur in radiographs. The polygonal representation of image features affords the efficient design and application of sophisticated geometric filtering methods to detect large-scale structural properties of objects in images. Finally, the representation of radiographs via polygons results in a significant reduction of image file sizes and permits the scalable graphical representation of images, along with annotations of detected objects, in the SVG (Scalable Vector Graphics) format proposed by the World Wide Web Consortium (W3C). This is a textual representation that can be compressed and encrypted for efficient and secure transmission of information over wireless channels and on the Internet. In particular, the methods described here provide an algorithmic framework for developing image analysis tools for screening cargo at ports of entry for homeland security.
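
    The first step described above, turning image regions into polygons, can be approximated by extracting iso-contours from a thresholded image and simplifying them; the synthetic image and the simplification tolerance below are assumptions, and the constrained Delaunay machinery of the paper is not reproduced here.

        import numpy as np
        from skimage.measure import approximate_polygon, find_contours

        # hypothetical radiograph: a dark background with one bright rectangle and one bright disc
        img = np.zeros((100, 100), dtype=float)
        img[20:50, 15:45] = 1.0
        yy, xx = np.mgrid[0:100, 0:100]
        img[(yy - 70) ** 2 + (xx - 70) ** 2 <= 15 ** 2] = 1.0

        polygons = []
        for contour in find_contours(img, level=0.5):        # iso-contours around bright regions
            # Douglas-Peucker style simplification of each closed contour into a polygon
            polygons.append(approximate_polygon(contour, tolerance=2.0))

        for i, poly in enumerate(polygons):
            print(f"object {i}: polygon with {len(poly)} vertices")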

  6. The subjective experience of object recognition: comparing metacognition for object detection and object categorization.

    PubMed

    Meuwese, Julia D I; van Loon, Anouk M; Lamme, Victor A F; Fahrenfort, Johannes J

    2014-05-01

    Perceptual decisions seem to be made automatically and almost instantly. Constructing a unitary subjective conscious experience takes more time. For example, when trying to avoid a collision with a car on a foggy road you brake or steer away in a reflex, before realizing you were in a near accident. This subjective aspect of object recognition has been given little attention. We used metacognition (assessed with confidence ratings) to measure subjective experience during object detection and object categorization for degraded and masked objects, while objective performance was matched. Metacognition was equal for degraded and masked objects, but categorization led to higher metacognition than did detection. This effect turned out to be driven by a difference in metacognition for correct rejection trials, which seemed to be caused by an asymmetry of the distractor stimulus: It does not contain object-related information in the detection task, whereas it does contain such information in the categorization task. Strikingly, this asymmetry selectively impacted metacognitive ability when objective performance was matched. This finding reveals a fundamental difference in how humans reflect versus act on information: When matching the amount of information required to perform two tasks at some objective level of accuracy (acting), metacognitive ability (reflecting) is still better in tasks that rely on positive evidence (categorization) than in tasks that rely more strongly on an absence of evidence (detection).

  7. DETECTABILITY OF OORT CLOUD OBJECTS USING KEPLER

    SciTech Connect

    Ofek, Eran O.; Nakar, Ehud

    2010-03-01

    The size distribution and total mass of objects in the Oort Cloud have important implications to the theory of planet formation, including the properties of, and the processes taking place in the early solar system. We discuss the potential of space missions, such as Kepler and CoRoT, designed to discover transiting exoplanets, to detect Oort Cloud, Kuiper Belt, and main belt objects by occultations of background stars. Relying on published dynamical estimates of the content of the Oort Cloud, we find that Kepler's main program is expected to detect between 0 and ~100 occultation events by deca-kilometer-sized Oort Cloud objects. The occultation rate depends on the mass of the Oort Cloud, the distance to its 'inner edge', and the size distribution of its objects. In contrast, Kepler is unlikely to find occultations by Kuiper Belt or main belt asteroids, mainly due to the fact that it is observing a high ecliptic latitude field. Occultations by solar system objects will appear as a photometric deviation in a single measurement, implying that the information regarding the timescale and light-curve shape of each event is lost. We present statistical methods that have the potential to verify the authenticity of occultation events by solar system objects, to estimate the distance to the occulting population, and to constrain their size distribution. Our results are useful for planning of future space-based exoplanet searches in a way that will maximize the probability of detecting solar system objects, without hampering the main science goals.

  8. Portable, Easy-to-Operate, and Antifouling Microcapsule Array Chips Fabricated by 3D Ice Printing for Visual Target Detection.

    PubMed

    Zhang, Hong-Ze; Zhang, Fang-Ting; Zhang, Xiao-Hui; Huang, Dong; Zhou, Ying-Lin; Li, Zhi-Hong; Zhang, Xin-Xiang

    2015-06-16

    Herein, we proposed a portable, easy-to-operate, and antifouling microcapsule array chip for target detection. This prepackaged chip was fabricated by innovative and cost-effective 3D ice printing integrated with photopolymerization sealing, which eliminates complicated wet-chemistry preparation and effectively resists outside contaminants. Only a small volume of sample (2 μL for each microcapsule) was consumed to fulfill the assay. All the reagents required for the analysis were stored in ice form within the microcapsules before use, which guaranteed the long-term stability of the microcapsule array chips. Nitrite and glucose were chosen as models for proof of concept to achieve instant quantitative detection by the naked eye without the need for sophisticated external instruments. The simplicity, low cost, and small sample consumption endowed ice-printed microcapsule array chips with potential commercial value in the fields of on-site environmental monitoring, medical diagnostics, and rapid high-throughput point-of-care quantitative assays.

  9. Rapid review: Estimates of incremental breast cancer detection from tomosynthesis (3D-mammography) screening in women with dense breasts.

    PubMed

    Houssami, Nehmat; Turner, Robin M

    2016-12-01

    High breast tissue density increases breast cancer (BC) risk, and the risk of an interval BC in mammography screening. Density-tailored screening has mostly used adjunct imaging to screen women with dense breasts, however, the emergence of tomosynthesis (3D-mammography) provides an opportunity to steer density-tailored screening in new directions potentially obviating the need for adjunct imaging. A rapid review (a streamlined evidence synthesis) was performed to summarise data on tomosynthesis screening in women with heterogeneously dense or extremely dense breasts, with the aim of estimating incremental (additional) BC detection attributed to tomosynthesis in comparison with standard 2D-mammography. Meta-analysed data from prospective trials comparing these mammography modalities in the same women (N = 10,188) in predominantly biennial screening showed significant incremental BC detection of 3.9/1000 screens attributable to tomosynthesis (P < 0.001). Studies comparing different groups of women screened with tomosynthesis (N = 103,230) or with 2D-mammography (N = 177,814) yielded a pooled difference in BC detection of 1.4/1000 screens representing significantly higher BC detection in tomosynthesis-screened women (P < 0.001), and a pooled difference for recall of -23.3/1000 screens representing significantly lower recall in tomosynthesis-screened groups (P < 0.001), than for 2D-mammography. These estimates can inform planning of future trials of density-tailored screening and may guide discussion of screening women with dense breasts.

  10. One-Pot Synthesis of Fe3O4 Nanoparticle Loaded 3D Porous Graphene Nanocomposites with Enhanced Nanozyme Activity for Glucose Detection.

    PubMed

    Wang, Qingqing; Zhang, Xueping; Huang, Liang; Zhang, Zhiquan; Dong, Shaojun

    2017-03-01

    A novel one-pot strategy is proposed to fabricate 3D porous graphene (3D GN) decorated with Fe3O4 nanoparticles (Fe3O4 NPs) by using hemin as iron source. During the process, graphene oxide was simultaneously reduced and self-assembled to form 3D graphene hydrogel while Fe3O4 NPs synthesized from hemin distributed uniformly on 3D GN. The preparation process is simple, facile, economical, and green. The obtained freeze-dried product (3D GH-5) exhibits outstanding peroxidase-like activity. Compared to the traditional 2D graphene-based nanocomposites, the introduced 3D porous structure dramatically improved the catalytic activity, as well as the catalysis velocity and its affinity for substrate. The high catalytic activity could be ascribed to the formation of Fe3O4 NPs and 3D porous graphene structures. Based on its peroxidase-like activity, 3D GH-5 was used for colorimetric determination of glucose with a low detection limit of 0.8 μM.

  11. Object detection system using SPAD proximity detectors

    NASA Astrophysics Data System (ADS)

    Stark, Laurence; Raynor, Jeffrey M.; Henderson, Robert K.

    2011-10-01

    This paper presents an object detection system based upon the use of multiple single photon avalanche diode (SPAD) proximity sensors operating upon the time-of-flight (ToF) principle, whereby the co-ordinates of a target object in a coordinate system relative to the assembly are calculated. The system is similar to a touch screen system in form and operation, except that no physical sensing surface is required, which provides a novel advantage over most existing touch screen technologies. The sensors are controlled by FPGA-based firmware, and each proximity sensor in the system measures the range from the sensor to the target object. A software algorithm is implemented to calculate the x-y coordinates of the target object based on the distance measurements from at least two separate sensors and the known relative positions of these sensors. Existing proximity sensors were capable of determining the distance to an object with centimetric accuracy and were modified to obtain a wide field of view in the x-y axes with a low beam angle in z in order to provide a detection area as large as possible. Design and implementation of the firmware, electronic hardware, mechanics and optics are covered in the paper. Possible future work would include characterisation with alternative designs of proximity sensors, as this is the component which determines the highest achievable accuracy of the system.
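
    The coordinate calculation described above reduces to intersecting two range circles centred on the two sensors; a minimal Python sketch follows. The sensor baseline and the measured ranges are made-up numbers, not values from the paper.

        import math

        def locate_target(range_a, range_b, baseline):
            """Intersect two range circles with sensor A at (0, 0) and sensor B at (baseline, 0).
            Returns the (x, y) solution in front of the sensor bar (y >= 0), or None if the
            ranges are inconsistent."""
            x = (range_a ** 2 - range_b ** 2 + baseline ** 2) / (2.0 * baseline)
            y_squared = range_a ** 2 - x ** 2
            if y_squared < 0:
                return None
            return x, math.sqrt(y_squared)

        # hypothetical ToF readings (metres) from two SPAD proximity sensors mounted 0.30 m apart
        print(locate_target(range_a=0.25, range_b=0.20, baseline=0.30))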

  12. Track-before-detect procedures for detection of extended object

    NASA Astrophysics Data System (ADS)

    Fan, Ling; Zhang, Xiaoling; Shi, Jun

    2011-12-01

    In this article, we present a particle filter (PF)-based track-before-detect (PF TBD) procedure for the detection of extended objects whose shape is modeled by an ellipse. By incorporating an existence variable and the target shape parameters into the state vector, the proposed algorithm performs joint estimation of the target presence/absence, trajectory and shape parameters under unknown nuisance parameters (target power and noise variance). Simulation results show that the proposed algorithm has good detection and tracking capabilities for extended objects.

  13. Combining contour detection algorithms for the automatic extraction of the preparation line from a dental 3D measurement

    NASA Astrophysics Data System (ADS)

    Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut

    2005-04-01

    Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.

  14. Object detection using pulse coupled neural networks.

    PubMed

    Ranganath, H S; Kuntimad, G

    1999-01-01

    This paper describes an object detection system based on pulse coupled neural networks. The system is designed and implemented to illustrate the power, flexibility and potential that pulse coupled neural networks have in real-time image processing. In the preprocessing stage, a pulse coupled neural network suppresses noise by smoothing the input image. In the segmentation stage, a second pulse coupled neural network iteratively segments the input image. During each iteration, with the help of a control module, the segmentation network deletes regions that do not satisfy the retention criteria from further processing and produces an improved segmentation of the retained image. In the final stage, each group of connected regions that satisfies the detection criteria is identified as an instance of the object of interest.
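
    A minimal pulse coupled neural network iteration, with the usual feeding, linking, internal-activity and dynamic-threshold terms, is sketched below to make the firing mechanism concrete; the parameter values and the random test image are assumptions rather than the settings of the described system.

        import numpy as np

        def pcnn_fire_times(img, iterations=10, beta=0.2, alpha_theta=0.3, v_theta=20.0):
            """Simplified PCNN: a neuron fires (Y=1) when its internal activity U exceeds a decaying
            dynamic threshold theta; firing raises the threshold and feeds back through linking."""
            img = img.astype(float) / img.max()
            Y = np.zeros_like(img)
            theta = np.ones_like(img)
            fire_time = np.full(img.shape, -1)
            for t in range(iterations):
                # linking input: sum of 4-neighbour firings from the previous step (wrap-around borders)
                L = np.zeros_like(img)
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    L += np.roll(np.roll(Y, dr, axis=0), dc, axis=1)
                F = img                                   # feeding input kept equal to the stimulus
                U = F * (1.0 + beta * L)                  # modulatory coupling of feeding and linking
                Y = (U > theta).astype(float)
                theta = np.exp(-alpha_theta) * theta + v_theta * Y   # raise threshold where neurons fired
                fire_time[(fire_time < 0) & (Y > 0)] = t
            return fire_time                              # first-firing epoch acts as a coarse segmentation label

        rng = np.random.default_rng(4)
        test = rng.random((32, 32))
        test[8:20, 8:20] += 2.0                           # a brighter square should fire earlier
        print(np.unique(pcnn_fire_times(test), return_counts=True))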

  15. Detecting 3D Vegetation Structure with the Galileo Space Probe: Can a Distant Probe Detect Vegetation Structure on Earth?

    PubMed

    Doughty, Christopher E; Wolf, Adam

    2016-01-01

    Sagan et al. (1993) used the Galileo space probe data and first principles to find evidence of life on Earth. Here we ask whether Sagan et al. (1993) could also have detected whether life on Earth had three-dimensional structure, based on the Galileo space probe data. We reanalyse the data from this probe to see if structured vegetation could have been detected in regions with abundant photosynthetic pigments through the anisotropy of reflected shortwave radiation. We compare the changing brightness of the Amazon forest (a region where Sagan et al. (1993) noted a red edge in the reflectance spectrum, indicative of photosynthesis) as the planet rotates to a common model of reflectance anisotropy and found a measured increase of surface reflectance of 0.019 ± 0.003 versus 0.007 predicted from anisotropic effects alone. We hypothesize the difference was due to minor cloud contamination. However, the Galileo dataset had only a small change in phase angle (sun-satellite position), which reduced the observed anisotropy signal, and we demonstrate that, theoretically, if the probe had had a variable phase angle between 0-20°, there would have been a much larger predicted change in surface reflectance of 0.1, and under such a scenario three-dimensional vegetation structure on Earth could possibly have been detected. These results suggest that anisotropic effects may be useful to help determine whether exoplanets have three-dimensional vegetation structure in the future, but that further comparisons between empirical and theoretical results are first necessary.

  16. Detecting 3D Vegetation Structure with the Galileo Space Probe: Can a Distant Probe Detect Vegetation Structure on Earth?

    PubMed Central

    2016-01-01

    Sagan et al. (1993) used the Galileo space probe data and first principles to find evidence of life on Earth. Here we ask whether Sagan et al. (1993) could also have detected whether life on Earth had three-dimensional structure, based on the Galileo space probe data. We reanalyse the data from this probe to see if structured vegetation could have been detected in regions with abundant photosynthetic pigments through the anisotropy of reflected shortwave radiation. We compared the changing brightness of the Amazon forest (a region where Sagan et al. (1993) noted a red edge in the reflectance spectrum, indicative of photosynthesis) as the planet rotated to a common model of reflectance anisotropy, and found a measured increase in surface reflectance of 0.019 ± 0.003 versus the 0.007 predicted from anisotropic effects alone. We hypothesize that the difference was due to minor cloud contamination. However, the Galileo dataset spanned only a small change in phase angle (sun-satellite position), which reduced the observed anisotropy signal. We demonstrate that, theoretically, if the probe had a variable phase angle between 0° and 20°, the predicted change in surface reflectance would have been much larger (0.1), and under such a scenario three-dimensional vegetation structure on Earth could possibly have been detected. These results suggest that anisotropic effects may be useful in determining whether exoplanets have three-dimensional vegetation structure, but further comparisons between empirical and theoretical results are first necessary. PMID:27973530

  17. Computer aided detection of surgical retained foreign object for prevention

    SciTech Connect

    Hadjiiski, Lubomir; Marentis, Theodore C.; Rondon, Lucas; Chan, Heang-Ping; Chaudhury, Amrita R.; Chronis, Nikolaos

    2015-03-15

    Purpose: Surgical retained foreign objects (RFOs) have significant morbidity and mortality. They are associated with approximately $1.5 × 10^9 annually in preventable medical costs. The detection accuracy of radiographs for RFOs is a mediocre 59%. The authors address the RFO problem with two complementary technologies: a three-dimensional (3D) gossypiboma micro tag, the μTag, that improves the visibility of RFOs on radiographs, and a computer aided detection (CAD) system that detects the μTag. It is desirable for the CAD system to operate in a high specificity mode in the operating room (OR) and function as a first reader for the surgeon. This allows for fast point-of-care results and seamless workflow integration. The CAD system can also operate in a high sensitivity mode as a second reader for the radiologist to ensure the highest possible detection accuracy. Methods: The 3D geometry of the μTag produces a similar two dimensional (2D) depiction on radiographs regardless of its orientation in the human body and ensures accurate detection by a radiologist and the CAD. The authors created a data set of 1800 cadaver images with the 3D μTag and other common man-made surgical objects positioned randomly. A total of 1061 cadaver images contained a single μTag and the remaining 739 were without a μTag. A radiologist marked the location of the μTag using an in-house developed graphical user interface. The data set was partitioned into three independent subsets: a training set, a validation set, and a test set, consisting of 540, 560, and 700 images, respectively. A CAD system with modules that included preprocessing, μTag enhancement, labeling, segmentation, feature analysis, classification, and detection was developed. The CAD system was developed using the training and the validation sets. Results: On the training set, the CAD achieved 81.5% sensitivity with 0.014 false positives (FPs) per image in a high specificity mode for the surgeons in the OR and 96

  18. Fast Feature Pyramids for Object Detection.

    PubMed

    Dollár, Piotr; Appel, Ron; Belongie, Serge; Perona, Pietro

    2014-08-01

    Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures).
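
    The core approximation can be stated compactly: a channel computed at an octave scale is resampled to the intermediate scale and corrected by a power law whose exponent is estimated empirically per channel type. The sketch below illustrates this with a gradient-magnitude channel; the exponent value, the random test image, and the comparison are illustrative assumptions rather than the paper's calibrated settings.

    ```python
    # Sketch of the scale-extrapolation idea behind fast feature pyramids:
    # compute a channel (here, gradient magnitude) only at octave scales and
    # approximate intermediate scales by resampling and applying a power-law
    # correction.  The exponent lambda is channel dependent and would be
    # estimated from data; the value below is an illustrative assumption.
    import numpy as np
    from scipy.ndimage import zoom, sobel

    def gradient_magnitude(img):
        return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

    def approx_channel(channel_at_octave, scale_ratio, lam=0.1):
        """Approximate the channel at scale s = scale_ratio * s_octave (scale_ratio < 1)."""
        resampled = zoom(channel_at_octave, scale_ratio, order=1)
        return resampled * scale_ratio ** (-lam)     # power-law correction for feature scaling

    img = np.random.rand(128, 128)                   # stand-in for a grayscale image
    octave = gradient_magnitude(img)                 # feature computed explicitly at this octave
    approx = approx_channel(octave, 2 ** (-1 / 4))   # cheap estimate a quarter-octave down
    exact = gradient_magnitude(zoom(img, 2 ** (-1 / 4), order=1))
    print("relative error of channel mean:", abs(approx.mean() - exact.mean()) / exact.mean())
    ```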

  19. Automated 3D detection and classification of Giardia lamblia cysts using digital holographic microscopy with partially coherent source

    NASA Astrophysics Data System (ADS)

    El Mallahi, A.; Detavernier, A.; Yourassowsky, C.; Dubois, F.

    2012-06-01

    Over the past century, monitoring of Giardia lamblia has become a matter of concern for all drinking water suppliers worldwide. Indeed, this parasitic flagellated protozoan is responsible for giardiasis, a widespread diarrhoeal disease (200 million symptomatic individuals) that can lead immunocompromised individuals to death. The major difficulty raised by the Giardia lamblia cyst, its vegetative transmission form, is its ability to survive for long periods in harsh environments, including the chlorine concentrations and treatment durations traditionally used in water disinfection. Currently, there is a need for a reliable, inexpensive, and easy-to-use sensor for the identification and quantification of cysts in the incoming water. For this purpose, we investigated the use of a digital holographic microscope working with partially coherent spatial illumination, which reduces the coherent noise. Digital holography allows one to numerically investigate a volume by refocusing the different depth planes of a hologram. In this paper, we perform an automated 3D analysis that computes the complex amplitude of each hologram, detects all the particles present in the whole volume given by one hologram, refocuses those that are out of focus using a refocusing criterion based on the integrated complex amplitude modulus, and obtains the (x, y, z) coordinates of each particle. The particles are then segmented, and a set of morphological and texture features characteristic of Giardia lamblia cysts is computed in order to classify each particle into the correct class.
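
    The refocusing step can be illustrated with a simple numerical sketch: propagate the reconstructed complex field to a stack of candidate depths and keep the depth that minimises the integrated amplitude modulus, a common focus criterion for amplitude objects. The angular-spectrum propagator, wavelength, and pixel pitch below are illustrative assumptions, not the parameters of the instrument described.

    ```python
    # Minimal sketch of amplitude-modulus autofocus in digital holography:
    # propagate the field to several depths and pick the one with the smallest
    # integrated |amplitude|.  Optical parameters are illustrative assumptions.
    import numpy as np

    def angular_spectrum_propagate(field, dz, wavelength=0.5e-6, pitch=1e-6):
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pitch)
        fy = np.fft.fftfreq(ny, d=pitch)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))   # evanescent components dropped
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

    def refocus(field, z_candidates):
        """Return the depth minimising the integrated amplitude modulus."""
        scores = [np.abs(angular_spectrum_propagate(field, z)).sum() for z in z_candidates]
        return z_candidates[int(np.argmin(scores))]

    # Usage: best_z = refocus(particle_field, np.linspace(-50e-6, 50e-6, 41))
    ```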

  20. Preliminary study of statistical pattern recognition-based coin counterfeit detection by means of high resolution 3D scanners

    NASA Astrophysics Data System (ADS)

    Leich, Marcus; Kiltz, Stefan; Krätzer, Christian; Dittmann, Jana; Vielhauer, Claus

    2011-03-01

    According to the European Commission, around 200,000 counterfeit Euro coins are removed from circulation every year. While approaches exist to automatically detect these coins, satisfying error rates are usually only reached for low-quality forgeries, so-called "local classes". High-quality minted forgeries ("common classes") pose a problem for these methods as well as for trained humans. This paper presents a first approach to the statistical analysis of coins based on high-resolution 3D data acquired with a chromatic white light sensor. The goal of this analysis is to determine whether two coins are of common origin. The test set for these first and new investigations consists of 62 coins from not more than five different sources. The analysis is based on the assumption that, apart from markings caused by wear such as scratches and residue consisting of grease and dust, coins of equal origin have a more similar height field than coins from different mints. First results suggest that the selected approach is heavily affected by wear such as dents and scratches, and that further research is required to eliminate this influence. A course for future work is outlined.
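
    One simple way to make the "more similar height field" assumption operational is sketched below: remove a best-fit plane from each coin's height field to suppress tilt, then score the residual relief with a normalised cross-correlation. This is only an illustration of the general idea, not the statistical pattern recognition pipeline of the paper.

    ```python
    # Hedged sketch of a height-field similarity score between two coin scans.
    import numpy as np

    def remove_plane(h):
        """Subtract the least-squares plane from a 2-D height field."""
        ny, nx = h.shape
        y, x = np.mgrid[0:ny, 0:nx]
        A = np.column_stack([x.ravel(), y.ravel(), np.ones(h.size)])
        coeffs, *_ = np.linalg.lstsq(A, h.ravel(), rcond=None)
        return h - (A @ coeffs).reshape(h.shape)

    def similarity(h1, h2):
        """Normalised cross-correlation of the detrended relief (1.0 = identical)."""
        a, b = remove_plane(h1).ravel(), remove_plane(h2).ravel()
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))

    # Coins from a common origin would be expected to score higher than coins
    # from different mints, once wear (scratches, residue) is accounted for.
    ```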

  1. Development of an artificial compound eye system for three-dimensional object detection.

    PubMed

    Ma, Mengchao; Guo, Fang; Cao, Zhaolou; Wang, Keyi

    2014-02-20

    A compound eye has the advantages of a large field of view, high sensitivity, and compact structure, which makes it applicable to 3D object detection. In this work, an artificial compound eye system is developed for 3D object detection, consisting of a layer of lenslets and a prism-like beam-steering lens. A calibration method is developed for this system, with which the correspondences between incident light rays and the relevant image points can be obtained precisely using an active calibration pattern at multiple positions. Theoretically, calibration patterns at two positions are sufficient for system calibration, although more positions will increase the accuracy of the result. To evaluate the system, the 3D positions of point objects are calculated as the intersection of multiple incident light rays in the least-squares sense. Experimental results show that the system can detect an object with an angular accuracy of better than 1 mrad, demonstrating the feasibility of the proposed compound eye system. With a 2D scanning device, the system can be extended for general object detection in 3D space.
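
    The least-squares intersection of calibrated incident rays has a closed form, sketched below: each ray contributes a projector onto the plane orthogonal to its direction, and the 3D point is the solution of the resulting normal equations. The example rays are illustrative, not calibrated measurements from the compound eye.

    ```python
    # Minimal sketch of triangulating a point object from several calibrated
    # incident light rays: find the point minimising the sum of squared
    # distances to all rays (closed-form least squares).
    import numpy as np

    def intersect_rays(origins, directions):
        """origins, directions: (N, 3) arrays; directions need not be normalised."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)     # projector onto the plane orthogonal to the ray
            A += P
            b += P @ o
        return np.linalg.solve(A, b)

    # Three illustrative rays that (nearly) pass through the point (0, 0, 1):
    origins = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    directions = np.array([[0.0, 0.0, 1.0], [-1.0, 0.0, 1.0], [0.0, -1.0, 1.0]])
    print(intersect_rays(origins, directions))   # ~ [0, 0, 1]
    ```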

  2. Object and activity detection from aerial video

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Shi, Feng; Liu, Xin; Ghazel, Mohsen

    2015-05-01

    Aerial video surveillance has advanced significantly in recent years, as inexpensive high-quality video cameras and airborne platforms have become more readily available. Video has become an indispensable part of military operations and is now becoming increasingly valuable in the civil and paramilitary sectors. Such surveillance capabilities are useful for battlefield intelligence and reconnaissance as well as for monitoring major events, border control, and critical infrastructure. However, monitoring this growing flood of video data requires significant effort from increasingly large numbers of video analysts. We have developed a suite of aerial video exploitation tools that can relieve analysts of mundane monitoring by detecting, and alerting them to, objects and activities that require their attention. These tools can be used for both tactical applications and post-mission analytics so that the video data can be exploited more efficiently and in a more timely manner. A feature-based approach and a pixel-based approach have been developed for the Video Moving Target Indicator (VMTI) to detect moving objects in real time in aerial video. Such moving objects can then be classified by a person-detector algorithm that was trained with representative aerial data. We have also developed an activity detection tool that can detect activities of interest in aerial video, such as person-vehicle interaction. We have implemented a flexible framework so that new processing modules can be added easily. The Graphical User Interface (GUI) allows the user to configure the processing pipeline at run time to evaluate different algorithms and parameters. Promising experimental results have been obtained using these tools, and an evaluation has been carried out to characterize their performance.
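
    A pixel-based moving-target indicator for a moving airborne camera typically compensates platform motion before differencing frames. The sketch below shows one common way to do this with OpenCV (feature matching, homography registration, thresholded frame difference); it is a generic illustration under those assumptions, not the VMTI algorithms developed by the authors.

    ```python
    # Hedged sketch of a pixel-based moving-target indicator for aerial video:
    # register the previous frame to the current one with a homography (to
    # cancel camera motion), then threshold the frame difference.
    import cv2
    import numpy as np

    def moving_target_mask(prev_gray, curr_gray, diff_thresh=25):
        # Estimate camera motion from sparse feature correspondences.
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(prev_gray, None)
        k2, d2 = orb.detectAndCompute(curr_gray, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches])
        dst = np.float32([k2[m.trainIdx].pt for m in matches])
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        # Warp the previous frame into the current frame's coordinates.
        registered = cv2.warpPerspective(prev_gray, H, curr_gray.shape[::-1])
        # Residual differences after registration are moving-object candidates.
        diff = cv2.absdiff(curr_gray, registered)
        return (diff > diff_thresh).astype(np.uint8) * 255
    ```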

  3. Radar detection of moving objects around corners

    NASA Astrophysics Data System (ADS)

    Sume, A.; Gustafsson, M.; Jänis, A.; Nilsson, S.; Rahm, J.; Örbom, A.

    2009-05-01

    Detection of moving objects around corners, with no direct line-of-sight to the objects, is demonstrated in experiments using a coherent test-range radar. A setting was built up on the test-range ground consisting of two perpendicular wall sections forming a corner, with an opposite wall, intended to mimic a street scenario on a reduced scale. Two different wall materials were used, viz. light concrete and metallic walls. The latter choice served as a reference, eliminating transmission through the walls and thereby facilitating comparison with theoretical calculations. Standard radar reflectors were used as one kind of target object, set in horizontal, circular motion by a turntable. A human formed a second target, both walking and at standstill with micro-Doppler movements of body parts. The radar signal was produced by frequency stepping of a gated CW (Continuous Wave) waveform over a bandwidth of 2 or 4 GHz, between 8.5 and 12.5 GHz. Standard Doppler signal processing was applied, consisting of a double FFT: the first produced "range profiles", and the second was applied to specific range gates, resulting in the Doppler frequency spectra used for detection. Both the reference reflectors and the human could be detected in this scenario. The targets were detected both in the wave component that had undergone specular reflection in the opposite wall (the strongest) and in the component diffracted around the corner. Time-frequency analysis using the Short Time Fourier Transform technique brought out micro-Doppler components in the signature of a walking human. These experiments have been complemented with theoretical field calculations and separate reflection measurements of common building materials.
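
    The double-FFT processing chain can be reproduced on synthetic stepped-frequency data, as sketched below: an inverse FFT over the frequency steps of each sweep yields a range profile, and a second FFT across sweeps at a chosen range gate yields the Doppler spectrum used for detection. The waveform timing, sweep count, and single point target are illustrative assumptions, not the test-range configuration.

    ```python
    # Sketch of range-Doppler processing (double FFT) for a stepped-frequency
    # gated CW waveform, demonstrated on a synthetic slowly moving point target.
    import numpy as np

    c = 3e8
    freqs = np.linspace(8.5e9, 12.5e9, 256)          # stepped frequencies over 4 GHz
    n_sweeps = 64
    sweep_time = 5e-3                                 # time per full frequency sweep (assumed)
    r0, v = 6.0, 0.1                                  # target range [m] and radial velocity [m/s]

    # Simulated received phase for each sweep and frequency step.
    t = np.arange(n_sweeps) * sweep_time
    ranges = r0 + v * t
    data = np.exp(-1j * 4 * np.pi * freqs[None, :] * ranges[:, None] / c)

    range_profiles = np.fft.ifft(data, axis=1)        # FFT #1: frequency steps -> range profile
    gate = np.argmax(np.abs(range_profiles).mean(axis=0))          # strongest range gate
    doppler = np.fft.fftshift(np.fft.fft(range_profiles[:, gate])) # FFT #2: sweeps -> Doppler
    print("detected Doppler bin:", int(np.argmax(np.abs(doppler))))
    ```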

  4. Water Detection Based on Object Reflections

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Matthies, Larry H.

    2012-01-01

    Water bodies are challenging terrain hazards for terrestrial unmanned ground vehicles (UGVs) for several reasons. Traversing deep water bodies could cause costly damage to the electronics of UGVs. Additionally, a UGV that either breaks down due to water damage or becomes stuck in a water body during an autonomous operation will require rescue, potentially drawing critical resources away from the primary operation and increasing the operation cost. Thus, robust water detection is a critical perception requirement for UGV autonomous navigation. One of the properties useful for detecting still water bodies is that their surface acts as a horizontal mirror at high incidence angles. Still water bodies in wide-open areas can be detected by geometrically locating the exact pixels in the sky that are reflected in candidate water pixels on the ground, and predicting whether ground pixels are water based on color similarity to the sky and local terrain features. But in cluttered areas, where reflections of objects in the background dominate the appearance of the surface of still water bodies, detection based on sky reflections is of marginal value. Specifically, this software attempts to solve the problem of detecting still water bodies on cross-country terrain in cluttered areas at low cost.
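
    The geometric core of the sky-reflection cue is that a horizontal water surface mirrors the incoming ray about the vertical: the sky direction seen in a candidate water pixel is the ground-ray direction with its vertical component negated. The sketch below illustrates only that mirroring step; the camera model and the example ray are illustrative assumptions.

    ```python
    # Hedged sketch of the sky-reflection geometry for still-water detection.
    import numpy as np

    def reflected_sky_direction(ray_dir):
        """ray_dir: unit vector from camera towards a candidate water pixel (z up)."""
        d = np.asarray(ray_dir, dtype=float)
        d = d / np.linalg.norm(d)
        return d * np.array([1.0, 1.0, -1.0])   # mirror about the horizontal water plane

    # A ray looking slightly downwards reflects into one looking slightly upwards;
    # the sky colour along that direction is then compared with the candidate pixel.
    print(reflected_sky_direction([0.9, 0.0, -0.3]))
    ```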

  5. Diagnostic performance of 3D TSE MRI versus 2D TSE MRI of the knee at 1.5 T, with prompt arthroscopic correlation, in the detection of meniscal and cruciate ligament tears*

    PubMed Central

    Chagas-Neto, Francisco Abaeté; Nogueira-Barbosa, Marcello Henrique; Lorenzato, Mário Müller; Salim, Rodrigo; Kfuri-Junior, Maurício; Crema, Michel Daoud

    2016-01-01

    Objective To compare the diagnostic performance of the three-dimensional turbo spin-echo (3D TSE) magnetic resonance imaging (MRI) technique with the performance of the standard two-dimensional turbo spin-echo (2D TSE) protocol at 1.5 T, in the detection of meniscal and ligament tears. Materials and Methods Thirty-eight patients were imaged twice, first with a standard multiplanar 2D TSE MR technique, and then with a 3D TSE technique, both in the same 1.5 T MRI scanner. The patients underwent knee arthroscopy within the first three days after the MRI. Using arthroscopy as the reference standard, we determined the diagnostic performance and agreement. Results For detecting anterior cruciate ligament tears, the 3D TSE and routine 2D TSE techniques showed similar values for sensitivity (93% and 93%, respectively) and specificity (80% and 85%, respectively). For detecting medial meniscal tears, the two techniques also had similar sensitivity (85% and 83%, respectively) and specificity (68% and 71%, respectively). In addition, for detecting lateral meniscal tears, the two techniques had similar sensitivity (58% and 54%, respectively) and specificity (82% and 92%, respectively). There was a substantial to almost perfect intraobserver and interobserver agreement when comparing the readings for both techniques. Conclusion The 3D TSE technique has a diagnostic performance similar to that of the routine 2D TSE protocol for detecting meniscal and anterior cruciate ligament tears at 1.5 T, with the advantage of faster acquisition. PMID:27141127
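
    For reference, the sensitivity and specificity figures quoted above follow directly from the confusion counts of each MRI reading against the arthroscopic reference standard. The sketch below shows that computation on a small illustrative array, not on the study data.

    ```python
    # Minimal sketch of computing sensitivity/specificity against a reference standard.
    import numpy as np

    def sens_spec(mri_positive, arthro_positive):
        mri = np.asarray(mri_positive, dtype=bool)
        ref = np.asarray(arthro_positive, dtype=bool)
        tp = np.sum(mri & ref)      # tear called on MRI and confirmed at arthroscopy
        tn = np.sum(~mri & ~ref)
        fn = np.sum(~mri & ref)
        fp = np.sum(mri & ~ref)
        return tp / (tp + fn), tn / (tn + fp)

    sens, spec = sens_spec([1, 1, 0, 1, 0, 0], [1, 1, 1, 1, 0, 0])   # toy readings
    print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
    ```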

  6. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT.

    PubMed

    Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W

    2016-05-21

    The objective of this study was to introduce a new iterative method to reconstruct multileaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model, and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, the influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position, within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced compared to the current dose verification procedure. The iterative reconstruction method allows high-accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.
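
    The reconstruction loop can be sketched as a forward model plus an optimiser: assumed leaf positions are pushed through fluence and detector models to predict the low-resolution response, and the positions are adjusted until the prediction matches the measurement. The one-dimensional toy models below are illustrative stand-ins for the fluence and detector models of the paper.

    ```python
    # Hedged sketch of iterative leaf-position reconstruction from a coarse
    # detector response: optimise the leaf edge so the modelled response
    # matches the measured one.  All model parameters are illustrative.
    import numpy as np
    from scipy.optimize import minimize_scalar

    x = np.linspace(-50.0, 50.0, 1001)                 # mm, fine fluence grid
    detector_x = np.arange(-45.0, 46.0, 7.62)          # mm, ionisation-chamber positions

    def expected_response(leaf_edge, sigma=3.0):
        fluence = (x < leaf_edge).astype(float)        # toy fluence: open field up to the leaf edge
        # Toy detector model: Gaussian-weighted average of fluence around each chamber.
        w = np.exp(-0.5 * ((x[None, :] - detector_x[:, None]) / sigma) ** 2)
        return (w * fluence[None, :]).sum(axis=1) / w.sum(axis=1)

    true_edge = 12.7
    measured = expected_response(true_edge)            # stand-in for the measured response

    # Reconstruct the leaf edge by minimising the squared response difference.
    result = minimize_scalar(lambda e: np.sum((expected_response(e) - measured) ** 2),
                             bounds=(-40.0, 40.0), method="bounded")
    print(f"reconstructed leaf edge: {result.x:.2f} mm (true {true_edge} mm)")
    ```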

  7. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT

    NASA Astrophysics Data System (ADS)

    Visser, R.; Godart, J.; Wauben, D. J. L.; Langendijk, J. A.; van't Veld, A. A.; Korevaar, E. W.

    2016-05-01

    The objective of this study was to introduce a new iterative method to reconstruct multileaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model, and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, the influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position, within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced compared to the current dose verification procedure. The iterative reconstruction method allows high-accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.

  8. Evaluation of object level change detection techniques

    NASA Astrophysics Data System (ADS)

    Irvine, John M.; Bergeron, Stuart; Hugo, Doug; O'Brien, Michael A.

    2007-04-01

    A variety of change detection (CD) methods have been developed and employed to support imagery analysis for applications including environmental monitoring, mapping, and support to military operations. Evaluation of these methods is necessary to assess technology maturity, identify areas for improvement, and support transition to operations. This paper presents a methodology for conducting this type of evaluation, discusses the challenges, and illustrates the techniques. The evaluation of object-level change detection methods is more complicated than for automated techniques for processing a single image. We explore algorithm performance assessments, emphasizing the definition of the operating conditions (sensor, target, and environmental factors) and the development of measures of performance. Specific challenges include image registration; occlusion due to foliage, cultural clutter and terrain masking; diurnal differences; and differences in viewing geometry. Careful planning, sound experimental design, and access to suitable imagery with image truth and metadata are critical.

  9. Objective vortex detection in an astrophysical dynamo

    NASA Astrophysics Data System (ADS)

    Rempel, E. L.; Chian, A. C.-L.; Beron-Vera, F. J.; Szanyi, S.; Haller, G.

    2017-03-01

    A novel technique for detecting Lagrangian vortices is applied to a helical magnetohydrodynamic dynamo simulation. The vortices are given by tubular level surfaces of the Lagrangian averaged vorticity deviation, the trajectory integral of the normed difference of the vorticity from its spatial mean. This simple method is objective, i.e. invariant under time-dependent rotations and translations of the coordinate frame. We also adapt the technique to use it on magnetic fields and propose the method of integrated averaged current deviation to determine precisely the boundary of magnetic vortices. The relevance of the results for the study of vortices in solar plasmas is discussed.
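
    The Lagrangian averaged vorticity deviation itself is straightforward to compute once trajectories are available: integrate the absolute deviation of the vorticity from its instantaneous spatial mean along each trajectory. The sketch below does this for a two-dimensional analytic test flow with forward-Euler advection; the flow, time step, and grid are illustrative stand-ins for the dynamo simulation data.

    ```python
    # Sketch of the Lagrangian averaged vorticity deviation (LAVD) on a 2-D
    # analytic test flow (a Gaussian vortex), not the MHD simulation.
    import numpy as np

    def velocity(p, t):
        x, y = p[..., 0], p[..., 1]
        s = np.exp(-(x**2 + y**2))
        return np.stack([-y * s, x * s], axis=-1)          # steady swirling flow

    def vorticity(p, t):
        r2 = p[..., 0]**2 + p[..., 1]**2
        return 2.0 * np.exp(-r2) * (1.0 - r2)              # dv/dx - du/dy for the flow above

    xs = np.linspace(-2, 2, 81)
    X, Y = np.meshgrid(xs, xs)
    pos = np.stack([X, Y], axis=-1)                        # initial particle grid

    dt, n_steps = 0.05, 200
    lavd = np.zeros(X.shape)
    for k in range(n_steps):
        t = k * dt
        w = vorticity(pos, t)
        lavd += np.abs(w - w.mean()) * dt                  # trajectory integral of |omega - <omega>|
        pos = pos + dt * velocity(pos, t)                  # forward-Euler advection (sketch only)

    # Vortex boundaries would be extracted as outermost closed convex level curves of `lavd`.
    ```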

  10. Computer-aided forensics: metal object detection.

    PubMed

    Kelliher, Timothy; Leue, Bill; Lorensen, Bill; Lauric, Alexandra

    2006-01-01

    Recently, forensic investigators have started using diagnostic radiology devices (MRI, CT) to acquire image data from cadavers. This new technology, called the virtual autopsy, has the potential to provide a low-cost, non-invasive alternative or supplement to conventional autopsies. New image processing techniques are being developed to highlight forensically relevant information in the images. One such technique is the detection and characterization of metal objects embedded in the cadaver. Analysis of this information across a population with similar causes of death can lead to developing improved safety and protection devices, with a corresponding reduction in deaths.

  11. Continuous section extraction and over-underbreak detection of tunnel based on 3D laser technology and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Wang, Zhiwei; Han, Ya; Li, Shuang; Zhang, Xin

    2015-03-01

    To address over-/underbreak detection of roadways and the difficulties of roadway data collection, this paper presents a new method for continuous section extraction and over-/underbreak detection based on 3D laser scanning technology and image processing. The method is divided into the following three steps: Canny-based edge detection, local axis fitting, and continuous section extraction with over-/underbreak detection of each section. First, after Canny edge detection, a least-squares curve fitting method is used to locally fit the axis. Then the local attitude of the roadway is adjusted so that its axis is consistent with the extraction reference direction, and sections are extracted along that reference direction. Finally, the actual cross-sections are compared with the designed cross-section to complete the over-/underbreak detection. Experimental results show that, compared with traditional detection methods, the proposed method has a clear advantage in computational cost while ensuring that the sections are intercepted orthogonally to the axis.

  12. Improvement on object detection accuracy by using two compound eye systems

    NASA Astrophysics Data System (ADS)

    Ma, Mengchao; Wang, Keyi

    2014-09-01

    A compound eye is a multi-aperture imaging device, which makes it applicable to three-dimensional object detection. In our previous report, an artificial compound eye system was developed for 3D object detection. The system consists of a layer of plano-convex microlenses and a prism-like beam-steering lens. An innovative multi-position calibration method was developed to relate the incident light rays to the relevant image points. Theoretically, one compound eye system alone is capable of 3D object detection. However, the detection accuracy is limited due to the relatively small baseline between adjacent microlenses. In this work, an equivalent large baseline is obtained by using a system of two compound eyes. Preliminary experiments were performed to verify the improvement in the accuracy of 3D object detection. The experimental results with two compound eyes are compared with those obtained with only one compound eye. The results show that the system with two compound eyes can detect an object much more accurately, indicating the feasibility and flexibility of the proposed method.

  13. Detecting abandoned objects using interacting multiple models

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Münch, David; Kieritz, Hilke; Hübner, Wolfgang; Arens, Michael

    2015-10-01

    In recent years, the wide use of video surveillance systems has caused an enormous increase in the amount of data that has to be stored, monitored, and processed. As a consequence, it is crucial to support human operators with automated surveillance applications. Towards this end an intelligent video analysis module for real-time alerting in case of abandoned objects in public spaces is proposed. The overall processing pipeline consists of two major parts. First, person motion is modeled using an Interacting Multiple Model (IMM) filter. The IMM filter estimates the state of a person according to a finite-state, discrete-time Markov chain. Second, the location of persons that stay at a fixed position defines a region of interest, in which a nonparametric background model with dynamic per-pixel state variables identifies abandoned objects. In case of a detected abandoned object, an alarm event is triggered. The effectiveness of the proposed system is evaluated on the PETS 2006 dataset and the i-Lids dataset, both reflecting prototypical surveillance scenarios.
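
    A minimal version of the IMM idea for person motion is sketched below: two Kalman filters (a moving, constant-velocity model and a static model) are mixed according to a Markov transition matrix, and the posterior probability of the static model indicates when a person stays at a fixed position. The one-dimensional state layout, noise levels, and transition probabilities are illustrative assumptions, not the authors' configuration.

    ```python
    # Hedged sketch of an Interacting Multiple Model (IMM) filter with two modes.
    import numpy as np

    dt = 1.0
    F = [np.array([[1, dt], [0, 1.0]]),               # mode 0: moving (constant velocity)
         np.array([[1, 0], [0, 0.0]])]                # mode 1: static (velocity forced to zero)
    Q = [np.diag([0.05, 0.05]), np.diag([0.01, 1e-4])]
    H = np.array([[1.0, 0.0]])                        # only position is measured
    R = np.array([[0.5]])
    P_trans = np.array([[0.95, 0.05], [0.05, 0.95]])  # Markov chain over the two modes

    def imm_step(states, covs, mu, z):
        """states/covs: per-mode estimates; mu: mode probabilities; z: np.array([position])."""
        # 1. Mixing: combine the model-conditioned estimates.
        c = P_trans.T @ mu
        mix = (P_trans * mu[:, None]) / c[None, :]
        x_mix = [sum(mix[i, j] * states[i] for i in range(2)) for j in range(2)]
        P_mix = [sum(mix[i, j] * (covs[i] + np.outer(states[i] - x_mix[j], states[i] - x_mix[j]))
                     for i in range(2)) for j in range(2)]
        new_states, new_covs, lik = [], [], np.zeros(2)
        for j in range(2):
            # 2. Model-conditioned Kalman predict/update.
            x = F[j] @ x_mix[j]
            P = F[j] @ P_mix[j] @ F[j].T + Q[j]
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            r = z - H @ x
            lik[j] = np.exp(-0.5 * r @ np.linalg.solve(S, r)) / np.sqrt(2 * np.pi * np.linalg.det(S))
            new_states.append(x + K @ r)
            new_covs.append((np.eye(2) - K @ H) @ P)
        # 3. Mode-probability update; a persistently high static probability
        #    marks a person staying at a fixed position.
        mu_new = lik * c
        mu_new /= mu_new.sum()
        return new_states, new_covs, mu_new

    # Usage: iterate imm_step over position measurements; when mu[1] stays high,
    # that location defines the region of interest for abandoned-object analysis.
    ```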

  14. Determining root correspondence between previously and newly detected objects

    DOEpatents

    Paglieroni, David W.; Beer, N Reginald

    2014-06-17

    A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.

  15. A Novel Abandoned Object Detection System Based on Three-Dimensional Image Information

    PubMed Central

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Gao, Jing; Zou, Jinlin

    2015-01-01

    A new idea for an abandoned object detection system for road traffic surveillance, based on three-dimensional image information, is proposed in this paper to prevent traffic accidents. A novel Binocular Information Reconstruction and Recognition (BIRR) algorithm is presented to implement the new idea. As an initial detection, suspected abandoned objects are detected by the proposed static foreground region segmentation algorithm based on surveillance video from a monocular camera. After detection of suspected abandoned objects, three-dimensional (3D) information of the suspected abandoned object is reconstructed by the proposed theory of 3D object information reconstruction from binocular camera images. To determine whether the detected object is hazardous to normal road traffic, the road plane equation and the height of the suspected abandoned object are calculated from the three-dimensional information. Experimental results show that the system implements fast detection of abandoned objects and can be used for road traffic monitoring and public area surveillance. PMID:25806869
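
    The final hazard check described above reduces to a plane fit and a point-to-plane distance, sketched below: fit the road plane to reconstructed ground points and measure the height of the suspected object above it. The sample points and the implied alarm threshold are illustrative assumptions, not values from the paper.

    ```python
    # Minimal sketch of estimating object height above a fitted road plane.
    import numpy as np

    def fit_plane(points):
        """Least-squares plane through Nx3 points; returns unit normal n and offset d (n.x + d = 0)."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        n = vt[-1]                                     # normal = direction of least variance
        return n, -n @ centroid

    def height_above_plane(point, n, d):
        return abs(n @ point + d) / np.linalg.norm(n)

    # Illustrative reconstructed road points and the highest point of a suspected object.
    road = np.array([[0, 0, 0], [1, 0, 0.01], [0, 1, -0.01], [1, 1, 0.02], [2, 1, 0.0]], float)
    n, d = fit_plane(road)
    obj_top = np.array([0.5, 0.5, 0.35])
    print(f"object height above road: {height_above_plane(obj_top, n, d):.2f} m")
    ```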