Sample records for randomized hough transform

  1. Extraction of linear features on SAR imagery

    NASA Astrophysics Data System (ADS)

    Liu, Junyi; Li, Deren; Mei, Xin

    2006-10-01

    Linear features are usually extracted from SAR imagery by edge detectors derived from the contrast ratio edge detector with a constant probability of false alarm. The Hough Transform (HT), on the other hand, is an elegant way of extracting global features such as curve segments from binary edge images, and the Randomized Hough Transform (RHT) can drastically reduce the computation time and memory usage of the HT. However, the random sampling in the RHT produces a large number of invalid accumulator cells. In this paper, we propose a new, almost automatic approach to extracting linear features from SAR imagery, based on edge detection and the Randomized Hough Transform. The improved method makes full use of the directional information of each candidate edge point to solve the invalid-accumulation problem. Applied results are in good agreement with the theoretical study, and the main linear features in SAR imagery are extracted automatically. The method saves storage space and computation time, which shows its effectiveness and applicability.
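
    A minimal sketch of the plain line-voting RHT that this record builds on (not the authors' directional variant; the sample count and vote threshold are illustrative assumptions): sample point pairs, map each pair to normal-form line parameters, and vote in a sparse accumulator.

```python
import random
from collections import defaultdict
from math import atan2, cos, sin, pi

def rht_lines(points, n_samples=2000, vote_threshold=30, seed=0):
    """Randomized Hough Transform sketch for straight lines: each random
    point pair determines one line, written in normal form
    rho = x*cos(theta) + y*sin(theta); the (rho, theta) cell gets one vote."""
    rng = random.Random(seed)
    acc = defaultdict(int)
    pts = list(points)
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        # theta is the direction of the line's normal, folded into [0, pi)
        theta = atan2(x2 - x1, -(y2 - y1)) % pi
        rho = x1 * cos(theta) + y1 * sin(theta)
        acc[(round(rho), round(theta, 2))] += 1
    return [cell for cell, votes in acc.items() if votes >= vote_threshold]
```

    For 50 points on the line y = x, every sampled pair votes into the single cell (rho = 0, theta = 3π/4 ≈ 2.36), so one line is reported.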

  2. Hough transform method for track finding in center drift chamber

    NASA Astrophysics Data System (ADS)

    Azmi, K. A. Mohammad Kamal; Wan Abdullah, W. A. T.; Ibrahim, Zainol Abidin

    2016-01-01

    The Hough transform is a global tracking method expected to provide a faster approach to tracking the circular pattern of an electron moving in the Center Drift Chamber (CDC), by transforming each hit point into a circular curve. This paper presents an implementation of the Hough transform method for the reconstruction of tracks in the CDC, with hits generated by random numbers in a C-language program. Results from this implementation show that the highest peak of the circle-parameter values (xc, yc, rc) indicates the parameters of the circular track of a charged particle in the CDC region.
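
    The circle parameters (xc, yc, rc) that the method votes on can be obtained from three sampled hits by the standard circumcircle construction. A minimal sketch of that step (an assumed implementation detail, since the record gives no code):

```python
def circle_from_three_points(p1, p2, p3):
    """Circumscribed circle (xc, yc, rc) through three non-collinear points,
    solved from the perpendicular-bisector equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        return None  # collinear points: no unique circle
    s1, s2, s3 = x1 * x1 + y1 * y1, x2 * x2 + y2 * y2, x3 * x3 + y3 * y3
    xc = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    yc = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    rc = ((x1 - xc) ** 2 + (y1 - yc) ** 2) ** 0.5
    return xc, yc, rc
```

    Repeating this over random hit triples and histogramming the resulting (xc, yc, rc) values gives the peak the abstract describes.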

  4. Guaranteed convergence of the Hough transform

    NASA Astrophysics Data System (ADS)

    Soffer, Menashe; Kiryati, Nahum

    1995-01-01

    The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into the problem of finding the global maximum of a two-dimensional function over a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation, but unlike standard M-estimation procedures the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function over a bounded domain cannot be found by a finite number of function evaluations; only if sufficient a-priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a-priori information and its efficient use are the main challenges in real global optimization problems. In the Hough Transform, the global optimization problem is essentially the question of how fine the parameter-space quantization must be in order not to miss the true maximum; more than thirty years after Hough patented the basic algorithm, this problem is still essentially open. In this paper an attempt is made to identify a-priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined, accounting for edge-point location errors as well as background noise. Minimal parameter-space quantization intervals that guarantee convergence are obtained, focusing policies for multi-resolution Hough algorithms are developed, and theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, the convergence guarantees are probabilistic.
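
    For reference, the voting scheme whose quantization intervals the paper analyzes is the standard normal-parameterization accumulator. A minimal dense-grid sketch (numpy assumed; the bin counts are illustrative):

```python
import numpy as np

def hough_accumulator(points, n_theta=180, n_rho=200, rho_max=None):
    """Standard straight-line Hough Transform: each point votes, for every
    quantized theta, at rho = x*cos(theta) + y*sin(theta). The theta and rho
    bin widths are exactly the quantization intervals whose size the
    convergence analysis is concerned with."""
    pts = np.asarray(points, dtype=float)
    if rho_max is None:
        rho_max = np.hypot(pts[:, 0], pts[:, 1]).max() + 1.0
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    # rho of every point for every theta, shape (n_points, n_theta)
    rhos = pts[:, 0:1] * np.cos(thetas) + pts[:, 1:2] * np.sin(thetas)
    bins = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
    for j in range(n_theta):
        np.add.at(acc[:, j], bins[:, j], 1)
    return acc, thetas, rho_max
```

    For 41 points on the horizontal line y = 5, the accumulator peak sits at theta ≈ π/2 with rho ≈ 5 and collects all 41 votes.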

  5. Fetal head detection and measurement in ultrasound images by a direct inverse randomized Hough transform

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Tan, Jinglu; Floyd, Randall C.

    2005-04-01

    Object detection in ultrasound fetal images is a challenging task because of the relatively low resolution and low signal-to-noise ratio. A direct inverse randomized Hough transform (DIRHT) is developed for filtering and detecting incomplete curves in images with strong noise. The DIRHT combines the advantages of both the inverse and the randomized Hough transforms: in the inverse-transformed image, curves are highlighted while a large number of unrelated pixels are removed, a "curve-pass filtering" effect. Curves are detected by iteratively applying the DIRHT to the filtered image. The DIRHT was applied to head detection and measurement of the biparietal diameter (BPD) and head circumference (HC); no user input or geometric properties of the head were required, and detection and measurement took 2 seconds per image on a PC. The inter-run variations and the differences between the automatic measurements and sonographers' manual measurements were small compared with published inter-observer variations. The results demonstrated that the automatic measurements were consistent and accurate, making this method a valuable tool for fetal examinations.

  6. Detection and Estimation of Multi-Pulse LFMCW Radar Signals

    DTIC Science & Technology

    2010-01-01

    The Wigner-Ville Hough transform (WVHT), the Hough transform (HT) of the Wigner-Ville distribution (WVD), has been shown to be equivalent to the generalized likelihood ratio test (GLRT). The WVHT has been applied to detect and estimate the parameters of linear frequency-modulated signals, and it is one of the most prominent techniques studied in the literature [8], [9].

  7. Fetal head detection and measurement in ultrasound images by an iterative randomized Hough transform

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Tan, Jinglu; Floyd, Randall C.

    2004-05-01

    This paper describes an automatic method for measuring the biparietal diameter (BPD) and head circumference (HC) in ultrasound fetal images. A total of 217 ultrasound images were segmented using a K-means classifier, and the head skull was detected in 214 of the 217 cases by an iterative randomized Hough transform developed for detecting incomplete curves in strongly noisy images without user intervention. The automatic measurements were compared with conventional manual measurements by sonographers and a trained panel. The inter-run variations and the differences between the automatic and conventional measurements were small compared with published inter-observer variations. The results showed that the automated measurements were as reliable as the expert measurements and more consistent. This method has great potential in clinical applications.

  8. Lane detection using Randomized Hough Transform

    NASA Astrophysics Data System (ADS)

    Mongkonyong, Peerawat; Nuthong, Chaiwat; Siddhichai, Supakorn; Yamakita, Masaki

    2018-01-01

    According to reports of the Royal Thai Police for 2006 to 2015, unintentional lane changing is one of the leading causes of accidents. Among the methods considered to address this problem, the Lane Departure Warning System (LDWS), a mechanism designed to warn the driver when the vehicle begins to move out of its current lane, is a potential solution. An LDWS comprises several parts, including lane-boundary detection, driver warning and lane-marker tracking; this article focuses on lane-boundary detection. The proposed detector extracts lines from each frame of the input video and selects the lane markers of the road surface from those lines. Both the Standard Hough Transform (SHT) and the Randomized Hough Transform (RHT) are considered for line extraction: the SHT extracts lines from all edge pixels, while the RHT extracts only the lines voted for by point pairs randomly picked from the edge pixels, which reduces time and memory usage. Increasing the vote threshold in the RHT raises the chance that an accepted line is a true lane marker, but also costs time and memory. SHT and RHT with different threshold values are compared on 500 frames of video from a front-facing car camera; in this comparison, the accuracy and computation time of the RHT are similar to those of the SHT.
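
    The RHT detect-and-remove loop described above can be sketched as follows: sample point pairs, vote for quantized line parameters, and once a cell reaches the vote threshold, accept the line and delete its supporting points so the next lane marker can be found. All parameter values here are illustrative assumptions, not the article's:

```python
import random
from math import atan2, cos, sin, pi

def rht_detect_and_remove(points, threshold=15, max_iters=5000, tol=1.5, seed=0):
    """Iterative RHT sketch: each random point pair votes for one quantized
    (rho, theta) cell; a cell reaching the threshold is accepted as a line,
    its supporting points are removed, and voting restarts."""
    rng = random.Random(seed)
    pts = set(points)
    votes, lines = {}, []
    for _ in range(max_iters):
        if len(pts) < 2:
            break
        (x1, y1), (x2, y2) = rng.sample(sorted(pts), 2)
        theta = atan2(x2 - x1, -(y2 - y1)) % pi
        rho = x1 * cos(theta) + y1 * sin(theta)
        key = (round(rho), round(theta, 2))
        votes[key] = votes.get(key, 0) + 1
        if votes[key] >= threshold:
            lines.append(key)
            r, t = key
            # drop points within tol of the accepted line, then start over
            pts = {p for p in pts
                   if abs(p[0] * cos(t) + p[1] * sin(t) - r) > tol}
            votes.clear()
    return lines
```

    On a synthetic frame with one horizontal and one vertical marker, both lines are recovered in turn.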

  9. Polar exponential sensor arrays unify iconic and Hough space representation

    NASA Technical Reports Server (NTRS)

    Weiman, Carl F. R.

    1990-01-01

    The log-polar coordinate system, inherent in both polar exponential sensor arrays and log-polar remapped video imagery, is identical to the coordinate system of its corresponding Hough transform parameter space. The resulting unification of iconic and Hough domains simplifies computation for line recognition and eliminates the slope quantization problems inherent in the classical Cartesian Hough transform. The geometric organization of the algorithm is more amenable to massively parallel architectures than that of the Cartesian version. The neural architecture of the human visual cortex meets the geometric requirements to execute 'in-place' log-Hough algorithms of the kind described here.
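
    The coordinate change underlying this unification is simple to state. A minimal sketch of the log-polar remap (a generic illustration, not the sensor-array implementation):

```python
import numpy as np

def to_log_polar(points, r_min=1.0):
    """Log-polar remap sketch: Cartesian (x, y) -> (u, v) = (ln r, theta).
    In these coordinates rotation about the origin becomes a shift in v and
    uniform scaling becomes a shift in u, which is what lets the iconic image
    and the Hough parameter space share one coordinate system."""
    pts = np.asarray(points, dtype=float)
    r = np.hypot(pts[:, 0], pts[:, 1])
    keep = r >= r_min  # the foveal singularity at r -> 0 is excluded
    return np.column_stack([np.log(r[keep]),
                            np.arctan2(pts[keep, 1], pts[keep, 0])])
```

    Doubling all coordinates shifts every u value by ln 2 and leaves every v value unchanged, which is the shift-invariance the abstract exploits.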

  10. Determination of mango fruit from binary image using randomized Hough transform

    NASA Astrophysics Data System (ADS)

    Rizon, Mohamed; Najihah Yusri, Nurul Ain; Abdul Kadir, Mohd Fadzil; bin Mamat, Abd. Rasid; Abd Aziz, Azim Zaliha; Nanaa, Kutiba

    2015-12-01

    A method of detecting mango fruit in an RGB input image is proposed in this research. The input image is first converted to a binary image using texture analysis and morphological operations (dilation and erosion). The Randomized Hough Transform (RHT) is then used to find the best-fitting ellipse for each binary region. Texture analysis allows the system to detect mango fruits that partially overlap each other or are partially occluded by leaves, and its combination with the morphological operators isolates such fruits. The parameters derived from the RHT are used to calculate the center of each ellipse, which serves as the gripping point for a fruit-picking robot. As a result, the detection rate was up to 95% for fruit that is partially overlapped or partially covered by leaves.
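
    The record does not give the RHT ellipse-fitting details. As a hedged illustration of the center-recovery step only (using a general-conic least-squares fit in place of the RHT), the gripping point can be computed like this:

```python
import numpy as np

def ellipse_center(points):
    """Fit a general conic A x^2 + B xy + C y^2 + D x + E y + F = 0 to
    boundary points by least squares (F fixed to -1 to avoid the trivial
    solution, which assumes the ellipse does not pass through the origin),
    then return the conic's center -- the candidate gripping point."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    A, B, C, D, E = coeffs  # implicitly F = -1
    den = 4 * A * C - B * B
    # center solves the gradient equations 2Ax + By + D = 0, Bx + 2Cy + E = 0
    xc = (B * E - 2 * C * D) / den
    yc = (B * D - 2 * A * E) / den
    return xc, yc
```

    For points sampled on an axis-aligned ellipse centered at (3, -2), the fit recovers the center exactly.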

  11. A method to analyze molecular tagging velocimetry data using the Hough transform.

    PubMed

    Sanchez-Gonzalez, R; McManamen, B; Bowersox, R D W; North, S W

    2015-10-01

    The development of a method to analyze molecular tagging velocimetry data based on the Hough transform is presented. This method, based on line fitting, parameterizes the grid lines "written" into a flowfield. An initial proof-of-principle illustration of the method was performed to obtain two-component velocity measurements in the wake of a cylinder in a Mach 4.6 flow, using a data set derived from computational fluid dynamics simulations. The Hough transform is attractive for molecular tagging velocimetry applications since it can discriminate spurious features that would bias the fitting process. An assessment of the precision and accuracy of the method was also performed to show the dependence on analysis window size and signal-to-noise level. The accuracy of this Hough transform-based method in quantifying intersection displacements was determined to be comparable to that of cross-correlation methods. The employed line parameterization avoids the assumption of linearity in the vicinity of each intersection, which is important in the limit of drastic grid deformations resulting from the large velocity gradients common in high-speed flow applications. This Hough transform method has the potential to enable the direct and spatially accurate measurement of local vorticity, which is important in applications involving turbulent flowfields. Finally, two-component velocity determinations using the Hough transform on experimentally obtained images are presented, demonstrating the feasibility of the proposed analysis method.

  12. Vanishing points detection using combination of fast Hough transform and deep learning

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Ingacheva, Anastasia; Nikolaev, Dmitry

    2018-04-01

    In this paper we propose a novel method for vanishing-point detection based on a convolutional neural network (CNN) and the fast Hough transform algorithm. We show how to define a fast Hough transform neural-network layer and how to use it to make the neural-network approach better suited to the vanishing-point detection task. Our algorithm is a CNN with a sequence of convolutional and fast Hough transform layers. We build an estimator for the distribution of possible vanishing points in the image, which can then be used to find vanishing-point candidates. We provide experimental results from tests of the suggested method on images collected from videos of road trips. Our approach shows stable results on test images with different projective distortions and noise, and can be efficiently implemented for mobile GPUs and CPUs.

  13. The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design

    NASA Astrophysics Data System (ADS)

    Borrmann, Dorit; Elseberg, Jan; Lingemann, Kai; Nüchter, Andreas

    2011-03-01

    The Hough Transform is a well-known method for detecting parameterized objects. It is the de facto standard for detecting lines and circles in 2-dimensional data sets, but for 3D it has attracted little attention so far. Even in the 2D case, high computational costs have led to the development of numerous variants of the Hough Transform. In this article we evaluate different variants with respect to their ability to reliably detect planes in 3D point clouds. Apart from computational cost, the main problem is the representation of the accumulator: usual implementations favor geometrical objects with certain parameters due to uneven sampling of the parameter space. We present a novel accumulator design that aims at the same size for each cell and compare it to existing designs.
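
    The plane analogue of the line transform votes with the normal form ρ = x cosθ sinφ + y sinθ sinφ + z cosφ. A minimal sketch using the naive latitude-longitude accumulator, i.e. exactly the uneven-cell design the article improves on (grid sizes are illustrative assumptions):

```python
import numpy as np

def plane_hough(points, n_theta=36, n_phi=18, n_rho=40, rho_max=20.0):
    """3D Hough Transform sketch for plane detection: each point votes, for
    every quantized normal direction (theta, phi), for the signed distance
    rho = x*cos(theta)*sin(phi) + y*sin(theta)*sin(phi) + z*cos(phi).
    Cells near the poles cover far smaller direction patches than cells near
    the equator -- the uneven sampling the article's accumulator fixes."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0, np.pi, n_phi, endpoint=False)
    ct, st = np.cos(thetas), np.sin(thetas)
    cp, sp = np.cos(phis), np.sin(phis)
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=np.int32)
    for x, y, z in points:
        for i in range(n_theta):
            for j in range(n_phi):
                rho = x * ct[i] * sp[j] + y * st[i] * sp[j] + z * cp[j]
                k = int(round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)))
                if 0 <= k < n_rho:
                    acc[i, j, k] += 1
    return acc, thetas, phis
```

    A grid of 121 points on the plane z = 5 makes every (theta, phi = 0) cell at the rho bin for 5 collect all 121 votes, which also shows the pole degeneracy of this parameterization.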

  14. Combining convolutional neural networks and Hough Transform for classification of images containing lines

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy

    2017-03-01

    In this paper, we propose an expansion of the input features of a convolutional neural network (CNN) based on the Hough Transform. We perform morphological contrasting of the source image followed by the Hough Transform, and then use the result as input to some of the convolutional filters. Thus the CNN's computational complexity and number of units are unaffected; morphological contrasting and the Hough Transform are the only additional computational expense of the introduced input-feature expansion. The proposed approach is demonstrated on a CNN with a very simple structure, on two image recognition problems: object classification on CIFAR-10 and printed-character recognition on a private dataset of symbols taken from Russian passports. Our approach achieved a noticeable accuracy improvement at little computational cost, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, such as pressure-ridge analysis and classification.

  15. Search Radar Track-Before-Detect Using the Hough Transform.

    DTIC Science & Technology

    1995-03-01

    This report presents an improved target detection scheme, applicable to search radars, using the Hough transform image processing technique. The system concept involves a track-before-detect processing method which allows previous data to help in target detection; the technique provides many advantages.

  16. Parallel Monte Carlo Search for Hough Transform

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.

    2017-10-01

    We investigate the problem of line detection in digital image processing: in particular, how state-of-the-art algorithms behave in the presence of noise, and whether CPU efficiency can be improved by combining Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images; extended in the 1970s to the detection of other shapes, it came to be known as the Hough Transform (HT) and has been proposed, for example, for track fitting in the LHC ATLAS and CMS projects. The Hough Transform turns the line-detection problem into one of optimization: finding the peak of a vote count over cells that contain the possible points of candidate lines. The detection algorithm can be computationally expensive in both processor and memory demands, and its detection effectiveness can be reduced in the presence of noise. Our first contribution is an evaluation of a variation of the Radon Transform as a way of improving the effectiveness of line detection in the presence of noise. We then introduce parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection, as well as an algorithm for Parallel Monte Carlo Search applied to line detection, and discuss their algorithmic complexities. Finally, implementations on multi-GPU and multicore architectures are discussed.

  17. A novel approach to Hough Transform for implementation in fast triggers

    NASA Astrophysics Data System (ADS)

    Pozzobon, Nicola; Montecassiano, Fabio; Zotto, Pierluigi

    2016-10-01

    Telescopes of position-sensitive detectors are common layouts in charged-particle tracking, and programmable logic devices such as FPGAs represent a viable choice for the real-time reconstruction of track segments in such detector arrays. A compact implementation of the Hough Transform for fast triggers in High Energy Physics, exploiting a parameter-reduction method, is proposed, targeting a reduction of the storage and computing resources needed in current or near-future state-of-the-art FPGA devices while retaining high resolution over a wide range of track parameters. The proposed approach is compared to a standard Hough Transform, with particular emphasis on their application to muon detectors; in both cases, an original readout implementation is modeled.

  18. Automatic needle segmentation in 3D ultrasound images using 3D Hough transform

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Qiu, Wu; Ding, Mingyue; Zhang, Songgeng

    2007-12-01

    3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical, vascular, and urological imaging, and has shown great potential for image-guided surgery and therapy. Uterine adenoma and uterine bleeding are two of the most prevalent diseases in Chinese women, and a minimally invasive ablation system using a needle-like RF button electrode is currently used to destroy tumor cells or stop bleeding. A 3D US guidance system has been developed to avoid accidents, or death of the patient, caused by inaccurate localization of the electrode and the tumor position during treatment. In this paper, we describe two automated techniques, the 3D Hough Transform (3DHT) and the 3D Randomized Hough Transform (3DRHT), which are potentially fast, accurate, and robust, for needle segmentation in 3D US images for use in 3D US imaging guidance. Based on the (Φ, θ, ρ, α) representation of straight lines in 3D space, the 3DHT algorithm segments needles successfully under the assumption that the approximate needle position and orientation are known a priori. The 3DRHT algorithm was developed to detect needles quickly without any prior information about the 3D US images. The needle segmentation techniques were evaluated using 3D US images acquired by scanning water phantoms; the experiments demonstrated the feasibility of both algorithms.
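
    A hedged sketch of the 3DRHT idea for a needle-like line: sample point pairs, quantize the line's direction and its foot point, and vote. This mirrors the description above but does not use the paper's exact (Φ, θ, ρ, α) parameterization; the quantization steps are illustrative assumptions:

```python
import random
import numpy as np

def rht_3d_line(points, n_samples=4000, seed=0):
    """3D Randomized Hough Transform sketch: each sampled point pair gives a
    line described by its unit direction (sign-canonicalized so both pair
    orderings agree) and its foot point (closest point to the origin); both
    are quantized and voted on. Returns the winning cell and its vote count."""
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    votes = {}
    for _ in range(n_samples):
        i, j = rng.sample(range(len(pts)), 2)
        d = pts[j] - pts[i]
        n = np.linalg.norm(d)
        if n < 1e-9:
            continue
        d /= n
        if d[np.argmax(np.abs(d))] < 0:
            d = -d  # canonical sign
        foot = pts[i] - np.dot(pts[i], d) * d  # closest point on line to origin
        key = tuple(np.round(d, 1)) + tuple(np.round(foot, 0))
        votes[key] = votes.get(key, 0) + 1
    best = max(votes, key=votes.get)
    return best, votes[best]
```

    Points sampled along a single straight "needle" concentrate every vote in one cell, from which direction and position are read off directly.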

  19. Hough transform as a tool support building roof detection. (Polish Title: Transformata Hough'a jako narzędzie wspomagające wykrywanie dachów budynków)

    NASA Astrophysics Data System (ADS)

    Borowiec, N.

    2013-12-01

    Gathering information about the roof shapes of buildings is still a current issue. One of the many sources from which we can obtain information about buildings is airborne laser scanning; however, automatically extracting information about building roofs from a cloud of points is still a complex task. The task can be performed with the help of additional information from other sources, or based on Lidar data alone. This article describes how to detect building roofs from a point cloud only. Defining the shape of the roof is carried out in three steps: the first is finding the location of the building, the second is the precise determination of its edges, and the third is the identification of the roof planes. The first step is based on grid analysis, and the next two are based on the Hough Transform. The Hough Transform is a method of detecting collinear points, and therefore a perfect match for determining the lines describing a roof. Since the edges alone are not enough to determine the shape of the roof properly, the Hough Transform also served in this study as a tool for detecting the roof planes; the only difference is that the tool used in this case is three-dimensional.

  20. Slant rectification in Russian passport OCR system using fast Hough transform

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Bezmaternykh, Pavel; Nikolaev, Dmitry; Arlazarov, Vladimir

    2017-03-01

    In this paper, we introduce a slant detection method based on the Fast Hough Transform and demonstrate its application in an industrial system for Russian passport recognition. About 1.5% of these documents appear in slanted or italic fonts, which reduces the recognition rate because optical recognition systems are normally designed to process upright fonts. Our method uses the Fast Hough Transform to analyse vertical strokes of characters extracted with the help of the x-derivative of a text-line image. To improve the quality of the detector we also introduce field-grouping rules. The resulting algorithm reaches high detection quality; almost all remaining errors occur on passports with nonstandard fonts, on which the slant detector itself still behaves appropriately.

  1. Circle Hough transform implementation for dots recognition in braille cells

    NASA Astrophysics Data System (ADS)

    Jacinto Gómez, Edwar; Montiel Ariza, Holman; Martínez Sarmiento, Fredy Hernán.

    2017-02-01

    This paper shows a technique based on the Circle Hough Transform (CHT) for optical Braille recognition (OBR). Unlike other papers on the same topic, this one uses the Hough Transform for the recognition and transcription of Braille cells, proving the CHT to be an appropriate technique for handling various non-systematic factors that can affect the process, such as the type of paper on which the text to be transcribed is placed, lighting conditions, input-image resolution and flaws introduced by the capture process, which is performed with a scanner. Tests are performed on a local database containing text generated by sighted people and some transcripts by blind people, with the support of the National Institute for Blind People (INCI, for its Spanish acronym) in Colombia.
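
    Braille dots at a given scan resolution have a roughly fixed radius, which reduces the CHT to a 2-D center accumulator. A minimal fixed-radius sketch (the known-radius assumption and all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def cht_fixed_radius(edge_points, radius, shape, n_angles=60):
    """Circle Hough Transform sketch for dots of known radius: every edge
    pixel votes for all centers lying `radius` away from it, so accumulator
    peaks are dot centers."""
    h, w = shape
    acc = np.zeros((h, w), dtype=np.int32)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    offs = np.stack([np.round(radius * np.cos(angles)),
                     np.round(radius * np.sin(angles))]).T.astype(int)
    for x, y in edge_points:
        for dx, dy in offs:
            cx, cy = x + dx, y + dy
            if 0 <= cx < w and 0 <= cy < h:
                acc[cy, cx] += 1
    return acc
```

    A synthetic dot edge of radius 5 centered at (20, 15) makes that pixel the accumulator peak.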

  2. Shift-, rotation-, and scale-invariant shape recognition system using an optical Hough transform

    NASA Astrophysics Data System (ADS)

    Schmid, Volker R.; Bader, Gerhard; Lueder, Ernst H.

    1998-02-01

    We present a hybrid shape recognition system with an optical Hough transform processor. The features of the Hough space offer separate cancellation of distortions caused by translations and rotations, and scale invariance is provided by suitable normalization. The proposed system extends the capabilities of Hough-transform-based detection from straight lines only to areas bounded by edges. A very compact optical design is achieved by a microlens-array processor that accepts incoherent light as direct optical input and realizes the computationally expensive connections massively in parallel. Our newly developed algorithm extracts rotation- and translation-invariant normalized patterns of bright spots on a 2D grid. A neural network classifier maps the 2D features via a nonlinear hidden layer onto the classification output vector. We propose initializing the connection weights according to regions of activity specifically assigned to each neuron in the hidden layer using a competitive network. The presented system is designed for industrial inspection applications; at present we have demonstrated real-time detection of six different machined parts. Our method yields very promising detection results of more than 96% correctly classified parts.

  3. A Real-Time System for Lane Detection Based on FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Xiao, Jing; Li, Shutao; Sun, Bin

    2016-12-01

    This paper presents a real-time lane detection system, comprising an edge-detection and improved-Hough-Transform lane detection algorithm and its hardware implementation on a field programmable gate array (FPGA) and a digital signal processor (DSP). First, gradient amplitude and direction information are combined to extract lane edge information, which is then used to determine the region of interest. Finally, the lanes are extracted using an improved Hough Transform. The image processing module of the system consists of the FPGA and the DSP: the algorithms implemented in the FPGA are pipelined and process data in parallel so that the system runs in real time, while the DSP performs lane-line extraction and display with the improved Hough Transform. Experimental results show that the proposed system detects lanes efficiently and effectively under different road situations.

  4. A Hough Transform Global Probabilistic Approach to Multiple-Subject Diffusion MRI Tractography

    DTIC Science & Technology

    2010-04-01

    A global probabilistic fiber tracking approach, based on the voting process provided by the Hough transform, is introduced in this work, together with criteria for aligning curves and particularly tracts.

  5. Accurately estimating PSF with straight lines detected by Hough transform

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong

    2018-04-01

    This paper presents an approach to estimating the point spread function (PSF) from low-resolution (LR) images. Existing techniques usually rely on accurate detection of the ending points of the profile normal to edges; in practice, however, it is often a great challenge to accurately localize edge profiles in a LR image, which leads to a poor estimate of the PSF of the lens that took the image. For precise estimation, this paper proposes first estimating a 1-D PSF kernel from straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least-squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel from straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images; experimental results show that it outperforms the state-of-the-art and does not rely on accurate edge detection.

  6. Iris Location Algorithm Based on the CANNY Operator and Gradient Hough Transform

    NASA Astrophysics Data System (ADS)

    Zhong, L. H.; Meng, K.; Wang, Y.; Dai, Z. Q.; Li, S.

    2017-12-01

    In an iris recognition system, the accuracy of the localization of the inner and outer edges of the iris directly affects recognition performance, so iris localization is an important research problem. Our iris data contain eyelids, eyelashes, light spots and other noise, and the gray-level variation of the images is not obvious, so general iris localization methods fail on them. A method of iris localization based on the Canny operator and the gradient Hough transform is proposed. First, the images are pre-processed; then, using the gradient information of the images, the inner and outer edges of the iris are coarsely positioned with the Canny operator; finally, the gradient Hough transform provides precise localization of the inner and outer edges. Experimental results show that our algorithm localizes the inner and outer edges of the iris well, has strong anti-interference ability, greatly reduces the localization time, and has high accuracy and stability.
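
    A minimal sketch of the gradient-variant voting step: instead of voting along a whole circle per edge pixel, each pixel votes only along its gradient direction for a range of radii. This is a generic gradient Hough illustration, not the authors' code; radius bounds are illustrative assumptions:

```python
import numpy as np

def gradient_hough_circles(edge_pts, gradients, shape, r_min=5, r_max=15):
    """Gradient Hough transform sketch for circular edges: each edge pixel
    votes along its gradient direction (both senses, since the gradient may
    point into or out of the circle), one vote per candidate radius, so
    circle centers accumulate votes without scanning every angle."""
    h, w = shape
    acc = np.zeros((h, w), dtype=np.int32)
    for (x, y), (gx, gy) in zip(edge_pts, gradients):
        n = np.hypot(gx, gy)
        if n < 1e-9:
            continue
        ux, uy = gx / n, gy / n
        for r in range(r_min, r_max + 1):
            for s in (1, -1):
                cx = int(round(x + s * r * ux))
                cy = int(round(y + s * r * uy))
                if 0 <= cx < w and 0 <= cy < h:
                    acc[cy, cx] += 1
    return acc
```

    For a synthetic circular edge with radial gradients, the true center collects one vote per edge pixel and dominates the accumulator.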

  7. A novel algorithm for osteoarthritis detection in Hough domain

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sabyasachi; Poria, Nilanjan; Chakraborty, Rajanya; Pratiher, Sawon; Mukherjee, Sukanya; Panigrahi, Prasanta K.

    2018-02-01

    Background subtraction of knee MRI images is performed, followed by edge detection with the Canny edge detector. To avoid discontinuities among the edges, the Daubechies-4 (Db-4) discrete wavelet transform (DWT) is applied to smooth the edges identified by the Canny detector, and the Db-4 approximation coefficients, which carry the highest energy, are selected to remove the discontinuities. The Hough transform is then applied to find the imperfect knee edges as a function of distance (r) and angle (θ). The final outcome of the linear Hough transform is a two-dimensional array, the accumulator space (r, θ), in which one dimension is the quantized angle θ and the other is the quantized distance r. A novel algorithm is suggested such that any deviation from the healthy knee-bone structure, for diseases like osteoarthritis, can be clearly depicted in the accumulator space.

  8. Mobile robot motion estimation using Hough transform

    NASA Astrophysics Data System (ADS)

    Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu

    2018-05-01

    This paper proposes an algorithm for the estimation of mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of the space geometry at any preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or the map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from the measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking the problem of estimating mobile robot localization down into three smaller independent problems. A specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.

  9. Analysis of line structure in handwritten documents using the Hough transform

    NASA Astrophysics Data System (ADS)

    Ball, Gregory R.; Kasiviswanathan, Harish; Srihari, Sargur N.; Narayanan, Aswin

    2010-01-01

    In the analysis of handwriting in documents, a central task is determining the line structure of the text, e.g., the number of text lines, the locations of their start and end points, and the line width. While simple methods can handle ideal images, real-world documents have complexities such as overlapping line structure, variable line spacing, line skew, document skew, and noisy or degraded images. This paper explores the application of the Hough transform method to handwritten documents, with the goal of automatically determining the global document line structure in a top-down manner, which can then be used in conjunction with a bottom-up method such as connected-component analysis. The performance is significantly better than that of other top-down methods, such as the projection-profile method. In addition, we evaluate the performance of skew analysis by the Hough transform on handwritten documents.

  10. Textual blocks rectification method based on fast Hough transform analysis in identity documents recognition

    NASA Astrophysics Data System (ADS)

    Bezmaternykh, P. V.; Nikolaev, D. P.; Arlazarov, V. L.

    2018-04-01

    Textual blocks rectification or slant correction is an important stage of document image processing in OCR systems. This paper considers existing methods and introduces an approach for the construction of such algorithms based on Fast Hough Transform analysis. A quality measurement technique is proposed and obtained results are shown for both printed and handwritten textual blocks processing as a part of an industrial system of identity documents recognition on mobile devices.

  11. Tiled fuzzy Hough transform for crack detection

    NASA Astrophysics Data System (ADS)

    Vaheesan, Kanapathippillai; Chandrakumar, Chanjief; Mathavan, Senthan; Kamal, Khurram; Rahman, Mujib; Al-Habaibeh, Amin

    2015-04-01

    Surface cracks can be the bellwether of the failure of a component under loading, as they indicate fracture due to stresses and usage. For this reason, crack detection is indispensable for the condition monitoring and quality control of road surfaces. Pavement images have high levels of intensity variation and texture content, which makes crack detection difficult. Moreover, shallow cracks result in very low-contrast image pixels, making their detection harder still. For these reasons, research on pavement crack detection remains active even after years of study. In this paper, the fuzzy Hough transform is employed, for the first time, to detect cracks on any surface. The contribution of texture pixels to the accumulator array is reduced by using a tiled version of the Hough transform. A precision of 78% and a recall of 72% are obtained on an image set, acquired with an industrial imaging system, that contains very low-contrast cracking. When only high-contrast crack segments are considered, the values rise to the mid-to-high nineties.

  12. Estimation of cylinder orientation in three-dimensional point cloud using angular distance-based optimization

    NASA Astrophysics Data System (ADS)

    Su, Yun-Ting; Hu, Shuowen; Bethel, James S.

    2017-05-01

    Light detection and ranging (LIDAR) has become a widely used tool in remote sensing for mapping, surveying, modeling, and a host of other applications. The motivation behind this work is the modeling of piping systems in industrial sites, where cylinders are the most common primitive or shape. We focus on cylinder parameter estimation in three-dimensional point clouds, proposing a mathematical formulation based on angular distance to determine the cylinder orientation. We demonstrate the accuracy and robustness of the technique on synthetically generated cylinder point clouds (where the true axis orientation is known) as well as on real LIDAR data of piping systems. The proposed algorithm is compared with a discrete space Hough transform-based approach as well as a continuous space inlier approach, which iteratively discards outlier points to refine the cylinder parameter estimates. Results show that the proposed method is more computationally efficient than the Hough transform approach and is more accurate than both the Hough transform approach and the inlier method.

  13. The coordinate system of the eye in cataract surgery: Performance comparison of the circle Hough transform and Daugman's algorithm

    NASA Astrophysics Data System (ADS)

    Vlachynska, Alzbeta; Oplatkova, Zuzana Kominkova; Sramka, Martin

    2017-07-01

    The aim of this work is to determine the coordinate system of an eye and insert a polar-axis system into images captured by a slit lamp. The image of the eye with the polar axis helps the surgeon accurately implant a toric intraocular lens in the required position/rotation during cataract surgery. In this paper, two common algorithms for pupil detection are compared: the circle Hough transform and Daugman's algorithm. The procedures were tested and analysed on an anonymized data set of 128 eyes captured at the Gemini eye clinic in 2015.

  14. Cumulus cloud base height estimation from high spatial resolution Landsat data - A Hough transform approach

    NASA Technical Reports Server (NTRS)

    Berendes, Todd; Sengupta, Sailes K.; Welch, Ron M.; Wielicki, Bruce A.; Navar, Murgesh

    1992-01-01

    A semiautomated methodology is developed for estimating cumulus cloud base heights on the basis of high spatial resolution Landsat MSS data, using various image-processing techniques to match cloud edges with their corresponding shadow edges. The cloud base height is then estimated by computing the separation distance between the corresponding generalized Hough transform reference points. The differences between the cloud base heights computed by these means and a manual verification technique are of the order of 100 m or less; accuracies of 50-70 m may soon be possible via EOS instruments.

  15. Aerial Imagery and LIDAR Data Fusion for Unambiguous Extraction of Adjacent Level-Buildings Footprints

    NASA Astrophysics Data System (ADS)

    Mola Ebrahimi, S.; Arefi, H.; Rasti Veis, H.

    2017-09-01

    Our paper presents a new approach to identifying and extracting building footprints from aerial images and LiDAR data. Employing an edge-detection algorithm, the method first extracts the outer boundary of buildings; then, by taking advantage of the Hough transform and extracting the boundaries of connected buildings within a building block, it extracts the building footprints located in each block. The proposed method first recognizes the predominant orientation of a building block using the Hough transform, and then rotates the block by the inverted complement of the dominant line's angle, so that the block lies horizontally. Afterwards, using another Hough transform, vertical lines, which are candidate building boundaries, are extracted and the final building footprints within the block are obtained. The proposed algorithm is implemented and tested on the urban area of Zeebruges, Belgium (IEEE Contest, 2015). The areas of the extracted footprints are compared to the corresponding areas in the reference data, and the mean error is 7.43 m2. Qualitative and quantitative evaluations also suggest that the proposed algorithm yields acceptable results in the automated, precise extraction of building footprints.
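
    The rotate-to-horizontal step described in this abstract can be illustrated with a small sketch (assumed, simplified logic, not the authors' code): estimate the dominant direction from a length-weighted, Hough-like angle histogram, then rotate the block by its negative:

```python
import numpy as np

def dominant_angle(points, n_bins=180):
    """Dominant direction of a polygonal outline, from a length-weighted
    histogram of segment angles (a 1-D, Hough-like vote over theta)."""
    d = np.diff(points, axis=0)
    ang = np.arctan2(d[:, 1], d[:, 0]) % np.pi
    w = np.hypot(d[:, 0], d[:, 1])                   # weight votes by length
    hist, edges = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=w)
    i = int(hist.argmax())
    return 0.5 * (edges[i] + edges[i + 1])           # bin centre

def rotate_to_horizontal(points):
    """Rotate the outline so its dominant direction becomes horizontal."""
    a = dominant_angle(points)
    c, s = np.cos(-a), np.sin(-a)
    return points @ np.array([[c, -s], [s, c]]).T

# A 10 x 4 rectangle tilted by 25 degrees is brought back to level.
rect = np.array([(0, 0), (10, 0), (10, 4), (0, 4), (0, 0)], dtype=float)
a = np.deg2rad(25.0)
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
tilted = rect @ R.T
level = rotate_to_horizontal(tilted)
```

    After this normalization, near-vertical boundary lines become easy to pick out with a second Hough pass, as the abstract describes.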

  16. Volcanoes Distribution in Linear Segmentation of Mariana Arc

    NASA Astrophysics Data System (ADS)

    Andikagumi, H.; Macpherson, C.; McCaffrey, K. J. W.

    2016-12-01

    A new method has been developed to better describe the distribution pattern of volcanoes within the Mariana Arc. A previous study assumed that the distribution of volcanoes in the Mariana Arc follows a small-circle distribution, reflecting the melting processes in a curved subduction zone. The small-circle fit to the dataset used in that study, comprising 12 mainly subaerial volcanoes from the Smithsonian Institution Global Volcanism Program, was reassessed by us to have a root-mean-square misfit of 2.5 km. The same method applied to a more complete dataset from Baker et al. (2008), consisting of 37 subaerial and submarine volcanoes, resulted in an 8.4 km misfit. However, using the Hough Transform method on the larger dataset, lower misfits of great-circle segments were achieved (3.1 and 3.0 km) for two possible segment combinations. The results indicate that the distribution of volcanoes in the Mariana Arc is better described by a great-circle pattern than by a small circle. Variogram and cross-variogram analysis of volcano spacing and volume shows a spatial correlation between volcanoes at 420 to 500 km, which corresponds to the maximum segmentation length from the Hough Transform (320 km). Further analysis of volcano spacing using the coefficient of variation (Cv) shows a tendency toward non-random distribution, as the Cv values are closer to zero than to one. These distributions are inferred to be associated with the development of normal faults at the back arc, as their Cv values also tend towards zero. To test whether volcano spacing is random, Cv values were simulated using a Monte Carlo method with random input. Only for the southernmost segment could we reject the null hypothesis that volcanoes are randomly spaced, at the 95% confidence level with an estimated probability of 0.007. This result shows that such regularity in volcano spacing rarely arises by chance, so the lithospheric-scale controlling factor should be analysed with a different approach (not with a random number generator). The Sunda Arc, which has been reported to have en echelon segmentation and a larger number of volcanoes, will be studied further to understand the influence of the upper plate on the distribution of volcanoes.

  17. Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision

    PubMed Central

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746

  18. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-07-17

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness.

  19. Randomized Hough transform filter for echo extraction in DLR

    NASA Astrophysics Data System (ADS)

    Liu, Tong; Chen, Hao; Shen, Ming; Gao, Pengqi; Zhao, You

    2016-11-01

    The signal-to-noise ratio (SNR) of debris laser ranging (DLR) data is extremely low, and the valid returns in the DLR range residuals are distributed along a curve over a long observation time. It is therefore hard to extract the signals from noise in the Observed-minus-Calculated (O-C) residuals at low SNR. In order to extract the valid returns autonomously, we propose a new algorithm based on the randomized Hough transform (RHT). We first pre-process the data using a histogram method to find the zonal area that contains all possible signals, which removes a large amount of noise. The data are then processed with the RHT algorithm to find the curve on which the signal points are distributed. A new parameter-update strategy is introduced into the RHT to obtain the best parameters, and we analyze the values of the parameters in the algorithm. We test our algorithm on 10 Hz repetition-rate DLR data from Yunnan Observatory and 100 Hz repetition-rate DLR data from the Graz SLR station. For the 10 Hz DLR data, with a relatively large and similar range gate, we can process the data in real time and extract all the signals autonomously with only a few false readings. For the 100 Hz DLR data, with longer observation times, we autonomously post-process DLR data with 0.9%, 2.7%, 8% and 33% return rates with high reliability. The extracted points contain almost all the signals and a low percentage of noise. Additional noise was added to the 10 Hz DLR data to obtain lower return rates; the valid returns can still be extracted well for DLR data with 0.18% and 0.1% return rates.
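
    For readers unfamiliar with the randomized Hough transform used here, the core idea, sampling minimal point sets and voting for the resulting parameters in a sparse accumulator, can be sketched for straight lines as follows (a toy illustration, not the authors' curve-fitting implementation):

```python
import math
import random
from collections import defaultdict

def rht_line(points, n_iter=2000, q_theta=0.02, q_rho=0.5, seed=1):
    """Randomized Hough transform for lines: sample random point pairs and
    vote for the (theta, rho) of the line through each pair in a sparse
    (dictionary-based) accumulator instead of a full parameter grid."""
    rng = random.Random(seed)
    acc = defaultdict(int)
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2 and y1 == y2:
            continue                                    # degenerate pair
        theta = math.atan2(y2 - y1, x2 - x1) % math.pi  # line direction
        n = theta + math.pi / 2                         # normal direction
        rho = x1 * math.cos(n) + y1 * math.sin(n)       # signed distance to origin
        key = (round(theta / q_theta), round(rho / q_rho))
        acc[key] += 1
    (kt, kr), votes = max(acc.items(), key=lambda kv: kv[1])
    return kt * q_theta, kr * q_rho, votes

# 30 collinear points on y = 2 plus 15 random outliers.
rng = random.Random(0)
pts = [(float(x), 2.0) for x in range(30)]
pts += [(rng.uniform(0.0, 30.0), rng.uniform(0.0, 30.0)) for _ in range(15)]
theta, rho, votes = rht_line(pts)   # theta ~ 0, rho ~ 2
```

    Because only sampled pairs generate votes, the accumulator stays sparse, which is the memory advantage the RHT offers over the classical transform; the DLR application votes for curve parameters rather than line parameters, but the mechanism is the same.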

  20. Traffic Pattern Detection Using the Hough Transformation for Anomaly Detection to Improve Maritime Domain Awareness

    DTIC Science & Technology

    2013-12-01

    Programming code in the Python language used in AIS data preprocessing is contained in Appendix A. The MATLAB programming code used to apply the Hough...described in Chapter III is applied to archived AIS data in this chapter. The implementation of the method, including programming techniques used, is...is contained in the second. To provide a proof of concept for the algorithm described in Chapter III, the PYTHON programming language was used for

  1. Partial fingerprint identification algorithm based on the modified generalized Hough transform on mobile device

    NASA Astrophysics Data System (ADS)

    Qin, Jin; Tang, Siqi; Han, Congying; Guo, Tiande

    2018-04-01

    Partial fingerprint identification technology, which is mainly used in devices with small sensor areas such as cellphones, USB drives and computers, has attracted more attention in recent years owing to its unique advantages. However, owing to the lack of sufficient minutiae points, conventional methods do not perform well in this situation. We propose a new fingerprint matching technique which uses ridges as features to deal with partial fingerprint images, combining a modified generalized Hough transform with a scoring strategy based on machine learning. The algorithm effectively meets the real-time and space-saving requirements of resource-constrained devices. Experiments on an in-house database indicate that the proposed algorithm has excellent performance.

  2. Diaphragm motion quantification in megavoltage cone-beam CT projection images.

    PubMed

    Chen, Mingqing; Siochi, R Alfredo

    2010-05-01

    To quantify diaphragm motion in megavoltage (MV) cone-beam computed tomography (CBCT) projections. User-identified ipsilateral hemidiaphragm apex (IHDA) positions in a full-exhale and a full-inhale frame were used to create bounding rectangles in all other frames of a CBCT scan. The bounding rectangle was enlarged to create a region of interest (ROI). ROI pixels were associated with a cost function: the product of image gradients and a gradient-direction matching function for an ideal hemidiaphragm, determined from 40 training sets. A dynamic Hough transform (DHT) models a hemidiaphragm as a contour made of two parabola segments with a common vertex (the IHDA). The images within the ROIs are transformed into Hough space, where a contour's Hough value is the sum of the cost function over all contour pixels. Dynamic programming finds the optimal trajectory of the common vertex in Hough space subject to motion constraints between frames, and an active contour model further refines the result. Interpolated ray tracing converts the positions to room coordinates. Root-mean-square (RMS) distances between these positions and those resulting from an expert's identification of the IHDA were determined for 21 Siemens MV CBCT scans. Computation time on a 2.66 GHz CPU was 30 s. The average craniocaudal RMS error was 1.38 +/- 0.67 mm. While much larger errors occurred in a few near-sagittal frames of one patient's scans, adjustments to the algorithm constraints corrected them. The DHT-based algorithm can compute IHDA trajectories immediately prior to radiation therapy on a daily basis using localization MV CBCT projection data. This has potential for calibrating external motion surrogates against diaphragm motion.

  3. Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform

    NASA Astrophysics Data System (ADS)

    Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka

    We propose a novel approach for detecting the facial midline (facial symmetry axis) from a frontal face image. The facial midline has several applications, for instance reducing the computational cost of facial feature extraction (FFE) and supporting postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image, as the symmetry axis, using the Merlin-Farber Hough transform (MFHT). A new performance-improvement scheme for midline detection by MFHT is also presented. The main idea of the scheme is to suppress redundant votes in the Hough parameter space by introducing a chain-code representation of the binary edge image. Experimental results on a dataset of 2409 images from the FERET database indicate that the proposed algorithm improves the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.

  4. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    NASA Astrophysics Data System (ADS)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work compares the accuracy of well-known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle-noise filtering and normalization of the object orientation in the image, by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost and random forest. Principal component analysis is used to reduce the dimensionality. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.

  5. Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Amador, Jose J (Inventor)

    2007-01-01

    A method of mid-level pattern recognition provides for a pose invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image and a distance transform utilized to quantify the level of match.
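
    The parameter table in this patent generalizes the classic GHT R-table. For context, here is a minimal sketch of the classic (non-invariant) generalized Hough transform; the shape, reference point and translation are chosen purely for illustration:

```python
import numpy as np

def build_r_table(boundary, ref, n_bins=36):
    """R-table: offsets from each boundary point to the reference point,
    indexed by the quantized local boundary direction."""
    table = {}
    for i, p in enumerate(boundary):
        d = boundary[(i + 1) % len(boundary)] - p        # local direction
        b = int((np.arctan2(d[1], d[0]) % np.pi) / np.pi * n_bins) % n_bins
        table.setdefault(b, []).append(ref - p)
    return table

def ght_detect(boundary, table, shape, n_bins=36):
    """Each boundary point votes for candidate reference points via the R-table."""
    acc = np.zeros(shape, dtype=int)
    for i, p in enumerate(boundary):
        d = boundary[(i + 1) % len(boundary)] - p
        b = int((np.arctan2(d[1], d[0]) % np.pi) / np.pi * n_bins) % n_bins
        for off in table.get(b, []):
            x, y = (p + off).astype(int)
            if 0 <= x < shape[0] and 0 <= y < shape[1]:
                acc[x, y] += 1
    return acc

# Template: a 10 x 10 square outline, reference point at its centre (4.5, 4.5).
sq = np.array([(x, 0) for x in range(9)] + [(9, y) for y in range(9)]
              + [(x, 9) for x in range(9, 0, -1)]
              + [(0, y) for y in range(9, 0, -1)], dtype=float)
table = build_r_table(sq, ref=np.array([4.5, 4.5]))
# The same square translated by (20, 15): the accumulator peaks at the new centre.
acc = ght_detect(sq + np.array([20.0, 15.0]), table, shape=(60, 60))
peak = np.unravel_index(acc.argmax(), acc.shape)
```

    This classic table is indexed by absolute direction and stores absolute offsets, so it breaks under scaling and rotation; the invention's contribution is to parametrize point pairs relative to two reference points so the table becomes pose-invariant.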

  6. Hough transform for human action recognition

    NASA Astrophysics Data System (ADS)

    Siemon, Mia S. N.

    2016-09-01

    Nowadays, the demand for computer analysis, especially in team sports, continues to grow drastically. More and more decisions are made by electronic devices to make life 'easier' in a given context. Application areas already exist in sports in which critical situations are handled by means of digital software. This paper evaluates and introduces the foundation needed to develop a concept similar to 'Hawk-Eye', a decision-making program that evaluates the impact of a ball with respect to a target line, and to apply it to the sport of volleyball. The pattern recognition process is performed here by means of the mathematical model of the Hough transform, which is able to identify relevant lines and circles in the image so that they can later be used for the evaluation of the image and the decision-making process.

  7. Semi-automated identification of cones in the human retina using circle Hough transform

    PubMed Central

    Bukowska, Danuta M.; Chew, Avenell L.; Huynh, Emily; Kashani, Irwin; Wan, Sue Ling; Wan, Pak Ming; Chen, Fred K

    2015-01-01

    A large number of human retinal diseases are characterized by a progressive loss of cones, the photoreceptors critical for visual acuity and color perception. Adaptive Optics (AO) imaging presents a potential method to study these cells in vivo. However, AO imaging in ophthalmology is a relatively new phenomenon, and quantitative analysis of these images remains difficult and tedious using manual methods. This paper illustrates a novel semi-automated quantitative technique enabling registration of AO images to macular landmarks, cone counting, and quantification of cone radius at specified distances from the foveal center. The new cone counting approach employs the circle Hough transform (cHT) and is compared to automated counting methods, as well as arbitrated manual cone identification. We explore the impact of varying the circle detection parameter on the validity of cHT cone counting and discuss the potential role of using this algorithm in detecting both cones and rods separately. PMID:26713186
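
    The circle Hough transform (cHT) used in this work makes each candidate pixel vote for all circle centres consistent with it, across a range of candidate radii. A minimal sketch (illustrative only; real cone detection operates on AO image gradients, not ideal point sets):

```python
import numpy as np

def circle_hough(points, radii, grid, n_ang=256):
    """Circle Hough transform: every point votes for all candidate centres
    lying at distance r from it, for each candidate radius r."""
    acc = np.zeros((len(radii), grid, grid), dtype=int)
    angles = np.linspace(0.0, 2 * np.pi, n_ang, endpoint=False)
    cos_a, sin_a = np.cos(angles), np.sin(angles)
    for x, y in points:
        for ri, r in enumerate(radii):
            cx = np.rint(x - r * cos_a).astype(int)
            cy = np.rint(y - r * sin_a).astype(int)
            ok = (cx >= 0) & (cx < grid) & (cy >= 0) & (cy < grid)
            np.add.at(acc[ri], (cx[ok], cy[ok]), 1)  # unbuffered repeated votes
    return acc

# 40 points sampled on a circle of radius 8 centred at (20, 25).
t = np.linspace(0.0, 2 * np.pi, 40, endpoint=False)
pts = np.c_[20 + 8 * np.cos(t), 25 + 8 * np.sin(t)]
acc = circle_hough(pts, radii=[6, 8, 10], grid=50)
ri, cx, cy = np.unravel_index(acc.argmax(), acc.shape)  # best radius and centre
```

    The "circle detection parameter" explored in the paper plays the same role as the candidate radius range here: too narrow a range misses cones of atypical size, too wide a range admits spurious peaks.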

  8. A Hough Transform Global Probabilistic Approach to Multiple-Subject Diffusion MRI Tractography

    PubMed Central

    Aganj, Iman; Lenglet, Christophe; Jahanshad, Neda; Yacoub, Essa; Harel, Noam; Thompson, Paul M.; Sapiro, Guillermo

    2011-01-01

    A global probabilistic fiber tracking approach based on the voting process provided by the Hough transform is introduced in this work. The proposed framework tests candidate 3D curves in the volume, assigning to each one a score computed from the diffusion images, and then selects the curves with the highest scores as the potential anatomical connections. The algorithm avoids local minima by performing an exhaustive search at the desired resolution. The technique is easily extended to multiple subjects, considering a single representative volume where the registered high-angular resolution diffusion images (HARDI) from all the subjects are non-linearly combined, thereby obtaining population-representative tracts. The tractography algorithm is run only once for the multiple subjects, and no tract alignment is necessary. We present experimental results on HARDI volumes, ranging from simulated and 1.5T physical phantoms to 7T and 4T human brain and 7T monkey brain datasets. PMID:21376655

  9. A novel method about detecting missing holes on the motor carling

    NASA Astrophysics Data System (ADS)

    Xu, Hongsheng; Tan, Hao; Li, Guirong

    2018-03-01

    After a deep analysis of how an image processing system can detect missing holes on the motor carling, we design the whole system in accordance with the actual production conditions of the motor carling. We then describe the system's hardware and software in detail, introducing their general functions, and discuss the design of the hardware and software modules and the theory behind them. The procedures for selecting the image region to be processed, for edge detection, and for circle detection with the randomized Hough transform are explained in detail. Finally, test results for the system, obtained both in the laboratory and in the factory, are presented.

  10. Incoherent optical generalized Hough transform: pattern recognition and feature extraction applications

    NASA Astrophysics Data System (ADS)

    Fernández, Ariel; Ferrari, José A.

    2017-05-01

    Pattern recognition and feature extraction are image processing applications of great interest in defect inspection and robot vision, among others. In comparison to purely digital methods, the attractiveness of optical processors for pattern recognition lies in their highly parallel operation and real-time processing capability. This work presents an optical implementation of the generalized Hough transform (GHT), a well-established technique for recognition of geometrical features in binary images. Detection of a geometric feature under the GHT is accomplished by mapping the original image to an accumulator space; the large computational requirements of this mapping make the optical implementation an attractive alternative to digital-only methods. We explore an optical setup where the transformation is obtained and the size and orientation parameters can be controlled, allowing for dynamic scale- and orientation-variant pattern recognition. A compact system for the above purposes results from the use of an electrically tunable lens for scale control and a pupil mask implemented on a high-contrast spatial light modulator for orientation/shape variation of the template. Real-time operation can also be achieved. In addition, by thresholding the GHT and optically inverse-transforming, the previously detected features of interest can be extracted.

  11. Geometry-based populated chessboard recognition

    NASA Astrophysics Data System (ADS)

    Xie, Youye; Tang, Gongguo; Hoff, William

    2018-04-01

    Chessboards are commonly used to calibrate cameras, and many robust methods have been developed to recognize the unoccupied boards. However, when the chessboard is populated with chess pieces, such as during an actual game, the problem of recognizing the board is much harder. Challenges include occlusion caused by the chess pieces, the presence of outlier lines and low viewing angles of the chessboard. In this paper, we present a novel approach to address the above challenges and recognize the chessboard. The Canny edge detector and Hough transform are used to capture all possible lines in the scene. The k-means clustering and a k-nearest-neighbors inspired algorithm are applied to cluster and reject the outlier lines based on their Euclidean distances to the nearest neighbors in a scaled Hough transform space. Finally, based on prior knowledge of the chessboard structure, a geometric constraint is used to find the correspondences between image lines and the lines on the chessboard through the homography transformation. The proposed algorithm works for a wide range of the operating angles and achieves high accuracy in experiments.

  12. Laser Spot Tracking Based on Modified Circular Hough Transform and Motion Pattern Analysis

    PubMed Central

    Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan

    2014-01-01

    Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas–Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development. PMID:25350502

  13. Automated Spatiotemporal Analysis of Fibrils and Coronal Rain Using the Rolling Hough Transform

    NASA Astrophysics Data System (ADS)

    Schad, Thomas

    2017-09-01

    A technique is presented that automates the direction characterization of curvilinear features in multidimensional solar imaging datasets. It is an extension of the Rolling Hough Transform (RHT) technique presented by Clark, Peek, and Putman ( Astrophys. J. 789, 82, 2014), and it excels at rapid quantification of spatial and spatiotemporal feature orientation even for applications with a low signal-to-noise ratio. It operates on a pixel-by-pixel basis within a dataset and reliably quantifies orientation even for locations not centered on a feature ridge, which is used here to derive a quasi-continuous map of the chromospheric fine-structure projection angle. For time-series analysis, a procedure is developed that uses a hierarchical application of the RHT to automatically derive the apparent motion of coronal rain observed off-limb. Essential to the success of this technique is the formulation presented in this article for the RHT error analysis as it provides a means to properly filter results.

  14. A Hough transform global probabilistic approach to multiple-subject diffusion MRI tractography.

    PubMed

    Aganj, Iman; Lenglet, Christophe; Jahanshad, Neda; Yacoub, Essa; Harel, Noam; Thompson, Paul M; Sapiro, Guillermo

    2011-08-01

    A global probabilistic fiber tracking approach based on the voting process provided by the Hough transform is introduced in this work. The proposed framework tests candidate 3D curves in the volume, assigning to each one a score computed from the diffusion images, and then selects the curves with the highest scores as the potential anatomical connections. The algorithm avoids local minima by performing an exhaustive search at the desired resolution. The technique is easily extended to multiple subjects, considering a single representative volume where the registered high-angular resolution diffusion images (HARDI) from all the subjects are non-linearly combined, thereby obtaining population-representative tracts. The tractography algorithm is run only once for the multiple subjects, and no tract alignment is necessary. We present experimental results on HARDI volumes, ranging from simulated and 1.5T physical phantoms to 7T and 4T human brain and 7T monkey brain datasets. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Laser spot tracking based on modified circular Hough transform and motion pattern analysis.

    PubMed

    Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan

    2014-10-27

    Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas-Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development.
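
As a rough illustration of the circular-Hough stage, a fixed-radius center-voting accumulator might look like the sketch below. The paper's modifications and the Lucas-Kanade motion stage are not reproduced; the synthetic "spot rim" is invented for illustration.

```python
import numpy as np

def hough_circle_center(edges, radius):
    """Classical circular Hough voting for a known radius: every edge
    pixel votes for all candidate centers at distance `radius`, and
    the accumulator peak is the estimated center."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for y, x in zip(*np.nonzero(edges)):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)

# synthetic spot rim: circle of radius 8 centered at (20, 25)
edges = np.zeros((50, 50), int)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.round(20 + 8 * np.sin(t)).astype(int),
      np.round(25 + 8 * np.cos(t)).astype(int)] = 1
cy, cx = hough_circle_center(edges, 8)
```

`np.add.at` is used for the voting because plain fancy-indexed `+=` would drop duplicate votes from a single edge point.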

  16. Generalized Hough Transform for Object Classification in the Maritime Domain

    DTIC Science & Technology

    2015-12-01

    Excerpts from the thesis: the work addresses the computation-time and memory-storage problems of the GHT, noting that neural networks have been used to provide excellent solutions to real-world problems in many domains. Sections listed in the table of contents include the thesis objective, related work, significant contributions, and recommendations for future work.

  17. Linear- and Repetitive Feature Detection Within Remotely Sensed Imagery

    DTIC Science & Technology

    2017-04-01

    Excerpts from the report: the methods are applicable to Python or other programming languages with image-processing capabilities. One methodology uses classification machine learning on remotely sensed images in panchromatic or true-color formats; the image-processing techniques employed include Hough transforms, machine learning, and data fusion. Sections listed in the table of contents include data fusion and context-based processing.

  18. Automatic extraction of building boundaries using aerial LiDAR data

    NASA Astrophysics Data System (ADS)

    Wang, Ruisheng; Hu, Yong; Wu, Huayi; Wang, Jian

    2016-01-01

    Building extraction is one of the main research topics of the photogrammetry community. This paper presents automatic algorithms for building boundary extraction from aerial LiDAR data. First, by segmenting height information generated from the LiDAR data, the outer boundaries of aboveground objects are expressed as closed chains of oriented edge pixels. Then, building boundaries are distinguished from nonbuilding ones by evaluating their shapes. The candidate building boundaries are reconstructed as rectangles or regular polygons by applying new algorithms, following the hypothesis-verification paradigm. These algorithms include constrained searching in Hough space, enhanced Hough transformation, and a sequential linking technique. The experimental results show that the proposed algorithms successfully extract building boundaries at rates of 97%, 85%, and 92% for three LiDAR datasets with varying scene complexities.

  19. Cadastral Map Assembling Using Generalized Hough Transformation

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka

    There are numerous cadastral maps generated by past land surveys. The raster digitization of these paper maps is in progress. For effective and efficient use of these maps, we have to assemble the set of maps to make them superimposable on other geographic information in a GIS. The problem can be seen as a complex jigsaw puzzle whose pieces are the cadastral sections extracted from the maps. We present an automatic solution to this geographic jigsaw puzzle, based on the generalized Hough transformation, which detects the longest common boundary between every piece and its neighbors. The experiments have been conducted using the map of Mie Prefecture, Japan, and the French cadastral map. The results of the experiments with the French cadastral maps showed that the proposed method, which includes a flood-filling procedure for internal areas and detection and normalization of the north arrow direction, is suitable for assembling the cadastral map. The final goal of the process is to integrate every piece of the puzzle into a national geographic reference frame and database.

  20. Sign language indexation within the MPEG-7 framework

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Preda, Marius; Preteux, Francoise J.

    1999-06-01

    In this paper, we address the issue of sign language indexation/recognition. Existing tools, like on-line Web dictionaries or other educational applications, make exclusive use of textual annotations. However, keyword indexing schemes have strong limitations due to the ambiguity of natural language and to the huge effort needed to manually annotate a large amount of data. In order to overcome these drawbacks, we tackle the sign language indexation issue within the MPEG-7 framework and propose an approach based on linguistic properties and characteristics of sign language. The method developed introduces the concept of an over-time-stable hand configuration instantiated on natural or synthetic prototypes. The prototypes are indexed by means of a shape descriptor which is defined as a translation-, rotation-, and scale-invariant Hough transform. A very compact representation is available by considering the Fourier transform of the Hough coefficients. Such an approach has been applied to two data sets consisting of 'Letters' and 'Words', respectively. The accuracy and robustness of the results are discussed and a complete sign language description schema is proposed.

  1. Dim target trajectory-associated detection in bright earth limb background

    NASA Astrophysics Data System (ADS)

    Chen, Penghui; Xu, Xiaojian; He, Xiaoyu; Jiang, Yuesong

    2015-09-01

    The intensive emission of the earth limb in the field of view of sensors contributes much to the observation images. Due to the low signal-to-noise ratio (SNR), it is a challenge to detect small targets against the earth limb background, especially point-like targets in a single frame. To improve target detection, track-before-detect (TBD) based on the frame sequence is performed. In this paper, a new technique is proposed to determine target-associated trajectories, which jointly carries out background removal, maximum value projection (MVP), and the Hough transform. The background of the bright earth limb in the observation images is removed according to its profile characteristics. For a moving target, the corresponding pixels in the MVP image shift approximately regularly in time sequence. The target trajectory is then determined by the Hough transform according to the pixel characteristics of the target versus the clutter and noise. Compared with traditional frame-by-frame methods, determining associated trajectories from the MVP reduces the computational load. Numerical simulations are presented to demonstrate the effectiveness of the proposed approach.
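
The maximum value projection step can be illustrated with a minimal sketch (frame contents are invented for illustration; background removal and the Hough stage are not shown):

```python
import numpy as np

def max_value_projection(frames):
    """Per-pixel maximum over a frame sequence: a target moving across
    frames leaves its whole trajectory in a single image, which can
    then be handed to a line Hough transform."""
    return np.max(np.stack(frames), axis=0)

# a point target drifting along the diagonal over three frames
frames = []
for i in range(3):
    f = np.zeros((8, 8))
    f[i, i] = 5.0
    frames.append(f)
mvp = max_value_projection(frames)
```

The trajectory pixels of all three frames now lie on one line in `mvp`, so a single Hough transform replaces three frame-by-frame searches.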

  2. Pattern recognition and feature extraction with an optical Hough transform

    NASA Astrophysics Data System (ADS)

    Fernández, Ariel

    2016-09-01

    Pattern recognition and localization along with feature extraction are image processing applications of great interest in defect inspection and robot vision, among others. In comparison to purely digital methods, the attractiveness of optical processors for pattern recognition lies in their highly parallel operation and real-time processing capability. This work presents an optical implementation of the generalized Hough transform (GHT), a well-established technique for the recognition of geometrical features in binary images. Detection of a geometric feature under the GHT is accomplished by mapping the original image to an accumulator space; the large computational requirements for this mapping make the optical implementation an attractive alternative to digital-only methods. Starting from the integral representation of the GHT, it is possible to devise an optical setup where the transformation is obtained and the size and orientation parameters can be controlled, allowing for dynamic scale- and orientation-variant pattern recognition. A compact system for the above purposes results from the use of an electrically tunable lens for scale control and a rotating pupil mask for orientation variation, implemented on a high-contrast spatial light modulator (SLM). Real-time operation (limited by the frame rate of the device used to capture the GHT) can also be achieved, allowing for the processing of video sequences. Moreover, by thresholding the GHT (with the aid of another SLM) and inverse transforming (which is achieved optically in the incoherent system under an appropriate focusing setting), the previously detected features of interest can be extracted.

  3. Evaluation of GPUs as a level-1 track trigger for the High-Luminosity LHC

    NASA Astrophysics Data System (ADS)

    Mohr, H.; Dritschler, T.; Ardila, L. E.; Balzer, M.; Caselle, M.; Chilingaryan, S.; Kopmann, A.; Rota, L.; Schuh, T.; Vogelgesang, M.; Weber, M.

    2017-04-01

    In this work, we investigate the use of GPUs as a way of realizing a low-latency, high-throughput track trigger, using CMS as a showcase example. The CMS detector at the Large Hadron Collider (LHC) will undergo a major upgrade after the long shutdown from 2024 to 2026, when it will enter the high-luminosity era. During this upgrade, the silicon tracker will have to be completely replaced. In the High Luminosity operation mode, luminosities of 5-7 × 10³⁴ cm⁻²s⁻¹ and pileups averaging 140 events, with a maximum of up to 200 events, will be reached. These changes will require a major update of the triggering system. The demonstrated systems rely on dedicated hardware such as associative-memory ASICs and FPGAs. We investigate the use of GPUs as an alternative way of realizing the requirements of the L1 track trigger. To this end we implemented a Hough transform track-finding step on GPUs and established a low-latency RDMA connection using the PCIe bus. To showcase the benefits of floating-point operations, made possible by the use of GPUs, we present a modified algorithm. It uses hexagonal bins for the parameter space and leads to a more truthful representation of the possible track parameters of the individual hits in Hough space. This leads to fewer duplicate candidates and fewer fake track candidates compared to the regular approach. With data-transfer latencies of 2 μs and processing times for the Hough transformation as low as 3.6 μs, we can show that latencies are not as critical as expected. However, computing throughput proves to be challenging due to hardware limitations.
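
A plain (theta, r) line Hough over hit coordinates, of the kind such a track-finding step builds on, can be sketched as follows. Square bins are used here, not the paper's hexagonal binning, and the hit list is invented for illustration:

```python
import numpy as np

def hough_line_peak(points, n_theta=180, n_r=100, r_max=50.0):
    """Straight-track finding by a (theta, r) line Hough transform over
    hit coordinates, using r = x*cos(theta) + y*sin(theta). Returns
    the (theta, r) of the strongest accumulator bin."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_r))
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((r + r_max) / (2 * r_max) * (n_r - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_r)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[ti], ri * 2 * r_max / (n_r - 1) - r_max

# hits lying on the horizontal line y = 10
theta, r = hough_line_peak([(x, 10.0) for x in range(-20, 21, 2)])
```

The recovered parameters are only as precise as the bin widths, which is exactly why binning choices (such as the hexagonal cells in the paper) matter for duplicate and fake suppression.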

  4. Method of center localization for objects containing concentric arcs

    NASA Astrophysics Data System (ADS)

    Kuznetsova, Elena G.; Shvets, Evgeny A.; Nikolaev, Dmitry P.

    2015-02-01

    This paper proposes a method for automatic center location of objects containing concentric arcs. The method utilizes structure tensor analysis and a voting scheme optimized with the Fast Hough Transform. Two applications of the proposed method are considered: (i) wheel tracking in a video-based system for automatic vehicle classification and (ii) tree growth ring analysis on a tree cross-cut image.
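
The structure-tensor stage of such a pipeline can be sketched as below; the Fast Hough Transform voting step is not shown, and summing the tensor over the whole image is our simplification (per-pixel tensors would normally feed the voting):

```python
import numpy as np

def structure_tensor_orientation(img):
    """Dominant gradient orientation (radians) from the summed
    structure tensor J = [[Jxx, Jxy], [Jxy, Jyy]] of image gradients."""
    gy, gx = np.gradient(img.astype(float))
    jxx = (gx * gx).sum()
    jxy = (gx * gy).sum()
    jyy = (gy * gy).sum()
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)

# intensity ramp along x: gradients point along the x axis (angle 0)
ramp_x = np.tile(np.arange(10.0), (10, 1))
```

For concentric arcs, each local orientation defines a normal line through the common center, which is where the Hough-style voting takes over.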

  5. Detection of pavement cracks using tiled fuzzy Hough transform

    NASA Astrophysics Data System (ADS)

    Mathavan, Senthan; Vaheesan, Kanapathippillai; Kumar, Akash; Chandrakumar, Chanjief; Kamal, Khurram; Rahman, Mujib; Stonecliffe-Jones, Martyn

    2017-09-01

    Surface cracks can be the bellwether of the failure of a road. Hence, crack detection is indispensable for the condition monitoring and quality control of road surfaces. Pavement images have high levels of intensity variation and texture content, which makes crack detection generally difficult. Moreover, shallow cracks have very low contrast, making their detection difficult. Therefore, studies on pavement crack detection remain active even after years of research. The fuzzy Hough transform is employed, for the first time, to detect cracks in pavement images. Careful consideration is given to the fact that cracks consist of near-straight segments embedded in a surface of considerable texture. In this regard, the fuzzy part of the algorithm tackles segments that are not perfectly straight, while tiled detection helps reduce the contribution of texture and noise pixels to the accumulator array. The proposed algorithm is compared against a state-of-the-art algorithm on a number of crack datasets, demonstrating its strengths. Precision and recall values of more than 75% are obtained on different image sets of varying textures and other effects, captured by industrial pavement imagers. The paper also recommends numerical values for the parameters used in the proposed method.

  6. Motion estimation of magnetic resonance cardiac images using the Wigner-Ville and Hough transforms

    NASA Astrophysics Data System (ADS)

    Carranza, N.; Cristóbal, G.; Bayerl, P.; Neumann, H.

    2007-12-01

    Myocardial motion analysis and quantification are of utmost importance for analyzing contractile heart abnormalities, which can be a symptom of coronary artery disease. A fundamental problem in processing sequences of images is the computation of the optical flow, which is an approximation of the real image motion. This paper presents a new algorithm for optical flow estimation based on a spatiotemporal-frequency (STF) approach. More specifically, it relies on the computation of the Wigner-Ville distribution (WVD) and the Hough Transform (HT) of the motion sequences. The latter is a well-known line and shape detection method that is highly robust against incomplete data and noise. The rationale for using the HT in this context is that it provides a value of the displacement field from the STF representation. In addition, a probabilistic approach based on Gaussian mixtures has been implemented in order to improve the accuracy of the motion detection. Experimental results in the case of synthetic sequences are compared with an implementation of the variational technique for local and global motion estimation, where it is shown that the results are accurate and robust to noise degradations. Results obtained with real cardiac magnetic resonance images are also presented.

  7. A Computer Vision Approach to Identify Einstein Rings and Arcs

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Hsiu

    2017-03-01

    Einstein rings are rare gems of strong lensing phenomena; the ring images can be used to probe the underlying lens gravitational potential at all position angles, tightly constraining the lens mass profile. In addition, the magnified images also enable us to probe high-z galaxies with enhanced resolution and signal-to-noise ratios. However, only a handful of Einstein rings have been reported, either from serendipitous discoveries or from visual inspections of hundreds of thousands of massive galaxies or galaxy clusters. In the era of large sky surveys, an automated approach to identify ring patterns in the big data to come is in high demand. Here, we present an Einstein ring recognition approach based on computer vision techniques. The workhorse is the circle Hough transform, which recognises circular patterns or arcs in the images. We propose a two-tier approach by first pre-selecting massive galaxies associated with multiple blue objects as possible lenses, then using the Hough transform to identify circular patterns. As a proof of concept, we apply our approach to SDSS, with high completeness, albeit with low purity. We also apply our approach to other lenses in the DES, HSC-SSP, and UltraVISTA surveys, illustrating the versatility of our approach.

  8. Extracting contours of oval-shaped objects by Hough transform and minimal path algorithms

    NASA Astrophysics Data System (ADS)

    Tleis, Mohamed; Verbeek, Fons J.

    2014-04-01

    Circular and oval-like objects are very common in cell biology and microbiology. These objects need to be analyzed, and to that end, digitized images from the microscope are used so as to arrive at an automated analysis pipeline. It is essential to detect all the objects in an image as well as to extract the exact contour of each individual object. In this manner it becomes possible to perform measurements on these objects, i.e. shape and texture features. Our measurement objective is achieved by probing contour detection through dynamic programming. In this paper we describe a method that uses the Hough transform and two minimal path algorithms to detect contours of (ovoid-like) objects. These algorithms are based on an existing grey-weighted distance transform and a new algorithm to extract the circular shortest path in an image. The methods are tested on an artificial dataset of 1000 images, with an F1-score of 0.972. In a case study with yeast cells, contours from our methods were compared with another solution using Pratt's figure of merit. Results indicate that our methods were more precise based on a comparison with a ground-truth dataset. As far as yeast cells are concerned, the segmentation and measurement results will enable, in future work, retrieval of information from different developmental stages of the cell using complex features.

  9. Logo recognition in video by line profile classification

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Hanjalic, Alan

    2003-12-01

    We present an extension to earlier work on recognizing logos in video stills. The logo instances considered here are rigid planar objects observed at a distance in the scene, so the possible perspective transformation can be approximated by an affine transformation. For this reason we can classify the logos by matching (invariant) line profiles. We enhance our previous method by considering multiple line profiles instead of a single profile of the logo. The positions of the lines are based on maxima in the Hough transform space of the segmented logo foreground image. Experiments are performed on MPEG1 sport video sequences to show the performance of the proposed method.

  10. Image-based spectroscopy for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Bachmakov, Eduard; Molina, Carolyn; Wynne, Rosalind

    2014-03-01

    An image-processing algorithm for use with a nano-featured spectrometer chemical agent detection configuration is presented. The spectrometer chip acquired from Nano-Optic Devices™ can reduce the size of the spectrometer down to that of a coin. The nanospectrometer chip was aligned with a 635 nm laser source, objective lenses, and a CCD camera. The images from the nanospectrometer chip were collected and compared to reference spectra. Random background noise contributions were isolated and removed from the diffraction pattern image analysis via a threshold filter. Results are provided for the image-based detection of the diffraction pattern produced by the nanospectrometer. The featured PCF spectrometer has the potential to measure optical absorption spectra in order to detect trace amounts of contaminants. MATLAB tools allow for implementation of intelligent, automatic detection of the relevant sub-patterns in the diffraction patterns and subsequent extraction of the parameters using region-detection algorithms such as the generalized Hough transform, which detects specific shapes within the image. This transform is a method for detecting curves by exploiting the duality between points on a curve and parameters of that curve. By employing this image-processing technique, future sensor systems will benefit from new applications such as unsupervised environmental monitoring of air or water quality.

  11. SU-G-IeP1-01: A Novel MRI Post-Processing Algorithm for Visualization of the Prostate LDR Brachytherapy Seeds and Calcifications Based On B0 Field Inhomogeneity Correction and Hough Transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nosrati, R; Sunnybrook Health Sciences Centre, Toronto, Ontario; Soliman, A

    Purpose: This study aims at developing an MRI-only workflow for post-implant dosimetry of prostate LDR brachytherapy seeds. The specific goal here is to develop a post-processing algorithm to produce positive contrast for the seeds and prostatic calcifications and differentiate between them on MR images. Methods: An agar-based phantom incorporating four dummy seeds (I-125) and five calcifications of different sizes (from sheep cortical bone) was constructed. Seeds were placed arbitrarily in the coronal plane. The phantom was scanned with a 3T Philips Achieva MR scanner using an 8-channel head coil array. Multi-echo turbo spin echo (ME-TSE) and multi-echo gradient recalled echo (ME-GRE) sequences were acquired. Due to minimal susceptibility artifacts around seeds, the ME-GRE sequence (flip angle=15; TR/TE=20/2.3/2.3; resolution=0.7×0.7×2 mm³) was further processed. The induced field inhomogeneity due to the presence of titanium-encapsulated seeds was corrected using a B0 field map. The B0 map was calculated from the ME-GRE sequence as the phase difference at two different echo times. Initially, the product of the first echo and the B0 map was calculated. The features corresponding to the seeds were then extracted in three steps: 1) the edge pixels were isolated using the "Prewitt" operator; 2) the Hough transform was employed to detect ellipses approximately matching the dimensions of the seeds; and 3) at the position and orientation of each detected ellipse, an ellipse was drawn on the B0-corrected image. Results: The proposed B0-correction process produced positive contrast for the seeds and calcifications. The Hough transform based on the Prewitt edge operator successfully identified all the seeds according to their ellipsoidal shape and dimensions in the edge image. Conclusion: The proposed post-processing algorithm successfully visualized the seeds and calcifications with positive contrast and differentiated between them according to their shapes. Further assessments on more realistic phantoms and patient studies are required to validate the outcome.
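
The phase-difference B0-map computation described in the Methods can be sketched as follows. This is a toy version that ignores phase unwrapping; the TE values mirror those quoted above, and the test voxel's off-resonance is invented for illustration.

```python
import numpy as np

def b0_map_hz(phase1, phase2, te1, te2):
    """Per-voxel B0 off-resonance (Hz) from the phase images of two
    gradient-echo readouts: dB0 = (phi2 - phi1) / (2*pi*(TE2 - TE1))."""
    return (phase2 - phase1) / (2.0 * np.pi * (te2 - te1))

# a voxel 50 Hz off resonance accrues extra phase between the two echoes
te1, te2 = 2.3e-3, 4.6e-3
phase1 = np.zeros((2, 2))
phase2 = phase1 + 2.0 * np.pi * 50.0 * (te2 - te1)
field = b0_map_hz(phase1, phase2, te1, te2)
```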

  12. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu Wu; Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario N6A 5K8; Yuchi Ming

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men, with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal-threshold-based analysis of the intensity probability distribution. The overall efficiency is improved by implementing a coarse-fine searching strategy. The proposed method was validated on tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to manual segmentation. The robustness of the proposed approach was tested by varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of the needle segmentation algorithm was 2 s for a 3D TRUS image with a size of 264 × 376 × 630 voxels. Conclusions: The proposed needle segmentation algorithm is accurate, robust, and suitable for 3D TRUS guided prostate transperineal therapy.

  13. Geometric shapes inversion method of space targets by ISAR image segmentation

    NASA Astrophysics Data System (ADS)

    Huo, Chao-ying; Xing, Xiao-yu; Yin, Hong-cheng; Li, Chen-guang; Zeng, Xiang-yun; Xu, Gao-gui

    2017-11-01

    The geometric shape of a target is an effective characteristic in the process of space target recognition. This paper proposes a method for shape inversion of space targets based on component segmentation of ISAR images. Radon transformation, Hough transformation, K-means clustering, and triangulation are introduced into the ISAR image processing. First, we use the Radon transformation and edge detection to extract the space target's main body spindle and solar panel spindle from the ISAR image. Then the target's main body, solar panels, and rectangular and circular antennas are segmented from the ISAR image based on image detection theory. Finally, the sizes of all structural components are computed. The effectiveness of this method is verified using simulation data of typical targets.

  14. Image processing tool for automatic feature recognition and quantification

    DOEpatents

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
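
A minimal "edge detector followed by a Hough transform" front end might begin with a Prewitt-style edge map like the sketch below. The threshold, kernels, and test image are illustrative, not the patent's specification.

```python
import numpy as np

def prewitt_edges(img, thresh):
    """Boolean edge map via 3x3 Prewitt operators: gradient magnitude
    above `thresh` marks an edge pixel (borders left unmarked)."""
    kx = np.array([[-1, 0, 1]] * 3, float)   # horizontal gradient kernel
    ky = kx.T                                 # vertical gradient kernel
    h, w = img.shape
    out = np.zeros((h, w), bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot((win * kx).sum(), (win * ky).sum()) > thresh
    return out

# a vertical intensity step produces edges along the boundary columns
img = np.zeros((5, 8))
img[:, 4:] = 1.0
edges = prewitt_edges(img, 1.0)
```

The resulting boolean map is exactly the kind of input a line or shape Hough transform expects in the pipeline described above.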

  15. Efficient iris texture analysis method based on Gabor ordinal measures

    NASA Astrophysics Data System (ADS)

    Tajouri, Imen; Aydi, Walid; Ghorbel, Ahmed; Masmoudi, Nouri

    2017-07-01

    With the remarkably increasing interest directed to the security dimension, the iris recognition process is considered one of the most versatile techniques for biometric identification and authentication. This is mainly due to every individual's unique iris texture. An efficient approach to the feature extraction process is proposed. First, the iris zigzag "collarette" is extracted from the rest of the image by means of the circular Hough transform, as it includes the most significant regions of the iris texture. Second, the linear Hough transform is used for eyelid detection while the median filter is applied for eyelash removal. Then, a technique combining the richness of Gabor features and the compactness of ordinal measures is implemented for the feature extraction process, so that a discriminative feature representation for every individual can be achieved. Subsequently, the modified Hamming distance is used for the matching process. The proposed procedure proves reliable compared to some state-of-the-art approaches, with recognition rates of 99.98%, 98.12%, and 95.02% on the CASIAV1.0, CASIAV3.0, and IIT Delhi V1 iris databases, respectively.
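
The matching stage can be illustrated with the standard masked (fractional) Hamming distance; the paper's modified variant may differ from this sketch, and the toy codes below are invented for illustration.

```python
import numpy as np

def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits valid in both noise masks (e.g. bits not
    occluded by eyelids or eyelashes)."""
    valid = mask_a & mask_b
    return np.count_nonzero((code_a ^ code_b) & valid) / np.count_nonzero(valid)

# two 8-bit toy codes differing in two positions
a = np.array([1, 0, 1, 1, 0, 0, 1, 1], bool)
b = np.array([1, 0, 0, 1, 0, 1, 1, 1], bool)
all_valid = np.ones(8, bool)
```

A small distance (typically below some decision threshold) indicates the two codes come from the same iris.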

  16. Rectification of elemental image set and extraction of lens lattice by projective image transformation in integral imaging.

    PubMed

    Hong, Keehoon; Hong, Jisoo; Jung, Jae-Hyun; Park, Jae-Hyeung; Lee, Byoungho

    2010-05-24

    We propose a new method for rectifying geometrical distortion in the elemental image set and extracting accurate lens lattice lines by projective image transformation. The information about distortion in the acquired elemental image set is found by a Hough transform algorithm. With this initial information about the distortions, the acquired elemental image set is rectified automatically, without prior knowledge of the characteristics of the pickup system, by a stratified image transformation procedure. Computer-generated elemental image sets with deliberate distortion are used to verify the proposed rectification method. Experimentally captured elemental image sets are optically reconstructed before and after rectification by the proposed method. The experimental results support the validity of the proposed method, with high accuracy of image rectification and lattice extraction.
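
The projective image transformation underlying such rectification can be sketched as a four-point direct linear transform (a generic DLT, not the paper's stratified procedure; the point correspondences are invented for illustration):

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: the 3x3 projective matrix mapping four
    source points to four destination points, recovered as the SVD
    null vector of the stacked constraint equations."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# a pure scaling by 2 should be recovered exactly
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0), (2, 2), (0, 2)]
H = homography(src, dst)
```

In the rectification setting, the source points would be distorted lattice corners found via the Hough transform and the destination points their ideal regular-grid positions.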

  17. Seam tracking with adaptive image capture for fine-tuning of a high power laser welding process

    NASA Astrophysics Data System (ADS)

    Lahdenoja, Olli; Säntti, Tero; Laiho, Mika; Paasio, Ari; Poikonen, Jonne K.

    2015-02-01

    This paper presents the development of methods for real-time fine-tuning of a high-power laser welding process for thick steel using a compact smart camera system. When performing welding in a butt-joint configuration, the laser beam's location needs to be adjusted exactly according to the seam line in order to allow the injected energy to be absorbed uniformly into both steel sheets. In this paper, on-line extraction of seam parameters is targeted by taking advantage of a combination of dynamic image intensity compression, image segmentation with a focal-plane processor ASIC, and a Hough transform on an associated FPGA. Additional filtering of Hough line candidates based on temporal windowing is further applied to reduce unrealistic frame-to-frame tracking variations. The proposed methods are implemented in Matlab using image data captured with adaptive integration time. The simulations are performed in a hardware-oriented way to allow real-time implementation of the algorithms on the smart camera system.

  18. Wigner-Hough/Radon Transform for GPS Post-Correlation Integration (Preprint)

    DTIC Science & Technology

    2007-09-01

    The Wigner-Ville distribution (WVD) is a well-known method to estimate instantaneous frequency, which appears as a...Barbarossa, 1996]. In this method, the Wigner-Ville distribution (WVD) is used to represent the signal energy in the time-frequency plane while the...its Wigner-Ville distribution or WVD is computed as: W(t, f) = ∫_{−∞}^{+∞} x(t + τ/2) x*(t − τ/2) e^{−j2πfτ} dτ (4), where * stands for complex conjugation
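    Equation (4) can be illustrated with a minimal discrete sketch (an assumed discretization in plain numpy, not the preprint's GPS processing code): at each time sample, the lag product x(t + τ/2) x*(t − τ/2) is formed over all admissible lags and Fourier-transformed over the lag axis.

```python
import numpy as np

def wvd(x):
    """Discrete Wigner-Ville distribution of a complex (analytic) signal.

    For each time index n, form the lag product x[n+m] * conj(x[n-m])
    over all admissible lags m, then FFT over the lag axis.  Because of
    the half-lag convention tau/2 in the definition, a pure tone at
    normalized frequency f0 concentrates near FFT bin 2*f0*N.
    """
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)            # keep n+m and n-m in range
        kernel = np.zeros(N, dtype=complex)
        for m in range(-mmax, mmax + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kernel).real      # the WVD of a signal is real-valued
    return W
```

    For example, with x[n] = exp(j2π·0.125·n) and N = 128, the row at mid-signal peaks at bin 32 = 2·0.125·128.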

  19. Camera-based recognition of speed limits on German roads

    NASA Astrophysics Data System (ADS)

    Nienhüser, Dennis; Ziegenmeyer, Marco; Gumpp, Thomas; Scholl, Kay-Ulrich; Zöllner, J. Marius; Dillmann, Rüdiger

    Driver assistance systems in industrial use face high demands for reliability and robustness. This work describes the combination of robust methods, such as the Hough transform and support vector machines, into a complete system for recognizing speed limits. A color video camera serves as the sensor. Evaluation on test data confirms the reliability of the system through a high correct-classification rate combined with a low number of false alarms.

  20. Layering extraction from subsurface radargrams over Greenland and the Martian NPLD by combining wavelet analysis with Hough transforms

    NASA Astrophysics Data System (ADS)

    Xiong, Si-Ting; Muller, Jan-Peter

    2017-04-01

    Extracting lines from imagery is a solved problem in the field of edge detection. Unlike images taken by a camera, radargrams are a set of radar echo profiles, which record the wave energy reflected by subsurface reflectors at each location of a radar footprint along the satellite's ground track. The radargrams record where there is a dielectric contrast caused by different deposits and other subsurface features, such as facies, and internal distributions like porosity and fluids. Among these subsurface features, layering is an important one, which reflects the sequence of seasonal or yearly deposits on the ground [1-2]. In the field of image processing, line detection methods such as the Radon transform or Hough transform are able to extract these subsurface layers from rasterised versions of the echograms. However, due to the attenuation of radar waves propagating through geological media, radargrams sometimes suffer from gradients and high background noise. These attributes of radargrams cause detection errors when conventional line detection methods are applied directly. In this study, we have developed a continuous wavelet analysis technique that is applied directly to the radar echo profiles in a radargram in order to detect segmented lines; a conventional line detection method, such as a Hough transform, is then applied to connect these segmented lines. This processing chain is tested using a radargram acquired by the Multi-channel Coherent Radar Depth Sounder (MCoRDS) on an airborne platform over Greenland and a radargram acquired by the SHAllow RADar (SHARAD) on board the Mars Reconnaissance Orbiter (MRO) [3] over the Martian North Polar Layered Deposits (NPLD). Keywords: Subsurface mapping, Radargram, SHARAD, Greenland, Martian NPLD, Subsurface layering, line detection References: [1] Phillips, R. J., et al. "Mars north polar deposits: Stratigraphy, age, and geodynamical response." Science 320.5880 (2008): 1182-1185. 
[2] Cutts, James A., and Blake H. Lewis. "Models of climate cycles recorded in Martian polar layered deposits." Icarus 50.2 (1982): 216-244. [3] Plaut, J. J., Picardi, G., Safaeinili, A., et al. "Subsurface radar sounding of the south polar layered deposits of Mars." Science 316.5821 (2007): 92-95. Acknowledgements: Part of the research leading to these results has received funding from the STFC "MSSL Consolidated Grant" ST/K000977/1 and partial support from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement No. 607379, as well as from the China Scholarship Council and the UCL Dean of MAPS fund.

  1. A New Adaptive Structural Signature for Symbol Recognition by Using a Galois Lattice as a Classifier.

    PubMed

    Coustaty, M; Bertet, K; Visani, M; Ogier, J

    2011-08-01

    In this paper, we propose a new approach for symbol recognition using structural signatures and a Galois lattice as a classifier. The structural signatures are based on topological graphs computed from segments extracted from the symbol images by an adapted Hough transform. These structural signatures, which can be seen as dynamic paths carrying high-level information, are robust to various transformations. They are classified using a Galois lattice as a classifier. The performance of the proposed approach is evaluated on the GREC'03 symbol database, and the experimental results we obtain are encouraging.

  2. Lane detection based on color probability model and fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Yu, Yang; Jo, Kang-Hyun

    2018-04-01

    In vehicle driver assistance systems, the accuracy and speed of lane line detection are of primary importance. This paper presents a method based on a color probability model and the Fuzzy Local Information C-Means (FLICM) clustering algorithm. The Hough transform and the structural constraints of the road are used to detect the lane line accurately. The global map of the lane line is drawn from the lane curve fitting equation. The experimental results show that the algorithm has good robustness.
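    The Hough transform step for straight lane lines can be sketched in a few lines of numpy (a generic textbook implementation, not the authors' code): each edge pixel votes for every (ρ, θ) pair satisfying ρ = x·cosθ + y·sinθ, and lines show up as accumulator peaks.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Standard line Hough transform: vote in (rho, theta) space.

    edges: 2D boolean array of edge pixels.
    Returns the accumulator plus the rho and theta axes.
    """
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))            # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))        # 0..179 degrees
    rhos = np.arange(-diag, diag + 1)              # signed distances
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # one vote per theta column: rho = x*cos(theta) + y*sin(theta)
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(n_theta)] += 1
    return acc, rhos, thetas
```

    A horizontal line y = 5 in a 20 × 20 edge map produces a 20-vote peak at θ = 90°, ρ = 5.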

  3. An algorithm for power line detection and warning based on a millimeter-wave radar video.

    PubMed

    Ma, Qirong; Goshi, Darren S; Shih, Yi-Chi; Sun, Ming-Ting

    2011-12-01

    Power-line-strike accidents are a major safety threat for low-flying aircraft such as helicopters, so an automatic power-line warning system is highly desirable. In this paper we propose an algorithm for detecting power lines in radar videos from an active millimeter-wave sensor. The Hough transform is employed to detect candidate lines. The major challenge is that the radar videos are very noisy due to ground return. The noise points can fall on the same line, producing signal peaks after the Hough transform similar to those of the actual cable lines. To differentiate the cable lines from the noise lines, we train a Support Vector Machine to perform the classification. We exploit the Bragg pattern, which is due to the diffraction of the electromagnetic wave on the periodic surface of power lines, and propose a set of features to represent the Bragg pattern for the classifier. We also propose a slice-processing algorithm which supports parallel processing and improves the detection of cables in a cluttered background. Lastly, an adaptive algorithm is proposed to integrate the detection results from individual frames into a reliable video detection decision, in which the temporal correlation of the cable pattern across frames is used to make the detection more robust. Extensive experiments with real-world data validated the effectiveness of our cable detection algorithm. © 2011 IEEE

  4. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    PubMed Central

    Park, Jae Byung; Lee, Seung Hun; Lee, Il Jae

    2009-01-01

    In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle for carrying the plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For lug pose acquisition, four laser lines are projected on both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination changes, vertical thresholding, thinning, the Hough transform, and the separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: top view alignment and side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with a sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor. PMID:22400007

  5. Extraction of membrane structure in eyeball from MR volumes

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Kin, Taichi; Mori, Kensaku

    2017-03-01

    This paper presents an accurate extraction method for spherical membrane structures in the eyeball from MR volumes. In ophthalmic surgery, the operative field is limited to a small region. Patient-specific surgical simulation is useful for reducing complications, and understanding the tissue structure of a patient's eyeball is required to achieve it. Previous extraction methods for tissue structure in the eyeball use optical coherence tomography (OCT) images. Although OCT images have high resolution, the imaged regions are very small, so extracting the global structure of the eyeball from OCT images is difficult. We propose an extraction method for spherical membrane structures including the sclerotic coat, choroid, and retina. The method is applied to a T2-weighted MR volume of the head region. An MR volume can capture the tissue structure of the whole eyeball, so our method extracts the entire membrane structure in the eyeball. We first roughly extract membrane structures by applying a sheet-structure enhancement filter; the rough extraction result includes parts of the membrane structures. Then, we apply the Hough transform to the voxel set of the rough extraction result to find a sphere structure. An experiment using a T2-weighted MR volume of the head region showed that the proposed method can extract spherical membrane structures accurately.

  6. A novel iris localization algorithm using correlation filtering

    NASA Astrophysics Data System (ADS)

    Pohit, Mausumi; Sharma, Jitu

    2015-06-01

    Fast and efficient segmentation of the iris from eye images is a primary requirement for robust, database-independent iris recognition. In this paper we present a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. The pupil-iris boundary computation is based on a correlation filtering approach, whereas the iris-sclera boundary is determined through one-dimensional intensity mapping. The proposed approach is computationally less expensive than existing algorithms such as the Hough transform.

  7. Compact optical processor for Hough and frequency domain features

    NASA Astrophysics Data System (ADS)

    Ott, Peter

    1996-11-01

    Shape recognition is necessary in a broad band of applications such as traffic sign or work piece recognition. It requires not only neighborhood processing of the input image pixels but global interconnection of them. The Hough transform (HT) performs such a global operation, and it is well suited to the preprocessing stage of a shape recognition system. Translation-invariant features can be easily calculated from the Hough domain. We have implemented on the computer a neural network shape recognition system which contains a HT, a feature extraction, and a classification layer. The advantage of this approach is that the total system can be optimized with well-known learning techniques and that it can exploit the parallelism of the algorithms. However, the HT is a time-consuming operation, so parallel optical processing is advantageous. Several systems have been proposed, based on space multiplexing with arrays of holograms and CGHs, on time multiplexing with acousto-optic processors, or on image rotation with incoherent and coherent astigmatic optical processors. We took up the last-mentioned approach because 2D array detectors are read out line by line, so a 2D detector can achieve the same speed and is easier to implement. Coherent processing allows the implementation of filters in the frequency domain; features based on wedge/ring, Gabor, or wavelet filters have been proven to show good discrimination capabilities for texture and shape recognition. The astigmatic lens system derived from the mathematical formulation of the HT is long and contains a non-standard astigmatic element. By methods of lens transformations for coherent applications, we map the original design to a shorter lens with a smaller number of well separated standard elements and the same coherent system response. The final lens design still contains the frequency plane for filtering, and ray tracing shows diffraction-limited performance. Image rotation can be done optically by a rotating prism; we realize it on a fast FLC-SLM of our lab as the input device. The filters can be implemented on the same type of SLM with 128 by 128 square pixels, resulting in a total length of the lens of less than 50 cm.

  8. Automatic extraction of via in the CT image of PCB

    NASA Astrophysics Data System (ADS)

    Liu, Xifeng; Hu, Yuwei

    2018-04-01

    In modern industry, nondestructive testing of printed circuit boards (PCBs) can effectively prevent system failures and is becoming more and more important. In order to detect vias in a PCB from CT images automatically, accurately, and reliably, a novel algorithm for via extraction is designed, based on a weighted stack combined with the morphological character of vias. Every slice of data in the vertical direction of the PCB is superimposed to enhance the via targets. The OTSU algorithm is used to segment the slice image; OTSU thresholding of gray-level images is efficient for separating an image into two classes when two fairly distinct classes exist in the image. The Randomized Hough Transform is then used to locate the via regions in the segmented binary image, and a 3D reconstruction of the vias based on the sequence of slice images is done by volume rendering. The accuracy of via positioning and detection from CT images of a PCB was demonstrated with the proposed algorithm. The method was found to be accurate and stable for detecting vias in three dimensions.
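    The OTSU segmentation step can be sketched as follows (a generic implementation of Otsu's method, not the authors' code): the threshold is the gray level that maximizes the between-class variance computed from the image histogram.

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level (0..255) maximizing between-class variance.

    img: array of uint8 gray levels.  Pixels <= threshold form class 0.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # gray-level probabilities
    omega = np.cumsum(p)                   # class-0 probability at each t
    mu = np.cumsum(p * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0   # empty classes carry no variance
    return int(np.argmax(sigma_b))
```

    On a cleanly bimodal image (e.g. half the pixels at 50, half at 200), any threshold between the two modes separates the classes perfectly, and Otsu picks one of them.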

  9. Hough transform for clustered microcalcifications detection in full-field digital mammograms

    NASA Astrophysics Data System (ADS)

    Fanizzi, A.; Basile, T. M. A.; Losurdo, L.; Amoroso, N.; Bellotti, R.; Bottigli, U.; Dentamaro, R.; Didonna, V.; Fausto, A.; Massafra, R.; Moschetta, M.; Tamborra, P.; Tangaro, S.; La Forgia, D.

    2017-09-01

    Many screening programs use mammography as the principal diagnostic tool for detecting breast cancer at a very early stage. Despite the efficacy of mammograms in highlighting breast diseases, the detection of some lesions remains difficult for radiologists. In particular, the extremely minute and elongated salt-like particles of microcalcifications are sometimes no larger than 0.1 mm, yet they represent approximately half of all cancers detected by means of mammograms. Hence the need for automatic tools able to support radiologists in their work. Here, we propose a computer-assisted diagnostic tool to support radiologists in identifying microcalcifications in full (native) digital mammographic images. The proposed CAD system consists of a pre-processing step that improves contrast and reduces noise by applying the Sobel edge detection algorithm and a Gaussian filter, followed by a microcalcification detection step performed by exploiting the circular Hough transform. The procedure's performance was tested on 200 images from the Breast Cancer Digital Repository (BCDR), a publicly available database. The automatically detected clusters of microcalcifications were evaluated by skilled radiologists, who assessed the validity of the correctly identified regions of interest as well as the system error in cases of missed clustered microcalcifications. The system performance was evaluated in terms of sensitivity and false positives per image (FPi) rate. The proposed model was able to accurately predict the microcalcification clusters, obtaining performance (sensitivity = 91.78% and FPi rate = 3.99) that compares favorably with state-of-the-art approaches.
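    The circular Hough transform at the core of the detection step can be sketched for a known radius (a simplification: a full detector scans a range of radii, and the radius here is illustrative): each edge pixel votes for every candidate center lying one radius away from it, so true circle centers accumulate the most votes.

```python
import numpy as np

def hough_circle(edges, radius, n_angles=360):
    """Known-radius circular Hough transform.

    edges: 2D boolean edge map.  Each edge pixel (x, y) votes for all
    centers (x - r*cos(a), y - r*sin(a)); a circle of the given radius
    produces a sharp peak in the accumulator at its center.
    """
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=int)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc, (cy[ok], cx[ok]), 1)    # unbuffered accumulation
    return acc
```

    `np.add.at` is used so that repeated (cy, cx) pairs from one pixel's voting ring all count.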

  10. New algorithm for detecting smaller retinal blood vessels in fundus images

    NASA Astrophysics Data System (ADS)

    LeAnder, Robert; Bidari, Praveen I.; Mohammed, Tauseef A.; Das, Moumita; Umbaugh, Scott E.

    2010-03-01

    About 4.1 million Americans suffer from diabetic retinopathy. To help automatically diagnose various stages of the disease, a new blood-vessel-segmentation algorithm based on spatial high-pass filtering was developed to automatically segment blood vessels, including the smaller ones, with low noise. Methods: Image database: Forty, 584 x 565-pixel images were collected from the DRIVE image database. Preprocessing: Green-band extraction was used to obtain better contrast, which facilitated better visualization of retinal blood vessels. A spatial highpass filter of mask-size 11 was applied. A histogram stretch was performed to enhance contrast. A median filter was applied to mitigate noise. At this point, the gray-scale image was converted to a binary image using a binary thresholding operation. Then, a NOT operation was performed by gray-level value inversion between 0 and 255. Postprocessing: The resulting image was AND-ed with its corresponding ring mask to remove the outer-ring (lens-edge) artifact. At this point, the above algorithm steps had extracted most of the major and minor vessels, with some intersections and bifurcations missing. Vessel segments were reintegrated using the Hough transform. Results: After applying the Hough transform, both the average peak SNR and the RMS error improved by 10%. Pratt's Figure of Merit (PFM) was decreased by 6%. Those averages were better than [1] by 10-30%. Conclusions: The new algorithm successfully preserved the details of smaller blood vessels and should prove successful as a segmentation step for automatically identifying diseases that affect retinal blood vessels.

  11. Magnetically aligned H I fibers and the rolling hough transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, S. E.; Putman, M. E.; Peek, J. E. G.

    2014-07-01

    We present observations of a new group of structures in the diffuse Galactic interstellar medium (ISM): slender, linear H I features we dub 'fibers' that extend for many degrees at high Galactic latitude. To characterize and measure the extent and strength of these fibers, we present the Rolling Hough Transform, a new machine vision method for parameterizing the coherent linearity of structures in the image plane. With this powerful new tool we show that the fibers are oriented along the interstellar magnetic field as probed by starlight polarization. We find that these low column density (N_HI ≃ 5 × 10^18 cm^−2) fiber features are most likely a component of the local cavity wall, about 100 pc away. The H I data we use to demonstrate this alignment at high latitude are from the Galactic Arecibo L-Band Feed Array H I (GALFA-H I) Survey and the Parkes Galactic All Sky Survey. We find better alignment in the higher resolution GALFA-H I data, where the fibers are more visually evident. This trend continues in our investigation of magnetically aligned linear features in the Riegel-Crutcher H I cold cloud, detected in the Southern Galactic Plane Survey. We propose an application of the RHT for estimating the field strength in such a cloud, based on the Chandrasekhar-Fermi method. We conclude that data-driven, quantitative studies of ISM morphology can be very powerful predictors of underlying physical quantities.
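    The linearity measurement at the heart of the RHT can be illustrated with a toy numpy sketch (a strong simplification of the published method, which also unsharp-masks the data and thresholds the votes; the window radius and angle count here are arbitrary): for a given pixel, sample the map along short line segments at a range of orientations and record the fraction of 'on' pixels per angle.

```python
import numpy as np

def rht_linearity(edges, y, x, radius=8, n_theta=18):
    """Toy RHT-style linearity profile at pixel (y, x).

    For each orientation theta in [0, pi), sample the map along a line
    segment of half-length `radius` through (y, x) and return the mean
    value; a linear feature shows up as a peak at its orientation.
    """
    h, w = edges.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    out = np.zeros(n_theta)
    steps = np.arange(-radius, radius + 1)
    for i, t in enumerate(thetas):
        xs = np.clip(np.round(x + steps * np.cos(t)).astype(int), 0, w - 1)
        ys = np.clip(np.round(y + steps * np.sin(t)).astype(int), 0, h - 1)
        out[i] = edges[ys, xs].mean()
    return thetas, out
```

    A horizontal 'fiber' yields a linearity of 1.0 along its own orientation and a much smaller value perpendicular to it.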

  12. Extraction of Black Hole Shadows Using Ridge Filtering and the Circle Hough Transform

    NASA Astrophysics Data System (ADS)

    Hennessey, Ryan; Akiyama, Kazunori; Fish, Vincent

    2018-01-01

    Supermassive black holes are widely considered to reside at the center of most large galaxies. One of the foremost tasks in modern astronomy is to image the centers of local galaxies, such as that of Messier 87 (M87) and Sagittarius A* at the center of our own Milky Way, to gain the first glimpses of black holes and their surrounding structures. Using data obtained from the Event Horizon Telescope (EHT), a global collection of millimeter-wavelength telescopes designed to perform very long baseline interferometry, new imaging techniques will likely be able to yield images of these structures at fine enough resolutions to compare with the predictions of general relativity and give us more insight into the formation of black holes, their surrounding jets and accretion disks, and galaxies themselves. Techniques to extract features from these images are already being developed. In this work, we present a new method for measuring the size of the black hole shadow, a feature that encodes information about the black hole mass and spin, using ridge filtering and the circle Hough transform. Previous methods have succeeded in extracting the black hole shadow with an accuracy of about 10-20%, but using this new technique we are able to measure the shadow size with even finer accuracy. Our work indicates that the EHT will be able to significantly reduce the uncertainty in the estimate of the mass of the supermassive black hole in M87.

  13. Localization of skeletal and aortic landmarks in trauma CT data based on the discriminative generalized Hough transform

    NASA Astrophysics Data System (ADS)

    Lorenz, Cristian; Hansis, Eberhard; Weese, Jürgen; Carolus, Heike

    2016-03-01

    Computed tomography is the modality of choice for poly-trauma patients to rapidly assess skeletal and vascular integrity of the whole body. Often several scans with and without contrast medium, or with different spatial resolutions, are acquired. Efficient reading of the resulting extensive set of image data is vital, since it is often time-critical to initiate the necessary therapeutic actions. A set of automatically found landmarks can facilitate navigation in the data and enables anatomy-oriented viewing. Following this intention, we selected a comprehensive set of 17 skeletal and 5 aortic landmarks. Landmark localization models for the Discriminative Generalized Hough Transform (DGHT) were created automatically from a set of about 20 training images with ground-truth landmark positions; a hierarchical setup with 4 resolution levels was used. Localization results were evaluated on a separate test set consisting of 50 to 128 images (depending on the landmark) with available ground-truth landmark locations. The image data cover a large amount of variability caused by differences in field of view, resolution, contrast agent, patient gender, and pathologies. The median localization error was 14.4 mm for the set of aortic landmarks and 5.5 mm for the set of skeletal landmarks; median localization errors for individual landmarks ranged from 3.0 mm to 31.0 mm. The runtime for the whole landmark set is about 5 s on a typical PC.
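    The voting scheme underlying the (non-discriminative) generalized Hough transform can be sketched for the translation-only case (gradient-orientation indexing, the multi-resolution hierarchy, and the discriminative model training of the DGHT are all omitted): each edge pixel casts votes for the model reference point through a table of offsets learned from a template.

```python
import numpy as np

def ght_votes(edges, offsets):
    """Translation-only generalized Hough voting.

    edges: 2D boolean edge map of the scene.
    offsets: list of (dy, dx) displacements from template edge pixels
    to the template reference point.  Each edge pixel votes through
    every offset; the true reference location collects one vote per
    matching template point.
    """
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=int)
    ys, xs = np.nonzero(edges)
    for dy, dx in offsets:
        cy, cx = ys + dy, xs + dx
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Build the offset table from a template: displacements from each edge
# pixel to the template's reference point (here the center of a square).
template = np.zeros((9, 9), dtype=bool)
template[2, 2:7] = template[6, 2:7] = True
template[2:7, 2] = template[2:7, 6] = True       # a small square outline
ty, tx = np.nonzero(template)
ref = (4, 4)
offsets = list(zip(ref[0] - ty, ref[1] - tx))
```

    Placing the same square elsewhere in a scene produces an accumulator maximum exactly at the translated reference point, with one vote per template edge pixel.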

  14. Automatic needle segmentation in 3D ultrasound images using 3D improved Hough transform

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Qiu, Wu; Ding, Mingyue; Zhang, Songgen

    2008-03-01

    3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical, vascular, and urological imaging, and has shown great potential in applications of image-guided surgery and therapy. Uterine adenoma and uterine bleeding are two of the most prevalent diseases in Chinese women, and a minimally invasive ablation system using a needle-like RF button electrode is widely used to destroy tumor cells or stop bleeding. To avoid accidents or death of the patient caused by inaccurate localization of the electrode and the tumor position during treatment, a 3D US guidance system was developed. In this paper, a new automated technique, the 3D Improved Hough Transform (3DIHT) algorithm, is presented for needle segmentation in 3D US images for use in 3D US imaging guidance. Based on a coarse-fine search strategy and a four-parameter representation of lines in 3D space, the 3DIHT algorithm can segment needles quickly, accurately, and robustly. The technique was evaluated using 3D US images acquired by scanning a water phantom. The position deviation of the segmented line was less than 2 mm and the angular deviation was much less than 2°. The average computation time, measured on a Pentium IV 2.80 GHz PC with a 381 × 381 × 250 image, was less than 2 s.

  15. Text, photo, and line extraction in scanned documents

    NASA Astrophysics Data System (ADS)

    Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan

    2012-07-01

    We propose a page layout analysis algorithm to classify a scanned document into different regions such as text, photo, or strong lines. The proposed scheme consists of five modules. The first module performs several image preprocessing techniques such as image scaling, filtering, color space conversion, and gamma correction to enhance the scanned image quality and reduce the computation time in later stages. Text detection is applied in the second module, wherein wavelet transform and run-length encoding are employed to generate and validate text regions, respectively. The third module uses a Markov random field based block-wise segmentation that employs a basis vector projection technique with maximum a posteriori probability optimization to detect photo regions. In the fourth module, methods for edge detection, edge linking, line-segment fitting, and the Hough transform are utilized to detect strong edges and lines. In the last module, the resultant text, photo, and edge maps are combined to generate a page layout map using K-Means clustering. The proposed algorithm has been tested on several hundred documents that contain simple and complex page layout structures and contents such as articles, magazines, business cards, dictionaries, and newsletters, and compared against state-of-the-art page-segmentation techniques with benchmark performance. The results indicate that our methodology achieves an average of ~89% classification accuracy in text, photo, and background regions.

  16. Design and Implementation of Pointer-Type Multi Meters Intelligent Recognition Device Based on ARM Platform

    NASA Astrophysics Data System (ADS)

    Cui, Yang; Luo, Wang; Fan, Qiang; Peng, Qiwei; Cai, Yiting; Yao, Yiyang; Xu, Changfu

    2018-01-01

    This paper adopts a low-power ARM HiSilicon mobile processing platform and an OV4689 camera, combined with a new skeleton extraction algorithm based on the distance transform and an improved Hough algorithm, for real-time reading of pointer-type meters. The design and implementation of the device were completed. Experimental results show that the average measurement error was 0.005 MPa and the average reading time was 5 s. The device had good stability and high accuracy, meeting the needs of practical application.
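    The distance-transform stage of the skeleton extraction can be sketched with a generic two-pass city-block distance transform (an assumption for illustration; the paper does not specify its exact variant): the pointer's skeleton then follows the ridge of local distance maxima.

```python
import numpy as np

def chamfer_dt(mask):
    """Two-pass city-block (L1) distance transform.

    mask: 2D boolean array; True = foreground.  Returns, for every
    pixel, the L1 distance to the nearest background pixel.  A forward
    raster pass propagates distances from the top-left neighbors and a
    backward pass from the bottom-right neighbors; for the L1 metric
    this two-pass scheme is exact.
    """
    INF = 10**6
    h, w = mask.shape
    d = np.where(mask, INF, 0).astype(int)
    for y in range(h):                       # forward pass: up and left
        for x in range(w):
            if d[y, x]:
                best = d[y, x]
                if y > 0: best = min(best, d[y - 1, x] + 1)
                if x > 0: best = min(best, d[y, x - 1] + 1)
                d[y, x] = best
    for y in range(h - 1, -1, -1):           # backward pass: down and right
        for x in range(w - 1, -1, -1):
            if d[y, x]:
                best = d[y, x]
                if y < h - 1: best = min(best, d[y + 1, x] + 1)
                if x < w - 1: best = min(best, d[y, x + 1] + 1)
                d[y, x] = best
    return d
```

    For a 5 × 5 solid block inside a 7 × 7 image, the distance rises from 1 at the block's border pixels to 3 at its center.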

  17. A hybrid spatiotemporal and Hough-based motion estimation approach applied to magnetic resonance cardiac images

    NASA Astrophysics Data System (ADS)

    Carranza, N.; Cristóbal, G.; Sroubek, F.; Ledesma-Carbayo, M. J.; Santos, A.

    2006-08-01

    Myocardial motion analysis and quantification is of utmost importance for analyzing contractile heart abnormalities, which can be a symptom of coronary artery disease. A fundamental problem in processing sequences of images is the computation of the optical flow, which is an approximation to the real image motion. This paper presents a new algorithm for optical flow estimation based on a spatiotemporal-frequency (STF) approach, more specifically on the computation of the Wigner-Ville distribution (WVD) and the Hough Transform (HT) of the motion sequences. The latter is a well-known line and shape detection method, very robust against incomplete data and noise. The rationale for using the HT in this context is that it provides a value of the displacement field from the STF representation. In addition, a probabilistic approach based on Gaussian mixtures has been implemented in order to improve the accuracy of the motion detection. Experimental results with synthetic sequences are compared against an implementation of the variational technique for local and global motion estimation, where it is shown that the results obtained here are accurate and robust to noise degradations. Real cardiac magnetic resonance images have also been tested and evaluated with the current method.

  18. Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing.

    PubMed

    Ram, Farangis; Wright, Stuart; Singh, Saransh; De Graef, Marc

    2017-10-01

    The efficacy of the dictionary approach to Electron Back-Scatter Diffraction (EBSD) indexing was evaluated through the analysis of the error in the retrieved crystal orientations. EBSPs simulated by the Callahan-De Graef forward model were used for this purpose. Patterns were noised, distorted, and binned prior to dictionary indexing. Patterns with a high level of noise, with optical distortions, and with a 25 × 25 pixel size, when the error in projection center was 0.7% of the pattern width and the error in specimen tilt was 0.8°, were indexed with a 0.8° mean error in orientation. The same patterns, but 60 × 60 pixel in size, were indexed by the standard 2D Hough transform based approach with almost the same orientation accuracy. Optimal detection parameters in the Hough space were obtained by minimizing the orientation error. It was shown that if the error in detector geometry can be reduced to 0.1% in projection center and 0.1° in specimen tilt, the dictionary approach can retrieve a crystal orientation with a 0.2° accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
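    The orientation errors reported above are misorientation angles between retrieved and ground-truth orientations; a minimal quaternion sketch (ignoring the crystal-symmetry reduction that a real EBSD error analysis must apply):

```python
import numpy as np

def misorientation_deg(q1, q2):
    """Rotation angle (degrees) between two orientations given as unit
    quaternions (w, x, y, z).

    The absolute value of the dot product handles the q / -q double
    cover; crystal symmetry operators are deliberately ignored in this
    simplified sketch.
    """
    d = abs(np.dot(q1, q2))
    return np.degrees(2.0 * np.arccos(np.clip(d, -1.0, 1.0)))
```

    For example, the identity orientation and a 10° rotation about z differ by a 10° misorientation.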

  19. Seismic spectrograms analysis applying the Hough transform to estimate the front speed of mass movements: Application to snow avalanches

    NASA Astrophysics Data System (ADS)

    Flores-Marquez, L.; Suriñach-Cornet, E., Sr.

    2017-12-01

    Seismic signals generated by snow avalanches and other mass movements are analyzed in their spectrogram representation. The spectrogram displays the evolution in time of the frequency content of the signal. The spectrogram of a seismic signal recorded at a station approached by a sliding mass, such as a snow avalanche, exhibits a triangular time/frequency signature. This increase in higher-frequency content over time is a consequence of the attenuation of the waves propagating in the medium. Recognition of characteristic footprints in a spectrogram could help to identify and characterize diverse mass movement events such as landslides or snow avalanches. In order to recognize spectrogram features of seismic signals of Alpine snow avalanches, we propose an algorithm based on the Hough transform. The proposed algorithm is applied to an edge-representation image of the seismic spectrogram, obtained after applying a threshold filter to the spectrogram, which enhances the most relevant frequencies of the seismogram that appear over time. This enables us to identify parameters (slopes) that correspond to the speeds associated with the type of snow avalanche, such as powder, dense, or transitional snow avalanches. The data analyzed in this work correspond to twenty different seismic signals generated by snow avalanches artificially released at the experimental site of Vallée de la Sionne (VDLS, SLF, Switzerland). The shapes of the signal spectrograms are linked to the flow regimes previously identified. Our findings show that certain ranges of speeds are inherent to each type of avalanche.

  20. An Automated Method of Scanning Probe Microscopy (SPM) Data Analysis and Reactive Site Tracking for Mineral-Water Interface Reactions Observed at the Nanometer Scale

    NASA Astrophysics Data System (ADS)

    Campbell, B. D.; Higgins, S. R.

    2008-12-01

    Developing a method for bridging the gap between macroscopic and microscopic measurements of reaction kinetics at the mineral-water interface has important implications in geological and chemical fields. Investigating these reactions on the nanometer scale with SPM is often limited by image analysis and data extraction due to the large quantity of data usually obtained in SPM experiments. Here we present a computer algorithm for automated analysis of mineral-water interface reactions. This algorithm automates the analysis of sequential SPM images by identifying the kinetically active surface sites (i.e., step edges), and by tracking the displacement of these sites from image to image. The step edge positions in each image are readily identified and tracked through time by a standard edge detection algorithm followed by statistical analysis on the Hough Transform of the edge-mapped image. By quantifying this displacement as a function of time, the rate of step edge displacement is determined. Furthermore, the total edge length, also determined from analysis of the Hough Transform, combined with the computed step speed, yields the surface area normalized rate of the reaction. The algorithm was applied to a study of the spiral growth of the calcite(104) surface from supersaturated solutions, yielding results almost 20 times faster than performing this analysis by hand, with results being statistically similar for both analysis methods. This advance in analysis of kinetic data from SPM images will facilitate the building of experimental databases on the microscopic kinetics of mineral-water interface reactions.
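
    Once the same step edge has been located in each image, the step speed reduces to the slope of its Hough-space offset (rho) against time. A minimal least-squares version of that final step follows; the tracking that pairs the edge across frames is assumed to have been done, and the units are whatever the rho values carry.

    ```python
    def step_speed(rho_by_frame, dt):
        """Least-squares slope of step-edge offset versus time.

        rho_by_frame: Hough-space rho of one step edge in successive images.
        dt: time between images. Returns speed in (rho units) per time unit.
        """
        n = len(rho_by_frame)
        ts = [i * dt for i in range(n)]
        tm = sum(ts) / n
        rm = sum(rho_by_frame) / n
        num = sum((t - tm) * (r - rm) for t, r in zip(ts, rho_by_frame))
        den = sum((t - tm) ** 2 for t in ts)
        return num / den
    ```

    Multiplying such a step speed by the total edge length, as the abstract describes, yields the surface-area-normalized rate.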

  1. Multi-slice ultrasound image calibration of an intelligent skin-marker for soft tissue artefact compensation.

    PubMed

    Masum, M A; Pickering, M R; Lambert, A J; Scarvell, J M; Smith, P N

    2017-09-06

    In this paper, a novel multi-slice ultrasound (US) image calibration of an intelligent skin-marker used for soft tissue artefact compensation is proposed to align and orient image slices in an exact H-shaped pattern. Multi-slice calibration is complex; however, in the proposed method, a phantom-based visual alignment followed by transform-parameter estimation greatly reduces the complexity and provides sufficient accuracy. In this approach, the Hough Transform (HT) is used to further enhance the image features that originate from the feature-enhancing elements integrated into the physical phantom model, thus reducing feature-detection uncertainty. In this framework, slice-by-slice image alignment and calibration are carried out, which makes manual operation easy and convenient. Copyright © 2016 Elsevier Ltd. All rights reserved.
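
    The transform-parameter estimation between detected image features and known phantom geometry can be illustrated, in 2D, by a closed-form least-squares rigid fit. This is a generic sketch under the assumption of already-paired points; the paper's actual multi-slice calibration and its US-specific details are not reproduced here.

    ```python
    import math

    def fit_rigid_2d(src, dst):
        """Closed-form least-squares 2D rigid fit: dst ~= R(theta)*src + (tx, ty).

        src, dst: paired (x, y) points, e.g. phantom feature positions and
        their detected counterparts in an image slice (pairing is assumed).
        """
        n = len(src)
        sx = sum(p[0] for p in src) / n
        sy = sum(p[1] for p in src) / n
        dx = sum(p[0] for p in dst) / n
        dy = sum(p[1] for p in dst) / n
        num = den = 0.0
        for (x, y), (u, v) in zip(src, dst):
            x, y, u, v = x - sx, y - sy, u - dx, v - dy
            num += x * v - y * u      # proportional to sin(theta)
            den += x * u + y * v      # proportional to cos(theta)
        theta = math.atan2(num, den)
        c, s = math.cos(theta), math.sin(theta)
        return theta, dx - (c * sx - s * sy), dy - (s * sx + c * sy)
    ```

    For noise-free correspondences the fit is exact; with noisy detections it is the least-squares optimum, which is why reducing feature-detection uncertainty (the HT step above) directly tightens the calibration.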

  2. A VidEo-Based Intelligent Recognition and Decision System for the Phacoemulsification Cataract Surgery.

    PubMed

    Tian, Shu; Yin, Xu-Cheng; Wang, Zhi-Bin; Zhou, Fang; Hao, Hong-Wei

    2015-01-01

    Phacoemulsification is one of the most advanced surgical treatments for cataract. However, conventional procedures involve little automation and rely heavily on the surgeon's skill. One promising alternative is to use video processing and pattern recognition technologies to automatically detect the cataract grade and intelligently control the release of the ultrasonic energy during the operation. Unlike cataract grading in diagnosis systems with static images, dynamic videos of the surgery introduce complicated backgrounds, unexpected noise, and varied information. Here we develop a Video-Based Intelligent Recognition and Decision (VeBIRD) system, which breaks new ground by providing a generic framework for automatically tracking the operation process and classifying the cataract grade in microscope videos of phacoemulsification cataract surgery. VeBIRD comprises a robust eye (iris) detector based on the randomized Hough transform to precisely locate the eye against the noisy background, an effective probe tracker based on Tracking-Learning-Detection to track the operation probe through the dynamic process, and an intelligent decider based on discriminative learning to recognize the cataract grade in the complicated video. Experiments with a variety of real microscope videos of phacoemulsification verify VeBIRD's effectiveness.
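
    For circle targets such as an iris, the randomized Hough transform works by repeatedly sampling three edge points, solving for their unique circumcircle, and voting in (cx, cy, r) space; the mode of the votes is the detected circle. A self-contained sketch follows; the trial count and the integer binning are illustrative choices, not VeBIRD's.

    ```python
    import random
    from collections import Counter

    def circumcircle(p, q, r):
        # center from the perpendicular-bisector intersection; None if collinear
        ax, ay = p; bx, by = q; cx, cy = r
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if abs(d) < 1e-9:
            return None
        ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
              + (cx * cx + cy * cy) * (ay - by)) / d
        uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
              + (cx * cx + cy * cy) * (bx - ax)) / d
        return ux, uy, ((ux - ax) ** 2 + (uy - ay) ** 2) ** 0.5

    def randomized_hough_circle(points, trials=500, seed=0):
        """Sample point triples, vote for their circumcircles, return the mode."""
        rng = random.Random(seed)
        votes = Counter()
        for _ in range(trials):
            c = circumcircle(*rng.sample(points, 3))
            if c:
                votes[tuple(round(v) for v in c)] += 1
        return votes.most_common(1)[0][0]
    ```

    Because each trial votes for a single full parameter triple instead of filling a 3D accumulator, the method avoids the standard circle Hough transform's memory cost, which is the property the abstract relies on for noisy surgical video.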

  3. A VidEo-Based Intelligent Recognition and Decision System for the Phacoemulsification Cataract Surgery

    PubMed Central

    Yin, Xu-Cheng; Wang, Zhi-Bin; Zhou, Fang; Hao, Hong-Wei

    2015-01-01

    Phacoemulsification is one of the most advanced surgical treatments for cataract. However, conventional procedures involve little automation and rely heavily on the surgeon's skill. One promising alternative is to use video processing and pattern recognition technologies to automatically detect the cataract grade and intelligently control the release of the ultrasonic energy during the operation. Unlike cataract grading in diagnosis systems with static images, dynamic videos of the surgery introduce complicated backgrounds, unexpected noise, and varied information. Here we develop a Video-Based Intelligent Recognition and Decision (VeBIRD) system, which breaks new ground by providing a generic framework for automatically tracking the operation process and classifying the cataract grade in microscope videos of phacoemulsification cataract surgery. VeBIRD comprises a robust eye (iris) detector based on the randomized Hough transform to precisely locate the eye against the noisy background, an effective probe tracker based on Tracking-Learning-Detection to track the operation probe through the dynamic process, and an intelligent decider based on discriminative learning to recognize the cataract grade in the complicated video. Experiments with a variety of real microscope videos of phacoemulsification verify VeBIRD's effectiveness. PMID:26693249

  4. Triadic split-merge sampler

    NASA Astrophysics Data System (ADS)

    van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap

    2018-04-01

    In machine vision, typical heuristic methods to extract parameterized objects from raw data points are the Hough transform and RANSAC. Bayesian models carry the promise of optimally extracting such parameterized objects, given a correct definition of the model and of the type of noise at hand. One category of solvers for Bayesian models is Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. Toward this end, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, which performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it is able to propose a new cluster composed of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line-extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in a machine vision context, its applications extend to the very general domain of statistical inference.
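
    The conventional (dyadic) split-merge move that the paper generalizes can be sketched on toy 1D data. The toy below replaces the Bayesian acceptance ratio with a greedy score (within-cluster sum of squares plus a per-cluster penalty), so it only illustrates the move structure, not the sampler's target distribution; the triadic variant would draw up to three clusters and redistribute points among them.

    ```python
    import random

    def score(partition, lam=5.0):
        # toy objective: negative (within-cluster sum of squares + per-cluster
        # penalty); a stand-in for the marginal likelihood the real sampler uses
        s = 0.0
        for c in partition:
            m = sum(c) / len(c)
            s += sum((x - m) ** 2 for x in c)
        return -(s + lam * len(partition))

    def split_merge_step(partition, rng):
        """One dyadic split-or-merge proposal with greedy acceptance."""
        prop = [list(c) for c in partition]
        if rng.random() < 0.5 and len(prop) >= 2:
            i, j = rng.sample(range(len(prop)), 2)        # merge two clusters
            merged = prop[i] + prop[j]
            prop = [c for k, c in enumerate(prop) if k not in (i, j)] + [merged]
        else:
            i = rng.randrange(len(prop))                  # split one cluster
            c = sorted(prop[i])
            if len(c) < 2:
                return partition
            cut = rng.randrange(1, len(c))
            prop = prop[:i] + prop[i + 1:] + [c[:cut], c[cut:]]
        return prop if score(prop) > score(partition) else partition
    ```

    Because every proposal moves whole blocks of points at once, a few accepted steps can reorganize the partition far faster than one-point-at-a-time Gibbs moves, which is the convergence argument the abstract makes.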

  5. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as close contours in the gradient to be segmented.
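
    The effect described, close contours in the gradient becoming separate segments, can be illustrated without a full watershed: labeling the connected components of non-edge pixels already separates every region enclosed by a closed contour. The pure-Python sketch below is a stand-in for that segmentation effect, not the patented algorithm.

    ```python
    from collections import deque

    def label_regions(edges):
        """4-connected labeling of non-edge pixels in a binary edge map
        (e.g. a thresholded Canny gradient): each closed contour yields
        a separate region label. Returns (label_grid, region_count)."""
        h, w = len(edges), len(edges[0])
        labels = [[0] * w for _ in range(h)]
        count = 0
        for sy in range(h):
            for sx in range(w):
                if edges[sy][sx] or labels[sy][sx]:
                    continue
                count += 1                      # new region: flood fill it
                q = deque([(sy, sx)])
                labels[sy][sx] = count
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and not edges[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            q.append((ny, nx))
        return labels, count
    ```

    A watershed on the gradient additionally handles contours that are weak or partially open, which is why the patent pairs it with the Canny gradient rather than relying on connectivity alone.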

  6. First low frequency all-sky search for continuous gravitational wave signals

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Amariutei, D. V.; Andersen, M.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Ashton, G.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Bartlett, J.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Behnke, B.; Bejger, M.; Belczynski, C.; Bell, A. S.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, Sukanta; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Branco, V.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Celerier, C.; Cella, G.; Cepeda, C.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. 
J.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, X.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Colombini, M.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Canton, T. Dal; Damjanic, M. D.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Dominguez, E.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Edwards, M.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J. M.; Eikenberry, S. S.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Feldbaum, D.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. 
G.; Garufi, F.; Gatto, A.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; Gergely, L. Á.; Germain, V.; Ghosh, A.; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gleason, J. R.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez, J.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Groot, P.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C. J.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammer, D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hoelscher-Obermaier, J.; Hofman, D.; Hollitt, S. E.; Holt, K.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Islas, G.; Isler, J. C.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacobson, M. B.; Jang, H.; Jaranowski, P.; Jawahar, S.; Ji, Y.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karlen, J. L.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kerrigan, J.; Key, J. S.; Khalili, F. Y.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, K.; Kim, N. G.; Kim, N.; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J. 
T.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, A.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, J.; Lee, J. P.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Lin, A. C.; Littenberg, T. B.; Lockerbie, N. A.; Lockett, V.; Lodhia, D.; Logue, J.; Lombardi, A. L.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lubinski, M. J.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; Macarthur, J.; Macdonald, E. P.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Madden-Fong, D. X.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mangini, N. M.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Meinders, M.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. 
C.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, A.; Mukherjee, S.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nagy, M. F.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Okounkova, M.; Oppermann, P.; Oram, R.; O'Reilly, B.; Ortega, W. E.; O'Shaughnessy, R.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Padilla, C. T.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pan, Y.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Papa, M. A.; Paris, H. R.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patrick, Z.; Pedraza, M.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poeld, J. H.; Poggiani, R.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rodger, A. S.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. 
H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Saleem, M.; Salemi, F.; Sammut, L.; Sanchez, E.; Sandberg, V.; Sanders, J. R.; Santiago-Prieto, I.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Sevigny, A.; Shaddock, D. A.; Shaffery, P.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, R.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Steplewski, S.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Tse, M.; Turconi, M.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; van den Broeck, C.; van der Schaaf, L.; van der Sluys, M. V.; van Heijningen, J.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. 
J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, M.; Wade, L. E.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Williams, K. J.; Williams, L.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Yablon, J.; Yakushin, I.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zhang, Fan; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-02-01

    In this paper we present the results of the first low-frequency all-sky search for continuous gravitational wave signals conducted on Virgo VSR2 and VSR4 data. The search covered the full sky, a frequency range between 20 and 128 Hz, with a range of spin-down between -1.0 × 10^-10 Hz/s and +1.5 × 10^-11 Hz/s, and was based on a hierarchical approach. The starting point was a set of short fast Fourier transforms, of length 8192 s, built from the calibrated strain data. Aggressive data cleaning, in both the time and frequency domains, was done in order to remove, as much as possible, the effect of disturbances of instrumental origin. On each data set a number of candidates were selected, using the FrequencyHough transform in an incoherent step. Only candidates coincident between VSR2 and VSR4 were examined in order to strongly reduce the false alarm probability, and the most significant candidates were selected. The criteria we used for candidate selection and for the coincidence step greatly reduce the harmful effect of large instrumental artifacts. Selected candidates were subjected to a follow-up by constructing a new set of longer fast Fourier transforms, followed by a further incoherent analysis, still based on the FrequencyHough transform. No evidence for continuous gravitational wave signals was found, and therefore we set a population-based joint VSR2-VSR4 90% confidence level upper limit on the dimensionless gravitational wave strain in the frequency range between 20 and 128 Hz. This is the first all-sky search for continuous gravitational waves conducted on data of ground-based interferometric detectors at frequencies below 50 Hz. We set upper limits in the range between about 10^-24 and 2 × 10^-23 at most frequencies. Our upper limits on signal strain show an improvement of up to a factor of ~2 with respect to the results of previous all-sky searches at frequencies below 80 Hz.
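
    The FrequencyHough step maps each time-frequency peak (t, f) to a line in the (intrinsic frequency, spin-down) plane, f0 = f - fdot*t, and accumulates votes over a grid of candidate parameters. A toy accumulator showing that mapping follows; the grid spacings and the peak format are illustrative choices, not the pipeline's.

    ```python
    from collections import Counter

    def frequency_hough(peaks, spindowns, f_step=0.01):
        """Vote each time-frequency peak (t, f) into the (f0, spindown) plane
        along f0 = f - sd * t; return the winning (spindown, f0, votes)."""
        acc = Counter()
        for t, f in peaks:
            for k, sd in enumerate(spindowns):
                acc[(k, round((f - sd * t) / f_step))] += 1
        (k, i), votes = acc.most_common(1)[0]
        return spindowns[k], i * f_step, votes
    ```

    Because voting uses only peak positions, not amplitudes, the incoherent step stays robust to the instrumental disturbances the data cleaning could not remove, which is the property the hierarchical approach relies on.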

  7. Application of a Hough Search for Continuous Gravitational Waves on Data from the Fifth LIGO Science Run

    NASA Technical Reports Server (NTRS)

    Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Accadia, T.; Adams, C.; Adams, T.; Adhikari, R. X.

    2014-01-01

    We report on an all-sky search for periodic gravitational waves in the frequency range 50-1000 Hz, with the first derivative of frequency in the range -8.9 × 10^-10 Hz/s to zero, in two years of data collected during LIGO's fifth science run. Our results employ a Hough transform technique, introducing a chi^2 test and an analysis of coincidences between the signal levels in years 1 and 2 of observations that offers a significant improvement in the product of strain sensitivity with compute cycles per data sample compared to previously published searches. Since our search yields no surviving candidates, we present results in the form of frequency-dependent, 95% confidence upper limits on the strain amplitude h_0. The most stringent upper limit from year 1 is 1.0 × 10^-24 in the 158.00-158.25 Hz band. In year 2, the most stringent upper limit is 8.9 × 10^-25 in the 146.50-146.75 Hz band. This improved detection pipeline, which is at least two orders of magnitude more computationally efficient than our flagship Einstein@Home search, will be important for 'quick-look' searches in the Advanced LIGO and Virgo detector era.

  8. Application of a Hough search for continuous gravitational waves on data from the fifth LIGO science run

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Adhikari, R. X.; Affeldt, C.; Agathos, M.; Aggarwal, N.; Aguiar, O. D.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, R. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C.; Areeda, J.; Ast, S.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Austin, L.; Aylott, B. E.; Babak, S.; Baker, P. T.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barker, D.; Barnum, S. H.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th S.; Bebronne, M.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Bergmann, G.; Berliner, J. M.; Bersanetti, D.; Bertolini, A.; Bessis, D.; Betzwieser, J.; Beyersdorf, P. T.; Bhadbhade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Blom, M.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogan, C.; Bond, C.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bowers, J.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brannen, C. A.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brückner, F.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Castiglia, A.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. 
S.; Chow, J.; Christensen, N.; Chu, Q.; Chua, S. S. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Colombini, M.; Constancio, M., Jr.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coulon, J.-P.; Countryman, S.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Craig, K.; Creighton, J. D. E.; Creighton, T. D.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Dal Canton, T.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Dayanga, T.; Debreczeni, G.; Degallaix, J.; Deleeuw, E.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; DeRosa, R. T.; De Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Díaz, M.; Dietz, A.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Dmitry, K.; Donovan, F.; Dooley, K. L.; Doravari, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edwards, M.; Effler, A.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; EndrHoczi, G.; Essick, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R.; Flaminio, R.; Foley, E.; Foley, S.; Forsi, E.; Fotopoulos, N.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P.; Fyffe, M.; Gair, J.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. 
D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gordon, N.; Gorodetsky, M. L.; Gossan, S.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Griffo, C.; Groot, P.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, B.; Hall, E.; Hammer, D.; Hammond, G.; Hanke, M.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haughian, K.; Hayama, K.; Heefner, J.; Heidmann, A.; Heintze, M.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Hong, T.; Hooper, S.; Horrom, T.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hu, Y.; Hua, Z.; Huang, V.; Huerta, E. A.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh-Dinh, T.; Iafrate, J.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Iyer, B. R.; Izumi, K.; Jacobson, M.; James, E.; Jang, H.; Jang, Y. J.; Jaranowski, P.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Koehlenbeck, S.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kremin, A.; Kringel, V.; Krishnan, B.; Królak, A.; Kucharczyk, C.; Kudla, S.; Kuehn, G.; Kumar, A.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Landry, M.; Lantz, B.; Larson, S.; Lasky, P. 
D.; Lawrie, C.; Leaci, P.; Lebigot, E. O.; Lee, C.-H.; Lee, H. K.; Lee, H. M.; Lee, J.; Lee, J.; Leonardi, M.; Leong, J. R.; Le Roux, A.; Leroy, N.; Letendre, N.; Levine, B.; Lewis, J. B.; Lhuillier, V.; Li, T. G. F.; Lin, A. C.; Littenberg, T. B.; Litvine, V.; Liu, F.; Liu, H.; Liu, Y.; Liu, Z.; Lloyd, D.; Lockerbie, N. A.; Lockett, V.; Lodhia, D.; Loew, K.; Logue, J.; Lombardi, A. L.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Luan, J.; Lubinski, M. J.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana-Sandoval, F.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Manca, G. M.; Mandel, I.; Mandic, V.; Mangano, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Martinelli, L.; Martynov, D.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; May, G.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Meier, T.; Melatos, A.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Mikhailov, E. E.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Mokler, F.; Moraru, D.; Moreno, G.; Morgado, N.; Mori, T.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nagy, M. F.; Nanda Kumar, D.; Nardecchia, I.; Nash, T.; Naticchioni, L.; Nayak, R.; Necula, V.; Nelemans, G.; Neri, I.; Neri, M.; Newton, G.; Nguyen, T.; Nishida, E.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. 
K.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oppermann, P.; O'Reilly, B.; Ortega Larcher, W.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Ou, J.; Overmier, H.; Owen, B. J.; Padilla, C.; Pai, A.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Paris, H.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Peiris, P.; Penn, S.; Perreca, A.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pinard, L.; Pindor, B.; Pinto, I. M.; Pitkin, M.; Poeld, J.; Poggiani, R.; Poole, V.; Poux, C.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quintero, E.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Raja, S.; Rajalakshmi, G.; Rakhmanov, M.; Ramet, C.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Robertson, N. A.; Robinet, F.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Roever, C.; Rolland, L.; Rollins, J. G.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sancho de la Jordana, L.; Sandberg, V.; Sanders, J.; Sannibale, V.; Santiago-Prieto, I.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schreiber, E.; Schuette, D.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siellez, K.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. 
R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Soden, K.; Son, E. J.; Sorazu, B.; Souradeep, T.; Sperandio, L.; Staley, A.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stevens, D.; Stochino, A.; Stone, R.; Strain, K. A.; Straniero, N.; Strigin, S.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Talukder, D.; Tang, L.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thirugnanasambandam, M. P.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Unnikrishnan, C. S.; Vahlbruch, H.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Heijningen, J.; van Veggel, A. A.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Verma, S.; Vetrano, F.; Viceré, A.; Vincent-Finley, R.; Vinet, J.-Y.; Vitale, S.; Vlcek, B.; Vo, T.; Vocca, H.; Vorvick, C.; Vousden, W. D.; Vrinceanu, D.; Vyachanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Walker, M.; Wallace, L.; Wan, Y.; Wang, J.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wibowo, S.; Wiesner, K.; Wilkinson, C.; Williams, L.; Williams, R.; Williams, T.; Willis, J. L.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yum, H.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zhu, H.; Zhu, X. 
J.; Zotov, N.; Zucker, M. E.; Zweizig, J.

    2014-04-01

    We report on an all-sky search for periodic gravitational waves in the frequency range 50-1000 Hz, with the first derivative of frequency in the range -8.9 × 10^-10 Hz s^-1 to zero, in two years of data collected during LIGO’s fifth science run. Our results employ a Hough transform technique, introducing a χ² test and an analysis of coincidences between the signal levels in years 1 and 2 of observations, which offers a significant improvement in the product of strain sensitivity with compute cycles per data sample compared to previously published searches. Since our search yields no surviving candidates, we present frequency-dependent, 95% confidence upper limits on the strain amplitude h0. The most stringent upper limit from year 1 is 1.0 × 10^-24 in the 158.00-158.25 Hz band. In year 2, the most stringent upper limit is 8.9 × 10^-25 in the 146.50-146.75 Hz band. This improved detection pipeline, which is computationally more efficient than our flagship Einstein@Home search by at least two orders of magnitude, will be important for ‘quick-look’ searches in the Advanced LIGO and Virgo detector era.

  9. Comparison of two hardware-based hit filtering methods for trackers in high-pileup environments

    NASA Astrophysics Data System (ADS)

    Gradin, J.; Mårtensson, M.; Brenner, R.

    2018-04-01

    As experiments in high energy physics aim to measure increasingly rare processes, the experiments continually strive to increase the expected signal yields. In the case of the High Luminosity upgrade of the LHC, the luminosity is raised by increasing the number of simultaneous proton-proton interactions, so-called pile-up. This increases the expected yields of signal and background processes alike. The signal is embedded in a large background of processes that mimic signal events. It is therefore imperative for the experiments to develop new triggering methods to effectively distinguish the interesting events from the background. We present a comparison of two methods for filtering detector hits to be used for triggering on particle tracks: one based on a pattern matching technique using Associative Memory (AM) chips and the other based on the Hough transform. Their efficiency and hit rejection are evaluated for proton-proton collisions with varying amounts of pile-up using a simulation of a generic silicon tracking detector. It is found that, while both methods are feasible options for a track trigger with single-muon efficiencies around 98–99%, the AM-based pattern matching produces a lower number of hit combinations than the Hough transform while keeping more of the true signal hits. We also present the effect on the two methods of increasing the amount of support material in the detector and of introducing inefficiencies by deactivating detector modules. The increased support material has negligible effects on the efficiency for both methods, while dropping 5% (10%) of the available modules decreases the efficiency to about 95% (87%) for both methods, irrespective of the amount of pile-up.
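
    A Hough-based hit filter of the kind compared above can be sketched in a few lines: hits vote in a (ρ, θ) accumulator for straight lines, and only hits contributing to a well-populated bin are kept. This is an illustrative simplification (straight 2D tracks, hand-chosen binning and thresholds), not the paper's trigger simulation.

```python
import numpy as np

def hough_filter_hits(hits, n_theta=180, n_rho=100, rho_max=10.0, min_votes=5):
    """Keep only hits that vote for a well-populated (rho, theta) bin,
    i.e. hits compatible with a straight track through the detector."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    hit_bins = []
    for x, y in hits:
        # Normal form of a line: rho = x cos(theta) + y sin(theta)
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.clip(((rho + rho_max) / (2 * rho_max) * n_rho).astype(int),
                       0, n_rho - 1)
        hit_bins.append(bins)
        acc[bins, np.arange(n_theta)] += 1
    return [i for i, bins in enumerate(hit_bins)
            if acc[bins, np.arange(n_theta)].max() >= min_votes]

# Eight hits on the straight track y = 0.5 x, plus two isolated noise hits.
track = [(x, 0.5 * x) for x in range(1, 9)]
noise = [(1.0, 7.0), (8.0, 1.0)]
kept = hough_filter_hits(track + noise, min_votes=6)
```

    The track hits share an accumulator bin and survive; the isolated noise hits never reach the vote threshold and are rejected.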

  10. Munitions related feature extraction from LIDAR data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Barry L.

    2010-06-01

    The characterization of former military munitions ranges is critical in the identification of areas likely to contain residual unexploded ordnance (UXO). Although these ranges are large, often covering tens-of-thousands of acres, the actual target areas represent only a small fraction of the sites. The challenge is that many of these sites do not have records indicating locations of former target areas. The identification of target areas is critical in the characterization and remediation of these sites. The Strategic Environmental Research and Development Program (SERDP) and Environmental Security Technology Certification Program (ESTCP) of the DoD have been developing and implementing techniques for the efficient characterization of large munitions ranges. As part of this process, high-resolution LIDAR terrain data sets have been collected over several former ranges. These data sets have been shown to contain information relating to former munitions usage at these ranges, specifically terrain cratering due to high-explosives detonations. The location and relative intensity of crater features can provide information critical in reconstructing the usage history of a range, and indicate areas most likely to contain UXO. We have developed an automated procedure using an adaptation of the Circular Hough Transform for the identification of crater features in LIDAR terrain data. The Circular Hough Transform is highly adept at finding circular features (craters) in noisy terrain data sets. This technique has the ability to find features of a specific radius providing a means of filtering features based on expected scale and providing additional spatial characterization of the identified feature. This method of automated crater identification has been applied to several former munitions ranges with positive results.
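
    The fixed-radius Circular Hough Transform at the core of such crater detection can be sketched as follows: every edge point votes for all candidate centres lying at the known radius, and the accumulator peak gives the crater centre. The synthetic rim below is a toy illustration of the general technique, not the SERDP/ESTCP processing chain.

```python
import numpy as np

def circular_hough(edge_points, shape, radius, n_angles=90):
    """Accumulate votes for circle centres of a fixed radius: each edge
    point votes for every centre at distance `radius` from it."""
    acc = np.zeros(shape, dtype=int)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for x, y in edge_points:
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        ok = (cx >= 0) & (cx < shape[0]) & (cy >= 0) & (cy < shape[1])
        np.add.at(acc, (cx[ok], cy[ok]), 1)  # unbuffered: repeats accumulate
    return np.unravel_index(np.argmax(acc), shape)

# Synthetic "crater rim": 60 points on a circle of radius 8 centred at (20, 25).
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
rim = list(zip(np.round(20 + 8 * np.cos(t)).astype(int),
               np.round(25 + 8 * np.sin(t)).astype(int)))
center = circular_hough(rim, (50, 50), radius=8)
```

    Scanning `radius` over a range of plausible crater sizes recovers the scale-filtering behaviour described in the abstract.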

  11. Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Maas, Christian; Schmalzl, Jörg

    2013-08-01

    Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape depending on the depth and material of the object and the surrounding material. To obtain these parameters, the shape of the hyperbola has to be fitted. In recent years several methods have been developed to automate this task during post-processing. In this paper we show another approach for the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real-time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open source library "OpenCV". This algorithm was initially developed for face recognition, but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and the velocity of the hyperbolas we apply a simple Hough transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input to the computationally expensive Hough transform, the detection system can also be implemented on normal field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images, more data from different GPR systems are needed as input for the learning algorithm.
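
    A Hough transform for hyperbolas can be sketched as a brute-force three-parameter vote: each sample (x, t) implies, for every candidate apex position x0 and velocity v, an apex time t0 via t² = t0² + (2(x − x0)/v)². The grids, ranges and units below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hyperbola_hough(points, x0s, vs, t0s):
    """Vote in an (x0, v, t0) accumulator using the GPR hyperbola model
    t^2 = t0^2 + (2 (x - x0) / v)^2 and return the peak parameters."""
    acc = np.zeros((len(x0s), len(vs), len(t0s)), dtype=int)
    for x, t in points:
        for i, x0 in enumerate(x0s):
            for j, v in enumerate(vs):
                t0_sq = t ** 2 - (2.0 * (x - x0) / v) ** 2
                if t0_sq <= 0:
                    continue  # this candidate cannot produce the observed time
                k = int(np.argmin(np.abs(t0s - np.sqrt(t0_sq))))
                acc[i, j, k] += 1
    i, j, k = np.unravel_index(np.argmax(acc), acc.shape)
    return x0s[i], vs[j], t0s[k]

# Synthetic hyperbola: object at x0 = 5 m, apex time t0 = 20 ns, v = 0.1 m/ns.
xs = np.linspace(3, 7, 21)
pts = [(x, np.sqrt(20.0 ** 2 + (2.0 * (x - 5.0) / 0.1) ** 2)) for x in xs]
x0, v, t0 = hyperbola_hough(pts,
                            np.linspace(4, 6, 21),        # candidate positions
                            np.linspace(0.05, 0.25, 21),  # candidate velocities
                            np.linspace(15, 25, 21))      # candidate apex times
```

    Restricting `points` to the image regions flagged by the Viola-Jones detector is what keeps this brute-force vote affordable.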

  12. Automatic drawing for traffic marking with MMS LIDAR intensity

    NASA Astrophysics Data System (ADS)

    Takahashi, G.; Takeda, H.; Shimano, Y.

    2014-05-01

    Upgrading the database of CYBER JAPAN has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms a framework in this database. Road inventory mapping therefore has to be accurate and free of the variation caused by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed requires an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing for mapping traffic markings. However, this method is not sufficiently efficient in terms of the required productivity, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings from MMS LIDAR data. The key idea of the method is extracting lines with a Hough transform strategically focused on changes in local reflection intensity along scan lines; note, however, that this method processes every traffic marking. In this paper, we discuss a highly accurate method that does not depend on individual human operators and applies the following steps: (1) binarizing LIDAR points by intensity and extracting higher-intensity points; (2) generating a Triangulated Irregular Network (TIN) from the higher-intensity points; (3) deleting arcs by length and generating outline polygons on the TIN; (4) generating buffers from the outline polygons; (5) extracting points within the buffers from the original LIDAR points; (6) extracting local-intensity-change points along scan lines from the extracted points; (7) extracting lines from the intensity-change points with a Hough transform; and (8) connecting lines to generate automated traffic marking mapping data.
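
    Step (6), extracting local-intensity-change points along a scan line, can be sketched as a binarize-and-diff over one scan line. The intensity values and threshold below are illustrative assumptions, not MMS specifications.

```python
import numpy as np

def intensity_change_points(scan, threshold):
    """Return the indices along one scan line where the binarized
    intensity flips, i.e. the edges of painted traffic markings."""
    binary = (np.asarray(scan) >= threshold).astype(int)
    # diff is nonzero one sample before each flip; +1 points at the flip itself
    return list(np.flatnonzero(np.diff(binary)) + 1)

# Simulated scan line: asphalt returns ~10, painted marking returns ~200.
scan = [10, 12, 11, 198, 205, 201, 199, 12, 9, 10]
edges = intensity_change_points(scan, threshold=100)
```

    The two returned indices (the marking's start and end along the scan line) are the points that would then be fed to the Hough transform in step (7).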

  13. Computerized organ localization in abdominal CT volume with context-driven generalized Hough transform

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Li, Qiang

    2014-03-01

    Fast localization of organs is a key step in computer-aided detection of lesions and in image-guided radiation therapy. We developed a context-driven Generalized Hough Transform (GHT) for robust localization of organs of interest (OOIs) in a CT volume. Conventional GHT locates the center of an organ by looking up the center locations of pre-learned organs with "matching" edges. It often suffers from mislocalization because "similar" edges in the vicinity may attract the pre-learned organs towards wrong places. The proposed method not only uses information from the organ's own shape but also takes advantage of nearby "similar" edge structures. First, multiple co-existing GHT look-up tables (cLUTs) were constructed from a set of training shapes of different organs. Each cLUT represented the spatial relationship between the center of the OOI and the shape of a co-existing organ. Second, the OOI center in a test image was determined using GHT with each cLUT separately. Third, the final localization of the OOI was based on a weighted combination of the centers obtained in the second stage. The training set consisted of 10 CT volumes with manually segmented OOIs including the liver, spleen and kidneys. The method was tested on a set of 25 abdominal CT scans. The context-driven GHT correctly located all OOIs in the test images and gave localization errors of 19.5±9.0, 12.8±7.3, 9.4±4.6 and 8.6±4.1 mm for the liver, spleen, left and right kidney, respectively. Conventional GHT mislocated 8 out of 100 organs, and its localization errors were 26.0±32.6, 14.1±10.6, 30.1±42.6 and 23.6±39.7 mm for the liver, spleen, left and right kidney, respectively.
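
    The conventional GHT underlying this method can be sketched in 2D: training builds an R-table of displacements from boundary points to the reference point, indexed by quantised edge orientation, and detection lets each edge point vote through that table. The circular toy "organ" and its sizes are assumptions for illustration, not the paper's anatomy or data.

```python
import numpy as np
from collections import defaultdict

def build_r_table(boundary, center, n_bins=8):
    """R-table: displacements from boundary points to the reference
    point, indexed by quantised edge orientation."""
    table = defaultdict(list)
    for (x, y), theta in boundary:
        b = int(theta / (2 * np.pi) * n_bins) % n_bins
        table[b].append((center[0] - x, center[1] - y))
    return table

def ght_locate(edges, table, shape, n_bins=8):
    """Each edge point votes, via its orientation bin of the R-table,
    for candidate reference points; the accumulator peak is the centre."""
    acc = np.zeros(shape, dtype=int)
    for (x, y), theta in edges:
        b = int(theta / (2 * np.pi) * n_bins) % n_bins
        for dx, dy in table.get(b, []):
            cx, cy = x + dx, y + dy
            if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
                acc[cx, cy] += 1
    return np.unravel_index(np.argmax(acc), shape)

# Train on a toy circular "organ" of radius 5 centred at (10, 10); the
# edge orientation of each boundary point is its radial angle.
angles = np.linspace(0, 2 * np.pi, 24, endpoint=False)
train = [((int(round(10 + 5 * np.cos(a))), int(round(10 + 5 * np.sin(a)))), a)
         for a in angles]
table = build_r_table(train, (10, 10))
# Locate the same shape translated to centre (30, 25).
edges = [((x + 20, y + 15), a) for (x, y), a in train]
center = ght_locate(edges, table, (50, 50))
```

    The context-driven variant in the abstract builds one such table per co-existing organ and combines the resulting peaks with weights.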

  14. A new method of inshore ship detection in high-resolution optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Hu, Qifeng; Du, Yaling; Jiang, Yunqiu; Ming, Delie

    2015-10-01

    Ships are important military targets and a major means of water transportation, so their detection has great significance. In the military field, automatic ship detection can be used to monitor ship activity in enemy harbors and maritime areas, and then to analyze enemy naval power. In the civilian field, it can be used to monitor harbor traffic and illegal behavior such as illegal fishing, smuggling and piracy. In recent years, research on ship detection has mainly concentrated on three categories: forward-looking infrared images, downward-looking SAR images, and optical remote sensing images with sea background. Little research has been done on ship detection in optical remote sensing images with harbor background, as the gray-scale and texture features of ships are similar to those of the coast in high-resolution optical remote sensing images. In this paper, we put forward an effective harbor ship target detection method. First of all, in order to overcome the shortage of the traditional difference method in obtaining the histogram valley as the segmentation threshold, we propose an iterative histogram-valley segmentation method which separates the harbor and ships from the water quite well. Secondly, as landing ships in optical remote sensing images usually lead to discontinuous harbor edges, we use the Hough Transform to extract harbor edges: lines are first detected by the Hough Transform, and lines that have similar slope are then connected into a new line, giving continuous harbor edges. By applying a secondary segmentation to the result of the land-sea separation, we eventually obtain the ships. At last, we calculate the aspect ratio of the ROIs, thereby removing those targets which are not ships. The experimental results show that our method has good robustness and can tolerate a certain degree of noise and occlusion.
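
    The step of connecting Hough lines with similar slope into a continuous harbor edge can be sketched with a simple slope/intercept grouping. The segment representation and tolerances are assumptions for illustration, not values from the paper.

```python
def merge_similar_lines(lines, slope_tol=0.05, intercept_tol=2.0):
    """Merge (slope, intercept, x_start, x_end) segments whose slope and
    intercept agree within tolerances into one continuous segment."""
    merged = []
    for m, b, x0, x1 in sorted(lines):
        for seg in merged:
            if abs(seg[0] - m) < slope_tol and abs(seg[1] - b) < intercept_tol:
                seg[2] = min(seg[2], x0)  # extend the merged segment's span
                seg[3] = max(seg[3], x1)
                break
        else:
            merged.append([m, b, x0, x1])
    return merged

# Two broken pieces of the same harbor edge plus one unrelated line.
segments = [(0.50, 3.0, 0, 10), (0.52, 3.5, 15, 30), (2.00, -1.0, 0, 5)]
merged = merge_similar_lines(segments)
```

    The first two segments merge into one edge spanning x in [0, 30], closing the gap left by a landing ship.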

  15. A novel ship CFAR detection algorithm based on adaptive parameter enhancement and wake-aided detection in SAR images

    NASA Astrophysics Data System (ADS)

    Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun

    2018-03-01

    Synthetic aperture radar (SAR) is an indispensable and useful method for marine monitoring. With the increase in SAR sensors, high-resolution images can be acquired that contain more target structure information, such as spatial details. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) algorithm to highlight targets. The whole method operates on the APT domain values. Firstly, the image is mapped into the new transform domain by the algorithm. Secondly, false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Thirdly, the ship pixels are replaced by homogeneous sea pixels, and the enhanced image is processed by the Niblack algorithm to obtain a binary wake image. Finally, the normalized Hough transform (NHT) is used to detect wakes in the binary image as a verification of the presence of the ships. Experiments on real SAR images validate that the proposed transform does enhance the target structure and improve the contrast of the image. The algorithm shows good performance in ship and ship-wake detection.
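
    The CFAR screening stage can be illustrated with a one-dimensional cell-averaging CFAR, a textbook variant rather than the paper's APT-domain detector: each cell is compared against a multiple of the mean of its training cells, with guard cells excluded so that the target does not contaminate its own clutter estimate.

```python
import numpy as np

def ca_cfar_1d(signal, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: declare a detection where a cell exceeds
    `scale` times the mean of the surrounding training cells, with
    `guard` cells on each side excluded from the estimate."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    hits = []
    for i in range(n):
        left = signal[max(0, i - guard - train):max(0, i - guard)]
        right = signal[min(n, i + guard + 1):min(n, i + guard + train + 1)]
        cells = np.concatenate([left, right])
        if cells.size and signal[i] > scale * cells.mean():
            hits.append(i)
    return hits

# Homogeneous clutter in [0.5, 1.5] with one bright scatterer at index 20.
rng = np.random.default_rng(0)
clutter = rng.uniform(0.5, 1.5, 40)
clutter[20] = 12.0
hits = ca_cfar_1d(clutter)
```

    Because the threshold adapts to the local clutter mean, the false alarm rate stays constant across regions of different background intensity.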

  16. Paraxial diffractive elements for space-variant linear transforms

    NASA Astrophysics Data System (ADS)

    Teiwes, Stephan; Schwarzer, Heiko; Gu, Ben-Yuan

    1998-06-01

    Optical linear transform architectures bear good potential for future developments of very powerful hybrid vision systems and neural network classifiers. The optical modules of such systems could be used as pre-processors to solve complex linear operations at very high speed in order to simplify the electronic data post-processing. However, the applicability of linear optical architectures is strongly connected with the fundamental question of how to implement a specific linear transform by optical means within physical limitations. The large majority of publications on this topic focuses on the optical implementation of space-invariant transforms by the well-known 4f-setup. Only a few papers deal with approaches to implement selected space-variant transforms. In this paper, we propose a simple algebraic method to design diffractive elements for an optical architecture in order to realize arbitrary space-variant transforms. The design procedure is based on a digital model of scalar, paraxial wave theory and leads to optimal element transmission functions within the model. Its computational and physical limitations are discussed in terms of complexity measures. Finally, the design procedure is demonstrated by some examples. Firstly, diffractive elements for the realization of different rotation operations are computed and, secondly, a Hough transform element is presented. The correct optical functions of the elements are proved in computer simulation experiments.

  17. Interior Reconstruction Using the 3d Hough Transform

    NASA Astrophysics Data System (ADS)

    Dumitru, R.-C.; Borrmann, D.; Nüchter, A.

    2013-02-01

    Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time-consuming and error-prone (Adan and Huber, 2011). Therefore, the need arises to characterize and quantify complex environments in an automatic fashion, posing challenges for data analysis. This paper presents a system for 3D modeling by detecting planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
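
    Plane detection with a 3D Hough transform can be sketched as voting over candidate normal directions (θ, φ) and the signed plane distance ρ = p·n. The discretisation and the synthetic point cloud below are toy choices, not the paper's parameters.

```python
import numpy as np

def hough_planes(points, n_theta=30, n_phi=30, n_rho=50, rho_max=20.0):
    """3D Hough transform for planes: every point votes, for each
    candidate normal direction (theta, phi), for the signed distance
    rho = p . n; the accumulator peak identifies the dominant plane."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0, np.pi, n_phi, endpoint=False)
    # Precompute all candidate unit normals n(theta, phi).
    normals = np.array([[(np.cos(t) * np.sin(ph),
                          np.sin(t) * np.sin(ph),
                          np.cos(ph)) for ph in phis] for t in thetas])
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=int)
    for p in points:
        rho = normals @ np.asarray(p, dtype=float)        # (n_theta, n_phi)
        k = ((rho + rho_max) / (2 * rho_max) * n_rho).astype(int)
        ok = (k >= 0) & (k < n_rho)
        ti, pj = np.nonzero(ok)
        np.add.at(acc, (ti, pj, k[ok]), 1)
    i, j, k = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[i], phis[j], -rho_max + (k + 0.5) * (2 * rho_max / n_rho)

# 64 points on the horizontal plane z = 5 (normal (0, 0, 1), i.e. phi = 0).
pts = [(x, y, 5.0) for x in range(8) for y in range(8)]
theta, phi, rho = hough_planes(pts)
```

    In an interior scene, successive peaks of this accumulator correspond to floors, ceilings and walls.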

  18. Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals

    PubMed Central

    Zhao, Ziyue; Liu, Congfeng

    2014-01-01

    In the study of the joint estimation of the time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. First, an array signal model for multicomponent chirp signals is presented, and array processing is applied in time-frequency analysis to mitigate cross-terms. Based on the results of the array processing, a Hough transform is performed and the estimate of the time-frequency signature is obtained. Subsequently, a subspace method for DOA estimation based on the STFD matrix is derived. Simulation results demonstrate the validity of the proposed method. PMID:27382610

  20. Iris segmentation using an edge detector based on fuzzy sets theory and cellular learning automata.

    PubMed

    Ghanizadeh, Afshin; Abarghouei, Amir Atapour; Sinaie, Saman; Saad, Puteh; Shamsuddin, Siti Mariyam

    2011-07-01

    Iris-based biometric systems identify individuals based on the characteristics of their iris, since these are proven to remain unique for a long time. An iris recognition system includes four phases, the most important of which is preprocessing, in which iris segmentation is performed. The accuracy of an iris biometric system critically depends on the segmentation system. In this paper, an iris segmentation system using edge detection techniques and Hough transforms is presented. The newly proposed edge detection system enhances the performance of the segmentation so that it performs much more efficiently than other conventional iris segmentation methods.

  1. Automated detection of jet contrails using the AVHRR split window

    NASA Technical Reports Server (NTRS)

    Engelstad, M.; Sengupta, S. K.; Lee, T.; Welch, R. M.

    1992-01-01

    This paper investigates the automated detection of jet contrails using data from the Advanced Very High Resolution Radiometer. A preliminary algorithm subtracts the 11.8-micron image from the 10.8-micron image, creating a difference image on which contrails are enhanced. Then a three-stage algorithm searches the difference image for the nearly-straight line segments which characterize contrails. First, the algorithm searches for elevated, linear patterns called 'ridges'. Second, it applies a Hough transform to the detected ridges to locate nearly-straight lines. Third, the algorithm determines which of the nearly-straight lines are likely to be contrails. The paper applies this technique to several test scenes.

  2. Image processing for safety assessment in civil engineering.

    PubMed

    Ferrer, Belen; Pomares, Juan C; Irles, Ramon; Espinosa, Julian; Mas, David

    2013-06-20

    Behavior analysis of construction safety systems is of fundamental importance to avoid accidental injuries. Traditionally, measurements of dynamic actions in civil engineering have been done through accelerometers, but high-speed cameras and image processing techniques can play an important role in this area. Here, we propose using morphological image filtering and Hough transform on high-speed video sequence as tools for dynamic measurements on that field. The presented method is applied to obtain the trajectory and acceleration of a cylindrical ballast falling from a building and trapped by a thread net. Results show that safety recommendations given in construction codes can be potentially dangerous for workers.

  3. Security Quality Requirements Engineering (SQUARE): Case Study Phase III

    DTIC Science & Technology

    2006-05-01

    Security Quality Requirements Engineering (SQUARE): Case Study Phase III. CMU/SEI-2006-SR-003. Lydia Chung, Frank Hung, Eric Hough, Don Ojoko-Adams; Advisor: Nancy R. Mead.

  4. Iris Segmentation and Normalization Algorithm Based on Zigzag Collarette

    NASA Astrophysics Data System (ADS)

    Rizky Faundra, M.; Ratna Sulistyaningrum, Dwi

    2017-01-01

    In this paper, we propose an iris segmentation and normalization algorithm based on the zigzag collarette. First of all, iris images are processed using Canny edge detection to detect the pupil edge, and the center and radius of the pupil are then found with the Hough transform for circles. Next, the important part of the iris is isolated based on the zigzag collarette area. Finally, the Daugman rubber sheet model is applied to obtain a normalized iris of fixed dimensions by transforming Cartesian into polar format, and a thresholding technique is used to remove the eyelid and eyelashes. The experiments are conducted on grayscale eye images taken from the iris database of the Chinese Academy of Sciences Institute of Automation (CASIA), which is reliable and widely used to study iris biometrics. The results show that a threshold level of 0.3 yields better accuracy than the others, so the presented algorithm can be used to segment and normalize the zigzag collarette with an accuracy of 98.88%.
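
    The Daugman rubber sheet step can be sketched as a nearest-neighbour polar unwrapping of the annulus between the pupil and iris circles. For simplicity the sketch assumes concentric circles (the full model allows displaced centres), and the synthetic eye image is an illustrative assumption.

```python
import numpy as np

def rubber_sheet(image, pupil_center, pupil_radius, iris_radius,
                 n_radial=16, n_angular=64):
    """Simplified Daugman rubber-sheet model: sample the iris annulus on
    a fixed (r, theta) grid, producing a rectangular normalised image
    that is independent of pupil dilation."""
    cy, cx = pupil_center
    out = np.zeros((n_radial, n_angular), dtype=image.dtype)
    for i in range(n_radial):
        # Radius of this ring, between the pupil and iris boundaries.
        r = pupil_radius + (i + 0.5) / n_radial * (iris_radius - pupil_radius)
        for j in range(n_angular):
            th = 2 * np.pi * j / n_angular
            y = int(round(cy + r * np.sin(th)))
            x = int(round(cx + r * np.cos(th)))
            if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                out[i, j] = image[y, x]
    return out

# Synthetic eye: dark pupil (radius 10) inside a brighter iris ring (radius 30).
img = np.full((100, 100), 50, dtype=np.uint8)   # background
yy, xx = np.ogrid[:100, :100]
d = np.sqrt((yy - 50) ** 2 + (xx - 50) ** 2)
img[d < 30] = 120   # iris
img[d < 10] = 5     # pupil
norm = rubber_sheet(img, (50, 50), pupil_radius=10, iris_radius=30)
```

    Thresholding the normalized strip (as with the 0.3 level in the abstract) then masks out eyelid and eyelash occlusions.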

  5. Pattern recognition of concrete surface cracks and defects using integrated image processing algorithms

    NASA Astrophysics Data System (ADS)

    Balbin, Jessie R.; Hortinela, Carlos C.; Garcia, Ramon G.; Baylon, Sunnycille; Ignacio, Alexander Joshua; Rivera, Marco Antonio; Sebastian, Jaimie

    2017-06-01

    Pattern recognition of concrete surface cracks and defects is very important in determining the stability of structures such as buildings, roads or bridges. Surface cracking is one of the subjects of inspection, diagnosis and maintenance, as well as of life prediction for the safety of structures. Traditionally, determining defects and cracks on concrete surfaces is done manually by inspection. Moreover, any internal defects in the concrete would require destructive testing for detection. The researchers created an automated surface crack detection system for concrete using image processing techniques including the Hough transform, LoG weighting, dilation, grayscale conversion, Canny edge detection and the Haar wavelet transform. An automatic surface crack detection robot is designed to capture the concrete surface by a sectoring method. Surface crack classification was done with the use of a Haar-trained cascade object detector that uses both positive and negative samples, which proved that it is possible to effectively identify surface crack defects.

  6. Fully automatic segmentation of the femur from 3D-CT images using primitive shape recognition and statistical shape models.

    PubMed

    Ben Younes, Lassad; Nakajima, Yoshikazu; Saito, Toki

    2014-03-01

    Femur segmentation is well established and widely used in computer-assisted orthopedic surgery. However, most of the robust segmentation methods such as statistical shape models (SSM) require human intervention to provide an initial position for the SSM. In this paper, we propose to overcome this problem and provide a fully automatic femur segmentation method for CT images based on primitive shape recognition and SSM. Femur segmentation in CT scans was performed using primitive shape recognition based on a robust algorithm such as the Hough transform and RANdom SAmple Consensus. The proposed method is divided into 3 steps: (1) detection of the femoral head as sphere and the femoral shaft as cylinder in the SSM and the CT images, (2) rigid registration between primitives of SSM and CT image to initialize the SSM into the CT image, and (3) fitting of the SSM to the CT image edge using an affine transformation followed by a nonlinear fitting. The automated method provided good results even with a high number of outliers. The difference of segmentation error between the proposed automatic initialization method and a manual initialization method is less than 1 mm. The proposed method detects primitive shape position to initialize the SSM into the target image. Based on primitive shapes, this method overcomes the problem of inter-patient variability. Moreover, the results demonstrate that our method of primitive shape recognition can be used for 3D SSM initialization to achieve fully automatic segmentation of the femur.

  7. Reply to “Comment on ‘Ground motions from the 2015 Mw 7.8 Gorkha, Nepal, earthquake constrained by a detailed assessment of macroseismic data’ by Stacey S. Martin, Susan E. Hough, and Charleen Hung” by Andrea Tertulliani, Laura Graziani, Corrado Castellano, Alessandra Maramai, and Antonio Rossi

    USGS Publications Warehouse

    Martin, Stacey S.; Hough, Susan E.

    2016-01-01

    We thank Andrea Tertulliani and his colleagues for their interest in our article on the 2015 Gorkha earthquake (Martin, Hough, et al., 2015), and for their comments pertaining to our study (Tertulliani et al., 2016). Indeed, as they note, a comprehensive assessment of macroseismic effects for an earthquake with far‐reaching effects as that of Gorkha is not only critically important but is also an extremely difficult undertaking. In the absence of a widely known web‐based system, employing a well‐calibrated algorithm with which to collect and systematically assess macroseismic information (e.g., Wald et al., 1999; Coppola et al., 2010; Bossu et al., 2015) in the Indian subcontinent, one is left with two approaches to characterize effects of an event such as the Gorkha earthquake: a comprehensive ground‐based survey such as the one undertaken in India following the 2001 Bhuj earthquake (Pande and Kayal, 2003), or an assessment such as Martin, Hough, et al. (2015) akin to other contemporary studies (e.g., Nuttli, 1973; Sieh, 1978; Meltzner and Wald, 1998; Martin and Szeliga, 2010; Ambraseys and Bilham, 2012; Mahajan et al., 2012; Gupta et al., 2013; Singh et al., 2013; Hough and Martin, 2015; Martin and Hough, 2015; Martin, Bradley, et al., 2015; Ribeiro et al., 2015), based primarily upon media reports and other available documentary accounts.

  8. 3D Forest: An application for descriptions of three-dimensional forest structures using terrestrial LiDAR

    PubMed Central

    Krůček, Martin; Vrška, Tomáš; Král, Kamil

    2017-01-01

    Terrestrial laser scanning is a powerful technology for capturing the three-dimensional structure of forests with a high level of detail and accuracy. Over the last decade, many algorithms have been developed to extract various tree parameters from terrestrial laser scanning data. Here we present 3D Forest, an open-source, platform-independent software application with an easy-to-use graphical user interface and a compilation of algorithms focused on the forest environment and the extraction of tree parameters. The current version (0.42) extracts important parameters of forest structure from terrestrial laser scanning data, such as stem positions (X, Y, Z), tree heights, diameters at breast height (DBH), as well as more advanced parameters such as tree planar projections, stem profiles or detailed crown parameters including convex and concave crown surface and volume. Moreover, 3D Forest provides quantitative measures of between-crown interactions and their real arrangement in 3D space. 3D Forest also includes an original algorithm for automatic tree segmentation and crown segmentation. Comparison with field data measurements showed no significant difference in measuring DBH or tree height using 3D Forest, although for DBH only the Randomized Hough Transform algorithm proved to be sufficiently resistant to noise and provided results comparable to traditional field measurements. PMID:28472167
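Stem DBH extraction with the Randomized Hough Transform amounts to fitting a circle to a horizontal slice of stem points: each trial samples three points, solves for their circumcircle, and votes in a quantized (cx, cy, r) space, so the peak survives noise that defeats a direct least-squares fit. A minimal pure-Python sketch of that idea (the point slice, sample count, and bin width below are hypothetical, not 3D Forest's actual parameters):

```python
import random
from collections import Counter

def circle_from_3pts(p1, p2, p3):
    """Circumcircle of three points; returns (cx, cy, r) or None if collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-9:
        return None
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = ((x1 - ux)**2 + (y1 - uy)**2) ** 0.5
    return ux, uy, r

def rht_circle(points, n_samples=2000, quant=0.5, seed=0):
    """Randomized Hough Transform: vote for quantized (cx, cy, r) triples."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        c = circle_from_3pts(*rng.sample(points, 3))
        if c is not None:
            votes[tuple(round(v / quant) * quant for v in c)] += 1
    return votes.most_common(1)[0][0]
```

With a noisy slice, inlier triples all vote for the same quantized bin while triples containing off-stem points scatter across the parameter space, which is why the peak stays stable even with a sizeable fraction of outliers.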

  9. The research of edge extraction and target recognition based on inherent feature of objects

    NASA Astrophysics Data System (ADS)

    Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo

    2008-03-01

    Current research on computer vision often needs specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions, and pays little attention to the speed of processing. In order to extract and recognize targets intelligently and rapidly, in this paper we developed a new 3D target recognition method based on the inherent features of objects, in which a cuboid was taken as the model. On the basis of an analysis of the cuboid's natural contour and grey-level distribution characteristics, an overall fuzzy evaluation technique was utilized to recognize and segment the target. The Hough transform was then used to extract and match the model's main edges, and finally the target edges were reconstructed by stereo techniques. There are three major contributions in this paper. Firstly, the corresponding relations between the parameters of the cuboid model's straight edge lines in the image field and in the transform field were summarized; with these, the aimless computations and searches in Hough transform processing can be greatly reduced and the efficiency improved. Secondly, since prior knowledge of the cuboid contour's geometry is available, the intersections of the extracted component edges are taken, and the geometry of candidate edge matches is assessed from these intersections rather than from the extracted edges alone. The outlines are therefore enhanced and the noise is suppressed. Finally, a 3D target recognition method is proposed. Compared with other recognition methods, this new method has a quick response time and can be achieved with high-level computer vision.
The method presented here can be widely used in vision-guidance techniques to strengthen their intelligence and generality, and can also play an important role in object tracking, port AGVs, and robotics. The results of simulation experiments and theoretical analysis demonstrate that the proposed method can suppress noise effectively, extract target edges robustly, and meet real-time requirements. Theoretical analysis and experiments show the method to be reasonable and efficient.
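The edge extraction and matching above builds on the standard Hough line parameterization ρ = x·cosθ + y·sinθ, in which every edge point votes for all (θ, ρ) cells it could lie on; restricting which cells are voted, using directional or geometric knowledge, is exactly what removes the "aimless" computation. A toy pure-Python accumulator (resolution and edge points are illustrative only, not the paper's setup):

```python
import math

def hough_lines(edge_points, n_theta=180, rho_step=1.0, rho_max=200.0):
    """Vote each edge point into a (theta, rho) accumulator; return the peak."""
    n_rho = int(2 * rho_max / rho_step) + 1
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in edge_points:
        for ti in range(n_theta):
            theta = ti * math.pi / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            ri = int(round((rho + rho_max) / rho_step))
            if 0 <= ri < n_rho:
                acc[ti][ri] += 1
    # strongest cell -> dominant line parameters
    ti, ri = max(((t, r) for t in range(n_theta) for r in range(n_rho)),
                 key=lambda c: acc[c[0]][c[1]])
    return ti * math.pi / n_theta, ri * rho_step - rho_max, acc[ti][ri]
```

Collinear points accumulate in a single cell, so the peak's vote count equals the number of points supporting the line; pruning the θ range per point (as directional edge information allows) shrinks the inner loop directly.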

  10. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. Combining the two in real-world applications is very challenging, because more accurate tracking often requires longer processing times, while quicker responses are more prone to errors, so a trade-off between accuracy and speed is required. This paper aims to meet both requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that automatically tracks a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) together with the CHT algorithm to find the proper colour of the tracked target; the target was attached to the end-effector of a six-degree-of-freedom (DOF) robot performing a pick-and-place task. Two eye-to-hand cameras with image-averaging filters are used to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during object tracking. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix.
The results obtained show the applicability of the proposed approach for tracking the moving robot, with an overall tracking error of 0.25 mm. They also show the effectiveness of the CRCHT technique, which saves up to 60% of the overall time required for image processing. PMID:28067860
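For a spherical target of roughly known image radius, the CHT reduces to centre voting: every edge pixel votes for all candidate centres one radius away, and the accumulator peak gives the ball centre. A pure-Python illustration of that base scheme (the radius, angular sampling, and 1-pixel quantization are arbitrary toy choices, not the paper's enhanced CHT or CRCHT):

```python
import math
from collections import Counter

def cht_fixed_radius(edge_points, radius, n_angles=360, quant=1.0):
    """Each edge pixel votes for candidate centres one radius away;
    the most-voted quantized centre is returned."""
    votes = Counter()
    for x, y in edge_points:
        for k in range(n_angles):
            a = 2 * math.pi * k / n_angles
            cx = round((x - radius * math.cos(a)) / quant) * quant
            cy = round((y - radius * math.sin(a)) / quant) * quant
            votes[(cx, cy)] += 1
    return votes.most_common(1)[0][0]
```

Restricting `edge_points` to a window around the last detected centre is the essence of a controllable region of interest: the vote loop shrinks with the window, which is where the reported processing-time savings come from.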

  11. A Misleading Review of Response Bias: Comment on McGrath, Mitchell, Kim, and Hough (2010)

    ERIC Educational Resources Information Center

    Rohling, Martin L.; Larrabee, Glenn J.; Greiffenstein, Manfred F.; Ben-Porath, Yossef S.; Lees-Haley, Paul; Green, Paul; Greve, Kevin W.

    2011-01-01

    In the May 2010 issue of "Psychological Bulletin," R. E. McGrath, M. Mitchell, B. H. Kim, and L. Hough published an article entitled "Evidence for Response Bias as a Source of Error Variance in Applied Assessment" (pp. 450-470). They argued that response bias indicators used in a variety of settings typically have insufficient data to support such…

  12. Locomotive track detection for underground

    NASA Astrophysics Data System (ADS)

    Ma, Zhonglei; Lang, Wenhui; Li, Xiaoming; Wei, Xing

    2017-08-01

    In order to improve on PC-based track detection systems, this paper proposes a method to detect linear tracks for underground locomotives based on DSP + FPGA. Firstly, the analog signal output from the camera is sampled by an A/D chip. Then the collected digital signal is preprocessed by the FPGA. Secondly, the output signal of the FPGA is transmitted to the DSP via the EMIF port. Subsequently, adaptive-threshold edge detection and a polar-angle- and radius-constrained Hough transform are implemented on the DSP. Lastly, the detected track information is transmitted to the host computer through an Ethernet interface. The experimental results show that the system not only meets the requirements of real-time detection but also has good robustness.

  13. Learning to segment mouse embryo cells

    NASA Astrophysics Data System (ADS)

    León, Juan; Pardo, Alejandro; Arbeláez, Pablo

    2017-11-01

    Recent advances in microscopy enable the capture of temporal sequences during cell development stages. However, the study of such sequences is a complex and time-consuming task. In this paper we propose an automatic strategy to address the problem of semantic and instance segmentation of mouse embryos using NYU's Mouse Embryo Tracking Database. We obtain our instance proposals as refined predictions from the generalized Hough transform, using prior knowledge of the embryos' locations and their current cell stage. We use two main approaches to learn the priors: hand-crafted features and automatically learned features. Our strategy increases the baseline Jaccard index from 0.12 to 0.24 using hand-crafted features and to 0.28 using automatically learned ones.

  14. Artificial intelligence tools for pattern recognition

    NASA Astrophysics Data System (ADS)

    Acevedo, Elena; Acevedo, Antonio; Felipe, Federico; Avilés, Pedro

    2017-06-01

    In this work, we present a system for pattern recognition that combines the problem-solving power of genetic algorithms with the efficiency of morphological associative memories. We use a set of 48 tire prints divided into 8 brands of tires. The images have dimensions of 200 x 200 pixels. We applied the Hough transform to obtain lines as the main features; the number of lines obtained is 449. The genetic algorithm reduces the number of features to ten suitable lines, which thus yield 100% recognition. Morphological associative memories were used as the evaluation function. The selection algorithms were tournament and roulette-wheel selection. For reproduction, we applied one-point, two-point and uniform crossover.

  15. Automatic Extraction of Planetary Image Features

    NASA Technical Reports Server (NTRS)

    Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.

    2009-01-01

    With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large amount of Lunar imagery will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination. In this paper, we propose a new method for the extraction of Lunar features (one that can be generalized to other planetary images), based on the combination of several image processing techniques, watershed segmentation, and the generalized Hough transform. This feature extraction has many applications, among which is image registration.

  16. Fourier-based quantification of renal glomeruli size using Hough transform and shape descriptors.

    PubMed

    Najafian, Sohrab; Beigzadeh, Borhan; Riahi, Mohammad; Khadir Chamazkoti, Fatemeh; Pouramir, Mahdi

    2017-11-01

    Analysis of glomerular geometry is important in the histopathological evaluation of renal microscopic images. Due to the disparity in shape and size of glomeruli even within the same kidney, automatic detection of these renal objects is not an easy task. Although manual measurements are time consuming and at times not very accurate, they are commonly used in medical centers. In this paper, a new method based on the Fourier transform together with several shape descriptors is proposed to detect these objects and their geometrical parameters. To this end, a database of 400 regions was selected randomly, 200 of which are parts of glomeruli while the other 200 do not belong to renal corpuscles. An ROC curve is used to decide which descriptor classifies the two groups better. An f_measure, which combines the tpr (true positive rate) and fpr (false positive rate), is also proposed to select the optimal threshold for each descriptor. A combination of three parameters (solidity, eccentricity, and the mean squared error of a fitted ellipse) provided the best result in terms of f_measure for distinguishing the desired regions. Then, the Fourier transform of the outer edges is calculated to form a complete curve out of the separated region(s). The generality of the proposed model is verified by cross validation, which yielded a tpr of 94% and an fpr of 5%. Measurements of the glomerulus and Bowman's space obtained with the algorithm are also compared with non-automatic measurements by a renal pathologist, giving errors of 5.9%, 5.4%, and 6.26% for the capsule area, Bowman's space, and glomerular area, respectively. Tests on glomeruli of various shapes show the robustness and reliability of the method. It could therefore be used to characterize renal diseases and glomerular disorders by measuring morphological changes accurately and expeditiously. Copyright © 2017 Elsevier B.V. All rights reserved.
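The threshold-selection step scores each candidate cut-off for a descriptor and keeps the best one. The paper combines tpr and fpr into its f_measure; the sketch below substitutes the standard F1 score (harmonic mean of precision and recall) and invented descriptor scores, so both are assumptions rather than the authors' exact formula:

```python
def best_threshold(scores_pos, scores_neg):
    """Scan candidate thresholds over labeled descriptor scores and keep
    the one maximizing F1 (a stand-in for the paper's tpr/fpr f_measure)."""
    best = (-1.0, None)
    for t in sorted(set(scores_pos + scores_neg)):
        tp = sum(s >= t for s in scores_pos)   # glomerulus regions kept
        fp = sum(s >= t for s in scores_neg)   # non-glomerulus regions kept
        fn = len(scores_pos) - tp              # glomerulus regions missed
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best[0]:
            best = (f1, t)
    return best[1], best[0]
```

Scanning only the observed score values is sufficient because the tp/fp counts, and hence the score, only change at those values.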

  17. Invisible data matrix detection with smart phone using geometric correction and Hough transform

    NASA Astrophysics Data System (ADS)

    Sun, Halit; Uysalturk, Mahir C.; Karakaya, Mahmut

    2016-04-01

    Two-dimensional data matrices are used in many different areas to provide quick and automatic data entry to a computer system. Their most common use is to automatically read and recognize labeled products (books, medicines, food, etc.). In Turkey, alcoholic beverages and tobacco products are labeled and tracked with invisible data matrices for public safety and tax purposes. In this application, since the data matrices are printed on a special paper with a pigmented ink, they cannot be seen in daylight. When red LEDs are used for illumination and the reflected light is filtered, the invisible data matrices become visible and can be decoded by special barcode readers. Owing to the physical dimensions and price of such readers, and the special training required to use them, cheap, small, easily carried, domestically produced mobile invisible-data-matrix reader systems are needed so that one can be issued to every inspector in the law enforcement units. In this paper, we first developed an apparatus attached to a smartphone comprising a red LED light and a high-pass filter. We then developed an algorithm to process the images captured by the smartphone and decode the information stored in the invisible data matrix. The proposed algorithm involves four main stages. In the first step, the data matrix image is processed with the Hough transform to find the "L"-shaped pattern. In the second step, the borders of the data matrix are found using convex hull and corner detection methods. Afterwards, the distortion of the invisible data matrix is corrected by a geometric correction technique and every module is fixed to a rectangular shape. Finally, the invisible data matrix is scanned line by line along the horizontal axis to decode it. Based on results obtained from real test images of invisible data matrices captured with a smartphone, the proposed algorithm shows high accuracy and a low error rate.

  18. Discovering biclusters in gene expression data based on high-dimensional linear geometries

    PubMed Central

    Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong

    2008-01-01

    Background In DNA microarray experiments, discovering groups of genes that share similar transcriptional characteristics is instrumental in functional annotation, tissue classification and motif identification. However, in many situations a subset of genes only exhibits a consistent pattern over a subset of conditions. Conventional clustering algorithms that deal with the entire row or column in an expression matrix would therefore fail to detect these useful patterns in the data. Recently, biclustering has been proposed to detect subsets of genes exhibiting a consistent pattern over subsets of conditions. However, most existing biclustering algorithms are based on searching for sub-matrices within a data matrix by optimizing certain heuristically defined merit functions. Moreover, most of these algorithms can only detect a restricted set of bicluster patterns. Results In this paper, we present a novel geometric perspective for the biclustering problem. The biclustering process is interpreted as the detection of linear geometries in a high dimensional data space. Such a new perspective views biclusters with different patterns as hyperplanes in a high dimensional space, and allows us to handle different types of linear patterns simultaneously by matching a specific set of linear geometries. This geometric viewpoint also inspires us to propose a generic bicluster pattern, i.e. the linear coherent model that unifies the seemingly incompatible additive and multiplicative bicluster models. As a particular realization of our framework, we have implemented a Hough transform-based hyperplane detection algorithm. The experimental results on a human lymphoma gene expression dataset show that our algorithm can find biologically significant subsets of genes. Conclusion We have proposed a novel geometric interpretation of the biclustering problem. 
We have shown that many common types of bicluster are just different spatial arrangements of hyperplanes in a high dimensional data space. An implementation of the geometric framework using the Fast Hough transform for hyperplane detection can be used to discover biologically significant subsets of genes under subsets of conditions for microarray data analysis. PMID:18433477
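In the simplest two-condition case, the rows of an additive bicluster satisfy a[i][b] = a[i][a] + c, i.e. they lie on parallel lines of slope 1, so a one-dimensional Hough vote over the quantized column offset c recovers the member rows. A pure-Python sketch of this special case (the toy matrix and bin width are hypothetical; the paper works in higher-dimensional hyperplane spaces):

```python
from collections import Counter

def additive_bicluster_rows(matrix, col_a, col_b, quant=0.1):
    """Rows consistent with an additive model across two columns share a
    constant column offset; vote the quantized offsets into 1-D Hough bins
    and return the rows in the winning bin."""
    votes = Counter()
    for row in matrix:
        votes[round((row[col_b] - row[col_a]) / quant)] += 1
    d_best, _ = votes.most_common(1)[0]
    return [i for i, row in enumerate(matrix)
            if round((row[col_b] - row[col_a]) / quant) == d_best]
```

Generalizing from this 1-D offset vote to votes over hyperplane parameters in more columns is the step that lets one detection pass cover additive, multiplicative (after a log transform), and other linear coherent patterns.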

  19. Innovative tidal notch detection using TLS and fuzzy logic: Implications for palaeo-shorelines from compressional (Crete) and extensional (Gulf of Corinth) tectonic settings

    NASA Astrophysics Data System (ADS)

    Schneiderwind, S.; Boulton, S. J.; Papanikolaou, I.; Reicherter, K.

    2017-04-01

    Tidal notches are a generally accepted sea-level marker and maintain particular interest for palaeoseismic studies since coastal seismic activity potentially displaces them from their genetic position. The result of subsequent seismic events is a notch sequence reflecting the cumulative coastal uplift. In order to evaluate preserved notch sequences, an innovative and interdisciplinary workflow is presented that accurately highlights evidence for palaeo-sea-level markers. The workflow uses data from terrestrial laser scanning and iteratively combines high-resolution curvature analysis, high performance edge detection, and feature extraction. Based on the assumptions that remnants, such as the roof of tidal notches, form convex patterns, edge detection is performed on principal curvature images. In addition, a standard algorithm is compared to edge detection results from a custom Fuzzy logic approach. The results pass through a Hough transform in order to extract continuous line features of an almost horizontal orientation. The workflow was initially developed on a single, distinct, and sheltered exposure in southern Crete and afterwards successfully tested on laser scans of different coastal cliffs from the Perachora Peninsula. This approach allows a detailed examination of otherwise inaccessible locations and the evaluation of lateral and 3D geometries, thus evidence for previously unrecognised sea-level markers can be identified even when poorly developed. High resolution laser scans of entire cliff exposures allow local variations to be quantified. Edge detection aims to reduce information on the surface curvature and Hough transform limits the results towards orientation and continuity. Thus, the presented objective methodology enhances the recognition of tidal notches and supports palaeoseismic studies by contributing spatial information and accurate measurements of horizontal movements, beyond that recognised during traditional surveys. 
This is especially useful for the identification of palaeo-shorelines in extensional tectonic environments where coseismic footwall uplift (only 1/2 to 1/4 of net slip per event) is unlikely to raise an entire notch above the tidal range.

  20. Global detection of large lunar craters based on the CE-1 digital elevation model

    NASA Astrophysics Data System (ADS)

    Luo, Lei; Mu, Lingli; Wang, Xinyuan; Li, Chao; Ji, Wei; Zhao, Jinjin; Cai, Heng

    2013-12-01

    Craters, one of the most significant features of the lunar surface, have been widely researched because they offer the relative age of a surface unit as well as crucial geological information. Research on crater detection algorithms (CDAs) for the Moon and other planetary bodies has concentrated on detecting craters from imagery data, but the computational cost of detecting large craters in images makes these CDAs impractical. This paper presents a new approach to crater detection that utilizes a digital elevation model instead of images, which enables fully automatic global detection of large craters. Craters were delineated by terrain attributes; thresholded maps of the terrain attributes were used to transform the topographic data into a binary image, and craters were finally detected from the binary image using the Hough transform. Using the proposed algorithm, we produced a catalog of all craters ⩾10 km in diameter on the lunar surface and analyzed their distribution and population characteristics.
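The terrain-attribute step can be illustrated with a slope map computed by central differences and thresholded into the binary image that feeds the Hough stage. The toy DEM, cell size, and threshold below are stand-ins, not CE-1 data or the authors' actual attribute set:

```python
def slope_map(dem, cell=1.0):
    """Central-difference slope magnitude for each interior DEM cell."""
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dzdx = (dem[i][j + 1] - dem[i][j - 1]) / (2 * cell)
            dzdy = (dem[i + 1][j] - dem[i - 1][j]) / (2 * cell)
            out[i][j] = (dzdx**2 + dzdy**2) ** 0.5
    return out

def binarize(attr, threshold):
    """Threshold a terrain-attribute map into the binary image for Hough voting."""
    return [[1 if v >= threshold else 0 for v in row] for row in attr]
```

On a DEM with a depression, the high-slope cells form a ring around the rim while the flat floor and surroundings stay zero, which is exactly the circular structure a Hough circle detector can then vote on.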

  1. Application of image recognition algorithms for statistical description of nano- and microstructured surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mărăscu, V.; Dinescu, G.; Faculty of Physics, University of Bucharest, 405 Atomistilor Street, Bucharest-Magurele

    In this paper we propose a statistical approach for describing the self-assembly of sub-micronic polystyrene beads on silicon surfaces, as well as the evolution of surface topography due to plasma treatments. Algorithms for image recognition are used in conjunction with Scanning Electron Microscopy (SEM) imaging of the surfaces. In a first step, greyscale images of the surface covered by the polystyrene beads are obtained. Then an adaptive thresholding method is applied to obtain binary images. The next step consists of automatic identification of the polystyrene bead dimensions, using the Hough transform algorithm, according to bead radius. In order to analyze the uniformity of the self-assembled polystyrene beads, the squared modulus of the 2-dimensional Fast Fourier Transform (2-D FFT) is applied. By combining these algorithms we obtain a powerful and fast statistical tool for the analysis of micro- and nanomaterials whose surface features appear regularly distributed upon SEM examination.

  2. Surgical tool detection and tracking in retinal microsurgery

    NASA Astrophysics Data System (ADS)

    Alsheakhali, Mohamed; Yigitsoy, Mehmet; Eslami, Abouzar; Navab, Nassir

    2015-03-01

    Visual tracking of surgical instruments is an essential part of eye surgery; it plays an important role for surgeons and is a key component of robotic assistance during the operation. The difficulty of detecting and tracking medical instruments in in-vivo images comes from their deformable shape, changes in brightness, and the presence of the instrument's shadow. This paper introduces a new approach to detect the tip of a surgical tool and its width regardless of its head shape and the presence of shadows or vessels. The approach relies on integrating structural information about the strong edges from the RGB color model with tool-location information from the L*a*b color model. The probabilistic Hough transform is applied to get the strongest straight lines in the RGB images, and based on information from the L* and a* channels, one of these candidate lines is selected as the edge of the tool shaft. From that line, the tool slope, the tool centerline and the tool tip can be detected. Tracking is performed by keeping track of the last detected tool tip and tool slope, and filtering the Hough lines within a box around the last detected tool tip based on slope differences. Experimental results demonstrate the high accuracy achieved in terms of detecting the tool tip position, the tool joint position, and the tool centerline. The approach also meets real-time requirements.

  3. Automatic segmentation of equine larynx for diagnosis of laryngeal hemiplegia

    NASA Astrophysics Data System (ADS)

    Salehin, Md. Musfequs; Zheng, Lihong; Gao, Junbin

    2013-10-01

    This paper presents an automatic segmentation method for delineating the clinically significant contours of the equine larynx from an endoscopic image. These contours are used to diagnose the most common disease of the horse larynx, laryngeal hemiplegia. In this study, a hierarchically structured contour map is obtained with the state-of-the-art segmentation algorithm gPb-OWT-UCM. The conic-shaped outer boundary of the equine larynx is extracted based on Pascal's theorem. Lastly, the Hough transform is applied to detect lines related to the edges of the vocal folds. The experimental results show that the proposed approach extracts the targeted contours of the equine larynx better than the gPb-OWT-UCM method alone.

  4. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    NASA Astrophysics Data System (ADS)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information of lanes is very important. This paper proposes a method of automatic detection of lanes and extraction of semantic information from onboard camera videos. The proposed method firstly detects the edges of lanes by the grayscale gradient direction, and improves the Probabilistic Hough transform to fit them; then, it uses the vanishing point principle to calculate the lane geometrical position, and uses lane characteristics to extract lane semantic information by the classification of decision trees. In the experiment, 216 road video images captured by a camera mounted onboard a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
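The vanishing-point step, intersecting the two fitted lane borders, is compact in homogeneous coordinates: a line through two points and the intersection of two lines are both cross products. A small sketch (the pixel coordinates are made up, not from the paper's 216 test images):

```python
def line_through(p, q):
    """Homogeneous line coefficients (a, b, c) with a*x + b*y + c = 0."""
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def vanishing_point(l1, l2):
    """Intersection of two image lines (cross product of homogeneous lines)."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    w = a1 * b2 - a2 * b1
    if abs(w) < 1e-12:
        return None  # parallel in the image: no finite vanishing point
    return ((b1 * c2 - b2 * c1) / w, (a2 * c1 - a1 * c2) / w)
```

For straight lanes, the two borders fitted by the probabilistic Hough transform converge at this point, and its image height constrains the lane's geometric position relative to the camera.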

  5. Image recognition of clipped stigma traces in rice seeds

    NASA Astrophysics Data System (ADS)

    Cheng, F.; Ying, YB

    2005-11-01

    The objective of this research is to develop an algorithm to recognize clipped stigma traces in rice seeds using image processing. First, the micro-configuration of clipped stigma traces was observed with a scanning electron microscope. Then images of rice seeds were acquired with a color machine vision system. A digital image-processing algorithm based on morphological operations and the Hough transform was developed to inspect the occurrence of clipped stigma traces. Five varieties, Jinyou402, Shanyou10, Zhongyou207, Jiayou and You3207, were evaluated. The algorithm was implemented on all image sets as a Matlab 6.5 procedure. The results showed that the algorithm achieved an average accuracy of 96% and proved to be insensitive to the different rice seed varieties.
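The morphological side of such an algorithm can be illustrated with binary erosion, dilation, and opening, where opening removes specks smaller than the structuring element while preserving larger blobs. A pure-Python toy (the 3 x 3 square element and test image are illustrative; the abstract does not specify the paper's actual operators):

```python
def erode(img, k=1):
    """Binary erosion with a (2k+1)x(2k+1) square structuring element:
    a pixel survives only if its whole neighbourhood is foreground."""
    rows, cols = len(img), len(img[0])
    return [[1 if all(0 <= i + di < rows and 0 <= j + dj < cols
                      and img[i + di][j + dj]
                      for di in range(-k, k + 1) for dj in range(-k, k + 1))
             else 0
             for j in range(cols)] for i in range(rows)]

def dilate(img, k=1):
    """Binary dilation: a pixel is set if any neighbour is foreground."""
    rows, cols = len(img), len(img[0])
    return [[1 if any(0 <= i + di < rows and 0 <= j + dj < cols
                      and img[i + di][j + dj]
                      for di in range(-k, k + 1) for dj in range(-k, k + 1))
             else 0
             for j in range(cols)] for i in range(rows)]

def opening(img, k=1):
    """Opening = erosion then dilation; removes specks smaller than the element."""
    return dilate(erode(img, k), k)
```

Opening followed by edge extraction gives a cleaner binary mask for the Hough step, since isolated noise pixels would otherwise cast spurious votes.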

  6. Restoration of high-resolution AFM images captured with broken probes

    NASA Astrophysics Data System (ADS)

    Wang, Y. F.; Corrigan, D.; Forman, C.; Jarvis, S.; Kokaram, A.

    2012-03-01

    A type of artefact is induced by damage to the scanning probe when the Atomic Force Microscope (AFM) captures a material's surface structure at nanoscale resolution. This artefact takes the form of dramatic distortion rather than the traditional blurring artefacts. In practice, it is not easy to prevent damage to the scanning probe. However, by using natural-image deblurring techniques from the image processing domain, a comparatively reliable estimate of the real sample surface structure can be generated. This paper introduces a novel Hough transform technique as well as a Bayesian deblurring algorithm to remove this type of artefact. The deblurring successfully removes the artefacts from the AFM images, and the details of the fibril surface topography are well preserved.

  7. Power line identification of millimeter wave radar based on PCA-GS-SVM

    NASA Astrophysics Data System (ADS)

    Fang, Fang; Zhang, Guifeng; Cheng, Yansheng

    2017-12-01

    To address the safety risk that power lines pose to ultra-low-altitude UAV flight, a problem that existing detection methods cannot effectively solve, a power line recognition method based on grid search (GS) and principal component analysis with a support vector machine (PCA-SVM) is proposed. Firstly, the candidate lines from the Hough transform are reduced by PCA and the main features of the candidate lines are extracted. Then, the support vector machine (SVM) is optimized by the grid search method (GS). Finally, the SVM classifier with optimized parameters is used to classify the candidate lines. MATLAB simulation results show that this method can effectively distinguish power lines from noise, and has high recognition accuracy and algorithmic efficiency.

  8. LIDAR Point Cloud Data Extraction and Establishment of 3D Modeling of Buildings

    NASA Astrophysics Data System (ADS)

    Zhang, Yujuan; Li, Xiuhai; Wang, Qiang; Liu, Jiang; Liang, Xin; Li, Dan; Ni, Chundi; Liu, Yan

    2018-01-01

    This paper applies Shepard's method to the original LIDAR point cloud data to generate a regular-grid DSM, filters ground and non-ground points with a double least-squares method, and obtains a regularized DSM. A region-growing method is used to segment the regularized DSM and remove non-building points, yielding the building point cloud information. The Canny operator is used to extract the building edges needed after segmentation, and Hough transform line detection is used to regularize the extracted building edges so that they are smooth and uniform. Finally, the E3De3 software is used to establish the 3D models of the buildings.

  9. Page layout analysis and classification for complex scanned documents

    NASA Astrophysics Data System (ADS)

    Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan

    2011-09-01

    A framework for region/zone classification in color and gray-scale scanned documents is proposed in this paper. The algorithm includes modules for extracting text, photo, and strong edge/line regions. Firstly, a text detection module based on wavelet analysis and the Run Length Encoding (RLE) technique is employed. Local and global energy maps in high-frequency bands of the wavelet domain are generated and used as initial text maps. Further analysis using RLE yields a final text map. The second module is developed to detect image/photo and pictorial regions in the input document. A block-based classifier using basis vector projections is employed to identify photo candidate regions. Then, a final photo map is obtained by applying a probabilistic model based on Markov random field (MRF) maximum a posteriori (MAP) optimization with iterated conditional modes (ICM). The final module detects lines and strong edges using the Hough transform and edge-linkage analysis, respectively. The text, photo, and strong edge/line maps are combined to generate a page layout classification of the scanned target document. Experimental results and objective evaluation show that the proposed technique performs very effectively on a variety of simple and complex scanned document types obtained from the MediaTeam Oulu document database. The proposed page layout classifier can be used in systems for efficient document storage, content-based document retrieval, optical character recognition, mobile phone imagery, and augmented reality.

  10. A vision-based automated guided vehicle system with marker recognition for indoor use.

    PubMed

    Lee, Jeisung; Hyun, Chang-Ho; Park, Mignon

    2013-08-07

    We propose an intelligent vision-based Automated Guided Vehicle (AGV) system using fiduciary markers. In this paper, we explore a low-cost, efficient vehicle guiding method using a consumer grade web camera and fiduciary markers. In the proposed method, the system uses fiduciary markers with a capital letter or triangle indicating direction in it. The markers are very easy to produce, manipulate, and maintain. The marker information is used to guide a vehicle. We use hue and saturation values in the image to extract marker candidates. When the known size fiduciary marker is detected by using a bird's eye view and Hough transform, the positional relation between the marker and the vehicle can be calculated. To recognize the character in the marker, a distance transform is used. The probability of feature matching was calculated by using a distance transform, and a feature having high probability is selected as a captured marker. Four directional signals and 10 alphabet features are defined and used as markers. A 98.87% recognition rate was achieved in the testing phase. The experimental results with the fiduciary marker show that the proposed method is a solution for an indoor AGV system.
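    The distance-transform matching idea can be sketched as follows (a brute-force illustration, not the authors' code; `chamfer_score` is a name invented here): the scene edge map is converted into a map of distances to the nearest edge pixel, and a marker template scores well when its points land on small distances.

```python
import numpy as np

def distance_transform(binary):
    """Brute-force Euclidean distance to the nearest 'on' pixel (small images only)."""
    ys, xs = np.nonzero(binary)
    pts = np.stack([ys, xs], axis=1).astype(float)
    h, w = binary.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
    return d.min(axis=2)

def chamfer_score(template_pts, dist_map):
    """Lower score = better match: mean distance from template points to scene edges."""
    return dist_map[template_pts[:, 0], template_pts[:, 1]].mean()

# An "L"-shaped scene edge map and a template matched at two positions.
scene = np.zeros((12, 12), int)
scene[2, 2:8] = 1
scene[2:8, 2] = 1
dmap = distance_transform(scene)
tmpl = np.argwhere(scene == 1)       # perfectly aligned template
shifted = tmpl + np.array([3, 3])    # misaligned template
```

    A production matcher would use a fast two-pass distance transform, but the scoring idea is the same.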

  11. [Glossary of terms used by radiologists in image processing].

    PubMed

    Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P

    1995-01-01

    We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.

  12. A lane line segmentation algorithm based on adaptive threshold and connected domain theory

    NASA Astrophysics Data System (ADS)

    Feng, Hui; Xu, Guo-sheng; Han, Yi; Liu, Yang

    2018-04-01

    Before detecting cracks and repairs on road lanes, it is necessary to eliminate the influence of lane lines on the recognition result in road lane images. Aiming at the problems caused by lane lines, an image segmentation algorithm based on adaptive thresholding and connected domain theory is proposed. First, by analyzing features such as the grey level distribution and illumination of the images, the algorithm uses the Hough transform to divide the images into different sections and converts each into a binary image separately. It then uses connected domain theory to amend the segmentation outcome, remove noise and fill the interior of the lane lines. Experiments have proved that this method can eliminate the influence of illumination and lane line abrasion, removing noise thoroughly while maintaining high segmentation precision.
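    The connected-domain cleanup step can be sketched with a plain flood-fill labelling (an illustration under assumed 4-connectivity, not the paper's implementation): components smaller than an area threshold are treated as noise and removed.

```python
import numpy as np

def connected_components(binary):
    """4-connected labelling via iterative flood fill."""
    labels = np.zeros(binary.shape, dtype=int)
    cur = 0
    for y, x in zip(*np.nonzero(binary)):
        if labels[y, x]:
            continue
        cur += 1
        stack = [(y, x)]
        labels[y, x] = cur
        while stack:
            cy, cx = stack.pop()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = cur
                    stack.append((ny, nx))
    return labels, cur

def remove_small(binary, min_area):
    """Keep only components whose pixel count reaches min_area."""
    labels, n = connected_components(binary)
    keep = np.zeros_like(binary)
    for i in range(1, n + 1):
        if (labels == i).sum() >= min_area:
            keep[labels == i] = 1
    return keep

# A 25-pixel blob survives; an isolated noise pixel is removed.
img = np.zeros((12, 12), int)
img[2:7, 2:7] = 1
img[9, 9] = 1
out = remove_small(img, min_area=4)
```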

  13. The Impact of the Implementation of Edge Detection Methods on the Accuracy of Automatic Voltage Reading

    NASA Astrophysics Data System (ADS)

    Sidor, Kamil; Szlachta, Anna

    2017-04-01

    The article presents the impact of the edge detection method in the image analysis on the reading accuracy of the measured value. In order to ensure the automatic reading of the measured value by an analog meter, a standard webcam and the LabVIEW programme were applied. NI Vision Development tools were used. The Hough transform was used to detect the indicator. The programme output was compared during the application of several methods of edge detection. Those included: the Prewitt operator, the Roberts cross, the Sobel operator and the Canny edge detector. The image analysis was made for an analog meter indicator with the above-mentioned methods, and the results of that analysis were compared with each other and presented.
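    The operators compared in the article differ mainly in their convolution kernels. A small numpy sketch (illustrative kernels and image, not the article's LabVIEW/NI Vision code) shows the Sobel and Prewitt x-kernels both localizing a vertical step edge:

```python
import numpy as np

def conv2(img, k):
    """'Valid' 2-D correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)

img = np.zeros((20, 20))
img[:, 10:] = 1.0                      # vertical step edge between columns 9 and 10

edge_cols = {}
for name, k in (("sobel", SOBEL_X), ("prewitt", PREWITT_X)):
    g = np.abs(conv2(img, k))
    edge_cols[name] = g.sum(axis=0).argmax() + 1   # +1 compensates the 'valid' crop
```

    On this noise-free image the operators agree on the edge position; their differences show up mainly in noise sensitivity and, for Canny, in the added non-maximum suppression and hysteresis stages.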

  14. Offset Printing Plate Quality Sensor on a Low-Cost Processor

    PubMed Central

    Poljak, Jelena; Botella, Guillermo; García, Carlos; Poljaček, Sanja Mahović; Prieto-Matías, Manuel; Tirado, Francisco

    2013-01-01

    The aim of this work is to develop a microprocessor-based sensor that measures the quality of the offset printing plate through the introduction of different image analysis applications. The main features of the presented system are its low cost, low power consumption, modularity and easy integration with other industrial modules for printing plates, and its robustness in noisy environments. For the sake of clarity, a viability analysis of the underlying software is presented through different strategies based on the dynamic histogram and the Hough transform. This paper provides performance and scalability data compared with existing costly commercial devices. Furthermore, a general overview of quality control possibilities for printing plates is presented and could be useful to a system where such controls are regularly conducted. PMID:24284766

  15. Joint 3-D vessel segmentation and centerline extraction using oblique Hough forests with steerable filters.

    PubMed

    Schneider, Matthias; Hirsch, Sven; Weber, Bruno; Székely, Gábor; Menze, Bjoern H

    2015-01-01

    We propose a novel framework for joint 3-D vessel segmentation and centerline extraction. The approach is based on multivariate Hough voting and oblique random forests (RFs) that we learn from noisy annotations. It relies on steerable filters for the efficient computation of local image features at different scales and orientations. We validate the segmentation performance and centerline accuracy of our approach on both synthetic vascular data and four 3-D imaging datasets of the rat visual cortex at 700 nm resolution. First, we evaluate the most important structural components of our approach: (1) orthogonal subspace filtering in comparison to steerable filters, which show qualitative similarities to the eigenspace filters learned from local image patches; (2) standard RF against oblique RF. Second, we compare the overall approach to different state-of-the-art methods for (1) vessel segmentation based on optimally oriented flux (OOF) and the eigenstructure of the Hessian, and (2) centerline extraction based on homotopic skeletonization and geodesic path tracing. Our experiments reveal the benefit of steerable over eigenspace filters as well as the advantage of oblique split directions over univariate orthogonal splits. We further show that the learning-based approach outperforms different state-of-the-art methods and proves highly accurate and robust with regard to both vessel segmentation and centerline extraction in spite of the high level of label noise in the training data. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Automated infrasound signal detection algorithms implemented in MatSeis - Infra Tool.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Darren

    2004-07-01

    MatSeis's infrasound analysis tool, Infra Tool, uses frequency slowness processing to deconstruct the array data into three outputs per processing step: correlation, azimuth and slowness. Until now, infrasound signal detection was accomplished manually by an experienced analyst trained to recognize patterns in the signal processing outputs. Our goal was to automate the process of infrasound signal detection. The critical aspect of infrasound signal detection is to identify consecutive processing steps where the azimuth is constant (flat) while the time-lag correlation of the windowed waveform is above the background value. These two statements describe the arrival of a correlated set of wavefronts at an array. The Hough Transform and Inverse Slope methods are used to determine the representative slope for a specified number of azimuth data points. The representative slope is then used in conjunction with the associated correlation value and azimuth data variance to determine if and when an infrasound signal was detected. A format for an infrasound signal detection output file is also proposed. The detection output file lists the processed array element names, followed by detection characteristics for each method. Each detection is supplied with a listing of frequency slowness processing characteristics: human time (YYYY/MM/DD HH:MM:SS.SSS), epochal time, correlation, fstat, azimuth (deg) and trace velocity (km/s). As an example, a ground truth event was processed using the four-element DLIAR infrasound array located in New Mexico. The event is known as the Watusi chemical explosion, which occurred on 2002/09/28 at 21:25:17 with an explosive yield of 38,000 lb TNT equivalent. Knowing the source and array location, the array-to-event distance was computed to be approximately 890 km. This test determined the station-to-event azimuth (281.8 and 282.1 degrees) to within 1.6 and 1.4 degrees for the Inverse Slope and Hough Transform detection algorithms, respectively, and the detection window closely correlated to the theoretical stratospheric arrival time. Further testing will be required for tuning of detection threshold parameters for different types of infrasound events.
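    The detection criterion described above, a flat azimuth trend combined with above-background correlation, can be sketched as follows (an illustration using an ordinary least-squares slope in place of the Inverse Slope and Hough methods; the thresholds are invented):

```python
import numpy as np

def representative_slope(azimuths):
    """Least-squares slope of azimuth vs. processing-step index."""
    t = np.arange(len(azimuths), dtype=float)
    return np.polyfit(t, azimuths, 1)[0]

def detect(azimuths, correlations, slope_tol=0.5, corr_min=0.5):
    """Flag a detection when the azimuth is flat and correlation is above background."""
    flat = abs(representative_slope(azimuths)) < slope_tol
    correlated = np.mean(correlations) > corr_min
    return bool(flat and correlated)

# Flat azimuth near 282 deg with high correlation vs. a drifting azimuth.
flat_az = np.array([281.8, 282.0, 281.9, 282.1, 281.8, 282.0])
drift_az = np.array([250.0, 260.0, 270.0, 280.0, 290.0, 300.0])
corr_hi = np.full(6, 0.9)
```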

  17. Implementation of an algorithm for cylindrical object identification using range data

    NASA Technical Reports Server (NTRS)

    Bozeman, Sylvia T.; Martin, Benjamin J.

    1989-01-01

    One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.
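    The arc-location step via the normal equations of an over-determined linear system can be illustrated with the standard algebraic circle fit (a sketch, not the original implementation): writing the circle as x² + y² + Dx + Ey + F = 0 makes the unknowns linear, so least squares applies directly to a cross-sectional slice of range data.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic circle fit: x^2 + y^2 + D x + E y + F = 0, solved in least squares."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs**2 + ys**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# An arc (a partial slice of a cylinder) of radius 5 centred at (3, -2).
theta = np.linspace(0.2, 1.4, 30)
xs = 3 + 5 * np.cos(theta)
ys = -2 + 5 * np.sin(theta)
cx, cy, r = fit_circle(xs, ys)
```

    Because only an arc is needed, this works even when the cylinder is partially occluded, which matches the robustness goal stated in the abstract.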

  18. Geodesic Distance Algorithm for Extracting the Ascending Aorta from 3D CT Images

    PubMed Central

    Jang, Yeonggul; Jung, Ho Yub; Hong, Youngtaek; Cho, Iksung; Shim, Hackjoon; Chang, Hyuk-Jae

    2016-01-01

    This paper presents a method for the automatic 3D segmentation of the ascending aorta from coronary computed tomography angiography (CCTA). The segmentation is performed in three steps. First, the initial seed points are selected by minimizing a newly proposed energy function across the Hough circles. Second, the ascending aorta is segmented by geodesic distance transformation. Third, the seed points are effectively transferred to the next axial slice by a novel transfer function. Experiments are performed using a database composed of 10 patients' CCTA images. For the experiment, the ground truths are annotated manually on the axial image slices by a medical expert. A comparative evaluation with state-of-the-art commercial aorta segmentation algorithms shows that our approach is computationally more efficient and accurate under the DSC (Dice Similarity Coefficient) measurements. PMID:26904151

  19. Autonomous navigation method for substation inspection robot based on travelling deviation

    NASA Astrophysics Data System (ADS)

    Yang, Guoqing; Xu, Wei; Li, Jian; Fu, Chongguang; Zhou, Hao; Zhang, Chuanyou; Shao, Guangting

    2017-06-01

    A new method of edge detection is proposed for the substation environment, which can realize autonomous navigation of the substation inspection robot. First, the road image and information are obtained using an image acquisition device. Second, the noise in the region of interest selected from the road image is removed with a digital image processing algorithm, the road edges are extracted by the Canny operator, and the road boundaries are extracted by the Hough transform. Finally, the distances between the robot and the left and right boundaries are calculated, and the travelling deviation is obtained. The robot's walking route is controlled according to the travelling deviation and a preset threshold. Experimental results show that the proposed method can detect the road area in real time, and the algorithm has high accuracy and stable performance.

  20. Extraction and Classification of Human Gait Features

    NASA Astrophysics Data System (ADS)

    Ng, Hu; Tan, Wooi-Haw; Tong, Hau-Lee; Abdullah, Junaidi; Komiya, Ryoichi

    In this paper, a new approach is proposed for extracting human gait features from a walking human based on the silhouette images. The approach consists of six stages: clearing the background noise of image by morphological opening; measuring of the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying morphological skeleton to obtain the body skeleton; applying Hough transform to obtain the joint angles from the body segment skeletons; and measuring the distance between the bottom of right leg and left leg from the body segment skeletons. The angles of joints, step-size together with the height and width of the human silhouette are collected and used for gait analysis. The experimental results have demonstrated that the proposed system is feasible and achieved satisfactory results.

  1. Cherry recognition in natural environment based on the vision of picking robot

    NASA Astrophysics Data System (ADS)

    Zhang, Qirong; Chen, Shanxiong; Yu, Tingzhong; Wang, Yan

    2017-04-01

    In order to realize automatic recognition of cherries in the natural environment, this paper designs a recognition method for a picking robot vision system. The first step of this method is to pre-process the cherry image by median filtering. The second step is to identify the colour of the cherry through the 0.9R-G colour difference formula, and then use the Otsu algorithm for threshold segmentation. The third step is to remove noise by using an area threshold. The fourth step is to remove holes in the cherry image by morphological closing and opening operations. The fifth step is to obtain the centroid and contour of the cherry by using the minimum enclosing rectangle and the Hough transform. Through this recognition process, 96% of cherries free of occlusion and adhesion are successfully identified.
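    The colour-difference and Otsu steps can be sketched in a few lines (illustrative synthetic image, not the paper's data): 0.9R - G is large for red cherry pixels and near zero for green foliage, and Otsu picks the threshold that maximizes between-class variance.

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive Otsu: maximize between-class variance over all thresholds."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# 0.9R - G highlights a red "cherry" patch against green "foliage".
rgb = np.zeros((10, 10, 3), dtype=float)
rgb[2:6, 2:6] = [200, 40, 40]    # cherry patch (16 pixels)
rgb[6:, :] = [60, 150, 60]       # foliage
diff = np.clip(0.9 * rgb[..., 0] - rgb[..., 1], 0, 255).astype(np.uint8)
t = otsu_threshold(diff)
mask = diff > t
```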

  2. On Gamma Ray Instrument On-Board Data Processing Real-Time Computational Algorithm for Cosmic Ray Rejection

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Hunter, Stanley D.; Hanu, Andrei R.; Sheets, Teresa B.

    2016-01-01

    Richard O. Duda and Peter E. Hart of Stanford Research Institute in [1] described the recurring problem in computer image processing of detecting straight lines in digitized images. The problem is to detect the presence of groups of collinear or almost collinear figure points. It is clear that the problem can be solved to any desired degree of accuracy by testing the lines formed by all pairs of points. However, the computation required for an image of n = N×M points is approximately proportional to n², i.e. O(n²), becoming prohibitive for large images or when the data processing cadence time is in milliseconds. Rosenfeld in [2] described an ingenious method due to Hough [3] for replacing the original problem of finding collinear points by a mathematically equivalent problem of finding concurrent lines. This method involves transforming each of the figure points into a straight line in a parameter space. Hough chose to use the familiar slope-intercept parameters, and thus his parameter space was the two-dimensional slope-intercept plane. A parallel Hough transform running on multi-core processors was elaborated in [4]. There are many other proposed methods of solving similar problems, such as the sampling-up-the-ramp algorithm (SUTR) [5] and algorithms involving artificial swarm intelligence techniques [6]. However, all state-of-the-art algorithms lack real-time performance: they are slow for large images that require a performance cadence of a few dozen milliseconds (~50 ms). This problem arises in spaceflight applications such as near real-time analysis of gamma ray measurements contaminated by an overwhelming number of cosmic ray (CR) traces. Future spaceflight instruments such as the Advanced Energetic Pair Telescope (AdEPT) [7-9] for cosmic gamma-ray survey employ large detector readout planes registering multitudes of cosmic ray interference events and sparse science gamma ray event trace projections.
The AdEPT science of interest is in the gamma ray events, and the problem is to detect and reject the much more voluminous cosmic ray projections so that the remaining science data can be telemetered to the ground over the constrained communication link. The state of the art in cosmic ray detection and rejection does not provide an adequate computational solution. This paper presents a novel approach to AdEPT on-board data processing, which is burdened by the CR-detection bottleneck. It introduces the data processing object, demonstrates object segmentation and distribution for processing among many processing elements (PEs), and presents a solution algorithm for the processing bottleneck, the CR-Algorithm. The algorithm is based on the a priori knowledge that a CR pierces the entire instrument pressure vessel. This phenomenon is also the basis for a straightforward CR simulator, which allows performance testing of the CR-Algorithm. Parallel processing of the readout image's (2(N+M) - 4) peripheral voxels detects all CRs, resulting in O(n) computational complexity. This near real-time performance makes AdEPT-class spaceflight instruments feasible.
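    The peripheral-voxel idea can be sketched as follows (a simplified 2-D illustration; `crosses_detector` is a name invented here): because a CR pierces the pressure vessel, its trace must touch the image border at two or more of the 2(N+M) - 4 peripheral pixels, while a contained gamma-ray event trace need not.

```python
import numpy as np

def border_mask(shape):
    """The 2(N+M) - 4 peripheral pixels of an N x M readout image."""
    m = np.zeros(shape, bool)
    m[0, :] = m[-1, :] = True
    m[:, 0] = m[:, -1] = True
    return m

def crosses_detector(track_pixels, shape):
    """A through-going CR trace hits the periphery at both ends (>= 2 border pixels);
    a contained event trace does not."""
    border = border_mask(shape)
    hits = sum(1 for y, x in track_pixels if border[y, x])
    return hits >= 2

# A through-going diagonal track vs. a contained track on a 10x10 readout.
through = [(i, i) for i in range(10)]
contained = [(3, 3), (4, 4), (5, 5)]
```

    Only the border pixels are examined, which is the source of the O(n) complexity claimed in the abstract: the work scales with the image perimeter per track test, not with all pixel pairs.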

  3. Lane Detection on the iPhone

    NASA Astrophysics Data System (ADS)

    Ren, Feixiang; Huang, Jinsheng; Terauchi, Mutsuhiro; Jiang, Ruyi; Klette, Reinhard

    A robust and efficient lane detection system is an essential component of Lane Departure Warning Systems, which are commonly used in many vision-based Driver Assistance Systems (DAS) in intelligent transportation. Various computation platforms have been proposed in the past few years for the implementation of driver assistance systems (e.g., PC, laptop, integrated chips, PlayStation, and so on). In this paper, we propose a new platform for the implementation of lane detection, which is based on a mobile phone (the iPhone). Due to physical limitations of the iPhone w.r.t. memory and computing power, a simple and efficient lane detection algorithm using a Hough transform is developed and implemented on the iPhone, as existing algorithms developed based on the PC platform are not suitable for mobile phone devices (currently). Experiments of the lane detection algorithm are made both on PC and on iPhone.

  4. Time frequency analysis for automated sleep stage identification in fullterm and preterm neonates.

    PubMed

    Fraiwan, Luay; Lweesy, Khaldon; Khasawneh, Natheer; Fraiwan, Mohammad; Wenz, Heinrich; Dickhaus, Hartmut

    2011-08-01

    This work presents a new methodology for automated sleep stage identification in neonates based on the time frequency distribution of a single electroencephalogram (EEG) recording and artificial neural networks (ANN). Wigner-Ville distribution (WVD), Hilbert-Huang spectrum (HHS) and continuous wavelet transform (CWT) time frequency distributions were used to represent the EEG signal, from which features were extracted using time frequency entropy. The classification of features was done using a feed-forward back-propagation ANN. The system was trained and tested using data taken from neonates of post-conceptual age of 40 weeks for both preterm (14 recordings) and fullterm (15 recordings). The identification of sleep stages was successfully implemented, and the classification based on the WVD outperformed the approaches based on CWT and HHS. The accuracy and kappa coefficient were found to be 0.84 and 0.65 respectively for the fullterm neonates' recordings and 0.74 and 0.50 respectively for preterm neonates' recordings.
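    The time-frequency entropy feature can be sketched as follows (an illustration using a plain short-time FFT in place of the WVD, HHS, or CWT distributions of the paper): a narrowband signal concentrates its energy in few time-frequency cells and therefore has lower entropy than broadband noise.

```python
import numpy as np

def stft_entropy(signal, win=64, hop=32):
    """Shannon entropy of the normalized time-frequency energy distribution."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    tfd = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    p = tfd / tfd.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# A pure tone (concentrated energy) vs. white noise (spread energy).
fs = 256.0
t = np.arange(0, 8, 1 / fs)
tone = np.sin(2 * np.pi * 10 * t)
rng = np.random.default_rng(1)
noise = rng.standard_normal(t.size)
```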

  5. Fast traffic sign recognition with a rotation invariant binary pattern based feature.

    PubMed

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-19

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neutral Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed.

  6. Extracting cardiac myofiber orientations from high frequency ultrasound images

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Jiang, Rong; Shen, Ming; Wagner, Mary B.; Kirshbom, Paul; Fei, Baowei

    2013-03-01

    Cardiac myofibers play an important role in the stress mechanics of the beating heart. Their orientation determines the stress distribution and the deformation of the whole heart. It is important to image and quantitatively extract these orientations for understanding cardiac physiological and pathological mechanisms and for the diagnosis of chronic diseases. Ultrasound has been widely used in cardiac diagnosis because of its ability to perform dynamic and noninvasive imaging and because of its low cost. An extraction method is proposed to automatically detect cardiac myofiber orientations from high frequency ultrasound images. First, heart walls containing myofibers are imaged by B-mode high frequency (<20 MHz) ultrasound imaging. Second, myofiber orientations are extracted from the ultrasound images using the proposed method, which combines a nonlinear anisotropic diffusion filter, a Canny edge detector, the Hough transform, and K-means clustering. The method is validated on ultrasound data from phantoms and pig hearts.

  7. Inferring Biological Structures from Super-Resolution Single Molecule Images Using Generative Models

    PubMed Central

    Maji, Suvrajit; Bruchez, Marcel P.

    2012-01-01

    Localization-based super resolution imaging is presently limited by sampling requirements for dynamic measurements of biological structures. Generating an image requires serial acquisition of individual molecular positions at sufficient density to define a biological structure, increasing the acquisition time. Efficient analysis of biological structures from sparse localization data could substantially improve the dynamic imaging capabilities of these methods. Using a feature extraction technique called the Hough transform, simple biological structures are identified from both simulated and real localization data. We demonstrate that these generative models can efficiently infer biological structures in the data from far fewer localizations than are required for complete spatial sampling. Analysis at partial data densities revealed efficient recovery of clathrin vesicle size distributions and microtubule orientation angles with as little as 10% of the localization data. This approach significantly increases the temporal resolution for dynamic imaging and provides quantitatively useful biological information. PMID:22629348
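    Circle recovery from sparse localizations can be sketched with a fixed-radius circular Hough vote (illustrative, not the authors' generative-model code): each localization votes for every centre a fixed radius away, and the votes of points lying on a common circle concentrate at its centre even when the circle is sampled very sparsely.

```python
import numpy as np

def circle_hough(points, radius, grid_shape, n_angles=90):
    """Each localization votes for all candidate centres a fixed radius away."""
    acc = np.zeros(grid_shape, dtype=int)
    ang = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for y, x in points:
        cy = np.round(y - radius * np.sin(ang)).astype(int)
        cx = np.round(x - radius * np.cos(ang)).astype(int)
        ok = (cy >= 0) & (cy < grid_shape[0]) & (cx >= 0) & (cx < grid_shape[1])
        # Fancy-index += collapses duplicate cells, so each point votes a cell once.
        acc[cy[ok], cx[ok]] += 1
    return acc

# Only 10 localizations on a circle of radius 8 centred at (20, 20).
ts = np.linspace(0, 2 * np.pi, 10, endpoint=False)
pts = [(round(20 + 8 * np.sin(t)), round(20 + 8 * np.cos(t))) for t in ts]
acc = circle_hough(pts, 8, (41, 41))
```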

  8. Understanding deformation with high angular resolution electron backscatter diffraction (HR-EBSD)

    NASA Astrophysics Data System (ADS)

    Britton, T. B.; Hickey, J. L. R.

    2018-01-01

    High angular resolution electron backscatter diffraction (HR-EBSD) affords an increase in angular resolution, as compared to ‘conventional’ Hough transform based EBSD, of two orders of magnitude, enabling measurements of relative misorientations of 1 × 10⁻⁴ rad (~0.006°) and changes in (deviatoric) lattice strain with a precision of 1 × 10⁻⁴. This is achieved through direct comparison of two or more diffraction patterns using sophisticated cross-correlation based image analysis routines. Image shifts between zone axes in the two correlated diffraction patterns are measured with sub-pixel precision, which realises the ability to measure changes in interplanar angles and lattice orientation with a high degree of sensitivity. These shifts are linked to strains and lattice rotations through simple geometry. In this manuscript, we outline the basis of the technique and two case studies that highlight its potential to tackle real materials science challenges, such as deformation patterning in polycrystalline alloys.
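    The pattern-shift measurement at the heart of HR-EBSD can be illustrated with whole-pixel phase correlation (a sketch only; real HR-EBSD measures shifts of many small regions of interest to sub-pixel precision): the normalized cross-power spectrum of two patterns peaks at their relative shift.

```python
import numpy as np

def pattern_shift(ref, moved):
    """Phase correlation: the normalized cross-power spectrum peaks at the shift."""
    R = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    R /= np.abs(R) + 1e-12
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                      # wrap negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Recover a known circular shift between two "patterns".
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
moved = np.roll(ref, (2, -3), axis=(0, 1))
shift = pattern_shift(ref, moved)
```

    Sub-pixel precision is typically obtained on top of this by interpolating around the correlation peak or by upsampled cross-correlation.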

  9. Fast Traffic Sign Recognition with a Rotation Invariant Binary Pattern Based Feature

    PubMed Central

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-01

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neutral Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed. PMID:25608217

  10. Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images

    PubMed Central

    Pu, Shi; Vosselman, George

    2009-01-01

    Laser data and optical data have a complementary nature for three dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in images are extracted using Canny extractor and Hough transformation, and compared with current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenge problems throughout the collaborated reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining works of this approach are also discussed. PMID:22408539

  11. Track vertex reconstruction with neural networks at the first level trigger of Belle II

    NASA Astrophysics Data System (ADS)

    Neuhaus, Sara; Skambraks, Sebastian; Kiesling, Christian

    2017-08-01

    The track trigger is one of the main components of the Belle II first level trigger, taking input from the Central Drift Chamber (CDC). It consists of several stages, first combining hits to track segments, followed by a 2D track finding in the transverse plane and finally a 3D track reconstruction. The results of the track trigger are the track multiplicity, the momentum vector of each track and the longitudinal displacement of the origin or production vertex of each track ("z-vertex"). The latter allows to reject background tracks from outside of the interaction region and thus to suppress a large fraction of the machine background. This contribution focuses on the track finding stage using Hough transforms and on the z-vertex reconstruction with neural networks. We describe the algorithms and show performance studies on simulated events.
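    The 2D track-finding stage can be sketched with a Hough transform for circles through the origin (a simplified illustration, not the Belle II trigger code; parameter ranges are invented): a hit at (x, y) on a track circle of curvature kappa = 1/(2R) through the interaction point satisfies kappa = (x cos a + y sin a)/(x² + y²), so each hit votes a sinusoid in (a, kappa) space and a track appears as an accumulator peak.

```python
import numpy as np

def track_hough(hits, n_alpha=180, n_kappa=100, kappa_max=0.02):
    """Hough transform for circles through the origin in (alpha, kappa) space.
    Opposite centre directions appear as (alpha, kappa) vs. (alpha + pi, -kappa)."""
    alphas = np.linspace(0.0, np.pi, n_alpha, endpoint=False)
    kappas = np.linspace(-kappa_max, kappa_max, n_kappa)
    acc = np.zeros((n_kappa, n_alpha), dtype=int)
    for x, y in hits:
        k = (x * np.cos(alphas) + y * np.sin(alphas)) / (x * x + y * y)
        idx = np.round((k + kappa_max) / (2 * kappa_max) * (n_kappa - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_kappa)
        acc[idx[ok], np.arange(n_alpha)[ok]] += 1
    return acc, alphas, kappas

# Hits on a circle of radius R = 50 through the origin, centre direction alpha = pi/4.
R, alpha = 50.0, np.pi / 4
ts = np.linspace(alpha + np.pi + 0.3, alpha + np.pi + 1.8, 20)
hits = [(R * np.cos(alpha) + R * np.cos(t), R * np.sin(alpha) + R * np.sin(t))
        for t in ts]
acc, alphas, kappas = track_hough(hits)
i, j = np.unravel_index(acc.argmax(), acc.shape)
```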

  12. Robust vehicle detection in different weather conditions: Using MIPM

    PubMed Central

    Menéndez, José Manuel; Jiménez, David

    2018-01-01

    Intelligent Transportation Systems (ITS) allow us to have high quality traffic information to reduce the risk of potentially critical situations. Conventional image-based traffic detection methods have difficulties acquiring good images due to perspective and background noise, poor lighting and weather conditions. In this paper, we propose a new method to accurately segment and track vehicles. After removing perspective using Modified Inverse Perspective Mapping (MIPM), the Hough transform is applied to extract road lines and lanes. Then, Gaussian Mixture Models (GMM) are used to segment moving objects, and to tackle car shadow effects we apply a chromaticity-based strategy. Finally, performance is evaluated through three different video benchmarks: our own recorded videos in Madrid and Tehran (with different weather conditions in urban and interurban areas); and two well-known public datasets (KITTI and DETRAC). Our results indicate that the proposed algorithms are robust and more accurate compared to others, especially when facing occlusions, lighting variations and weather conditions. PMID:29513664

  13. Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.

    PubMed

    Xiao Yang; Jianjiang Feng; Jie Zhou

    2014-05-01

    The dictionary-based orientation field estimation approach has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve the performance. Realizing that ridge orientations at different locations of fingerprints have different characteristics, we propose a localized dictionaries-based orientation field estimation algorithm, in which the noisy orientation patch output by a local estimation approach at a given location is replaced by a real orientation patch from the local dictionary at the same location. The precondition for applying localized dictionaries is that the pose of the latent fingerprint needs to be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show that the proposed method outperforms previous ones markedly.

  14. Investigation of the detection of shallow tunnels using electromagnetic and seismic waves

    NASA Astrophysics Data System (ADS)

    Counts, Tegan; Larson, Gregg; Gürbüz, Ali Cafer; McClellan, James H.; Scott, Waymond R., Jr.

    2007-04-01

    Multimodal detection of subsurface targets such as tunnels, pipes, reinforcement bars, and structures has been investigated using both ground-penetrating radar (GPR) and seismic sensors with signal processing techniques to enhance localization capabilities. Both systems have been tested in bi-static configurations but the GPR has been expanded to a multi-static configuration for improved performance. The use of two compatible sensors that sense different phenomena (GPR detects changes in electrical properties while the seismic system measures mechanical properties) increases the overall system's effectiveness in a wider range of soils and conditions. Two experimental scenarios have been investigated in a laboratory model with nearly homogeneous sand. Images formed from the raw data have been enhanced using beamforming inversion techniques and Hough Transform techniques to specifically address the detection of linear targets. The processed data clearly indicate the locations of the buried targets of various sizes at a range of depths.

  15. Target recognition for ladar range image using slice image

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Wang, Liang

    2015-12-01

    A shape descriptor and a complete shape-based recognition system using slice images as geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by three-dimensional Hough transform and the corresponding mathematical transformation. The system consists of two processes, the model library construction and recognition. In the model library construction process, a series of range images are obtained after the model object is sampled at preset attitude angles. Then, all the range images are converted into slice images. The number of slice images is reduced by clustering analysis and finding a representation to reduce the size of the model library. In the recognition process, the slice image of the scene is compared with the slice image in the model library. The recognition results depend on the comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and comparison between the slice image representation method and moment invariants representation method is performed. The experimental results show that whether in conditions without noise or with ladar noise, the system has a high recognition rate and low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.

  16. Improved parameter extraction and classification for dynamic contrast enhanced MRI of prostate

    NASA Astrophysics Data System (ADS)

    Haq, Nandinee Fariah; Kozlowski, Piotr; Jones, Edward C.; Chang, Silvia D.; Goldenberg, S. Larry; Moradi, Mehdi

    2014-03-01

    Magnetic resonance imaging (MRI), particularly dynamic contrast enhanced (DCE) imaging, has shown great potential in prostate cancer diagnosis and prognosis. The time course of the DCE images provides measures of the contrast agent uptake kinetics. Also, using pharmacokinetic modelling, one can extract parameters from the DCE-MR images that characterize tumor vascularization and can be used to detect cancer. A requirement for calculating the pharmacokinetic DCE parameters is estimating the Arterial Input Function (AIF), which calls for an accurate segmentation of the cross section of the external femoral artery. In this work we report a semi-automatic method, based on the circular Hough transform, for segmentation of the cross section of the femoral artery in the sequence of DCE images. We also report a machine-learning framework to combine the pharmacokinetic parameters with the model-free contrast agent uptake kinetic parameters extracted from the DCE time course into a nine-dimensional feature vector. This combination of features is used with random forest and support vector machine classification for cancer detection. The MR data is obtained from patients prior to radical prostatectomy. After the surgery, whole-mount histopathology analysis is performed and registered to the DCE-MR images as the diagnostic reference. We show that using a combination of pharmacokinetic parameters and the model-free empirical parameters extracted from the DCE time course results in improved cancer detection compared to using each group of features separately. We also validate the proposed method for calculation of the AIF by comparison with the manual method.
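    The circular Hough transform used for the artery cross-section can be sketched as follows, assuming the radius is known: every edge point votes for all candidate centres one radius away, and the accumulator peak is the centre. A generic sketch on synthetic edge points, not the authors' implementation:

```python
import numpy as np

def hough_circle(edge_points, shape, radius, n_ang=360):
    """Every edge point votes for all candidate circle centres lying
    `radius` away; the accumulator peak is the best centre."""
    acc = np.zeros(shape, dtype=int)
    ang = np.linspace(0.0, 2 * np.pi, n_ang, endpoint=False)
    dx = np.round(radius * np.cos(ang)).astype(int)
    dy = np.round(radius * np.sin(ang)).astype(int)
    for x, y in edge_points:
        cx, cy = x - dx, y - dy
        ok = (cx >= 0) & (cx < shape[0]) & (cy >= 0) & (cy < shape[1])
        np.add.at(acc, (cx[ok], cy[ok]), 1)
    return acc

# Synthetic circle: centre (40, 40), radius 12.
t = np.linspace(0.0, 2 * np.pi, 60, endpoint=False)
pts = list(zip(np.round(40 + 12 * np.cos(t)).astype(int),
               np.round(40 + 12 * np.sin(t)).astype(int)))
acc = hough_circle(pts, (80, 80), 12)
cx, cy = np.unravel_index(np.argmax(acc), acc.shape)
```

When the radius is unknown, the same voting is repeated over a range of radii, giving a 3D accumulator.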

  17. Fusion of pixel and object-based features for weed mapping using unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Gao, Junfeng; Liao, Wenzhi; Nuyttens, David; Lootens, Peter; Vangeyte, Jürgen; Pižurica, Aleksandra; He, Yong; Pieters, Jan G.

    2018-05-01

    Developments in the use of unmanned aerial vehicles (UAVs) and advanced imaging sensors provide new opportunities for ultra-high resolution (e.g., less than 10 cm ground sampling distance (GSD)) crop field monitoring and mapping in precision agriculture applications. In this study, we developed a strategy for inter- and intra-row weed detection in early season maize fields from aerial visual imagery. More specifically, the Hough transform algorithm (HT) was applied to the orthomosaicked images for inter-row weed detection. A semi-automatic Object-Based Image Analysis (OBIA) procedure was developed with Random Forests (RF) combined with feature selection techniques to classify soil, weeds and maize. Furthermore, the two binary weed masks generated from HT and OBIA were fused to produce an accurate binary weed image. The developed RF classifier was evaluated by 5-fold cross validation, and it obtained an overall accuracy of 0.945 and a Kappa value of 0.912. Finally, the relationship between the detected weeds and their ground-truth densities was quantified by a fitted linear model with a coefficient of determination of 0.895 and a root mean square error of 0.026. In addition, the importance of the input features was evaluated, and the ratio of vegetation length to width was found to be the most significant feature for the classification model. Overall, our approach yields a satisfactory weed map, and we expect that the accurate and timely weed maps obtained from UAV imagery will be applicable to site-specific weed management (SSWM) in early season crop fields, reducing the spraying of non-selective herbicides and costs.
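    The 5-fold cross validation used to evaluate the RF classifier can be sketched as a simple shuffled index split. This is a generic sketch (the authors presumably used an existing library implementation), with an invented sample count:

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds,
    yielding (train_idx, test_idx) pairs for cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

splits = list(kfold_indices(100, k=5))
```

Each of the 5 folds serves as the held-out test set exactly once, so every sample is tested once and trained on k-1 times.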

  18. Tuning Fractures With Dynamic Data

    NASA Astrophysics Data System (ADS)

    Yao, Mengbi; Chang, Haibin; Li, Xiang; Zhang, Dongxiao

    2018-02-01

    Flow in fractured porous media is crucial for the production of oil/gas reservoirs and the exploitation of geothermal energy. Flow behaviors in such media are mainly dictated by the distribution of fractures. Measuring and inferring the distribution of fractures is subject to large uncertainty, which, in turn, leads to great uncertainty in the prediction of flow behaviors. Inverse modeling with dynamic data may help constrain fracture distributions, thus reducing the uncertainty of flow prediction. However, inverse modeling for flow in fractured reservoirs is challenging, owing to the discrete and non-Gaussian distribution of fractures, as well as the strong nonlinearity in the relationship between flow responses and model parameters. In this work, building upon a series of recent advances, an inverse modeling approach is proposed to efficiently update the flow model to match the dynamic data while retaining geological realism in the distribution of fractures. In the approach, the Hough-transform method is employed to parameterize non-Gaussian fracture fields with continuous parameter fields, thus rendering the desirable properties required by many inverse modeling methods. In addition, a recently developed forward simulation method, the embedded discrete fracture method (EDFM), is utilized to model the fractures. The EDFM maintains computational efficiency while preserving the ability to capture the geometrical details of fractures, because the matrix is discretized on a structured grid while the fractures, handled as planes, are inserted into the matrix grid cells. The combination of the Hough representation of fractures with the EDFM makes it possible to tune the fractures (by updating their existence, location, orientation, length, and other properties) without requiring either unstructured grids or regridding during updating. Such a treatment is amenable to numerous inverse modeling approaches, such as the iterative inverse modeling method employed in this study, which is capable of dealing with strongly nonlinear problems. A series of numerical case studies of increasing complexity is set up to examine the performance of the proposed approach.
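    One way such a continuous parameterization can be mapped back to fracture geometry is sketched below. The parameter names (centre, orientation, length) are illustrative only and not the paper's exact Hough-function formulation:

```python
import numpy as np

def fracture_endpoints(xc, yc, theta, length):
    """Map a continuous fracture parameterization (centre, orientation,
    length) to the two segment endpoints."""
    dx = 0.5 * length * np.cos(theta)
    dy = 0.5 * length * np.sin(theta)
    return (xc - dx, yc - dy), (xc + dx, yc + dy)

# A horizontal fracture of length 4 centred at (5, 5).
p0, p1 = fracture_endpoints(5.0, 5.0, 0.0, 4.0)
```

Because the geometry is a smooth function of the parameters, an inverse-modeling update of (xc, yc, theta, length) moves, rotates, or stretches the fracture without any regridding.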

  19. Optic cup segmentation: type-II fuzzy thresholding approach and blood vessel extraction

    PubMed Central

    Almazroa, Ahmed; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-01-01

    We introduce here a new technique for segmenting the optic cup using two-dimensional fundus images. Cup segmentation is the most challenging part of image processing of the optic nerve head due to the complexity of its structure, and using the blood vessels to segment the cup is important. We report on blood vessel extraction using first a top-hat transform and Otsu's segmentation function to detect the curves in the blood vessels (kinks) which indicate the cup boundary. This was followed by an interval type-II fuzzy entropy procedure. Finally, the Hough transform was applied to approximate the cup boundary. The algorithm was evaluated on 550 fundus images from a large dataset containing three different sets of images, in which the cup was manually marked by six ophthalmologists. First, the accuracy of the algorithm was tested on the three image sets independently; the final cup detection accuracy in terms of area and centroid was calculated to be 78.2% of 441 images. We then compared the algorithm's performance with the manual markings of the six ophthalmologists, determining the agreement among the ophthalmologists as well as with the algorithm. The best agreement was between ophthalmologists one, two and five in 398 of 550 images, while the algorithm agreed with them in 356 images. PMID:28515636
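    The Otsu segmentation step mentioned above can be sketched as an exhaustive search for the threshold maximizing between-class variance of the grey-level histogram. A generic NumPy sketch on a synthetic bimodal image (not the authors' code):

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustively search for the threshold that maximizes the
    between-class variance of the grey-level histogram (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal test image: dark background (40) and a bright blob (200).
img = np.full((32, 32), 40, dtype=np.uint8)
img[8:24, 8:24] = 200
t = otsu_threshold(img)
```

On a cleanly bimodal image the returned threshold separates the two modes, so thresholding at `t` isolates the bright blob.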

  1. Shape Adaptive, Robust Iris Feature Extraction from Noisy Iris Images

    PubMed Central

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-01-01

    In current iris recognition systems, the noise-removal step is used only to detect noisy parts of the iris region, and features extracted from there are excluded in the matching step, whereas, depending on the filter structure used in feature extraction, the noisy parts may influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape adaptive wavelet transform and the shape adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image, which decreases the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape adaptive Gabor-wavelet technique improves the recognition rate. PMID:24696801

  3. Detection of the nipple in automated 3D breast ultrasound using coronal slab-average-projection and cumulative probability map

    NASA Astrophysics Data System (ADS)

    Kim, Hannah; Hong, Helen

    2014-03-01

    We propose an automatic method for nipple detection on 3D automated breast ultrasound (3D ABUS) images using coronal slab-average-projection and a cumulative probability map. First, to identify coronal images that show a clear distinction between the nipple-areola region and the skin, the skewness of each coronal image is measured and the negatively skewed images are selected; a coronal slab-average-projection image is then formed from the selected images. Second, to localize the nipple-areola region, an elliptical ROI covering it is detected using the Hough ellipse transform in the coronal slab-average-projection image. Finally, to separate the nipple from the areola region, 3D Otsu's thresholding is applied to the elliptical ROI, and a cumulative probability map is generated within it by assigning high probability to low intensity regions. Falsely detected small components are eliminated using morphological opening, and the center point of the detected nipple region is calculated. Experimental results show that our method provides a 94.4% nipple detection rate.
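    The slice-selection and projection steps can be sketched as follows: compute the sample skewness of each coronal slice, keep the negatively skewed ones, and average them into a slab image. The synthetic volume below is invented for illustration and stands in for real ABUS data:

```python
import numpy as np

def skewness(img):
    """Sample skewness of a slice's intensity distribution."""
    x = img.ravel().astype(float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

def slab_average(volume, mask):
    """Average the selected coronal slices into one projection image."""
    return volume[mask].mean(axis=0)

rng = np.random.default_rng(1)
vol = rng.random((10, 16, 16))              # synthetic 3D volume
vol[3] = 1.0 - rng.random((16, 16)) ** 3    # a negatively skewed slice
sel = np.array([skewness(sl) < 0 for sl in vol])
proj = slab_average(vol, sel)
```

Slices whose intensity histogram has a long tail toward dark values (negative skew) are the ones averaged into the slab projection.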

  4. Visual texture for automated characterisation of geological features in borehole televiewer imagery

    NASA Astrophysics Data System (ADS)

    Al-Sit, Waleed; Al-Nuaimy, Waleed; Marelli, Matteo; Al-Ataby, Ali

    2015-08-01

    Detailed characterisation of the structure of subsurface fractures is greatly facilitated by digital borehole logging instruments, the interpretation of which is typically time-consuming and labour-intensive. Despite recent advances towards autonomy and automation, the final interpretation remains heavily dependent on the skill, experience, alertness and consistency of a human operator. Existing computational tools fail to detect layers between rocks that do not exhibit distinct fracture boundaries, and often struggle to characterise cross-cutting layers and partial fractures. This paper presents a novel approach to the characterisation of planar rock discontinuities from digital images of borehole logs. Multi-resolution texture segmentation and pattern recognition techniques utilising Gabor filters are combined with an iterative adaptation of the Hough transform to enable non-distinct, partial, distorted and steep fractures and layers to be accurately identified and characterised in a fully automated fashion. This approach has successfully detected fractures and layers with high accuracy at relatively low computational cost.

  5. Vanishing Point Extraction and Refinement for Robust Camera Calibration

    PubMed Central

    Tsai, Fuan

    2017-01-01

    This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experiment results indicate that the vanishing point refinement process can significantly improve camera calibration parameters and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966

  6. Object recognition and localization from 3D point clouds by maximum-likelihood estimation

    NASA Astrophysics Data System (ADS)

    Dantanarayana, Harshana G.; Huntley, Jonathan M.

    2017-08-01

    We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike `interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.

  7. The algorithm of motion blur image restoration based on PSF half-blind estimation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ke; Lin, Zhe

    2011-08-01

    A novel motion-blur image restoration algorithm based on PSF half-blind estimation with the Hough transform is introduced, built on a full analysis of the TDICCD camera principle and motivated by the fact that using a vertical uniform linear motion estimate as the initial PSF value, as in the IBD algorithm, leads to restoration distortion. First, the mathematical model of image degradation was established using prior information from multi-frame images, and the two parameters with crucial influence on PSF estimation (motion blur length and angle) were set accordingly. Finally, the restored image was obtained through multiple iterations on this initial PSF estimate in the Fourier domain. Experimental results show that the proposed algorithm not only effectively solves the image distortion caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detail characteristics of the original image.
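    The two estimated parameters (blur length and angle) define a linear-motion PSF. A minimal sketch of rasterizing such a PSF under the usual line-kernel model; this illustrates the PSF construction only, not the paper's estimation procedure:

```python
import numpy as np

def motion_psf(length, angle_deg, size):
    """Rasterize a normalized linear-motion-blur PSF: a line of the
    given length and angle centred in a size x size kernel."""
    psf = np.zeros((size, size))
    c = size // 2
    a = np.radians(angle_deg)
    n_steps = max(4 * int(length), 4)
    for t in np.linspace(-length / 2.0, length / 2.0, n_steps):
        x = int(round(c + t * np.cos(a)))
        y = int(round(c + t * np.sin(a)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()   # normalize so blurring preserves brightness

# Horizontal blur of length 7 in a 15 x 15 kernel.
psf = motion_psf(7, 0.0, 15)
```

Convolving a sharp image with this kernel simulates the degradation; deconvolution methods such as IBD iterate between refining the image and this PSF estimate.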

  8. Remote Sensing of Mars: Detection of Impact Craters on the Mars Global Surveyor DTM by Integrating Edge- and Region-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Athanassas, C. D.; Vaiopoulos, A.; Kolokoussis, P.; Argialas, D.

    2018-03-01

    This study integrates two different computer vision approaches, namely the circular Hough transform (CHT) and the determinant of Hessian (DoH), to automatically detect the largest possible number of craters of any size on the digital terrain model (DTM) generated by the Mars Global Surveyor mission. Specifically, application of the standard version of the CHT to the DTM captured a great number of craters with diameters smaller than 50 km only, failing to capture larger craters. On the other hand, the DoH was successful in detecting craters undetected by the CHT, but its performance was deterred by the irregularity of the encompassed topographic surface: strongly undulated and inclined (trended) topographies hindered crater detection. When run on a de-trended DTM (with the topology kept unaltered), the DoH scored higher. Current results, although not optimal, encourage the combined use of CHT and DoH for routine crater detection undertakings.
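    De-trending a DTM, as described above, can be sketched as fitting and subtracting a least-squares plane. A generic sketch with a synthetic tilted surface and a crater-like pit (sizes and slopes are invented):

```python
import numpy as np

def detrend_plane(dtm):
    """Least-squares fit a plane z = a*x + b*y + c to the DTM and
    subtract it, removing the regional slope."""
    h, w = dtm.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, dtm.ravel(), rcond=None)
    return dtm - (A @ coef).reshape(h, w)

# A tilted plane with one crater-like pit: after de-trending, only
# the pit should stand out.
yy, xx = np.mgrid[0:50, 0:50]
dtm = 0.3 * xx - 0.1 * yy + 5.0
dtm[20:25, 20:25] -= 2.0
flat = detrend_plane(dtm)
```

After the regional slope is removed, a blob detector such as the determinant of Hessian responds to the pit rather than to the trend.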

  9. Shape from texture: an evaluation of visual cues

    NASA Astrophysics Data System (ADS)

    Mueller, Wolfgang; Hildebrand, Axel

    1994-05-01

    In this paper an integrated approach is presented to understand and control the influence of texture on shape perception. Following Gibson's hypothesis, which states that texture is a mathematically and psychologically sufficient stimulus for surface perception, we evaluate different perceptual cues. Starting out from the perception-based texture classification introduced by Tamura et al., we build up a uniformly sampled parameter space. For the synthesis of some of our textures we use the texture description language HiLDTe, and to acquire the desired texture specification we take advantage of a genetic algorithm. Employing these textures, we conduct a number of psychological tests to evaluate the significance of the different texture features. The results of these tests are then synthesized to constitute new shape-analysis techniques. Since the vanishing point appears to be an important visual cue, we introduce the Hough transform. An outlook on future work within the field of visual computing is provided in the final section.

  10. Identification of Buried Objects in GPR Using Amplitude Modulated Signals Extracted from Multiresolution Monogenic Signal Analysis

    PubMed Central

    Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu

    2015-01-01

    It is necessary to detect the target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called Multiresolution Monogenic Signal Analysis (MMSA) is applied to GPR images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image; the amplitude component enhances the target reflection and largely suppresses the direct and reflected waves. Then we use a region-of-interest extraction method to separate genuine target reflections from spurious ones by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146

  11. Automatic Feature Extraction from Planetary Images

    NASA Technical Reports Server (NTRS)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images has already been acquired and many more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including watershed segmentation and the generalized Hough transform. The method has many applications, among which is image registration, and it can be applied to arbitrary planetary images.

  12. A real-time photogrammetric algorithm for sensor and synthetic image fusion with application to aviation combined vision

    NASA Astrophysics Data System (ADS)

    Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.

    2014-08-01

    The paper addresses a promising visualization concept related to the combination of sensor and synthetic images in order to enhance the situation awareness of a pilot during aircraft landing. A real-time algorithm for the fusion of a sensor image, acquired by an onboard camera, and a synthetic 3D image of the external view, generated in an onboard computer, is proposed. The pixel correspondence between the sensor and synthetic images is obtained by exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, whose idea is to project the edge map onto a horizontal plane in the object space (the runway plane) and then to calculate intensity projections of edge pixels along different directions of the intensity gradient. Experiments on simulated images show that on a base glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.
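    Projecting the edge map onto the runway plane amounts to applying a plane-to-plane homography to edge pixel coordinates. A minimal helper is sketched below; the 3x3 matrix is a toy example for illustration, not a calibrated camera model:

```python
import numpy as np

def project_points(H, pts):
    """Apply a 3x3 homography to 2D points (homogeneous divide),
    e.g. to map image edge pixels onto a ground plane."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

H = np.array([[1.0, 0.0, 5.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])   # toy homography: shift x, stretch y
out = project_points(H, np.array([[1.0, 1.0], [3.0, 2.0]]))
```

With a real camera, H would come from the intrinsics and the exterior orientation relative to the runway plane; the projected edge points are then accumulated along candidate gradient directions.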

  13. Automatic road sign detection and classification based on support vector machines and HOG descriptors

    NASA Astrophysics Data System (ADS)

    Adam, A.; Ioannidis, C.

    2014-05-01

    This paper examines the detection and classification of road signs in color images acquired by a low-cost camera mounted on a moving vehicle. A new method for the detection and classification of road signs is proposed based on color-based detection, in order to locate regions of interest. Then, a circular Hough transform is applied to complete the detection, taking advantage of the shape properties of the road signs. The regions of interest are finally represented using HOG descriptors and are fed into trained Support Vector Machines (SVMs) in order to be recognized. For the training procedure, a database with several training examples depicting Greek road signs has been developed. Many experiments have been conducted and are presented to measure the efficiency of the proposed methodology, especially under adverse weather conditions and poor illumination. For the experiments, training datasets consisting of different numbers of examples were used, and the results are presented along with some possible extensions of this work.
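    The HOG representation is built from magnitude-weighted histograms of gradient orientations. A minimal single-cell sketch (the full descriptor adds cells, blocks, and block normalization; the step-edge patch is invented for illustration):

```python
import numpy as np

def grad_orientation_hist(patch, n_bins=9):
    """Magnitude-weighted histogram of gradient orientations over
    0-180 degrees -- the core computation of one HOG cell."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist / (np.linalg.norm(hist) + 1e-12)

# A vertical step edge: all gradient energy falls in the 0-degree bin.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
h = grad_orientation_hist(patch)
```

Concatenating such histograms over a grid of cells yields the feature vector that is fed to the SVM.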

  14. An improved algorithm of laser spot center detection in strong noise background

    NASA Astrophysics Data System (ADS)

    Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong

    2018-01-01

    Laser spot center detection is demanded in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, firstly, median filtering was used to remove noise while preserving the edge details of the image. Secondly, binarization of the laser facula image was carried out to extract the target image from the background. Then morphological filtering was performed to eliminate noise points inside and outside the spot. At last, the edge of the pretreated facula image was extracted and the laser spot center was obtained using the circle fitting method. Building on the circle fitting algorithm, the improved method thus adds median filtering, morphological filtering and other processing steps. Theoretical analysis and experimental verification show that the method effectively filters background noise, which enhances the anti-interference ability of laser spot center detection and also improves detection accuracy.
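    The final circle-fitting step can be sketched with the classical Kasa least-squares fit, which linearizes the circle equation x^2 + y^2 = 2ax + 2by + (r^2 - a^2 - b^2). This is a generic method, not necessarily the authors' exact fit; the edge points below are synthetic:

```python
import numpy as np

def fit_circle(xs, ys):
    """Kasa least-squares circle fit: linearize the circle equation
    and solve the resulting linear system for centre and radius."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones(len(xs))])
    rhs = xs ** 2 + ys ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Slightly perturbed edge points of a circle: centre (10, -3), radius 4.
t = np.linspace(0.0, 2 * np.pi, 50, endpoint=False)
xs = 10 + 4 * np.cos(t) + 0.01 * np.sin(7 * t)
ys = -3 + 4 * np.sin(t)
a, b, r = fit_circle(xs, ys)
```

Because the problem is linear, the fit is fast and needs no initial guess, which is why circle fitting on cleaned edge pixels can replace a full Hough accumulator here.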

  15. Semi-automatic image analysis methodology for the segmentation of bubbles and drops in complex dispersions occurring in bioreactors

    NASA Astrophysics Data System (ADS)

    Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.

    2006-09-01

    Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of the diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm, which was tested in two-, three- and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no statistically significant differences. The method was able to reduce the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.

  16. Towards photometry pipeline of the Indonesian space surveillance system

    NASA Astrophysics Data System (ADS)

    Priyatikanto, Rhorom; Religia, Bahar; Rachman, Abdul; Dani, Tiar

    2015-09-01

    Optical observation with a sub-meter telescope equipped with a CCD camera is becoming an alternative method for improving orbital debris detection and surveillance. This observational mode is expected to cover medium-sized objects in higher orbits (e.g. MEO, GTO, GSO & GEO), beyond the reach of typical radar systems. However, such observation of fast-moving objects demands special treatment and analysis techniques. In this study, we performed photometric analysis of satellite track images photographed with the rehabilitated Schmidt Bima Sakti telescope at Bosscha Observatory. The Hough transform was implemented to automatically detect linear streaks in the images. From this analysis and comparison with the USSPACECOM catalog, two satellites were identified and associated with the inactive Thuraya-3 satellite and Satcom-3 debris, both located in geostationary orbit. Further aperture photometry revealed the periodicity of the tumbling Satcom-3 debris. In the near future, a similar scheme could be applied to establish an analysis pipeline for an optical space surveillance system hosted in Indonesia.
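
    The streak-detection idea can be illustrated with a minimal (theta, rho) Hough accumulator; the frame size and the synthetic vertical streak below are invented for illustration:

```python
import numpy as np

def hough_lines(points, diag, n_theta=180, n_rho=200):
    """Vote each point into a (theta, rho) accumulator using the normal
    line form rho = x*cos(theta) + y*sin(theta); peaks mark straight streaks."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((r + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), idx] += 1
    return acc, thetas, rhos

# vertical streak x = 12 across a 64x64 frame
acc, thetas, rhos = hough_lines([(12, y) for y in range(64)], diag=64 * 2 ** 0.5)
ti, ri = np.unravel_index(acc.argmax(), acc.shape)
```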

  17. Random discrete linear canonical transform.

    PubMed

    Wei, Deyun; Wang, Ruikui; Li, Yuan-Min

    2016-12-01

    Linear canonical transforms (LCTs) are a family of integral transforms with wide applications in optical, acoustical, electromagnetic, and other wave propagation problems. In this paper, we propose the random discrete linear canonical transform (RDLCT) by randomizing the kernel transform matrix of the discrete linear canonical transform (DLCT). The RDLCT inherits excellent mathematical properties from the DLCT along with some distinctive features of its own. It has a greater degree of randomness because the randomization acts on both eigenvectors and eigenvalues. Numerical simulations demonstrate an important feature of the RDLCT: both the magnitude and phase of its output are random. As an important application, the RDLCT can be used for image encryption. The simulation results demonstrate that the proposed encryption method is a security-enhanced image encryption scheme.

  18. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution.

    PubMed

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

    This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). To extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images obtained from sub-bands of the fast discrete curvelet transform (FDCT) with similar directions and different scales. For this purpose, each directional image is processed using first-order derivative information and eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained with an iterative region-growing algorithm that merges centerline images with the contents of images produced by a modified top-hat transform followed by bit-plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image using multi-structure-element morphology and modification of FDCT coefficients. Then, a Canny edge detector and Hough transform are applied to the reconstructed image to extract the boundaries of candidate regions. In the next step, information on the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction.
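
    As a generic illustration of Hessian-eigenvalue analysis for vessel-like (tubular) structures, the sketch below computes the two Hessian eigenvalues per pixel with finite differences; on a bright ridge, the cross-ridge eigenvalue is strongly negative while the along-ridge one stays near zero. The synthetic Gaussian ridge is invented for illustration and is not the paper's curvelet-domain implementation:

```python
import numpy as np

def hessian_eigvals(img):
    """Per-pixel eigenvalues of the 2x2 image Hessian, sorted by magnitude
    (small, large). Second derivatives come from repeated np.gradient calls."""
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    l1, l2 = tr / 2 - disc, tr / 2 + disc
    swap = np.abs(l1) > np.abs(l2)
    return np.where(swap, l2, l1), np.where(swap, l1, l2)

# bright horizontal ridge (Gaussian profile across rows) in a 16x16 patch
ys = np.arange(16.0)
img = np.exp(-(ys - 8.0) ** 2 / 8.0)[:, None] * np.ones((1, 16))
lam_small, lam_large = hessian_eigvals(img)
```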

  19. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution

    PubMed Central

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

    This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). To extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images obtained from sub-bands of the fast discrete curvelet transform (FDCT) with similar directions and different scales. For this purpose, each directional image is processed using first-order derivative information and eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained with an iterative region-growing algorithm that merges centerline images with the contents of images produced by a modified top-hat transform followed by bit-plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image using multi-structure-element morphology and modification of FDCT coefficients. Then, a Canny edge detector and Hough transform are applied to the reconstructed image to extract the boundaries of candidate regions. In the next step, information on the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction. PMID:26284170

  20. Application of diffusion barriers to the refractory fibers of tungsten, columbium, carbon and aluminum oxide

    NASA Technical Reports Server (NTRS)

    Douglas, F. C.; Paradis, E. L.; Veltri, R. D.

    1973-01-01

    A radio frequency powered ion-plating system was used to plate protective layers of refractory oxides and carbide onto high strength fiber substrates. Subsequent overplating of these combinations with nickel and titanium was made to determine the effectiveness of such barrier layers in preventing diffusion of the overcoat metal into the fibers with consequent loss of fiber strength. Four substrates, five coatings, and two metal matrix materials were employed for a total of forty material combinations. The substrates were tungsten, niobium, NASA-Hough carbon, and Tyco sapphire. The diffusion-barrier coatings were aluminum oxide, yttrium oxide, titanium carbide, tungsten carbide with 14% cobalt addition, and zirconium carbide. The metal matrix materials were IN 600 nickel and Ti 6/4 titanium. Adhesion of the coatings to all substrates was good except for the NASA-Hough carbon, where flaking off of the oxide coatings in particular was observed.

  1. Photoacoustic image reconstruction: a quantitative analysis

    NASA Astrophysics Data System (ADS)

    Sperl, Jonathan I.; Zell, Karin; Menzenbach, Peter; Haisch, Christoph; Ketzer, Stephan; Marquart, Markus; Koenig, Hartmut; Vogel, Mika W.

    2007-07-01

    Photoacoustic imaging is a promising new way to generate unprecedented contrast in ultrasound diagnostic imaging. It differs from other medical imaging approaches in that it provides spatially resolved information about optical absorption of targeted tissue structures. Because the data acquisition process deviates from standard clinical ultrasound, choice of the proper image reconstruction method is crucial for successful application of the technique. In the literature, multiple approaches have been advocated, and the purpose of this paper is to compare four reconstruction techniques, focusing on resolution limits, stability, reconstruction speed, and SNR. We generated experimental and simulated data and reconstructed images of the pressure distribution using four different methods: delay-and-sum (DnS), circular backprojection (CBP), generalized 2D Hough transform (HTA), and Fourier transform (FTA). All methods were able to depict the point sources properly. DnS and CBP produce blurred images containing typical superposition artifacts. The HTA provides excellent SNR and allows good point-source separation. The FTA is the fastest and shows the best FWHM. In our study, we found the FTA to show the best overall performance: it allows a very fast and theoretically exact reconstruction. Only a hardware-implemented DnS might be faster and enable real-time imaging. A commercial system may also implement several methods to fully exploit the new contrast mechanism and guarantee optimal resolution and fidelity.
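
    Of the four methods, delay-and-sum is the simplest to sketch: each image pixel sums, over sensors, the recorded sample at the acoustic time of flight from that pixel to the sensor. The ring geometry, sampling rate, and single point source below are invented for illustration:

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, grid, c, fs):
    """Delay-and-sum: each pixel accumulates every sensor's sample taken
    at the acoustic time of flight from that pixel to the sensor."""
    img = np.zeros(len(grid))
    n = signals.shape[1]
    for trace, (sx, sy) in zip(signals, sensor_pos):
        d = np.hypot(grid[:, 0] - sx, grid[:, 1] - sy)  # pixel-sensor distance [m]
        idx = np.clip(np.round(d / c * fs).astype(int), 0, n - 1)
        img += trace[idx]
    return img

c, fs = 1500.0, 2.0e7                 # speed of sound [m/s], sampling rate [Hz]
ang = np.linspace(0, 2 * np.pi, 8, endpoint=False)
sensors = 0.01 * np.column_stack([np.cos(ang), np.sin(ang)])  # ring, r = 1 cm
signals = np.zeros((8, 300))
signals[:, int(round(0.01 / c * fs))] = 1.0  # pulse from a source at the origin
grid = np.array([[0.0, 0.0], [0.002, 0.0], [0.0, 0.004]])
img = delay_and_sum(signals, sensors, grid, c, fs)
```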

  2. Fully automatic detection and visualization of patient specific coronary supply regions

    NASA Astrophysics Data System (ADS)

    Fritz, Dominik; Wiedemann, Alexander; Dillmann, Ruediger; Scheuering, Michael

    2008-03-01

    Coronary territory maps, which associate myocardial regions with the coronary arteries that supply them, are a common visualization technique to assist the physician in the diagnosis of coronary artery disease. However, the commonly used visualization is based on the AHA 17-segment model, an empirical population-based model, and therefore does not necessarily reflect the often highly individual coronary anatomy of a specific patient. In this paper we introduce a novel, fully automatic approach to compute patient-individual coronary supply regions in CTA datasets. The approach is divided into three consecutive steps. First, the aorta is located fully automatically in the dataset with a combination of a Hough transform and a cylindrical model-matching approach. Once the aorta is located, segmentation and skeletonization of the coronary tree are triggered. In the next step, the three main branches (LAD, LCX and RCX) are automatically labeled, based on knowledge of the pose of the aorta and the left ventricle. In the last step the labeled coronary tree is projected onto the left ventricular surface, which can afterward be subdivided into coronary supply regions based on a Voronoi transform. The resulting supply regions can be shown either in 3D on the epicardial surface of the left ventricle or as a subdivision of a polar map.
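
    The final subdivision step can be illustrated with a discrete Voronoi labeling: every surface point takes the label of the nearest projected branch point. The seed coordinates and grid below are invented for illustration, not taken from the paper:

```python
import numpy as np

def voronoi_labels(seeds, labels, shape):
    """Discrete Voronoi transform: each grid cell gets the label of the
    nearest seed point (squared Euclidean distance)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1)
    d2 = ((pts[:, None, :] - np.asarray(seeds)[None, :, :]) ** 2).sum(axis=2)
    return np.asarray(labels)[d2.argmin(axis=1)].reshape(shape)

# three hypothetical branch seeds labeled with the main coronary branches
regions = voronoi_labels([(0, 0), (0, 9), (9, 5)], ["LAD", "LCX", "RCX"], (10, 10))
```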

  3. Automated visual inspection of brake shoe wear

    NASA Astrophysics Data System (ADS)

    Lu, Shengfang; Liu, Zhen; Nan, Guo; Zhang, Guangjun

    2015-10-01

    With the rapid development of high-speed railways, automated fault inspection is necessary to ensure safe train operation. Visual technology is receiving growing attention in trouble detection and maintenance. For a linear CCD camera, image alignment is the first step in fault detection. To increase the speed of image processing, an improved scale-invariant feature transform (SIFT) method is presented. The image is divided into multiple levels of different resolution, and features are extracted from the lowest resolution upward until sufficient SIFT key points are found; at that level, the image is registered and aligned quickly. In the inspection stage, we focus on detecting faults of the brake shoe, one of the key components of the brake system on electrical multiple unit (EMU) trains. Early warning of wear beyond its limit is very important in fault detection. In this paper, we propose an automatic inspection approach to detect brake shoe faults. Firstly, we use multi-resolution pyramid template matching to quickly locate the brake shoe. Then, we employ the Hough transform to detect the circles of bolts in the brake region. Owing to the rigid structure, we can identify whether the brake shoe has a fault. The experiments demonstrate that the proposed approach performs well and can meet the needs of practical applications.
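
    The coarse-to-fine idea behind multi-resolution pyramid template matching can be sketched as follows; the sum-of-squared-differences score, the single pyramid level, and the synthetic image are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def downsample(img):
    """One pyramid level: 2x2 average pooling."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def match_ssd(img, tpl, ys, xs):
    """Exhaustive sum-of-squared-differences match over candidate offsets."""
    th, tw = tpl.shape
    best, pos = np.inf, None
    for y in ys:
        for x in xs:
            d = ((img[y:y + th, x:x + tw] - tpl) ** 2).sum()
            if d < best:
                best, pos = d, (y, x)
    return pos

def pyramid_match(img, tpl):
    """Locate the template at half resolution, then refine the offset
    in a small window at full resolution."""
    cy, cx = match_ssd(downsample(img), downsample(tpl),
                       range(img.shape[0] // 2 - tpl.shape[0] // 2 + 1),
                       range(img.shape[1] // 2 - tpl.shape[1] // 2 + 1))
    ys = range(max(0, 2 * cy - 2),
               min(img.shape[0] - tpl.shape[0], 2 * cy + 2) + 1)
    xs = range(max(0, 2 * cx - 2),
               min(img.shape[1] - tpl.shape[1], 2 * cx + 2) + 1)
    return match_ssd(img, tpl, ys, xs)

# synthetic test: embed an 8x8 patch at (10, 14) in a 40x40 noise image
rng = np.random.default_rng(0)
img = rng.random((40, 40))
tpl = rng.random((8, 8))
img[10:18, 14:22] = tpl
pos = pyramid_match(img, tpl)
```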

  4. New Journalism.

    ERIC Educational Resources Information Center

    Fishwick, Marshall, Ed.

    This volume contains a selection of articles which examine, critique, and help to define the phenomenon of new journalism. Included are "Popular Culture and the New Journalism" (Marshall Fishwick), "Entrance" (Richard A. Kallan), "How 'New'?" (George A. Hough III), "Journalistic Primitivism" (Everette E. Dennis), "Wherein Lies the Value?" (Michael…

  5. Fast Human Detection for Intelligent Monitoring Using Surveillance Visible Sensors

    PubMed Central

    Ko, Byoung Chul; Jeong, Mira; Nam, JaeYeal

    2014-01-01

    Human detection using visible surveillance sensors is an important and challenging task for intruder detection and safety management. The biggest barrier to real-time human detection is the computational time required for dense image scaling and for scanning windows extracted from an entire image. This paper proposes fast human detection by selecting optimal levels of image scale using each level's adaptive region-of-interest (ROI). To estimate the image-scaling level, we generate a Hough windows map (HWM) and select a few optimal image scales based on the strength of the HWM and a divide-and-conquer algorithm. Furthermore, adaptive ROIs are arranged per image scale to provide a different search area. We employ a cascade random forests classifier to separate candidate windows into human and nonhuman classes. The proposed algorithm has been successfully applied to real-world surveillance video sequences, and its detection accuracy and computational speed show better performance than those of other related methods. PMID:25393782

  6. Image classification of unlabeled malaria parasites in red blood cells.

    PubMed

    Zheng Zhang; Ong, L L Sharon; Kong Fang; Matthew, Athul; Dauwels, Justin; Ming Dao; Asada, Harry

    2016-08-01

    This paper presents a method to detect unlabeled malaria parasites in red blood cells. The current "gold standard" for malaria diagnosis is microscopic examination of thick blood smears, a time-consuming process requiring extensive training. Our goal is to develop an automated process to identify malaria-infected red blood cells. Major issues in automated analysis of microscopy images of unstained blood smears include overlapping cells and oddly shaped cells. Our approach creates robust templates to detect infected and uninfected red cells. Histogram of Oriented Gradients (HOG) features are extracted from the templates and used to train a classifier offline. Next, the Viola-Jones object detection framework is applied to detect infected and uninfected red cells and the image background. Results show our approach outperforms classification approaches with PCA features by 50% and cell detection algorithms applying Hough transforms by 24%. The majority of related work is designed to automatically detect stained parasites in blood smears where the cells are fixed. Although it is more challenging to design algorithms for unstained parasites, our methods will allow analysis of parasite progression in live cells under different drug treatments.
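
    A minimal sketch of HOG-style features (per-cell orientation histograms weighted by gradient magnitude, without the block normalization of the full descriptor); the synthetic ramp image is invented for illustration:

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Per-cell histograms of gradient orientation (unsigned, 0-180 deg),
    weighted by gradient magnitude -- HOG without block normalization."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    h, w = img.shape
    feats = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            np.add.at(feats[i, j], bin_idx[sl].ravel(), mag[sl].ravel())
    return feats

# synthetic patch with a pure horizontal intensity ramp (gradient angle 0)
img = np.tile(np.arange(16.0), (16, 1))
feats = hog_cells(img)
```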

  7. Natural Inspired Intelligent Visual Computing and Its Application to Viticulture.

    PubMed

    Ang, Li Minn; Seng, Kah Phooi; Ge, Feng Lu

    2017-05-23

    This paper presents an investigation of nature-inspired intelligent computing and its application to visual information processing systems for viticulture. The paper makes three contributions: (1) a review of visual information processing applications for viticulture; (2) the development of nature-inspired computing algorithms based on artificial immune system (AIS) techniques for grape berry detection; and (3) the application of the developed algorithms to real-world grape berry images captured in natural conditions from vineyards in Australia. The AIS algorithms in (2) were developed from a nature-inspired clonal selection algorithm (CSA), which is able to detect the arcs in the berry images with precision based on a fitness model. The detected arcs are then extended to perform multiple-arc and ring detection for the berry detection application. The performance of the developed algorithms was compared with traditional image processing algorithms like the circular Hough transform (CHT) and other well-known circle detection methods. The proposed AIS approach gave an F-score of 0.71, compared with F-scores of 0.28 and 0.30 for the CHT and a parameter-free circle detection technique (RPCD) respectively.
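
    For reference, the circular Hough transform baseline can be sketched for a known radius: each edge point votes for every candidate center lying that radius away, and the accumulator peak gives the detected center. The synthetic ring of edge points below is invented for illustration:

```python
import numpy as np

def circular_hough(edge_pts, shape, radius, n_angles=72):
    """Fixed-radius circular Hough transform: each edge point votes for all
    centers lying `radius` away; the accumulator peak is the circle center."""
    acc = np.zeros(shape, dtype=int)
    ang = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for x, y in edge_pts:
        cx = np.round(x - radius * np.cos(ang)).astype(int)
        cy = np.round(y - radius * np.sin(ang)).astype(int)
        ok = (cx >= 0) & (cx < shape[1]) & (cy >= 0) & (cy < shape[0])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# synthetic edge points on a circle of radius 8 centered at (x=30, y=20)
t = np.linspace(0, 2 * np.pi, 90, endpoint=False)
edge_pts = np.column_stack([30 + 8 * np.cos(t), 20 + 8 * np.sin(t)])
acc = circular_hough(edge_pts, shape=(40, 50), radius=8)
peak_y, peak_x = np.unravel_index(acc.argmax(), acc.shape)
```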

  8. Using a Smartphone Camera for Nanosatellite Attitude Determination

    NASA Astrophysics Data System (ADS)

    Shimmin, R.

    2014-09-01

    The PhoneSat project at NASA Ames Research Center has repeatedly flown a commercial cellphone in space. As this project continues, additional utility is being extracted from the cell phone hardware to enable more complex missions. The camera in particular shows great potential as an instrument for position and attitude determination, but this requires complex image processing. This paper outlines progress towards that image processing capability. Initial tests on a small collection of sample images have demonstrated the determination of a Moon vector from an image by automatic thresholding and centroiding, allowing the calibration of existing attitude control systems. Work has been undertaken on a further set of sample images towards horizon detection using a variety of techniques including thresholding, edge detection, applying a Hough transform, and circle fitting. Ultimately it is hoped this will allow calculation of an Earth vector for attitude determination and an approximate altitude. A quick discussion of work towards using the camera as a star tracker is then presented, followed by an introduction to further applications of the camera on space missions.

  9. Event reconstruction for the CBM-RICH prototype beamtest data in 2014

    NASA Astrophysics Data System (ADS)

    Adamczewski-Musch, J.; Akishin, P.; Becker, K.-H.; Belogurov, S.; Bendarouach, J.; Boldyreva, N.; Deveaux, C.; Dobyrn, V.; Dürr, M.; Eschke, J.; Förtsch, J.; Heep, J.; Höhne, C.; Kampert, K.-H.; Kochenda, L.; Kopfer, J.; Kravtsov, P.; Kres, I.; Lebedev, S.; Lebedeva, E.; Leonova, E.; Linev, S.; Mahmoud, T.; Michel, J.; Miftakhov, N.; Niebur, W.; Ovcharenko, E.; Patel, V.; Pauly, C.; Pfeifer, D.; Querchfeld, S.; Rautenberg, J.; Reinecke, S.; Riabov, Y.; Roshchin, E.; Samsonov, V.; Schetinin, V.; Tarasenkova, O.; Traxler, M.; Ugur, C.; Vznuzdaev, E.; Vznuzdaev, M.

    2017-12-01

    The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility will investigate the QCD phase diagram at high net baryon densities and moderate temperatures in A+A collisions from 2 to 11 AGeV (SIS100). Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and Transition Radiation Detectors (TRD). A real-size prototype of the RICH detector was tested together with other CBM groups at the CERN PS/T9 beam line in 2014. For the first time the data format used the FLESnet protocol from CBM, delivering free-streaming data. The analysis was performed entirely within the CBMROOT framework. In this contribution the data analysis and event reconstruction methods used for the obtained data are discussed. Rings were reconstructed using an algorithm based on the Hough transform, and their parameters were derived with high accuracy by circle- and ellipse-fitting procedures. We present results of applying these algorithms, in particular comparing results with and without wavelength-shifting (WLS) coating.

  10. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

    The results of a master's thesis project on computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies for the associated magnetic field lines. The project addresses pattern recognition problems in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parameter space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of the coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information for the identification process. Results from both synthesized and solar images will be presented.

  11. Detection of image structures using the Fisher information and the Rao metric.

    PubMed

    Maybank, Stephen J

    2004-12-01

    In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.
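
    For orientation, the quantities involved can be written out; this is the standard textbook formulation (a sketch, not the paper's specific line-model expressions):

```latex
% Fisher information of the measurement density p(z \mid \theta)
I(\theta) = \mathbb{E}\!\left[ \nabla_{\theta}\log p(z\mid\theta)\,
            \nabla_{\theta}\log p(z\mid\theta)^{\mathsf{T}} \right]

% Rao metric: squared distance between neighbouring parameter points
ds^{2} = d\theta^{\mathsf{T}}\, I(\theta)\, d\theta

% Sampling: the number of samples scales as the Rao volume of the
% parameter space divided by the volume of a ball of covering radius r
N \approx \operatorname{Vol}(\Theta) / \operatorname{Vol}(B_{r})
```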

  12. Einstein@Home all-sky search for periodic gravitational waves in LIGO S5 data

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Ast, S.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Bao, Y.; Barayoga, J. C. B.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Beck, D.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Benacquista, M.; Berliner, J. M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bhadbade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bond, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet-Castell, J.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Charlton, P.; Chassande-Mottin, E.; Chen, W.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. 
Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J. A.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colacino, C. N.; Colla, A.; Colombini, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, R. M.; Dahl, K.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Daw, E. J.; Day, R.; Dayanga, T.; De Rosa, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Dent, T.; Dergachev, V.; DeRosa, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Paolo Emilio, M.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorsher, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Endrőczi, G.; Engel, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Farr, B. F.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Foley, S.; Forsi, E.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M. A.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P. J.; Fyffe, M.; Gair, J.; Galimberti, M.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gáspár, M. E.; Gelencser, G.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L. Á.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. 
M.; Griffo, C.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Heefner, J.; Heidmann, A.; Heitmann, H.; Hello, P.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Hough, J.; Howell, E. J.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jang, Y. J.; Jaranowski, P.; Jesse, E.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Keitel, D.; Kelley, D.; Kells, W.; Keppel, D. G.; Keresztes, Z.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, H.; Kim, K.; Kim, N.; Kim, Y. M.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Lam, P. K.; Landry, M.; Langley, A.; Lantz, B.; Lastzka, N.; Lawrie, C.; Lazzarini, A.; Leaci, P.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Leong, J. R.; Leonor, I.; Leroy, N.; Letendre, N.; Lhuillier, V.; Li, J.; Li, T. G. F.; Lindquist, P. E.; Litvine, V.; Liu, Y.; Liu, Z.; Lockerbie, N. A.; Lodhia, D.; Logue, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; Meadors, G. D.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mori, T.; Morriss, S. R.; Mosca, S.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nash, T.; Naticchioni, L.; Necula, V.; Nelson, J.; Neri, I.; Newton, G.; Nguyen, T.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Oldenberg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Palladino, L.; Palomba, C.; Pan, Y.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Persichetti, G.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pihlaja, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Pöld, J.; Postiglione, F.; Poux, C.; Prato, M.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L. 
G.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Ramet, C.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, M.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sankar, S.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Saracco, E.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R. L.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S. E.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Taffarello, L.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Vahlbruch, H.; Vajente, G.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van Veggel, A. 
A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vitale, S.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Wallace, L.; Wan, Y.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Willems, P. A.; Williams, L.; Williams, R.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Wooley, R.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.; Anderson, D. P.

    2013-02-01

    This paper presents results of an all-sky search for periodic gravitational waves in the frequency range [50, 1190] Hz and with a frequency derivative range of approximately [-20, 1.1]×10^-10 Hz s^-1 for the fifth LIGO science run (S5). The search uses a noncoherent Hough-transform method to combine the information from coherent searches on time scales of about one day. Because these searches are very computationally intensive, they have been carried out with the Einstein@Home volunteer distributed computing project. Postprocessing identifies eight candidate signals; deeper follow-up studies rule them out. Since no gravitational-wave signals were found, we report upper limits on the intrinsic gravitational-wave strain amplitude h0. For example, in the 0.5 Hz-wide band at 152.5 Hz, we can exclude the presence of signals with h0 greater than 7.6×10^-25 at a 90% confidence level. This search is about a factor of 3 more sensitive than the previous Einstein@Home search of early S5 LIGO data.
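The noncoherent Hough step described above can be sketched as a simple number count: for each frequency template, count how many coherent segments exceed a power threshold and look for templates with an excess. A minimal illustration on synthetic data — the threshold, segment count, and exponential noise model here are toy assumptions, not the actual Einstein@Home pipeline:

```python
import numpy as np

def hough_number_count(powers, threshold):
    """Noncoherent Hough statistic: for each frequency bin, count the
    number of segments whose normalized power exceeds the threshold."""
    # powers: (n_segments, n_bins) array of normalized spectral power
    return (powers > threshold).sum(axis=0)

rng = np.random.default_rng(0)
n_seg, n_bins = 100, 64
powers = rng.exponential(1.0, size=(n_seg, n_bins))  # noise-only spectra
powers[:, 20] += 3.0                                 # weak "signal" in bin 20
counts = hough_number_count(powers, threshold=2.0)   # peak marks the candidate
```

A real search accumulates counts along frequency-evolution tracks (frequency plus spin-down) rather than fixed bins, but the counting statistic is the same idea.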

  13. Non-intrusive practitioner pupil detection for unmodified microscope oculars.

    PubMed

    Fuhl, Wolfgang; Santini, Thiago; Reichert, Carsten; Claus, Daniel; Herkommer, Alois; Bahmani, Hamed; Rifai, Katharina; Wahl, Siegfried; Kasneci, Enkelejda

    2016-12-01

    Modern microsurgery is a long and complex task requiring the surgeon to handle multiple microscope controls while performing the surgery. Eye tracking provides an additional means of interaction for the surgeon that could be used to alleviate this situation, diminishing surgeon fatigue and surgery time and thus decreasing the risks of infection and human error. In this paper, we introduce a novel algorithm for pupil detection tailored to eye images acquired through an unmodified microscope ocular. The proposed approach, the Hough transform, and six state-of-the-art pupil detection algorithms were evaluated on over 4000 hand-labeled images acquired from a digital operating microscope with an integrated non-intrusive monitoring system for the surgeon's eyes. Our results show that the proposed method reaches detection rates up to 71% for an error of ≈3% w.r.t. the input image diagonal; none of the state-of-the-art pupil detection algorithms performed satisfactorily. The algorithm and hand-labeled data set can be downloaded at: www.ti.uni-tuebingen.de/perception. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Segmentation of optic disc and optic cup in retinal fundus images using shape regression.

    PubMed

    Sedai, Suman; Roy, Pallab K; Mahapatra, Dwarikanath; Garnavi, Rahil

    2016-08-01

    Glaucoma is one of the leading causes of blindness. Manual examination of the optic cup and disc is a standard procedure used for detecting glaucoma. This paper presents a fully automatic regression-based method which accurately segments the optic cup and disc in retinal colour fundus images. First, we roughly segment the optic disc using the circular Hough transform. The approximated optic disc is then used to compute the initial optic disc and cup shapes. We propose a robust and efficient cascaded shape regression method which iteratively learns the final shape of the optic cup and disc from a given initial shape. Gradient boosted regression trees are employed to learn each regressor in the cascade. A novel data augmentation approach is proposed to improve the regressors' performance by generating synthetic training data. The proposed optic cup and disc segmentation method is applied to an image set of 50 patients and demonstrates high segmentation accuracy for the optic cup and disc, with Dice metrics of 0.95 and 0.85, respectively. A comparative study shows that our proposed method outperforms state-of-the-art optic cup and disc segmentation methods.
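The Dice metric quoted above is straightforward to compute from two binary masks; a minimal sketch, where the toy square masks stand in for actual fundus segmentations:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# toy example: two overlapping square "segmentations"
pred = np.zeros((10, 10)); pred[2:8, 2:8] = 1    # 36 pixels
truth = np.zeros((10, 10)); truth[3:9, 3:9] = 1  # 36 pixels, 25 shared
score = dice(pred, truth)                        # 2*25 / (36 + 36)
```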

  15. Crop Row Detection in Maize Fields Inspired on the Human Visual Perception

    PubMed Central

    Romeo, J.; Pajares, G.; Montalvo, M.; Guerrero, J. M.; Guijarro, M.; Ribeiro, A.

    2012-01-01

    This paper proposes a new method, oriented to real-time image processing, for identifying crop rows in images of maize fields. The vision system is designed to be installed onboard a mobile agricultural vehicle, and is therefore subjected to gyros, vibrations, and other undesired movements. The images are captured under perspective projection and are affected by these undesired effects. The image processing consists of two main processes: image segmentation and crop row detection. The first applies a threshold to separate green plants or pixels (crops and weeds) from the rest (soil, stones, and others). It is based on a fuzzy clustering process, which yields the threshold to be applied during normal operation. The crop row detection applies a method based on image perspective projection that searches for maximum accumulations of segmented green pixels along straight alignments, which determine the expected crop lines in the images. The method is robust enough to work under the above-mentioned undesired effects, and it compares favorably with the well-tested Hough transform for line detection. PMID:22623899
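The segmentation step — separating green vegetation pixels before the row search — is commonly done with an excess-green index and a threshold. The sketch below uses a fixed threshold for illustration, whereas the paper learns its threshold by fuzzy clustering; the threshold value and toy pixels are assumptions:

```python
import numpy as np

def excess_green_mask(rgb, threshold=50.0):
    """Greenness segmentation via the excess-green index ExG = 2G - R - B:
    vegetation pixels score high, soil and stones score low or negative."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (2.0 * g - r - b) > threshold

# one vegetation-like pixel and one soil-like pixel
img = np.array([[[60, 200, 50], [120, 100, 90]]], dtype=np.uint8)
mask = excess_green_mask(img)   # True for the plant pixel, False for soil
```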

  16. Pattern recognition applied to infrared images for early alerts in fog

    NASA Astrophysics Data System (ADS)

    Boucher, Vincent; Marchetti, Mario; Dumoulin, Jean; Cord, Aurélien

    2014-09-01

    Fog conditions cause severe car accidents in Western countries because of the poor visibility they induce. Fog onset and intensity are still very difficult for weather services to predict. Infrared cameras can detect and identify objects in fog when visibility is too low for the human eye, and over the past years the implementation of cost-effective infrared cameras on some vehicles has enabled such detection. Pattern recognition algorithms based on Canny filters and the Hough transform are, in turn, common tools applied to images. Based on these facts, a joint research program between IFSTTAR and Cerema has been developed to study the benefit of infrared images obtained in a fog tunnel during its natural dissipation. Pattern recognition algorithms have been applied, specifically to road signs, whose shape is usually associated with a specific meaning (circular for a speed limit, triangular for an alert, …). It has been shown that road signs were detected early enough in the infrared images, with respect to images in the visible spectrum, to trigger useful alerts for Advanced Driver Assistance Systems.

  17. Multi-Patches IRIS Based Person Authentication System Using Particle Swarm Optimization and Fuzzy C-Means Clustering

    NASA Astrophysics Data System (ADS)

    Shekar, B. H.; Bhat, S. S.

    2017-05-01

    Locating the boundary parameters of the pupil and iris and segmenting the noise-free iris portion are the most challenging phases of an automated iris recognition system. In this paper, we present a person authentication framework which uses particle swarm optimization (PSO) to locate the iris region and the circular Hough transform (CHT) to derive the boundary parameters. To mitigate the effect of the noise present in the segmented iris region, we divide the candidate region into N patches and use Fuzzy c-means clustering (FCM) to classify the patches into best iris regions and not-so-best (noisy) regions based on the probability density function of each patch. A weighted mean Hamming distance is adopted to find the dissimilarity score between two candidate irises. We use Log-Gabor, Riesz and Taylor series expansion (TSE) filters, and combinations of these three, for iris feature extraction. To demonstrate the feasibility of the proposed method, we experimented on three publicly available data sets: IITD, MMU v-2 and CASIA v-4 distance.
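The weighted mean Hamming distance used for matching can be sketched directly, with per-patch weights standing in for the FCM patch quality; the bit codes and weight values below are illustrative, not real iris codes:

```python
import numpy as np

def weighted_hamming(code_a, code_b, weights):
    """Weighted mean Hamming distance: each disagreeing bit contributes
    its patch weight; the sum is normalized by the total weight."""
    disagree = code_a != code_b
    return (weights * disagree).sum() / weights.sum()

a = np.array([1, 0, 1, 1])
b = np.array([1, 1, 1, 0])
w = np.array([1.0, 2.0, 1.0, 4.0])   # higher weight = cleaner patch
d = weighted_hamming(a, b, w)        # (2 + 4) / 8 = 0.75
```

Down-weighting the noisy patches lets a disagreement in a clean region count for more than one in an occluded region.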

  18. Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques.

    PubMed

    Aquino, Arturo; Gegundez-Arias, Manuel Emilio; Marin, Diego

    2010-11-01

    Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information; for this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s with a standard deviation of 0.14 s. The segmentation algorithm rendered an average common-area overlap between automated segmentations and true OD regions of 86%, with an average computational time of 5.69 s and a standard deviation of 0.54 s. Moreover, a discussion of the advantages and disadvantages of the models most commonly used for OD segmentation is also presented.
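The Circular Hough Transform step can be sketched as a center-voting accumulator at a candidate radius: each edge pixel votes for every center that would place it on a circle of that radius. This is a from-scratch toy with synthetic edge points; a real pipeline scans a radius range and uses gradient information:

```python
import numpy as np

def circle_hough(points, shape, radius, n_angles=180):
    """Vote for circle centers: each edge point casts votes along a circle
    of the candidate radius around itself; the peak is the detected center."""
    acc = np.zeros(shape, int)
    t = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for y, x in points:
        cy = np.round(y - radius * np.sin(t)).astype(int)
        cx = np.round(x - radius * np.cos(t)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# synthetic "edge map": points on a circle centered at (30, 40), radius 10
angs = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)[::5]
edges = list(zip(30 + 10 * np.sin(angs), 40 + 10 * np.cos(angs)))
acc = circle_hough(edges, (64, 64), 10)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)   # ≈ (30, 40)
```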

  19. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by spectrum cutting. The random matrix of the DFrRT is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the DFrRT. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once: the images are transformed into spectra by the discrete cosine transform, and the spectra are then incised and spliced into a composite spectrum by Zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
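The Zigzag scanning used to order spectral coefficients can be sketched as the familiar JPEG-style traversal of anti-diagonals, which places low-frequency coefficients first. This is a generic sketch, not the authors' exact incision-and-splicing scheme:

```python
def zigzag(mat):
    """Zigzag-scan a 2-D list along anti-diagonals (JPEG-style ordering)."""
    n_rows, n_cols = len(mat), len(mat[0])
    out = []
    for s in range(n_rows + n_cols - 1):
        idx = [(i, s - i) for i in range(n_rows) if 0 <= s - i < n_cols]
        if s % 2 == 0:
            idx.reverse()  # even anti-diagonals are read bottom-left to top-right
        out.extend(mat[i][j] for i, j in idx)
    return out

order = zigzag([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```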

  20. A space-based climatology of diurnal MLT tidal winds, temperatures and densities from UARS wind measurements

    NASA Astrophysics Data System (ADS)

    Svoboda, Aaron A.; Forbes, Jeffrey M.; Miyahara, Saburo

    2005-11-01

    A self-consistent global tidal climatology, useful for comparing and interpreting radar observations from different locations around the globe, is created from space-based Upper Atmosphere Research Satellite (UARS) horizontal wind measurements. The climatology created includes tidal structures for horizontal winds, temperature and relative density, and is constructed by fitting local (in latitude and height) UARS wind data at 95 km to a set of basis functions called Hough mode extensions (HMEs). These basis functions are numerically computed modifications to Hough modes and are globally self-consistent in wind, temperature, and density. We first demonstrate this self-consistency with a proxy data set from the Kyushu University General Circulation Model, and then use a linear weighted superposition of the HMEs obtained from monthly fits to the UARS data to extrapolate the global, multi-variable tidal structure. A brief explanation of the HMEs’ origin is provided as well as information about a public website that has been set up to make the full extrapolated data sets available.
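Fitting local wind data with a linear weighted superposition of basis functions, as done here with the Hough mode extensions, reduces to a weighted least-squares problem. A generic sketch — the trivial polynomial basis and synthetic data below stand in for actual HMEs and UARS measurements:

```python
import numpy as np

def fit_basis(B, d, w):
    """Weighted least squares: coefficients c minimizing ||sqrt(w) * (B c - d)||,
    where the columns of B are the basis functions sampled at the data points."""
    ws = np.sqrt(w)
    c, *_ = np.linalg.lstsq(ws[:, None] * B, ws * d, rcond=None)
    return c

x = np.linspace(-1.0, 1.0, 40)
B = np.column_stack([np.ones_like(x), x])   # two toy "modes"
d = 2.0 + 3.0 * x                           # synthetic observations
c = fit_basis(B, d, np.ones_like(x))        # recovers the mode amplitudes
```

Once the coefficients are known, the same superposition evaluated on the full basis extrapolates the field to latitudes, heights, and variables not directly observed — the essence of the HME approach.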

  1. Attenuation in western Nevada: Preliminary results from earthquake and explosion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hough, S.E.; Anderson, J.G.; Patton, H.J.

    1989-02-01

    We present preliminary results from a study of the attenuation of regional seismic waves at frequencies between 1 and 15 Hz and distances up to 250 km in western Nevada. Following the methods of Anderson and Hough (1984) and Hough et al. (1988), we parameterize the asymptote of the high-frequency acceleration spectrum by the two-parameter model. We relate the model parameters to a two-layer model for Q_i and Q_d, the frequency-independent and frequency-dependent components of the quality factor. We compare our results to previously published Q studies in the Basin and Range and find that our estimate of total Q, Q_t, in the shallow crust is consistent with shear-wave Q at close distances and with previous estimates of coda Q (Singh and Hermann, 1983) and Lg Q (Chavez and Priestley, 1986), suggesting that both coda Q and Lg Q are insensitive to near-surface contributions to attenuation.
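In the Anderson and Hough (1984) parameterization, the high-frequency acceleration spectrum decays as a(f) ≈ A0 exp(-π κ f) above a corner frequency, so the spectral decay parameter κ follows from the slope of ln a(f) versus f. A minimal sketch on synthetic data (the frequency band and κ value are made up):

```python
import numpy as np

def estimate_kappa(freqs, accel_spectrum):
    """Fit ln a(f) = ln A0 - pi * kappa * f and return the decay parameter kappa."""
    slope, _ = np.polyfit(freqs, np.log(accel_spectrum), 1)
    return -slope / np.pi

f = np.linspace(2.0, 15.0, 60)              # high-frequency band, Hz
a = 1.5 * np.exp(-np.pi * 0.04 * f)         # synthetic spectrum, kappa = 0.04 s
kappa = estimate_kappa(f, a)
```

Mapping κ measured at different distances onto layered Q models is the additional step the study describes.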

  2. A fast image matching algorithm based on key points

    NASA Astrophysics Data System (ADS)

    Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng

    2014-05-01

    Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of a matching algorithm for craft navigation, such as speed, accuracy and adaptability, a fast key point image matching method is investigated and developed. The main research tasks include: (1) developing an improved fast key point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating a self-adapting threshold was introduced for images with different contrast, and the Hessian matrix was adopted to eliminate unstable edge points in order to obtain key points with higher stability. This approach to detecting key points requires a small amount of computation and offers high positioning accuracy and strong anti-noise ability; (2) utilizing PCA-SIFT to describe the key points. 128-dimensional vectors are formed by the SIFT method for the extracted key points. A low-dimensional feature space is established from the eigenvectors of all the key points, and each eigenvector is projected onto this space to form a low-dimensional eigenvector, so the key points are re-described by dimension-reduced eigenvectors. After the PCA step, the descriptor is reduced from the original 128 dimensions to 20, which shrinks the approximate nearest-neighbor search and thereby increases overall speed; (3) using the distance ratio between the nearest and second-nearest neighbors as the measurement criterion for initial matching points, from which the original matched point pairs are obtained. Based on an analysis of the common methods for eliminating false matching point pairs (e.g. RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to further discard false matched point pairs; and (4) introducing an affine transformation model to correct the coordinate difference between the real-time image and the reference image, which yields the matching of the two images. SPOT5 remote sensing images captured on different dates and airborne images captured with different flight attitudes were used to test the performance of the method in terms of matching accuracy, running time and robustness to rotation. Results show the effectiveness of the approach.
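The distance-ratio criterion for initial matching (step 3) can be sketched directly: a candidate match is kept only when its nearest neighbor is markedly closer than the second-nearest. The toy 2-D descriptors and the 0.8 ratio below are illustrative, not the paper's 20-D PCA-SIFT descriptors:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Accept a match only if the nearest neighbor in desc_b is closer than
    `ratio` times the second-nearest neighbor (distance-ratio criterion)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

ref = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
good = ratio_test_matches(np.array([[0.1, 0.0]]), ref)  # unambiguous -> kept
bad = ratio_test_matches(np.array([[5.0, 0.0]]), ref)   # ambiguous -> rejected
```

Ambiguous descriptors, nearly equidistant from two references, fail the test; these are exactly the matches most likely to be wrong.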

  3. Surface registration technique for close-range mapping applications

    NASA Astrophysics Data System (ADS)

    Habib, Ayman F.; Cheng, Rita W. T.

    2006-08-01

    Close-range mapping applications such as cultural heritage restoration, virtual reality modeling for the entertainment industry, and anatomical feature recognition for medical activities require 3D data that is usually acquired by high-resolution close-range laser scanners. Since these datasets are typically captured from different viewpoints and/or at different times, accurate registration is a crucial procedure for 3D modeling of the mapped objects. Several registration techniques are available that work directly with the raw laser points or with features extracted from the point cloud; examples include the commonly known Iterative Closest Point (ICP) algorithm and a recently proposed technique based on matching spin-images. This research focuses on developing a surface matching algorithm based on the Modified Iterated Hough Transform (MIHT) and ICP to register 3D data. The proposed algorithm works directly with the raw 3D laser points and does not assume point-to-point correspondence between two laser scans; it can simultaneously establish correspondence between two surfaces and estimate the transformation parameters relating them. An experiment with two partially overlapping laser scans of a small object shows successful registration, with a high quality of fit between the two scans and an improvement over the results obtained using the spin-image technique. The results demonstrate the feasibility of the proposed algorithm for registering 3D laser scanning data in close-range mapping applications, helping with the generation of complete 3D models.
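The ICP refinement stage can be sketched in a few lines: alternate nearest-neighbor correspondence with a closed-form rigid fit (SVD/Procrustes). This is a generic 2-D toy on a synthetic grid, not the MIHT initialization or 3-D laser data described in the paper:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form rotation R and translation t minimizing ||R P + t - Q||."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    """Minimal ICP: re-estimate nearest-neighbor correspondences and the
    rigid transform in turn, warping P toward Q."""
    for _ in range(iters):
        nn = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1).argmin(1)
        R, t = best_rigid_transform(P, Q[nn])
        P = P @ R.T + t
    return P

# toy "scans": a grid, and the same grid slightly rotated and shifted
Q = 3.0 * np.array([[i, j] for i in range(5) for j in range(5)], float)
th = 0.05
R0 = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
P = (Q - Q.mean(0)) @ R0.T + Q.mean(0) + np.array([0.2, -0.1])
aligned = icp(P, Q)   # converges back onto Q
```

Like plain ICP, this converges only from a reasonable initial pose, which is precisely what the MIHT stage provides in the paper.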

  4. Mobile-based text recognition from water quality devices

    NASA Astrophysics Data System (ADS)

    Dhakal, Shanti; Rahnemoonfar, Maryam

    2015-03-01

    Measuring the water quality of bays, estuaries, and gulfs is a complicated and time-consuming process. The YSI Sonde is an instrument used to measure water quality parameters such as pH, temperature, salinity, and dissolved oxygen. The instrument is taken to water bodies on a boat trip, and researchers note down the different parameters shown on its display monitor. In this project, a mobile application is developed for the Android platform that allows a user to take a picture of the YSI Sonde monitor, extract text from the image and store it in a file on the phone. The image captured by the application is first processed to remove perspective distortion. The probabilistic Hough line transform is used to identify lines in the image, and the corners of the image are then obtained by determining the intersections of the detected horizontal and vertical lines. The image is warped using the perspective transformation matrix obtained from the corner points of the source image and the destination image, hence removing the perspective distortion. The mathematical morphology black-hat operation is used to correct the shading of the image. The image is binarized using Otsu's binarization technique and is then passed to Optical Character Recognition (OCR) software for character recognition. The extracted information is stored in a file on the phone and can be retrieved later for analysis. The algorithm was tested on 60 different images of the YSI Sonde with different perspective features and shading. Experimental results, in comparison to ground-truth results, demonstrate the effectiveness of the proposed method.
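The perspective-removal step — mapping the four detected corners onto a rectangle — amounts to solving a 3×3 homography from four point correspondences. A minimal direct-linear-transform sketch; the corner coordinates are made up, and a production app would typically call a library routine such as OpenCV's getPerspectiveTransform instead:

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: the 3x3 perspective matrix H with dst ~ H src,
    solved from four point pairs via the SVD null space."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, p):
    """Map a point through H in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

corners = [(12.0, 9.0), (305.0, 22.0), (298.0, 200.0), (5.0, 185.0)]  # detected quad
target = [(0.0, 0.0), (320.0, 0.0), (320.0, 240.0), (0.0, 240.0)]     # rectified
H = homography(corners, target)
```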

  5. Color image encryption based on gyrator transform and Arnold transform

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Gao, Bo

    2013-06-01

    A color image encryption scheme using gyrator transform and Arnold transform is proposed, which has two security levels. In the first level, the color image is separated into three components: red, green and blue, which are normalized and scrambled using the Arnold transform. The green component is combined with the first random phase mask and transformed to an interim using the gyrator transform. The first random phase mask is generated with the sum of the blue component and a logistic map. Similarly, the red component is combined with the second random phase mask and transformed to three-channel-related data. The second random phase mask is generated with the sum of the phase of the interim and an asymmetrical tent map. In the second level, the three-channel-related data are scrambled again and combined with the third random phase mask generated with the sum of the previous chaotic maps, and then encrypted into a gray scale ciphertext. The encryption result has stationary white noise distribution and camouflage property to some extent. In the process of encryption and decryption, the rotation angle of gyrator transform, the iterative numbers of Arnold transform, the parameters of the chaotic map and generated accompanied phase function serve as encryption keys, and hence enhance the security of the system. Simulation results and security analysis are presented to confirm the security, validity and feasibility of the proposed scheme.
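The Arnold transform used here for scrambling is the area-preserving cat map applied to pixel coordinates; iterating it shuffles the image, and because the map is periodic the correct iteration count restores it. A minimal sketch — the 2×2 toy image and iteration counts are illustrative only:

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat map on an N x N image: pixel (x, y) moves to
    ((x + y) mod N, (x + 2y) mod N) each iteration."""
    n = img.shape[0]
    out = img.copy()
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    for _ in range(iterations):
        scr = np.empty_like(out)
        scr[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scr
    return out

img = np.array([[1, 2], [3, 4]])
scrambled = arnold(img, 1)
restored = arnold(img, 3)   # the 2x2 cat map has period 3
```

The period depends on N, which is one reason the iteration count can serve as part of the key.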

  6. Intelligent form removal with character stroke preservation

    NASA Astrophysics Data System (ADS)

    Garris, Michael D.

    1996-03-01

    A new technique for intelligent form removal has been developed along with a new method for evaluating its impact on optical character recognition (OCR). All the dominant lines in the image are automatically detected using the Hough line transform and intelligently erased while simultaneously preserving overlapping character strokes, by computing line width statistics and keying off certain visual cues. This new method of form removal operates on loosely defined zones with no image deskewing; any field in which the writer is provided a horizontal line to enter a response can be processed by this method. Several examples of processed fields are provided, including a comparison of results between the new method and a commercially available form removal package. Even if this new form removal method did not improve character recognition accuracy, it would still be a significant improvement to the technology because the requirement of a priori knowledge of the form's geometric details has been greatly reduced. This relaxes the recognition system's dependence on rigid form design, printing, and reproduction by automatically detecting and removing some of the physical structures (lines) on the form. Using the National Institute of Standards and Technology (NIST) public domain form-based handprint recognition system, the technique was tested on a large number of fields containing randomly ordered handprinted lowercase alphabets, as these letters (especially those with descenders) frequently touch and extend through the line along which they are written. Preserving character strokes improves overall lowercase recognition performance by 3%, which is a net improvement, but a single performance number like this does not convey how the recognition process was really influenced. Trade-offs are expected with the introduction of any new technique into a complex recognition system.
To understand both the improvements and the trade-offs, a new analysis was designed to compare the statistical distributions of individual confusion pairs between two systems. As OCR technology continues to improve, sophisticated analyses like this are necessary to reduce the errors remaining in complex recognition problems.
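Detecting the dominant form lines with the Hough line transform can be sketched with the standard ρ–θ accumulator, where each pixel votes for every line ρ = x·cos θ + y·sin θ passing through it. A from-scratch toy on a synthetic horizontal line, not NIST's implementation:

```python
import numpy as np

def hough_lines(points, rho_max, n_rho=121, n_theta=180):
    """rho-theta Hough accumulator for lines rho = x cos(theta) + y sin(theta);
    a peak cell identifies a dominant line's (rho, theta)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), int)
    cols = np.arange(n_theta)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        r = np.round((rho + rho_max) * (n_rho - 1) / (2.0 * rho_max)).astype(int)
        ok = (r >= 0) & (r < n_rho)
        np.add.at(acc, (r[ok], cols[ok]), 1)
    return acc, thetas

# a horizontal "form line" y = 5 sampled at 21 pixels
pts = [(float(x), 5.0) for x in range(21)]
acc, thetas = hough_lines(pts, rho_max=30.0)
r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)  # theta = pi/2, rho = 5
```

After the peak is found, the stroke-preservation logic decides, pixel by pixel along the recovered line, what is line and what is an overlapping character stroke.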

  7. RESTORING NATURE IN THE CITY: PUGET SOUND EXPERIENCES. (R825284)

    EPA Science Inventory

    Restoring nature within American urban areas seems basic to sustainability both in theory (Hough, 1995) and in practice (Sustainable Seattle, 1993). In addition to applicable science, restoration of urban green areas requires two com...

  8. The identity of Calliphora bezzii Zumpt, 1956 (Diptera, Calliphoridae).

    PubMed

    Rognes, Knut

    2016-09-26

    The holotype male of a nominal species described from Italy, Calliphora bezzii Zumpt, 1956, including a microscope slide of its terminalia, was examined. The holotype is shown to belong to the Nearctic taxon Calliphora latifrons Hough, 1899. Thus, Calliphora bezzii is a junior synonym of C. latifrons, syn. nov.

  9. Personality, Political Skill, and Job Performance

    ERIC Educational Resources Information Center

    Blickle, Gerhard; Meurs, James A.; Zettler, Ingo; Solga, Jutta; Noethen, Daniela; Kramer, Jochen; Ferris, Gerald R.

    2008-01-01

    Based on the socioanalytic perspective of performance prediction [Hogan, R. (1991). Personality and personality assessment. In M. D. Dunnette, L. Hough, (Eds.), "Handbook of industrial and organizational psychology" (2nd ed., pp. 873-919). Chicago: Rand McNally; Hogan, R., & Shelton, D. (1998). A socioanalytic perspective on job performance.…

  10. Three United States Army Manhunts: Insights From the Past

    DTIC Science & Technology

    2004-06-17

    raid ( Toulmin 1935, 85-88). Another source of guides and information were the Americans living in the state of Chihuahua, many of whom worked for...Harrisburg: The Military Service Publishing Company. Toulmin , H. A. 1935. With Pershing in Mexico. With a foreword by Benson W. Hough. Harrisburg: The

  11. The Value of Action Research in Middle Grades Education

    ERIC Educational Resources Information Center

    Caskey, Micki M.

    2006-01-01

    Action research is one of the relevant methodologies for addressing research questions and issues in middle grades education. Accounting for nearly 20% of published middle grades research studies (Hough, 2003), action research has emerged as an important and appropriate research method. In addition to reviewing the historical context, this article…

  12. Admiral Raymond A. Spruance: Lessons in Adaptation from the Pacific

    DTIC Science & Technology

    2010-04-30

    Association of the Class of 1907, 342. 54 Hough, 237. 55 Vlahos , Michael. The Blue Sword: The Naval War College and the American Mission, 1919-1941...Warfare: Theory and Practice. Newport, RI: U.S. Naval War College, 2009. Vlahos , Michael. The Blue Sword: The Naval War College and the American

  13. Arms Control and the Strategic Defense Initiative: Three Perspectives. Occasional Paper 36.

    ERIC Educational Resources Information Center

    Hough, Jerry F.; And Others

    Three perspectives on President Ronald Reagan's Strategic Defense Initiative (SDI), which is intended to defend U.S. targets from a Soviet nuclear attack, are presented in separate sections. In the first section, "Soviet Interpretation and Response," Jerry F. Hough examines possible reasons for Soviet preoccupation with SDI. He discusses…

  14. Hazardous sign detection for safety applications in traffic monitoring

    NASA Astrophysics Data System (ADS)

    Benesova, Wanda; Kottman, Michal; Sidla, Oliver

    2012-01-01

    The transportation of hazardous goods on public street systems can pose severe safety threats in case of accidents. One solution to these problems is automatic detection and registration of vehicles which are marked with dangerous goods signs. We present a prototype system which can detect a trained set of signs in high-resolution images under real-world conditions. This paper compares two different methods for the detection: the bag of visual words (BoW) procedure and our approach, pairs of visual words with Hough voting. The results of an extended series of experiments are provided in this paper. The experiments show that the size of the visual vocabulary is crucial and can significantly affect the recognition success rate, and different codebook sizes have been evaluated for this detection task. The best result of the first method, BoW, was 67% of hazardous signs successfully recognized, whereas the second method proposed in this paper, pairs of visual words with Hough voting, reached 94% correctly detected signs. The experiments are designed to verify the usability of the two proposed approaches in a real-world scenario.

  15. Motion Tracking and Identification of Unrestraint Gait Rehabilitation by Use of Elderly Support Robot

    NASA Astrophysics Data System (ADS)

    Nokata, Makoto; Hirai, Wataru; Itatani, Ryosuke

    This paper presents a robotic training system that can exercise the user without bodily restraint; neither markers nor sensors are attached to the trainee. We developed a robot system with four mounted components: a laser sensor, a camera, a cushion, and an electric motor. This paper shows the method used for determining whether the trainee was bending forward or backward while walking, and the extent of the tilt, using the recorded image of the back of the trainee's head. A characteristic of our software algorithm is that the image is divided into 9 quadrants, and each quadrant undergoes a Hough transformation. We verified experimentally that, using our algorithms on the four patterns of walking forward, backward, diagonally, and crouching, the tilt of the trainee's body is accurately determined. We created a flowchart for determining the direction of movement according to the experimental results. By adjusting the values used to make the distinction according to the position and angle of the camera and the width of the back of the trainee's head, we were able to accurately determine the walking condition of the trainee and achieve early detection of the start of a fall.

  16. Real-time polarization imaging algorithm for camera-based polarization navigation sensors.

    PubMed

    Lu, Hao; Zhao, Kaichun; You, Zheng; Huang, Kaoli

    2017-04-10

    Biologically inspired polarization navigation is a promising approach due to its autonomous nature, high precision, and robustness. Many researchers have built point-source-based and camera-based polarization navigation prototypes in recent years. Camera-based prototypes benefit from their high spatial resolution but incur a heavy computation load, since the pattern recognition step in most polarization imaging algorithms involves several nonlinear calculations. In this paper, the polarization imaging and pattern recognition algorithms are optimized by reducing them to several linear calculations, exploiting the orthogonality of the Stokes parameters without affecting precision, according to the features of the solar meridian and the patterns of the polarized skylight. The algorithm contains a pattern recognition algorithm with a Hough transform as well as orientation measurement algorithms. The algorithm was loaded and run on a digital signal processing system to test its computational complexity; the test showed that the running time decreased from several thousand milliseconds to several tens of milliseconds. Through simulations and experiments, it was found that the algorithm can measure orientation without reduced precision, and it can hence satisfy the practical demands of low computational load and high precision for use in embedded systems.
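The linear reduction exploited above rests on a standard fact: the linear Stokes parameters, and hence the angle of polarization that tracks the solar meridian, follow from simple differences of intensities measured through polarizers at 0°, 45°, 90°, and 135°. A generic textbook sketch, not the authors' DSP implementation:

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters Q, U from polarizer intensities at
    0, 45, 90, and 135 degrees."""
    return i0 - i90, i45 - i135

def angle_of_polarization(q, u):
    """Orientation of the polarization plane, in radians."""
    return 0.5 * np.arctan2(u, q)

# fully polarized light at angle a: Malus's law gives I(theta) = cos^2(theta - a)
a = 0.3
ints = [np.cos(t - a) ** 2 for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
q, u = linear_stokes(*ints)
aop = angle_of_polarization(q, u)   # recovers a
```

Only the final two-argument arctangent is nonlinear, which is why restructuring the pipeline around Q and U removes almost all of the per-pixel nonlinear work.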

  17. Observing Bridge Dynamic Deflection in Green Time by Information Technology

    NASA Astrophysics Data System (ADS)

    Yu, Chengxin; Zhang, Guojian; Zhao, Yongqian; Chen, Mingzhi

    2018-01-01

    As traditional surveying methods are limited in observing bridge dynamic deflection, information technology is adopted to observe bridge dynamic deflection during green time. Information technology here means that digital cameras photograph the bridge during red time to obtain a zero image; a series of successive images is then photographed during green time. Deformation point targets are identified and located by the Hough transform. With reference to the control points, the deformation values of these deformation points are obtained by differencing each successive image with the zero image. Results show that the average measurement accuracies of C0 are 0.46 pixels, 0.51 pixels and 0.74 pixels in the X, Z and comprehensive directions, and those of C1 are 0.43 pixels, 0.43 pixels and 0.67 pixels, respectively, in these tests. The maximal bridge deflection is 44.16 mm, which is less than the 75 mm bridge deflection tolerance value. This approach can monitor bridge dynamic deflection and depict deflection trend curves of the bridge in real time, providing data support for on-site decisions on bridge structural safety.
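    The differencing idea above (a zero image photographed in red time versus successive green-time images) can be sketched as follows. For brevity the target is located by a simple intensity centroid rather than the paper's Hough-based detection, and `mm_per_pixel` is an assumed calibration value:

    ```python
    import numpy as np

    def target_centroid(img, thresh=0.5):
        """Centroid of bright target pixels; a stand-in for the paper's
        Hough-based target localization (hypothetical simplification)."""
        ys, xs = np.nonzero(img > thresh)
        return xs.mean(), ys.mean()

    def deflection(zero_img, green_img, mm_per_pixel=1.0):
        """Deflection = target position in the green-time image minus its
        position in the zero image, scaled to millimetres."""
        x0, y0 = target_centroid(zero_img)
        x1, y1 = target_centroid(green_img)
        return (x1 - x0) * mm_per_pixel, (y1 - y0) * mm_per_pixel

    # Toy target: a bright 3x3 blob that sags 4 pixels between exposures.
    zero = np.zeros((50, 50)); zero[10:13, 20:23] = 1.0
    green = np.zeros((50, 50)); green[14:17, 20:23] = 1.0
    dx, dz = deflection(zero, green, mm_per_pixel=2.0)
    # dz = 4 px * 2 mm/px = 8 mm of vertical deflection
    ```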

  18. Development of a novel constellation based landmark detection algorithm

    NASA Astrophysics Data System (ADS)

    Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.

    2013-03-01

    Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. This method is made robust to large rotations in initial head orientation by extracting extra information of the eye centers using a radial Hough transform and exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate as the average error between the automatically and manually labeled landmark points is less than 1 mm. Also, the algorithm is highly robust as it was successfully run on a large dataset that included different kinds of images with various orientation, spacing, and origin.

  19. Automatic screening and classification of diabetic retinopathy and maculopathy using fuzzy image processing.

    PubMed

    Rahim, Sarni Suhaila; Palade, Vasile; Shuttleworth, James; Jayne, Chrisina

    2016-12-01

    Digital retinal imaging is a challenging screening method for which effective, robust and cost-effective approaches are still to be developed. Regular screening for diabetic retinopathy and diabetic maculopathy diseases is necessary in order to identify the group at risk of visual impairment. This paper presents a novel automatic detection of diabetic retinopathy and maculopathy in eye fundus images by employing fuzzy image processing techniques. The paper first introduces the existing systems for diabetic retinopathy screening, with an emphasis on the maculopathy detection methods. The proposed medical decision support system consists of four parts, namely: image acquisition, image preprocessing including four retinal structures localisation, feature extraction and the classification of diabetic retinopathy and maculopathy. A combination of fuzzy image processing techniques, the Circular Hough Transform and several feature extraction methods are implemented in the proposed system. The paper also presents a novel technique for the macula region localisation in order to detect the maculopathy. In addition to the proposed detection system, the paper highlights a novel online dataset and presents the dataset collection, the expert diagnosis process and the advantages of our online database compared to other public eye fundus image databases for diabetic retinopathy purposes.
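    The Circular Hough Transform used above to localise circular retinal structures votes for candidate centres along a circle around each edge point. A minimal fixed-radius sketch (a generic illustration, not the authors' implementation):

    ```python
    import numpy as np

    def hough_circle_center(edge_pts, shape, radius):
        """Vote for circle centres at a fixed radius (minimal CHT sketch)."""
        h, w = shape
        acc = np.zeros((h, w), dtype=np.int32)
        angles = np.linspace(0, 2 * np.pi, 90, endpoint=False)
        for x, y in edge_pts:
            # Each edge point votes on a circle of candidate centres around it.
            cx = np.round(x - radius * np.cos(angles)).astype(int)
            cy = np.round(y - radius * np.sin(angles)).astype(int)
            ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
            np.add.at(acc, (cy[ok], cx[ok]), 1)
        cy, cx = np.unravel_index(acc.argmax(), acc.shape)
        return cx, cy

    # Synthetic circular edge centred at (40, 35) with radius 12.
    t = np.linspace(0, 2 * np.pi, 120, endpoint=False)
    edges = list(zip(40 + 12 * np.cos(t), 35 + 12 * np.sin(t)))
    center = hough_circle_center(edges, (80, 80), radius=12)
    # accumulator peak falls at (or next to) the true centre (40, 35)
    ```

    In practice the radius is also unknown, so the accumulator gains a third dimension (one centre plane per candidate radius), which is why CHT-based iris segmentation is comparatively expensive.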

  20. Rear-end vision-based collision detection system for motorcyclists

    NASA Astrophysics Data System (ADS)

    Muzammel, Muhammad; Yusoff, Mohd Zuki; Meriaudeau, Fabrice

    2017-05-01

    In many countries, the motorcyclist fatality rate is much higher than that of other vehicle drivers. Among many other factors, motorcycle rear-end collisions also contribute to these rider fatalities. To increase the safety of motorcyclists and minimize their road fatalities, this paper introduces a vision-based rear-end collision detection system. The binary road detection scheme contributes significantly to reducing false detections and helps to achieve reliable results even when shadows and different lane markers are present on the road. The methodology is based on Harris corner detection and the Hough transform. To validate the methodology, two types of dataset are used: (1) self-recorded datasets (obtained by placing a camera at the rear end of a motorcycle) and (2) online datasets (recorded by placing a camera at the front of a car). The method achieved 95.1% accuracy on the self-recorded dataset and gives reliable results for rear-end vehicle detection under different road scenarios; it also performs well on the online car datasets. The proposed technique's high detection accuracy using a monocular vision camera, coupled with its low computational complexity, makes it a suitable candidate for a motorcycle rear-end collision detection system.
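    The Harris corner detection step named above scores each pixel by det(M) − k·trace(M)², where M is the local structure tensor built from image gradients. A toy sketch, assuming a simple box filter in place of the usual Gaussian window:

    ```python
    import numpy as np

    def harris_response(img, k=0.04):
        """Harris corner response from image gradients (minimal sketch)."""
        gy, gx = np.gradient(img.astype(float))

        def box(a, r=1):
            # Box filter of radius r, standing in for a Gaussian window.
            out = np.zeros_like(a)
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    out += np.roll(np.roll(a, dy, 0), dx, 1)
            return out / (2 * r + 1) ** 2

        ixx, iyy, ixy = box(gx * gx), box(gy * gy), box(gx * gy)
        det = ixx * iyy - ixy ** 2
        trace = ixx + iyy
        return det - k * trace ** 2

    # A bright square on a dark background: corners give the strongest response.
    img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
    resp = harris_response(img)
    y, x = np.unravel_index(resp.argmax(), resp.shape)
    # peak lies at/near one of the four square corners
    ```

    Edges score near zero (one dominant eigenvalue), flat regions score zero, and only true corners (two large eigenvalues) score strongly positive.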

  1. Automatic segmentation of coronary arteries from computed tomography angiography data cloud using optimal thresholding

    NASA Astrophysics Data System (ADS)

    Ansari, Muhammad Ahsan; Zai, Sammer; Moon, Young Shik

    2017-01-01

    Manual analysis of the bulk data generated by computed tomography angiography (CTA) is time consuming, and interpretation of such data requires previous knowledge and expertise of the radiologist. Therefore, an automatic method that can isolate the coronary arteries from a given CTA dataset is required. We present an automatic yet effective segmentation method to delineate the coronary arteries from a three-dimensional CTA data cloud. Instead of a region growing process, which is usually time consuming and prone to leakages, the method is based on the optimal thresholding, which is applied globally on the Hessian-based vesselness measure in a localized way (slice by slice) to track the coronaries carefully to their distal ends. Moreover, to make the process automatic, we detect the aorta using the Hough transform technique. The proposed segmentation method is independent of the starting point to initiate its process and is fast in the sense that coronary arteries are obtained without any preprocessing or postprocessing steps. We used 12 real clinical datasets to show the efficiency and accuracy of the presented method. Experimental results reveal that the proposed method achieves 95% average accuracy.

  2. Estimation of Bridge Height over Water from Polarimetric SAR Image Data Using Mapping and Projection Algorithm and De-Orientation Theory

    NASA Astrophysics Data System (ADS)

    Wang, Haipeng; Xu, Feng; Jin, Ya-Qiu; Ouchi, Kazuo

    An inversion method for bridge height over water from polarimetric synthetic aperture radar (SAR) is developed. A geometric ray description illustrating the scattering mechanism of a bridge over a water surface is identified by polarimetric image analysis. Using the mapping and projecting algorithm, a polarimetric SAR image of a bridge model is first simulated and shows that scattering from a bridge over water can be identified by three strip lines corresponding to single-, double-, and triple-order scattering, respectively. A set of polarimetric parameters based on the de-orientation theory is applied to the analysis of the three types of scattering, and the thinning-clustering algorithm and Hough transform are then employed to locate the image positions of these strip lines. These lines are used to invert the bridge height. Fully polarimetric image data from the airborne Pi-SAR at X-band are applied to inversion of the height and width of the Naruto Bridge in Japan. Based on the same principle, this approach is also applicable to spaceborne ALOS-PALSAR single-polarization data of the Eastern Ocean Bridge in China. The results demonstrate the feasibility of bridge height inversion.

  3. Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    PubMed Central

    García-Garrido, Miguel A.; Ocaña, Manuel; Llorca, David F.; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle, which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method, applied to information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms, which allows real-time performance. PMID:22438704

  5. Robust Spacecraft Component Detection in Point Clouds.

    PubMed

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives such as planes, cuboids and cylinders. Based on this prior, we propose a robust automatic detection scheme to detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme detects the basic geometric components effectively and is robust to noise and to variations in point distribution density.
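    Hough-based plane detection in a point cloud, as in the second stage above, discretizes the plane's normal direction and signed distance and lets each point vote. A minimal sketch (the parameterization and bin counts are illustrative, not the authors' values):

    ```python
    import numpy as np

    def hough_plane(points, n_dir=20, n_d=60, d_max=10.0):
        """Minimal Hough voting for a dominant plane n . p = d in a point cloud."""
        thetas = np.linspace(0, np.pi, n_dir)   # polar angle of the normal
        phis = np.linspace(0, np.pi, n_dir)     # azimuth of the normal
        acc = np.zeros((n_dir, n_dir, n_d), dtype=np.int32)
        for it, th in enumerate(thetas):
            for ip, ph in enumerate(phis):
                n = np.array([np.sin(th) * np.cos(ph),
                              np.sin(th) * np.sin(ph),
                              np.cos(th)])
                d = points @ n                         # signed distance per point
                idx = np.round((d + d_max) / (2 * d_max) * (n_d - 1)).astype(int)
                ok = (idx >= 0) & (idx < n_d)
                np.add.at(acc[it, ip], idx[ok], 1)
        it, ip, idd = np.unravel_index(acc.argmax(), acc.shape)
        return thetas[it], phis[ip], -d_max + idd * 2 * d_max / (n_d - 1)

    # Noisy samples of the horizontal plane z = 3.
    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.uniform(-5, 5, 300),
                           rng.uniform(-5, 5, 300),
                           3 + rng.normal(0, 0.02, 300)])
    theta, phi, d = hough_plane(pts)
    # recovered normal ~ (0, 0, 1) (theta ~ 0) at distance d ~ 3
    ```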

  6. Energy-weighted dynamical scattering simulations of electron diffraction modalities in the scanning electron microscope.

    PubMed

    Pascal, Elena; Singh, Saransh; Callahan, Patrick G; Hourahine, Ben; Trager-Cowan, Carol; Graef, Marc De

    2018-04-01

    Transmission Kikuchi diffraction (TKD) has been gaining momentum as a high resolution alternative to electron back-scattered diffraction (EBSD), adding to the existing electron diffraction modalities in the scanning electron microscope (SEM). The image simulation of any of these measurement techniques requires an energy dependent diffraction model, which in turn requires knowledge of the distributions of electron energies and diffraction distances. We identify the sample-detector geometry and the effect of inelastic events on the diffracting electron beam as the important factors to be considered when predicting these distributions. However, tractable models taking into account inelastic scattering explicitly are lacking. In this study, we expand the Monte Carlo (MC) energy-weighting dynamical simulation models used for EBSD [1] and ECP [2] to the TKD case. We show that the foil thickness in TKD can be used as a means of energy filtering and compare band sharpness in the different modalities. The current model is shown to correctly predict TKD patterns and, through the dictionary indexing approach, to produce higher quality indexed TKD maps than the conventional Hough transform approach, especially close to grain boundaries. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Computer-Aided Diagnosis of Anterior Segment Eye Abnormalities using Visible Wavelength Image Analysis Based Machine Learning.

    PubMed

    S V, Mahesh Kumar; R, Gunasundari

    2018-06-02

    Eye disease is a major health problem among elderly people. Cataract and corneal arcus are the major abnormalities in the anterior segment eye region of aged people. Hence, computer-aided diagnosis of anterior segment eye abnormalities will be helpful for mass screening and grading in ophthalmology. In this paper, we propose a multiclass computer-aided diagnosis (CAD) system using visible wavelength (VW) eye images to diagnose anterior segment eye abnormalities. In the proposed method, the input VW eye images are pre-processed for specular reflection removal and the iris circle region is segmented using a circular Hough Transform (CHT)-based approach. First-order statistical features and wavelet-based features are extracted from the segmented iris circle and used for classification. A Support Vector Machine (SVM) trained with the Sequential Minimal Optimization (SMO) algorithm was used for the classification. In experiments, we used 228 VW eye images belonging to three different classes of anterior segment eye abnormalities. The proposed method achieved a predictive accuracy of 96.96% with 97% sensitivity and 99% specificity. The experimental results show that the proposed method has significant potential for use in clinical applications.

  8. Image processing-based framework for continuous lane recognition in mountainous roads for driver assistance system

    NASA Astrophysics Data System (ADS)

    Manoharan, Kodeeswari; Daniel, Philemon

    2017-11-01

    This paper presents a robust lane detection technique for roads on hilly terrain. The goal is to use image processing strategies to recognize lane lines on structured mountain roads with the help of an improved Hough transform. A vision-based approach is used because it performs well in a wide variety of circumstances and extracts richer information than other sensors. The proposed strategy processes the live video stream, a sequence of images, and extracts the position of lane markings after passing the frames through various filters and proper thresholding. The algorithm is tuned for Indian mountainous curved and paved roads. A computational technique is used to discard distracting lines other than the true lane lines and display only the required dominant lane lines. The method finds the two lane lines nearest to the vehicle in an image as early as possible. Various video sequences on hilly terrain were tested to verify the effectiveness of the method, which showed good performance with a detection accuracy of 91.89%.

  10. Estimating the coordinates of pillars and posts in the parking lots for intelligent parking assist system

    NASA Astrophysics Data System (ADS)

    Choi, Jae Hyung; Kuk, Jung Gap; Kim, Young Il; Cho, Nam Ik

    2012-01-01

    This paper proposes an algorithm for detecting pillars or posts in video captured by a single camera mounted on the front side of the rear-view mirror in a car. The main purpose of this algorithm is to complement the weakness of current ultrasonic parking assist systems, which do not find the exact position of pillars well and do not recognize narrow posts. The proposed algorithm consists of three steps: straight line detection, line tracking, and estimation of the 3D position of pillars. In the first step, strong lines are found by the Hough transform. The second step combines detection and tracking, and the third calculates the 3D position of each line by analyzing the trajectory of relative positions together with the camera parameters. Experiments on synthetic and real images show that the proposed method successfully locates and tracks the position of pillars, which helps the ultrasonic system to correctly locate the edges of pillars. It is believed that the proposed algorithm can also be employed as a basic element of a vision-based autonomous driving system.

  11. Automatic detection of zebra crossings from mobile LiDAR data

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.

    2015-07-01

    An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested for application to road management. The algorithm consists of several subsequent processes, starting with road segmentation by performing a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques, in order to detect zebra crossings using the Standard Hough Transform and logical constraints. To optimize the results, image processing algorithms are applied to the intensity images from the point cloud. These algorithms include binarization to separate the painted area from the rest of the pavement, median filtering to avoid noisy points, and mathematical morphology to fill the gaps between the pixels at the border of white marks. Once a road marking is detected, its position is calculated. This information is valuable for the inventorying purposes of road managers that use Geographic Information Systems. The performance of the algorithm has been evaluated over several mobile LiDAR strips accounting for a total of 30 zebra crossings. That test showed a completeness of 83%. Non-detected marks mainly come from painting deterioration of the zebra crossing or from occlusions in the point cloud produced by other vehicles on the road.

  12. Assessing Multiple Methods for Determining Active Source Travel Times in a Dense Array

    NASA Astrophysics Data System (ADS)

    Parker, L.; Zeng, X.; Thurber, C. H.; Team, P.

    2016-12-01

    A total of 238 three-component nodal seismometers were deployed at the Brady Hot Springs geothermal field in Nevada to characterize changes in the subsurface as a result of changes in pumping conditions. The array consisted of a 500 meter by 1600 meter irregular grid with 50 meter spacing, centered in an approximately rectangular 1200 meter by 1600 meter grid with 200 meter spacing. A large vibroseis truck (T-Rex) was deployed as an active seismic source at 216 locations. Over the course of 15 days, the truck occupied each location up to four times. At each location, a swept-frequency source between 5 and 80 Hz over 20 seconds was produced using three vibration modes: longitudinal S-wave, transverse S-wave, and P-wave. Seismic wave arrivals were identified using three methods: cross-correlation, deconvolution, and the Wigner-Ville distribution (WVD) plus the Hough Transform (HT). Surface wave arrivals were clear for all three modes of vibration using all three methods. Preliminary tomographic models will be presented, using the arrivals of the identified phases. This analysis is part of the PoroTomo project: Poroelastic Tomography by Adjoint Inverse Modeling of Data from Seismology, Geodesy, and Hydrology; http://geoscience.wisc.edu/feigl/porotomo.
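    Arrival identification by cross-correlation with a known source sweep, the first of the three methods above, can be sketched as follows; the sweep shape, sampling rate, and arrival time here are synthetic stand-ins:

    ```python
    import numpy as np

    def arrival_time_xcorr(trace, sweep, dt):
        """Pick an arrival by cross-correlating a recorded trace with the
        known source sweep; the correlation peak lag gives the travel time."""
        corr = np.correlate(trace, sweep, mode="full")
        lag = corr.argmax() - (len(sweep) - 1)   # shift of trace relative to sweep
        return lag * dt

    dt = 0.001                                    # 1 ms sampling (assumed)
    t = np.arange(0, 1.0, dt)
    sweep = np.sin(2 * np.pi * (5 + 20 * t) * t)  # toy swept-frequency chirp
    trace = np.zeros(3000)
    trace[500:500 + len(sweep)] = sweep           # sweep arriving 0.5 s in
    picked = arrival_time_xcorr(trace, sweep, dt)
    # picked ~ 0.5 s
    ```

    The chirp's autocorrelation peaks sharply at zero lag, which is what makes correlation with the vibroseis sweep an effective matched filter for arrival picking.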

  13. Extensions of algebraic image operators: An approach to model-based vision

    NASA Technical Reports Server (NTRS)

    Lerner, Bao-Ting; Morelli, Michael V.

    1990-01-01

    Researchers extend their previous research on a highly structured and compact algebraic representation of grey-level images which can be viewed as fuzzy sets. Addition and multiplication are defined for the set of all grey-level images, which can then be described as polynomials of two variables. Utilizing this new algebraic structure, researchers devised an innovative, efficient edge detection scheme. An accurate method for deriving gradient component information from this edge detector is presented. Based upon this new edge detection system, researchers developed a robust method for linear feature extraction by combining the techniques of a Hough transform and a line follower. The major advantage of this feature extractor is its general, object-independent nature. Target attributes, such as line segment lengths, intersections, angles of intersection, and endpoints are derived by the feature extraction algorithm and employed during model matching. The algebraic operators are global operations which are easily reconfigured to operate on any size or shape region. This provides a natural platform from which to pursue dynamic scene analysis. A method for optimizing the linear feature extractor which capitalizes on the spatially reconfigurable nature of the edge detector/gradient component operator is discussed.

  14. Portable bacterial identification system based on elastic light scatter patterns.

    PubMed

    Bae, Euiwon; Ying, Dawei; Kramer, Donald; Patsekin, Valery; Rajwa, Bartek; Holdman, Cheryl; Sturgis, Jennifer; Davisson, V Jo; Robinson, J Paul

    2012-08-28

    Conventional diagnosis and identification of bacteria requires shipment of samples to a laboratory for genetic and biochemical analysis. This process can take days and imposes significant delay to action in situations where timely intervention can save lives and reduce associated costs. To enable faster response to an outbreak, a low-cost, small-footprint, portable microbial-identification instrument using forward scatterometry has been developed. This device, weighing 9 lb and measuring 12 × 6 × 10.5 in., utilizes elastic light scatter (ELS) patterns to accurately capture bacterial colony characteristics and delivers the classification results via wireless access. The overall system consists of two CCD cameras, one rotational and one translational stage, and a 635-nm laser diode. Various software algorithms such as the Hough transform, 2-D geometric moments, and the traveling salesman problem (TSP) have been implemented to provide colony count and circularity, the centering process, and minimized travel time among colonies. Experiments were conducted with four bacteria genera using pure and mixed plates, and as a proof of principle a field test was conducted in four different locations, where the average classification rate ranged between 95 and 100%.

  15. Iris Recognition Using Feature Extraction of Box Counting Fractal Dimension

    NASA Astrophysics Data System (ADS)

    Khotimah, C.; Juniati, D.

    2018-01-01

    Biometrics is a science that is now growing rapidly. Iris recognition is a biometric modality that captures a photo of the eye pattern. The markings of the iris are so distinctive that they have been proposed as a means of identification, instead of fingerprints. Iris recognition was chosen for identification in this research because every human iris has special features that differ between individuals, and the iris is protected by the cornea so that it keeps a fixed shape. The recognition consists of three steps: pre-processing of the data, feature extraction, and feature matching. The Hough transform is used in pre-processing to locate the iris area, and Daugman's rubber sheet model to normalize the iris data set into rectangular blocks. To characterize the iris, the box counting method was used to obtain the fractal dimension value of the iris. Tests were carried out using the k-fold cross-validation method with k = 5; each test used 10 different values of K for the K-Nearest Neighbor (KNN) classifier. The best iris recognition accuracy obtained was 92.63% for K = 3 in the K-Nearest Neighbor (KNN) method.
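    The box counting method above estimates the fractal dimension as the slope of log N(s) against log(1/s), where N(s) is the number of boxes of side s that contain part of the pattern. A minimal sketch (a filled square should come out near dimension 2):

    ```python
    import numpy as np

    def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
        """Estimate the fractal dimension of a binary mask by box counting:
        the slope of log N(s) versus log(1/s)."""
        counts = []
        for s in sizes:
            h, w = mask.shape
            # Partition into s x s boxes and count those with any set pixel.
            grid = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
            counts.append(grid.any(axis=(1, 3)).sum())
        coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return coeffs[0]

    mask = np.ones((64, 64), dtype=bool)   # a filled square region
    d = box_counting_dimension(mask)
    # d ~ 2.0 for a filled plane region
    ```

    For an iris texture the mask would be the binarized, normalized iris block, and the fitted slope (typically a non-integer between 1 and 2) serves as the feature fed to the KNN classifier.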

  16. Randomly displaced phase distribution design and its advantage in page-data recording of Fourier transform holograms.

    PubMed

    Emoto, Akira; Fukuda, Takashi

    2013-02-20

    For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since unitary phase segments are randomly distributed in-plane, the blanks give various spatial frequency components to an image, and thus smooth the spectrum. Moreover, by randomly changing the phase segment size, spike generation from the unitary phase segment size in the spectrum can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed at a relatively narrow extent. The proposed phase distribution sustains the primary functions of a random phase mask for holographic-data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.
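    The smoothing effect of a random phase mask on the Fourier spectrum can be illustrated numerically: without the mask, the spectrum of binary page data is dominated by a DC spike, while random phases spread the energy across the plane. A toy sketch using uniform random phases (not the proposed randomly displaced segment design):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.integers(0, 2, size=(64, 64)).astype(float)   # binary page data

    # Spectrum without a mask: a strong DC spike dominates.
    plain = np.abs(np.fft.fft2(data))

    # Spectrum with a random phase mask: energy is spread out.
    mask = np.exp(1j * 2 * np.pi * rng.random((64, 64)))
    masked = np.abs(np.fft.fft2(data * mask))

    # Peak-to-mean ratio as a crude measure of spectral "spikiness".
    peak_ratio_plain = plain.max() / plain.mean()
    peak_ratio_masked = masked.max() / masked.mean()
    # peak_ratio_masked << peak_ratio_plain
    ```

    A flatter spectrum avoids saturating the recording medium at the DC spot, which is the motivation for phase masking in Fourier transform holographic memories; the paper's contribution is suppressing the residual spectral spikes tied to a fixed phase-segment size.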

  17. FLICC/FEDLINK Conference on Making Library Automation Choices (Washington, D.C., May 6, 1986).

    ERIC Educational Resources Information Center

    Landrum, Hollis

    This report of a conference convened by the Federal Library and Information Center Committee (FLICC) Subcommittee on Education provides brief summaries of four panel discussions conducted by 15 federal librarians who had assembled automation systems for their agencies' libraries and information centers. The first panel, consisting of Dean Hough,…

  18. Lane Level Localization; Using Images and HD Maps to Mitigate the Lateral Error

    NASA Astrophysics Data System (ADS)

    Hosseinyalamdary, S.; Peter, M.

    2017-05-01

    In urban canyons where GNSS signals are blocked by buildings, the accuracy of the measured position deteriorates significantly. GIS databases have frequently been utilized to improve the accuracy of the measured position using map matching approaches, in which the measured position is projected onto the road links (centerlines) and the lateral error of the measured position is reduced. With advances in data acquisition, high definition maps are now generated that contain extra information, such as road lanes. These road lanes can be utilized to mitigate the positional error and improve positioning accuracy. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark: the position is measured by a smartphone's GPS receiver, images are taken with the smartphone's camera, and the ground truth is provided using the Real-Time Kinematic (RTK) technique. Results show the proposed approach significantly improves the accuracy of the measured GPS position: the error in the measured GPS position, with an average and standard deviation of 11.323 and 11.418 meters, is reduced to an error in the estimated position with an average and standard deviation of 6.725 and 5.899 meters.
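    Estimating the homography between image and global coordinates of the road boundaries, as in the pipeline above, is commonly done with the Direct Linear Transform (DLT); a minimal sketch on synthetic correspondences (a generic algorithm, not necessarily the authors' exact estimator):

    ```python
    import numpy as np

    def homography_dlt(src, dst):
        """Direct Linear Transform: homography H mapping src -> dst (4+ points).
        Each correspondence contributes two rows to a homogeneous system
        A h = 0, solved as the SVD null vector."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
        H = vt[-1].reshape(3, 3)
        return H / H[2, 2]          # normalize the arbitrary scale

    # Ground-truth mapping: scale by 2 and translate by (1, 3).
    src = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
    dst = [(2 * x + 1, 2 * y + 3) for x, y in src]
    H = homography_dlt(src, dst)
    # H ~ [[2, 0, 1], [0, 2, 3], [0, 0, 1]]
    ```

    In the paper's setting the correspondences would be points on the Hough-fitted lane boundary lines paired with their map coordinates from the HD map.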

  19. Automatic Coregistration for Multiview SAR Images in Urban Areas

    NASA Astrophysics Data System (ADS)

    Xiang, Y.; Kang, W.; Wang, F.; You, H.

    2017-09-01

    Due to the high resolution and the side-looking mechanism of SAR sensors, complex building structures make the registration of SAR images in urban areas very hard. To solve this problem, an automatic and robust coregistration approach for multiview high resolution SAR images is proposed in this paper, consisting of three main modules. First, both the reference image and the sensed image are segmented into two parts: urban areas and nonurban areas. Urban areas, caused by double or multiple scattering in a SAR image, tend to show higher local mean and local variance values than general homogeneous regions due to their complex structural information; based on this criterion, building areas are extracted. After obtaining the target regions, L-shape structures are detected using the SAR phase congruency model and the Hough transform. The double-bounce scattering formed by wall and ground appears as strong L- or T-shapes, which are usually taken as the most reliable indicator for building detection. Under the assumption that buildings are rectangular and flat, planimetric buildings are delineated using the L-shapes, yielding the reconstructed target areas. For the original areas and the reconstructed target areas, the SAR-SIFT matching algorithm is implemented. Finally, correct corresponding points are extracted by fast sample consensus (FSC) and the transformation model is derived. Experimental results on a pair of multiview TerraSAR images with 1-m resolution show that the proposed approach gives robust and precise registration performance compared with the original SAR-SIFT method.

  20. Reliability Overhaul Model

    DTIC Science & Technology

    1989-08-01

Random variables for the conditional exponential distribution are generated using the inverse transform method. (1) Generate U ~ U(0,1) (2) Set s = -λ ln... exp{-[(x+s-γ)/η]^β + [(x-γ)/η]^β} c. Random variables from the conditional Weibull distribution are generated using the inverse transform method. (1)... using a standard normal transformation and the inverse transform method. B-3 APPENDIX B DISTRIBUTIONS SUPPORTED BY THE MODEL (1) Generate Y = P(X ≤ s
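For reference, the inverse transform method named in the excerpt above works as follows for the plain (unconditional) exponential case: if U ~ U(0,1), then X = -ln(1 - U)/λ has CDF F(x) = 1 - exp(-λx). A small illustrative sketch (the function name, sample size, and seed are arbitrary choices, not from the report):

```python
import math
import random

def exponential_inverse_transform(lam, n, seed=42):
    """Sample Exp(lam) via the inverse transform method: invert the CDF
    F(x) = 1 - exp(-lam*x) at a uniform variate U to get X = -ln(1-U)/lam."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = exponential_inverse_transform(lam=2.0, n=100_000)
mean = sum(samples) / len(samples)
print(mean)  # close to the theoretical mean 1/lam = 0.5
```

The conditional variants in the report apply the same idea to the distribution of remaining life given survival to age x; only the CDF being inverted changes.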

  1. Assessing Native American disturbances in mixed oak forests of the Allegheny Plateau

    Treesearch

    Charles M. Ruffner; Andrew Sluyter; Marc D. Abrams; Charlie Crothers; Jack McLaughlin; Richard Kandare

    1997-01-01

Although much has been written concerning the ecology and disturbance history of hemlock - white pine - northern hardwood (Nichols 1935; Braun 1950) forests of the Allegheny Plateau (Lutz 1930a; Morey 1936; Hough and Forbes 1943; Runkle 1981; Whitney 1990; Abrams and Orwig 1996), few studies have investigated the distribution and successional dynamics of oak in this...

  2. Can a District-Level Teacher Salary Incentive Policy Improve Teacher Recruitment and Retention? Policy Brief 13-4

    ERIC Educational Resources Information Center

    Hough, Heather J.; Loeb, Susanna

    2013-01-01

    In this policy brief, Heather Hough and Susanna Loeb examine the effect of the Quality Teacher and Education Act of 2008 (QTEA) on teacher recruitment, retention, and overall teacher quality in the San Francisco Unified School District (SFUSD). They provide evidence that a salary increase can improve a school district's attractiveness within their…

  3. Pilot Study on the Applicability of Variance Reduction Techniques to the Simulation of a Stochastic Combat Model

    DTIC Science & Technology

    1987-09-01

    inverse transform method to obtain unit-mean exponential random variables, where Vi is the jth random number in the sequence of a stream of uniform random...numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. X(b,c,d) = - P(b,c,d...Defender ,C * P(b,c,d) We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in

  4. Mixing rates and limit theorems for random intermittent maps

    NASA Astrophysics Data System (ADS)

    Bahsoun, Wael; Bose, Christopher

    2016-04-01

We study random transformations built from intermittent maps on the unit interval that share a common neutral fixed point. We focus mainly on random selections of Pomeau-Manneville-type maps {T_α} using the full parameter range 0 < α < ∞, in general. We derive a number of results around a common theme that illustrates in detail how the constituent map that is fastest mixing (i.e. smallest α), combined with details of the randomizing process, determines the asymptotic properties of the random transformation. Our key result (theorem 1.1) establishes sharp estimates on the position of return time intervals for the quenched dynamics. The main applications of this estimate are to limit laws (in particular, CLT and stable laws, depending on the parameters chosen in the range 0 < α < 1) for the associated skew product; these are detailed in theorem 3.2. Since our estimates in theorem 1.1 also hold for 1 ≤ α < ∞, we study a second class of random transformations derived from piecewise affine Gaspard-Wang maps, prove existence of an infinite (σ-finite) invariant measure and study the corresponding correlation asymptotics. To the best of our knowledge, this latter kind of result is completely new in the setting of random transformations.
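For intuition about such random compositions, a Pomeau-Manneville-type map can be iterated with a parameter chosen at random at each step. This is an illustrative sketch only; the branch form T_α(x) = x(1 + (2x)^α) on [0, 1/2) and T(x) = 2x - 1 on [1/2, 1], the two α values, and the seed are assumptions for demonstration, not taken from the paper:

```python
import random

def pm_map(x, alpha):
    """Pomeau-Manneville-type interval map with a neutral fixed point at 0:
    near 0 the map behaves like x + c*x**(1+alpha), so orbits linger there."""
    if x < 0.5:
        return x * (1.0 + (2.0 * x) ** alpha)
    return 2.0 * x - 1.0

rng = random.Random(0)
alphas = (0.2, 0.8)   # the faster-mixing constituent has the smaller alpha
x = 0.3
orbit = []
for _ in range(1000):
    # quenched dynamics: compose a randomly selected map at every step
    x = pm_map(x, rng.choice(alphas))
    orbit.append(x)
print(all(0.0 <= v <= 1.0 for v in orbit))  # True: the interval is invariant
```

Long stretches of the orbit near 0 correspond to the intermittent (laminar) phases whose duration statistics drive the limit laws discussed in the paper.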

  5. The Great Tunes of the Hough: Music and Song in Alan Garner's "The Stone Book Quartet "

    ERIC Educational Resources Information Center

    Godek, Sarah

    2004-01-01

    Although song and music are often elements in children's books, little critical attention has gone into examining their literary uses. Alan Garner's "The Stone Book Quartet" is an example of four texts for children in which music plays a vital role. The several snatches of traditional songs found throughout the quartet bring to life the culture of…

  6. FILM FORMAT AND FIDUCIAL MARKS OF THE 20$sub 4$ BUBBLE CHAMBER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, E.L.

    1962-12-31

    A description is given of the 20-in. bubble chamber film format. The film format consists of: chamber image; Arabic picture number; binary data box; Arabic view number; and the Hough-Powell road fiducial marks. The fiducial marks and their relation to the chamber optical constants are discussed. The constants are based on the standard measuring fiducials a and d. (P.C.H.)

  7. Development and Evaluation of a Success Index for Professionals in Postgraduate Training Programs

    DTIC Science & Technology

    1993-02-26

15 Predicting Success among Program Participants ... 16 AEGD Success and Career Success ... 16... 10), and general career success (8). Hough applied the principle of behavioral consistency and aspects of the biographical inventory to develop and... the opportunity to evaluate how measures of success in AEGD translate into career success. The 90 AERs were reviewed by two experienced senior dental

  8. InSight Prelaunch Briefing

    NASA Image and Video Library

    2018-05-03

    Col. Michael Hough, Commander 30th Space Wing, Vandenberg Air Force Base, discusses NASA's InSight mission during a prelaunch media briefing, Thursday, May 3, 2018, at Vandenberg Air Force Base in California. InSight, short for Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, is a Mars lander designed to study the "inner space" of Mars: its crust, mantle, and core. Photo Credit: (NASA/Bill Ingalls)

  9. The space transformation in the simulation of multidimensional random fields

    USGS Publications Warehouse

    Christakos, G.

    1987-01-01

Space transformations are proposed as a mathematically meaningful and practically comprehensive approach to simulate multidimensional random fields. Within this context the turning bands method of simulation is reconsidered and improved in both the space and frequency domains. © 1987.

  10. Novel image encryption algorithm based on multiple-parameter discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Dong, Taiji; Wu, Jianhua

    2010-08-01

A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can effectively resist statistical analyses. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.

  11. A new FOD recognition algorithm based on multi-source information fusion and experiment analysis

    NASA Astrophysics Data System (ADS)

    Li, Yu; Xiao, Gang

    2011-08-01

Foreign Object Debris (FOD) is any substance, debris or article alien to an aircraft or system that could potentially cause serious damage when it appears on an airport runway. Given an airport's complex circumstances, quick and precise detection of FOD targets on the runway is an important protection for airplane safety. A multi-sensor system including millimeter-wave radar and infrared (IR) image sensors is introduced, and a new FOD detection and recognition algorithm based on the inherent features of FOD is proposed in this paper. First, the FOD's location and coordinates are accurately obtained by the millimeter-wave radar; then, according to these coordinates, the IR camera takes target images and background images. Second, the runway's edges, which are straight lines in the IR image, are extracted using the Hough transform, and the potential target region, that is, the runway region, is segmented from the whole image. Third, background subtraction is utilized to localize the FOD target in the runway region. Finally, in the detailed small images of the FOD target, a new characteristic is discussed and used in target classification. The experimental results show that this algorithm can effectively reduce the computational complexity, satisfy the real-time requirement, and achieve high detection and recognition probability.
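The background-subtraction step above can be sketched in a few lines: flag pixels whose intensity differs from the background frame by more than a threshold, then localize the flagged region. The threshold value, image size, and synthetic "FOD" patch below are illustrative assumptions, not the paper's data:

```python
def localize_target(background, frame, threshold=30):
    """Pixel-wise background subtraction: flag pixels whose absolute
    difference from the background exceeds `threshold`, then return the
    bounding box (r_min, c_min, r_max, c_max) of flagged pixels, or None."""
    rows, cols = len(frame), len(frame[0])
    hits = [(r, c) for r in range(rows) for c in range(cols)
            if abs(frame[r][c] - background[r][c]) > threshold]
    if not hits:
        return None
    rs = [r for r, _ in hits]
    cs = [c for _, c in hits]
    return (min(rs), min(cs), max(rs), max(cs))

# Synthetic 8x8 runway patch with a bright 2x2 "FOD" object at rows 3-4, cols 5-6.
bg = [[10] * 8 for _ in range(8)]
frame = [row[:] for row in bg]
for r in (3, 4):
    for c in (5, 6):
        frame[r][c] = 200
box = localize_target(bg, frame)
print(box)  # (3, 5, 4, 6)
```

In the paper's pipeline this subtraction is restricted to the runway region segmented via the Hough transform, which suppresses clutter outside the runway.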

  12. A cyclostrophic transformed Eulerian zonal mean model for the middle atmosphere of slowly rotating planets

    NASA Astrophysics Data System (ADS)

    Li, K. F.; Yao, K.; Taketa, C.; Zhang, X.; Liang, M. C.; Jiang, X.; Newman, C. E.; Tung, K. K.; Yung, Y. L.

    2015-12-01

With the advance of modern computers, studies of planetary atmospheres have heavily relied on general circulation models (GCMs). Because these GCMs are usually very complicated, the simulations are sometimes difficult to understand. Here we develop a semi-analytic zonally averaged, cyclostrophic residual Eulerian model to illustrate how some of the large-scale structures of the middle atmospheric circulation can be explained qualitatively in terms of simple thermal (e.g. solar heating) and mechanical (the Eliassen-Palm flux divergence) forcings. This model is a generalization of that for fast rotating planets such as the Earth, where geostrophy dominates (Andrews and McIntyre 1987). The solution to this semi-analytic model consists of a set of modified Hough functions of the generalized Laplace's tidal equation with the cyclostrophic terms. As examples, we apply this model to Titan and Venus. We show that the seasonal variations of the temperature and the circulation of these slowly-rotating planets can be well reproduced by adjusting only three parameters in the model: the Brunt-Väisälä buoyancy frequency, the Newtonian radiative cooling rate, and the Rayleigh friction damping rate. We will also discuss the application of this model to study the meridional transport of photochemically produced tracers that can be observed by space instruments.

  13. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    NASA Astrophysics Data System (ADS)

    Sharghi, Elan

The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary to develop a tool that automates the job of an image analyst by intelligently detecting and classifying objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as possible ship objects. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. A texture analysis method, line detection using the Hough transform, and edge detection using wavelets are examined as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.

  14. Assessment of cluster yield components by image analysis.

    PubMed

    Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose

    2015-04-01

Berry weight, berry number and cluster weight are key parameters for yield estimation in the wine and table grape industries. Current yield prediction methods are destructive, labour-demanding and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red varieties of grapevine (Vitis vinifera L.) were photographed under laboratory conditions and their cluster yield components were determined manually after image acquisition. Two algorithms, based on the Canny and the logarithmic image processing approaches, were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough transform. Results were obtained in two ways: by analysing either a single image of the cluster or four images per cluster from different orientations. The best results (R(2) between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The model's capability to predict berry weight from image analysis was 84%. The new and low-cost methodology presented here enabled the assessment of cluster yield components, saving time and providing inexpensive information in comparison with current manual methods. © 2014 Society of Chemical Industry.
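Berry detection by the circular Hough transform can be sketched as a vote in (cx, cy, r) space: each contour point votes for every candidate centre lying at distance r from it, and a true circle's centre collects one vote per contour point. A pure-Python sketch on a synthetic circular outline; the angular step, candidate radii, and function name are illustrative choices:

```python
import math
from collections import Counter

def hough_circles(edge_points, radii):
    """Vote in (cx, cy, r) space: each edge point votes for all centres at
    distance r from it, sampled every 10 degrees; the accumulator peak
    recovers the circle's centre and radius."""
    acc = Counter()
    for x, y in edge_points:
        for r in radii:
            for a in range(0, 360, 10):
                t = math.radians(a)
                cx = round(x - r * math.cos(t))
                cy = round(y - r * math.sin(t))
                acc[(cx, cy, r)] += 1
    return acc

# Synthetic berry outline: 36 contour points on a circle of radius 8 at (15, 15).
pts = [(15 + 8 * math.cos(math.radians(a)), 15 + 8 * math.sin(math.radians(a)))
       for a in range(0, 360, 10)]
(cx, cy, r), votes = hough_circles(pts, radii=(6, 8, 10)).most_common(1)[0]
print(cx, cy, r)  # 15 15 8
```

In the paper this voting runs on the Canny (or logarithmic-processing) contours, and each accumulator peak corresponds to one detected berry.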

  15. Automatic firearm class identification from cartridge cases

    NASA Astrophysics Data System (ADS)

    Kamalakannan, Sridharan; Mann, Christopher J.; Bingham, Philip R.; Karnowski, Thomas P.; Gleason, Shaun S.

    2011-03-01

We present a machine vision system for automatic identification of the class of firearms by extracting and analyzing two significant properties from spent cartridge cases, namely the Firing Pin Impression (FPI) and the Firing Pin Aperture Outline (FPAO). Within the framework of the proposed machine vision system, a white light interferometer is employed to image the head of the spent cartridge cases. As a first step of the algorithmic procedure, the Primer Surface Area (PSA) is detected using a circular Hough transform. Once the PSA is detected, a customized statistical region-based parametric active contour model is initialized around the center of the PSA and evolved to segment the FPI. Subsequently, the scaled version of the segmented FPI is used to initialize a customized Mumford-Shah based level set model in order to segment the FPAO. Once the shapes of the FPI and FPAO are extracted, a shape-based level set method is used to compare these extracted shapes to an annotated dataset of FPIs and FPAOs from varied firearm types. A total of 74 cartridge case images non-uniformly distributed over five different firearms are processed using the aforementioned scheme, and the promising results (95% classification accuracy) demonstrate the efficacy of the proposed approach.

  16. Efficient and automatic image reduction framework for space debris detection based on GPU technology

    NASA Astrophysics Data System (ADS)

    Diprima, Francesco; Santoni, Fabio; Piergentili, Fabrizio; Fortunato, Vito; Abbattista, Cristoforo; Amoruso, Leonardo

    2018-04-01

In recent years, the increasing number of space debris objects has triggered the need for a distributed monitoring system for the prevention of possible space collisions. Space surveillance based on ground telescopes allows monitoring of the traffic of Resident Space Objects (RSOs) in Earth orbit. This space debris surveillance has several applications, such as orbit prediction and conjunction assessment. This paper proposes an optimized, performance-oriented pipeline for source extraction intended for the automatic detection of space debris in optical data. The detection method is based on morphological operations and the Hough transform for lines. Near real-time detection is obtained using General Purpose computing on Graphics Processing Units (GPGPU). The high degree of processing parallelism provided by GPGPU allows data analysis to be split over thousands of threads in order to process large datasets within a limited computational time. The implementation has been tested on a large and heterogeneous image data set, containing satellites from different orbit ranges imaged in multiple observation modes (i.e. sidereal and object tracking). These images were taken during an observation campaign performed from the EQUO (EQUatorial Observatory) observatory settled at the Broglio Space Center (BSC) in Kenya, which is part of the ASI-Sapienza Agreement.

  17. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data have some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.

  18. Diffuse optical tomography using semiautomated coregistered ultrasound measurements

    NASA Astrophysics Data System (ADS)

    Mostafa, Atahar; Vavadi, Hamed; Uddin, K. M. Shihab; Zhu, Quing

    2017-12-01

Diffuse optical tomography (DOT) has demonstrated great potential in breast cancer diagnosis and treatment monitoring. DOT image reconstruction guided by ultrasound (US) improves diffused light localization and lesion reconstruction accuracy. However, DOT reconstruction depends on the tumor geometry provided by coregistered US. Experienced operators can measure these lesion parameters manually, but training and measurement time are needed; the wide clinical use of this technique depends on its robustness and on faster image reconstruction capability. This article introduces a semiautomated procedure that automatically extracts lesion information from US images and incorporates it into the optical reconstruction. Adaptive threshold-based image segmentation is used to obtain tumor boundaries. In some US images, the posterior shadow can extend to the chest wall and make the detection of the deeper lesion boundary difficult; this problem can be solved using a Hough transform. The proposed procedure was validated on data from 20 patients. Optical reconstruction results using the proposed procedure were compared with those reconstructed using tumor information extracted by an experienced user. The mean optical absorption obtained from manual measurement was 0.21±0.06 cm-1 for malignant and 0.12±0.06 cm-1 for benign cases, whereas for the proposed method it was 0.24±0.08 cm-1 and 0.12±0.05 cm-1, respectively.
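Threshold-based segmentation of a dark (hypoechoic) lesion can be sketched with a simple local-mean adaptive threshold: a pixel is foreground when it is darker than its neighbourhood mean by some offset. The window size, offset, and synthetic image below are illustrative assumptions, not the paper's exact algorithm:

```python
def adaptive_threshold(img, win=3, offset=5):
    """Local-mean adaptive threshold: mark a pixel as lesion (1) when it is
    darker than the mean of its (2*win+1)^2 neighbourhood by `offset`
    (US lesions appear hypoechoic, i.e. darker than surrounding tissue)."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [img[rr][cc]
                    for rr in range(max(0, r - win), min(rows, r + win + 1))
                    for cc in range(max(0, c - win), min(cols, c + win + 1))]
            if img[r][c] < sum(vals) / len(vals) - offset:
                out[r][c] = 1
    return out

# Bright 10x10 background with a dark 3x3 "lesion" in the middle.
img = [[120] * 10 for _ in range(10)]
for r in range(4, 7):
    for c in range(4, 7):
        img[r][c] = 40
mask = adaptive_threshold(img)
print(sum(map(sum, mask)))  # 9: exactly the dark block is segmented
```

Because the threshold adapts to the local mean, the same rule works across regions of varying background brightness, which a single global threshold would not.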

  19. Development of advanced image analysis techniques for the in situ characterization of multiphase dispersions occurring in bioreactors.

    PubMed

    Galindo, Enrique; Larralde-Corona, C Patricia; Brito, Teresa; Córdova-Aguilar, Ma Soledad; Taboada, Blanca; Vega-Alvarado, Leticia; Corkidi, Gabriel

    2005-03-30

Fermentation bioprocesses typically involve two liquid phases (i.e. water and organic compounds) and one gas phase (air), together with suspended solids (i.e. biomass), which are the components to be dispersed. Characterization of multiphase dispersions is required as it determines mass transfer efficiency and bioreactor homogeneity. It is also needed for the appropriate design of contacting equipment, helping to establish optimum operational conditions. This work describes the development of image analysis based techniques with advantages (in terms of data acquisition and processing) for the characterization of oil drop and bubble diameters in complex simulated fermentation broths. The system consists of fully digital acquisition of in situ images obtained from the inside of a mixing tank using a CCD camera synchronized with a stroboscopic light source; the images are processed with versatile commercial software. To improve the automation of particle recognition and counting, the Hough transform (HT) was used, so bubbles and oil drops were detected automatically and the processing time was reduced by 55% without losing accuracy with respect to a fully manual analysis. The system has been used for the detailed characterization of a number of operational conditions, including oil content, biomass morphology, presence of surfactants (such as proteins) and viscosity of the aqueous phase.

  20. Performance-scalable volumetric data classification for online industrial inspection

    NASA Astrophysics Data System (ADS)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically require the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders of magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm is presented, offering algorithm-complexity reduction by using ellipses as generic descriptors of solids of revolution, and supporting performance scalability by exploiting the inherent parallelism of volumetric data. A two-stage variant of the classical Hough transform is used for ellipse detection, and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.

  1. A new code for automatic detection and analysis of the lineament patterns for geophysical and geological purposes (ADALGEO)

    NASA Astrophysics Data System (ADS)

    Soto-Pinto, C.; Arellano-Baeza, A.; Sánchez, G.

    2013-08-01

We present a new numerical method for automatic detection and analysis of changes in lineament patterns caused by seismic and volcanic activity. The method is implemented as a series of modules: (i) normalization of the image contrast, (ii) extraction of small linear features (stripes) through convolution of the part of the image in the vicinity of each pixel with a circular mask, or through the Canny algorithm, and (iii) subsequent detection of the main lineaments using the Hough transform. We demonstrate that our code reliably detects changes in the lineament patterns related to the stress evolution in the Earth's crust: specifically, a significant number of new lineaments appear approximately one month before an earthquake, while one month after the earthquake the lineament configuration returns to its initial state. Application of our software to the deformations caused by volcanic activity yields the opposite result: the number of lineaments decreases with the onset of microseismicity. This discrepancy can be explained by assuming that plate tectonic earthquakes are caused by the compression and accumulation of stress in the Earth's crust due to the subduction of tectonic plates, whereas in the case of volcanic activity we deal with the inflation of a volcanic edifice due to elevated pressure and magma intrusion and the resulting stretching of the surface.

  2. Online Data Reduction for the Belle II Experiment using DATCON

    NASA Astrophysics Data System (ADS)

    Bernlochner, Florian; Deschamps, Bruno; Dingfelder, Jochen; Marinas, Carlos; Wessel, Christian

    2017-08-01

The new Belle II experiment at the asymmetric e+e- accelerator SuperKEKB at KEK in Japan is designed to deliver a peak luminosity of 8 × 10^35 cm^-2 s^-1. To perform high-precision track reconstruction, e.g. for measurements of time-dependent CP-violating decays and secondary vertices, the Belle II detector is equipped with a highly segmented pixel detector (PXD). The high instantaneous luminosity and short bunch crossing times result in a large stream of data in the PXD, which needs to be significantly reduced for offline storage. The data reduction is performed using an FPGA-based Data Acquisition Tracking and Concentrator Online Node (DATCON), which uses information from the Belle II silicon strip vertex detector (SVD) surrounding the PXD to carry out online track reconstruction, extrapolation to the PXD, and Region of Interest (ROI) determination on the PXD. The data stream is reduced by a factor of ten with an ROI finding efficiency of >90% for PXD hits inside the ROI, down to 50 MeV in pT of the stable particles. We will present the current status of the implementation of the track reconstruction using Hough transformations, and the results obtained for simulated ϒ(4S) → BB̄ events.

  3. A cyclostrophic transformed Eulerian zonal mean model for the middle atmosphere of slowly rotating planets

    NASA Astrophysics Data System (ADS)

    Li, King-Fai; Yao, Kaixuan; Taketa, Cameron; Zhang, Xi; Liang, Mao-Chang; Jiang, Xun; Newman, Claire; Tung, Ka-Kit; Yung, Yuk L.

    2016-04-01

With the advance of modern computers, studies of planetary atmospheres have heavily relied on general circulation models (GCMs). Because these GCMs are usually very complicated, the simulations are sometimes difficult to understand. Here we develop a semi-analytic zonally averaged, cyclostrophic residual Eulerian model to illustrate how some of the large-scale structures of the middle atmospheric circulation can be explained qualitatively in terms of simple thermal (e.g. solar heating) and mechanical (the Eliassen-Palm flux divergence) forcings. This model is a generalization of that for fast rotating planets such as the Earth, where geostrophy dominates (Andrews and McIntyre 1987). The solution to this semi-analytic model consists of a set of modified Hough functions of the generalized Laplace's tidal equation with the cyclostrophic terms. As an example, we apply this model to Titan. We show that the seasonal variations of the temperature and the circulation of these slowly-rotating planets can be well reproduced by adjusting only three parameters in the model: the Brunt-Väisälä buoyancy frequency, the Newtonian radiative cooling rate, and the Rayleigh friction damping rate. We will also discuss an application of this model to study the meridional transport of photochemically produced tracers that can be observed by space instruments.

  4. Quasi real-time analysis of mixed-phase clouds using interferometric out-of-focus imaging: development of an algorithm to assess liquid and ice water content

    NASA Astrophysics Data System (ADS)

    Lemaitre, P.; Brunel, M.; Rondeau, A.; Porcheron, E.; Gréhan, G.

    2015-12-01

Due to changes in aircraft certification rules, instrumentation has to be developed to alert flight crews to potential icing conditions. The technique developed needs to measure in real time the amount of ice and liquid water encountered by the plane. Interferometric imaging offers an interesting solution: it is currently used to measure the size of regular droplets, and it can further measure the size of irregular particles from the analysis of their speckle-like out-of-focus images. However, conventional image processing needs to be sped up to be compatible with the real-time detection of icing conditions. This article presents the development of an optimised algorithm to accelerate image processing. The proposed algorithm is based on the detection of each interferogram using the gradient pair vector method, which is shown to be 13 times faster than the conventional Hough transform. The algorithm is validated on synthetic images of mixed-phase clouds, and finally tested and validated in laboratory conditions. It should have important applications in the size measurement of droplets and ice particles for aircraft safety, cloud microphysics investigation, and more generally in the real-time analysis of triphasic flows using interferometric particle imaging.

  5. Detection of Hard Exudates in Colour Fundus Images Using Fuzzy Support Vector Machine-Based Expert System.

    PubMed

    Jaya, T; Dheeba, J; Singh, N Albert

    2015-12-01

Diabetic retinopathy is a major cause of vision loss in diabetic patients. Currently, there is a need for decision-making using intelligent computer algorithms when screening a large volume of data. This paper presents an expert decision-making system designed using a fuzzy support vector machine (FSVM) classifier to detect hard exudates in fundus images. The optic discs in the colour fundus images are segmented, to avoid false alarms, using morphological operations and the circular Hough transform. To discriminate between exudate and non-exudate pixels, colour and texture features are extracted from the images. These features are given as input to the FSVM classifier. The classifier analysed 200 retinal images collected from diabetic retinopathy screening programmes. The tests made on the retinal images show that the proposed detection system has better discriminating power than the conventional support vector machine. With the best combination of FSVM and feature sets, the area under the receiver operating characteristic curve reached 0.9606, which corresponds to a sensitivity of 94.1% with a specificity of 90.0%. The results suggest that detecting hard exudates using FSVM contributes to the computer-assisted detection of diabetic retinopathy and serves as a decision support system for ophthalmologists.

  6. Fisheye-Based Method for GPS Localization Improvement in Unknown Semi-Obstructed Areas

    PubMed Central

    Moreau, Julien; Ambellouis, Sébastien; Ruichek, Yassine

    2017-01-01

A precise GNSS (Global Navigation Satellite System) localization is vital for autonomous road vehicles, especially in cluttered or urban environments where satellites are occluded, preventing accurate positioning. We propose to fuse GPS (Global Positioning System) data with fisheye stereovision to address this problem independently of additional data, which may be outdated, unavailable, or in need of correlation with reality. Our stereoscope is sky-facing, with 360° × 180° fisheye cameras to observe surrounding obstacles. We propose 3D modelling and plane extraction through the following steps: stereoscope self-calibration for robustness to decalibration, stereo matching that considers neighbouring epipolar curves to compute 3D points, and robust plane fitting based on the generated cartography and the Hough transform. We use these 3D data together with GPS raw data to estimate the pseudorange delay of NLOS (Non-Line-Of-Sight) reflected signals. We exploit the extracted planes to build a visibility mask for NLOS detection, and a simplified 3D canyon model allows computation of reflection pseudorange delays. In the end, the GPS position is computed from the corrected pseudoranges. In experiments on real fixed scenes, we show that the generated 3D models reach metric accuracy and that horizontal GPS positioning accuracy improves by more than 50%. The proposed procedure is effective, and the proposed NLOS detection outperforms C/N0-based methods (Carrier-to-receiver Noise density). PMID:28106746
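The plane-extraction step can be illustrated with an ordinary least-squares fit of z = ax + by + c to 3D points. This is a plain LS sketch standing in for the paper's Hough-based robust fitting: the horizontal plane form assumes non-vertical planes, and the synthetic points are illustrative:

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3D points by solving the
    3x3 normal equations with Cramer's rule (no external libraries).
    Note: a plain LS fit, not the robust Hough-based fit of the paper."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points); syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Normal equations: [sxx sxy sx; sxy syy sy; sx sy n] [a b c]^T = [sxz syz sz]^T
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    d = det3(A)
    coeffs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = rhs[r]
        coeffs.append(det3(Ai) / d)
    return coeffs  # [a, b, c]

# Synthetic points on the plane z = 2x - y + 3.
pts = [(x, y, 2 * x - y + 3) for x in range(5) for y in range(5)]
a, b, c = fit_plane(pts)
print(a, b, c)  # approximately 2.0 -1.0 3.0
```

A robust variant (as in the paper) would instead let each point vote in a discretized plane-parameter space, so outlier points from other surfaces cannot bias the fit.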

  7. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    PubMed

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of thresholding, the watershed transform, and object classification. The positions of the microelectrodes are obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis produced information on the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and the spatial distributions of neurons and microelectrodes. Most analyses of microscopy images of neuronal cultures on MEAs consider only simple qualitative assessment; the proposed framework instead aims to standardize the image processing and to compute useful quantitative measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant to the integrated study of recorded electrophysiological signals together with the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments to the image processing parameter estimation.
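    The neuron-to-microelectrode distance measures described above can be computed directly from segmented centroids; the following sketch uses hypothetical neuron coordinates and a hypothetical regular electrode grid:

```python
import numpy as np
from scipy.spatial.distance import cdist

# Hypothetical neuron centroids (row, col) from segmentation,
# and an 8x8 electrode grid with a 20-pixel pitch (both illustrative).
neurons = np.array([[12.0, 40.0], [55.0, 55.0], [80.0, 21.0]])
electrodes = np.array([[r, c] for r in range(0, 160, 20)
                              for c in range(0, 160, 20)], dtype=float)

d = cdist(neurons, electrodes)   # all pairwise neuron-electrode distances
nearest = d.min(axis=1)          # each neuron's nearest-electrode distance
print(len(neurons), nearest)     # population count and distances
```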

  8. Identifying Green Infrastructure from Social Media and Crowdsourcing- An Image Based Machine-Learning Approach.

    NASA Astrophysics Data System (ADS)

    Rai, A.; Minsker, B. S.

    2016-12-01

    In this work we introduce GRID (GReen Infrastructure Detection), a novel dataset, and a framework for identifying urban green storm water infrastructure (GI) designs (wetlands/ponds, urban trees, and rain gardens/bioswales) from social media and satellite aerial images using computer vision and machine learning methods. Along with its hydrologic benefits, such as reducing runoff volumes and urban heat islands, GI also provides important socio-economic benefits such as stress recovery and community cohesion. However, GI is installed by many different parties, and cities typically do not know where GI is located, making study of its impacts or siting of new GI difficult. We use object recognition methods (template matching, a sliding window approach, and the Random Hough Forest method) and supervised machine learning algorithms (e.g., support vector machines) as initial screening approaches to detect potential GI sites, which can then be investigated in more detail using on-site surveys. Training data were collected from GPS locations of Flickr and Instagram image postings and Amazon Mechanical Turk identification of each GI type. The sliding window method outperformed the other methods, achieving an average F-measure (a combined metric of precision and recall) of 0.78.
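    The F-measure quoted at the end combines precision and recall as their harmonic mean. A minimal sketch (the 78/22/22 counts below are illustrative only, chosen so the result lands at 0.78):

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall (the F1 score)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 78 true detections, 22 false positives, 22 misses -> P = R = F = 0.78
print(f_measure(78, 22, 22))  # -> 0.78
```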

  9. Cost Per Flying Hour Analysis of the C-141

    DTIC Science & Technology

    1997-09-01

    Government Printing Office, 1996. Horngren , Charles T. Cost Accounting : A Managerial Emphasis (Eighth Edition). New Jersey: Prentice Hall, 1994. Hough...standard accounting techniques. This analysis of AMC’s current costs and their applicability to the price charged to the customer shall be the focus of... Horngren et al.,1994:864). There are three generally recognized methods of determining a transfer price (Arnstein and Gilabert, 1980:189). Cost based

  10. U.S. Army Classification Research Panel: Conclusions and Recommendations on Classification Research Strategies

    DTIC Science & Technology

    2007-05-01

    criteria, specifically occupational and organizational retention criteria; and (c) indices of career success (cf. Barrick & Mount, 1991; Hogan & Holland... career success (cf. Barrick & Mount, 1991; Hogan & Holland, 2003; Hough & Furnham, 2003; Hurtz & Donovan, 2000; Judge et al., 1999; Ozer, & Benet...traits, general mental ability, and career success across the life span. Personnel Psychology, 52, 621-652. Knapp, D. J., & Campbell, R. C. (Eds.) (2006

  11. Literature Review: Cognitive Abilities--Theory, History, and Validity

    DTIC Science & Technology

    1991-02-01

    Note 88-13. (AD A193 558) Literature Review: Utility of Temperament, Biodata, and Interest Assessment for Predicting Job Performance by Leaetta M. Hough...predicting soldiers' job performance, and then to develop new measures for those attributes. These Research Notes, however, have usefulness beyond that...organization or taxonomy of the constructs in each area, and the validities of the various measures for different types of job performance criteria. Second

  12. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are widely adopted in image super-resolution. In this paper, we propose a new learning-based approach using the contourlet transform and a Markov random field. The proposed algorithm employs the contourlet transform rather than the conventional wavelet transform to represent image features, and accounts for the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet coefficients of the low- and high-resolution image pairs in the training set. The unknown high-frequency coefficients for the input LR image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated in experiments on facial, vehicle plate, and real scene images; better visual quality is achieved in terms of peak signal-to-noise ratio and the structural similarity measure.
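    The peak signal-to-noise ratio used for evaluation follows the standard definition; a minimal sketch (not the authors' evaluation code), with a toy pair of images:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 128.0)
noisy = ref + 1.0         # a uniform error of 1 gray level, so MSE = 1
val = psnr(ref, noisy)
print(val)                # 20*log10(255), about 48.13 dB
```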

  13. Adaptive threshold shearlet transform for surface microseismic data denoising

    NASA Astrophysics Data System (ADS)

    Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan

    2018-06-01

    Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly influences the identification and location of microseismic events. The shearlet transform is a new multiscale transform which can effectively represent low-magnitude microseismic signals. In the shearlet domain, because valid signals and random noise have different distributions, shearlet coefficients can be shrunk by thresholding, so the choice of threshold is vital to suppressing random noise. Conventional threshold denoising algorithms usually apply the same threshold to all coefficients, which causes inefficient noise suppression or loss of valid signal. To solve these problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate a fundamental threshold for each directional subband. In each subband, an adjustment factor is obtained from each coefficient and its neighboring coefficients, in order to adaptively regulate the fundamental threshold for different shearlet coefficients. Finally, we apply the adaptive threshold to the shearlet coefficients. Denoising experiments on synthetic records and field data illustrate that the proposed method performs better at suppressing random noise and preserving valid signal than the conventional shearlet denoising method.
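    The paper's exact adjustment factor is not given in the abstract; the following is a generic neighbourhood-adaptive soft-threshold sketch in the same spirit, with an assumed (purely illustrative) adjustment rule operating on one subband:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_soft_threshold(coeffs, base_t, size=3, eps=1e-12):
    """Soft-threshold coefficients with a locally adjusted threshold.
    The threshold is lowered where local energy is high (likely signal)
    and raised where it is low (likely noise). The adjustment rule is
    an assumption for illustration, not the ATST formula."""
    local = uniform_filter(np.abs(coeffs), size=size)        # neighbourhood energy
    t = base_t * np.mean(local) / (local + eps)              # hypothetical factor
    t = np.clip(t, 0.5 * base_t, 2.0 * base_t)
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

rng = np.random.default_rng(0)
c = rng.normal(0, 0.1, (32, 32))   # noise-like subband coefficients
c[10:13, 10:13] += 5.0             # one strong coherent "event"
out = adaptive_soft_threshold(c, base_t=0.3)
```

    The event survives (its neighbourhood lowers the threshold) while most noise coefficients are zeroed.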

  14. From Networks to Time Series

    NASA Astrophysics Data System (ADS)

    Shimada, Yutaka; Ikeguchi, Tohru; Shigehara, Takaomi

    2012-10-01

    In this Letter, we propose a framework to transform a complex network into a time series. The transformation from complex networks to time series is realized by classical multidimensional scaling. Applying the transformation method to a model proposed by Watts and Strogatz [Nature (London) 393, 440 (1998)], we show that ring lattices are transformed to periodic time series, small-world networks to noisy periodic time series, and random networks to random time series. We also show that these relationships hold analytically, using circulant-matrix theory and the perturbation theory of linear operators. The results are generalized to several high-dimensional lattices.
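    The ring-lattice case can be reproduced in miniature: shortest-path distances on the graph are fed to classical multidimensional scaling, and reading one embedding coordinate node by node yields a periodic series. A sketch (not the authors' code), assuming an unweighted nearest-neighbour ring:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

n = 20
A = np.zeros((n, n))
for i in range(n):                        # ring lattice: node i linked to i+1 (mod n)
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

D = shortest_path(A, unweighted=True)     # graph distances
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J               # double-centred squared distances
w, V = np.linalg.eigh(B)
coords = V[:, -2:] * np.sqrt(w[-2:])      # top-2 classical MDS coordinates

radii = np.linalg.norm(coords, axis=1)    # ring symmetry -> near-constant radius
series = coords[:, 0]                     # one coordinate along the ring: periodic
```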

  15. Statistical optics

    NASA Astrophysics Data System (ADS)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.

  16. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to obtain the complete average of the solution functions, represented by the probability density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to get a closed form of the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to derive complete analytical averages for some physical quantities of interest, namely the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the averages of the partial heat fluxes for the generalized problem with an internal radiation source are obtained and represented graphically.

  17. Digital double random amplitude image encryption method based on the symmetry property of the parametric discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Bekkouche, Toufik; Bouguezel, Saad

    2018-03-01

    We propose a real-to-real image encryption method. It is a double random amplitude encryption method based on the parametric discrete Fourier transform coupled with chaotic maps to perform the scrambling. The main idea behind this method is the introduction of a complex-to-real conversion by exploiting the inherent symmetry property of the transform in the case of real-valued sequences. This conversion allows the encrypted image to be real-valued instead of being a complex-valued image as in all existing double random phase encryption methods. The advantage is to store or transmit only one image instead of two images (real and imaginary parts). Computer simulation results and comparisons with the existing double random amplitude encryption methods are provided for peak signal-to-noise ratio, correlation coefficient, histogram analysis, and key sensitivity.

  18. Literature Review: Validity and Potential Usefulness of Psychomotor Ability Tests for Personnel Selection and Classification

    DTIC Science & Technology

    1988-04-01

    cognitive ability tests. Taken together, these findings suggest a need for further psychomotor test development and validation research...Leaetta M. Hough (ed.). The findings presented in these documents were used in the development of a battery of new tests and inventories for use in...Project A. The focus of that development effort was to identify abilities and other human attributes that seemed "best bets" for predicting

  19. The Crimea and the Donbass in Flames: The Influence of Russian Propaganda and the Ukraine Crisis

    DTIC Science & Technology

    2016-09-01

    RUSSIAN PROPAGANDA AND THE UKRAINE CRISIS 5. FUNDING NUMBERS 6. AUTHOR(S) James T. Hough 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) Naval...Postgraduate School Monterey, CA 93943-5000 8. PERFORMING ORGANIZATION REPORT NUMBER 9. SPONSORING /MONITORING AGENCY NAME(S) AND ADDRESS(ES...establish a new norm or gives new significance to an old one. F. THESIS OVERVIEW AND DRAFT CHAPTER OUTLINE This thesis is organized into four

  20. Parallel Vision Algorithm Design and Implementation 1988 End of Year Report

    DTIC Science & Technology

    1989-08-01

    as a local operation, the provided C code used raster-order processing to speed up execution time. This made it impossible to implement the code using...Apply, which does not allow the programmer to take advantage of raster-order processing. Therefore, the 5x5 median filter algorithm was a straight...possible to exploit raster-order processing in W2, giving greater efficiency. The first advantage is the reason that connected components and the Hough

  1. California quake assessed

    NASA Astrophysics Data System (ADS)

    Wuethrich, Bernice

    On January 17, at 4:31 A.M., a 6.6 magnitude earthquake hit the Los Angeles area, crippling much of the local infrastructure and claiming 51 lives. Members of the Southern California Earthquake Network, a consortium of scientists at universities and the United States Geological Survey (USGS), entered a controlled crisis mode. Network scientists, including David Wald, Susan Hough, Kerry Sieh, and a half dozen others went into the field to gather information on the earthquake, which apparently ruptured an unmapped fault.

  2. A random effects meta-analysis model with Box-Cox transformation.

    PubMed

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies, and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When the sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating the parameters of the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range, and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model can be inappropriate summaries, and the proposed model helps reduce this issue. We illustrate the proposed model using two examples, which reveal some important differences in summary results, heterogeneity measures, and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
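    The normalising effect of the Box-Cox step can be sketched with SciPy's maximum-likelihood estimate of the transformation parameter (not the authors' Bayesian model), on simulated skewed "effect estimates":

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
effects = rng.lognormal(mean=0.0, sigma=0.75, size=200)  # skewed toy estimates

transformed, lam = stats.boxcox(effects)   # MLE of the Box-Cox lambda
s_before = stats.skew(effects)
s_after = stats.skew(transformed)
print(s_before, s_after)                   # skewness is sharply reduced
```

    For a lognormal sample the fitted lambda is close to 0, i.e. the transformation is close to a log.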

  3. Manchester visual query language

    NASA Astrophysics Data System (ADS)

    Oakley, John P.; Davis, Darryl N.; Shann, Richard T.

    1993-04-01

    We report a database language for visual retrieval which allows queries on image feature information that has been computed and stored along with images. The language is novel in that it provides facilities for dealing with feature data which has actually been obtained from image analysis. Each line in the Manchester Visual Query Language (MVQL) takes a set of objects as input and produces another, usually smaller, set as output. The MVQL constructs are mainly based on proven operators from the field of digital image analysis. An example is the Hough-group operator, which takes as input a specification for the objects to be grouped, a specification for the relevant Hough space, and a definition of the voting rule; the output is a ranked list of high-scoring bins. A query can be directed towards one particular image or an entire image database; in the latter case, the bins in the output list would in general be associated with different images. We have implemented MVQL in two layers. The command interpreter is a Lisp program which maps each MVQL line to a sequence of commands used to control a specialized database engine. The latter is a hybrid graph/relational system which provides low-level support for inheritance and schema evolution. In the paper we outline the language and provide examples of useful queries. We also describe our solution to the engineering problems associated with the implementation of MVQL.

  4. Encoding plaintext by Fourier transform hologram in double random phase encoding using fingerprint keys

    NASA Astrophysics Data System (ADS)

    Takeda, Masafumi; Nakano, Kazuya; Suzuki, Hiroyuki; Yamaguchi, Masahiro

    2012-09-01

    It has been shown that biometric information can be used as a cipher key for binary data encryption by applying double random phase encoding. In such methods, binary data are encoded in a bit pattern image, and the decrypted image becomes a plain image when the key is genuine; otherwise, decrypted images become random images. In some cases, images decrypted by imposters may not be fully random, such that the blurred bit pattern can be partially observed. In this paper, we propose a novel bit coding method based on a Fourier transform hologram, which makes images decrypted by imposters more random. Computer experiments confirm that the method increases the randomness of images decrypted by imposters while keeping the false rejection rate as low as in the conventional method.
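    The conventional double random phase encoding that this method builds on can be sketched with FFTs. Note this is the classic two-mask scheme, not the fingerprint-keyed variant proposed in the record:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((16, 16))                       # stand-in for the bit-pattern image

M1 = np.exp(2j * np.pi * rng.random(f.shape))  # input-plane random phase mask
M2 = np.exp(2j * np.pi * rng.random(f.shape))  # Fourier-plane mask (the "key")

enc = np.fft.ifft2(np.fft.fft2(f * M1) * M2)   # double random phase encoding

# Decryption with the genuine Fourier-plane key recovers the plain image;
# an imposter's key yields a random-looking image.
dec = np.abs(np.fft.ifft2(np.fft.fft2(enc) * np.conj(M2)))
M_bad = np.exp(2j * np.pi * rng.random(f.shape))
dec_bad = np.abs(np.fft.ifft2(np.fft.fft2(enc) * np.conj(M_bad)))
```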

  5. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, e.g. fluid dynamics in microfluidic devices, bacterial taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and which resolves particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is employed in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate exceeding 94% with only 1% false detections.

  6. Accurate detection of blood vessels improves the detection of exudates in color fundus images.

    PubMed

    Youssef, Doaa; Solouma, Nahed H

    2012-12-01

    Exudates are among the earliest and most prevalent symptoms of diseases leading to blindness, such as diabetic retinopathy and macular degeneration. Certain areas of a retina with such conditions are photocoagulated by laser to stop the disease progressing and prevent blindness. Outlining these areas depends on outlining the lesions and the anatomic structures of the retina. In this paper, we provide a new method for the detection of blood vessels that improves the detection of exudates in fundus photographs. The method starts with an edge detection algorithm, which results in an over-segmented image. A new feature-based algorithm is then used to accurately detect the blood vessels; it considers the characteristics of a retinal blood vessel, such as its width range, intensities, and orientations, for the purpose of selective segmentation. Because of its bulb shape and its color similarity with exudates, the optic disc can be detected using the common Hough transform technique. The extracted blood vessel tree and optic disc are subtracted from the over-segmented image to get an initial estimate of the exudates. The final estimate of the exudates is then obtained by morphological reconstruction based on the appearance of exudates. This method is promising, since it increases the sensitivity and specificity of exudate detection to 80% and 100%, respectively. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  7. Mathematical imaging methods for mitosis analysis in live-cell phase contrast microscopy.

    PubMed

    Grah, Joana Sarah; Harrington, Jennifer Alison; Koh, Siang Boon; Pike, Jeremy Andrew; Schreiner, Alexander; Burger, Martin; Schönlieb, Carola-Bibiane; Reichelt, Stefanie

    2017-02-15

    In this paper we propose a workflow to detect and track mitotic cells in time-lapse microscopy image sequences. In order to avoid the requirement for cell lines expressing fluorescent markers and the associated phototoxicity, phase contrast microscopy is often preferred over fluorescence microscopy in live-cell imaging. However, the specific image characteristics of phase contrast microscopy complicate image processing and impede the use of standard methods. Nevertheless, automated analysis is desirable, because manual analysis is subjective, biased, and extremely time-consuming for large data sets. Here, we present a workflow based on mathematical imaging methods. In the first step, mitosis detection is performed by means of the circular Hough transform. The obtained circular contour subsequently serves as an initialisation for a tracking algorithm based on variational methods, which is sub-divided into two parts: to determine the beginning of the whole mitosis cycle, a backwards tracking procedure is performed; after that, the cell is tracked forwards in time until the end of mitosis. As a result, the average mitosis duration and the ratios of different cell fates (cell death, no division, division into two or more daughter cells) can be measured, and statistics on cell morphologies can be obtained. All of the tools are featured in the user-friendly MATLAB® graphical user interface MitosisAnalyser. Copyright © 2017. Published by Elsevier Inc.

  8. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
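    The line-extraction step via the Hough transform can be sketched as a plain (rho, theta) accumulator over edge pixels. This is a textbook version on a synthetic image, not the paper's implementation:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Accumulate votes in (rho, theta) space for a binary edge map."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))           # rho range is [-diag, diag]
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, diag

img = np.zeros((50, 50), dtype=bool)
img[20, :] = True                                 # a horizontal line at y = 20
acc, diag = hough_lines(img)
rho_i, theta_i = np.unravel_index(acc.argmax(), acc.shape)
print(theta_i, rho_i - diag)                      # -> 90 20
```

    The peak at theta = 90 degrees, rho = 20 recovers the line x·cos θ + y·sin θ = ρ, i.e. y = 20.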

  9. Kalman Filter Tracking on Parallel Architectures

    NASA Astrophysics Data System (ADS)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2016-11-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
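    A minimal one-dimensional constant-velocity Kalman filter illustrates the predict/update cycle being ported; all parameters below are illustrative, not the experiment's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, q, r = 1.0, 1e-3, 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity motion model
H = np.array([[1.0, 0.0]])                 # we only measure position
Q = q * np.eye(2)                          # process noise covariance
R = np.array([[r]])                        # measurement noise covariance

true_pos = 0.5 * np.arange(100)            # target moving at 0.5 units/step
z = true_pos + rng.normal(0, 1.0, 100)     # noisy position measurements

x = np.array([[z[0]], [0.0]])
P = np.eye(2)
est = []
for zk in z:
    x = F @ x                               # predict state and covariance
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (np.array([[zk]]) - H @ x)  # update with the measurement
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0, 0])

rms_meas = np.sqrt(np.mean((z - true_pos) ** 2))
rms_kf = np.sqrt(np.mean((np.array(est) - true_pos) ** 2))
print(rms_meas, rms_kf)                     # the filter beats the raw measurements
```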

  10. White blood cell counting analysis of blood smear images using various segmentation strategies

    NASA Astrophysics Data System (ADS)

    Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza

    2017-09-01

    In white blood cell (WBC) diagnosis, the most crucial measurement is the WBC count. Such information is widely used to evaluate the effectiveness of cancer therapy and to diagnose several hidden infections within the human body. The current practice of manual WBC counting is laborious and highly subjective, which has led to computer-aided systems (CAS) with rigorous image processing solutions. In CAS counting, segmentation is the crucial step for ensuring the accuracy of the cell count, and an optimal segmentation strategy that works under various blood smear image acquisition conditions remains a great challenge. In this paper, a comparison between different segmentation methods based on color space analysis is elaborated to find the best counting outcome. Initially, color space correction is applied to the original blood smear image to standardize the image color intensity level. Next, white blood cell segmentation is performed using a combination of color analysis subtractions (RGB, CMYK, and HSV) and Otsu thresholding. Noise and unwanted regions present after segmentation are eliminated by applying a combination of morphological and Connected Component Labelling (CCL) filters. Eventually, the Circle Hough Transform (CHT) method is applied to the segmented image to estimate the number of WBCs, including those under clumped regions. From the experiments, it is found that G-S yields the best performance.
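    The labelling-and-filtering stage of such a counting pipeline can be sketched with connected-component labelling; the blob geometry below is hypothetical, and the CHT clump-splitting step is omitted:

```python
import numpy as np
from scipy import ndimage

img = np.zeros((60, 60), dtype=bool)
img[5:15, 5:15] = True                     # three cell-like blobs...
img[30:42, 10:22] = True
img[20:28, 40:50] = True
img[55, 55] = True                         # ...plus one isolated noise pixel

labels, n = ndimage.label(img)             # connected-component labelling
sizes = ndimage.sum(img, labels, range(1, n + 1))
count = int(np.sum(sizes >= 20))           # keep only components above a size filter
print(n, count)                            # -> 4 3
```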

  11. A Method for Characterizing Phenotypic Changes in Highly Variable Cell Populations and its Application to High Content Screening of Arabidopsis thaliana Protoplasts

    PubMed Central

    Johnson, Gregory R.; Kangas, Joshua D.; Dovzhenko, Alexander; Trojok, Rüdiger; Voigt, Karsten; Majarian, Timothy D.; Palme, Klaus; Murphy, Robert F.

    2017-01-01

    Quantitative image analysis procedures are necessary for the automated discovery of effects of drug treatment in large collections of fluorescent micrographs. When compared to their mammalian counterparts, the effects of drug conditions on protein localization in plant species are poorly understood and underexplored. To investigate this relationship, we generated a large collection of images of single plant cells after various drug treatments. For this, protoplasts were isolated from six transgenic lines of A. thaliana expressing fluorescently tagged proteins. Nine drugs at three concentrations were applied to protoplast cultures followed by automated image acquisition. For image analysis, we developed a cell segmentation protocol for detecting drug effects using a Hough-transform based region of interest detector and a novel cross-channel texture feature descriptor. In order to determine treatment effects, we summarized differences between treated and untreated experiments with an L1 Cramér-von Mises statistic. The distribution of these statistics across all pairs of treated and untreated replicates was compared to the variation within control replicates to determine the statistical significance of observed effects. Using this pipeline, we report the dose dependent drug effects in the first high-content Arabidopsis thaliana drug screen of its kind. These results can function as a baseline for comparison to other protein organization modeling approaches in plant cells. PMID:28245335

  12. A Study of Lane Detection Algorithm for Personal Vehicle

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kazuyuki; Watanabe, Kajiro; Ohkubo, Tomoyuki; Kurihara, Yosuke

    By the phrase “personal vehicle”, we mean a simple and lightweight vehicle expected to emerge as a personal ground transportation device. The motorcycle, electric wheelchair, and motor-powered bicycle are examples of personal vehicles that have been developed as useful transportation for personal use. Recently, a new type of intelligent personal vehicle called the Segway has been developed, which is controlled and stabilized using on-board intelligent multiple sensors. Demand for such personal vehicles is increasing: 1) to enhance human mobility, 2) to support mobility for elderly persons, and 3) to reduce environmental burdens. With the personal vehicle market growing rapidly, the number of accidents caused by human error is also increasing, and many of these accidents are related to driving ability. To enhance or support driving ability as well as to prevent accidents, intelligent assistance is necessary. One of the most important elemental functions for a personal vehicle is robust lane detection. In this paper, we develop a robust lane detection method for personal vehicles in outdoor environments. The proposed method employs a 360-degree omnidirectional camera and a robust image processing algorithm. In order to detect lanes, a combination of template matching and the Hough transform is employed. The validity of the proposed lane detection algorithm is confirmed on an actual developed vehicle under various outdoor sunlight conditions.
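    The line-finding half of the detector can be sketched generically (this is a plain ρ-θ Hough transform for illustration, not the authors' omnidirectional-camera pipeline): each point votes for every line through it, and collinear points pile their votes into one accumulator bin.

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180):
    """Vote in (theta, rho) space: each point (x, y) lies on every line
    rho = x*cos(theta) + y*sin(theta), theta in [0, pi)."""
    acc = Counter()
    for x, y in points:
        for k in range(n_theta):
            t = math.pi * k / n_theta
            rho = round(x * math.cos(t) + y * math.sin(t))
            acc[(k, rho)] += 1
    return acc

# Points along a lane boundary y = x through the origin:
# the accumulator peak is expected near theta = 135 degrees, rho = 0.
pts = [(i, i) for i in range(20)]
(theta_bin, rho), votes = hough_lines(pts).most_common(1)[0]
```

    With the coarse 1-pixel ρ quantization used here, adjacent θ bins can tie at the peak, which is why real detectors follow the vote with a local-maximum search and non-maximum suppression.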

  13. Semi-automatic object geometry estimation for image personalization

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.

    2010-01-01

    Digital printing brings about a host of benefits, one of which is the ability to create short runs of variable, customized content. One form of customization that is receiving much attention lately is in photofinishing applications, whereby personalized calendars, greeting cards, and photo books are created by inserting text strings into images. It is particularly interesting to estimate the underlying geometry of the surface and incorporate the text into the image content in an intelligent and natural way. Current solutions either allow fixed text insertion schemes into preprocessed images, or provide manual text insertion tools that are time consuming and aimed only at the high-end graphic designer. It would thus be desirable to provide some level of automation in the image personalization process. We propose a semi-automatic image personalization workflow which includes two scenarios: text insertion and text replacement. In both scenarios, the underlying surfaces are assumed to be planar. A 3-D pinhole camera model is used for rendering text, whose parameters are estimated by analyzing existing structures in the image. Techniques in image processing and computer vision such as the Hough transform, the bilateral filter, and connected component analysis are combined, along with necessary user inputs. In particular, the semi-automatic workflow is implemented as an image personalization tool, which is presented in our companion paper.1 Experimental results including personalized images for both scenarios are shown, which demonstrate the effectiveness of our algorithms.

  14. Management of Patients With Histologic Transformation.

    PubMed

    Reddy, Nishitha M

    2017-07-01

    The incidence of histological transformation is up to 30% over a period of 10 years. This risk persists even beyond the initial decade after diagnosis of an indolent lymphoma. In this era of emerging novel therapies, one could hope for improved survival. There are currently no randomized trials guiding therapy for transformed lymphoma. Treatment recommendations are based on observational studies or non-randomized single-arm clinical trials. To that extent, although stem cell transplantation is routinely recommended and performed at transplant centers, robust evidence regarding the timing or type (autologous or allogeneic) of transplant is lacking. In this article, we discuss the clinical features, treatment approach and role of stem cell transplant in transformed lymphoma. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Characterization and Simulation of Gunfire with Wavelets

    DOE PAGES

    Smallwood, David O.

    1999-01-01

    Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.

  16. Fidelity under isospectral perturbations: a random matrix study

    NASA Astrophysics Data System (ADS)

    Leyvraz, F.; García, A.; Kohler, H.; Seligman, T. H.

    2013-07-01

    The set of Hamiltonians generated by all unitary transformations from a single Hamiltonian is the largest set of isospectral Hamiltonians we can form. Taking advantage of the fact that the unitary group can be generated from Hermitian matrices we can take the ones generated by the Gaussian unitary ensemble with a small parameter as small perturbations. Similarly, the transformations generated by Hermitian antisymmetric matrices from orthogonal matrices form isospectral transformations among symmetric matrices. Based on this concept we can obtain the fidelity decay of a system that decays under a random isospectral perturbation with well-defined properties regarding time-reversal invariance. If we choose the Hamiltonian itself also from a classical random matrix ensemble, then we obtain solutions in terms of form factors in the limit of large matrices.
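    The construction described above, a small unitary generated by a random Hermitian matrix acting on a fixed Hamiltonian, is easy to verify numerically. The sketch below uses standard-normal entries as a stand-in for a properly normalized GUE draw and checks that the perturbed Hamiltonian is exactly isospectral:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 6, 0.01

def hermitian(k):
    """Random Hermitian matrix (stand-in for a GUE draw, normalization aside)."""
    m = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
    return (m + m.conj().T) / 2

H0 = hermitian(n)   # the Hamiltonian
A = hermitian(n)    # Hermitian generator of the small perturbation

# U = exp(-i * eps * A), built from the eigendecomposition of A.
w, V = np.linalg.eigh(A)
U = V @ np.diag(np.exp(-1j * eps * w)) @ V.conj().T

H1 = U @ H0 @ U.conj().T          # isospectral perturbation of H0
spec0 = np.linalg.eigvalsh(H0)
spec1 = np.linalg.eigvalsh(H1)    # identical spectrum up to roundoff
```

    Fidelity decay studies then compare time evolution under H0 with evolution under H1 as eps is varied.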

  17. The Impact of an Intelligent Computer-Based Tutor on Classroom Social Processes: An Ethnographic Study.

    DTIC Science & Technology

    1993-02-11

    of computer games such as Where in the World is Carmen Santiago ?, provided a milieu in which these boys could fantasize about their prowess and...Organization for Economic Pacific Bell Cooperation and Development 2600 Camino Ramon 2, rue Andru-Pascal Room 3S-450 75016 PARIS San Ramon, CA 94583 FRANCE...Psychology University of Delaware Dr. Arthur Melmed Newark, DE 19711 Computer Arts and Education Laboratory Ms. Julia S. Hough New York University 110 W

  18. Prospective Assessment of Neurocognition in Future Gulf-Deployed and Gulf-Nondeployed Military Personnel: A Pilot Study

    DTIC Science & Technology

    2008-02-01

    CW, Castro CA, Messer SC, McGurk D, Cotting DI, Koffman RL : Combat duty in Iraq and Afghanistan: mental health problems and barriers to care. N Eng...JA, Hough RL , Jordan BK, Marmar CR, et al. Trauma and the Vietnam war generation: Report of findings from the National Vietnam Veterans Readjustment...Castro CA, Messer SC, McGurk D, Cotting DI, Koffman RL . Combat duty in Iraq and Afghanistan, mental health problems, and barriers to care. N Engl J Med

  19. InSight Prelaunch Briefing

    NASA Image and Video Library

    2018-05-03

    Col. Michael Hough, Commander 30th Space Wing, Vandenberg Air Force Base, left, and 1st Lieutenant Kristina Williams, weather officer, 30th Space Wing, Vandenberg Air Force Base, discuss NASA's InSight mission during a prelaunch media briefing, Thursday, May 3, 2018, at Vandenberg Air Force Base in California. InSight, short for Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, is a Mars lander designed to study the "inner space" of Mars: its crust, mantle, and core. Photo Credit: (NASA/Bill Ingalls)

  20. Fabrication of angleply carbon-aluminum composites

    NASA Technical Reports Server (NTRS)

    Novak, R. C.

    1974-01-01

    A study was conducted to fabricate and test angleply composite consisting of NASA-Hough carbon base monofilament in a matrix of 2024 aluminum. The effect of fabrication variables on the tensile properties was determined, and an optimum set of conditions was established. The size of the composite panels was successfully scaled up, and the material was tested to measure tensile behavior as a function of temperature, stress-rupture and creep characteristics at two elevated temperatures, bending fatigue behavior, resistance to thermal cycling, and Izod impact response.

  1. Organizing for Effective Joint Warfare: A Deductive Analysis of U.S. Armed Forces Joint Doctrine

    DTIC Science & Technology

    1993-06-18

    Unpublished SAMS Monograph, (U.S. Army Command and General Staff College, Fort Leavenworth, KS: 1988) p. 25. 3. Frank O. Hough, et al, History of...SAMS Monograph, (U.S. Army Command and General Staff College, Fort Leavenworth, KS: 1988), p. 16 and p. 29; and William O. Pierce, ’Span of Control and...The Operational Commander: Is it More Than Just a Number?", Unpublished SAMS Monograph, (U.S. Army Command and General Staff College, Fort Leavenworth

  2. Computation of transform domain covariance matrices

    NASA Technical Reports Server (NTRS)

    Fino, B. J.; Algazi, V. R.

    1975-01-01

    It is often of interest in applications to compute the covariance matrix of a random process transformed by a fast unitary transform. Here, the recursive definition of fast unitary transforms is used to derive recursive relations for the covariance matrices of the transformed process. These relations lead to fast methods of computation of covariance matrices and to substantial reductions of the number of arithmetic operations required.
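    The relation underlying this computation is C_y = H C_x H^T for a unitary transform H. A small sketch illustrates it, using an order-4 Hadamard matrix built recursively by a Kronecker product (echoing the recursive definition of fast unitary transforms) and an assumed AR(1) covariance; unitarity means the trace, i.e. the total variance, is preserved:

```python
import math

def kron(a, b):
    """Kronecker product of two square matrices (lists of lists)."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(r) for r in zip(*a)]

# Order-4 Hadamard built recursively: H4 = H2 (x) H2, unitary with 1/sqrt(2) scaling.
s = 1 / math.sqrt(2)
H2 = [[s, s], [s, -s]]
H4 = kron(H2, H2)

def transform_covariance(H, C):
    """Covariance of the transformed process: C_y = H C_x H^T."""
    return matmul(matmul(H, C), transpose(H))

# Assumed input: AR(1) covariance with correlation 0.5.
rho = 0.5
Cx = [[rho ** abs(i - j) for j in range(4)] for i in range(4)]
Cy = transform_covariance(H4, Cx)
```

    The paper's contribution is to exploit the recursion itself so that C_y is obtained in far fewer operations than this dense triple product.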

  3. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
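    The two-step recipe in the abstract (spectral coloring of white Gaussian noise, then a pointwise marginal transformation) can be sketched as follows. The low-pass spectrum and the exponential target marginal are arbitrary illustrative choices, not the paper's examples, and normalizing by the sample standard deviation only approximates a standard-normal marginal:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n = 64

# Step 1: color white Gaussian noise with an assumed isotropic low-pass
# power spectral density (any target spectrum could be substituted).
white = rng.normal(size=(n, n))
fx = np.fft.fftfreq(n)
S = 1.0 / (1.0 + (fx[None, :] ** 2 + fx[:, None] ** 2) / 0.01)
colored = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(S)))
colored /= colored.std()          # approximately standard-normal marginal

# Step 2: map the Gaussian marginal to the target distribution through
# u = Phi(g), then the target's quantile function (exponential(1) here).
phi = np.vectorize(lambda g: 0.5 * (1 + math.erf(g / math.sqrt(2))))
u = phi(colored)
field = -np.log1p(-u)             # exponential(1) quantile of u
```

    The pointwise map changes the spectrum slightly, which is why the paper describes the method as an engineering approach rather than an exact one.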

  4. Digital Sound Encryption with Logistic Map and Number Theoretic Transform

    NASA Astrophysics Data System (ADS)

    Satria, Yudi; Gabe Rizky, P. H.; Suryadi, MT

    2018-03-01

    Digital sound encryption in the frequency domain has limitations. A Number Theoretic Transform over the field GF(2^521 − 1) improves on and solves that problem. The algorithm for this sound encryption is based on a combination of a chaos function and the Number Theoretic Transform. The chaos function used in this paper is the Logistic Map. Trials and simulations were conducted on 5 different digital sound files in WAV format, each simulated at least 100 times. The resulting key stream is random, as verified by 15 of NIST's randomness tests. The key space formed is very large, exceeding 10^469. The processing speed of the encryption algorithm is only slightly affected by the Number Theoretic Transform.

  5. Improved decryption quality and security of a joint transform correlator-based encryption system

    NASA Astrophysics Data System (ADS)

    Vilardy, Juan M.; Millán, María S.; Pérez-Cabré, Elisabet

    2013-02-01

    Some image encryption systems based on modified double random phase encoding and joint transform correlator architecture produce low quality decrypted images and are vulnerable to a variety of attacks. In this work, we analyse the algorithm of some reported methods that optically implement the double random phase encryption in a joint transform correlator. We show that it is possible to significantly improve the quality of the decrypted image by introducing a simple nonlinear operation in the encrypted function that contains the joint power spectrum. This nonlinearity also makes the system more resistant to chosen-plaintext attacks. We additionally explore the system resistance against this type of attack when a variety of probability density functions are used to generate the two random phase masks of the encryption-decryption process. Numerical results are presented and discussed.

  6. Nonuniform sampling theorems for random signals in the linear canonical transform domain

    NASA Astrophysics Data System (ADS)

    Shuiqing, Xu; Congmei, Jiang; Yi, Chai; Youqiang, Hu; Lei, Huang

    2018-06-01

    Nonuniform sampling can be encountered in various practical processes because of random events or a poor timebase. The analysis and applications of nonuniform sampling for deterministic signals related to the linear canonical transform (LCT) have been well studied, but up to now no papers have been published regarding nonuniform sampling theorems for random signals related to the LCT. The aim of this article is to explore the nonuniform sampling and reconstruction of random signals associated with the LCT. First, some special nonuniform sampling models are briefly introduced. Second, based on these models, some reconstruction theorems for random signals from various nonuniform samples associated with the LCT are derived. Finally, simulation results are presented to verify the accuracy of the sampling theorems. In addition, potential practical applications of nonuniform sampling for random signals are also discussed.

  7. The arcsine is asinine: the analysis of proportions in ecology.

    PubMed

    Warton, David I; Hui, Francis K C

    2011-01-01

    The arcsine square root transformation has long been standard procedure when analyzing proportional data in ecology, with applications in data sets containing binomial and non-binomial response variables. Here, we argue that the arcsine transform should not be used in either circumstance. For binomial data, logistic regression has greater interpretability and higher power than analyses of transformed data. However, it is important to check the data for additional unexplained variation, i.e., overdispersion, and to account for it via the inclusion of random effects in the model if found. For non-binomial data, the arcsine transform is undesirable on the grounds of interpretability, and because it can produce nonsensical predictions. The logit transformation is proposed as an alternative approach to address these issues. Examples are presented in both cases to illustrate these advantages, comparing various methods of analyzing proportions including untransformed, arcsine- and logit-transformed linear models and logistic regression (with or without random effects). Simulations demonstrate that logistic regression usually provides a gain in power over other methods.
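    The interpretability argument can be seen directly: the logit back-transform maps any real-valued linear predictor into (0, 1), while a linear fit on the arcsine-square-root scale can extrapolate past π/2, where the back-transform sin²(x) is no longer monotone. A minimal sketch (illustrative values, not the paper's data):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(z):
    return 1 / (1 + math.exp(-z))

def arcsine(p):
    """Arcsine square root transform: asin(sqrt(p)), range [0, pi/2]."""
    return math.asin(math.sqrt(p))

# Logit back-transform: any linear predictor lands strictly inside (0, 1).
preds = [inv_logit(z) for z in (-10, -1, 0, 1, 10)]

# Arcsine back-transform: a linear extrapolation beyond pi/2 (~1.5708)
# produces a prediction that DECREASES as the predictor increases.
extrapolated = 1.8
nonsensical = math.sin(extrapolated) ** 2
```

    This non-monotonicity past the boundary is one concrete way the arcsine transform "can produce nonsensical predictions" while the logit cannot.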

  8. Analysis of two dimensional signals via curvelet transform

    NASA Astrophysics Data System (ADS)

    Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.

    2007-04-01

    This paper describes an application of the curvelet transform to the analysis of interferometric images. Compared to the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. This article includes numerical experiments, which were executed on a random interferometric image. In nonlinear approximation, the curvelet transform yields a matrix with a smaller number of coefficients than the wavelet transform guarantees. Additionally, denoising simulations show that the curvelet transform could be a very good tool to remove noise from images.

  9. Random mutagenesis of the hyperthermophilic archaeon Pyrococcus furiosus using in vitro mariner transposition and natural transformation.

    PubMed

    Guschinskaya, Natalia; Brunel, Romain; Tourte, Maxime; Lipscomb, Gina L; Adams, Michael W W; Oger, Philippe; Charpentier, Xavier

    2016-11-08

    Transposition mutagenesis is a powerful tool to identify the function of genes, reveal essential genes and generally to unravel the genetic basis of living organisms. However, transposon-mediated mutagenesis has only been successfully applied to a limited number of archaeal species and has never been reported in Thermococcales. Here, we report random insertion mutagenesis in the hyperthermophilic archaeon Pyrococcus furiosus. The strategy takes advantage of the natural transformability of derivatives of the P. furiosus COM1 strain and of in vitro Mariner-based transposition. A transposon bearing a genetic marker is randomly transposed in vitro in genomic DNA that is then used for natural transformation of P. furiosus. A small-scale transposition reaction routinely generates several hundred and up to two thousand transformants. Southern analysis and sequencing showed that the obtained mutants contain a single and random genomic insertion. Polyploidy has been reported in Thermococcales and P. furiosus is suspected of being polyploid. Yet, about half of the mutants obtained on the first selection are homozygous for the transposon insertion. Two rounds of isolation on selective medium were sufficient to obtain gene conversion in initially heterozygous mutants. This transposition mutagenesis strategy will greatly facilitate functional exploration of the Thermococcales genomes.

  10. Fast and secure encryption-decryption method based on chaotic dynamics

    DOEpatents

    Protopopescu, Vladimir A.; Santoro, Robert T.; Tolliver, Johnny S.

    1995-01-01

    A method and system for the secure encryption of information. The method comprises the steps of dividing a message of length L into its character components; generating m chaotic iterates from m independent chaotic maps; producing an "initial" value based upon the m chaotic iterates; transforming the "initial" value to create a pseudo-random integer; repeating the steps of generating, producing and transforming until a pseudo-random integer sequence of length L is created; and encrypting the message as ciphertext based upon the pseudo random integer sequence. A system for accomplishing the invention is also provided.
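    The patented scheme combines m independent chaotic maps; the single-map sketch below illustrates the same keystream idea (a logistic map in its fully chaotic regime driving an XOR stream cipher). It is illustrative only and not cryptographically secure:

```python
def logistic_keystream(seed, n, burn_in=100):
    """Byte keystream from the logistic map x -> 4x(1-x) (chaotic regime).
    The seed in (0, 1) plays the role of the secret key."""
    x = seed
    for _ in range(burn_in):            # discard the transient iterates
        x = 4 * x * (1 - x)
    out = []
    for _ in range(n):
        x = 4 * x * (1 - x)
        out.append(int(x * 256) % 256)  # quantize each iterate to a byte
    return bytes(out)

def xor_crypt(data, seed):
    """XOR with the chaotic keystream; the same call decrypts."""
    ks = logistic_keystream(seed, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"attack at dawn"
ct = xor_crypt(msg, seed=0.37)
pt = xor_crypt(ct, seed=0.37)   # round-trips back to msg
```

    The patent's "initial value" produced from m iterates and the subsequent transformation step correspond here, loosely, to the burn-in and the byte quantization.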

  11. Comparison of methods for the detection of gravitational waves from unknown neutron stars

    NASA Astrophysics Data System (ADS)

    Walsh, S.; Pitkin, M.; Oliver, M.; D'Antonio, S.; Dergachev, V.; Królak, A.; Astone, P.; Bejger, M.; Di Giovanni, M.; Dorosh, O.; Frasca, S.; Leaci, P.; Mastrogiovanni, S.; Miller, A.; Palomba, C.; Papa, M. A.; Piccinni, O. J.; Riles, K.; Sauter, O.; Sintes, A. M.

    2016-12-01

    Rapidly rotating neutron stars are promising sources of continuous gravitational wave radiation for the LIGO and Virgo interferometers. The majority of neutron stars in our galaxy have not been identified with electromagnetic observations. All-sky searches for isolated neutron stars offer the potential to detect gravitational waves from these unidentified sources. The parameter space of these blind all-sky searches, which also cover a large range of frequencies and frequency derivatives, presents a significant computational challenge. Different methods have been designed to perform these searches within acceptable computational limits. Here we describe the first benchmark in a project to compare the search methods currently available for the detection of unknown isolated neutron stars. The five methods compared here are individually referred to as the PowerFlux, sky Hough, frequency Hough, Einstein@Home, and time-domain F-statistic methods. We employ a mock data challenge to compare the ability of each search method to recover signals simulated assuming a standard signal model. We find similar performance among the four quick-look search methods, while the more computationally intensive search method, Einstein@Home, achieves up to a factor of two higher sensitivity. We find that the absence of a second derivative frequency in the search parameter space does not degrade search sensitivity for signals with physically plausible second derivative frequencies. We also report on the parameter estimation accuracy of each search method, and the stability of the sensitivity in frequency and frequency derivative and in the presence of detector noise.

  12. Waveform Design for Multimedia Airborne Networks: Robust Multimedia Data Transmission in Cognitive Radio Networks

    DTIC Science & Technology

    2011-03-01

    at the sensor. According to Candes, Tao and Romberg [1], a small number of random projections of a signal that is compressible is all the...Projection of Signal Transform i. DWT ii. FFT iii. DCT Solve the Minimization problem Reconstruct Signal Channel (AWGN ) De -noise Signal Original...Signal (Noisy) Random Projection of Signal Transform i. DWT ii. FFT iii. DCT Solve the Minimization problem Reconstruct Signal Channel (Noiseless) De

  13. The Relationship between Transformational Leadership and Organizational Commitment: The Case for Vocational Teachers in Jordan

    ERIC Educational Resources Information Center

    Khasawneh, Samer; Omari, Aieman; Abu-Tineh, Abdullah M.

    2012-01-01

    The purpose of this article is to determine the relationship between the transformational leadership of vocational school principals and vocational teachers' organizational commitment. A random sample of 340 vocational teachers responded to a three-part instrument (the transformational leadership questionnaire, the organizational commitment…

  14. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
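    RANVAR's approach, inverse-transform sampling, carries over directly from BASIC. Below is a Python sketch for two of the seven variates (exponential and triangular), with hypothetical parameter choices; the remaining distributions follow the same pattern of inverting the CDF of a uniform draw:

```python
import math
import random

random.seed(42)

def exponential(lam):
    """Inverse-transform sampling: X = -ln(1 - U) / lambda."""
    return -math.log(1 - random.random()) / lam

def triangular(a, b, c):
    """Inverse-transform sampling for a triangular(a, b) distribution
    with mode c, inverting each branch of the piecewise CDF."""
    u = random.random()
    f = (c - a) / (b - a)   # CDF value at the mode
    if u < f:
        return a + math.sqrt(u * (b - a) * (c - a))
    return b - math.sqrt((1 - u) * (b - a) * (b - c))

exp_mean = sum(exponential(2.0) for _ in range(20000)) / 20000   # ~ 1/2
tri_samples = [triangular(0, 10, 2) for _ in range(20000)]       # all in [0, 10]
```

    Discrete variates such as the binomial, Poisson, and Pascal are generated the same way, by walking the cumulative probabilities until the uniform draw is exceeded.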

  15. Shear wave elastography using Wigner-Ville distribution: a simulated multilayer media study.

    PubMed

    Bidari, Pooya Sobhe; Alirezaie, Javad; Tavakkoli, Jahan

    2016-08-01

    Shear Wave Elastography (SWE) is a quantitative ultrasound-based imaging modality for distinguishing normal and abnormal tissue types by estimating the local viscoelastic properties of the tissue. These properties have been estimated in many studies by propagating an ultrasound shear wave within the tissue and estimating parameters such as the speed of the wave. The vast majority of the proposed techniques are based on the cross-correlation of consecutive ultrasound images. In this study, we propose a new method of wave detection based on time-frequency (TF) analysis of the ultrasound signal. The proposed method is a modified version of the Wigner-Ville Distribution (WVD) technique. The TF components of the wave are detected in a propagating ultrasound wave within a simulated multilayer tissue and the local properties are estimated based on the detected waves. Image processing techniques such as Alternative Sequential Filters (ASF) and the Circular Hough Transform (CHT) have been utilized to improve the estimation of TF components. This method has been applied to simulated data from Wave3000™ software (CyberLogic Inc., New York, NY). The data simulate the propagation of an acoustic radiation force impulse within a two-layer tissue with slightly different viscoelastic properties between the layers. By analyzing the local TF components of the wave, we estimate the longitudinal and shear elasticities and viscosities of the media. This work shows that our proposed method is capable of distinguishing between different layers of a tissue.

  16. Robot acting on moving bodies (RAMBO): Preliminary results

    NASA Technical Reports Server (NTRS)

    Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madju; Harwood, David

    1989-01-01

    A robot system called RAMBO is being developed. It is equipped with a camera and, given a sequence of simple tasks, can perform these tasks on a moving object. RAMBO is given a complete geometric model of the object. A low level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to locations near the object sufficient for achieving the tasks. More specifically, low level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Then trajectories are created using parametric cubic splines between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.

  17. Kalman Filter Tracking on Parallel Architectures

    NASA Astrophysics Data System (ADS)

    Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2015-12-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques including Cellular Automata or a return to the Hough Transform. The most common track finding techniques in use today are however those based on the Kalman Filter [2]. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust and are exactly those being used today for the design of the tracking system for HL-LHC. Our previous investigations showed that, using optimized data structures, track fitting with the Kalman Filter can achieve large speedup both with Intel Xeon and Xeon Phi. We report here our further progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a realistic simulation setup.

  18. Automatic detection of the hippocampal region associated with Alzheimer's disease from microscopic images of mice brain

    NASA Astrophysics Data System (ADS)

    Albaidhani, Tahseen; Hawkes, Cheryl; Jassim, Sabah; Al-Assam, Hisham

    2016-05-01

    The hippocampus is the region of the brain that is primarily associated with memory and spatial navigation. It is one of the first brain regions to be damaged when a person suffers from Alzheimer's disease. Recent research in this field has focussed on the assessment of damage to different blood vessels within the hippocampal region from high-throughput brain microscopy images. The ultimate aim of our research is the creation of an automatic system to count and classify different blood vessels such as capillaries, veins, and arteries in the hippocampus region. This work should provide biologists with efficient and accurate tools in their investigation of the causes of Alzheimer's disease. Locating the boundary of the Region of Interest in the hippocampus from microscopic images of the mouse brain is the first essential stage towards developing such a system. This task benefits from the variation in colour channels and texture between the two sides of the hippocampus and the boundary region. Accordingly, the initial step of our research, locating the hippocampus edge, uses a colour-based segmentation of the brain image followed by Hough transforms on the colour channel that isolates the hippocampus region. The output is then used to split the brain image into two sides of the detected section of the boundary: the inside region and the outside region. Experimental results on a sufficient number of microscopic images demonstrate the effectiveness of the developed solution.

  19. Study a Relation Between the Lascar Volcano Microseimicity and Changes in the Local System of Lineaments, Obtained Using the Landsat 8 Images.

    NASA Astrophysics Data System (ADS)

    Arellano-Baeza, A. A.; Soto-Pinto, C. A.

    2014-12-01

    Over the last decades strong efforts have been made to apply new spaceborne technologies to the study of volcanic activity. Recent studies have shown that high resolution satellite images can be very useful for tracking the evolution of the stress patterns related to volcanic activity. It can be done by observing the changes in density and orientation of lineaments extracted from satellite images. A lineament is generally defined as a straight or somewhat curved feature in the landscape visible in a satellite image as an aligned sequence of pixels of a contrasting intensity compared to the background. The system of lineaments extracted from satellite images is not identical to the geological lineaments which are usually determined by land-based surveys; nevertheless, it generally reflects the structure of the faults and fractures in the Earth's crust. For this study the lineaments were detected using the ADALGEO software, based on the Hough transform (Soto-Pinto et al, 2013). A temporal sequence of Landsat 8 multispectral images of the Lascar volcano, located in the north of Chile, was used to study changes in lineament configuration during 2013-2014. It was found that the number and orientation of lineaments is affected by microseismicity. In particular, the density of lineaments often decreases with the intensity of microseisms, which could be related to volcano inflation.

  20. Design of a Solar Tracking System Using the Brightest Region in the Sky Image Sensor

    PubMed Central

    Wei, Ching-Chuan; Song, Yu-Chang; Chang, Chia-Chi; Lin, Chuan-Bi

    2016-01-01

    Solar energy is certainly an energy source worth exploring and utilizing because of the environmental protection it offers. However, the conversion efficiency of solar energy is still low. If the photovoltaic panel perpendicularly tracks the sun, the solar energy conversion efficiency will be improved. In this article, we propose an innovative method to track the sun using an image sensor. In our method, it is logical to assume that the points of the brightest region in the sky image represent the location of the sun. The center of the brightest region is then taken as the sun center and is mathematically calculated using an embedded processor (Raspberry Pi). Finally, the location information on the sun center is used by the embedded processor to control two servo motors, capable of moving both horizontally and vertically, to track the sun. In comparison with existing sun-tracking methods using image sensors, such as the Hough transform method, our method based on the brightest region in the sky image remains accurate under conditions such as a sunny day or partial shading by buildings. A practical sun-tracking system using our method was implemented and tested. The results reveal that the system successfully captured the real sun center in most weather conditions, and the servo motor system was able to direct the photovoltaic panel perpendicularly to the sun center. In addition, our system can be easily and practically integrated, and can operate in real-time. PMID:27898002
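
    The brightest-region idea reduces to thresholding near the image maximum and taking a centroid. A minimal pure-Python sketch (the 95% threshold fraction is an assumed value, not one taken from the paper):

```python
def sun_center(image, frac=0.95):
    """Centroid of the brightest region: all pixels within frac of the peak."""
    peak = max(max(row) for row in image)
    thr = frac * peak
    xs = ys = n = 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v >= thr:
                xs += x
                ys += y
                n += 1
    return xs / n, ys / n

# 9x9 synthetic sky with a bright 3x3 "sun" centred at pixel (4, 4)
img = [[0.0] * 9 for _ in range(9)]
for y in range(3, 6):
    for x in range(3, 6):
        img[y][x] = 255.0
cx, cy = sun_center(img)  # -> (4.0, 4.0)
```

In the real system the (cx, cy) offset from the image centre would drive the two servo axes.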

  1. Design of a Solar Tracking System Using the Brightest Region in the Sky Image Sensor.

    PubMed

    Wei, Ching-Chuan; Song, Yu-Chang; Chang, Chia-Chi; Lin, Chuan-Bi

    2016-11-25

    Solar energy is certainly an energy source worth exploring and utilizing because of the environmental protection it offers. However, the conversion efficiency of solar energy is still low. If the photovoltaic panel perpendicularly tracks the sun, the solar energy conversion efficiency will be improved. In this article, we propose an innovative method to track the sun using an image sensor. In our method, it is logical to assume that the points of the brightest region in the sky image represent the location of the sun. The center of the brightest region is then taken as the sun center and is mathematically calculated using an embedded processor (Raspberry Pi). Finally, the location information on the sun center is used by the embedded processor to control two servo motors, capable of moving both horizontally and vertically, to track the sun. In comparison with existing sun-tracking methods using image sensors, such as the Hough transform method, our method based on the brightest region in the sky image remains accurate under conditions such as a sunny day or partial shading by buildings. A practical sun-tracking system using our method was implemented and tested. The results reveal that the system successfully captured the real sun center in most weather conditions, and the servo motor system was able to direct the photovoltaic panel perpendicularly to the sun center. In addition, our system can be easily and practically integrated, and can operate in real-time.

  2. A fast automatic target detection method for detecting ships in infrared scenes

    NASA Astrophysics Data System (ADS)

    Özertem, Kemal Arda

    2016-05-01

    Automatic target detection in infrared scenes is a vital task for many application areas such as defense, security and border surveillance. For anti-ship missiles, a fast and robust ship detection algorithm is crucial for overall system performance. In this paper, a straightforward yet effective ship detection method for infrared scenes is introduced. First, morphological grayscale reconstruction is applied to the input image, followed by automatic thresholding of the suppressed image. For the segmentation step, connected component analysis is employed to obtain target candidate regions. At this point, the detection is still vulnerable to outliers such as small objects with relatively high intensity values, or clouds. To deal with this drawback, a post-processing stage with two methods is introduced. First, noisy detection results are rejected with respect to target size. Second, the waterline is detected using the Hough transform, and detection results located above the waterline by more than a small margin are rejected. After the post-processing stage, undesired holes may still remain, which can cause one object to be detected as multiple objects, or prevent an object from being detected as a whole. To improve the detection performance, another automatic thresholding is applied only to the target candidate regions. Finally, the two detection results are fused and the post-processing stage is repeated to obtain the final detection result. The performance of the overall methodology is tested with real-world infrared test data.
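
    The segmentation step, connected component analysis with size-based rejection of small outliers, can be sketched as follows (4-connectivity and the minimum size are illustrative choices, not values from the paper):

```python
from collections import deque

def components(binary, min_size=3):
    """4-connected components of a binary image, keeping only regions of at
    least min_size pixels (small high-intensity outliers are rejected)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                q, blob = deque([(y, x)]), []
                seen[y][x] = True
                while q:  # breadth-first flood fill of one component
                    cy, cx = q.popleft()
                    blob.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(blob) >= min_size:
                    regions.append(blob)
    return regions

# One 2x2 candidate region survives; the isolated pixel at (4, 4) is rejected
grid = [[0] * 5 for _ in range(5)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2), (4, 4)]:
    grid[y][x] = 1
regions = components(grid, min_size=3)
```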

  3. Automatic diagnosis of imbalanced ophthalmic images using a cost-sensitive deep convolutional neural network.

    PubMed

    Jiang, Jiewei; Liu, Xiyang; Zhang, Kai; Long, Erping; Wang, Liming; Li, Wangting; Liu, Lin; Wang, Shuai; Zhu, Mingmin; Cui, Jiangtao; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Wang, Jinghui; Lin, Haotian

    2017-11-21

    Ocular images play an essential role in ophthalmological diagnoses. An imbalanced dataset is an inevitable issue in automated ocular disease diagnosis; the scarcity of positive samples always tends to result in the misdiagnosis of severe patients during the classification task. Exploring an effective computer-aided diagnostic method to deal with imbalanced ophthalmological datasets is therefore crucial. In this paper, we develop an effective cost-sensitive deep residual convolutional neural network (CS-ResCNN) classifier to diagnose ophthalmic diseases using retro-illumination images. First, the regions of interest (crystalline lens) are automatically identified via twice-applied Canny detection and Hough transformation. The localized zones are then fed into the CS-ResCNN to extract high-level features for subsequent use in automatic diagnosis. Second, the impact of cost factors on the CS-ResCNN is further analyzed using a grid-search procedure to verify that our proposed system is robust and efficient. Qualitative analyses and quantitative experimental results demonstrate that our proposed method outperforms other conventional approaches and offers exceptional mean accuracy (92.24%), specificity (93.19%), sensitivity (89.66%) and AUC (97.11%) results. Moreover, the sensitivity of the CS-ResCNN is enhanced by over 13.6% compared to the native CNN method. Our study provides a practical strategy for addressing imbalanced ophthalmological datasets and has the potential to be applied to other medical images. The developed and deployed CS-ResCNN could serve as computer-aided diagnosis software for ophthalmologists in clinical application.

  4. Localization and diagnosis framework for pediatric cataracts based on slit-lamp images using deep features of a convolutional neural network

    PubMed Central

    Zhang, Kai; Long, Erping; Cui, Jiangtao; Zhu, Mingmin; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni

    2017-01-01

    Slit-lamp images play an essential role in the diagnosis of pediatric cataracts. We present a computer vision-based framework for the automatic localization and diagnosis of slit-lamp images by identifying the lens region of interest (ROI) and employing a deep learning convolutional neural network (CNN). First, three grading degrees for slit-lamp images are proposed in conjunction with three leading ophthalmologists. The lens ROI is located in an automated manner in the original image using two successive applications of Canny detection and the Hough transform; the ROIs are then cropped, resized to a fixed size and used to form pediatric cataract datasets. These datasets are fed into the CNN to extract high-level features and implement automatic classification and grading. To demonstrate the performance and effectiveness of the deep features extracted by the CNN, we investigate the features combined with a support vector machine (SVM) and a softmax classifier and compare these with traditional representative methods. The qualitative and quantitative experimental results demonstrate that our proposed method offers exceptional mean accuracy, sensitivity and specificity: classification (97.07%, 97.28%, and 96.83%) and three-degree grading of area (89.02%, 86.63%, and 90.75%), density (92.68%, 91.05%, and 93.94%) and location (89.28%, 82.70%, and 93.08%). Finally, we developed and deployed potential automatic diagnostic software for ophthalmologists and patients in clinical applications to implement the validated model. PMID:28306716

  5. A voting-based statistical cylinder detection framework applied to fallen tree mapping in terrestrial laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2017-07-01

    This paper introduces a statistical framework for detecting cylindrical shapes in dense point clouds. We target the application of mapping fallen trees in datasets obtained through terrestrial laser scanning. This is a challenging task due to the presence of ground vegetation, standing trees, DTM artifacts, as well as the fragmentation of dead trees into non-collinear segments. Our method shares the concept of voting in parameter space with the generalized Hough transform; however, two of its significant drawbacks are improved upon. First, the need to generate samples on the shape's surface is eliminated. Instead, pairs of nearby input points lying on the surface cast a vote for the cylinder's parameters based on the intrinsic geometric properties of cylindrical shapes. Second, no discretization of the parameter space is required: the voting is carried out in continuous space by means of constructing a kernel density estimator and obtaining its local maxima, using automatic, data-driven kernel bandwidth selection. Furthermore, we show how the detected cylindrical primitives can be efficiently merged to obtain object-level (entire tree) semantic information using graph-cut segmentation and a tailored dynamic algorithm for eliminating cylinder redundancy. Experiments were performed on three plots from the Bavarian Forest National Park, with ground truth obtained through visual inspection of the point clouds. It was found that relative to sample consensus (SAC) cylinder fitting, the proposed voting framework can improve the detection completeness by up to 10 percentage points while maintaining the correctness rate.
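
    The continuous-space voting idea can be illustrated in one dimension: votes are real-valued parameter estimates, and the detected parameter is a mode of their kernel density estimate rather than the fullest discrete bin. A toy sketch with a fixed bandwidth (the paper uses data-driven bandwidth selection in the full cylinder parameter space):

```python
import math

def kde_mode(votes, bandwidth, grid_step=0.01):
    """Scan the vote range and return the densest parameter value under a
    Gaussian kernel density estimate (no discretised accumulator needed)."""
    lo, hi = min(votes), max(votes)
    best_x, best_d = lo, -1.0
    x = lo
    while x <= hi:
        density = sum(math.exp(-0.5 * ((x - v) / bandwidth) ** 2) for v in votes)
        if density > best_d:
            best_x, best_d = x, density
        x += grid_step
    return best_x

# Five votes cluster near a radius of 2.0; the stray vote at 5.0 is outvoted
votes = [1.9, 2.0, 2.0, 2.1, 2.05, 5.0]
mode = kde_mode(votes, bandwidth=0.1)
```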

  6. Semi-automated intra-operative fluoroscopy guidance for osteotomy and external-fixator.

    PubMed

    Lin, Hong; Samchukov, Mikhail L; Birch, John G; Cherkashin, Alexander

    2006-01-01

    This paper outlines a semi-automated intra-operative fluoroscopy guidance and monitoring approach for osteotomy and external-fixator application in orthopedic surgery. The Intra-operative Guidance module is one component of the "LegPerfect Suite" developed to assist the surgical correction of lower-extremity angular deformity. The module uses information from the preoperative surgical planning module as a guideline, semi-automatically overlaying (registering) its bone outline with the bone edge from the real-time fluoroscopic C-Arm X-ray image in the operating room. In the registration process, the scaling factor is obtained automatically by matching a fiducial template in the fluoroscopic image with a marker in the planning module. A triangular metal plate placed on the operating table is used as the fiducial template. The area of the template image within the viewing area of the fluoroscopy machine is obtained with image-processing techniques such as edge detection and Hough transformation, which extract the template from the other objects in the fluoroscopic image. The area of the fiducial template from the fluoroscopic image is then compared with the area of the marker from the planning in order to obtain the scaling factor. Once the scaling factor is obtained, the user can shift and rotate the preoperative plan with simple mouse operations to overlay the bone outline from the plan with the bone edge from the fluoroscopic image. In this way, osteotomy levels and external-fixator positioning on the limb can be guided by the computerized preoperative plan.
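
    Deriving a linear scale factor from the two fiducial areas rests on one observation: area grows with the square of the linear dimension, so the linear scale is the square root of the area ratio. A one-line sketch (function name and units are ours, not the paper's):

```python
def fluoro_scale(template_area_px, marker_area_units):
    """Linear scale between the preoperative plan and the fluoroscopic image,
    from fiducial areas: area ratio = (linear scale)^2, so take a square root."""
    return (template_area_px / marker_area_units) ** 0.5

scale = fluoro_scale(400.0, 100.0)  # a 2x linear magnification
```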

  7. The Relationship between Transformational Leadership and Job Satisfaction: The Case of Government Secondary School Teachers in Ethiopia

    ERIC Educational Resources Information Center

    Tesfaw, Tadele Akalu

    2014-01-01

    The purpose of this study is to determine the relationship between transformational leadership of government secondary school principals and teachers' job satisfaction. A random sample of 320 teachers responded to a three-part instrument (the transformational leadership questionnaire, the teachers' job satisfaction questionnaire and a demographic…

  8. The random continued fraction transformation

    NASA Astrophysics Data System (ADS)

    Kalle, Charlene; Kempton, Tom; Verbitskiy, Evgeny

    2017-03-01

    We introduce a random dynamical system related to continued fraction expansions. It uses random combinations of the Gauss map and the Rényi (or backwards) continued fraction map. We explore the continued fraction expansions that this system produces, as well as the dynamical properties of the system.
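
    The two maps and their random composition are easy to state concretely. A minimal pure-Python sketch (the mixing probability and seed are illustrative; the paper studies the system analytically rather than numerically):

```python
import math
import random

def gauss_map(x):
    """Regular continued fraction map: x -> 1/x mod 1."""
    y = 1.0 / x
    return y - math.floor(y)

def renyi_map(x):
    """Renyi (backwards) continued fraction map: x -> 1/(1 - x) mod 1."""
    y = 1.0 / (1.0 - x)
    return y - math.floor(y)

def random_cf_orbit(x0, n, p=0.5, seed=0):
    """Iterate a random mix of the two maps: Gauss with probability p,
    Renyi otherwise."""
    rng = random.Random(seed)
    orbit = [x0]
    for _ in range(n):
        x = orbit[-1]
        orbit.append(gauss_map(x) if rng.random() < p else renyi_map(x))
    return orbit

# Orbit of sqrt(2) - 1 under a random composition of the two maps
orbit = random_cf_orbit(math.sqrt(2) - 1.0, 50)
```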

  9. Efficient transformation and expression of gfp gene in Valsa mali var. mali.

    PubMed

    Chen, Liang; Sun, Gengwu; Wu, Shujing; Liu, Huixiang; Wang, Hongkai

    2015-01-01

    Valsa mali var. mali, the causal agent of Valsa canker of apple, causes great losses of apple production in apple-producing regions. The pathogenic mechanism of the pathogen has not been studied extensively; thus a suitable gene marker for analysis of pathogenic invasion, and random insertion of T-DNA to generate mutants, are desirable. In this paper, we report the construction of a binary vector pKO1-HPH containing a positive selection gene, hygromycin phosphotransferase (hph), and a reporter gene, gfp, encoding green fluorescent protein, together with an efficient protocol for V. mali var. mali transformation mediated by Agrobacterium tumefaciens. A transformation efficiency of up to about 75 transformants per 10^5 conidia was achieved when V. mali var. mali and A. tumefaciens were co-cultivated for 48 h on A. tumefaciens inductive medium agar plates. The insertion of the hph and gfp genes into the V. mali var. mali genome was verified by polymerase chain reaction, and Southern blot analysis showed that 10 randomly selected transformants each exhibited a single, unique hybridization pattern. This is the first report of A. tumefaciens-mediated transformation of V. mali var. mali carrying a 'reporter' gfp gene that is stably and efficiently expressed in the transformed species.

  10. Random deflections of a string on an elastic foundation.

    NASA Technical Reports Server (NTRS)

    Sanders, J. L., Jr.

    1972-01-01

    The paper is concerned with the problem of a taut string on a random elastic foundation subjected to random loads. The boundary value problem is transformed into an initial value problem by the method of invariant imbedding. Fokker-Planck equations for the random initial value problem are formulated and solved in some special cases. The analysis leads to a complete characterization of the random deflection function.

  11. A fast Karhunen-Loeve transform for a class of random processes

    NASA Technical Reports Server (NTRS)

    Jain, A. K.

    1976-01-01

    It is shown that for a class of finite first-order Markov signals, the Karhunen-Loeve (KL) transform for data compression is a set of periodic sine functions if the boundary values of the signal are fixed or known. These sine functions are shown to be related to the Fourier transform, so that a fast Fourier transform algorithm can be used to implement the KL transform. An extension to two dimensions, with reference to images with a separable covariance function, is shown.
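
    The claimed basis can be checked numerically: with fixed boundary values the KL basis is the sampled-sine (discrete sine transform) family, which is exactly what makes an FFT implementation possible. A small pure-Python check of orthonormality (the normalisation constant is our choice):

```python
import math

def kl_sine_basis(N):
    """For a first-order Markov signal with fixed boundary values, the KL
    basis is the set of sampled sines (the DST basis), orthonormalised."""
    return [[math.sqrt(2.0 / (N + 1)) * math.sin(math.pi * (k + 1) * (n + 1) / (N + 1))
             for n in range(N)]
            for k in range(N)]

B = kl_sine_basis(8)
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
```

Distinct basis vectors are orthogonal and each has unit norm, so projecting a signal onto this basis is the KL transform, computable via an FFT-based sine transform.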

  12. Efficient Text Encryption and Hiding with Double-Random Phase-Encoding

    PubMed Central

    Sang, Jun; Ling, Shenggui; Alam, Mohammad S.

    2012-01-01

    In this paper, a double-random phase-encoding-based text encryption and hiding method is proposed. First, the secret text is transformed into a 2-dimensional array, and the higher bits of the elements in the transformed array are used to store the bit stream of the secret text, while the lower bits are filled with specific values. Then, the transformed array is encoded with the double-random phase-encoding technique. Finally, the encoded array is superimposed on an expanded host image to obtain the image embedded with hidden data. The performance of the proposed technique, including the hiding capacity, the recovery accuracy of the secret text, and the quality of the image embedded with hidden data, is tested via analytical modeling and a test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient. By using optical information processing techniques, the proposed method has been found to significantly improve the security of text information transmission, while ensuring hiding capacity at a prescribed level. PMID:23202003
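
    The encoding step can be sketched in one dimension with a naive DFT standing in for the optical Fourier transform: one random phase mask in the input plane, a second in the Fourier plane. The real scheme is two-dimensional and includes the text-to-array packing, both omitted here.

```python
import cmath
import random

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform (stand-in for the optical FT)."""
    N, s = len(x), (1 if inverse else -1)
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

def drpe_encrypt(signal, rng):
    """Double random phase encoding: multiply by a random phase mask, Fourier
    transform, multiply by a second mask, transform back."""
    N = len(signal)
    p1 = [cmath.exp(2j * cmath.pi * rng.random()) for _ in range(N)]
    p2 = [cmath.exp(2j * cmath.pi * rng.random()) for _ in range(N)]
    spec = dft([s * a for s, a in zip(signal, p1)])
    cipher = dft([c * b for c, b in zip(spec, p2)], inverse=True)
    return cipher, p1, p2

def drpe_decrypt(cipher, p1, p2):
    """Undo the two phase masks in reverse order (the masks are the keys)."""
    spec = dft(cipher)
    field = dft([c / b for c, b in zip(spec, p2)], inverse=True)
    return [(f / a).real for f, a in zip(field, p1)]

rng = random.Random(3)
msg = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0]  # a toy bit stream
cipher, p1, p2 = drpe_encrypt(msg, rng)
recovered = drpe_decrypt(cipher, p1, p2)
```

Without both phase masks the ciphertext is stationary white noise, which is the security argument behind DRPE.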

  13. A misleading review of response bias: comment on McGrath, Mitchell, Kim, and Hough (2010).

    PubMed

    Rohling, Martin L; Larrabee, Glenn J; Greiffenstein, Manfred F; Ben-Porath, Yossef S; Lees-Haley, Paul; Green, Paul; Greve, Kevin W

    2011-07-01

    In the May 2010 issue of Psychological Bulletin, R. E. McGrath, M. Mitchell, B. H. Kim, and L. Hough published an article entitled "Evidence for Response Bias as a Source of Error Variance in Applied Assessment" (pp. 450-470). They argued that response bias indicators used in a variety of settings typically have insufficient data to support such use in everyday clinical practice. Furthermore, they claimed that despite 100 years of research into the use of response bias indicators, "a sufficient justification for [their] use… in applied settings remains elusive" (p. 450). We disagree with McGrath et al.'s conclusions. In fact, we assert that the relevant and voluminous literature addressing response bias substantiates the validity of these indicators. In addition, we believe that response bias measures should be used in clinical and research settings on a regular basis. Finally, the empirical evidence for the use of response bias measures is strongest in clinical neuropsychology. We argue that McGrath et al.'s erroneous perspective on response bias measures results from three errors in their research methodology: (a) inclusion criteria for relevant studies that were too narrow; (b) errors in interpreting the results of the empirical research they did include; and (c) evidence of confirmatory bias in selectively citing the literature, as evidence of moderation appears to have been overlooked. Consulting experts in the field prior to publication might have brought these errors to light during the review process.

  14. Automatic detection of NEOs in CCD images using the Hough transform

    NASA Astrophysics Data System (ADS)

    Ruétalo, M.; Tancredi, G.

    Interest in and dedication to objects that approach the Earth's orbit (NEOs) has grown considerably in recent years, to the point that several systematic search campaigns have been launched to enlarge the identified population. The use of photographic plates and visual identification is progressively being replaced by CCD cameras and software packages for the automatic detection of objects in digital images. A key requirement for the successful implementation of an automated detection program of this kind is the development of algorithms capable of identifying objects with a low signal-to-noise ratio without heavy computational demands. In this work, we propose using the Hough transform (employed in some areas of computer vision) to automatically detect approximately rectilinear, low signal-to-noise trails in CCD images. We developed a first implementation of an algorithm based on this transform and tested it on a series of real images containing trails with signal peaks between ~1 σ and ~3 σ above the background noise level. The algorithm detects most of these cases without difficulty and in reasonably adequate running times.

  15. An analysis of random projection for changeable and privacy-preserving biometric verification.

    PubMed

    Wang, Yongjin; Plataniotis, Konstantinos N

    2010-10-01

    Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
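
    The employed transform is multiplication by a matrix of i.i.d. Gaussian entries; changeability comes from re-drawing the matrix. A pure-Python sketch (dimensions, scaling, and seeds are illustrative):

```python
import math
import random

def random_projection(vectors, k, seed=1):
    """Project d-dim vectors to k dims with a k x d matrix of i.i.d. N(0, 1/k)
    entries; with high probability pairwise distances are roughly preserved."""
    d = len(vectors[0])
    rng = random.Random(seed)
    R = [[rng.gauss(0.0, 1.0 / math.sqrt(k)) for _ in range(d)] for _ in range(k)]
    return [[sum(row[i] * v[i] for i in range(d)) for row in R] for v in vectors]

u = [1.0, 2.0, 3.0, 4.0]
v = [0.0, 1.0, 0.0, 1.0]
w = [a + b for a, b in zip(u, v)]          # u + v, to check linearity
pu, pv, pw = random_projection([u, v, w], k=3)
```

Re-seeding (i.e., issuing a new random matrix) yields a completely different template from the same biometric vector, which is the changeability property the paper analyzes.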

  16. Spatio-temporal modelling of wind speed variations and extremes in the Caribbean and the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Rychlik, Igor; Mao, Wengang

    2018-02-01

    The wind speed variability in the North Atlantic has been successfully modelled using a spatio-temporal transformed Gaussian field. However, this type of model does not correctly describe the extreme wind speeds attributed to tropical storms and hurricanes. In this study, the transformed Gaussian model is further developed to include the occurrence of severe storms. In this new model, random components are added to the transformed Gaussian field to model rare events with extreme wind speeds. The resulting random field is locally stationary and homogeneous. The localized dependence structure is described by time- and space-dependent parameters. The parameters have a natural physical interpretation. To exemplify its application, the model is fitted to the ECMWF ERA-Interim reanalysis data set. The model is applied to compute long-term wind speed distributions and return values, e.g., 100- or 1000-year extreme wind speeds, and to simulate random wind speed time series at a fixed location or spatio-temporal wind fields around that location.

  17. A novel attack method about double-random-phase-encoding-based image hiding method

    NASA Astrophysics Data System (ADS)

    Xu, Hongsheng; Xiao, Zhijun; Zhu, Xianchen

    2018-03-01

    Using optical image processing techniques, a novel text encryption and hiding method based on the double-random phase-encoding technique is proposed in this paper. First, the secret message is transformed into a 2-dimensional array. The higher bits of the elements in the array are used to store the bit stream of the secret text, while the lower bits are filled with specific values. Then, the transformed array is encoded with the double-random phase-encoding technique. Finally, the encoded array is superimposed on a public host image to obtain the image embedded with the hidden text. The performance of the proposed technique is tested via analytical modeling and a test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.

  18. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

    An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing, corrosion and expansion operations are implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, while the CGI not only reduces the data volume of the ciphertext but also enhances the security of the DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.

  19. Implicit transfer of spatial structure in visuomotor sequence learning.

    PubMed

    Tanaka, Kanji; Watanabe, Katsumi

    2014-11-01

    Implicit learning and transfer in sequence learning are essential in daily life. Here, we investigated the implicit transfer of visuomotor sequences following a spatial transformation. In two experiments, participants used trial and error to learn a sequence consisting of several button presses, known as the m×n task (Hikosaka et al., 1995). After this learning session, participants learned another sequence in which the button configuration was spatially transformed in one of the following ways: mirrored, rotated, or randomly rearranged. Our results showed that even when participants were unaware of the transformation rules, accuracy in the transfer session was higher in the mirrored and rotated groups than in the random group (i.e., implicit transfer occurred). Both those who noticed the transformation rules and those who did not (i.e., explicit and implicit transfer instances, respectively) performed faster on the mirrored sequences than on the rotated sequences. Taken together, the present results suggest that people can use their implicit visuomotor knowledge to spatially transform sequences, and that implicit transfer is modulated by a transformation cost, similar to that in explicit transfer. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Removal of Stationary Sinusoidal Noise from Random Vibration Signals.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian; Cap, Jerome S.

    In random vibration environments, sinusoidal line noise may appear in the vibration signal and can affect analysis of the resulting data. We studied two methods which remove stationary sine tones from random noise: a matrix inversion algorithm and a chirp-z transform algorithm. In addition, we developed new methods to determine the frequency of the tonal noise. The results show that both of the removal methods can eliminate sine tones in prefabricated random vibration data when the sine-to-random ratio is at least 0.25. For smaller ratios down to 0.02, only the matrix inversion technique can remove the tones, but the metrics to evaluate its effectiveness also degrade. We also found that using fast Fourier transforms best identified the tonal noise, and determined that band-pass-filtering the signals prior to the process improved sine removal. When applied to actual vibration test data, the methods were not as effective at removing harmonic tones, which we believe to be a result of mixed-phase sinusoidal noise.
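
    A toy of the matrix-inversion idea at a known, FFT-identified frequency bin: on the DFT grid the least-squares system for the sine and cosine amplitudes has a closed-form solution, so the fitted tone can be subtracted directly. This is a sketch under our own simplifications, not the report's algorithm:

```python
import math

def remove_tone(x, k):
    """Least-squares fit of a sinusoid at DFT bin k, then subtract it.
    On the DFT grid the normal equations decouple into two dot products."""
    N = len(x)
    s = [math.sin(2 * math.pi * k * n / N) for n in range(N)]
    c = [math.cos(2 * math.pi * k * n / N) for n in range(N)]
    a = 2.0 / N * sum(xi * si for xi, si in zip(x, s))   # sine amplitude
    b = 2.0 / N * sum(xi * ci for xi, ci in zip(x, c))   # cosine amplitude
    return [xi - a * si - b * ci for xi, si, ci in zip(x, s, c)]

# A pure stationary tone at bin 5 (arbitrary phase) is removed to round-off
N = 64
tone = [3.0 * math.sin(2 * math.pi * 5 * n / N + 0.7) for n in range(N)]
residual = remove_tone(tone, 5)
```

In practice the bin would first be located by scanning FFT magnitudes, and the fit applied to the tone-plus-random mixture, leaving the random component behind.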

  1. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular the multivariate normal distribution and the Wishart distribution, is considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
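
    The matrix-transformation idea can be illustrated in the simplest bivariate-normal case, where the (Cholesky-style) factor is written by hand: two independent standard normals are combined into a pair with a prescribed correlation. A sketch of the standard construction, not necessarily the article's exact derivation:

```python
import random

def correlated_pair(rho, rng):
    """Transform two independent N(0,1) draws into a pair with correlation
    rho, via the 2x2 lower-triangular (Cholesky) factor of the covariance."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rng.gauss(0.0, 1.0)
    return z1, rho * z1 + (1.0 - rho ** 2) ** 0.5 * z2

rng = random.Random(0)
pairs = [correlated_pair(0.6, rng) for _ in range(20000)]
```

Running the transformation forwards like this is also how one verifies the analytic mean and covariance: the sample moments of the transformed vector converge to the target values.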

  2. Privacy-preserving outlier detection through random nonlinear data distortion.

    PubMed

    Bhaduri, Kanishka; Stefanski, Mark D; Srivastava, Ashok N

    2011-02-01

    Consider a scenario in which the data owner has some private or sensitive data and wants a data miner to access them for studying important patterns without revealing the sensitive information. Privacy-preserving data mining aims to solve this problem by randomly transforming the data prior to their release to the data miners. Previous works only considered the case of linear data perturbations--additive, multiplicative, or a combination of both--for studying the usefulness of the perturbed output. In this paper, we discuss nonlinear data distortion using potentially nonlinear random data transformation and show how it can be useful for privacy-preserving anomaly detection from sensitive data sets. We develop bounds on the expected accuracy of the nonlinear distortion and also quantify privacy by using standard definitions. The highlight of this approach is to allow a user to control the amount of privacy by varying the degree of nonlinearity. We show how our general transformation can be used for anomaly detection in practice for two specific problem instances: a linear model and a popular nonlinear model using the sigmoid function. We also analyze the proposed nonlinear transformation in full generality and then show that, for specific cases, it is distance preserving. A main contribution of this paper is the discussion between the invertibility of a transformation and privacy preservation and the application of these techniques to outlier detection. The experiments conducted on real-life data sets demonstrate the effectiveness of the approach.

  3. Local spatiotemporal time-frequency peak filtering method for seismic random noise reduction

    NASA Astrophysics Data System (ADS)

    Liu, Yanping; Dang, Bo; Li, Yue; Lin, Hongbo

    2014-12-01

    To achieve a higher level of seismic random noise suppression, the Radon transform was adopted to implement spatiotemporal time-frequency peak filtering (TFPF) in our previous studies. Those studies performed TFPF in the full-aperture Radon domain, both linear and parabolic. Although the superiority of this method over conventional TFPF has been tested on synthetic seismic models and field seismic data, the method still has some limitations. Both full-aperture linear Radon and parabolic Radon are applicable and effective in relatively simple situations (e.g., curved reflection events with regular geometry) but inapplicable in complicated situations such as reflection events with irregular shapes, or interlaced events with quite different slope or curvature parameters. Therefore, a localized approach to the Radon transform must be applied; the filtering method is better served by adapting the transform to the local character of the data variations. In this article, we propose adopting a local Radon transform, referred to as piecewise full-aperture Radon, to realize spatiotemporal TFPF, called local spatiotemporal TFPF. Through experiments on synthetic seismic models and field seismic data, this study demonstrates the advantage of our method for seismic random noise reduction and reflection event recovery in relatively complicated seismic data.

  4. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
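
    The core transformation, drawing non-Gaussian load values by feeding uniform random numbers through an inverse cumulative distribution function, can be sketched as follows (a hedged illustration; the rate, shape, and scale parameters and the peak-counting rule are arbitrary choices, not values from the report):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.random(100_000)             # uniform(0,1) "raw" random numbers

# Inverse-CDF transforms to two of the target distributions named above
# (illustrative; the report also covers Poisson, binomial, and log-normal).
lam = 2.0                           # exponential rate
exp_loads = -np.log1p(-u) / lam     # F^-1(u) = -ln(1-u)/lam

k, scale = 1.5, 3.0                 # Weibull shape and scale
weib_loads = scale * (-np.log1p(-u)) ** (1.0 / k)

def count_peaks(x):
    """A simple peak statistic: count local maxima of the load history."""
    return int(np.sum((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])))

n_peaks = count_peaks(exp_loads)
```

    The resulting discrete histories have the requested marginal distributions, and their peak counts can then be compared with measured maneuver-load spectra.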

  5. Using Expected Value to Introduce the Laplace Transform

    ERIC Educational Resources Information Center

    Lutzer, Carl V.

    2015-01-01

    We propose an introduction to the Laplace transform in which Riemann sums are used to approximate the expected net change in a function, assuming that it quantifies a process that can terminate at random. We assume only a basic understanding of probability.
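
    The idea can be illustrated numerically: a Riemann sum over e^{-st} f(t) approximates the Laplace transform, and e^{-st} can be read as the survival probability of a process that terminates at exponential rate s (a sketch only; the test function and grid are assumptions):

```python
import numpy as np

def laplace(f, s, T=60.0, n=600_000):
    """Riemann-sum (midpoint rule) approximation of
    L{f}(s) = integral_0^inf e^{-s t} f(t) dt, truncated at T."""
    t = (np.arange(n) + 0.5) * (T / n)
    return float(np.sum(np.exp(-s * t) * f(t)) * (T / n))

# Known pair: L{e^{-a t}}(s) = 1 / (s + a).
a, s = 0.7, 1.3
approx = laplace(lambda t: np.exp(-a * t), s)
exact = 1.0 / (s + a)
```

    The Riemann sum converges to the closed-form transform, which is exactly the "expected net change" reading proposed above.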

  6. Digital simulation of an arbitrary stationary stochastic process by spectral representation.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2011-04-01

    In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes, and it can thus be regarded as an accurate engineering approximation suitable for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
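
    A minimal sketch of the two-stage procedure, spectral coloring of white Gaussian noise followed by a memoryless inverse-transform mapping to the target marginal, might look like this (illustrative only: the first-order low-pass spectrum and the exponential target are assumed examples, not the paper's cases):

```python
import numpy as np
from math import erf

rng = np.random.default_rng(2)
n = 1 << 16

# Step 1: white Gaussian -> colored Gaussian with a low-pass spectrum.
white = rng.standard_normal(n)
freqs = np.fft.rfftfreq(n)
H = 1.0 / np.sqrt(1.0 + (freqs / 0.05) ** 2)       # assumed amplitude response
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
z = (colored - colored.mean()) / colored.std()      # re-standardize

# Step 2: memoryless inverse transform to the target marginal
# (exponential with unit mean here), via u = Phi(z), then x = F^-1(u).
Phi = lambda v: 0.5 * (1.0 + erf(v / np.sqrt(2.0)))  # standard normal CDF
u = np.vectorize(Phi)(z)
x = -np.log1p(-u)                                    # exponential quantile
```

    The output keeps the exponential marginal exactly, while the spectral coloring survives only approximately, which is the engineering trade-off the abstract quantifies.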

  7. Pigmented skin lesion detection using random forest and wavelet-based texture

    NASA Astrophysics Data System (ADS)

    Hu, Ping; Yang, Tie-jun

    2016-10-01

    The incidence of cutaneous malignant melanoma, a disease of worldwide distribution and the deadliest form of skin cancer, has been rapidly increasing over the last few decades. Because advanced cutaneous melanoma is still incurable, early detection is an important step toward a reduction in mortality. Dermoscopy photographs are commonly used in melanoma diagnosis and can capture detailed features of a lesion. Great variability exists in the visual appearance of pigmented skin lesions. Therefore, in order to minimize the diagnostic errors that result from the difficulty and subjectivity of visual interpretation, an automatic detection approach is required. The objectives of this paper were to propose a hybrid method using random forests and the Gabor wavelet transformation to accurately separate the lesion area from the surrounding skin in dermoscopy photographs, and to analyze segmentation accuracy. A random forest classifier consisting of a set of decision trees was used for classification. Gabor wavelets are a mathematical model of the visual cortical cells of the mammalian brain, and an image can be decomposed into multiple scales and multiple orientations by using them. The Gabor function has been recognized as a very useful tool in texture analysis, due to its optimal localization properties in both the spatial and frequency domains. Texture features based on the Gabor wavelet transformation are computed from the Gabor-filtered image. Experimental results indicate the following: (1) the proposed algorithm based on random forests outperformed the state of the art in pigmented skin lesion detection, and (2) the inclusion of Gabor-wavelet-based texture features improved segmentation accuracy significantly.
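
    A Gabor texture feature of the kind described can be sketched as follows (a simplified numpy illustration; the kernel parameters and stripe test image are assumptions, and in practice the per-pixel filter responses would feed a random forest classifier such as scikit-learn's `RandomForestClassifier`):

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=8.0):
    """Real part of a Gabor filter: a Gaussian-windowed cosine wave
    oriented at angle theta with wavelength lambd (pixels)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd)

def filter2d(img, k):
    """Valid-mode 2-D correlation (no padding) -- enough for a demo."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Vertical stripes with wavelength 8 px: strong response for the matching
# orientation, weak for the orthogonal one -> a discriminative texture feature.
img = np.cos(2 * np.pi * np.arange(64) / 8.0)[None, :].repeat(64, axis=0)
e0 = float(np.mean(filter2d(img, gabor_kernel(theta=0.0)) ** 2))
e90 = float(np.mean(filter2d(img, gabor_kernel(theta=np.pi / 2)) ** 2))
```

    Mean filter energies over scales and orientations form the texture feature vector that the classifier consumes.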

  8. Computer-Based Linguistic Analysis.

    ERIC Educational Resources Information Center

    Wright, James R.

    Noam Chomsky's transformational-generative grammar model may effectively be translated into an equivalent computer model. Phrase-structure rules and transformations are tested as to their validity and ordering by the computer via the process of random lexical substitution. Errors appearing in the grammar are detected and rectified, and formal…

  9. Emergence of biological organization through thermodynamic inversion.

    PubMed

    Kompanichenko, Vladimir

    2014-01-01

    Biological organization arises under thermodynamic inversion in prebiotic systems, which provides the prevalence of the free-energy and information contributions over the entropy contribution. The inversion might occur under specific far-from-equilibrium conditions in prebiotic systems oscillating around the bifurcation point. At the inversion moment, the (physical) information characteristic of non-biological systems acquires new features: functionality, purposefulness, and control over life processes, which transform it into biological information. Random sequences of amino acids and nucleotides, spontaneously synthesized in the prebiotic microsystem, re-assemble in the primary living unit (probiont) into functional sequences that become involved in bioinformation circulation through nucleoprotein interaction, resulting in the emergence of the genetic code. According to the proposed concept, oscillating three-dimensional prebiotic microsystems transformed into probionts in the changeable hydrothermal medium of the early Earth. The inversion concept states that spontaneous (accidental, random) transformations in prebiotic systems cannot produce life; it is only non-spontaneous (perspective, purposeful) transformations, which are the result of thermodynamic inversion, that lead to the negentropic conversion of prebiotic systems into initial living units.

  10. Upper ankle joint space detection on low contrast intraoperative fluoroscopic C-arm projections

    NASA Astrophysics Data System (ADS)

    Thomas, Sarina; Schnetzke, Marc; Brehler, Michael; Swartman, Benedict; Vetter, Sven; Franke, Jochen; Grützner, Paul A.; Meinzer, Hans-Peter; Nolden, Marco

    2017-03-01

    Intraoperative mobile C-arm fluoroscopy is widely used for interventional verification in trauma surgery, high flexibility combined with low cost being the main advantages of the method. However, the lack of global device-to-patient orientation is challenging when comparing the acquired data to other intra-patient datasets. In upper ankle joint fracture reduction accompanied by an unstable syndesmosis, a comparison to the unfractured contralateral side is helpful for verification of the reduction result. To reduce dose and operation time, our approach aims at the comparison of single projections of the unfractured ankle with volumetric images of the reduced fracture. For precise assessment, a pre-alignment of both datasets is a crucial step. We propose a contour extraction pipeline to estimate the joint space location for a pre-alignment of fluoroscopic C-arm projections containing the upper ankle joint. A quadtree-based hierarchical variance comparison extracts potential feature points, and a Hough transform is applied to identify bone shaft lines together with the tibiotalar joint space. Using this information, we can determine the coarse orientation of the projections independently of the ankle pose during acquisition, in order to align those images to the volume of the fractured ankle. The proposed method was evaluated on thirteen cadaveric datasets consisting of 100 projections each, with image planes manually adjusted by three trauma surgeons. The results show that the method can be used to detect the joint space orientation. The correlation between angle deviation and anatomical projection direction gives valuable input on the acquisition direction for future clinical experiments.
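
    The line-detection step can be illustrated with a bare-bones Hough accumulator (a sketch, not the paper's implementation; the image size, bin counts, and synthetic edge points are assumptions):

```python
import numpy as np

def hough_lines(points, shape, n_theta=180, n_rho=200):
    """Vote in (rho, theta) space: each edge point (x, y) lies on every
    line satisfying rho = x*cos(theta) + y*sin(theta)."""
    h, w = shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # one rho per theta
        idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, thetas, diag

# Edge points on the line y = x (theta = 135 deg, rho = 0 in this
# parameterization), plus a few random distractor points.
rng = np.random.default_rng(3)
pts = [(i, i) for i in range(50)] + [tuple(rng.integers(0, 64, 2)) for _ in range(20)]
acc, thetas, diag = hough_lines(pts, (64, 64))
rho_i, theta_i = np.unravel_index(np.argmax(acc), acc.shape)
best_theta = float(np.degrees(thetas[theta_i]))
```

    The accumulator peak recovers the dominant line despite the distractors, which is what makes the transform robust for bone shaft detection.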

  11. Segmentation of ribs in digital chest radiographs

    NASA Astrophysics Data System (ADS)

    Cong, Lin; Guo, Wei; Li, Qiang

    2016-03-01

    Ribs and clavicles in posterior-anterior (PA) digital chest radiographs often overlap with lung abnormalities such as nodules and can cause these abnormalities to be missed; it is therefore desirable to remove or reduce the ribs in chest radiographs. The purpose of this study was to develop a fully automated algorithm to segment ribs within the lung area in digital radiography (DR) for removal of the ribs. The rib segmentation algorithm consists of three steps. First, a radiograph was pre-processed for contrast adjustment and noise removal. Second, a generalized Hough transform was employed to localize the lower boundaries of the ribs. Third, a novel bilateral dynamic programming algorithm was used to accurately segment the upper and lower boundaries of the ribs simultaneously. The width of the ribs and the smoothness of the rib boundaries were incorporated in the cost function of the bilateral dynamic programming to obtain consistent results for the upper and lower boundaries. Our database consisted of 93 DR images, including, respectively, 23 and 70 images acquired with DR systems from Shanghai United-Imaging Healthcare Co. and from GE Healthcare Co. The rib localization algorithm achieved a sensitivity of 98.2% with 0.1 false positives per image. The accuracy of the detected ribs was further evaluated subjectively on a 3-level scale: "1", good; "2", acceptable; "3", poor. The percentages of good, acceptable, and poor segmentation results were 91.1%, 7.2%, and 1.7%, respectively. Our algorithm can obtain good segmentation results for ribs in chest radiography and would be useful for rib reduction in our future study.
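
    The dynamic programming boundary search can be sketched in its simplest single-boundary form (illustrative only; the actual method is bilateral and uses rib-specific cost terms, which are omitted here):

```python
import numpy as np

def dp_boundary(cost):
    """Minimum-cost left-to-right path through a cost image, moving at
    most one row up or down per column -- the 1-D analogue of the
    boundary search described above."""
    h, w = cost.shape
    acc = cost.copy()
    back = np.zeros((h, w), dtype=int)
    for j in range(1, w):
        for i in range(h):
            lo, hi = max(0, i - 1), min(h, i + 2)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] += acc[k, j - 1]
            back[i, j] = k
    path = np.empty(w, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for j in range(w - 1, 0, -1):
        path[j - 1] = back[path[j], j]
    return path

# Synthetic edge map: a low-cost wavy row (the "boundary") in noise.
rng = np.random.default_rng(4)
cost = rng.uniform(1.0, 2.0, size=(30, 40))
true_row = (15 + 4 * np.sin(np.arange(40) / 6.0)).astype(int)
cost[true_row, np.arange(40)] = 0.0
path = dp_boundary(cost)
```

    Smoothness enters through the +/-1-row transition constraint; the bilateral variant couples two such paths with a rib-width term in the cost.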

  12. A new morphology algorithm for shoreline extraction from DEM data

    NASA Astrophysics Data System (ADS)

    Yousef, Amr H.; Iftekharuddin, Khan; Karim, Mohammad

    2013-03-01

    Digital elevation models (DEMs) are a digital representation of elevations at regularly spaced points. They provide an accurate tool for extracting shoreline profiles. One of the emerging sources for creating them is light detection and ranging (LiDAR), which can capture highly dense point clouds, with resolutions reaching 15 cm and 100 cm in the vertical and horizontal directions respectively, in short periods of time. In this paper we present a multi-step morphological algorithm to extract shoreline locations from DEM data and a predefined tidal datum. Unlike similar approaches, it utilizes Lowess nonparametric regression to estimate the missing values within the DEM file. It also detects and eliminates the outliers and errors that result from waves, ships, etc., by means of an anomaly test with neighborhood constraints. Because there might be significant broken regions such as branches and islands, it utilizes constrained morphological opening and closing to reduce the artifacts that can affect the extracted shorelines. In addition, it eliminates docks, bridges, and fishing piers along the extracted shorelines by means of the Hough transform. Based on a specific tidal datum, the algorithm segments the DEM data into water and land objects. Without sacrificing the accuracy and the spatial details of the extracted boundaries, the algorithm smooths and extracts the shoreline profiles by tracing the boundary pixels between the land and water segments. For given tidal values, we qualitatively assess the visual quality of the extracted shorelines by superimposing them on the available aerial photographs.
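
    Morphological opening and closing, used above to suppress small broken regions, reduce to compositions of erosion and dilation; a minimal binary sketch (the structuring-element size and the synthetic speckle image are assumptions):

```python
import numpy as np

def erode(img, k=3):
    """Binary erosion with a k x k square structuring element."""
    h, w = img.shape
    p = k // 2
    padded = np.pad(img, p, constant_values=1)   # border treated as foreground
    out = np.ones_like(img)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            out &= padded[p + dy:p + dy + h, p + dx:p + dx + w]
    return out

def dilate(img, k=3):
    """Binary dilation = complement of the erosion of the complement."""
    return 1 - erode(1 - img, k)

opened = lambda im: dilate(erode(im))    # removes small bright specks
closed = lambda im: erode(dilate(im))    # fills small dark holes

# A solid 10x10 "land" block plus an isolated single-pixel speckle (a "ship"):
img = np.zeros((24, 24), dtype=int)
img[6:16, 6:16] = 1
img[2, 20] = 1                            # speckle to be removed
clean = opened(img)
```

    Opening removes the speckle while leaving the large land segment intact, which is why it is suited to cleaning water/land masks before boundary tracing.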

  13. Development of a software for monitoring of seismic activity through the analysis of satellite images

    NASA Astrophysics Data System (ADS)

    Soto-Pinto, C.; Poblete, A.; Arellano-Baeza, A. A.; Sanchez, G.

    2010-12-01

    Software for the extraction and analysis of lineaments has been developed and applied to the tracking of the accumulation/relaxation of stress in the Earth's crust due to seismic and volcanic activity. A lineament is a straight or somewhat curved feature in a satellite image, which reflects, at least partially, the presence of faults in the crust. The lineament extraction technique is based on the application of directional filters and the Hough transform. The software has been tested on several earthquakes that occurred on the Pacific coast of South America with magnitudes > 4 Mw, by analyzing temporal sequences of ASTER/TERRA multispectral satellite images for the regions around each epicenter. All events were located in regions with small seasonal variations and limited vegetation, to facilitate the tracking of features associated with seismic activity only. It was found that the number and orientation of lineaments change significantly about one month before an earthquake, and a few months later the system returns to its initial state. This effect increases with earthquake magnitude. It was also shown that the behavior of lineaments associated with volcanic seismic activity is opposite to that obtained previously for earthquakes. This discrepancy can be explained by assuming that in the latter case the main cause of earthquakes is compression and accumulation of stress in the Earth's crust due to the subduction of tectonic plates, whereas in the former case we deal with the inflation of a volcanic edifice due to elevation of pressure and magma intrusion.

  14. Research on auto-calibration technology of the image plane's center of 360-degree and all round looking camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojun; Xu, Xiping

    2015-10-01

    The 360-degree all-round looking camera, whose characteristics make it suitable for automatic analysis of and judgment on the ambient environment of its carrier by image recognition algorithms, is usually applied in the opto-electronic radar of robots and smart cars. In order to ensure the stability and consistency of image processing results in mass production, it is necessary to make sure that the image plane centers of different cameras are coincident, which requires calibrating the position of the image plane's center. The traditional mechanical calibration method, and the electronic adjustment mode of entering the offsets manually, both suffer from reliance on human eyes, inefficiency, and a large error distribution. In this paper, an approach for auto-calibration of the image plane of this camera is presented. The image produced by the 360-degree all-round looking camera is ring-shaped, bounded by two concentric circles: a smaller circle at the center of the image and a bigger circle outside. The realization of the technique exploits exactly these characteristics. By recognizing the two circles with the Hough transform algorithm and calculating the center position, we can obtain the accurate center of the image, that is, the deviation between the central locations of the optical axis and the image sensor. The program then configures the image sensor chip over the I2C bus automatically, so the center of the image plane can be adjusted automatically and accurately. The technique has been applied in practice; it improves productivity and guarantees consistent product quality.
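
    Circle detection with the Hough transform works by letting every edge point vote for all candidate centers; a minimal center-finding sketch for a known radius (an illustration only -- a production version would also search over the radius):

```python
import numpy as np

def hough_circle_center(edge_pts, shape, radius):
    """Each edge point votes for every center at distance `radius`;
    the accumulator peak is the circle center (radius assumed known)."""
    acc = np.zeros(shape, dtype=int)
    vote_angles = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
    for x, y in edge_pts:
        cx = np.round(x - radius * np.cos(vote_angles)).astype(int)
        cy = np.round(y - radius * np.sin(vote_angles)).astype(int)
        ok = (cx >= 0) & (cx < shape[1]) & (cy >= 0) & (cy < shape[0])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic ring edge: points on a circle of radius 20 centered at (50, 40).
angles = np.linspace(0.0, 2 * np.pi, 120, endpoint=False)
pts = [(50 + 20 * np.cos(a), 40 + 20 * np.sin(a)) for a in angles]
acc = hough_circle_center(pts, (80, 100), radius=20)
cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
```

    Applying this to both concentric circles of the ring image and averaging the two recovered centers gives the offset used to reprogram the sensor window.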

  15. Automatic 3D segmentation of spinal cord MRI using propagated deformable models

    NASA Astrophysics Data System (ADS)

    De Leener, B.; Cohen-Adad, J.; Kadoury, S.

    2014-03-01

    Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnosis and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by using the elliptical Hough transform on multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied to the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient and the mean and maximum distance errors. Accuracy and robustness were assessed on 8 healthy subjects. Each subject had two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 +/- 0.05 mm (mean absolute distance error) in the cervical region and 0.27 +/- 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was 0.93 for both regions.

  16. Robust identification and localization of intramedullary nail holes for distal locking using CBCT: a simulation study.

    PubMed

    Kamarianakis, Z; Buliev, I; Pallikarakis, N

    2011-05-01

    Closed intramedullary nailing is a common technique for the treatment of femur and tibia fractures. The most challenging step in this procedure is the precise placement of the lateral screws that stabilize the fragmented bone. The present work concerns the development and evaluation of a method to accurately identify the axes of the nail hole canals in 3D space. A limited number of projection images are acquired around the leg with the help of a C-arm. On two of them, the locking hole entries are interactively selected and a rough localization of the hole axes is performed. Perpendicular to one of them, cone-beam computed tomography (CBCT) reconstructions are produced. The hole axes are then accurately identified and localized by detecting the centers of the nail holes on the tomograms and fitting a 3D line through them by linear regression via principal component analysis (PCA). Various feature-based approaches (RANSAC, least-squares fitting, Hough transform) were compared for best matching the contours and centers of the holes on the tomograms. The robustness of the suggested method was investigated using simulations. Programming was done in Matlab and C++. Results obtained on synthetic data confirm very good localization accuracy - a mean translational error of 0.14 mm (std=0.08 mm) and a mean angular error of 0.84° (std=0.35°) at no radiation excess. Successful localization can further be used to guide a surgeon or a robot in correctly drilling the bone along the nail openings. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
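
    The PCA line-fitting step, fitting a 3D axis through detected hole centers, can be sketched with an SVD (illustrative; the axis direction, noise level, and sample count are made-up test values):

```python
import numpy as np

def fit_line_3d(points):
    """Least-squares 3-D line through a point cloud via PCA:
    the centroid plus the first principal direction."""
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - centroid)
    return centroid, vt[0]            # point on the line, unit direction

# Noisy samples along a known "hole axis".
rng = np.random.default_rng(5)
true_dir = np.array([1.0, 2.0, 2.0]) / 3.0        # unit vector
t = np.linspace(-5, 5, 40)
pts = np.array([10.0, 0.0, 3.0]) + t[:, None] * true_dir \
      + rng.normal(0, 0.02, (40, 3))
c, d = fit_line_3d(pts)
cos_angle = abs(float(d @ true_dir))              # 1.0 = perfect alignment
```

    The first right-singular vector is the direction of maximum variance, which for roughly collinear centers is exactly the canal axis.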

  17. Analysis and Application of Lineaments Extraction Using GF-1 Satellite Images in Loess Covered

    NASA Astrophysics Data System (ADS)

    Han, L.; Liu, Z.; Zhao, Z.; Ning, Y.

    2018-04-01

    Faults, folds, and other tectonic regions belong to geologically weak areas and form linear geomorphology as a result of erosion, which appears as lineaments on the Earth's surface. Lineaments control the distribution of regional formations, groundwater, geothermal resources, etc., so they are an important indicator for evaluating the strength and stability of a geological structure. Current algorithms rely mostly on artificial visual interpretation and semi-automatic computer extraction, which are not only time-consuming but labour-intensive. It is difficult to guarantee accuracy because of the dependence on expert knowledge and experience, and on the computer hardware and software. Therefore, an integrated algorithm is proposed based on GF-1 satellite image data, taking the loess area in the northern part of the Jinlinghe basin as an example. First, the best band combination, 4-3-2, is chosen using the optimum index factor (OIF). Second, line edges are highlighted by Gaussian high-pass filtering and tensor voting. Finally, the Hough transform is used to detect the geologic lineaments. Thematic maps of the geological structure in this area are produced from the extracted lineaments. The experimental results show that, influenced by the northern margin of the Qinling Mountains and the declined Weihe Basin, the lineaments are mostly distributed along terrain lines, mainly in the NW, NE, NNE, and ENE directions. The agreement with the existing regional geological survey provides a reliable basis for analyzing the tectonic stress trend. The algorithm is practical, has high robustness, and is little disturbed by human factors.
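
    The OIF band-selection step can be sketched directly from its definition -- the sum of band standard deviations divided by the sum of absolute pairwise correlations (a toy illustration with synthetic bands; real use would rank GF-1 band triplets):

```python
import numpy as np
from itertools import combinations

def oif(bands, combo):
    """Optimum Index Factor for a 3-band combination: sum of band
    standard deviations over sum of absolute pairwise correlation
    coefficients (higher = more independent information)."""
    i, j, k = combo
    stds = bands[i].std() + bands[j].std() + bands[k].std()
    r = lambda a, b: abs(np.corrcoef(bands[a].ravel(), bands[b].ravel())[0, 1])
    return stds / (r(i, j) + r(i, k) + r(j, k))

# Four synthetic bands: bands 0 and 1 nearly duplicate each other, while
# bands 2 and 3 are independent -- a combo avoiding the duplication wins.
rng = np.random.default_rng(6)
b0 = rng.normal(0, 10, (32, 32))
bands = [b0,
         b0 + rng.normal(0, 0.5, (32, 32)),
         rng.normal(0, 10, (32, 32)),
         rng.normal(0, 10, (32, 32))]

best = max(combinations(range(4), 3), key=lambda c: oif(bands, c))
```

    Highly correlated band pairs inflate the denominator, so OIF steers the composite toward spectrally complementary bands.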

  18. Fully automated calculation of cardiothoracic ratio in digital chest radiographs

    NASA Astrophysics Data System (ADS)

    Cong, Lin; Jiang, Luan; Chen, Gang; Li, Qiang

    2017-03-01

    The calculation of the cardiothoracic ratio (CTR) in digital chest radiographs would be useful for cardiac anomaly assessment and for indicating diseases related to heart enlargement. The purpose of this study was to develop and evaluate a fully automated scheme for the calculation of the CTR in digital chest radiographs. Our automated method consists of three steps, i.e., lung region localization, lung segmentation, and CTR calculation. We manually annotated the lung boundary with 84 points in 100 digital chest radiographs and calculated an average lung model for the subsequent work. First, in order to localize the lung region, a generalized Hough transform was employed to identify the upper, lower, and outer boundaries of the lungs by use of Sobel gradient information. The average lung model was aligned to the localized lung region to obtain the initial lung outline. Second, we separately applied a dynamic programming method to detect the upper, lower, outer, and inner boundaries of the lungs, and then linked the four boundaries to segment the lungs. Based on the identified outer boundaries of the left and right lungs, we corrected the center and the tilt of the original radiograph. Finally, the CTR was calculated as the ratio of the transverse diameter of the heart to the internal diameter of the chest, based on the segmented lungs. Preliminary results on 106 digital chest radiographs showed that the proposed method could obtain accurate segmentation of the lungs based on subjective observation, and achieved a sensitivity of 88.9% (40 of 45 abnormalities) and a specificity of 100% (61 of 61 normal cases) for the identification of heart enlargement.

  19. Separation of overlapping dental arch objects using digital records of illuminated plaster casts.

    PubMed

    Yadollahi, Mohammadreza; Procházka, Aleš; Kašparová, Magdaléna; Vyšata, Oldřich; Mařík, Vladimír

    2015-07-11

    Plaster casts of individual patients are important for orthodontic specialists during the treatment process, and their analysis is still a standard diagnostic tool. However, the growing capabilities of information technology enable their replacement by digital models obtained by complex scanning systems. This paper presents the possibility of using a digital camera as a simple instrument for obtaining sets of digital images for analysis and evaluation of the treatment using appropriate mathematical tools of image processing. The methods studied in this paper include the segmentation of overlapping dental bodies and the use of different illumination sources to increase the reliability of the separation process. The circular Hough transform, region growing with multiple seed points, and the convex hull detection method are applied to the segmentation of orthodontic plaster cast images to identify dental arch objects and their sizes. The proposed algorithm presents a methodology for improving the accuracy of segmentation of dental arch components using combined illumination sources. Dental arch parameters and distances between the canines and premolars for different segmentation methods were used as a measure to compare the results obtained. The new method of segmentation of overlapping dental arch components using digital records of illuminated plaster casts provides information with the precision required for orthodontic treatment. The distance between corresponding teeth was evaluated with a mean error of 1.38%, and the Dice similarity coefficient of the evaluated dental body boundaries reached 0.9436 with a false positive rate [Formula: see text] and false negative rate [Formula: see text].

  20. Fully automatic detection of salient features in 3-d transesophageal images.

    PubMed

    Curiale, Ariel H; Haak, Alexander; Vegas-Sánchez-Ferrero, Gonzalo; Ren, Ben; Aja-Fernández, Santiago; Bosch, Johan G

    2014-12-01

    Most automated segmentation approaches to the mitral valve and left ventricle in 3-D echocardiography require a manual initialization. In this article, we propose a fully automatic scheme to initialize a multicavity segmentation approach in 3-D transesophageal echocardiography by detecting the left ventricle long axis, the mitral valve and the aortic valve location. Our approach uses a probabilistic and structural tissue classification to find structures such as the mitral and aortic valves; the Hough transform for circles to find the center of the left ventricle; and multidimensional dynamic programming to find the best position for the left ventricle long axis. For accuracy and agreement assessment, the proposed method was evaluated in 19 patients with respect to manual landmarks and as initialization of a multicavity segmentation approach for the left ventricle, the right ventricle, the left atrium, the right atrium and the aorta. The segmentation results revealed no statistically significant differences between manual and automated initialization in a paired t-test (p > 0.05). Additionally, small biases between manual and automated initialization were detected in the Bland-Altman analysis (bias, variance) for the left ventricle (-0.04, 0.10); right ventricle (-0.07, 0.18); left atrium (-0.01, 0.03); right atrium (-0.04, 0.13); and aorta (-0.05, 0.14). These results indicate that the proposed approach provides robust and accurate detection to initialize a multicavity segmentation approach without any user interaction. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  1. A General Relativistic Null Hypothesis Test with Event Horizon Telescope Observations of the Black Hole Shadow in Sgr A*

    NASA Astrophysics Data System (ADS)

    Psaltis, Dimitrios; Özel, Feryal; Chan, Chi-Kwan; Marrone, Daniel P.

    2015-12-01

    The half opening angle of a Kerr black hole shadow is always equal to (5 ± 0.2) GM/Dc², where M is the mass of the black hole and D is its distance from the Earth. Therefore, measuring the size of a shadow and verifying whether it is within this 4% range constitutes a null hypothesis test of general relativity. We show that the black hole in the center of the Milky Way, Sgr A*, is the optimal target for performing this test with upcoming observations using the Event Horizon Telescope (EHT). We use the results of optical/IR monitoring of stellar orbits to show that the mass-to-distance ratio for Sgr A* is already known to an accuracy of ∼4%. We investigate our prior knowledge of the properties of the scattering screen between Sgr A* and the Earth, the effects of which will need to be corrected for in order for the black hole shadow to appear sharp against the background emission. Finally, we explore an edge detection scheme for interferometric data and a pattern matching algorithm based on the Hough/Radon transform and demonstrate that the shadow of the black hole at 1.3 mm can be localized, in principle, to within ∼9%. All these results suggest that our prior knowledge of the properties of the black hole, of scattering broadening, and of the accretion flow can only limit this general relativistic null hypothesis test with EHT observations of Sgr A* to ≲10%.

  2. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta‐analysis and group level studies

    PubMed Central

    Bakbergenuly, Ilyas; Morgenthaler, Stephan

    2016-01-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group‐level studies or in meta‐analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log‐odds and arcsine transformations of the estimated probability p̂, both for single‐group studies and in combining results from several groups or studies in meta‐analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta‐analysis and result in abysmal coverage of the combined effect for large K. We also propose bias‐correction for the arcsine transformation. Our simulations demonstrate that this bias‐correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta‐analyses of prevalence. PMID:27192062

  3. PET-CT image fusion using random forest and à-trous wavelet transform.

    PubMed

    Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo

    2018-03-01

    New image fusion rules for multimodal medical images are proposed in this work. The image fusion rules are defined by a random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method has three steps. First, the source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments were performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform was also implemented on these slices. A new image fusion performance measure, along with 4 existing measures, is presented, which helps to compare the performance of the two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative quality and that the new measure is meaningful. Copyright © 2017 John Wiley & Sons, Ltd.
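
    The à-trous ("with holes") decomposition itself is compact: repeatedly smooth with an increasingly dilated B3-spline kernel and keep the differences as detail planes; the planes sum back to the original exactly. A 1-D sketch (the 2-D version filters rows and columns separably; clamped edge handling is an assumed choice):

```python
import numpy as np

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # B3-spline smoothing kernel

def atrous_1d(signal, levels=3):
    """Translation-invariant à-trous wavelet decomposition. At level j
    the kernel is dilated by inserting 2^j - 1 zeros ("holes") between
    its taps, so no decimation is needed."""
    approx = signal.astype(float)
    details = []
    for j in range(levels):
        step = 2 ** j
        n = len(approx)
        smooth = np.zeros_like(approx)
        for tap, w in zip(range(-2, 3), B3):
            idx = np.clip(np.arange(n) + tap * step, 0, n - 1)  # clamp edges
            smooth += w * approx[idx]
        details.append(approx - smooth)   # detail = what smoothing removed
        approx = smooth
    return details, approx

rng = np.random.default_rng(7)
sig = np.sin(np.arange(256) / 10.0) + 0.1 * rng.standard_normal(256)
details, approx = atrous_1d(sig)
recon = approx + sum(details)             # exact reconstruction by design
```

    Because the detail planes telescope, fusion can pick coefficients plane by plane (here by a random forest) and still reconstruct exactly with the inverse sum.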

  4. Box–Cox Transformation and Random Regression Models for Fecal egg Count Data

    PubMed Central

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P.; Sonstegard, Tad S.; Cobuci, Jaime Araujo; Gasbarre, Louis C.

    2012-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box–Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box–Cox transformation family was effective in reducing the skewness and kurtosis, and dramatically increased estimates of heritability, and measurements of FEC obtained in the period between 12 and 26 weeks in a 26-week experimental challenge period are genetically correlated. PMID:22303406
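The behaviour of the Box-Cox family on right-skewed count-like data can be illustrated with a numpy-only sketch. The simulated data and the grid search over λ are illustrative only, not the FEC data or the paper's extended transformation:

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox power transformation (lam = 0 reduces to the log)."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def boxcox_loglik(x, lam):
    """Profile log-likelihood of lam under the normality assumption."""
    y = boxcox(x, lam)
    n = len(x)
    return -0.5 * n * np.log(y.var()) + (lam - 1.0) * np.log(x).sum()

def skew(v):
    return np.mean(((v - v.mean()) / v.std()) ** 3)

rng = np.random.default_rng(42)
fec = np.exp(rng.normal(2.0, 1.0, size=2000))   # heavily right-skewed "counts"

grid = np.linspace(-2, 2, 401)
lam_hat = grid[np.argmax([boxcox_loglik(fec, l) for l in grid])]
```

For log-normally distributed data the maximum-likelihood λ sits near 0 (the log transform), and the transformed data are far less skewed, mirroring the reduction in skewness and kurtosis reported above.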

  5. Box-Cox Transformation and Random Regression Models for Fecal egg Count Data.

    PubMed

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P; Sonstegard, Tad S; Cobuci, Jaime Araujo; Gasbarre, Louis C

    2011-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing the skewness and kurtosis, and dramatically increased estimates of heritability, and measurements of FEC obtained in the period between 12 and 26 weeks in a 26-week experimental challenge period are genetically correlated.

  6. Single and combined effects of peppermint and thyme essential oils on productive performance, egg quality traits, and blood parameters of laying hens reared under cold stress condition (6.8 ± 3 °C)

    NASA Astrophysics Data System (ADS)

    Akbari, Mohsen; Torki, Mehran; Kaviani, Keyomars

    2016-03-01

    This study was conducted to evaluate the effects of adding peppermint essential oil (PEO), thyme essential oil (TEO), or their combination to the diet on productive performance, egg quality traits, and blood parameters of laying hens reared under cold stress conditions (6.8 ± 3 °C). Feed intake (FI), feed conversion ratio (FCR), egg weight (EW), egg production (EP), and egg mass (EM) were evaluated during the 56-day trial period using 120 Lohmann LSL-lite laying hens. Significant interactions between PEO and TEO on FCR, EP, and EM were observed (P < 0.05). The EP and EM increased, whereas FCR decreased (P < 0.05), in hens fed diets supplemented with the combination of PEO and TEO compared to those fed the basal diet. Increased EW and FI were also observed in hens fed the PEO-supplemented diet compared to birds fed the basal diet. There were significant interactions between PEO and TEO on serum cholesterol, shell thickness, and the Haugh unit of eggs (P < 0.05): serum cholesterol decreased, while eggshell thickness and Haugh unit increased, in hens fed the diet supplemented with the combination of PEO and TEO compared to those fed the basal diet. From the results of the present experiment, it can be concluded that supplementing the diet with the combination of PEO and TEO could have beneficial effects on performance parameters of hens reared under cold stress conditions.

  7. Video encryption using chaotic masks in joint transform correlator

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2015-03-01

    A real-time optical video encryption technique using a chaotic map has been reported. In the proposed technique, each frame of video is encrypted using two different chaotic random phase masks in the joint transform correlator architecture. The different chaotic random phase masks can be obtained either by using different iteration levels or by using different seed values of the chaotic map. The use of different chaotic random phase masks makes the decryption process very complex for an unauthorized person. Optical, as well as digital, methods can be used for video encryption but the decryption is possible only digitally. To further enhance the security of the system, the key parameters of the chaotic map are encoded using RSA (Rivest-Shamir-Adleman) public key encryption. Numerical simulations are carried out to validate the proposed technique.
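The role of the logistic map in generating distinct random phase masks can be illustrated with a minimal generator. The parameter values (r = 3.99, the discard count) are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def chaotic_phase_mask(shape, seed=0.3, r=3.99, discard=1000):
    """Random phase mask driven by the logistic map x_{n+1} = r*x*(1-x).
    Different seed values or iteration counts yield different masks,
    which is what makes unauthorized decryption difficult."""
    n = shape[0] * shape[1]
    x = seed
    for _ in range(discard):          # discard transient iterations
        x = r * x * (1.0 - x)
    vals = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        vals[i] = x
    phases = 2.0 * np.pi * vals.reshape(shape)
    return np.exp(1j * phases)        # unit-modulus phase-only mask

mask = chaotic_phase_mask((64, 64))
```

A second mask with a slightly different seed (e.g. 0.3001) diverges completely after the transient, so the key space is effectively continuous in the seed and map parameter.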

  8. Random T-DNA mutagenesis identifies a Cu-Zn-superoxide dismutase gene as a virulence factor of Sclerotinia sclerotiorum

    USDA-ARS?s Scientific Manuscript database

    Agrobacterium-mediated transformation (AMT) was used to identify potential virulence factors in Sclerotinia sclerotiorum. Screening AMT transformants identified two mutants showing significantly reduced virulence. The mutants showed similar growth rate, colony morphology, and sclerotial and oxalate ...

  9. Proceedings of the Conference on the Design of Experiments in Army Research Development and Testing (26th) Held at New Mexico State University, Las Cruces, New Mexico on 22-24 October 1980.

    DTIC Science & Technology

    1981-06-01

    normality and several types of nonnormality. Overall the rank transformation procedure seems to be the best. The Fisher's LSD multiple comparisons procedure...the rank transformation procedure appears to maintain power better than Fisher's LSD or the randomization procedures. The conclusion of this study...best. The Fisher's LSD multiple comparisons procedure in the one way and two way layouts is compared with a randomization procedure and with the same

  10. Control and design heat flux bending in thermal devices with transformation optics.

    PubMed

    Xu, Guoqiang; Zhang, Haochun; Jin, Yan; Li, Sen; Li, Yao

    2017-04-17

    We propose a fundamental latent capability of controlling heat transfer and heat-flux density vectors at arbitrary positions on thermal materials by applying transformation optics. The expressions for heat flux bending are obtained, and the factors influencing them are investigated in both 2D and 3D cloaking schemes. Under certain conditions, more than one degree of freedom of heat flux bending exists corresponding to the temperature gradients of the 3D domain. The heat flux path can be controlled at arbitrary positions in space based on the geometrical azimuths, radial positions, and thermal conductivity ratios of the selected materials.
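The transformation-optics recipe such designs rely on is the standard coordinate-transform rule for the conductivity tensor, quoted here from the general theory of transformation thermotics rather than from this abstract: for a coordinate mapping with Jacobian A,

```latex
\kappa'(\mathbf{x}') = \frac{A\,\kappa\,A^{\mathsf{T}}}{\det A},
\qquad A_{ij} = \frac{\partial x'_i}{\partial x_j},
```

so a material realizing the transformed tensor κ′ steers heat flux along the transformed geometry, which is what permits bending the flux path.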

  11. Chaotic oscillations and noise transformations in a simple dissipative system with delayed feedback

    NASA Astrophysics Data System (ADS)

    Zverev, V. V.; Rubinstein, B. Ya.

    1991-04-01

    We analyze the statistical behavior of signals in nonlinear circuits with delayed feedback in the presence of external Markovian noise. For the special class of circuits with intense phase mixing we develop an approach for the computation of the probability distributions and multitime correlation functions based on the random phase approximation. Both Gaussian and Kubo-Andersen models of external noise statistics are analyzed and the existence of the stationary (asymptotic) random process in the long-time limit is shown. We demonstrate that a nonlinear system with chaotic behavior becomes a noise amplifier with specific statistical transformation properties.

  12. The Effect of Rician Fading and Partial-Band Interference on Noise- Normalized Fast Frequency-Hopped MFSK Receivers

    DTIC Science & Technology

    1994-03-01

    FSK ... hop k of a symbol when partial-band interference is present is obtained from (11) and the linear transformation of random variables given by (3) as ... from (13) and the transformation of random variables indicated by (9) as [16] ...

  13. SMERFS: Stochastic Markov Evaluation of Random Fields on the Sphere

    NASA Astrophysics Data System (ADS)

    Creasey, Peter; Lang, Annika

    2018-04-01

    SMERFS (Stochastic Markov Evaluation of Random Fields on the Sphere) creates large realizations of random fields on the sphere. It uses a fast algorithm based on Markov properties and fast Fourier transforms in 1D that generates samples on an n × n grid in O(n² log n) time and efficiently derives the necessary conditional covariance matrices.

  14. TRIAC II. A MatLab code for track measurements from SSNT detectors

    NASA Astrophysics Data System (ADS)

    Patiris, D. L.; Blekas, K.; Ioannides, K. G.

    2007-08-01

    A computer program named TRIAC II, written in MATLAB and running with a friendly GUI, has been developed for recognition and parameter measurements of particles' tracks from images of Solid State Nuclear Track Detectors. The program, using image analysis tools, counts the number of tracks and, depending on the current working mode, classifies them according to their radii (Mode I—circular tracks) or their axes (Mode II—elliptical tracks), their mean intensity value (brightness) and their orientation. Images of the detectors' surfaces are input to the code, which generates text files as output, including the number of counted tracks with the associated track parameters. Hough transform techniques are used for the estimation of the number of tracks and their parameters, providing results even in cases of overlapping tracks. Finally, it is possible for the user to obtain informative histograms as well as output files for each image and/or group of images. Program summary: Title of program: TRIAC II. Catalogue identifier: ADZC_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZC_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: Pentium III, 600 MHz. Installations: MATLAB 7.0. Operating system under which the program has been tested: Windows XP. Programming language used: MATLAB. Memory required to execute with typical data: 256 MB. No. of bits in a word: 32. No. of processors used: one. Has the code been vectorized or parallelized?: no. No. of lines in distributed program, including test data, etc.: 25 964. No. of bytes in distributed program, including test data, etc.: 4 354 510. Distribution format: tar.gz. Additional comments: This program requires the MATLAB Statistics Toolbox and the Image Processing Toolbox to be installed. 
Nature of physical problem: Following the passage of a charged particle (protons and heavier) through a Solid State Nuclear Track Detector (SSNTD), a damage region is created, usually named latent track. After the chemical etching of the detectors in aqueous NaOH or KOH solutions, latent tracks can be sufficiently enlarged (with diameters of 1 μm or more) to become visible under an optical microscope. Using the appropriate apparatus, one can record images of the SSNTD's surface. The shapes of the particle's tracks are strongly dependent on their charge, energy and the angle of incidence. Generally, they have elliptical shapes and in the special case of vertical incidence, they are circular. The manual counting of tracks is a tedious and time-consuming task. An automatic system is needed to speed up the process and to increase the accuracy of the results. Method of solution: TRIAC II is based on a segmentation method that groups image pixels according to their intensity value (brightness) in a number of grey level groups. After the segmentation of pixels, the program recognizes and separates the track from the background, subsequently performing image morphology, where oversized objects or objects smaller than a threshold value are removed. Finally, using the appropriate Hough transform technique, the program counts the tracks, even those which overlap and classifies them according to their shape parameters and brightness. Typical running time: The analysis of an image with a PC (Intel Pentium III processor running at 600 MHz) requires 2 to 10 minutes, depending on the number of observed tracks and the digital resolution of the image. Unusual features of the program: This program has been tested with images of CR-39 detectors exposed to alpha particles. Also, in low contrast images with few or small tracks, background pixels can be recognized as track pixels. 
To avoid this problem the brightness of the background pixels should be sufficiently higher than that of the track pixels.
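The circle-counting step described above can be sketched as a brute-force circular Hough transform: every candidate edge point votes for all centres lying at each candidate radius from it, and peaks in the accumulator mark circles. This is a generic sketch, not TRIAC II's MATLAB implementation; the grid size and radius set are arbitrary:

```python
import numpy as np

def circular_hough(points, radii, size):
    # Accumulator over (radius index, centre x, centre y)
    acc = np.zeros((len(radii), size, size), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for (x, y) in points:
        for k, r in enumerate(radii):
            # Each edge point votes for every centre at distance r from it
            a = np.round(x - r * np.cos(thetas)).astype(int)
            b = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (a >= 0) & (a < size) & (b >= 0) & (b < size)
            np.add.at(acc[k], (a[ok], b[ok]), 1)
    k, a, b = np.unravel_index(np.argmax(acc), acc.shape)
    return radii[k], a, b

# Synthetic circular "track": centre (40, 50), radius 20
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(40 + 20 * np.cos(u), 50 + 20 * np.sin(u)) for u in t]
r, a, b = circular_hough(pts, radii=[15, 20, 25], size=100)
```

Because votes from different points coincide only at true centres, the peak survives even when several circles overlap, which is why the Hough approach copes with overlapping tracks.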

  15. Optimal image alignment with random projections of manifolds: algorithm and geometric analysis.

    PubMed

    Kokiopoulou, Effrosyni; Kressner, Daniel; Frossard, Pascal

    2011-06-01

    This paper addresses the problem of image alignment based on random measurements. Image alignment consists of estimating the relative transformation between a query image and a reference image. We consider the specific problem where the query image is provided in compressed form in terms of linear measurements captured by a vision sensor. We cast the alignment problem as a manifold distance minimization problem in the linear subspace defined by the measurements. The transformation manifold that represents synthesis of shift, rotation, and isotropic scaling of the reference image can be given in closed form when the reference pattern is sparsely represented over a parametric dictionary. We show that the objective function can then be decomposed as the difference of two convex functions (DC) in the particular case where the dictionary is built on Gaussian functions. Thus, the optimization problem becomes a DC program, which in turn can be solved globally by a cutting plane method. The quality of the solution is typically affected by the number of random measurements and the condition number of the manifold that describes the transformations of the reference image. We show that the curvature, which is closely related to the condition number, remains bounded in our image alignment problem, which means that the relative transformation between two images can be determined optimally in a reduced subspace.

  16. Privacy Protection by Matrix Transformation

    NASA Astrophysics Data System (ADS)

    Yang, Weijia

    Privacy preservation is indispensable in data mining. In this paper, we present a novel clustering method for distributed multi-party data sets using orthogonal transformation and data randomization techniques. Our method can not only protect privacy in the face of collusion, but also achieve a higher level of accuracy compared to existing methods.
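The core property such methods exploit is that an orthogonal transformation preserves pairwise Euclidean distances, so distance-based clustering on the transformed data matches clustering on the originals while the raw attribute values stay hidden. A generic sketch of that property, not the authors' protocol:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 5))           # each row: one record to be protected

# Random orthogonal matrix via QR decomposition of a Gaussian matrix
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
X_priv = X @ Q                          # published, transformed data

# Pairwise Euclidean distances are unchanged by the rotation
d_orig = np.linalg.norm(X[0] - X[1])
d_priv = np.linalg.norm(X_priv[0] - X_priv[1])
```

Combining such a rotation with additive randomization (as the abstract suggests) trades a small amount of clustering accuracy for resistance to attacks that try to invert the transformation.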

  17. Prevention Service System Transformation Using "Communities That Care"

    ERIC Educational Resources Information Center

    Brown, Eric C.; Hawkins, J. David; Arthur, Michael W.; Briney, John S.; Fagan, Abigail A.

    2011-01-01

    This study examines prevention system transformation as part of a community-randomized controlled trial of Communities That Care (CTC). Using data from surveys of community leaders, we examine differences between CTC and control communities 4.5 years after CTC implementation. Significantly higher levels of adopting a science-based approach to…

  18. An algorithm to compute the sequency ordered Walsh transform

    NASA Technical Reports Server (NTRS)

    Larsen, H.

    1976-01-01

    A fast sequency-ordered Walsh transform algorithm is presented; it is complementary to the sequency-ordered fast Walsh transform introduced by Manz (1972), which eliminated Gray-code reordering through a modification of the basic fast Hadamard transform structure. The new algorithm retains the advantages of its complement (it is in place and is its own inverse), while differing in having a decimation-in-time structure, accepting data in normal order, and returning the coefficients in bit-reversed sequency order. Applications include estimation of Walsh power spectra for a random process, sequency filtering and computing logical autocorrelations, and selective bit reversing.
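The fast Hadamard butterfly that both sequency-ordered variants modify looks like this in natural (Hadamard) order. This is the textbook radix-2 structure, not Larsen's algorithm itself:

```python
import numpy as np

def fwht(a):
    """In-place fast Walsh-Hadamard transform, natural (Hadamard) order.
    O(N log N) butterflies, analogous to the FFT but with +/- 1 twiddles."""
    a = np.array(a, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                # Butterfly: sum and difference of paired elements
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
y = fwht(x)
```

Like the sequency-ordered variants discussed above, this transform is its own inverse up to a factor of N, so applying it twice and dividing by the length recovers the input.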

  19. Simulation of foulant bioparticle topography based on Gaussian process and its implications for interface behavior research

    NASA Astrophysics Data System (ADS)

    Zhao, Leihong; Qu, Xiaolu; Lin, Hongjun; Yu, Genying; Liao, Bao-Qiang

    2018-03-01

    Simulation of randomly rough bioparticle surfaces is crucial to better understand and control interface behaviors and membrane fouling. A survey of the literature indicated a lack of effective methods for simulating randomly rough bioparticle surfaces. In this study, a new method which combines a Gaussian distribution, the Fourier transform, the spectrum method and coordinate transformation was proposed to simulate the surface topography of foulant bioparticles in a membrane bioreactor (MBR). The natural surface of a foulant bioparticle was found to be irregular and randomly rough. The topography simulated by the new method was quite similar to that of real foulant bioparticles. Moreover, the simulated topography of foulant bioparticles was critically affected by the parameters correlation length (l) and root mean square roughness (σ). The new method proposed in this study shows notable superiority over conventional methods for the simulation of randomly rough foulant bioparticles. The ease, facility and fitness of the new method point towards potential applications in interface behavior and membrane fouling research.
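The spectrum-method idea, filtering white Gaussian noise with the square root of a target power spectrum, can be sketched in 1-D. The Gaussian correlation form and the final renormalization step are assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

def gaussian_rough_surface(n, dx, sigma, l, seed=0):
    """1-D randomly rough profile with Gaussian height statistics and a
    Gaussian correlation function C(x) = sigma^2 * exp(-x^2 / l^2):
    shape white noise in the Fourier domain by sqrt(power spectrum)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n)
    k = np.fft.fftfreq(n, d=dx) * 2 * np.pi
    # Power spectral density corresponding to the Gaussian correlation
    psd = sigma**2 * l / (2 * np.sqrt(np.pi)) * np.exp(-(k * l) ** 2 / 4)
    z = np.fft.ifft(np.fft.fft(noise) * np.sqrt(psd)).real
    z *= sigma / z.std()          # renormalize to the target rms roughness
    return z

z = gaussian_rough_surface(n=4096, dx=0.1, sigma=2.0, l=1.5)
```

Varying `l` and `sigma` reproduces the sensitivity to correlation length and root mean square roughness noted above; a 2-D surface follows the same recipe with a 2-D FFT.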

  20. A comparison of three random effects approaches to analyze repeated bounded outcome scores with an application in a stroke revalidation study.

    PubMed

    Molas, Marek; Lesaffre, Emmanuel

    2008-12-30

    Discrete bounded outcome scores (BOS), i.e. discrete measurements that are restricted on a finite interval, often occur in practice. Examples are compliance measures, quality of life measures, etc. In this paper we examine three related random effects approaches to analyze longitudinal studies with a BOS as response: (1) a linear mixed effects (LM) model applied to a logistic transformed modified BOS; (2) a model assuming that the discrete BOS is a coarsened version of a latent random variable, which after a logistic-normal transformation, satisfies an LM model; and (3) a random effects probit model. We consider also the extension whereby the variability of the BOS is allowed to depend on covariates. The methods are contrasted using a simulation study and on a longitudinal project, which documents stroke rehabilitation in four European countries using measures of motor and functional recovery. Copyright 2008 John Wiley & Sons, Ltd.
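Approach (1), a linear mixed model on a logistically transformed modified BOS, hinges on shifting the discrete scores off the boundaries before taking the logit. The (y + 0.5)/(k + 1) modification below is one common choice, assumed here rather than taken from the paper:

```python
import numpy as np

def logit_modified(score, k):
    """Logistic transform of a discrete bounded outcome score in {0, ..., k}.
    Scores are shrunk away from the 0 and k boundaries via (y + 0.5)/(k + 1)
    (a common device, assumed here) so the logit stays finite."""
    p = (score + 0.5) / (k + 1.0)
    return np.log(p / (1.0 - p))

scores = np.array([0, 3, 10, 17, 20])      # e.g. a 0-20 motor-recovery scale
z = logit_modified(scores, k=20)
```

After this transform the scores live on the whole real line, so a standard linear mixed model with Gaussian random effects can be fitted to the longitudinal measurements.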

  1. Liquid Chromatographic Analysis of the Free Sugars in Sweet Corn: a Method Indicative of Maturity and of Quality Changes Related to Processing Techniques

    DTIC Science & Technology

    1977-07-01

    F. Flora and R. C. Wiley, J. Food Sci., 39, 770 (1974). G. Rumpf, J. Mawson and H. Hansen, J. Sci. Food Agric., 23, 193 (1972). L. Hough and J. K. N...Clamp, T. Bhatti and R. E. Chambers, Methods Biochem. Anal., 19, 229 (1971). J. M. Richey, H. G. Richey, Jr. and R. Schraer, Analyt. Biochem., 9...C. W. Culpepper and C. A. Magoon, J. Agr. Res., 28, 403 (1924). M. Doty, G. M. Smith, J. R. Roach and J. T. Sullivan, Indiana Agr. Exp. Sta. Bull

  2. Remediating Non-Positive Definite State Covariances for Collision Probability Estimation

    NASA Technical Reports Server (NTRS)

    Hall, Doyle T.; Hejduk, Matthew D.; Johnson, Lauren C.

    2017-01-01

    The NASA Conjunction Assessment Risk Analysis team estimates the probability of collision (Pc) for a set of Earth-orbiting satellites. The Pc estimation software processes satellite position+velocity states and their associated covariance matrices. On occasion, the software encounters non-positive definite (NPD) state covariances, which can adversely affect or prevent the Pc estimation process. Interpolation inaccuracies appear to account for the majority of such covariances, although other mechanisms contribute also. This paper investigates the origin of NPD state covariance matrices, three different methods for remediating these covariances when and if necessary, and the associated effects on the Pc estimation process.
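One standard remediation for an NPD covariance, clipping negative eigenvalues at zero and reconstructing, can be sketched as follows. This is an illustrative method, not necessarily one of the three the paper investigates:

```python
import numpy as np

def clip_to_psd(cov, eps=0.0):
    """Project a symmetric matrix to the positive semidefinite cone by
    clipping negative eigenvalues to eps (illustrative remediation)."""
    vals, vecs = np.linalg.eigh(cov)
    return vecs @ np.diag(np.maximum(vals, eps)) @ vecs.T

# A symmetric "covariance" with a negative eigenvalue (non-positive definite),
# e.g. produced by interpolation error in a correlation-like matrix
npd = np.array([[1.00, 0.99, 0.70],
                [0.99, 1.00, 0.99],
                [0.70, 0.99, 1.00]])
fixed = clip_to_psd(npd)
```

Eigenvalue clipping is the nearest-PSD projection in the Frobenius norm, so it perturbs the matrix as little as possible while restoring the property that downstream Pc machinery requires.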

  3. A comparison of numerical solutions of partial differential equations with probabilistic and possibilistic parameters for the quantification of uncertainty in subsurface solute transport.

    PubMed

    Zhang, Kejiang; Achari, Gopal; Li, Hua

    2009-11-03

    Traditionally, uncertainties in parameters are represented as probabilistic distributions and incorporated into groundwater flow and contaminant transport models. With the advent of newer uncertainty theories, it is now understood that stochastic methods cannot properly represent non-random uncertainties. In the groundwater flow and contaminant transport equations, uncertainty in some parameters may be random, whereas that of others may be non-random. The objective of this paper is to develop a fuzzy-stochastic partial differential equation (FSPDE) model to simulate conditions where both random and non-random uncertainties are involved in groundwater flow and solute transport. Three potential solution techniques, namely (a) transforming a probability distribution to a possibility distribution (Method I), so that the FSPDE becomes a fuzzy partial differential equation (FPDE); (b) transforming a possibility distribution to a probability distribution (Method II), so that the FSPDE becomes a stochastic partial differential equation (SPDE); and (c) combining Monte Carlo methods with FPDE solution techniques (Method III), are proposed and compared. The effects of these three methods on the predictive results are investigated using two case studies. The results show that the predictions obtained from Method II are a specific case of those obtained from Method I. When an exact probabilistic result is needed, Method II is suggested. As the loss or gain of information during a probability-possibility (or vice versa) transformation cannot be quantified, its influence on the predictive results is not known. Thus, Method III should probably be preferred for risk assessments.

  4. Devil's vortex Fresnel lens phase masks on an asymmetric cryptosystem based on phase-truncation in gyrator wavelet transform domain

    NASA Astrophysics Data System (ADS)

    Singh, Hukum

    2016-06-01

    An asymmetric scheme has been proposed for optical double-image encryption in the gyrator wavelet transform (GWT) domain. Grayscale and binary images are encrypted separately using double random phase encoding (DRPE) in the GWT domain. Phase masks based on devil's vortex Fresnel lenses (DVFLs) and random phase masks (RPMs) are jointly used in the spatial as well as the Fourier plane. The images to be encrypted are first gyrator transformed and then single-level discrete wavelet transformed (DWT) to decompose them into the LL, HL, LH and HH matrices of approximation, horizontal, vertical and diagonal coefficients. The resulting coefficients from the DWT are multiplied by other RPMs and the results are applied to the inverse discrete wavelet transform (IDWT) for obtaining the encrypted images. The images are recovered from their corresponding encrypted images by using the correct parameters of the GWT and DVFL; the digital implementation has been performed using MATLAB 7.6.0 (R2008a). The mother wavelet family, the DVFL and the gyrator transform orders associated with the GWT are extra keys that cause difficulty to an attacker. Thus, the scheme is more secure as compared to conventional techniques. The efficacy of the proposed scheme is verified by computing the mean squared error (MSE) between the recovered and original images. The sensitivity of the proposed scheme is verified with respect to encryption parameters and noise attacks.

  5. Kevlar: Transitioning Helix from Research to Practice

    DTIC Science & Technology

    2015-04-01

    protective transformations are applied to application binaries before they are deployed. Salient features of Kevlar include applying high-entropy...variety of classes. Kevlar uses novel, fine-grained, high-entropy diversification transformations to prevent an attacker from successfully exploiting...Kevlar include applying high-entropy randomization techniques, automated program repairs, leveraging highly-optimized virtual machine technology, and in

  6. Fast measurement of proton exchange membrane fuel cell impedance based on pseudo-random binary sequence perturbation signals and continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Debenjak, Andrej; Boškoski, Pavle; Musizza, Bojan; Petrovčič, Janko; Juričić, Đani

    2014-05-01

    This paper proposes an approach to the estimation of PEM fuel cell impedance that uses a pseudo-random binary sequence as the perturbation signal and a continuous wavelet transform with the Morlet mother wavelet. With this approach, the impedance characteristic in the frequency band from 0.1 Hz to 500 Hz is identified in 60 seconds, approximately five times faster than the conventional single-sine approach. The proposed approach was experimentally evaluated on a single PEM fuel cell of a larger fuel cell stack. The quality of the results remains at the same level as that of the single-sine approach.
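A maximum-length pseudo-random binary sequence of the kind used as a broadband perturbation is cheaply generated by a linear feedback shift register. The 7-bit register and tap positions below are illustrative, not the paper's excitation design:

```python
import numpy as np

def prbs(register_len=7, taps=(7, 6), n=None):
    """Maximum-length PRBS from a Fibonacci linear feedback shift register.
    Taps (7, 6) realize a primitive polynomial, giving period 2**7 - 1 = 127."""
    state = [1] * register_len            # any nonzero seed works
    n = n or 2**register_len - 1
    out = []
    for _ in range(n):
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]   # XOR feedback
        out.append(state[-1])
        state = [fb] + state[:-1]         # shift the register
    return np.array(out)

seq = prbs()
```

Its flat, broadband spectrum is what lets a single short record excite the whole 0.1-500 Hz band at once, instead of sweeping single sines one frequency at a time.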

  7. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta-analysis and group level studies.

    PubMed

    Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan

    2016-07-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  8. A Model for the Epigenetic Switch Linking Inflammation to Cell Transformation: Deterministic and Stochastic Approaches

    PubMed Central

    Gérard, Claude; Gonze, Didier; Lemaigre, Frédéric; Novák, Béla

    2014-01-01

    Recently, a molecular pathway linking inflammation to cell transformation has been discovered. This molecular pathway rests on a positive inflammatory feedback loop between NF-κB, Lin28, Let-7 microRNA and IL6, which leads to an epigenetic switch allowing cell transformation. A transient activation of an inflammatory signal, mediated by the oncoprotein Src, activates NF-κB, which elicits the expression of Lin28. Lin28 decreases the expression of Let-7 microRNA, which results in higher level of IL6 than achieved directly by NF-κB. In turn, IL6 can promote NF-κB activation. Finally, IL6 also elicits the synthesis of STAT3, which is a crucial activator for cell transformation. Here, we propose a computational model to account for the dynamical behavior of this positive inflammatory feedback loop. By means of a deterministic model, we show that an irreversible bistable switch between a transformed and a non-transformed state of the cell is at the core of the dynamical behavior of the positive feedback loop linking inflammation to cell transformation. The model indicates that inhibitors (tumor suppressors) or activators (oncogenes) of this positive feedback loop regulate the occurrence of the epigenetic switch by modulating the threshold of inflammatory signal (Src) needed to promote cell transformation. Both stochastic simulations and deterministic simulations of a heterogeneous cell population suggest that random fluctuations (due to molecular noise or cell-to-cell variability) are able to trigger cell transformation. Moreover, the model predicts that oncogenes/tumor suppressors respectively decrease/increase the robustness of the non-transformed state of the cell towards random fluctuations. Finally, the model accounts for the potential effect of competing endogenous RNAs, ceRNAs, on the dynamics of the epigenetic switch. 
Depending on their microRNA targets, the model predicts that ceRNAs could act as oncogenes or tumor suppressors by regulating the occurrence of cell transformation. PMID:24499937

  9. Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems

    NASA Technical Reports Server (NTRS)

    Harwit, M.

    1977-01-01

    Sources of noise and error-correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed, and techniques for reducing or eliminating this distortion are described.

  10. Chromosome Rearrangements Recovered following Transformation of Neurospora Crassa

    PubMed Central

    Perkins, D. D.; Kinsey, J. A.; Asch, D. K.; Frederick, G. D.

    1993-01-01

    New chromosome rearrangements were found in 10% or more of mitotically stable transformants. This was shown for transformations involving a variety of different markers, vectors and recipient strains. Breakpoints were randomly distributed among the seven linkage groups. Controls using untransformed protoplasts of the same strains contained almost no rearrangements. A study of molecularly characterized Am(+) transformants showed that rearrangements are frequent when multiple ectopic integration events have occurred. In contrast, rearrangements are absent or infrequent when only the resident locus is restored to am(+) by a homologous event. Sequences of the transforming vector were genetically linked to breakpoints in 6 of 10 translocations that were examined using Southern hybridization or colony blots. PMID:8349106

  11. Modeling and Simulation of Linear and Nonlinear MEMS Scale Electromagnetic Energy Harvesters for Random Vibration Environments

    PubMed Central

    Sassani, Farrokh

    2014-01-01

    The simulation results for electromagnetic energy harvesters (EMEHs) under broad band stationary Gaussian random excitations indicate the importance of both a high transformation factor and a high mechanical quality factor to achieve favourable mean power, mean square load voltage, and output spectral density. The optimum load is different for random vibrations and for sinusoidal vibration. Reducing the total damping ratio under band-limited random excitation yields a higher mean square load voltage. Reduced bandwidth resulting from decreased mechanical damping can be compensated by increasing the electrical damping (transformation factor) leading to a higher mean square load voltage and power. Nonlinear EMEHs with a Duffing spring and with linear plus cubic damping are modeled using the method of statistical linearization. These nonlinear EMEHs exhibit approximately linear behaviour under low levels of broadband stationary Gaussian random vibration; however, at higher levels of such excitation the central (resonant) frequency of the spectral density of the output voltage shifts due to the increased nonlinear stiffness and the bandwidth broadens slightly. Nonlinear EMEHs exhibit lower maximum output voltage and central frequency of the spectral density with nonlinear damping compared to linear damping. Stronger nonlinear damping yields broader bandwidths at stable resonant frequency. PMID:24605063

  12. Image encryption with chaotic map and Arnold transform in the gyrator transform domains

    NASA Astrophysics Data System (ADS)

    Sang, Jun; Luo, Hongling; Zhao, Jun; Alam, Mohammad S.; Cai, Bin

    2017-05-01

    An image encryption method combining a chaotic map and the Arnold transform in the gyrator transform domains is proposed. First, the original secret image is XOR-ed with a random binary sequence generated by a logistic map. Then, the gyrator transform is performed. Finally, the amplitude and phase of the gyrator transform output are permuted by the Arnold transform. The decryption procedure is the inverse of encryption. The secret keys of the proposed method are the control parameter and initial value of the logistic map, the rotation angle of the gyrator transform, and the transform number of the Arnold transform. The key space is therefore large, while the key data volume is small. Numerical simulation demonstrates the effectiveness of the proposed method, and security is analyzed in terms of the histogram of the encrypted image, sensitivity to the secret keys, decryption upon ciphertext loss, and resistance to the chosen-plaintext attack.
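
The first stage described above (XOR with a logistic-map keystream) can be sketched as follows; the map parameters `r` and `x0` are illustrative stand-ins for the paper's secret keys, not values from the source:

```python
def logistic_keystream(r, x0, n):
    """Pseudo-random bytes from the logistic map x <- r*x*(1-x);
    r and x0 play the role of the secret keys in the abstract."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_stage(data: bytes, r=3.99, x0=0.61) -> bytes:
    """First stage of the scheme: XOR the (flattened) image bytes with
    the chaotic keystream. Applying it twice with the same keys inverts it."""
    ks = logistic_keystream(r, x0, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

img = bytes(range(16))   # toy 4x4 image, one byte per pixel
enc = xor_stage(img)
dec = xor_stage(enc)     # XOR with the same keystream decrypts
assert dec == img and enc != img
```

Because XOR is its own inverse, the decryption side simply regenerates the same keystream from the shared keys.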

  13. Application of Genetic Algorithm and Particle Swarm Optimization techniques for improved image steganography systems

    NASA Astrophysics Data System (ADS)

    Jude Hemanth, Duraisamy; Umamaheswari, Subramaniyan; Popescu, Daniela Elena; Naaji, Antoanela

    2016-01-01

    Image steganography is one of the ever-growing computational approaches that has found application in many fields. Frequency-domain techniques are highly preferred for image steganography applications, but they have significant drawbacks. In transform-based approaches, the secret data are embedded in a random manner in the transform coefficients of the cover image, and these coefficients may not be optimal in terms of stego image quality and embedding capacity. In this work, the application of the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) is explored in the context of determining optimal coefficients in these transforms. Frequency-domain transforms such as the Bandelet Transform (BT) and the Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.

  14. One-step random mutagenesis by error-prone rolling circle amplification

    PubMed Central

    Fujii, Ryota; Kitaoka, Motomitsu; Hayashi, Kiyoshi

    2004-01-01

    In vitro random mutagenesis is a powerful tool for altering properties of enzymes. We describe here a novel random mutagenesis method using rolling circle amplification, named error-prone RCA. This method consists of only one DNA amplification step followed by transformation of the host strain, without treatment with any restriction enzymes or DNA ligases, and results in a randomly mutated plasmid library with 3–4 mutations per kilobase. Specific primers or special equipment, such as a thermal cycler, are not required. This method permits rapid preparation of randomly mutated plasmid libraries, enabling random mutagenesis to become a more commonly used technique. PMID:15507684
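
As a rough illustration of the reported mutation load, the toy simulation below substitutes bases independently at about 3.5 mutations per kilobase. The rate, sequence, and seed are hypothetical, and the model ignores the base-substitution biases a real error-prone polymerase would show:

```python
import random

def mutate(seq: str, rate_per_kb: float = 3.5, seed: int = 1) -> str:
    """Toy model of a 3-4 mutations/kb library: substitute each base
    independently with probability rate_per_kb/1000 (illustrative only)."""
    rng = random.Random(seed)
    p = rate_per_kb / 1000.0
    bases = "ACGT"
    out = []
    for b in seq:
        if rng.random() < p:
            out.append(rng.choice([x for x in bases if x != b]))
        else:
            out.append(b)
    return "".join(out)

plasmid = "ACGT" * 1000            # 4 kb toy plasmid
mutant = mutate(plasmid)
n_mut = sum(a != b for a, b in zip(plasmid, mutant))
assert 0 < n_mut < 50              # expect roughly 14 substitutions on 4 kb
```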

  15. Choice of optical system is critical for the security of double random phase encryption systems

    NASA Astrophysics Data System (ADS)

    Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Cassidy, Derek; Zhao, Liang; Ryle, James P.; Healy, John J.; Sheridan, John T.

    2017-06-01

    The linear canonical transform (LCT) is used in modeling coherent light-field propagation through first-order optical systems. Recently, a generic optical system, known as the quadratic phase encoding system (QPES), for encrypting a two-dimensional image has been reported. In such systems, two random phase keys and the individual LCT parameters (α,β,γ) serve as secret keys of the cryptosystem. It is important that such encryption systems also satisfy some dynamic security properties. We therefore examine such systems using two cryptographic evaluation methods, the avalanche effect and the bit independence criterion, which indicate the degree of security of cryptographic algorithms using QPES. We compared our simulation results with the conventional Fourier and Fresnel transform-based double random phase encryption (DRPE) systems. The results show that the LCT-based DRPE has excellent avalanche and bit independence characteristics compared to the conventional Fourier- and Fresnel-based encryption systems.
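
The avalanche effect mentioned above is the fraction of output bits that flip when a single input bit is flipped, ideally close to 0.5. The sketch below uses SHA-256 purely as a stand-in cipher for illustration; the paper evaluates optical QPES/DRPE systems, not hash functions:

```python
import hashlib

def avalanche(cipher, plaintext: bytes, bit: int) -> float:
    """Fraction of ciphertext bits that flip when one plaintext bit flips.
    A cipher with strong diffusion scores close to the ideal 0.5."""
    flipped = bytearray(plaintext)
    flipped[bit // 8] ^= 1 << (bit % 8)     # flip a single input bit
    c1, c2 = cipher(plaintext), cipher(bytes(flipped))
    diff = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    return diff / (8 * len(c1))

# SHA-256 stands in for the optical encryption system (illustrative only)
cipher = lambda m: hashlib.sha256(m).digest()
score = avalanche(cipher, b"one row of the secret image", bit=3)
assert 0.3 < score < 0.7    # strong diffusion, near the ideal 0.5
```

Averaging this score over many bit positions and plaintexts gives the kind of avalanche statistic the comparison in the abstract rests on.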

  16. Evolution of basic equations for nearshore wave field

    PubMed Central

    ISOBE, Masahiko

    2013-01-01

    In this paper, a systematic, overall view of theories for periodic waves of permanent form, such as Stokes and cnoidal waves, is described first, together with their validity ranges. To deal with random waves, a method for estimating directional spectra is given. Then, various wave equations are introduced according to the assumptions included in their derivations. The mild-slope equation is derived for combined refraction and diffraction of linear periodic waves. Various parabolic approximations and time-dependent forms are proposed to include randomness and nonlinearity of waves as well as to simplify numerical calculation. Boussinesq equations were developed for calculating nonlinear wave transformations in shallow water. Nonlinear mild-slope equations are derived as a set of wave equations to predict the transformation of nonlinear random waves in the nearshore region. Finally, wave equations are classified systematically for a clear theoretical understanding and appropriate selection for specific applications. PMID:23318680

  17. Detecting a Gender-Related Differential Item Functioning Using Transformed Item Difficulty

    ERIC Educational Resources Information Center

    Abedalaziz, Nabeel; Leng, Chin Hai; Alahmadi, Ahlam

    2014-01-01

    The purpose of the study was to examine gender differences in performance on a multiple-choice mathematical ability test, administered within the context of a high school graduation test designed to match the eleventh-grade curriculum. The transformed item difficulty (TID) method was used to detect gender-related DIF. A random sample of 1400 eleventh…

  18. Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Lang, Jun

    2015-03-01

    In this paper, we propose a novel color image encryption method using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to an R′G′B′ color space by rotating the color cube with a random angle matrix. The RPMPFRFT is then employed to change the pixel values of the color image: the three components of the scrambled color space are transformed by the RPMPFRFT with three different transform pairs, respectively. Compared with complex-output transforms, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposing sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, with the alignment of sections determined by two coupled chaotic logistic maps. The parameters of the Color Blend, Chaos Permutation and RPMPFRFT operations serve as the keys of the encryption algorithm. The proposed method can also encrypt three gray images by treating them as the three RGB components of a specially constructed color image. Numerical simulations demonstrate that the proposed algorithm is feasible, secure, sensitive to keys and robust to noise attack and data loss.
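
One simple way to derive a chaos-driven permutation of image sections, in the spirit of the logistic-map scrambling described above (the paper itself uses two coupled maps), is to rank the values of a logistic-map orbit. The parameters `r` and `x0` below are illustrative key stand-ins:

```python
def chaos_permutation(n, r=3.99, x0=0.37):
    """Key-dependent permutation from a logistic-map orbit: ranking the
    chaotic values gives the scramble order for n image sections
    (a simplified stand-in for the coupled-logistic-map alignment)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)       # iterate the logistic map
        xs.append(x)
    return sorted(range(n), key=lambda i: xs[i])

perm = chaos_permutation(8)
assert sorted(perm) == list(range(8))   # a valid permutation of indices
assert perm != list(range(8))           # ...that actually scrambles
```

Inverting the permutation (and hence unscrambling) only requires regenerating the same orbit from the secret parameters.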

  19. Text String Detection from Natural Scenes by Structure-based Partition and Grouping

    PubMed Central

    Yi, Chucai; Tian, YingLi

    2012-01-01

    Text information in natural scene images serves as an important clue for many image-based applications such as scene understanding, content-based image retrieval, assistive navigation, and automatic geocoding. However, locating text against a complex background with multiple colors is a challenging task. In this paper, we explore a new framework to detect text strings with arbitrary orientations in complex natural scene images. Our proposed framework consists of two steps: 1) image partition to find text character candidates based on local gradient features and color uniformity of character components; 2) character candidate grouping to detect text strings based on joint structural features of the characters in each string, such as character size differences, distances between neighboring characters, and character alignment. Assuming that a text string has at least three characters, we propose two algorithms for text string detection: 1) an adjacent character grouping method, and 2) a text line grouping method. The adjacent character grouping method calculates the sibling groups of each character candidate as string segments and then merges intersecting sibling groups into text strings. The text line grouping method performs a Hough transform to fit a text line among the centroids of text candidates; each fitted text line describes the orientation of a potential text string. The detected text string is represented by a rectangular region covering all characters whose centroids lie along its text line. To improve efficiency and accuracy, our algorithms are carried out at multiple scales. The proposed methods outperform the state-of-the-art results on the public Robust Reading Dataset, which contains text only in horizontal orientation. Furthermore, the effectiveness of our methods in detecting text strings with arbitrary orientations is evaluated on our own Oriented Scene Text Dataset, which contains text strings in non-horizontal orientations. PMID:21411405
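
The Hough-based text line fitting step can be illustrated with a minimal (theta, rho) accumulator over character centroids; the grid resolutions and synthetic points are illustrative, not the paper's settings:

```python
import math
from collections import Counter

def hough_line(points, n_theta=180, rho_res=1.0):
    """Vote in (theta, rho) space using the normal form
    rho = x*cos(theta) + y*sin(theta); the strongest cell is the
    dominant line through the points (here: character centroids)."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_res))] += 1
    (t, r), votes = acc.most_common(1)[0]
    return math.pi * t / n_theta, r * rho_res, votes

# centroids of five characters lying on the horizontal line y = 20
pts = [(10, 20), (25, 20), (40, 20), (55, 20), (70, 20)]
theta, rho, votes = hough_line(pts)
assert votes == 5                      # all five centroids vote together
assert abs(theta - math.pi / 2) < 0.02 and abs(rho - 20) < 1.0
```

A horizontal text line has normal angle theta = pi/2, so the peak cell directly encodes the string's orientation, as in the text line grouping method.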

  20. Automated Coronal Loop Identification using Digital Image Processing Techniques

    NASA Astrophysics Data System (ADS)

    Lee, J. K.; Gary, G. A.; Newman, T. S.

    2003-05-01

    The results of a Master's thesis study of computer algorithms for automatic extraction and identification (i.e., collectively, "detection") of optically-thin, 3-dimensional, (solar) coronal-loop center "lines" from extreme ultraviolet and X-ray 2-dimensional images will be presented. The center lines, which can be considered to be splines, are proxies of magnetic field lines. Detecting the loops is challenging because there are no unique shapes, the loop edges are often indistinct, and photon and detector noise heavily influence the images. Three techniques for detecting the projected magnetic field lines have been considered and will be described in the presentation: (i) linear feature recognition of local patterns (related to the inertia-tensor concept), (ii) parametric space inferences via the Hough transform, and (iii) topological adaptive contours (snakes) that constrain curvature and continuity. Since coronal loop topology is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information that has also been incorporated into the detection process. Synthesized images have been generated to benchmark the suitability of the three techniques, and the performance of the three techniques on both synthesized and solar images will be presented and numerically evaluated in the presentation. The process of automatic detection of coronal loops is important in the reconstruction of the coronal magnetic field, where the derived magnetic field lines provide a boundary condition for magnetic models (cf. Gary, 2001, Solar Phys., 203, 71; Wiegelmann & Neukirch, 2002, Solar Phys., 208, 233). This work was supported by NASA's Office of Space Science - Solar and Heliospheric Physics Supporting Research and Technology Program.

  1. Robot Acting on Moving Bodies (RAMBO): Interaction with tumbling objects

    NASA Technical Reports Server (NTRS)

    Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madhu; Harwood, David

    1989-01-01

    Interaction with tumbling objects will become more common as human activities in space expand. Attempting to interact with a large complex object translating and rotating in space, a human operator using only his visual and mental capacities may not be able to estimate the object motion, plan actions or control those actions. A robot system (RAMBO) equipped with a camera, which, given a sequence of simple tasks, can perform these tasks on a tumbling object, is being developed. RAMBO is given a complete geometric model of the object. A low-level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to relative locations nearby the object sufficient for achieving the tasks. More specifically, low-level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Then trajectories are created using dynamic interpolations between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.

  2. Automatic detection of DNA double strand breaks after irradiation using a γH2AX assay.

    PubMed

    Hohmann, Tim; Kessler, Jacqueline; Grabiec, Urszula; Bache, Matthias; Vordermark, Dyrk; Dehghani, Faramarz

    2018-05-01

    Radiation therapy is among the most common approaches to cancer therapy, leading, among other effects, to DNA damage such as double strand breaks (DSB). DSB can be used as a marker for the effect of radiation on cells. For visualizing and assessing the extent of DNA damage, the γH2AX foci assay is frequently used. Analysis of the γH2AX foci assay remains complicated, as the number of γH2AX foci has to be counted. The quantification is mostly done manually, which is time-consuming and leads to person-dependent variations. We therefore present a method to automatically count the foci inside nuclei, facilitating and quickening the analysis of DSBs in fluorescent images with high reliability. First, nuclei were detected in fluorescent images. Afterwards, the nuclei were analyzed independently of each other with a local thresholding algorithm. This approach allowed accounting for different levels of noise; the foci inside each nucleus were then detected using a Hough transform to search for circles. The presented algorithm correctly classified most foci in cases of "high" and "average" image quality (sensitivity > 0.8) with a low rate of false positive detections (positive predictive value (PPV) > 0.98). In cases of "low" image quality the approach had a decreased sensitivity (0.7-0.9), depending on the manual control counter. The PPV remained high (PPV > 0.91). Compared to other automatic approaches, the presented algorithm had higher sensitivity and PPV. The automatic foci detection algorithm was capable of detecting foci with high sensitivity and PPV, and can thus be used for automatic analysis of images of varying quality.
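
The circle-search idea behind the foci detector can be sketched as a circular Hough vote for a known radius: every edge point votes for all candidate centers lying one radius away, and the accumulator peak is the circle center. The radius, grid, and synthetic edge points below are illustrative, not the paper's parameters:

```python
import math
from collections import Counter

def hough_circles(edge_points, radius, grid=1.0):
    """Center voting for a fixed radius: each edge point votes for every
    center `radius` away from it; the accumulator peak is the detected
    circle center (the per-nucleus circle search in the foci assay)."""
    acc = Counter()
    for x, y in edge_points:
        for a in range(0, 360, 5):              # sample candidate directions
            cx = x - radius * math.cos(math.radians(a))
            cy = y - radius * math.sin(math.radians(a))
            acc[(round(cx / grid), round(cy / grid))] += 1
    return acc.most_common(1)[0]

# synthetic focus: edge points on a circle of radius 4 centered at (12, 9)
pts = [(12 + 4 * math.cos(math.radians(a)), 9 + 4 * math.sin(math.radians(a)))
       for a in range(0, 360, 10)]
center, votes = hough_circles(pts, radius=4)
assert center == (12, 9)
```

A full detector would repeat the vote over a range of radii and keep peaks above a vote threshold.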

  3. Development of a technique for long-term detection of precursors of strong earthquakes using high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Soto-Pinto, C. A.; Arellano-Baeza, A. A.; Ouzounov, D. P.

    2012-12-01

    Among the variety of processes involved in seismic activity, the principal one is the accumulation and relaxation of stress in the crust, which takes place at depths of tens of kilometers. While the Earth's surface bears at most indirect signs of the accumulation and relaxation of crustal stress, it has long been understood that there is a strong correspondence between the structure of the underlying crust and the landscape. We assume that the structure of the lineaments reflects the internal structure of the Earth's crust, and that variation in lineament number and arrangement reflects changes in the stress patterns related to seismic activity. Contrary to existing assumptions that lineament structure changes only on geological timescales, we have found that the much faster seismic activity strongly affects the system of lineaments extracted from high-resolution multispectral satellite images. Previous studies have shown that accumulation of stress in the crust prior to a strong earthquake is directly related to the increase in number and preferential orientation of the lineament configuration present in satellite images of epicenter zones. This effect increases with earthquake magnitude and can be observed from approximately one month beforehand. To study this effect in detail, we developed software based on a series of algorithms for automatic detection of lineaments. It was found that the Hough transform, applied after discontinuity-detection mechanisms such as the Canny edge detector or directional filters, is the most robust technique for detecting and characterizing changes in the lineament patterns related to strong earthquakes, which can be used as a robust long-term precursor of earthquakes indicating regions of strong stress accumulation.

  4. NASA Astrophysics Data System (ADS)

    Arellano-Baeza, A.

    2008-05-01

    Our studies have shown that the strain energy accumulation deep in the Earth's crust that precedes a strong earthquake can be estimated by applying a lineament extraction technique to high-resolution multispectral satellite images. A lineament is a straight or somewhat curved feature in a satellite image that can be detected by special processing of images based on directional filtering and/or the Hough transform. We analyzed tens of earthquakes that occurred on the Pacific coast of South America with Richter-scale magnitudes > 4.5, using ASTER/TERRA multispectral satellite images to detect and analyze changes in the system of lineaments preceding a strong earthquake. All events were located in regions with small seasonal variations and limited vegetation, to facilitate tracking of features associated with seismic activity only. It was found that the number and orientation of lineaments changed significantly about one month before an earthquake, and that a few months later the system returned to its initial state. This effect increases with earthquake magnitude. It was also shown that the behavior of lineaments associated with volcanic seismic activity is opposite to that obtained previously for earthquakes. This discrepancy can be explained by assuming that in the latter case the main cause of earthquakes is compression and accumulation of stress in the Earth's crust due to subduction of tectonic plates, whereas in the former case we deal with the inflation of a volcano edifice due to elevated pressure and magma intrusion. The results obtained made it possible to include this research as part of the scientific program of the Chilean Remote Sensing Satellite mission to be launched in 2010.

  5. Use of high resolution satellite images for monitoring of earthquakes and volcano activity.

    NASA Astrophysics Data System (ADS)

    Arellano-Baeza, Alonso A.

    Our studies have shown that the strain energy accumulation deep in the Earth's crust that precedes a strong earthquake can be detected by applying a lineament extraction technique to high-resolution multispectral satellite images. A lineament is a straight or somewhat curved feature in a satellite image that can be detected by special processing of images based on directional filtering and/or the Hough transform. We analyzed tens of earthquakes that occurred on the Pacific coast of South America with Richter-scale magnitudes ~4.5, using ASTER/TERRA multispectral satellite images to detect and analyze changes in the system of lineaments preceding a strong earthquake. All events were located in regions with small seasonal variations and limited vegetation, to facilitate tracking of features associated with seismic activity only. It was found that the number and orientation of lineaments changed significantly about one month before an earthquake, and that a few months later the system returned to its initial state. This effect increases with earthquake magnitude. It was also shown that the behavior of lineaments associated with volcanic seismic activity is opposite to that obtained previously for earthquakes. This discrepancy can be explained by assuming that in the latter case the main cause of earthquakes is compression and accumulation of stress in the Earth's crust due to subduction of tectonic plates, whereas in the former case we deal with the inflation of a volcano edifice due to elevated pressure and magma intrusion. The results obtained made it possible to include this research as part of the scientific program of the Chilean Remote Sensing Satellite mission to be launched in 2010.

  6. Satellite Monitoring of Accumulation of Strain in the Earth's Crust Related to Seismic and Volcanic Activity

    NASA Astrophysics Data System (ADS)

    Arellano-Baeza, A. A.

    2009-12-01

    Our studies have shown that the strain energy accumulation deep in the Earth's crust that precedes seismic and volcanic activity can be detected by applying a lineament extraction technique to high-resolution multispectral satellite images. A lineament is a straight or somewhat curved feature in a satellite image that can be detected by special processing of images based on directional filtering and/or the Hough transform. We analyzed tens of earthquakes that occurred on the Pacific coast of South America with magnitudes > 4 Mw, using ASTER/TERRA multispectral satellite images to detect and analyze changes in the system of lineaments preceding a strong earthquake. All events were located in regions with small seasonal variations and limited vegetation, to facilitate tracking of features associated with seismic activity only. It was found that the number and orientation of lineaments changed significantly about one month before an earthquake, and that a few months later the system returned to its initial state. This effect increases with earthquake magnitude. It was also shown that the behavior of lineaments associated with volcanic seismic activity is opposite to that obtained previously for earthquakes. This discrepancy can be explained by assuming that in the latter case the main cause of earthquakes is compression and accumulation of stress in the Earth's crust due to subduction of tectonic plates, whereas in the former case we deal with the inflation of a volcano edifice due to elevated pressure and magma intrusion. The results obtained made it possible to include this research as part of the scientific program of the Chilean Remote Sensing Satellite mission to be launched in 2010.
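
The quantity tracked in these studies, the number and orientation of extracted lineaments, can be summarized with a simple orientation histogram so that distributions before and after an event can be compared. The bin width and line segments below are illustrative:

```python
import math
from collections import Counter

def orientation_histogram(segments, bin_deg=15):
    """Bin lineament orientations into [0, 180) degree bins; shifts in
    this histogram are the kind of change monitored before earthquakes."""
    hist = Counter()
    for (x1, y1), (x2, y2) in segments:
        ang = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        hist[int(ang // bin_deg)] += 1
    return hist

# toy extracted lineaments: two near-horizontal, one vertical
segs = [((0, 0), (10, 0)), ((0, 0), (10, 1)), ((0, 0), (0, 10))]
h = orientation_histogram(segs)
assert h[0] == 2 and h[6] == 1   # bins: 0-15 deg and 90-105 deg
```

Comparing such histograms across image dates (e.g. with a chi-squared or peak-shift statistic) would quantify the reported reorientation of lineaments.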

  7. Micro-Droplet Detection Method for Measuring the Concentration of Alkaline Phosphatase-Labeled Nanoparticles in Fluorescence Microscopy

    PubMed Central

    Li, Rufeng; Wang, Yibei; Xu, Hong; Fei, Baowei; Qin, Binjie

    2017-01-01

    This paper developed and evaluated a quantitative image analysis method to measure the concentration of nanoparticles on which alkaline phosphatase (AP) was immobilized. These AP-labeled nanoparticles are widely used as signal markers for tagging biomolecules at nanometer and sub-nanometer scales. The AP-labeled nanoparticle concentration measurement can then be directly used to quantitatively analyze the biomolecular concentration. Micro-droplets are mono-dispersed micro-reactors that can be used to encapsulate and detect AP-labeled nanoparticles. Micro-droplets include both empty micro-droplets and fluorescent micro-droplets; fluorescent micro-droplets are generated from the fluorescence reaction between the APs adhering to a single nanoparticle and corresponding fluorogenic substrates within droplets. By detecting micro-droplets and calculating the proportion of fluorescent micro-droplets to the overall micro-droplets, we can calculate the AP-labeled nanoparticle concentration. The proposed micro-droplet detection method includes the following steps: (1) Gaussian filtering to remove the noise of overall fluorescent targets, (2) contrast-limited adaptive histogram equalization to enhance the contrast of weakly luminescent micro-droplets, (3) Otsu's maximum inter-class variance thresholding method to segment the enhanced image, yielding a binary map of the overall micro-droplets, (4) a circular Hough transform (CHT) method to detect the overall micro-droplets, and (5) an intensity-mean-based thresholding segmentation method to extract the fluorescent micro-droplets. The experimental results on fluorescent micro-droplet images show that the average accuracy of our micro-droplet detection method is 0.9586, the average true positive rate is 0.9502, and the average false positive rate is 0.0073. The detection method can be successfully applied to measure AP-labeled nanoparticle concentration in fluorescence microscopy. PMID:29160812
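
Step (3), Otsu's maximum inter-class-variance threshold, can be sketched in a few lines; the toy pixel values below are illustrative:

```python
def otsu_threshold(pixels, levels=256):
    """Pick the gray level that maximizes between-class variance
    (equivalently minimizes within-class variance), separating background
    from droplets as in step (3) of the pipeline."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_bg, sum_bg = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_bg += hist[t]                     # background weight up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg                 # foreground weight above t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg                # class means
        m_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# bimodal toy image: dark background (~20) and bright droplets (~200)
pixels = [18, 20, 22, 19, 21] * 20 + [198, 200, 202, 199] * 5
t = otsu_threshold(pixels)
assert 22 <= t < 198   # threshold falls between the two modes
```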


  9. Drawing for Traffic Marking Using Bidirectional Gradient-Based Detection with MMS LIDAR Intensity

    NASA Astrophysics Data System (ADS)

    Takahashi, G.; Takeda, H.; Nakamura, K.

    2016-06-01

    Recently, the development of autonomous cars has been accelerating through the integration of highly advanced artificial intelligence, which increases demand for digital maps with high accuracy. In particular, traffic markings must be precisely digitized, since automated driving uses them for position detection. To draw traffic markings, we benefit from Mobile Mapping Systems (MMS) equipped with high-density Laser imaging Detection and Ranging (LiDAR) scanners, which efficiently produce large amounts of data with XYZ coordinates along with reflectance intensity. Digitizing these data, on the other hand, has conventionally depended on human operation and thus suffers from human error, subjectivity, and low reproducibility. We have tackled this problem by means of automatic extraction of traffic markings, which partially succeeded in drawing several traffic markings (G. Takahashi et al., 2014). The key idea of that method was extracting lines using the Hough transform, strategically focused on changes in local reflection intensity along scan lines. However, it failed to extract traffic markings properly in densely marked areas, especially when local changing points are close to each other. In this paper, we propose a bidirectional gradient-based detection method in which local changing points are labelled as plus or minus. Given that each label corresponds to a boundary between traffic markings and background, traffic markings can be identified explicitly, meaning traffic lines are differentiated correctly by the proposed method. As such, our automated method, which is highly accurate and not dependent on a human operator, can successfully extract traffic lines composed of complex shapes such as crosswalks, minimizing cost while obtaining highly accurate results.
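
The bidirectional labelling idea can be sketched on a single scan line: rising intensity jumps are labelled '+', falling ones '-', and each '+'/'-' pair bounds one painted marking. The threshold and intensity values below are illustrative:

```python
def label_edges(intensity, thresh=50):
    """Label intensity jumps along one LiDAR scan line: '+' where
    reflectance rises (background -> marking), '-' where it falls
    (marking -> background); each +/- pair bounds one traffic marking."""
    labels = []
    for i in range(1, len(intensity)):
        d = intensity[i] - intensity[i - 1]
        if d > thresh:
            labels.append((i, "+"))
        elif d < -thresh:
            labels.append((i, "-"))
    # pair each rising edge with the next falling edge
    marks, start = [], None
    for i, s in labels:
        if s == "+" and start is None:
            start = i
        elif s == "-" and start is not None:
            marks.append((start, i))
            start = None
    return marks

# scan line crossing two painted stripes (high reflectance = paint)
scan = [10, 12, 11, 180, 182, 181, 12, 10, 11, 175, 178, 12]
assert label_edges(scan) == [(3, 6), (9, 11)]
```

Because the labels carry direction, two markings separated by only a short gap still produce a clean -/+ boundary pair instead of merging, which is the failure mode the paper addresses.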

  10. Vertebra identification using template matching model and K-means clustering.

    PubMed

    Larhmam, Mohamed Amine; Benjelloun, Mohammed; Mahmoudi, Saïd

    2014-03-01

    Accurate vertebra detection and segmentation are essential steps for automating the diagnosis of spinal disorders. This study is dedicated to vertebra alignment measurement, the first step in a computer-aided diagnosis tool for cervical spine trauma. Automated vertebral segment alignment determination is a challenging task due to low-contrast imaging and noise. A software tool for segmenting vertebrae and detecting subluxations has clinical significance. A robust method was developed and tested for cervical vertebra identification and segmentation that extracts parameters used for vertebra alignment measurement. Our contribution involves a novel combination of a template matching method and an unsupervised clustering algorithm. In this method, we build a geometric vertebra mean model. To achieve vertebra detection, manual selection of the region of interest is performed initially on the input image. Subsequent preprocessing is done to enhance image contrast and detect edges. Candidate vertebra localization is then carried out by using a modified generalized Hough transform (GHT). Next, an adapted cost function is used to compute local voted centers and filter boundary data. Thereafter, a K-means clustering algorithm is applied to obtain a cluster distribution corresponding to the targeted vertebrae. These clusters are combined with the vote parameters to detect vertebra centers. Rigid segmentation is then carried out by using the GHT parameters. Finally, cervical spine curves are extracted to measure vertebra alignment. The proposed approach was successfully applied to a set of 66 high-resolution X-ray images. Robust detection was achieved in 97.5% of the 330 tested cervical vertebrae. An automated vertebral identification method was developed and demonstrated to be robust to noise and occlusion. This work presents a first step toward an automated computer-aided diagnosis system for cervical spine trauma detection.
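The clustering stage can be illustrated with a plain Lloyd's K-means over 2D voted centers. This is a generic sketch, not the authors' implementation; the vote coordinates and k below are invented for illustration:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign each point to its nearest center,
    then move each center to the mean of its cluster."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        centers = [(sum(x for x, _ in cl) / len(cl),
                    sum(y for _, y in cl) / len(cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers

# Two tight blobs of voted centers, standing in for two vertebrae.
votes = [(10 + dx, 20 + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)] \
      + [(50 + dx, 80 + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
centers = sorted(kmeans(votes, k=2))
```

Each recovered center would then be fused with the GHT vote parameters to pin down a vertebra location.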

  11. Machine Identification of Martian Craters Using Digital Elevation Data

    NASA Astrophysics Data System (ADS)

    Bue, B.; Stepinski, T. F.

    2005-12-01

    Impact craters are among the most studied features on the Martian surface. Their importance stems from the wealth of information that a detailed analysis of their number and morphology can bring forth. Because manually building a comprehensive dataset of craters is a laborious process, there have been many previous attempts to develop an automatic, image-based crater identifier. The resulting identifiers suffer from low efficiency and remain in an experimental stage. We have developed a DEM-based, fully autonomous crater identifier that takes an arbitrarily large Martian site as an input and produces a catalog of craters as an output. Using the topography data, we calculate a topographic profile curvature that is thresholded to produce a binary image: pixels having maximum negative curvature are labeled black, and the remaining pixels are labeled white. The black pixels outline craters because crater rims are the most convex features in the Martian landscape. The Hough Transform (HT) is used for the actual recognition of craters in the binary image. The image is first segmented (without cutting the craters) into a large number of smaller images using the "flood" algorithm that identifies basins. This segmentation makes possible the application of the computationally inefficient HT to large sites. The identifier is applied to a 10^6 km^2 site located around the Herschel crater. According to the Barlow catalog, this site contains 485 craters >5 km. Our identifier finds 1099 segments, 628 of which are classified as craters >5 km. Overall, there is excellent agreement between the two catalogs, although the specific statistics are still pending due to the difficulties in converting the MDIM 1 coordinate system used in the Barlow catalog to the MDIM 2.1 coordinate system used by our identifier.
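The circle-recognition step can be illustrated with a toy circular Hough transform over a binary edge set: every black pixel votes for all centers lying at distance r from it, and crater candidates emerge as accumulator peaks. A minimal, deliberately unoptimized sketch (parameters are illustrative):

```python
from collections import defaultdict
from math import cos, sin, pi

def hough_circles(edge_pixels, radii, n_theta=90):
    """Vote in (cx, cy, r) space: each edge pixel (x, y) could lie on a
    circle of radius r around any center (x - r cos t, y - r sin t)."""
    acc = defaultdict(int)
    for x, y in edge_pixels:
        for r in radii:
            for k in range(n_theta):
                t = 2 * pi * k / n_theta
                acc[(round(x - r * cos(t)), round(y - r * sin(t)), r)] += 1
    return acc

# Synthetic rim: integer edge pixels on a circle of radius 5 centred at (20, 20).
rim = sorted({(round(20 + 5 * cos(2 * pi * i / 60)),
               round(20 + 5 * sin(2 * pi * i / 60))) for i in range(60)})
acc = hough_circles(rim, radii=[4, 5, 6])
best = max(acc, key=acc.get)   # strongest (cx, cy, r) bin
```

The triple loop shows why the abstract calls the HT inefficient, and why pre-segmenting the site into small basin images matters.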

  12. Text string detection from natural scenes by structure-based partition and grouping.

    PubMed

    Yi, Chucai; Tian, YingLi

    2011-09-01

    Text information in natural scene images serves as an important clue for many image-based applications such as scene understanding, content-based image retrieval, assistive navigation, and automatic geocoding. However, locating text in a complex background with multiple colors is a challenging task. In this paper, we explore a new framework to detect text strings with arbitrary orientations in complex natural scene images. Our proposed framework of text string detection consists of two steps: 1) image partition to find text character candidates based on local gradient features and color uniformity of character components and 2) character candidate grouping to detect text strings based on joint structural features of text characters in each text string, such as character size differences, distances between neighboring characters, and character alignment. By assuming that a text string has at least three characters, we propose two algorithms of text string detection: 1) an adjacent character grouping method and 2) a text line grouping method. The adjacent character grouping method calculates the sibling groups of each character candidate as string segments and then merges the intersecting sibling groups into a text string. The text line grouping method performs a Hough transform to fit text lines among the centroids of text candidates. Each fitted text line describes the orientation of a potential text string. The detected text string is represented by a rectangular region covering all characters whose centroids are cascaded in its text line. To improve efficiency and accuracy, our algorithms are carried out at multiple scales. The proposed methods outperform the state-of-the-art results on the public Robust Reading Dataset, which contains text only in horizontal orientation. Furthermore, the effectiveness of our methods in detecting text strings with arbitrary orientations is evaluated on the Oriented Scene Text Dataset, collected by ourselves, which contains text strings in nonhorizontal orientations.
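The text line grouping idea, Hough voting over character centroids, can be sketched with the standard rho-theta line parameterization; the centroid coordinates below are invented for illustration:

```python
from collections import Counter
from math import cos, sin, pi

def hough_lines(points, n_theta=180, rho_step=1.0):
    """Each point votes for every line through it, parameterized as
    rho = x cos(theta) + y sin(theta), with theta in [0, pi)."""
    acc = Counter()
    for x, y in points:
        for k in range(n_theta):
            t = pi * k / n_theta
            acc[(k, round((x * cos(t) + y * sin(t)) / rho_step))] += 1
    return acc

# Centroids of five character candidates lying on y = x + 2, plus one outlier.
centroids = [(0, 2), (1, 3), (2, 4), (3, 5), (4, 6), (9, 0)]
acc = hough_lines(centroids)
(best_theta_bin, best_rho_bin), votes = acc.most_common(1)[0]
```

The winning bin collects exactly the five collinear centroids, so its theta gives the orientation of the candidate text string while the outlier is ignored.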

  13. Non-rigid registration of 3D ultrasound for neurosurgery using automatic feature detection and matching.

    PubMed

    Machado, Inês; Toews, Matthew; Luo, Jie; Unadkat, Prashin; Essayed, Walid; George, Elizabeth; Teodoro, Pedro; Carvalho, Herculano; Martins, Jorge; Golland, Polina; Pieper, Steve; Frisken, Sarah; Golby, Alexandra; Wells, William

    2018-06-04

    The brain undergoes significant structural change over the course of neurosurgery, including highly nonlinear deformation and resection. It can be informative to recover the spatial mapping between structures identified in preoperative surgical planning and the intraoperative state of the brain. We present a novel feature-based method for achieving robust, fully automatic deformable registration of intraoperative neurosurgical ultrasound images. A sparse set of local image feature correspondences is first estimated between ultrasound image pairs, after which rigid, affine and thin-plate spline models are used to estimate dense mappings throughout the image. Correspondences are derived from 3D features, distinctive generic image patterns that are automatically extracted from 3D ultrasound images and characterized in terms of their geometry (i.e., location, scale, and orientation) and a descriptor of local image appearance. Feature correspondences between ultrasound images are achieved based on a nearest-neighbor descriptor matching and probabilistic voting model similar to the Hough transform. Experiments demonstrate our method on intraoperative ultrasound images acquired before and after opening of the dura mater, during resection and after resection in nine clinical cases. A total of 1620 automatically extracted 3D feature correspondences were manually validated by eleven experts and used to guide the registration. Then, using manually labeled corresponding landmarks in the pre- and post-resection ultrasound images, we show that our feature-based registration reduces the mean target registration error from an initial value of 3.3 to 1.5 mm. This result demonstrates that the 3D features promise to offer a robust and accurate solution for 3D ultrasound registration and to correct for brain shift in image-guided neurosurgery.
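The matching-plus-voting idea can be mimicked in miniature: match descriptors to their nearest neighbors, then let each putative match vote for a transformation (here a pure translation, far simpler than the paper's rigid/affine/thin-plate-spline pipeline) and keep the consensus bin. All feature values below are invented:

```python
from collections import Counter

def match_features(feats_a, feats_b):
    """feats_*: lists of (position, descriptor). Nearest-neighbor descriptor
    matching, then Hough-style voting on the implied translation; the
    dominant bin selects the geometrically consistent matches."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    matches = []
    for pa, da in feats_a:
        pb, _ = min(feats_b, key=lambda f: d2(da, f[1]))
        matches.append((pa, pb))
    votes = Counter((pb[0] - pa[0], pb[1] - pa[1]) for pa, pb in matches)
    shift, _ = votes.most_common(1)[0]
    inliers = [(pa, pb) for pa, pb in matches
               if (pb[0] - pa[0], pb[1] - pa[1]) == shift]
    return shift, inliers

# Toy case: image B is image A translated by (3, -2); the third descriptor
# is noisy and matches a spurious feature.
feats_a = [((0, 0), (1.0, 0.0)), ((5, 5), (0.0, 1.0)), ((2, 8), (0.5, 0.5))]
feats_b = [((3, -2), (1.0, 0.0)), ((8, 3), (0.0, 1.0)), ((9, 9), (0.4, 0.6))]
shift, inliers = match_features(feats_a, feats_b)
```

The surviving inliers play the role of the validated correspondences that drive the dense registration models.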

  14. A GENERAL RELATIVISTIC NULL HYPOTHESIS TEST WITH EVENT HORIZON TELESCOPE OBSERVATIONS OF THE BLACK HOLE SHADOW IN Sgr A*

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Psaltis, Dimitrios; Özel, Feryal; Chan, Chi-Kwan

    2015-12-01

    The half opening angle of a Kerr black hole shadow is always equal to (5 ± 0.2) GM/Dc², where M is the mass of the black hole and D is its distance from the Earth. Therefore, measuring the size of a shadow and verifying whether it is within this 4% range constitutes a null hypothesis test of general relativity. We show that the black hole in the center of the Milky Way, Sgr A*, is the optimal target for performing this test with upcoming observations using the Event Horizon Telescope (EHT). We use the results of optical/IR monitoring of stellar orbits to show that the mass-to-distance ratio for Sgr A* is already known to an accuracy of ∼4%. We investigate our prior knowledge of the properties of the scattering screen between Sgr A* and the Earth, the effects of which will need to be corrected for in order for the black hole shadow to appear sharp against the background emission. Finally, we explore an edge detection scheme for interferometric data and a pattern matching algorithm based on the Hough/Radon transform and demonstrate that the shadow of the black hole at 1.3 mm can be localized, in principle, to within ∼9%. All these results suggest that our prior knowledge of the properties of the black hole, of scattering broadening, and of the accretion flow can only limit this general relativistic null hypothesis test with EHT observations of Sgr A* to ≲10%.
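The null test itself is simple arithmetic: compare a measured shadow size against the prediction 5 GM/Dc² with a 4% band. In the sketch below the mass and distance are illustrative round values for Sgr A*, not figures from the abstract:

```python
from math import pi

# Predicted Kerr-shadow half opening angle: (5 +/- 0.2) GM / (D c^2).
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
M = 4.3e6 * 1.989e30     # assumed black-hole mass, kg (illustrative)
D = 8.2e3 * 3.086e16     # assumed distance, m (illustrative)

half_angle_rad = 5 * G * M / (D * c ** 2)
half_angle_uas = half_angle_rad * (180 / pi) * 3600 * 1e6  # microarcseconds

def consistent_with_gr(measured_uas, predicted_uas, band=0.04):
    """Null hypothesis test: is the measured shadow size within the 4% band?"""
    return abs(measured_uas - predicted_uas) <= band * predicted_uas
```

With these inputs the predicted half angle comes out near 26 microarcseconds, i.e. a shadow diameter of roughly 50 microarcseconds, which is why Sgr A* is a viable EHT target.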

  15. Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis

    NASA Astrophysics Data System (ADS)

    Han, Miaofei; Ma, Jinfeng; Li, Yan; Li, Meiling; Song, Yanli; Li, Qiang

    2015-03-01

    Accurate segmentation of organs at risk (OARs) is a key step in the treatment planning system (TPS) of image-guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body, including brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods include (1) threshold-based methods for organs with large contrast relative to adjacent structures, such as lungs, trachea, and skin; (2) context-driven Generalized Hough Transform-based methods combined with a graph cut algorithm for robust localization and segmentation of liver, kidneys, and spleen; and (3) atlas- and registration-based methods for segmentation of the heart and all organs in CT volumes of the head and pelvis. The segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts on a three-level score: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of "good" segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, and 96.9% for brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. Various organs at risk can be reliably segmented from CT scans by use of the three classes of segmentation methods.

  16. A PET/CT approach to spinal cord metabolism in amyotrophic lateral sclerosis.

    PubMed

    Marini, Cecilia; Cistaro, Angelina; Campi, Cristina; Calvo, Andrea; Caponnetto, Claudia; Nobili, Flavio Mariano; Fania, Piercarlo; Beltrametti, Mauro C; Moglia, Cristina; Novi, Giovanni; Buschiazzo, Ambra; Perasso, Annalisa; Canosa, Antonio; Scialò, Carlo; Pomposelli, Elena; Massone, Anna Maria; Bagnara, Maria Caludia; Cammarosano, Stefania; Bruzzi, Paolo; Morbelli, Silvia; Sambuceti, Gianmario; Mancardi, Gianluigi; Piana, Michele; Chiò, Adriano

    2016-10-01

    In amyotrophic lateral sclerosis, functional alterations within the brain have been intensively assessed, while progression of lower motor neuron damage has scarcely been defined. The aim of the present study was to develop a computational method to systematically evaluate spinal cord metabolism as a tool to monitor disease mechanisms. A new computational three-dimensional method to extract the spinal cord from (18)F-FDG PET/CT images was evaluated in 30 patients with spinal onset amyotrophic lateral sclerosis and 30 controls. The algorithm identified the skeleton on the CT images by using an extension of the Hough transform and then extracted the spinal canal and the spinal cord. In these regions, (18)F-FDG standardized uptake values were measured to estimate the metabolic activity of the spinal canal and cord. Measurements were performed in the cervical and dorsal spine and normalized to the corresponding value in the liver. Uptake of (18)F-FDG in the spinal cord was significantly higher in patients than in controls (p < 0.05). By contrast, no significant differences were observed in spinal cord and spinal canal volumes between the two groups. (18)F-FDG uptake was completely independent of age, gender, degree of functional impairment, disease duration and riluzole treatment. Kaplan-Meier analysis showed a higher mortality rate in patients with standardized uptake values above the fifth decile at the 3-year follow-up evaluation (log-rank test, p < 0.01). The independence of this value was confirmed by multivariate Cox analysis. Our computational three-dimensional method enabled the evaluation of spinal cord metabolism and volume and might represent a potential new window onto the pathophysiology of amyotrophic lateral sclerosis.

  17. The safety helmet detection technology and its application to the surveillance system.

    PubMed

    Wen, Che-Yen

    2004-07-01

    The Automatic Teller Machine (ATM) plays an important role in the modern economy. It provides a fast and convenient way to process transactions between banks and their customers. Unfortunately, it also provides a convenient way for criminals to get illegal money or use stolen ATM cards to extract money from their victims' accounts. For safety reasons, each ATM has a surveillance system to record customers' face information. However, when criminals use an ATM to withdraw money illegally, they usually hide their faces with something (in Taiwan, criminals usually use safety helmets to block their faces) to avoid having their face information recorded by the surveillance system, which decreases the efficiency of the surveillance system. In this paper, we propose a circle/circular arc detection method based upon the modified Hough transform and apply it to the detection of safety helmets for the surveillance systems of ATMs. Since the safety helmet location will be within the set of obtainable circles/circular arcs (if any exist), we use geometric features to verify whether any safety helmet exists in the set. The proposed method can be used to help surveillance systems record a customer's face information more precisely. If customers wear safety helmets that block their faces, the system can send a message to remind them to take off their helmets. Besides this, the method can be applied to the surveillance systems of banks by providing an early warning safeguard when any "customer" or "intruder" uses a safety helmet to prevent his/her face information from being recorded by the surveillance system. This will make the surveillance system more useful. Real images are used to analyze the performance of the proposed method.

  18. A Graph Theory Practice on Transformed Image: A Random Image Steganography

    PubMed Central

    Thanikaiselvan, V.; Arulmozhivarman, P.; Subashanthini, S.; Amirtharajan, Rengarajan

    2013-01-01

    The modern information age is enriched with advanced network communication expertise but unfortunately encounters countless security issues when dealing with secret and/or private information. The storage and transmission of secret information have become highly essential and have led to a deluge of research in this field. In this paper, an optimistic effort has been made to combine graceful graphs with the integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation begins with the conversion of the cover image into wavelet coefficients through the IWT, followed by embedding the secret image in randomly selected coefficients through graph theory. Finally, the stego-image is obtained by applying the inverse IWT. This method provides a maximum peak signal-to-noise ratio (PSNR) of 44 dB for 266646 bits. Thus, the proposed method gives high imperceptibility through a high PSNR value and high embedding capacity in the cover image due to the adaptive embedding scheme, along with high robustness against blind attacks through graph-theoretic random selection of coefficients. PMID:24453857
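The reported imperceptibility figure is a PSNR, which for two images reduces to a one-line formula. A generic sketch (the toy LSB embedding below is not the paper's IWT/graph scheme):

```python
from math import log10

def psnr(cover, stego, peak=255):
    """Peak signal-to-noise ratio (dB) between two equally sized images,
    given here as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(cover, stego)) / len(cover)
    return float('inf') if mse == 0 else 10 * log10(peak ** 2 / mse)

# A toy LSB flip changes every pixel by exactly 1, so MSE = 1 and
# PSNR = 10 * log10(255^2), about 48 dB -- comfortably imperceptible.
cover = list(range(256))
stego = [p ^ 1 for p in cover]
value = psnr(cover, stego)
```

Higher PSNR means the stego-image is closer to the cover; values above roughly 40 dB are conventionally taken as visually indistinguishable.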

  19. Color image encryption using random transforms, phase retrieval, chaotic maps, and diffusion

    NASA Astrophysics Data System (ADS)

    Annaby, M. H.; Rushdi, M. A.; Nehary, E. A.

    2018-04-01

    The recent tremendous proliferation of color imaging applications has been accompanied by growing research in data encryption to secure color images against adversary attacks. While recent color image encryption techniques perform reasonably well, they still exhibit vulnerabilities and deficiencies in terms of statistical security measures due to image data redundancy and inherent weaknesses. This paper proposes two encryption algorithms that largely treat these deficiencies and boost the security strength through novel integration of the random fractional Fourier transforms, phase retrieval algorithms, as well as chaotic scrambling and diffusion. We show through detailed experiments and statistical analysis that the proposed enhancements significantly improve security measures and immunity to attacks.

  20. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol lowering drugs

    PubMed Central

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin

    2013-01-01

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data (IPD) in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the Deviance Information Criterion (DIC) is used to select the best transformation model. Since the model is quite complex, a novel Monte Carlo Markov chain (MCMC) sampling scheme is developed to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol lowering drugs where the goal is to jointly model the three dimensional response consisting of Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG) (LDL-C, HDL-C, TG). Since the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately: however, a multivariate approach would be more appropriate since these variables are correlated with each other. A detailed analysis of these data is carried out using the proposed methodology. PMID:23580436

  1. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol-lowering drugs.

    PubMed

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G; Shah, Arvind K; Lin, Jianxin

    2013-10-15

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the deviance information criterion is used to select the best transformation model. Because the model is quite complex, we develop a novel Monte Carlo Markov chain sampling scheme to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol-lowering drugs where the goal is to jointly model the three-dimensional response consisting of low density lipoprotein cholesterol (LDL-C), high density lipoprotein cholesterol (HDL-C), and triglycerides (TG) (LDL-C, HDL-C, TG). Because the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate because these variables are correlated with each other. We carry out a detailed analysis of these data by using the proposed methodology. Copyright © 2013 John Wiley & Sons, Ltd.
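The Box-Cox transform at the heart of both records is a one-parameter power transform; λ = 0 recovers the log transform. A minimal sketch with invented triglyceride-like values:

```python
from math import exp, log

def boxcox(y, lam):
    """Box-Cox transform of a positive response y with parameter lam:
    (y^lam - 1)/lam, converging to log(y) as lam -> 0."""
    return log(y) if lam == 0 else (y ** lam - 1) / lam

def inv_boxcox(z, lam):
    """Inverse Box-Cox transform."""
    return exp(z) if lam == 0 else (lam * z + 1) ** (1 / lam)

# Right-skewed, triglyceride-like values; lam = 0 is the log transform,
# which pulls in the long right tail toward normality.
tg = [80.0, 120.0, 300.0, 900.0]
z = [boxcox(v, 0.0) for v in tg]
back = [inv_boxcox(v, 0.0) for v in z]
```

In the paper each of the three responses (LDL-C, HDL-C, TG) gets its own λ, with the λ values treated as parameters with prior distributions rather than fixed constants.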

  2. Effect of magnetic field on the flux pinning mechanisms in Al and SiC co-doped MgB2 superconductor

    NASA Astrophysics Data System (ADS)

    Kia, N. S.; Ghorbani, S. R.; Arabi, H.; Hossain, M. S. A.

    2018-07-01

    MgB2 superconductor samples co-doped with 0.02 wt. Al2O3 and 0-0.05 wt. SiC were studied by magnetization versus magnetic field (M-H) loop measurements at different temperatures. The critical current density was calculated by the Bean model, and the irreversibility field, Hirr, was obtained by the Kramer method. The pinning mechanism of the sample co-doped with 2% Al and 5% SiC was investigated in particular because it has the highest Hirr. The normalized volume pinning force f = F/Fmax as a function of the reduced magnetic field h = H/Hirr was obtained, and the pinning mechanism was studied with the Dew-Hughes model. It was found that the normal point pinning (NPP), normal surface pinning (NSP), and normal volume pinning (NVP) mechanisms play the main roles. The magnetic field and temperature dependence of the contributions of the NPP, NSP, and NVP pinning mechanisms were obtained. The results show that the contributions of the pinning mechanisms depend on temperature and magnetic field. From the temperature dependence of the critical current density within collective pinning theory, it was found that both δl pinning, due to spatial fluctuations of the charge-carrier mean free path, and δTc pinning, due to randomly distributed spatial variations in the transition temperature, coexist at zero magnetic field in the co-doped samples. Yet the charge-carrier mean-free-path fluctuation pinning (δl) is the only important pinning mechanism at non-zero magnetic fields.
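The Dew-Hughes analysis classifies pinning by fitting f(h) = A h^p (1-h)^q to the normalized pinning-force curve and reading off the peak position. The standard forms below (normal point, surface, and volume pinning) are the textbook ones, not fits from this paper:

```python
def f_point(h):    # normal point pinning: h (1-h)^2, peak at h = 1/3
    return h * (1 - h) ** 2

def f_surface(h):  # normal surface pinning: h^(1/2) (1-h)^2, peak at h = 0.2
    return h ** 0.5 * (1 - h) ** 2

def f_volume(h):   # normal volume pinning: (1-h)^2, monotonically decreasing
    return (1 - h) ** 2

# Locate each peak numerically on a grid of reduced fields h = H/Hirr.
grid = [i / 1000 for i in range(1, 1000)]
peak_point = max(grid, key=f_point)
peak_surface = max(grid, key=f_surface)
```

Comparing the measured peak of f(h) against these characteristic positions is how the relative contributions of the NPP, NSP, and NVP mechanisms are assigned.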

  3. Choosing a Transformation in Analyses of Insect Counts from Contagious Distributions with Low Means

    Treesearch

    W.D. Pepper; S.J. Zarnoch; G.L. DeBarr; P. de Groot; C.D. Tangren

    1997-01-01

    Guidelines based on computer simulation are suggested for choosing a transformation of insect counts from negative binomial distributions with low mean counts and high levels of contagion. Typical values and ranges of negative binomial model parameters were determined by fitting the model to data from 19 entomological field studies. Random sampling of negative binomial...

  4. The Mediating Role of Principals' Transformational Leadership Behaviors in Promoting Teachers' Emotional Wellness at Work: A Study in Israeli Primary Schools

    ERIC Educational Resources Information Center

    Berkovich, Izhak; Eyal, Ori

    2017-01-01

    The present study aims to examine whether principals' emotional intelligence (specifically, their ability to recognize emotions in others) makes them more effective transformational leaders, measured by the reframing of teachers' emotions. The study uses multisource data from principals and their teachers in 69 randomly sampled primary schools.…

  5. Programmable quantum random number generator without postprocessing.

    PubMed

    Nguyen, Lac; Rehain, Patrick; Sua, Yong Meng; Huang, Yu-Ping

    2018-02-15

    We demonstrate a viable source of unbiased quantum random numbers whose statistical properties can be arbitrarily programmed without the need for any postprocessing such as randomness distillation or distribution transformation. It is based on measuring the arrival time of single photons in shaped temporal modes that are tailored with an electro-optical modulator. We show that quantum random numbers can be created directly in customized probability distributions and pass all randomness tests of the NIST and Dieharder test suites without any randomness extraction. The min-entropies of such generated random numbers are measured close to the theoretical limits, indicating their near-ideal statistics and ultrahigh purity. Easy to implement and arbitrarily programmable, this technique can find versatile uses in a multitude of data analysis areas.
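Programming an arbitrary output distribution without postprocessing can be mimicked classically by inverse-transform sampling: push uniform randomness through the target distribution's inverse CDF. This is only an analogy to the optical temporal-mode shaping, not its implementation:

```python
import random
from math import log

def sample_programmed(inv_cdf, n, seed=1):
    """Inverse-transform sampling: map uniform draws through the
    target distribution's inverse CDF."""
    rng = random.Random(seed)
    return [inv_cdf(rng.random()) for _ in range(n)]

# "Program" an exponential distribution with rate 2: F^-1(u) = -ln(1-u)/2.
draws = sample_programmed(lambda u: -log(1 - u) / 2, 100_000)
mean = sum(draws) / len(draws)   # converges to 1/rate = 0.5
```

In the optical scheme the shaping happens physically, so the photon arrival times already follow the programmed distribution and no such numerical mapping (or any other postprocessing) is needed.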

  6. 2D non-separable linear canonical transform (2D-NS-LCT) based cryptography

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Muniraj, Inbarasan; Healy, John J.; Malallah, Ra'ed; Cui, Xiao-Guang; Ryle, James P.; Sheridan, John T.

    2017-05-01

    The 2D non-separable linear canonical transform (2D-NS-LCT) can describe a variety of paraxial optical systems. Digital algorithms to numerically evaluate the 2D-NS-LCT are not only important in modeling light field propagation but also of interest in various signal-processing-based applications, for instance optical encryption. Therefore, in this paper, for the first time, a 2D-NS-LCT based optical Double-Random-Phase-Encryption (DRPE) system is proposed which offers encryption of information in multiple degrees of freedom. Compared with the traditional DRPE systems based on (i) the Fourier transform (FT), (ii) the Fresnel transform (FST), (iii) the fractional Fourier transform (FRT), and (iv) the linear canonical transform (LCT), the proposed system is more secure and robust, as it encrypts the data with more degrees of freedom and an augmented key-space.

  7. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through alpha levels. We prove a theorem showing that the variance of a fuzzy random variable is a fuzzy number. We consider multi-objective linear programming (MOLP) with fuzzy random objective function coefficients and solve the problem by a variance approach. The approach transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By a weighting method, we obtain a linear program with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.

  8. The Lambert Way to Gaussianize Heavy-Tailed Data with the Inverse of Tukey's h Transformation as a Special Case

    PubMed Central

    Goerg, Georg M.

    2015-01-01

    I present a parametric, bijective transformation to generate heavy-tailed versions of arbitrary random variables. The tail behavior of this heavy-tailed Lambert W × F_X random variable depends on a tail parameter δ ≥ 0: for δ = 0, Y ≡ X; for δ > 0, Y has heavier tails than X. For X Gaussian, it reduces to Tukey's h distribution. The Lambert W function provides an explicit inverse transformation, which can thus remove heavy tails from observed data. It also provides closed-form expressions for the cumulative distribution function (cdf) and probability density function (pdf). As a special case, these yield analytic expressions for Tukey's h pdf and cdf. Parameters can be estimated by maximum likelihood, and applications to S&P 500 log-returns demonstrate the usefulness of the presented methodology. The R package LambertW implements most of the introduced methodology and is publicly available on CRAN. PMID:26380372
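The transformation and its Lambert-W inverse can be sketched directly: z = u·exp(δu²/2) fattens the tails, and u = sign(z)·√(W(δz²)/δ) removes them. The Newton-iteration W below is a self-contained stand-in for a library routine:

```python
from math import exp, log, sqrt, copysign

def lambert_w(x, tol=1e-12):
    """Principal branch of Lambert W (w * e^w = x) for x >= 0, via Newton."""
    w = log(1 + x)   # good starting guess on [0, inf)
    for _ in range(100):
        e = exp(w)
        step = (w * e - x) / (e * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

def heavy_tail(u, delta):
    """Forward transform: z = u * exp(delta/2 * u^2) fattens the tails."""
    return u * exp(0.5 * delta * u * u)

def remove_heavy_tail(z, delta):
    """Inverse via Lambert W: u = sign(z) * sqrt(W(delta * z^2) / delta)."""
    if delta == 0 or z == 0:
        return z
    return copysign(sqrt(lambert_w(delta * z * z) / delta), z)

delta, u = 0.2, 1.7
z = heavy_tail(u, delta)          # heavier-tailed value, |z| > |u|
u_back = remove_heavy_tail(z, delta)
```

The round trip recovers u exactly (up to floating-point precision), which is the bijectivity the abstract emphasizes; applied to data, the inverse direction "Gaussianizes" heavy-tailed observations.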

  9. The Use of Compressive Sensing to Reconstruct Radiation Characteristics of Wide-Band Antennas from Sparse Measurements

    DTIC Science & Technology

    2015-06-01

    of uniform versus nonuniform pattern reconstruction, of the transform function used, and of the minimum number of randomly distributed measurements needed to ... the radiation-frequency pattern's reconstruction using uniform and nonuniform randomly distributed samples, even though the pattern error manifests ... [Fig. 3: the nonuniform compressive-sensing reconstruction of the radiation pattern]

  10. Phase transformation changes in thermocycled nickel-titanium orthodontic wires.

    PubMed

    Berzins, David W; Roberts, Howard W

    2010-07-01

    In the oral environment, orthodontic wires are subject to thermal fluctuations. The purpose of this study was to investigate the effect of thermocycling on nickel-titanium (NiTi) wire phase transformations. Straight segments from single 27 and 35 degrees C copper NiTi (Ormco), Sentalloy (GAC), and Nitinol Heat Activated (3M Unitek) archwires were sectioned into 5 mm segments (n=20). A control group consisted of five randomly selected non-thermocycled segments. The remaining segments were thermocycled between 5 and 55 degrees C, with five randomly selected segments analyzed by differential scanning calorimetry (DSC; -100 to 150 degrees C at 10 degrees C/min) after 1000, 5000, and 10,000 cycles. Thermal peaks were evaluated, with results analyzed via ANOVA (alpha=0.05). Nitinol HA and Sentalloy did not demonstrate qualitative or quantitative differences in phase transformation behavior. Significant differences were observed in some of the copper NiTi transformation temperatures, as well as in the heating enthalpy of the 27 degrees C copper NiTi wires (p<0.05). Qualitatively, with increased thermocycling, the extent of R-phase in the heating peaks decreased in the 35 degrees C copper NiTi, and an austenite-to-martensite peak shoulder developed during cooling in the 27 degrees C copper NiTi. Repeated temperature fluctuations may contribute to qualitative and quantitative phase transformation changes in some NiTi wires. Copyright 2010 Academy of Dental Materials. All rights reserved.

  11. Efficient Agrobacterium-mediated transformation of the liverwort Marchantia polymorpha using regenerating thalli.

    PubMed

    Kubota, Akane; Ishizaki, Kimitsune; Hosaka, Masashi; Kohchi, Takayuki

    2013-01-01

    The thallus, the gametophyte body of the liverwort Marchantia polymorpha, develops clonal progenies called gemmae that are useful in the isolation and propagation of isogenic plants. Developmental timing is critical to Agrobacterium-mediated transformation, and high transformation efficiency has been achieved only with sporelings. Here we report an Agrobacterium-mediated transformation system for M. polymorpha using regenerating thalli. Thallus regeneration was induced by cutting the mature thallus across the apical-basal axis and incubating the basal portion of the thallus for 3 d. Regenerating thalli were infected with Agrobacterium carrying a binary vector that contained a selection marker, the hygromycin phosphotransferase gene, and hygromycin-resistant transformants were obtained with an efficiency of over 60%. Southern blot analysis verified random integration of 1 to 4 copies of the T-DNA into the M. polymorpha genome. This Agrobacterium-mediated transformation system for M. polymorpha should provide opportunities to perform genetic transformation without preparing spores and to generate a sufficient number of transformants with an isogenic background.

  12. The Extent of Principals' Application of the Transformational Leadership and Its Relationship to the Level of Job Satisfaction among Teachers of Galilee Region

    ERIC Educational Resources Information Center

    Haj, Sohil Jameel; Jubran, Ali Mohammed

    2016-01-01

    The current study aimed to identify the degree to which principals apply transformational leadership in school administration, the level of job satisfaction among teachers, and to investigate the relationship between the two. The sample consisted of (182) teachers, who were randomly selected from teachers of the Galilee region inside the Green…

  13. Random noise attenuation of non-uniformly sampled 3D seismic data along two spatial coordinates using non-equispaced curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi

    2018-04-01

    The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making such methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing noisy data from a non-uniform grid onto a specified uniform grid is proposed. The denoising is performed on every time slice extracted from the 3D noisy data along the source and receiver directions. First, the 2D non-equispaced fast Fourier transform (NFFT) is introduced into the conventional fast discrete curvelet transform (FDCT). The resulting non-equispaced fast discrete curvelet transform (NFDCT) is achieved through the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients are calculated using the spectral projected-gradient inversion algorithm for ℓ1-norm problems. Local threshold factors are then chosen for the uniform curvelet coefficients at each decomposition scale, yielding the effective curvelet coefficients for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. Examples on synthetic and real data reveal the effectiveness of the proposed approach for noise attenuation on non-uniformly sampled data, compared with the conventional FDCT method and the wavelet transform.
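
    As a greatly simplified illustration of the per-scale thresholding step, the sketch below uses a plain FFT as a stand-in for the non-equispaced curvelet transform, with made-up signal and noise levels; it is not the paper's method.

```python
import numpy as np

# Simplified stand-in for transform-domain threshold denoising: keep only
# coefficients above a noise-dependent threshold, then invert the transform.
rng = np.random.default_rng(9)
t = np.linspace(0, 1, 512, endpoint=False)
clean = np.sin(2 * np.pi * 12 * t)            # hypothetical clean trace
noisy = clean + 0.5 * rng.standard_normal(t.size)

coef = np.fft.rfft(noisy)
sigma = 0.5 * np.sqrt(t.size / 2)             # per-coefficient noise level
coef[np.abs(coef) < 3.0 * sigma] = 0          # hard thresholding
denoised = np.fft.irfft(coef, n=t.size)

snr = lambda ref, est: 10 * np.log10(np.sum(ref**2) / np.sum((ref - est)**2))
```

    The curvelet version applies the same idea per decomposition scale, with local rather than global threshold factors.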

  14. Robust Learning Control Design for Quantum Unitary Transformations.

    PubMed

    Wu, Chengzhi; Qi, Bo; Chen, Chunlin; Dong, Daoyi

    2017-12-01

    Robust control design for quantum unitary transformations has been recognized as a fundamental and challenging task in the development of quantum information processing due to unavoidable decoherence or operational errors in the experimental implementation of quantum operations. In this paper, we extend the systematic methodology of the sampling-based learning control (SLC) approach with a gradient flow algorithm for the design of robust quantum unitary transformations. The SLC approach first uses a "training" process to find an optimal control strategy robust against certain ranges of uncertainties. Then a number of randomly selected samples are tested and the performance is evaluated according to their average fidelity. The approach is applied to three typical examples of robust quantum transformation problems: robust quantum transformations in a three-level quantum system, in a superconducting quantum circuit, and in a spin chain system. Numerical results demonstrate the effectiveness of the SLC approach and show its potential applications in various implementations of quantum unitary transformations.

  15. Initial assessment of the intensity distribution of the 2011 Mw5.8 Mineral, Virginia, earthquake

    USGS Publications Warehouse

    Hough, Susan E.

    2012-01-01

    The intensity data collected by the U.S. Geological Survey (USGS) "Did You Feel It?" (DYFI) Website (USGS, DYFI; http://earthquake.usgs.gov/earthquakes/dyfi/events/se/082311a/us/index.html, last accessed Sept 2011) for the Mw5.8 Mineral, Virginia, earthquake are unprecedented in their spatial richness and geographical extent. More than 133,000 responses were received during the first week following the earthquake. Although intensity data have traditionally been regarded as imprecise and generally suspect (e.g., Hough 2000), there is a growing appreciation for the potential utility of spatially rich, systematically determined DYFI data to address key questions in earthquake ground-motion science (Atkinson and Wald, 2007; Hauksson et al., 2008).

  16. Creation of hybrid optoelectronic systems for document identification

    NASA Astrophysics Data System (ADS)

    Muravsky, Leonid I.; Voronyak, Taras I.; Kulynych, Yaroslav P.; Maksymenko, Olexander P.; Pogan, Ignat Y.

    2001-06-01

    The use of security devices based on a joint transform correlator (JTC) architecture for the identification of credit cards and other products is very promising. Experimental demonstrations of the random phase encoding technique for security verification show that hybrid JTCs can be successfully utilized. The random phase encoding technique provides a very high level of protection for the products and items to be identified. However, realizing this technique requires overcoming certain practical problems. To solve some of these problems and simultaneously improve the security of documents and other products, we propose to use a transformed phase mask (TPM) as the input object in an optical correlator. This mask is synthesized from a random binary pattern (RBP), which is directly used to fabricate a reference phase mask (RPM). To obtain the TPM, we first divide the RBP into several parts (for example, K parts) of arbitrary shape and then fabricate the TPM from this transformed RBP. The fabricated TPM can be bonded as an optical mark to any product or item to be identified. If the RPM and the TPM are placed at the optical correlator input, the first diffracted order of the output correlation signal contains K narrow autocorrelation peaks. The distances between the peaks and the peak intensities can be treated as the terms of the identification feature vector (FV) for TPM identification.

  17. “SALOME gave my dignity back”: The role of randomized heroin trials in transforming lives in the Downtown Eastside of Vancouver, Canada

    PubMed Central

    Jozaghi, Ehsan

    2014-01-01

    Although numerous studies on heroin-assisted treatment (HAT) have been published in leading international journals, little attention has been given to HAT's clients, their stories, and what constitutes the most influential factor in the treatment process. The present study investigates the role of HAT in transforming the lives of injection drug users (IDUs) in Vancouver, Canada. This qualitative study focuses on 16 in-depth interviews with patients from the randomized trials of HAT. Interviews were transcribed verbatim and analyzed thematically using NVivo 10 software. The findings revealed a positive change in many respects: the randomized trials reduced criminal activity, sex work, and illicit drug use. In addition, the trials improved the health and social functioning of their clients, with some participants acquiring work or volunteer positions. Many of the participants have been able to reconnect with their family members, which was not possible before the program. Furthermore, the relationship between the staff and patients at the project appears to have transformed the behavior of participants. Attending HAT in Vancouver has been particularly effective in creating a unique microenvironment where IDUs who have attended HAT have been able to form a collective identity advocating for their rights. The results of this research point to the need for continuation of the project beyond the current study, leading toward a permanent program. PMID:24646474

  19. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression for the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNRs of the images reconstructed by the PGI and CGI algorithms decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising in the reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
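
    The pseudo-inverse reconstruction idea can be sketched in 1-D with numpy (our illustration, not the paper's code; the paper works with 2-D images and the FGI/PGI/CGI variants):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
obj = rng.random(n)                       # hypothetical 1-D object

# Full n x n DFT matrix as the deterministic measurement matrix.
F = np.fft.fft(np.eye(n))
recon_full = (np.linalg.pinv(F) @ (F @ obj)).real

# Undersampled case: keep only the first m rows of the DFT matrix.
m = 32
A = F[:m, :]
recon_under = (np.linalg.pinv(A) @ (A @ obj)).real

err_full = np.linalg.norm(recon_full - obj) / np.linalg.norm(obj)
err_under = np.linalg.norm(recon_under - obj) / np.linalg.norm(obj)
```

    With as many measurements as pixels the pseudo-inverse recovers the object exactly; undersampling degrades the reconstruction, mirroring the PSNR behavior described above.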

  20. Cryptosystem for Securing Image Encryption Using Structured Phase Masks in Fresnel Wavelet Transform Domain

    NASA Astrophysics Data System (ADS)

    Singh, Hukum

    2016-12-01

    A cryptosystem for securing image encryption is considered by using double random phase encoding in the Fresnel wavelet transform (FWT) domain. Random phase masks (RPMs) and structured phase masks (SPMs) based on a devil's vortex toroidal lens (DVTL) are used in the spatial as well as the Fourier planes. The images to be encrypted are first Fresnel transformed, and then a single-level discrete wavelet transform (DWT) is applied to decompose them into the LL, HL, LH and HH matrices. The resulting matrices from the DWT are multiplied by additional RPMs, and the resultants are subjected to an inverse DWT to give the encrypted images. The scheme is more secure because of the many parameters used in the construction of the SPM. The original images are recovered by using the correct parameters of the FWT and SPM. The SPM based on the DVTL increases security by enlarging the key space for encryption and decryption. The proposed encryption scheme is a lens-less optical system, and its digital implementation has been performed using MATLAB 7.6.0 (R2008a). The computed mean squared error between the retrieved and input images shows the efficacy of the scheme. The sensitivity to encryption parameters, robustness against occlusion, entropy, and multiplicative Gaussian noise attacks have been analysed.

  1. Multilevel Models for Intensive Longitudinal Data with Heterogeneous Autoregressive Errors: The Effect of Misspecification and Correction with Cholesky Transformation

    PubMed Central

    Jahng, Seungmin; Wood, Phillip K.

    2017-01-01

    Intensive longitudinal studies, such as ecological momentary assessment studies using electronic diaries, are gaining popularity across many areas of psychology. Multilevel models (MLMs) are the most widely used analytical tools for intensive longitudinal data (ILD). Although ILD often have individually distinct patterns of serial correlation of measures over time, inferences about the fixed effects and random components in MLMs are made under the assumption that all variance and autocovariance components are homogeneous across individuals. In the present study, we introduced a multilevel model with a Cholesky transformation to model ILD with individually heterogeneous covariance structures. In addition, the performance of the transformation method and the effects of misspecifying the heterogeneous covariance structure were investigated through a Monte Carlo simulation. We found that, if individually heterogeneous covariances are incorrectly assumed to be homogeneous independent or homogeneous autoregressive, MLMs produce highly biased estimates of the variance of the random intercepts and of the standard errors of the fixed intercept and the fixed effect of a level-2 covariate when the average autocorrelation is high. For intensive longitudinal data with individual-specific residual covariance, the suggested transformation method showed lower bias in those estimates than the misspecified models when the number of repeated observations within individuals is 50 or more. PMID:28286490
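
    The Cholesky idea can be sketched numerically (a toy illustration, not the authors' estimation code), assuming an AR(1) residual covariance for one individual:

```python
import numpy as np

# Given an individual's AR(1) residual covariance Sigma with entries
# sigma^2 * rho^|i-j|, premultiplying the residual vector by inv(L), where
# Sigma = L @ L.T (Cholesky), yields uncorrelated (whitened) errors that
# standard MLM machinery can then handle.
rng = np.random.default_rng(1)
T, rho, sigma2 = 50, 0.7, 1.0        # hypothetical series length and AR(1) parameters

i = np.arange(T)
Sigma = sigma2 * rho ** np.abs(i[:, None] - i[None, :])
L = np.linalg.cholesky(Sigma)

e = L @ rng.standard_normal(T)       # simulate one AR(1) residual series
z = np.linalg.solve(L, e)            # Cholesky-transformed (whitened) residuals
```

    The whitened series z has identity covariance by construction, which is why the transformation removes the individually heterogeneous serial correlation.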

  2. Low-pass parabolic FFT filter for airborne and satellite lidar signal processing.

    PubMed

    Jiao, Zhongke; Liu, Bo; Liu, Enhai; Yue, Yongjian

    2015-10-14

    In order to reduce random errors in lidar signal inversion, a low-pass parabolic fast Fourier transform filter (PFFTF) was introduced for noise elimination. A compact airborne Raman lidar system was studied, which applied the PFFTF to process lidar signals. The mathematics and simulations of the PFFTF, along with low-pass filters, the sliding mean filter (SMF), the median filter (MF), empirical mode decomposition (EMD) and the wavelet transform (WT), were studied, and the practical engineering value of the PFFTF for lidar signal processing was verified. The method has been tested on real lidar signals from the Wyoming Cloud Lidar (WCL). Results show that the PFFTF has advantages over the other methods: it preserves the high-frequency components well while simultaneously removing much of the random noise in lidar signal processing.

  3. Self-organization of maze-like structures via guided wrinkling.

    PubMed

    Bae, Hyung Jong; Bae, Sangwook; Yoon, Jinsik; Park, Cheolheon; Kim, Kibeom; Kwon, Sunghoon; Park, Wook

    2017-06-01

    Sophisticated three-dimensional (3D) structures found in nature are self-organized by bottom-up natural processes. To artificially construct such complex systems, various bottom-up fabrication methods, designed to transform 2D structures into 3D structures, have been developed as alternatives to conventional top-down lithography processes. We present a different self-organization approach, in which we construct microstructures with periodic and ordered, yet random, architecture, like mazes. For this purpose, we transformed planar surfaces by wrinkling, directly using the randomly generated ridges as maze walls. Highly regular maze structures, consisting of several tessellations with customized designs, were fabricated by precisely controlling the wrinkling with a ridge-guiding structure, analogous to the creases in origami. The method presented here could have widespread applications in various material systems with multiple length scales.

  4. Fast-match on particle swarm optimization with variant system mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Yuehuang; Fang, Xin; Chen, Jie

    2018-03-01

    Fast-Match is a fast and effective algorithm for approximate template matching under 2D affine transformations, which can match the target with maximum similarity without knowing the target's pose. It relies on the minimum sum-of-absolute-differences (SAD) error to obtain the best affine transformation. The algorithm is widely used in the field of image matching because of its speed and robustness. In this paper, our approach is to search for an approximate affine transformation using the Particle Swarm Optimization (PSO) algorithm. We treat each potential transformation as a particle that possesses a memory function. Each particle is given a random velocity and flows through the 2D affine transformation space. To accelerate the algorithm and improve its ability to find the global optimum, we introduce a variant system mechanism on this basis. The benefit is that we avoid matching against a huge number of potential transformations and falling into local optima, so that a few transformations suffice to approximate the optimal solution. The experimental results show that our method is faster and achieves higher accuracy with a smaller affine transformation search space.
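
    A toy sketch of the PSO search over transformations (our illustration: translation-only for brevity, whereas the paper searches the full 2D affine space, and without the variant system mechanism):

```python
import numpy as np

# Each particle is a candidate shift, scored by the sum of absolute
# differences (SAD) between the template and the image patch at that shift.
rng = np.random.default_rng(2)
img = rng.random((64, 64))
ty, tx = 17, 23                       # hypothetical true template location
tmpl = img[ty:ty+16, tx:tx+16]

def sad(p):
    y, x = int(round(p[0])) % 48, int(round(p[1])) % 48
    return np.abs(img[y:y+16, x:x+16] - tmpl).sum()

n_particles, n_iter = 30, 60
pos = rng.uniform(0, 48, (n_particles, 2))
vel = rng.uniform(-2, 2, (n_particles, 2))
pbest, pbest_val = pos.copy(), np.array([sad(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
init_best = pbest_val.min()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    # Velocity update: inertia + pull toward personal and global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (pos + vel) % 48            # keep particles inside the search space
    vals = np.array([sad(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

    Each particle's memory (its personal best) and the shared global best drive the swarm toward low-SAD transformations without exhaustively enumerating the space.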

  5. Joint transform correlator optical encryption system: Extensions of the recorded encrypted signal and its inverse Fourier transform

    NASA Astrophysics Data System (ADS)

    Galizzi, Gustavo E.; Cuadrado-Laborde, Christian

    2015-10-01

    In this work we study the joint transform correlator setup, finding two analytical expressions for the extensions of the joint power spectrum and its inverse Fourier transform. We found that an optimum efficiency is reached, when the bandwidth of the key code is equal to the sum of the bandwidths of the image plus the random phase mask (RPM). The quality of the decryption is also affected by the ratio between the bandwidths of the RPM and the input image, being better as this ratio increases. In addition, the effect on the decrypted image when the detection area is lower than the encrypted signal extension was analyzed. We illustrate these results through several numerical examples.

  6. Speckle lithography for fabricating Gaussian, quasi-random 2D structures and black silicon structures.

    PubMed

    Bingi, Jayachandra; Murukeshan, Vadakke Matham

    2015-12-18

    A laser speckle pattern is a granular structure formed by random coherent wavelet interference and is generally considered noise in optical systems, including photolithography. Contrary to this, in this paper we use the speckle pattern to generate predictable and controlled Gaussian random structures and quasi-random structures photo-lithographically. The random structures made using the proposed speckle lithography technique are quantified based on speckle statistics, the radial distribution function (RDF) and the fast Fourier transform (FFT). Control over the speckle size, density and speckle clustering facilitates the successful fabrication of black silicon with different surface structures. The controllability and tunability of the randomness make this technique a robust method for fabricating predictable 2D Gaussian random structures and black silicon structures. These structures can significantly enhance light trapping in solar cells and hence enable improved energy harvesting. Further, this technique can enable efficient fabrication of disordered photonic structures and random-media-based devices.
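
    The speckle statistics being exploited can be simulated in a few lines (our illustration, not the authors' optical setup): propagating a random-phase aperture with an FFT yields fully developed speckle, whose intensity contrast is close to 1.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256

# Limited aperture with uniformly random phase; zero outside the aperture.
aperture = np.zeros((N, N), dtype=complex)
phase = rng.uniform(0, 2 * np.pi, (N // 4, N // 4))
aperture[:N//4, :N//4] = np.exp(1j * phase)

field = np.fft.fft2(aperture)         # far-field (Fourier) propagation
intensity = np.abs(field) ** 2

# Classic statistic for fully developed, polarized speckle: contrast ~ 1.
contrast = intensity.std() / intensity.mean()
```

    The aperture size relative to the grid sets the mean speckle grain size, which is the handle the lithography technique tunes.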

  7. Studies in astronomical time series analysis: Modeling random processes in the time domain

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
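
    The fit-an-AR-model-then-convert-to-MA step can be sketched as follows (our numpy illustration; the original work used a FORTRAN implementation):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an AR(2) series: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + pulse_t
true_a = np.array([0.6, -0.3])
x = np.zeros(5000)
for t in range(2, x.size):
    x[t] = true_a[0] * x[t-1] + true_a[1] * x[t-2] + rng.standard_normal()

def yule_walker(x, p):
    # Estimate AR(p) coefficients from sample autocovariances.
    x = x - x.mean()
    r = np.array([np.dot(x[:x.size - k], x[k:]) / x.size for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

a = yule_walker(x, 2)

def ar_to_ma(a, n_terms):
    # Impulse response of the AR filter: psi_0 = 1, psi_k = sum_j a_j psi_{k-1-j}.
    psi = np.zeros(n_terms)
    psi[0] = 1.0
    for k in range(1, n_terms):
        psi[k] = sum(a[j] * psi[k - 1 - j] for j in range(min(len(a), k)))
    return psi

psi = ar_to_ma(a, 20)   # truncated MA representation: pulse-response weights
```

    The MA weights psi give the "sequence of pulses" interpretation described above: each innovation enters the series with amplitude psi_k after k steps.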

  8. Multifractal surrogate-data generation algorithm that preserves pointwise Hölder regularity structure, with initial applications to turbulence

    NASA Astrophysics Data System (ADS)

    Keylock, C. J.

    2017-03-01

    An algorithm is described that can generate random variants of a time series while preserving the probability distribution of original values and the pointwise Hölder regularity. Thus, it preserves the multifractal properties of the data. Our algorithm is similar in principle to well-known algorithms based on the preservation of the Fourier amplitude spectrum and original values of a time series. However, it is underpinned by a dual-tree complex wavelet transform rather than a Fourier transform. Our method, which we term the iterated amplitude adjusted wavelet transform, can be used to generate bootstrapped versions of multifractal data, and because it preserves the pointwise Hölder regularity but not the local Hölder regularity, it can be used to test hypotheses concerning the presence of oscillating singularities in a time series, an important feature of turbulence and econophysics data. Because the locations of the data values are randomized with respect to the multifractal structure, hypotheses about their mutual coupling can be tested, which is important for the velocity-intermittency structure of turbulence and self-regulating processes.
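
    For orientation, the well-known Fourier-based analogue that the authors reference (the iterated amplitude adjusted Fourier transform) can be sketched as below; the paper's contribution replaces the Fourier transform with a dual-tree complex wavelet transform.

```python
import numpy as np

# IAAFT-style surrogate: alternately impose the original Fourier amplitude
# spectrum and restore the exact original value distribution by rank ordering.
rng = np.random.default_rng(5)
x = np.cumsum(rng.standard_normal(512))   # toy correlated series

amp = np.abs(np.fft.rfft(x))              # target amplitude spectrum
sorted_x = np.sort(x)                     # target value distribution

s = rng.permutation(x)                    # start from a random shuffle
for _ in range(100):
    # Step 1: keep current phases, impose the target amplitudes.
    phases = np.angle(np.fft.rfft(s))
    s = np.fft.irfft(amp * np.exp(1j * phases), n=x.size)
    # Step 2: restore the exact original values via rank ordering.
    s = sorted_x[np.argsort(np.argsort(s))]
```

    The wavelet version in the paper preserves pointwise Hölder regularity as well, which the Fourier surrogate above does not.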

  9. Measurement Matrix Design for Phase Retrieval Based on Mutual Information

    NASA Astrophysics Data System (ADS)

    Shlezinger, Nir; Dabora, Ron; Eldar, Yonina C.

    2018-01-01

    In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup, and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices, based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix, and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.
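
    The measurement model can be illustrated with a classic alternating-minimization (error-reduction) recovery sketch for a random Gaussian matrix (our illustration, not the paper's mutual-information-based design; the signal is real-valued here, so the phase ambiguity reduces to a sign ambiguity).

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 20, 120                       # SOI length, number of measurements

x = rng.standard_normal(n)           # signal of interest
A = rng.standard_normal((m, n))      # random Gaussian measurement matrix
y = np.abs(A @ x)                    # noiseless magnitude-only measurements

# Alternate between guessing the lost signs and a least-squares update.
A_pinv = np.linalg.pinv(A)
xh = rng.standard_normal(n)          # random initialization
res_init = np.linalg.norm(np.abs(A @ xh) - y)
for _ in range(500):
    xh = A_pinv @ (y * np.sign(A @ xh))
res_final = np.linalg.norm(np.abs(A @ xh) - y)

# Recovery can only be judged up to the inherent sign ambiguity.
err = min(np.linalg.norm(xh - x), np.linalg.norm(xh + x)) / np.linalg.norm(x)
```

    Each iteration is non-increasing in the magnitude residual, which is the property the assertion below checks; full recovery additionally depends on the oversampling ratio m/n and the initialization.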

  10. Synchronization invariance under network structural transformations

    NASA Astrophysics Data System (ADS)

    Arola-Fernández, Lluís; Díaz-Guilera, Albert; Arenas, Alex

    2018-06-01

    Synchronization processes are ubiquitous despite the many connectivity patterns that complex systems can show. Usually, the emergence of synchrony is a macroscopic observable; however, the microscopic details of the system, as, e.g., the underlying network of interactions, is many times partially or totally unknown. We already know that different interaction structures can give rise to a common functionality, understood as a common macroscopic observable. Building upon this fact, here we propose network transformations that keep the collective behavior of a large system of Kuramoto oscillators invariant. We derive a method based on information theory principles, that allows us to adjust the weights of the structural interactions to map random homogeneous in-degree networks into random heterogeneous networks and vice versa, keeping synchronization values invariant. The results of the proposed transformations reveal an interesting principle; heterogeneous networks can be mapped to homogeneous ones with local information, but the reverse process needs to exploit higher-order information. The formalism provides analytical insight to tackle real complex scenarios when dealing with uncertainty in the measurements of the underlying connectivity structure.

  11. FIBER OPTICS. ACOUSTOOPTICS: Compression of random pulses in fiber waveguides

    NASA Astrophysics Data System (ADS)

    Aleshkevich, Viktor A.; Kozhoridze, G. D.

    1990-07-01

    An investigation is made of the compression of randomly modulated signal + noise pulses during their propagation in a fiber waveguide. An allowance is made for a cubic nonlinearity and quadratic dispersion. The relationships governing the kinetics of transformation of the time envelope, and those which determine the duration and intensity of a random pulse are derived. The expressions for the optimal length of a fiber waveguide and for the maximum degree of compression are compared with the available data for regular pulses and the recommendations on selection of the optimal parameters are given.

  12. Multiple imputation in the presence of non-normal data.

    PubMed

    Lee, Katherine J; Carlin, John B

    2017-02-20

    Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
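
    A bare-bones sketch of predictive mean matching on simulated skewed data (our illustration; it ignores the Bayesian parameter draws that distinguish type 1 from type 2 matching):

```python
import numpy as np

# For each missing case, find the k observed donors whose predicted values
# are closest to the missing case's prediction, then impute an actually
# observed *raw* value drawn from those donors.
rng = np.random.default_rng(7)
n = 200
z = rng.standard_normal(n)                          # fully observed covariate
x = np.exp(1.0 + 0.5 * z + 0.3 * rng.standard_normal(n))  # skewed variable
miss = rng.random(n) < 0.5                          # 50% missing completely at random

# Linear prediction fitted on the observed cases only.
B = np.polyfit(z[~miss], x[~miss], 1)
pred = np.polyval(B, z)

k = 5
imputed = x.copy()
for i in np.flatnonzero(miss):
    d = np.abs(pred[~miss] - pred[i])               # distance between predicted means
    donors = x[~miss][np.argsort(d)[:k]]            # k nearest observed donors
    imputed[i] = rng.choice(donors)
```

    Because imputations are drawn from observed values, PMM never produces implausible values for a skewed variable, which is why it is a useful alternative to transformation.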

  13. Understanding the magnitude dependence of PGA and PGV in NGA-West 2 data

    USGS Publications Warehouse

    Baltay, Annemarie S.; Hanks, Thomas C.

    2014-01-01

    The Next Generation Attenuation‐West 2 (NGA‐West 2) 2014 ground‐motion prediction equations (GMPEs) model ground motions as a function of magnitude and distance, using empirically derived coefficients (e.g., Bozorgnia et al., 2014); as such, these GMPEs do not clearly employ earthquake source parameters beyond moment magnitude (M) and focal mechanism. To better understand the magnitude‐dependent trends in the GMPEs, we build a comprehensive earthquake source‐based model to explain the magnitude dependence of peak ground acceleration and peak ground velocity in the NGA‐West 2 ground‐motion databases and GMPEs. Our model employs existing models (Hanks and McGuire, 1981; Boore, 1983, 1986; Anderson and Hough, 1984) that incorporate a point‐source Brune model, including a constant stress drop and the high‐frequency attenuation parameter κ0, random vibration theory, and a finite‐fault assumption at the large magnitudes to describe the data from magnitudes 3 to 8. We partition this range into four different magnitude regions, each of which has different functional dependences on M. Use of the four magnitude partitions separately allows greater understanding of what happens in any one subrange, as well as the limiting conditions between the subranges. This model provides a remarkably good fit to the NGA data for magnitudes from 3

  14. Overproduction of recombinant laccase using a homologous expression system in Coriolus versicolor.

    PubMed

    Kajita, Shinya; Sugawara, Shinsuke; Miyazaki, Yasumasa; Nakamura, Masaya; Katayama, Yoshihiro; Shishido, Kazuo; Iimura, Yosuke

    2004-12-01

    One of the major extracellular enzymes of the white-rot fungus Coriolus versicolor is laccase, which is involved in the degradation of lignin. We constructed a homologous system for the expression of a gene for laccase III (cvl3) in C. versicolor, using a chimeric laccase gene driven by the promoter of a gene for glyceraldehyde-3-phosphate dehydrogenase (gpd) from this fungus. We transformed C. versicolor successfully by introducing both a gene for hygromycin B phosphotransferase (hph) and the chimeric laccase gene. In three independent experiments, we recovered 47 hygromycin-resistant transformants at a transformation frequency of 13 transformants per microgram of plasmid DNA. We confirmed the introduction of the chimeric laccase gene into the mycelia of transformants by polymerase chain reaction in nine randomly selected transformants. Overproduction of extracellular laccase by the transformants was revealed by a colorimetric assay for laccase activity. We examined the transformant (T2) that had the highest laccase activity and found that its activity was significantly higher than that of the wild type, particularly in the presence of copper (II). Our transformation system should contribute to the efficient production of the extracellular proteins of C. versicolor for the accelerated degradation of lignin and aromatic pollutants.

  15. Using random forests for assistance in the curation of G-protein coupled receptor databases.

    PubMed

    Shkurin, Aleksei; Vellido, Alfredo

    2017-08-18

    Biology is experiencing a gradual but fast transformation from a laboratory-centred science towards a data-centred one. As such, it requires robust data engineering and the use of quantitative data analysis methods as part of database curation. This paper focuses on G protein-coupled receptors, a large and heterogeneous super-family of cell membrane proteins of interest to biology in general. One of its families, Class C, is of particular interest to pharmacology and drug design. This family is quite heterogeneous on its own, and the discrimination of its several sub-families is a challenging problem. In the absence of known crystal structure, such discrimination must rely on their primary amino acid sequences. We are interested not as much in achieving maximum sub-family discrimination accuracy using quantitative methods, but in exploring sequence misclassification behavior. Specifically, we are interested in isolating those sequences showing consistent misclassification, that is, sequences that are very often misclassified and almost always to the same wrong sub-family. Random forests are used for this analysis due to their ensemble nature, which makes them naturally suited to gauge the consistency of misclassification. This consistency is here defined through the voting scheme of their base tree classifiers. Detailed consistency results for the random forest ensemble classification were obtained for all receptors and for all data transformations of their unaligned primary sequences. Shortlists of the most consistently misclassified receptors for each subfamily and transformation, as well as an overall shortlist including those cases that were consistently misclassified across transformations, were obtained. The latter should be referred to experts for further investigation as a data curation task. The automatic discrimination of the Class C sub-families of G protein-coupled receptors from their unaligned primary sequences shows clear limits. 
This study has investigated in some detail the consistency of their misclassification using random forest ensemble classifiers. Different sub-families have been shown to display very different discrimination consistency behaviors. The individual identification of consistently misclassified sequences should provide a tool for quality control to GPCR database curators.

  16. QR code-based non-linear image encryption using Shearlet transform and spiral phase transform

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Bhaduri, Basanta; Hennelly, Bryan

    2018-02-01

    In this paper, we propose a new quick response (QR) code-based non-linear technique for image encryption using the Shearlet transform (ST) and the spiral phase transform. The input image is first converted into a QR code and then scrambled using the Arnold transform. The scrambled image is then decomposed into five coefficients using the ST, and the first Shearlet coefficient, C1, is interchanged with a security key before performing the inverse ST. The output after the inverse ST is then modulated with a random phase mask and further spiral phase transformed to get the final encrypted image. The first coefficient, C1, is used as a private key for decryption. The sensitivity of the security keys is analysed in terms of correlation coefficient and peak signal-to-noise ratio. The robustness of the scheme is also checked against various attacks such as noise, occlusion and special attacks. Numerical simulation results are shown in support of the proposed technique, and an optoelectronic set-up for encryption is also proposed.

  17. Surface and Bulk Carbide Transformations in High-Speed Steel

    PubMed Central

    Godec, M.; Večko Pirtovšek, T.; Šetina Batič, B.; McGuiness, P.; Burja, J.; Podgornik, B.

    2015-01-01

    We have studied the transformation of carbides in AISI M42 high-speed steels in the temperature window used for forging. The annealing was found to result in the partial transformation of the large, metastable M2C carbides into small, more stable grains of M6C, with an associated change in the crystal orientation. In addition, MC carbides form during the transformation of M2C to M6C. From the high-speed-steel production point of view, it is beneficial to have large, metastable carbides in the cast structure, which, during annealing prior to forging, transform into a structure of polycrystalline carbides. Such carbides can be easily decomposed into several small carbides, which are then randomly distributed in the microstructure. The results also show an interesting difference in the carbide-transformation reactions on the surface versus the bulk of the alloy, which has implications for in-situ studies of bulk phenomena that are based on surface observations. PMID:26537780

  18. Correlative weighted stacking for seismic data in the wavelet domain

    USGS Publications Warehouse

    Zhang, S.; Xu, Y.; Xia, J.; ,

    2004-01-01

    Horizontal stacking plays a crucial role in modern seismic data processing, for it not only compresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples showed that random noise in adjacent traces exhibits correlation and coherence. The average stacking and weighted stacking based on the conventional correlative function both result in false events, which are caused by noise. Wavelet transform and high order statistics are very useful methods for modern signal processing. The multiresolution analysis in wavelet theory can decompose a signal on different scales, and a high order correlative function can inhibit correlative noise, for which the conventional correlative function is of no use. Based on the theory of wavelet transform and high order statistics, a high order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common midpoint gathers after the normal moveout correction by weights that are calculated through high order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and compressing the correlative random noise.

  19. The Shock and Vibration Bulletin. Part 3. Vehicle Dynamics and Vibration: Test and Criteria.

    DTIC Science & Technology

    1983-05-01

    transformation. As stability is assumed in forward motion. used here it invariably means the Hydraulic suspension is formed for each group static...are used to calculate the random rms stress according to the type Tolerable sound pressure levels were of structure. Appropriate random S-N curves...DC AIRCRAFT SURVIVABILITY Dale B. Atkinson, Chairman, Joint Technical Coordinating Group on Aircraft Survivability, Naval Air Systems Command

  20. Comparison of Image Processing Techniques using Random Noise Radar

    DTIC Science & Technology

    2014-03-27

    detection UWB ultra-wideband EM electromagnetic CW continuous wave RCS radar cross section RFI radio frequency interference FFT fast Fourier transform...several factors including radar cross section (RCS), orientation, and material makeup. A single monostatic radar at some position collects only range and...Chapter 2 is to provide the theory behind noise radar and SAR imaging. Section 2.1 presents the basic concepts in transmitting and receiving random

  1. A new approach for measuring power spectra and reconstructing time series in active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Li, Yan-Rong; Wang, Jian-Min

    2018-05-01

    We provide a new approach to measure power spectra and reconstruct time series in active galactic nuclei (AGNs) based on the fact that the Fourier transform of AGN stochastic variations is a series of complex Gaussian random variables. The approach parametrizes a stochastic series in frequency domain and transforms it back to time domain to fit the observed data. The parameters and their uncertainties are derived in a Bayesian framework, which also allows us to compare the relative merits of different power spectral density models. The well-developed fast Fourier transform algorithm together with parallel computation enables an acceptable time complexity for the approach.
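    The core step of such an approach, drawing each Fourier coefficient as a complex Gaussian random variable scaled by a model power spectral density (PSD) and inverting back to the time domain, can be sketched as follows. This is a minimal illustration, not the authors' code: the power-law PSD, function name, and parameters are assumed for the example.

```python
import numpy as np

def simulate_series(n, dt, psd, seed=0):
    """Synthesize a stochastic time series whose Fourier coefficients are
    drawn as complex Gaussian random variables scaled by a model PSD."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=dt)       # non-negative frequencies
    amp = np.zeros_like(freqs)
    amp[1:] = np.sqrt(psd(freqs[1:]))      # no power at f = 0 (zero-mean series)
    coef = amp * (rng.standard_normal(freqs.size)
                  + 1j * rng.standard_normal(freqs.size))
    return np.fft.irfft(coef, n=n)         # back to the time domain

# Hypothetical example: a power-law PSD P(f) = f^-2 ("red noise"),
# qualitatively similar to AGN stochastic variability
series = simulate_series(1024, 1.0, lambda f: f**-2.0)
```

    In the paper's Bayesian setting the frequency-domain coefficients would be fitted to observed data rather than drawn freely; the sketch shows only the forward simulation direction.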

  2. Opto-digital spectrum encryption by using Baker mapping and gyrator transform

    NASA Astrophysics Data System (ADS)

    Chen, Hang; Zhao, Jiguang; Liu, Zhengjun; Du, Xiaoping

    2015-03-01

    A concept of spectrum information hiding technology is proposed in this paper. We present an optical encryption algorithm for hiding both the spatial and spectrum information by using Baker mapping in gyrator transform domains. The Baker mapping is introduced for scrambling every band of the hyperspectral image before adding the random phase functions. Subsequently, three thin cylinder lenses are controlled by a PC for implementing the gyrator transform. The amplitude and phase information in the output plane can be regarded as the encrypted information and the main key. Some numerical simulations are made to test the validity and capability of the proposed encryption algorithm.

  3. Phase transformation of dental zirconia following artificial aging.

    PubMed

    Lucas, Thomas J; Lawson, Nathaniel C; Janowski, Gregg M; Burgess, John O

    2015-10-01

    Low-temperature degradation (LTD) of yttria-stabilized zirconia can produce increased surface roughness with a concomitant decrease in strength. This study determined the effectiveness of artificial aging (prolonged boiling/autoclaving) to induce LTD of Y-TZP (yttria-tetragonal zirconia-polycrystals) and used artificial aging for transformation depth progression analyses. The null hypotheses were that the aging techniques tested produce the same amount of transformation, that transformation is not time/temperature dependent, and that LTD causes constant transformation throughout the Y-TZP samples. Dental-grade Y-TZP samples were randomly divided into nine subgroups (n = 5): as received, 3.5 and 7 day boiling, 1 bar autoclave (1, 3, 5 h), and 2 bar autoclave (1, 3, 5 h). A 4-h boil treatment (n = 2) was performed post-experiment for completion of data. Transformation was measured using traditional X-ray diffraction and low-angle X-ray diffraction. The fraction of t → m transformation increased with aging time. The 3.5 day boil and 2 bar 5 h autoclave produced similar transformation results, while the 7 day boiling treatment revealed the greatest transformation. The surface layer of the aged specimens underwent the most transformation, while all samples displayed decreasing transformation with depth. Surface transformation was evident, which can lead to rougher surfaces and increased wear of opposing dentition/materials. Therefore, wear studies addressing LTD of Y-TZP are needed utilizing accelerated aging. © 2014 Wiley Periodicals, Inc.

  4. Finite size effects in phase transformation kinetics in thin films and surface layers

    NASA Astrophysics Data System (ADS)

    Trofimov, Vladimir I.; Trofimov, Ilya V.; Kim, Jong-Il

    2004-02-01

    In studies of phase transformation kinetics in thin films, e.g. the crystallization of amorphous films, the familiar Kolmogorov-Johnson-Mehl-Avrami (KJMA) statistical model of crystallization has until recently been widely used, despite being applicable only to an infinite medium. In this paper, a model of transformation kinetics in thin films based on the concept of the survival probability of a randomly chosen point during the transformation process is presented. Two model versions are studied: volume-induced transformation (VIT), in which the second-phase grains nucleate throughout the film volume, and surface-induced transformation (SIT), in which they form on an interface, each with two nucleation modes: instantaneous nucleation at the transformation onset and continuous nucleation throughout the process. In the VIT process, due to finite film thickness effects, the transformation profile has a maximum in the film middle, whereas the grain population profile reaches a minimum there; the grain density is always higher than in a bulk material, and the thinner the film, the slower it transforms. The transformation kinetics in a thin film obeys a generalized KJMA equation with parameters that depend on the film thickness; in the limiting cases of extremely thin and thick films it reduces to the classical KJMA equation for 2D and 3D systems, respectively.

  5. Feature extraction and descriptor calculation methods for automatic georeferencing of Philippines' first microsatellite imagery

    NASA Astrophysics Data System (ADS)

    Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.

    2017-10-01

    The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners or keypoints in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images detected by the NIR sensor. The method was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, and the keypoints served as the ground control points. Random sample consensus (RANSAC) was used to eliminate fall-out matches and ensure the accuracy of the feature points from which the transformation parameters were derived. Keypoints are matched based on their descriptor vectors. Nearest-neighbor matching is employed based on a metric distance between the descriptors; the metrics include Euclidean and city block, among others. Rough matching outputs not only the correct matches but also faulty matches. A previous work in automatic georeferencing incorporates a geometric restriction; in this work, we applied a simplified version of the learning method. RANSAC identifies whether a point fits the transformation function and returns the inlier matches. The transformation matrix was solved by Affine, Projective, and Polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of randomly selected interest points between the master image and the transformed slave image.

  6. Functional form and risk adjustment of hospital costs: Bayesian analysis of a Box-Cox random coefficients model.

    PubMed

    Hollenbeak, Christopher S

    2005-10-15

    While risk-adjusted outcomes are often used to compare the performance of hospitals and physicians, the most appropriate functional form for the risk adjustment process is not always obvious for continuous outcomes such as costs. Semi-log models are used most often to correct skewness in cost data, but there has been limited research to determine whether the log transformation is sufficient or whether another transformation is more appropriate. This study explores the most appropriate functional form for risk-adjusting the cost of coronary artery bypass graft (CABG) surgery. Data included patients undergoing CABG surgery at four hospitals in the midwest and were fit to a Box-Cox model with random coefficients (BCRC) using Markov chain Monte Carlo methods. Marginal likelihoods and Bayes factors were computed to perform model comparison of alternative model specifications. Rankings of hospital performance were created from the simulation output and the rankings produced by Bayesian estimates were compared to rankings produced by standard models fit using classical methods. Results suggest that, for these data, the most appropriate functional form is not logarithmic, but corresponds to a Box-Cox transformation of -1. Furthermore, Bayes factors overwhelmingly rejected the natural log transformation. However, the hospital ranking induced by the BCRC model was not different from the ranking produced by maximum likelihood estimates of either the linear or semi-log model. Copyright (c) 2005 John Wiley & Sons, Ltd.
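    For readers unfamiliar with the Box-Cox family, a small sketch (hypothetical cost values, not the study's data) shows the transform at the reported optimum λ = -1 alongside its λ → 0 logarithmic limit:

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform of a positive value y.
    The lam -> 0 limit is the natural logarithm."""
    if lam == 0.0:
        return math.log(y)
    return (y**lam - 1.0) / lam

cost = 25000.0                        # hypothetical CABG cost
log_value = box_cox(cost, 0.0)        # semi-log model (lambda = 0)
inv_value = box_cox(cost, -1.0)       # lambda = -1, favoured by the Bayes factors
```

    The study's BCRC model additionally places random coefficients on hospitals and estimates λ jointly by MCMC; the sketch covers only the transform itself.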

  7. Red ball ranging optimization based on dual camera ranging method

    NASA Astrophysics Data System (ADS)

    Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung

    2018-05-01

    In this paper, the process by which a NAO robot positions and moves to a target red ball using its camera system is analyzed and improved with a dual camera ranging method. The single camera ranging method adopted by the NAO robot was first studied and tested experimentally. Since the existing error of the current NAO robot is not a single variable, the experiments were divided into two parts, forward ranging and backward ranging, to obtain more accurate single camera ranging data. Moreover, two USB cameras were used in our experiments, which applied the Hough circle method to identify the ball, while the HSV color space model was used to identify the red color. Our results showed that the dual camera ranging method reduced the variance of error in ball tracking from 0.68 to 0.20.
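    Circle detection of this kind is commonly implemented with a Hough-type vote, and a randomized variant (the theme of this collection) can be sketched compactly: sample three edge points, fit the circumscribed circle through them, and vote in a discretized parameter space. The function names, synthetic points, and rounding-based discretization below are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import Counter
from math import cos, sin, pi, hypot

def circle_from_3pts(p1, p2, p3):
    """Circumscribed circle (cx, cy, r) through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2))
    if abs(d) < 1e-9:
        return None                      # (nearly) collinear sample, discard
    ux = ((x1*x1 + y1*y1)*(y2 - y3) + (x2*x2 + y2*y2)*(y3 - y1)
          + (x3*x3 + y3*y3)*(y1 - y2)) / d
    uy = ((x1*x1 + y1*y1)*(x3 - x2) + (x2*x2 + y2*y2)*(x1 - x3)
          + (x3*x3 + y3*y3)*(x2 - x1)) / d
    return ux, uy, hypot(x1 - ux, y1 - uy)

def randomized_hough_circle(points, n_samples=500, seed=1):
    """Vote for (cx, cy, r) cells computed from random point triples."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        c = circle_from_3pts(*rng.sample(points, 3))
        if c is not None:
            votes[tuple(round(v, 1) for v in c)] += 1   # coarse accumulator cell
    return votes.most_common(1)[0][0]

# Synthetic edge points on a circle centred at (5, 5) with radius 3
pts = [(5 + 3*cos(2*pi*k/60), 5 + 3*sin(2*pi*k/60)) for k in range(60)]
best = randomized_hough_circle(pts)
```

    Because every triple is sampled rather than every point voting over the full parameter space, the accumulator stays sparse, which is the memory advantage of the randomized Hough transform noted elsewhere in these records.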

  8. Stereoscopy and Tomography of Coronal Structures

    NASA Astrophysics Data System (ADS)

    de Patoul, J.

    2012-04-01

    The hot solar corona consists of a low density plasma, which is highly structured by the magnetic field. To resolve and study the corona, several solar Ultraviolet (UV) and X-ray telescopes are operated with high spatial and temporal resolution. EUV (Extreme UV) image sequences of the lower solar corona have revealed a wide variety of structures with sizes ranging from the Sun's diameter to the limit of the angular resolution. Active regions can be observed with enhanced temperature and density, as well as 'quiet' regions, coronal holes with lower density and numerous other transient phenomena such as plumes, jets, bright points, flares, filaments, coronal mass ejections, all structured by the coronal magnetic field. In this work, we analyze polar plumes in a sequence of Solar EUV images taken nearly simultaneously by the three telescopes on board the spacecraft STEREO/SECCHI A and B, and SOHO/EIT. Plumes appear in EUV images as elongated objects starting on the surface of the Sun extending super-radially into the corona. Their formation and contribution to the fast solar wind and other coronal phenomena are still under debate. Knowledge of the polar plume 3-D geometry can help to understand some of the physical processes in the solar corona. In this dissertation we develop new techniques for the characterization of polar plume structures in solar coronal images (Part II) and then analyze these structures using the techniques (Part III): We design a new technique capable of automatically identifying plumes in solar EUV images close to the limb at 1.01-1.39 Ro. This plume identification is based on a multi-scale Hough-wavelet analysis. We show that the method is well adapted to identifying the location, width and orientation of plumes.
Starting from Hough-wavelet analysis, we elaborate on two other techniques to determine 3-D plume localization and structure: (i) tomography employing data from a single spacecraft over more than half a rotation and (ii) stereoscopy from simultaneous data observed by two or more spacecraft. For tomography, we consider the filtered back projection method, for which we incorporate the differential rotation of the Sun. For stereoscopy, we use three view directions for a conventional stereoscopic triangulation. These multi-scale Hough-wavelet analyses, stereoscopy and tomography extensions have been applied for the first time in a coronal plume study. The temporal evolution of the mean orientation of plumes from May 2007 to April 2008 is then analyzed and discussed. Since the plume orientation is assumed to follow the coronal magnetic field, this analysis reveals: (i) a mean orientation of plumes more horizontal than for a dipole magnetic field, (ii) an asymmetry of the coronal open polar cap magnetic field from the solar rotation axis by up to 6° and (iii) a variation of these orientations and asymmetry over the year. Finally, with the help of the reconstructed 3-D geometry of the plumes, we study in detail their temporal evolution as well as the shape and size of their cross sections. The study reveals: (i) different lifetimes of plumes from 2-3 days up to 9 days and (ii) the presence of both near-circular plume cross sections and plumes with curtain-like structures. Also discussed are the plumes' positions and their relation to other coronal phenomena such as coronal holes and jets. Plumes are found to be located inside coronal holes, and jets could explain the intensity enhancement within the plumes.

  9. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables in nature, and the vagueness of the fuzzy random variables in the objectives and constraints is then transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example with data from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.

  10. Development of Solution Algorithm and Sensitivity Analysis for Random Fuzzy Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Katagiri, Hideki

    2010-10-01

    This paper proposes a portfolio selection problem that considers an investor's subjectivity, together with a sensitivity analysis for changes in that subjectivity. Since the proposed problem is formulated as a random fuzzy programming problem, owing to both randomness and subjectivity represented by fuzzy numbers, it is not well-defined. Therefore, by introducing the Sharpe ratio, one of the important performance measures of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using the sensitivity analysis for fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.

  11. Simulation of random road microprofile based on specified correlation function

    NASA Astrophysics Data System (ADS)

    Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Vlasov, V. G.; Fedotov, K. V.

    2018-03-01

    The paper aims to develop a numerical simulation method and an algorithm for a random microprofile of special roads based on the specified correlation function. The paper used methods of correlation, spectrum and numerical analysis. It proves that the transfer function of the generating filter for known expressions of spectrum input and output filter characteristics can be calculated using a theorem on nonnegative and fractional rational factorization and integral transformation. The model of the random function equivalent of the real road surface microprofile enables us to assess springing system parameters and identify ranges of variations.
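    A common concrete realization of such a generating (shaping) filter is a first-order recursion driven by white noise. The sketch below assumes an exponential correlation function R(τ) = σ² exp(-α|τ|), which the abstract does not specify; the parameter values and function name are illustrative only.

```python
import math, random

def road_profile(n, dt=0.1, alpha=0.5, sigma=0.01, seed=42):
    """White noise through a first-order shaping filter, producing a
    stationary sequence with correlation R(tau) = sigma^2 * exp(-alpha*|tau|)."""
    rng = random.Random(seed)
    a = math.exp(-alpha * dt)               # discrete-time pole of the filter
    scale = sigma * math.sqrt(1.0 - a * a)  # keeps the stationary variance at sigma^2
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + scale * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

profile = road_profile(2000)   # simulated microprofile heights (metres, illustrative)
```

    More general rational spectra, as in the paper's factorization approach, correspond to higher-order recursions of the same form.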

  12. CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties

    DTIC Science & Technology

    2017-03-01

    inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], thereby it is suitable for Kernel function implementation. By varying bias...cost function/constraint variables are generated based on inverse transform on CDF. In Fig. 5, F-1(u) for uniformly distributed random number u [0, 1...extracts random samples of x varying with CDF of F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate inverse
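    The inverse-transform step described in the snippet, drawing u ~ Uniform(0, 1) and mapping it through F⁻¹(u), can be sketched for an exponential CDF. The distribution is chosen purely for illustration; the report's cost-function variables and circuit-level approximation are not reproduced here.

```python
import math, random

def sample_exponential(lam, rng):
    """Inverse-transform sampling: x = F^{-1}(u) for u ~ Uniform(0, 1).
    For the exponential CDF F(x) = 1 - exp(-lam*x), the inverse is
    F^{-1}(u) = -ln(1 - u) / lam."""
    u = rng.random()
    return -math.log(1.0 - u) / lam

rng = random.Random(7)
samples = [sample_exponential(2.0, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)   # should be close to 1/lam = 0.5
```

    Any target distribution with an invertible CDF can be sampled the same way, which is why the hardware in the report only needs a uniform source plus an inverse-CDF evaluation.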

  13. Speckle lithography for fabricating Gaussian, quasi-random 2D structures and black silicon structures

    PubMed Central

    Bingi, Jayachandra; Murukeshan, Vadakke Matham

    2015-01-01

    Laser speckle pattern is a granular structure formed due to random coherent wavelet interference and generally considered as noise in optical systems including photolithography. Contrary to this, in this paper, we use the speckle pattern to generate predictable and controlled Gaussian random structures and quasi-random structures photo-lithographically. The random structures made using this proposed speckle lithography technique are quantified based on speckle statistics, radial distribution function (RDF) and fast Fourier transform (FFT). The control over the speckle size, density and speckle clustering facilitates the successful fabrication of black silicon with different surface structures. The controllability and tunability of randomness makes this technique a robust method for fabricating predictable 2D Gaussian random structures and black silicon structures. These structures can enhance the light trapping significantly in solar cells and hence enable improved energy harvesting. Further, this technique can enable efficient fabrication of disordered photonic structures and random media based devices. PMID:26679513

  14. Color image encryption by using Yang-Gu mixture amplitude-phase retrieval algorithm in gyrator transform domain and two-dimensional Sine logistic modulation map

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Liu, Benqing; Wang, Qiang; Li, Ye; Liang, Junli

    2015-12-01

    A color image encryption scheme is proposed based on the Yang-Gu mixture amplitude-phase retrieval algorithm and a two-coupled logistic map in the gyrator transform domain. First, the color plaintext image is decomposed into red, green and blue components, which are scrambled individually by three random sequences generated by using the two-dimensional Sine logistic modulation map. Second, each scrambled component is encrypted into a real-valued function with stationary white noise distribution in the iterative amplitude-phase retrieval process in the gyrator transform domain, and then the three obtained functions are considered as red, green and blue channels to form the color ciphertext image. The ciphertext image is thus a real-valued function and is more convenient to store and transmit. In the encryption and decryption processes, the chaotic random phase mask generated from the logistic map is employed as the phase key, which means that only the initial values are used as the private key and the cryptosystem has high convenience in key management. Meanwhile, the security of the cryptosystem is enhanced greatly because of the high sensitivity of the private keys. Simulation results are presented to prove the security and robustness of the proposed scheme.
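    The chaotic-sequence generation underlying such schemes can be sketched with the basic one-dimensional logistic map; the parameter values and the phase-mask construction below are illustrative assumptions, not the paper's exact 2D Sine logistic modulation map.

```python
import math

def logistic_sequence(x0, n, r=3.99):
    """Iterate the chaotic logistic map x_{k+1} = r * x_k * (1 - x_k).
    For r near 4 the orbit stays in [0, 1] and is highly sensitive to x0,
    which is why x0 can serve as a compact private key."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

# Hypothetical use: map the chaotic values onto phases in [0, 2*pi]
# to build a chaotic random phase mask
phases = [2.0 * math.pi * v for v in logistic_sequence(0.3, 256)]
```

    Since the whole mask is regenerated from the initial value alone, only that value needs to be kept secret, which is the key-management convenience the abstract refers to.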

  15. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
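    The decomposition the patent describes, a 1D FFT along one dimension, an all-to-all redistribution, then a 1D FFT along the other dimension, can be mimicked serially, with a local transpose standing in for the network exchange. This is a sketch of the mathematical structure only, not the patented parallel implementation.

```python
import numpy as np

def fft2_by_passes(a):
    """2-D FFT as two passes of 1-D FFTs; the transpose stands in for the
    'all-to-all' redistribution between nodes described in the patent."""
    rows = np.fft.fft(a, axis=1)               # 1-D FFT along the first dimension
    redistributed = rows.T                     # local stand-in for the network exchange
    cols = np.fft.fft(redistributed, axis=1)   # 1-D FFT along the second dimension
    return cols.T

x = np.random.default_rng(0).standard_normal((8, 16))
y = fft2_by_passes(x)                          # agrees with np.fft.fft2(x)
```

    In the distributed setting each node holds complete pencils along one axis at a time, so each 1D FFT is purely local and all communication is concentrated in the redistribution step.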

  16. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  17. Secret sharing based on quantum Fourier transform

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Huang, Liusheng; Shi, Runhua; He, Libao

    2013-07-01

    Secret sharing plays a fundamental role in both secure multi-party computation and modern cryptography. We present a new quantum secret sharing scheme based on the quantum Fourier transform. This scheme enjoys the property that each share of a secret is disguised with true randomness, rather than classical pseudorandomness. Moreover, under the only assumption that a top priority for all participants (secret sharers and recoverers) is to obtain the right result, our scheme is able to achieve provable security against a computationally unbounded attacker.

  18. Efficient Quantum Pseudorandomness.

    PubMed

    Brandão, Fernando G S L; Harrow, Aram W; Horodecki, Michał

    2016-04-29

    Randomness is both a useful way to model natural systems and a useful tool for engineered systems, e.g., in computation, communication, and control. Fully random transformations require exponential time for either classical or quantum systems, but in many cases pseudorandom operations can emulate certain properties of truly random ones. Indeed, in the classical realm there is by now a well-developed theory regarding such pseudorandom operations. However, the construction of such objects turns out to be much harder in the quantum case. Here, we show that random quantum unitary time evolutions ("circuits") are a powerful source of quantum pseudorandomness. This gives for the first time a polynomial-time construction of quantum unitary designs, which can replace fully random operations in most applications, and shows that generic quantum dynamics cannot be distinguished from truly random processes. We discuss applications of our result to quantum information science, cryptography, and understanding the self-equilibration of closed quantum dynamics.

  19. Projection correlation between two random vectors.

    PubMed

    Zhu, Liping; Xu, Kai; Li, Runze; Zhong, Wei

    2017-12-01

    We propose the use of projection correlation to characterize dependence between two random vectors. Projection correlation has several appealing properties. It equals zero if and only if the two random vectors are independent, it is not sensitive to the dimensions of the two random vectors, it is invariant with respect to the group of orthogonal transformations, and its estimation is free of tuning parameters and does not require moment conditions on the random vectors. We show that the sample estimate of the projection correlation is [Formula: see text]-consistent if the two random vectors are independent and root-[Formula: see text]-consistent otherwise. Monte Carlo simulation studies indicate that the projection correlation has higher power than the distance correlation and the ranks of distances in tests of independence, especially when the dimensions are relatively large or the moment conditions required by the distance correlation are violated.

  20. Measurement Model Nonlinearity in Estimation of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Majji, Manoranjan; Junkins, J. L.; Turner, J. D.

    2012-06-01

    The role of nonlinearity of the measurement model and its interactions with the uncertainty of measurements and the geometry of the problem is studied in this paper. An examination of the transformations of the probability density function in various coordinate systems is presented for several astrodynamics applications. Smooth and analytic nonlinear functions are considered for the studies on the exact transformation of uncertainty. Special emphasis is given to understanding the role of change of variables in the calculus of random variables. The transformation of probability density functions through mappings is shown to provide insight into the evolution of uncertainty in nonlinear systems. Examples are presented to highlight salient aspects of the discussion. A sequential orbit determination problem is analyzed, where the transformation formula provides useful insights for the choice of coordinates in the estimation of dynamic systems.
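    The change-of-variables transformation of a density the abstract refers to can be checked numerically. A minimal sketch, not taken from the paper: for a monotone map y = g(x), the transformed density is p_Y(y) = p_X(g^{-1}(y)) |dg^{-1}/dy|; here X ~ N(0, 1) and g(x) = x^3 are illustrative choices, and the formula is compared against a Monte Carlo histogram:

    ```python
    import numpy as np

    # X ~ N(0, 1) pushed through the monotone map y = x**3.
    rng = np.random.default_rng(1)
    x = rng.standard_normal(200_000)
    y = x**3

    def p_y(yv):
        # g^{-1}(y) = cbrt(y); |d g^{-1}/dy| = 1 / (3 * cbrt(y)**2)
        x_inv = np.cbrt(yv)
        jac = 1.0 / (3.0 * np.cbrt(yv) ** 2)
        p_x = np.exp(-x_inv**2 / 2) / np.sqrt(2 * np.pi)
        return p_x * jac

    # Compare analytic density with an empirical histogram, staying away
    # from y = 0 where the Jacobian of the inverse map is singular.
    hist, edges = np.histogram(y, bins=200, range=(0.25, 3.25), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    grid = np.linspace(0.5, 3.0, 6)
    empirical = np.interp(grid, centers, hist)
    assert np.allclose(empirical, p_y(grid), atol=0.03)
    ```

    The singular Jacobian at y = 0 illustrates the paper's point that the choice of coordinates matters: a transformed density can be badly behaved even when the underlying uncertainty is Gaussian.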
