A method of extracting impervious surface based on rule algorithm
NASA Astrophysics Data System (ADS)
Peng, Shuangyun; Hong, Liang; Xu, Quanli
2018-02-01
The impervious surface has become an important index for evaluating urban environmental quality and measuring the level of urbanization. At present, remote sensing has become the main way to extract impervious surface. In this paper, a method for extracting impervious surface based on a rule algorithm is proposed. The main idea of the method is to use a rule-based algorithm to extract the impervious surface based on the characteristics of, and the differences between, the impervious surface and the other three types of objects (water, soil and vegetation) in the seven original bands, NDWI and NDVI. The method can be divided into three steps: 1) First, vegetation is extracted according to the principle that vegetation reflectance is higher in the near-infrared band than in the other bands; 2) Then, water is extracted according to the characteristic that water has the highest NDWI and the lowest NDVI; 3) Finally, the impervious surface is extracted based on the fact that it has a higher NDWI value and a lower NDVI value than soil. In order to test the accuracy of the rule algorithm, this paper applies the linear spectral mixture decomposition algorithm, the CART algorithm and the NDII index algorithm to extract the impervious surface from six remote sensing images of the Dianchi Lake Basin from 1999 to 2014. The accuracy of these three methods is then compared with that of the rule algorithm using the overall classification accuracy. It is found that the accuracy of the extraction method based on the rule algorithm is clearly higher than that of the three methods above.
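A minimal sketch of the three-step rule cascade described above, assuming a reflectance stack of shape (bands, rows, cols) with the near-infrared band at index nir_idx; the NDWI/NDVI cutoffs are illustrative placeholders, not the paper's thresholds:

```python
import numpy as np

def classify_rules(bands, ndvi, ndwi, nir_idx=3):
    """Cascaded rule-based classification into vegetation, water,
    impervious surface, and soil.  Thresholds are illustrative only."""
    nir = bands[nir_idx]
    other = np.delete(bands, nir_idx, axis=0)
    label = np.full(ndvi.shape, 3, dtype=np.uint8)       # 3 = soil (default)

    # Step 1: vegetation -- NIR reflectance exceeds every other band
    veg = np.all(nir > other, axis=0)
    label[veg] = 0                                       # 0 = vegetation

    # Step 2: water -- highest NDWI, lowest NDVI
    water = ~veg & (ndwi > 0.3) & (ndvi < 0.0)           # assumed cutoffs
    label[water] = 1                                     # 1 = water

    # Step 3: impervious -- higher NDWI, lower NDVI than soil
    imperv = ~veg & ~water & (ndwi > -0.1) & (ndvi < 0.2)
    label[imperv] = 2                                    # 2 = impervious
    return label
```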
Active surface model improvement by energy function optimization for 3D segmentation.
Azimifar, Zohreh; Mohaddesi, Mahsa
2015-04-01
This paper proposes an optimized and efficient active surface model by improving the energy functions, searching method, neighborhood definition and resampling criterion. Extracting an accurate surface of the desired object from a number of 3D images using active surface and deformable models plays an important role in computer vision, especially medical image processing. Different powerful segmentation algorithms have been suggested to address the limitations associated with model initialization, poor convergence to surface concavities and slow convergence rate. This paper proposes a method to improve one of the strongest recent segmentation algorithms, namely the Decoupled Active Surface (DAS) method. We consider the gradient of a wavelet edge-extracted image and local phase coherence as external energy to extract more information from images, and we use a curvature integral as internal energy to focus on high-curvature region extraction. Similarly, we use resampling of points and a line search for point selection to improve the accuracy of the algorithm. We further employ an estimation of the desired object as an initialization for the active surface model. A number of tests and experiments have been done, and the results show the improvements of the presented algorithm with regard to extracted surface accuracy and computational time compared with the best recent active surface models. Copyright © 2015 Elsevier Ltd. All rights reserved.
Extraction of edge-based and region-based features for object recognition
NASA Astrophysics Data System (ADS)
Coutts, Benjamin; Ravi, Srinivas; Hu, Gongzhu; Shrikhande, Neelima
1993-08-01
One of the central problems of computer vision is object recognition. A catalogue of model objects is described as a set of features such as edges and surfaces. The same features are extracted from the scene and matched against the models for object recognition. Edges and surfaces extracted from the scenes are often noisy and imperfect. In this paper, algorithms are described for improving low-level edge and surface features. Existing edge extraction algorithms are applied to the intensity image to obtain edge features. Initial edges are traced by following the directions of the current contour. These are improved by using corresponding depth and intensity information for decision making at branch points. Surface fitting routines are applied to the range image to obtain planar surface patches. A region-growing algorithm is developed that starts with a coarse segmentation and uses quadric surface fitting to iteratively merge adjacent regions into quadric surfaces based on approximate orthogonal distance regression. The surface information obtained is returned to the edge extraction routine to detect and remove fake edges. This process repeats until no more merging or edge improvement can take place. Both synthetic (with Gaussian noise) and real images containing multiple object scenes have been tested using the merging criteria. The results appear quite encouraging.
Analysis of soil moisture extraction algorithm using data from aircraft experiments
NASA Technical Reports Server (NTRS)
Burke, H. H. K.; Ho, J. H.
1981-01-01
A soil moisture extraction algorithm is developed using a statistical parameter inversion method. Data sets from two aircraft experiments are utilized for the test. Multifrequency microwave radiometric data, surface temperature, and soil moisture information are contained in the data sets. The surface and near-surface (≤5 cm) soil moisture content can be extracted with an accuracy of approximately 5% to 6% for bare fields and fields with grass cover by using L, C, and X band radiometer data. This technique is suited to handling large amounts of remote sensing data from space.
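The abstract does not give the form of the statistical inversion, so the sketch below shows the simplest hedged reading: an ordinary least-squares regression from multifrequency brightness temperatures (plus surface temperature) to soil moisture. The variable layout is an assumption:

```python
import numpy as np

def fit_inversion(tb, sm):
    """Statistical parameter inversion in its simplest form: linear
    regression from radiometric observations (rows = samples, columns =
    L/C/X brightness temperatures and surface temperature) to measured
    soil moisture.  Illustrative only; the paper's statistical model is
    not specified beyond this level in the abstract."""
    A = np.column_stack([tb, np.ones(len(tb))])    # design matrix with bias
    coef, *_ = np.linalg.lstsq(A, sm, rcond=None)
    return coef

def predict_sm(tb, coef):
    """Apply the fitted inversion to new observations."""
    return np.column_stack([tb, np.ones(len(tb))]) @ coef
```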
NASA Astrophysics Data System (ADS)
Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook
2004-04-01
We propose a stable extraction algorithm for threshold voltage using the transconductance change method by optimizing the node interval. With the algorithm, noise-free gm2 (=dgm/dVGS) profiles can be extracted within one-percent error, which leads to more physically meaningful threshold voltage calculation by the transconductance change method. The extracted threshold voltage predicts the gate-to-source voltage at which the surface potential is within kT/q of φs=2φf+VSB. Our algorithm makes the transconductance change method more practical by overcoming the noise problem. This threshold voltage extraction algorithm yields the threshold roll-off behavior of nanoscale metal oxide semiconductor field effect transistors (MOSFETs) accurately and makes it possible to calculate the surface potential φs at any other point on the drain-to-source current (IDS) versus gate-to-source voltage (VGS) curve. It will provide a useful analysis tool in the field of device modeling, simulation and characterization.
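A rough sketch of the transconductance change method on sampled I-V data; Savitzky-Golay smoothing stands in for the paper's node-interval optimisation, and the window/polynomial settings are illustrative:

```python
import numpy as np
from scipy.signal import savgol_filter

def vth_transconductance_change(vgs, ids, window=21, poly=3):
    """Threshold voltage from the peak of gm2 = d(gm)/dVGS.
    vgs must be monotonically increasing; window/poly are
    illustrative smoothing choices, not the paper's scheme."""
    ids_s = savgol_filter(ids, window, poly)          # suppress measurement noise
    gm = np.gradient(ids_s, vgs)                      # first derivative dI/dV
    gm2 = np.gradient(savgol_filter(gm, window, poly), vgs)
    return vgs[np.argmax(gm2)]                        # VGS at the gm2 maximum
```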
Three-dimensional contour edge detection algorithm
NASA Astrophysics Data System (ADS)
Wang, Yizhou; Ong, Sim Heng; Kassim, Ashraf A.; Foong, Kelvin W. C.
2000-06-01
This paper presents a novel algorithm for automatically extracting 3D contour edges, which are points of maximum surface curvature in a surface range image. The 3D image data are represented as a surface polygon mesh. The algorithm transforms the range data, obtained by scanning a dental plaster cast, into a 2D gray scale image by linearly converting the z-value of each vertex to a gray value. The Canny operator is applied to the median-filtered image to obtain the edge pixels and their orientations. A vertex in the 3D object corresponding to the detected edge pixel and its neighbors in the direction of the edge gradient are further analyzed with respect to their n-curvatures to extract the real 3D contour edges. This algorithm provides a fast method of reducing and sorting the unwieldy data inherent in the surface mesh representation. It employs powerful 2D algorithms to extract features from the transformed 3D models and refers to the 3D model for further analysis of selected data. This approach substantially reduces the computational burden without losing accuracy. It is also easily extended to detect 3D landmarks and other geometrical features, thus making it applicable to a wide range of applications.
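A hedged sketch of the 2D stage of the pipeline (linear z-to-grey conversion, median filtering, Canny) using OpenCV; the Canny thresholds and filter size are illustrative, and the subsequent n-curvature analysis on the mesh is omitted:

```python
import numpy as np
import cv2

def contour_edge_pixels(z_img, ksize=5):
    """Detect candidate contour-edge pixels from a 2D raster of vertex
    z-values, following the abstract's z-to-grey + median + Canny steps."""
    z = z_img.astype(np.float64)
    gray = np.uint8(255 * (z - z.min()) / (np.ptp(z) + 1e-12))  # linear z -> grey
    gray = cv2.medianBlur(gray, ksize)                          # median filtering
    edges = cv2.Canny(gray, 50, 150)                            # illustrative thresholds
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)                      # gradients give the
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)                      # edge orientation
    orientation = np.arctan2(gy, gx)
    return edges, orientation
```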
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method that is based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and the basic ABC algorithm for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
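For orientation, a minimal PSO parameter-extraction loop of the kind the abstract compares; `model(params, vgs)` is a placeholder for a surface-potential MOSFET model, and the inertia/acceleration constants are conventional defaults, not the study's settings:

```python
import numpy as np

def pso_fit(model, vgs, ids_meas, lo, hi, n=30, iters=200,
            w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise the RMS error between measured and modelled I-V curves.
    lo/hi are per-parameter bounds (arrays of length dim)."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, (n, dim))                 # particle positions
    v = np.zeros((n, dim))                            # particle velocities
    err = lambda p: np.sqrt(np.mean((model(p, vgs) - ids_meas) ** 2))
    pbest, pcost = x.copy(), np.array([err(p) for p in x])
    g = pbest[pcost.argmin()]                         # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # keep inside bounds
        cost = np.array([err(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()]
    return g, pcost.min()
```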
NASA Astrophysics Data System (ADS)
Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing
2016-11-01
The inspection of surface defects is one of the significant aspects of optical surface quality evaluation. Based on microscopic scattering dark-field imaging together with sub-aperture scanning and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract the geometric size and position information of the defects with image processing such as feature recognition. However, optical distortion in the SDES badly affects the inspection precision of surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting and bilinear interpolation techniques, in combination with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically with high accuracy. Subsequently, in order to digitally evaluate the surface defect information obtained from the SDES against the American military standard MIL-PRF-13830B, an American standard-based digital evaluation algorithm is proposed, which mainly includes a judgment method for surface defect concentration. The judgment method establishes a weight region for each defect and calculates defect concentration from the overlap of weight regions. This algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which make it well suited to high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms are in use in the SDES.
A research of road centerline extraction algorithm from high resolution remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Yushan; Xu, Tingfa
2017-09-01
Satellite remote sensing technology has become one of the most effective methods for land surface monitoring in recent years, due to its advantages such as short revisit period, large coverage and rich information. Meanwhile, road extraction is an important field in the applications of high resolution remote sensing images. An intelligent and automatic road extraction algorithm with high precision has great significance for transportation, road network updating and urban planning. Fuzzy c-means (FCM) clustering segmentation algorithms have been used in road extraction, but the traditional algorithms do not consider spatial information. An improved fuzzy c-means clustering algorithm combined with spatial information (SFCM) is proposed in this paper, which is proved to be effective for noisy image segmentation. Firstly, the image is segmented using the SFCM. Secondly, the segmentation result is processed by mathematical morphology to remove the joint regions. Thirdly, the road centerlines are extracted by morphological thinning and burr trimming. The average integrity of the centerline extraction algorithm is 97.98%, the average accuracy is 95.36% and the average quality is 93.59%. Experimental results show that the proposed method is effective for road centerline extraction.
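A small sketch of the post-segmentation steps (morphological cleanup and thinning) on a binary road mask such as SFCM might produce; the structuring-element sizes are illustrative, and burr trimming is reduced to small-object removal:

```python
from skimage.morphology import (binary_closing, binary_opening,
                                remove_small_objects, skeletonize, disk)

def road_centerlines(road_mask, min_size=500, radius=3):
    """Clean a binary road mask and thin it to one-pixel centerlines."""
    m = binary_closing(road_mask.astype(bool), disk(radius))  # bridge small gaps
    m = binary_opening(m, disk(radius))                       # remove thin spurs
    m = remove_small_objects(m, min_size)                     # drop small joint regions
    return skeletonize(m)                                     # morphological thinning
```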
NASA Astrophysics Data System (ADS)
Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien
2006-03-01
Subdivision surfaces and parameterization are desirable for many algorithms that are commonly used in Medical Image Analysis. However, extracting an accurate surface and parameterization can be difficult for many anatomical objects of interest, due to noisy segmentations and the inherent variability of the object. The thin cartilages of the knee are an example of this, especially after damage is incurred from injuries or conditions like osteoarthritis. As a result, the cartilages can have different topologies or exist in multiple pieces. In this paper we present a topology preserving (genus 0) subdivision-based parametric deformable model that is used to extract the surfaces of the patella and tibial cartilages in the knee. These surfaces have minimal thickness in areas without cartilage. The algorithm inherently incorporates several desirable properties, including: shape based interpolation, sub-division remeshing and parameterization. To illustrate the usefulness of this approach, the surfaces and parameterizations of the patella cartilage are used to generate a 3D statistical shape model.
Flattening maps for the visualization of multibranched vessels.
Zhu, Lei; Haker, Steven; Tannenbaum, Allen
2005-02-01
In this paper, we present two novel algorithms which produce flattened visualizations of branched physiological surfaces, such as vessels. The first approach is a conformal mapping algorithm based on the minimization of two Dirichlet functionals. From a triangulated representation of vessel surfaces, we show how the algorithm can be implemented using a finite element technique. The second method is an algorithm which adjusts the conformal mapping to produce a flattened representation of the original surface while preserving areas. This approach employs the theory of optimal mass transport. Furthermore, a new way of extracting center lines for vessel fly-throughs is provided.
Flattening Maps for the Visualization of Multibranched Vessels
Zhu, Lei; Haker, Steven; Tannenbaum, Allen
2013-01-01
In this paper, we present two novel algorithms which produce flattened visualizations of branched physiological surfaces, such as vessels. The first approach is a conformal mapping algorithm based on the minimization of two Dirichlet functionals. From a triangulated representation of vessel surfaces, we show how the algorithm can be implemented using a finite element technique. The second method is an algorithm which adjusts the conformal mapping to produce a flattened representation of the original surface while preserving areas. This approach employs the theory of optimal mass transport. Furthermore, a new way of extracting center lines for vessel fly-throughs is provided. PMID:15707245
An algorithm for automatic parameter adjustment for brain extraction in BrainSuite
NASA Astrophysics Data System (ADS)
Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.
2017-02-01
Brain extraction (classification of brain and non-brain tissue) of MRI brain images is a crucial pre-processing step necessary for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in the definition of the brain mask.
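A hedged sketch of the parameter-adaptation idea: search a small grid of four BSE-style parameters and keep the setting that maximises the surface-area-to-volume ratio named in the abstract. `bse_extract` is a hypothetical stand-in for running BSE with given parameters; the real method's search strategy is not specified here:

```python
import itertools
from scipy.ndimage import binary_erosion

def tune_bse(volume, bse_extract, grids):
    """Exhaustive search over parameter grids (a dict of four lists,
    e.g. diffusion iterations, edge constant, erosion size, trim flag).
    bse_extract(volume, **params) must return a binary 3D brain mask."""
    def ratio(mask):
        m = mask.astype(bool)
        surface = m.sum() - binary_erosion(m).sum()   # voxels on the mask surface
        return surface / max(m.sum(), 1)              # area-to-volume proxy
    names = list(grids)
    best_params, best_score = None, -1.0
    for values in itertools.product(*(grids[n] for n in names)):
        params = dict(zip(names, values))
        score = ratio(bse_extract(volume, **params))
        if score > best_score:                        # maximise, per the abstract
            best_params, best_score = params, score
    return best_params
```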
Acetabular rim and surface segmentation for hip surgery planning and dysplasia evaluation
NASA Astrophysics Data System (ADS)
Tan, Sovira; Yao, Jianhua; Yao, Lawrence; Summers, Ronald M.; Ward, Michael M.
2008-03-01
Knowledge of the acetabular rim and surface can be invaluable for hip surgery planning and dysplasia evaluation. The acetabular rim can also be used as a landmark for registration purposes. At the present time acetabular features are mostly extracted manually at great cost of time and human labor. Using a recent level set algorithm that can evolve on the surface of a 3D object represented by a triangular mesh we automatically extracted rims and surfaces of acetabulae. The level set is guided by curvature features on the mesh. It can segment portions of a surface that are bounded by a line of extremal curvature (ridgeline or crestline). The rim of the acetabulum is such an extremal curvature line. Our material consists of eight hemi-pelvis surfaces. The algorithm is initiated by putting a small circle (level set seed) at the center of the acetabular surface. Because this surface distinctively has the form of a cup we were able to use the Shape Index feature to automatically extract an approximate center. The circle then expands and deforms so as to take the shape of the acetabular rim. The results were visually inspected. Only minor errors were detected. The algorithm also proved to be robust. Seed placement was satisfactory for the eight hemi-pelvis surfaces without changing any parameters. For the level set evolution we were able to use a single set of parameters for seven out of eight surfaces.
Instantaneous Coastline Extraction from LIDAR Point Cloud and High Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Li, Y.; Zhoing, L.; Lai, Z.; Gan, Z.
2018-04-01
A new method for instantaneous waterline extraction is proposed in this paper, which combines point cloud geometry features and image spectral characteristics of the coastal zone. The proposed method consists of the following steps: the Mean Shift algorithm is used to segment high resolution remote sensing images of the coastal zone into small regions containing semantic information; region features are extracted by integrating the LiDAR data and the surface area of the image regions; initial waterlines are extracted by the α-shape algorithm; a region-growing algorithm is applied for coastline refinement, with a growth rule integrating the intensity and topography of the LiDAR data; finally, the coastline is smoothed. Experiments are conducted to demonstrate the efficiency of the proposed method.
Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun
2018-05-01
Thermographic inspection has been widely applied to non-destructive testing and evaluation, with the capabilities of rapid, contactless, large-surface-area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defect generation and enable the precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns with an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of the different segmentation algorithms.
Fast and Exact Fiber Surfaces for Tetrahedral Meshes.
Klacansky, Pavol; Tierny, Julien; Carr, Hamish; Zhao Geng
2017-07-01
Isosurfaces are fundamental geometrical objects for the analysis and visualization of volumetric scalar fields. Recent work has generalized them to bivariate volumetric fields with fiber surfaces, the pre-image of polygons in range space. However, the existing algorithm for their computation is approximate, and is limited to closed polygons. Moreover, its runtime performance does not allow instantaneous updates of the fiber surfaces upon user edits of the polygons. Overall, these limitations prevent a reliable and interactive exploration of the space of fiber surfaces. This paper introduces the first algorithm for the exact computation of fiber surfaces in tetrahedral meshes. It assumes no restriction on the topology of the input polygon, handles degenerate cases and better captures sharp features induced by polygon bends. The algorithm also allows visualization of individual fibers on the output surface, better illustrating their relationship with data features in range space. To enable truly interactive exploration sessions, we further improve the runtime performance of this algorithm. In particular, we show that it is trivially parallelizable and that it scales nearly linearly with the number of cores. Further, we study acceleration data-structures both in geometrical domain and range space and we show how to generalize interval trees used in isosurface extraction to fiber surface extraction. Experiments demonstrate the superiority of our algorithm over previous work, both in terms of accuracy and running time, with up to two orders of magnitude speedups. This improvement enables interactive edits of range polygons with instantaneous updates of the fiber surface for exploration purpose. A VTK-based reference implementation is provided as additional material to reproduce our results.
Learning-based meta-algorithm for MRI brain extraction.
Shi, Feng; Wang, Li; Gilmore, John H; Lin, Weili; Shen, Dinggang
2011-01-01
Multiple-segmentation-and-fusion methods have been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. However, such studies are hindered in practice by their computational complexity, mainly coming from the steps of template selection and template-to-subject nonlinear registration. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library, and assign the most similar exemplar to the test subject. Second, a meta-algorithm combining two existing brain extraction algorithms (BET and BSE) is proposed to conduct multiple extractions directly on the test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to the subject through the exemplars. We further develop a level-set based fusion method to combine the multiple candidate extractions with a closed smooth surface, to obtain the final result. Experimental results show that, with only a small portion of subjects for training, the proposed method is able to produce more accurate and robust brain extraction results, with a Jaccard index of 0.956 +/- 0.010 on a total of 340 subjects under 6-fold cross validation, compared to the results of BET and BSE even using their best parameter combinations.
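The reported overlap measure is straightforward to reproduce; a minimal Jaccard index implementation for two binary brain masks:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary masks,
    the overlap measure reported in the abstract."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0
```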
Detection of lettuce discoloration using hyperspectral reflectance imaging
USDA-ARS?s Scientific Manuscript database
Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single waveband algorithm and multi-spectral algorithms, were developed in order to classify the discoloration of lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectra...
Using input feature information to improve ultraviolet retrieval in neural networks
NASA Astrophysics Data System (ADS)
Sun, Zhibin; Chang, Ni-Bin; Gao, Wei; Chen, Maosi; Zempila, Melina
2017-09-01
In neural networks, the training/prediction accuracy and algorithm efficiency can be improved significantly via accurate input feature extraction. In this study, spatial features of several important factors in retrieving surface ultraviolet (UV) radiation are extracted. An extreme learning machine (ELM) is used to retrieve the surface UV of 2014 in the continental United States, using the extracted features. The results show that more input weights can improve the learning capacity of neural networks.
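A minimal ELM sketch (random hidden layer, least-squares output weights), which is the standard formulation the abstract builds on; the layer size and activation are illustrative:

```python
import numpy as np

def elm_fit(X, y, n_hidden=200, seed=0):
    """Extreme learning machine: random input weights and biases,
    output weights solved analytically by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta = np.linalg.pinv(H) @ y                  # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```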
NASA Astrophysics Data System (ADS)
Ishimori, Hiroyuki; Kawata, Yoshiki; Niki, Noboru; Nakaya, Yoshihiro; Ohmatsu, Hironobu; Matsui, Eisuke; Fujii, Masashi; Moriyama, Noriyuki
2007-03-01
We have developed a micro CT system for understanding lung function at a high resolution on the micrometer order (up to 5 µm spatial resolution). The micro CT system enables removed lung specimens to be observed at the micro level, and it is expected to make a large contribution to the study of the micro-morphology of internal organs and to image diagnosis. In this research, we develop a system to visualize and analyze lung microstructures in three dimensions from micro CT images. These images are characterized by high CT values in noisy areas, which makes it difficult to extract the alveolar wall using threshold processing alone. Thus, we are developing a method of extracting the alveolar wall with a surface thinning algorithm. In this report, we propose a method which reduces the excessive degeneracy of figures caused by the surface thinning process, and we apply this algorithm to micro CT images of an actual pulmonary specimen. It is shown that high-precision extraction of the alveolar wall becomes possible.
Utilization of high-frequency Rayleigh waves in near-surface geophysics
Xia, J.; Miller, R.D.; Park, C.B.; Ivanov, J.; Tian, G.; Chen, C.
2004-01-01
Shear-wave velocities can be derived from inverting the dispersive phase velocity of surface waves. The multichannel analysis of surface waves (MASW) is one technique for inverting high-frequency Rayleigh waves. The process includes acquisition of high-frequency broad-band Rayleigh waves, efficient and accurate algorithms designed to extract Rayleigh-wave dispersion curves from the recorded wavefield, and stable and efficient inversion algorithms to obtain near-surface S-wave velocity profiles. MASW estimates S-wave velocity from multichannel vertical component data and consists of data acquisition, dispersion-curve picking, and inversion.
Yet another method for triangulation and contouring for automated cartography
NASA Technical Reports Server (NTRS)
De Floriani, L.; Falcidieno, B.; Nasy, G.; Pienovi, C.
1982-01-01
An algorithm is presented for hierarchical subdivision of a set of three-dimensional surface observations. The data structure used for obtaining the desired triangulation is also singularly appropriate for extracting contours. Some examples are presented, and the results obtained are compared with those given by Delaunay triangulation. The data points selected by the algorithm provide a better approximation to the desired surface than do randomly selected points.
Algorithms for extraction of structural attitudes from 3D outcrop models
NASA Astrophysics Data System (ADS)
Duelis Viana, Camila; Endlein, Arthur; Ademar da Cruz Campanha, Ginaldo; Henrique Grohmann, Carlos
2016-05-01
The acquisition of geological attitudes on rock cuts using traditional field compass survey can be a time consuming, dangerous, or even impossible task depending on the conditions and location of outcrops. The importance of this type of data in rock-mass classifications and structural geology has led to the development of new techniques, in which the application of photogrammetric 3D digital models has had an increasing use. In this paper we present two algorithms for extraction of attitudes of geological discontinuities from virtual outcrop models: ply2atti and scanline, implemented with the Python programming language. The ply2atti algorithm allows for the virtual sampling of planar discontinuities appearing on the 3D model as individual exposed surfaces, while the scanline algorithm allows the sampling of discontinuities (surfaces and traces) along a virtual scanline. Application to digital models of a simplified test setup and a rock cut demonstrated a good correlation between the surveys undertaken using traditional field compass reading and virtual sampling on 3D digital models.
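A small sketch of the core computation both algorithms rely on: fitting a plane to sampled discontinuity points and converting its normal to a dip direction/dip pair. The axis conventions (x = east, y = north, z = up) are assumptions:

```python
import numpy as np

def plane_attitude(points):
    """Fit a plane to (N, 3) surface points by SVD and return its
    attitude as (dip direction, dip) in degrees."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)          # least-squares plane fit
    n = vt[-1]                                    # unit normal of the plane
    if n[2] < 0:                                  # force upward-pointing normal
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip_dir, dip
```

For example, points sampled from a plane dipping 30° toward the east yield a normal proportional to (tan 30°, 0, 1) and hence (090, 30).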
Sinha, S K; Karray, F
2002-01-01
Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried under the ground. Manual inspection for surface defects in the pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. An automatic inspection system using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. A method for the recognition and classification of pipe cracks using image analysis and a neuro-fuzzy algorithm is proposed. In the preprocessing step, the scanned images of the pipe are analyzed and crack features are extracted. In the classification step, a neuro-fuzzy algorithm is developed that employs a fuzzy membership function and the error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function will absorb variation in feature values and the backpropagation network, with its learning ability, will show good classification efficiency.
NASA Astrophysics Data System (ADS)
Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao
2018-06-01
In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.
On Feature Extraction from Large Scale Linear LiDAR Data
NASA Astrophysics Data System (ADS)
Acharjee, Partha Pratim
Airborne light detection and ranging (LiDAR) can generate co-registered elevation and intensity maps over large terrain. The co-registered 3D map and intensity information can be used efficiently for different feature extraction applications. In this dissertation, we developed two algorithms for feature extraction and for the use of these features in practical applications. One of the developed algorithms can map still and flowing waterbody features, and the other can extract building features and estimate solar potential on rooftops and facades. Remote sensing capabilities, the distinguishing characteristics of laser returns from water surfaces, and specific data collection procedures give LiDAR data an edge in this application domain. Furthermore, water surface mapping solutions must work on extremely large datasets, from a thousand square miles to hundreds of thousands of square miles. National and state-wide map generation/updating and hydro-flattening of LiDAR data for many other applications are two leading needs of water surface mapping. These call for as much automation as possible. Researchers have developed many semi-automated algorithms using multiple semi-automated tools and human intervention. This reported work describes a consolidated algorithm and toolbox developed for large-scale, automated water surface mapping. Geometric features, such as the flatness of the water surface and the large elevation change at the water-land interface, and optical properties, such as dropouts caused by specular reflection and bimodal intensity distributions, were some of the linear LiDAR features exploited for water surface mapping. Large-scale data handling capabilities are incorporated by automated and intelligent windowing, by resolving boundary issues, and by integrating all results into a single output. The whole algorithm is developed as an ArcGIS toolbox using Python libraries. Testing and validation are performed on large datasets to determine the effectiveness of the toolbox, and results are presented. Significant power demand is located in urban areas, where, theoretically, a large amount of building surface area is also available for solar panel installation. Therefore, property owners and power generation companies can benefit from a citywide solar potential map, which can provide the estimated available annual solar energy at a given location. An efficient solar potential measurement is a prerequisite for an effective solar energy system in an urban area. In addition, the calculation of solar potential from rooftops and building facades could open up a wide variety of options for solar panel installations. However, complex urban scenes make it hard to estimate the solar potential, partly because of shadows cast by the buildings. LiDAR-based 3D city models could be the right technology for solar potential mapping, yet most current LiDAR-based local solar potential assessment algorithms address only rooftop potential calculation, whereas building facades can contribute a significant amount of viable surface area for solar panel installation. In this work, we introduce a new algorithm to calculate the solar potential of both rooftops and building facades. The solar potential received by the rooftops and facades over the year is also investigated in the test area.
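A hedged sketch of how the geometric and optical cues named above could be combined into a per-cell water flag on gridded elevation/intensity rasters; the flatness and intensity cutoffs are illustrative assumptions, not the toolbox's actual rules:

```python
import numpy as np

def water_cells(elev_grid, inten_grid, flat_tol=0.05, inten_cut=None):
    """Flag raster cells as water where the local surface is nearly flat
    and the LiDAR return intensity is low (specular dropout zones tend
    to be dark)."""
    if inten_cut is None:
        inten_cut = np.nanpercentile(inten_grid, 20)   # assumed low-intensity cutoff
    gy, gx = np.gradient(elev_grid)                    # local surface slope
    slope = np.hypot(gx, gy)
    return (slope < flat_tol) & (inten_grid < inten_cut)
```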
Bayesian inversion analysis of nonlinear dynamics in surface heterogeneous reactions.
Omori, Toshiaki; Kuwatani, Tatsu; Okamoto, Atsushi; Hukushima, Koji
2016-09-01
It is essential to extract nonlinear dynamics from time-series data as an inverse problem in the natural sciences. We propose a Bayesian statistical framework for extracting the nonlinear dynamics of surface heterogeneous reactions from sparse and noisy observable data. Surface heterogeneous reactions are chemical reactions with conjugation of multiple phases, and they have intrinsically nonlinear dynamics caused by the effect of the surface area between the different phases. We adapt a belief propagation method and an expectation-maximization (EM) algorithm to the partial observation problem, in order to simultaneously estimate the time course of the hidden variables and the kinetic parameters underlying the dynamics. The proposed belief propagation method is performed using a sequential Monte Carlo algorithm in order to estimate the nonlinear dynamical system. Using our proposed method, we show that the rate constants of dissolution and precipitation reactions, which are typical examples of surface heterogeneous reactions, as well as the temporal changes of solid reactants and products, can be successfully estimated from only the observable temporal changes in the concentration of the dissolved intermediate product.
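A minimal bootstrap particle filter (sequential Monte Carlo) for a generic scalar state-space model, the estimation engine underlying the proposed belief propagation step; the noise models and particle count are illustrative, not the paper's configuration:

```python
import numpy as np

def bootstrap_pf(y_obs, f, h, q_std, r_std, n_particles=1000, seed=0):
    """Estimate the hidden state of x_t = f(x_{t-1}) + process noise,
    observed through y_t = h(x_t) + measurement noise.  f and h must be
    vectorized callables; Gaussian noise is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)                # initial particle cloud
    means = []
    for y in y_obs:
        x = f(x) + rng.normal(0.0, q_std, n_particles)   # propagate dynamics
        w = np.exp(-0.5 * ((y - h(x)) / r_std) ** 2)     # Gaussian likelihood
        w /= w.sum()
        means.append(np.sum(w * x))                      # posterior mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)  # resample particles
        x = x[idx]
    return np.array(means)
```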
NASA Astrophysics Data System (ADS)
Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.
This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising the position of their approximate building contours. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. The inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of the digital surface models. Initial windows for building extraction are provided by projecting the centre points of the elevation blobs onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. The approximate building contours thus derived are inputs into the dynamic programming optimisation process, in which the final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of the buildings in the study areas were extracted and verified, and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
NASA Technical Reports Server (NTRS)
Quek, Kok How Francis
1990-01-01
A method of computing reliable Gaussian and mean curvature sign-map descriptors from the polynomial approximation of surfaces was demonstrated. Such descriptors which are invariant under perspective variation are suitable for hypothesis generation. A means for determining the pose of constructed geometric forms whose algebraic surface descriptors are nonlinear in terms of their orienting parameters was developed. This was done by means of linear functions which are capable of approximating nonlinear forms and determining their parameters. It was shown that biquadratic surfaces are suitable companion linear forms for cylindrical approximation and parameter estimation. The estimates provided the initial parametric approximations necessary for a nonlinear regression stage to fine tune the estimates by fitting the actual nonlinear form to the data. A hypothesis-based split-merge algorithm for extraction and pose determination of cylinders and planes which merge smoothly into other surfaces was developed. It was shown that all split-merge algorithms are hypothesis-based. A finite-state algorithm for the extraction of the boundaries of run-length regions was developed. The computation takes advantage of the run list topology and boundary direction constraints implicit in the run-length encoding.
NASA Astrophysics Data System (ADS)
Afanas'ev, V. P.; Gryazev, A. S.; Efremenko, D. S.; Kaplya, P. S.; Kuznetcova, A. V.
2017-12-01
Precise knowledge of the differential inverse inelastic mean free path (DIIMFP) and differential surface excitation probability (DSEP) of tungsten is essential for many fields of materials science. In this paper, a fitting algorithm is applied to extract the DIIMFP and DSEP from X-ray photoelectron spectra and electron energy loss spectra. The algorithm uses the partial intensity approach as a forward model, in which a spectrum is given as a weighted sum of cross-convolved DIIMFPs and DSEPs. The weights are obtained as solutions of the Riccati and Lyapunov equations derived from the invariant imbedding principle. The inversion algorithm utilizes a parametrization of the DIIMFPs and DSEPs on the basis of a classical Lorentz oscillator. The unknown parameters of the model are found by a fitting procedure which minimizes the residual between the measured spectra and the forward simulations. It is found that the surface layer of tungsten contains several sublayers with corresponding Langmuir resonances. The thicknesses of these sublayers are proportional to the periods of the corresponding Langmuir oscillations, as predicted by the theory of R.H. Ritchie.
Automatic crack detection method for loaded coal in vibration failure process
Li, Chengwu
2017-01-01
In the coal mining process, the destabilization of loaded coal mass is a prerequisite for coal and rock dynamic disaster, and surface cracks of the coal and rock mass are important indicators, reflecting the current state of the coal body. The detection of surface cracks in the coal body plays an important role in coal mine safety monitoring. In this paper, a method for detecting the surface cracks of loaded coal by a vibration failure process is proposed based on the characteristics of the surface cracks of coal and support vector machine (SVM). A large number of cracked images are obtained by establishing a vibration-induced failure test system and industrial camera. Histogram equalization and a hysteresis threshold algorithm were used to reduce the noise and emphasize the crack; then, 600 images and regions, including cracks and non-cracks, were manually labelled. In the crack feature extraction stage, eight features of the cracks are extracted to distinguish cracks from other objects. Finally, a crack identification model with an accuracy over 95% was trained by inputting the labelled sample images into the SVM classifier. The experimental results show that the proposed algorithm has a higher accuracy than the conventional algorithm and can effectively identify cracks on the surface of the coal and rock mass automatically. PMID:28973032
Automatic crack detection method for loaded coal in vibration failure process.
Li, Chengwu; Ai, Dihao
2017-01-01
In the coal mining process, the destabilization of loaded coal mass is a prerequisite for coal and rock dynamic disaster, and surface cracks of the coal and rock mass are important indicators, reflecting the current state of the coal body. The detection of surface cracks in the coal body plays an important role in coal mine safety monitoring. In this paper, a method for detecting the surface cracks of loaded coal by a vibration failure process is proposed based on the characteristics of the surface cracks of coal and support vector machine (SVM). A large number of cracked images are obtained by establishing a vibration-induced failure test system and industrial camera. Histogram equalization and a hysteresis threshold algorithm were used to reduce the noise and emphasize the crack; then, 600 images and regions, including cracks and non-cracks, were manually labelled. In the crack feature extraction stage, eight features of the cracks are extracted to distinguish cracks from other objects. Finally, a crack identification model with an accuracy over 95% was trained by inputting the labelled sample images into the SVM classifier. The experimental results show that the proposed algorithm has a higher accuracy than the conventional algorithm and can effectively identify cracks on the surface of the coal and rock mass automatically.
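A sketch of the classification stage under stated assumptions: an RBF-kernel SVM trained on 8 features per labelled region, with synthetic data standing in for the authors' 600 labelled samples; the kernel and hyperparameters are conventional defaults, not the paper's settings:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per labelled image region, 8 crack-descriptive features
# (as in the abstract); y: 1 = crack, 0 = non-crack.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))                       # synthetic stand-in data
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)                                 # train the crack classifier
print("held-out accuracy:", clf.score(X_te, y_te))
```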
Knee cartilage extraction and bone-cartilage interface analysis from 3D MRI data sets
NASA Astrophysics Data System (ADS)
Tamez-Pena, Jose G.; Barbu-McInnis, Monica; Totterman, Saara
2004-05-01
This work presents a robust methodology for the analysis of the knee joint cartilage and the knee bone-cartilage interface from fused MRI sets. The proposed approach starts by fusing a set of two 3D MR images of the knee. Although the proposed method is not pulse-sequence dependent, the first sequence should be programmed to achieve good contrast between bone and cartilage. The recommended second pulse sequence is one that maximizes the contrast between cartilage and surrounding soft tissues. Once both pulse sequences are fused, the proposed bone-cartilage analysis is done in four major steps. First, an unsupervised segmentation algorithm is used to extract the femur, the tibia, and the patella. Second, a knowledge-based feature extraction algorithm is used to extract the femoral, tibial and patellar cartilages. Third, a trained user corrects cartilage misclassifications made by the automated cartilage extraction. Finally, the segmentation is revisited using an unsupervised MAP voxel relaxation algorithm. This final segmentation has the property that it includes the extracted bone tissue as well as all the cartilage tissue. This is an improvement over previous approaches, in which only the cartilage was segmented. Furthermore, this approach yields very reproducible segmentation results in a set of scan-rescan experiments. When these segmentations were coupled with a partial-volume-compensated surface extraction algorithm, the volume, area and thickness measurements showed precisions of around 2.6%.
NASA Technical Reports Server (NTRS)
Volz, R. A.; Shao, L.; Walker, M. W.; Conway, L. A.
1989-01-01
The object localization algorithm based on line-segment matching is presented. The method is very simple and computationally fast. In most cases, closed-form formulas are used to derive the solution. The method is also quite flexible, because only a few surfaces (one or two) need to be accessed (sensed) to gather the necessary range data. For example, if the line segments are extracted from the boundaries of a planar surface, only the parameters of one surface and two of its boundaries need to be extracted, compared with traditional point-surface matching or line-surface matching algorithms, which need to access at least three surfaces in order to locate a planar object. Therefore, this method is especially suitable for applications where an object is surrounded by many other work pieces and most of the object is very difficult, if not impossible, to measure, or where not all parts of the object can be reached. The theoretical ground for using a line range sensor to locate an object was laid. Much work remains to be done for this to be really useful.
Combined distributed and concentrated transducer network for failure indication
NASA Astrophysics Data System (ADS)
Ostachowicz, Wieslaw; Wandowski, Tomasz; Malinowski, Pawel
2010-03-01
In this paper, an algorithm for discontinuity localisation in thin panels made of aluminium alloy is presented. The algorithm uses Lamb wave propagation for discontinuity localisation. Elastic waves were generated and received using piezoelectric transducers, arranged in concentrated arrays distributed on the specimen surface, so that almost the whole specimen could be monitored using this combined distributed-concentrated transducer network. The excited elastic waves propagate and reflect from the panel boundaries and from discontinuities existing in the panel. Wave reflections were registered by the piezoelectric transducers and used in a signal processing algorithm. The proposed processing algorithm consists of two parts: signal filtering and extraction of obstacle locations. The first part was used to enhance the signals by removing noise from them. The second part allowed the extraction of features connected with wave reflections from discontinuities. The extracted features were the basis for creating damage influence maps, which indicate the intensity of elastic wave reflections and thus correspond to obstacle coordinates. The described signal processing algorithms were implemented in the MATLAB environment. It should be underlined that the results presented in this work are based only on experimental signals.
Automatic analysis and classification of surface electromyography.
Abou-Chadi, F E; Nashar, A; Saad, M
2001-01-01
In this paper, parametric modeling algorithms for surface electromyography (SEMG) that facilitate automatic feature extraction are combined with artificial neural networks (ANN) to provide an integrated system for the automatic analysis and diagnosis of myopathic disorders. Three ANN paradigms were investigated: the multilayer backpropagation algorithm, the self-organizing feature map algorithm and a probabilistic neural network model. The performance of the three classifiers was compared with that of the classical Fisher linear discriminant (FLD) classifier. The results show that the three ANN models give higher performance, with the percentage of correct classification reaching 90%. Poorer diagnostic performance was obtained from the FLD classifier. The system presented here indicates that surface EMG, when properly processed, can be used to provide the physician with a diagnostic assist device.
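For illustration, one of the three compared paradigms (a multilayer backpropagation network) set up with scikit-learn; the layer size and training settings are assumptions, and `X`/`y` would hold the extracted SEMG model parameters and diagnostic labels:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A multilayer network trained by backpropagation (SGD); the parametric
# SEMG features feed in as rows of X, one per signal epoch.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(20,), solver="sgd",
                  learning_rate_init=0.01, max_iter=2000, random_state=0),
)
# With feature matrix X and class labels y available:
# clf.fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```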
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
Shrink-wrapped isosurface from cross sectional images
Choi, Y. K.; Hahn, J. K.
2010-01-01
This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross sectional images. Unlike the well-known Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density points (isopoints) first. After building a coarse initial mesh approximating the ideal isosurface by the cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme, called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not produce any cracks on the surface. Furthermore, since it is possible to utilize many additional isopoints during the surface reconstruction process by extending the adjacency definition, the resulting surface can theoretically be better in quality than that of the MC algorithm. According to experiments, it proves to be very robust and efficient for isosurface reconstruction from cross sectional images. PMID:20703361
Plasma spectrum peak extraction algorithm of laser film damage
NASA Astrophysics Data System (ADS)
Zhao, Dan; Su, Jun-hong; Xu, Jun-qi
2012-10-01
Plasma spectrometry is an emerging method for identifying laser damage to thin films. When the laser irradiates the film surface, a flash occurs; a spectrometer receives the flash spectrum, the spectral peaks are extracted, and the known differences between the spectra of the thin-film materials and of the atmosphere are used as the standard for determining film damage. Plasma spectrometry can eliminate the misjudgments caused by atmospheric flashes and discriminates with high accuracy. The spectral peak extraction algorithm is the key technology of plasma spectrometry. In this paper, data denoising and smoothing filtering are introduced first; then, for peak detection, a data grouping method is proposed, which increases the stability and accuracy of spectral peak recognition. This algorithm enables simultaneous measurement by plasma spectrometry to detect thin-film laser damage, and it greatly improves work efficiency.
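A minimal sketch of the smoothing-then-peak-picking pipeline; the Savitzky-Golay settings and prominence threshold are illustrative stand-ins for the paper's grouped-data peak criterion:

```python
import numpy as np
from scipy.signal import find_peaks, savgol_filter

def spectral_peaks(wavelength, counts, window=11, poly=3,
                   min_prominence=50.0):
    """Denoise/smooth a flash spectrum and pick its peaks; returns the
    peak wavelengths and smoothed peak heights."""
    smooth = savgol_filter(counts, window, poly)        # smoothing filter
    idx, _ = find_peaks(smooth, prominence=min_prominence)
    return wavelength[idx], smooth[idx]                 # peak positions/heights
```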
Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging
Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S.; Cho, Hyunjeong; Cho, Byoung-Kwan
2015-01-01
Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single waveband algorithm and multi-spectral algorithms, were developed in order to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400–1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557–701 nm, respectively, exhibited better classification performances compared to results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce. PMID:26610510
Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging.
Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S; Cho, Hyunjeong; Cho, Byoung-Kwan
2015-11-20
Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single waveband algorithm and multi-spectral algorithms, were developed in order to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400-1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557-701 nm, respectively, exhibited better classification performances compared to results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce.
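The two published band pairs are easy to express directly; a small sketch computing the RI and SI images from a hyperspectral cube of shape (rows, cols, bands), with illustrative (not published) decision thresholds:

```python
import numpy as np

def discoloration_masks(cube, wavelengths, thresh_ri=1.0, thresh_si=0.0):
    """Ratio (552/701 nm) and subtraction (557-701 nm) imaging, as in
    the abstract.  `wavelengths` is a 1-D array of band-center
    wavelengths in nm; thresholds are illustrative assumptions."""
    band = lambda nm: cube[..., np.argmin(np.abs(wavelengths - nm))]
    ri = band(552) / (band(701) + 1e-9)     # ratio image, 552/701 nm
    si = band(557) - band(701)              # subtraction image, 557-701 nm
    return ri > thresh_ri, si > thresh_si   # candidate discoloration masks
```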
Measuring river from the cloud - River width algorithm development on Google Earth Engine
NASA Astrophysics Data System (ADS)
Yang, X.; Pavelsky, T.; Allen, G. H.; Donchyts, G.
2017-12-01
Rivers are some of the most dynamic features of the terrestrial land surface. They help distribute freshwater, nutrients, and sediment, and they are also responsible for some of the greatest natural hazards. Despite their importance, our understanding of river behavior is limited at the global scale, in part because we do not have a river observational dataset that spans both time and space. Remote sensing data represent a rich, largely untapped resource for observing river dynamics. In particular, publicly accessible archives of satellite optical imagery, which date back to the 1970s, can be used to study the planview morphodynamics of rivers at the global scale. Here we present an image processing algorithm, developed on the Google Earth Engine cloud-based platform, that automatically extracts river centerlines and widths from Landsat 5, 7, and 8 scenes at 30 m resolution. Our algorithm makes use of the latest monthly global surface water history dataset and the existing Global River Width from Landsat (GRWL) dataset to efficiently extract river masks from each Landsat scene. A combination of distance transform and skeletonization techniques is then used to extract river centerlines. Finally, our algorithm calculates the wetted river width at each centerline pixel, perpendicular to its local centerline direction. We validated this algorithm using in situ data estimated from 16 USGS gauge stations (N=1781). We find that 92% of the width differences are within 60 m (i.e. the minimum length of 2 Landsat pixels). Leveraging Earth Engine's infrastructure of collocated data and processing power, our goal is to use this algorithm to reconstruct the morphodynamic history of rivers globally by processing over 100,000 Landsat 5 scenes covering 1984 to 2013.
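A simplified sketch of the centerline-and-width step: skeletonize the river mask and take twice the distance-transform value at each centerline pixel. This assumes width equals twice the centerline-to-bank distance, rather than the per-pixel perpendicular measurement described above:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def river_widths(water_mask, pixel_size=30.0):
    """Approximate wetted width along the centerline of a binary river
    mask; pixel_size defaults to 30 m Landsat pixels."""
    dist = distance_transform_edt(water_mask)       # distance to nearest bank
    centerline = skeletonize(water_mask.astype(bool))
    widths = 2.0 * dist[centerline] * pixel_size    # width estimate in metres
    return centerline, widths
```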
Processing LiDAR Data to Predict Natural Hazards
NASA Technical Reports Server (NTRS)
Fairweather, Ian; Crabtree, Robert; Hager, Stacey
2008-01-01
ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.
Luo, Y.; Xu, Y.; Liu, Q.; Xia, J.
2008-01-01
In recent years, multichannel analysis of surface waves (MASW) has been increasingly used for obtaining vertical shear-wave velocity profiles within near-surface materials. MASW uses a multichannel recording approach to capture the time-variant, full-seismic wavefield where dispersive surface waves can be used to estimate near-surface S-wave velocity. The technique consists of (1) acquisition of broadband, high-frequency ground roll using a multichannel recording system; (2) efficient and accurate algorithms that allow the extraction and analysis of 1D Rayleigh-wave dispersion curves; (3) stable and efficient inversion algorithms for estimating S-wave velocity profiles; and (4) construction of the 2D S-wave velocity field map.
NASA Astrophysics Data System (ADS)
Korzeniowska, Karolina; Mandlburger, Gottfried; Klimczyk, Agata
2013-04-01
The paper presents an evaluation of different terrain point extraction algorithms for Airborne Laser Scanning (ALS) point clouds. The research area covers eight test sites in the Małopolska Province (Poland) with point densities varying between 3-15 points/m² and diverse surface and land cover characteristics. In this paper, existing implementations of the algorithms were considered. Approaches based on mathematical morphology, progressive densification, robust surface interpolation and segmentation were compared. From the group of morphological filters, the Progressive Morphological Filter (PMF) proposed by Zhang et al. (2003), as implemented in LIS software, was evaluated. From the progressive densification methods developed by Axelsson (2000), Martin Isenburg's implementation in LAStools software (LAStools, 2012) was chosen. The third group of methods are surface-based filters; in this study, we used the hierarchic robust interpolation approach by Kraus and Pfeifer (1998) as implemented in SCOP++ (Trimble, 2012). The fourth group of methods works on segmentation; from this filtering concept, the segmentation algorithm available in LIS was tested (Wichmann, 2012). The automatic ground classification was primarily run in default mode or with the default parameters selected by the developers of the algorithms, on the assumption that the default settings are those with which the best results can be achieved. Where it was not possible to apply an algorithm in default mode, a combination of the available parameters most crucial for ground extraction was selected. As a result of these analyses, several output LAS files with different ground classifications were obtained. The results were described on the basis of qualitative and quantitative analyses, both in a formal description. The classification differences were verified on the point cloud data. Qualitative verification of ground extraction was based on visual inspection of the results (Sithole and Vosselman, 2004; Meng et al., 2010), with the outcomes summarized in a weighted graph. The quantitative analyses were evaluated on the basis of Type I, Type II and Total errors (Sithole and Vosselman, 2003). The achieved results show that the analysed algorithms yield different classification accuracies depending on the landscape and land cover. The simplest terrain for ground extraction was flat rural area with sparse vegetation; the most difficult were mountainous areas with very dense vegetation where only a few ground points were available. Generally, the LAStools algorithm gives good results in every type of terrain, but the ground surface is too smooth. The LIS Progressive Morphological Filter gives good results in forested flat and low-slope areas. The surface-based algorithm from SCOP++ gives good results in mountainous areas, both forested and built-up, because it better preserves steep slopes, sharp ridges and breaklines, but sometimes it fails to remove off-terrain objects from the ground class. The segmentation-based algorithm in LIS gives quite good results in built-up flat areas, but it does not work well in forested areas.
Bibliography:
Axelsson, P., 2000. DEM generation from laser scanner data using adaptive TIN models. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXIII (Pt. B4/1), 110-117.
Kraus, K., Pfeifer, N., 1998. Determination of terrain models in wooded areas with airborne laser scanner data. ISPRS Journal of Photogrammetry & Remote Sensing 53 (4), 193-203.
LAStools website: http://www.cs.unc.edu/~isenburg/lastools/ (verified September 2012).
Meng, X., Currit, N., Zhao, K., 2010. Ground Filtering Algorithms for Airborne LiDAR Data: A Review of Critical Issues. Remote Sensing 2, 833-860.
Sithole, G., Vosselman, G., 2003. Report: ISPRS Comparison of Filters. Commission III, Working Group 3. Department of Geodesy, Faculty of Civil Engineering and Geosciences, Delft University of Technology, The Netherlands.
Sithole, G., Vosselman, G., 2004. Experimental comparison of filter algorithms for bare-Earth extraction from airborne laser scanning point clouds. ISPRS Journal of Photogrammetry & Remote Sensing 59, 85-101.
Trimble, 2012. http://www.trimble.com/geospatial/aerial-software.aspx (verified November 2012).
Wichmann, V., 2012. LIS Command Reference. LASERDATA GmbH, 1-231.
Zhang, K., Chen, S.-C., Whitman, D., Shyu, M.-L., Yan, J., Zhang, C., 2003. A progressive morphological filter for removing non-ground measurements from airborne LIDAR data. IEEE Transactions on Geoscience and Remote Sensing 41 (4), 872-882.
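For orientation, the core loop of a progressive morphological filter in the spirit of Zhang et al. (2003) can be sketched on a gridded minimum-elevation raster. This is a toy version: the window progression and threshold parameters below are illustrative defaults, not those of the evaluated LIS implementation.

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(zmin, cell=1.0, slope=0.3,
                                     dh0=0.3, dh_max=2.5, max_win=16):
    """Toy PMF on a NaN-free gridded minimum-elevation raster `zmin`.
    Cells whose elevation drops by more than a window-dependent
    threshold after a morphological opening are flagged non-ground."""
    ground = np.ones_like(zmin, dtype=bool)
    surface = zmin.copy()
    win = 3
    while win <= max_win:
        opened = grey_opening(surface, size=(win, win))
        dh = min(dh0 + slope * (win - 1) * cell, dh_max)
        ground &= (surface - opened) <= dh   # large drop => object, not ground
        surface = opened
        win = 2 * win - 1                    # grow the window progressively
    return ground
```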
Vision-based surface defect inspection for thick steel plates
NASA Astrophysics Data System (ADS)
Yun, Jong Pil; Kim, Dongseob; Kim, KyuHwan; Lee, Sang Jun; Park, Chang Hyun; Kim, Sang Woo
2017-05-01
There are several types of steel products, such as wire rods, cold-rolled coils, hot-rolled coils, thick plates, and electrical sheets. Surface stains on cold-rolled coils are considered defects, whereas surface stains on thick plates are not. A conventional optical structure is composed of a camera and a lighting module. Here, a defect inspection system is proposed that uses a dual lighting structure to distinguish uneven defects from color changes caused by surface noise. In addition, an image processing algorithm that can be used to detect defects is presented. The algorithm consists of a Gabor filter that detects the switching pattern and a binarization method that extracts the shape of the defect. The optics module and detection algorithm, optimized using a simulator, were installed at a real plant, and experimental results on thick steel plate images obtained from the steel production line show the effectiveness of the proposed method.
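A minimal sketch of the Gabor-filter-plus-binarization stage is given below, assuming OpenCV and illustrative filter parameters; the paper's actual parameters are optimized with its simulator.

```python
import cv2
import numpy as np

def gabor_defect_mask(gray, ksize=31, sigma=4.0, lambd=10.0,
                      gamma=0.5, thresh=40):
    """Gabor filtering over several orientations followed by
    binarization; all parameter values here are illustrative."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):   # 4 orientations
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                  lambd, gamma, 0, ktype=cv2.CV_32F)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kern))
    energy = np.max(np.abs(np.stack(responses)), axis=0)
    energy = cv2.normalize(energy, None, 0, 255, cv2.NORM_MINMAX)
    _, mask = cv2.threshold(energy.astype(np.uint8), thresh, 255,
                            cv2.THRESH_BINARY)
    return mask
```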
Vision-based measurement for rotational speed by improving Lucas-Kanade template tracking algorithm.
Guo, Jie; Zhu, Chang'an; Lu, Siliang; Zhang, Dashan; Zhang, Chunyu
2016-09-01
Rotational angle and speed are important parameters for condition monitoring and fault diagnosis of rotating machinery, and their measurement is useful in precision machining and early warning of faults. In this study, a novel vision-based measurement algorithm is proposed to complete this task. A high-speed camera is first used to capture video of the rotating object. To extract the rotational angle, the template-based Lucas-Kanade algorithm is introduced to complete motion tracking by aligning the template image in the video sequence. Given the special case of the nonplanar surface of a cylindrical object, a nonlinear transformation is designed to model the rotation tracking. Despite its unconventional and complex form, the transformation can realize angle extraction concisely with only one parameter. A simulation is then conducted to verify the tracking effect, and a practical tracking strategy is further proposed to track the video sequence consecutively. Based on the proposed algorithm, instantaneous rotational speed (IRS) can be measured accurately and efficiently. Finally, the effectiveness of the proposed algorithm is verified on a brushless direct-current motor test rig through comparison with results obtained by a microphone. Experimental results demonstrate that the proposed algorithm can accurately extract rotational angles and can measure IRS with the advantages of noncontact operation and effectiveness.
An Algorithm of Association Rule Mining for Microbial Energy Prospection
Shaheen, Muhammad; Shahbaz, Muhammad
2017-01-01
The presence of hydrocarbons beneath the Earth's surface produces microbiological anomalies in soils and sediments. The detection of such microbial populations involves purely biochemical processes that are specialized, expensive, and time consuming. This paper proposes a new algorithm for context-based association rule mining on non-spatial data. The algorithm is a modified form of a previously developed algorithm that applied to spatial databases only. The algorithm is applied to mine context-based association rules from a microbial database to extract interesting and useful associations of microbial attributes with the existence of hydrocarbon reserves. The surface and soil manifestations caused by the presence of hydrocarbon-oxidizing microbes are selected from existing literature and stored in a shared database. The algorithm is applied to the said database to generate direct and indirect associations among the stored microbial indicators. These associations are then correlated with the probability of hydrocarbon existence. The numerical evaluation shows better accuracy for non-spatial data compared with conventional algorithms at generating reliable and robust rules. PMID:28393846
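The support/confidence core of such rule mining can be sketched in a few lines of plain Python. This is a generic miner, not the paper's context-based variant, and the transaction encoding of microbial indicators is hypothetical.

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.3, min_conf=0.7):
    """Minimal support/confidence rule miner over sets of indicator
    names; a stand-in for the paper's context-based algorithm."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    rules = []
    for k in (2, 3):
        for combo in combinations(items, k):
            s = support(set(combo))
            if s < min_support:
                continue
            for a in combo:                     # rule: rest -> a
                rest = set(combo) - {a}
                conf = s / support(rest)
                if conf >= min_conf:
                    rules.append((rest, a, s, conf))
    return rules

# e.g. transactions = [{"methanotrophs", "reserve"}, {"methanotrophs"}, ...]
```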
Brain segmentation and the generation of cortical surfaces
NASA Technical Reports Server (NTRS)
Joshi, M.; Cui, J.; Doolittle, K.; Joshi, S.; Van Essen, D.; Wang, L.; Miller, M. I.
1999-01-01
This paper describes methods for white matter segmentation in brain images and the generation of cortical surfaces from the segmentations. We have developed a system that allows a user to start with a brain volume, obtained by modalities such as MRI or cryosection, and construct a complete digital representation of the cortical surface. The methodology consists of three basic components: local parametric modeling and Bayesian segmentation; surface generation and local quadratic coordinate fitting; and surface editing. Segmentations are computed by parametrically fitting known density functions to the histogram of the image using the expectation maximization algorithm [DLR77]. The parametric fits are obtained locally rather than globally over the whole volume to overcome local variations in gray levels. To represent the boundary of the gray and white matter, we use triangulated meshes generated using isosurface generation algorithms [GH95]. A complete system of local parametric quadratic charts [JWM+95] is superimposed on the triangulated graph to facilitate smoothing and geodesic curve tracking. Algorithms for surface editing include extraction of the largest closed surface. Results for several macaque brains are presented comparing automated and hand surface generation. Copyright 1999 Academic Press.
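The local EM fitting step can be approximated with an off-the-shelf Gaussian mixture. Below is a sketch assuming three intensity classes per block, with scikit-learn's EM-based GaussianMixture standing in for the paper's local parametric fits.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def local_em_segmentation(block, n_classes=3):
    """Fit a Gaussian mixture (EM under the hood) to the intensities
    of one local block and return per-voxel class labels, ordered by
    class mean; applying this per block mimics local rather than
    global parametric fits."""
    x = block.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(x)
    order = np.argsort(gmm.means_.ravel())   # component index -> rank
    labels = np.argsort(order)[gmm.predict(x)]
    return labels.reshape(block.shape)       # 0 = darkest class
```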
Pediatric Brain Extraction Using Learning-based Meta-algorithm
Shi, Feng; Wang, Li; Dai, Yakang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2012-01-01
Magnetic resonance imaging of the pediatric brain provides valuable information for early brain development studies. Automated brain extraction is challenging due to the small brain size and the dynamic change of tissue contrast in developing brains. In this paper, we propose a novel Learning Algorithm for Brain Extraction and Labeling (LABEL) designed specifically for pediatric MR brain images. The idea is to perform multiple complementary brain extractions on a given testing image by using a meta-algorithm, including BET and BSE, where the parameters of each run of the meta-algorithm are effectively learned from the training data. Also, representative subjects are selected as exemplars and used to guide brain extraction of new subjects in different age groups. We further develop a level-set based fusion method to combine multiple brain extractions together with a closed smooth surface for obtaining the final extraction. The proposed method has been extensively evaluated in subjects of three representative age groups, namely neonates (less than 2 months), infants (1-2 years), and children (5-18 years). Experimental results show that, with 45 subjects for training (15 neonates, 15 infants, and 15 children), the proposed method can produce more accurate brain extraction results on 246 testing subjects (75 neonates, 126 infants, and 45 children), i.e., an average Jaccard Index of 0.953, compared to those by BET (0.918), BSE (0.902), ROBEX (0.901), GCUT (0.856), and other fusion methods such as Majority Voting (0.919) and STAPLE (0.941). Along with largely improved computational efficiency, the proposed method demonstrates its ability to perform automated brain extraction for pediatric MR images over a large age range. PMID:22634859
NASA Astrophysics Data System (ADS)
Hori, Yasuaki; Yasuno, Yoshiaki; Sakai, Shingo; Matsumoto, Masayuki; Sugawara, Tomoko; Madjarova, Violeta; Yamanari, Masahiro; Makita, Shuichi; Yasui, Takeshi; Araki, Tsutomu; Itoh, Masahide; Yatagai, Toyohiko
2006-03-01
A set of fully automated algorithms specialized for analyzing a three-dimensional optical coherence tomography (OCT) volume of human skin is reported. The algorithm set first determines the skin surface of the OCT volume, and a depth-oriented algorithm provides the mean epidermal thickness, a distribution map of the epidermis, and a segmented volume of the epidermis. Subsequently, an en face shadowgram is produced by an algorithm to visualize the infundibula in the skin with high contrast. The population and occupation ratio of the infundibula are provided by a histogram-based thresholding algorithm and a distance mapping algorithm. En face OCT slices at constant depths from the sample surface are extracted, and the histogram-based thresholding algorithm is again applied to these slices, yielding a three-dimensional segmented volume of the infundibula. The dermal attenuation coefficient is also calculated from the OCT volume in order to evaluate the skin texture. The algorithm set examines swept-source OCT volumes of the skin of several volunteers, and the results show the high stability, portability and reproducibility of the algorithm.
An Automatic Registration Algorithm for 3D Maxillofacial Model
NASA Astrophysics Data System (ADS)
Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng
2016-09-01
3D image registration aims at aligning two 3D data sets in a common coordinate system and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
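Step (3), the ICP refinement, can be sketched in pure numpy/scipy as nearest-neighbour correspondence plus the SVD (Kabsch) rigid fit; the sketch assumes the SAC-IA stage has already produced a coarse alignment of `src` to `dst`.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(src, dst, iters=50):
    """Point-to-point ICP refinement of Nx3 point arrays; returns the
    accumulated rotation R and translation t mapping src onto dst."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)               # nearest-neighbour matches
        q = dst[idx]
        mu_p, mu_q = cur.mean(0), q.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_p).T @ (q - mu_q))
        Ri = Vt.T @ U.T                        # Kabsch rigid rotation
        if np.linalg.det(Ri) < 0:              # avoid reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_q - Ri @ mu_p
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti
    return R, t
```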
Object recognition and pose estimation of planar objects from range data
NASA Technical Reports Server (NTRS)
Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael
1994-01-01
The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on orientations of extracted planes and their intersection points. This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and noise.
Spatial-time-state fusion algorithm for defect detection through eddy current pulsed thermography
NASA Astrophysics Data System (ADS)
Xiao, Xiang; Gao, Bin; Woo, Wai Lok; Tian, Gui Yun; Xiao, Xiao Ting
2018-05-01
Eddy Current Pulsed Thermography (ECPT) has received extensive attention due to its high sensitivity in detecting surface and subsurface cracks. However, identifying defects without any prior knowledge remains a difficult challenge for unsupervised detection. This paper presents a spatial-time-state feature fusion algorithm to obtain a full profile of the defects by directional scanning. The proposed method conducts feature extraction using independent component analysis (ICA) and automatic feature selection embedding a genetic algorithm. Finally, the optimal feature of each step is fused to reconstruct the defects by applying the common orthogonal basis extraction (COBE) method. Experiments have been conducted to validate the study and verify the efficacy of the proposed method for blind defect detection.
Multi-Scale Voxel Segmentation for Terrestrial Lidar Data within Marshes
NASA Astrophysics Data System (ADS)
Nguyen, C. T.; Starek, M. J.; Tissot, P.; Gibeaut, J. C.
2016-12-01
The resilience of marshes to a rising sea is dependent on their elevation response. Terrestrial laser scanning (TLS) is a detailed topographic approach for accurate, dense surface measurement with high potential for monitoring marsh surface elevation response. The dense point cloud provides a 3D representation of the surface, which includes both terrain and non-terrain objects. Extraction of topographic information requires filtering of the data into like groups or classes; therefore, methods must be incorporated to identify structure in the data prior to creation of an end product. A voxel representation of three-dimensional space provides quantitative visualization and analysis for pattern recognition. The objectives of this study are threefold: 1) apply a multi-scale voxel approach to effectively extract geometric features from the TLS point cloud data, 2) investigate the utility of K-means and Self-Organizing Map (SOM) clustering algorithms for segmentation, and 3) utilize a variety of validity indices to measure the quality of the result. TLS data were collected at a marsh site along the central Texas Gulf Coast using a Riegl VZ-400 TLS. The site consists of both exposed and vegetated surface regions. To characterize the structure of the point cloud, octree segmentation is applied to create a tree data structure of voxels containing the points. The flexibility of voxels in size and point density makes this algorithm a promising candidate for locally extracting statistical and geometric features of the terrain, including surface normal and curvature. Characteristics of the voxel itself, such as volume and point density, are also computed and assigned to each point, as are laser pulse characteristics. The features extracted from the voxelization are then used as input for clustering of the points using the K-means and SOM clustering algorithms. The optimal number of clusters is then determined based on evaluation of cluster separability criteria. Results for different combinations of the feature space vector and differences between K-means and SOM clustering will be presented. The developed method provides a novel approach for compressing TLS scene complexity in marshes, such as for vegetation biomass studies or erosion monitoring.
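The per-voxel feature extraction feeding the clustering stage might look like the following sketch; the voxel size and the particular features (point density, normal orientation, roughness from the local covariance eigenvalues) are illustrative choices rather than the study's exact set.

```python
import numpy as np
from sklearn.cluster import KMeans

def voxel_features(points, voxel=0.25):
    """Per-voxel features from an Nx3 point cloud: point count,
    vertical component of the surface normal (smallest-eigenvalue
    eigenvector of the local covariance), and a roughness ratio."""
    keys = np.floor(points / voxel).astype(int)
    feats = []
    for key in np.unique(keys, axis=0):
        pts = points[(keys == key).all(1)]
        if len(pts) < 5:                       # too sparse to fit a plane
            continue
        w, v = np.linalg.eigh(np.cov(pts.T))   # ascending eigenvalues
        normal_z = abs(v[:, 0][2])
        roughness = w[0] / (w.sum() + 1e-12)
        feats.append([len(pts), normal_z, roughness])
    return np.asarray(feats)

# labels = KMeans(n_clusters=4, n_init=10).fit_predict(voxel_features(pc))
```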
Adaptive optics based non-null interferometry for optical free form surfaces test
NASA Astrophysics Data System (ADS)
Zhang, Lei; Zhou, Sheng; Li, Jingsong; Yu, Benli
2018-03-01
An adaptive optics based non-null interferometry (ANI) method is proposed for optical free-form surface testing, in which an open-loop deformable mirror (DM) is employed as a reflective compensator to flexibly compensate various low-order aberrations. The residual wavefront aberration is treated by the multi-configuration ray tracing (MCRT) algorithm, which is based on simultaneous ray tracing through multiple system models, each with a different DM surface deformation. With the MCRT algorithm, the final figure error can be extracted, together with correction of the surface misalignment aberration, after the initial system calibration. Flexible testing of free-form surfaces is thus achieved with high accuracy, without an auxiliary device for monitoring the DM deformation. Experiments proving the feasibility, repeatability and high accuracy of the ANI were carried out on a bi-conic surface and a paraboloidal surface, with a highly stable ALPAO DM88. The accuracy of the final test result of the paraboloidal surface was better than λ/20 (PV). It is a successful attempt in the field of flexible optical free-form surface metrology and has enormous potential for future application as DM technology develops.
Karimi, Mohammad H; Asemani, Davud
2014-05-01
Ceramic and tile industries must include a grading stage to quantify the quality of products. In practice, human inspection is often used for grading, so an automatic grading system is essential to enhance quality control and marketing of the products. Since there generally exist six different types of defects, originating from various stages of tile manufacturing lines and having distinct textures and morphologies, many image processing techniques have been proposed for defect detection. In this paper, a survey is made of the pattern recognition and image processing algorithms that have been used to detect surface defects. Each method appears to be limited to detecting some subgroup of defects. The detection techniques may be divided into three main groups: statistical pattern recognition, feature vector extraction and texture/image classification. Methods such as the wavelet transform, filtering, morphology and the contourlet transform are more effective for pre-processing tasks, while others, including statistical methods, neural networks and model-based algorithms, can be applied to extract the surface defects. Statistical methods are often appropriate for identifying large defects such as spots, whereas techniques such as wavelet processing provide an acceptable response for detecting small defects such as pinholes. A thorough survey is made of the existing algorithms in each subgroup. The evaluation parameters are also discussed, including supervised and unsupervised parameters. Using various performance parameters, different defect detection algorithms are compared and evaluated. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ban, Yifang; Gong, Peng; Gamba, Paolo; Taubenbock, Hannes; Du, Peijun
2016-08-01
The overall objective of this research is to investigate multi-temporal, multi-scale, multi-sensor satellite data for analysis of urbanization and environmental/climate impact in China to support sustainable planning. Multi-temporal, multi-scale SAR and optical data have been evaluated for urban information extraction using innovative methods and algorithms, including the KTH-Pavia Urban Extractor, Pavia UEXT, and an "exclusion-inclusion" framework for urban extent extraction, as well as KTH-SEG, a novel object-based classification method for detailed urban land cover mapping. Various pixel-based and object-based change detection algorithms were also developed to extract urban changes. Several Chinese cities, including Beijing, Shanghai and Guangzhou, are selected as study areas. Spatio-temporal urbanization patterns and environmental impact at the regional, metropolitan and city-core levels were evaluated through ecosystem services, landscape metrics, spatial indices, and/or their combinations. The relationship between land surface temperature and land-cover classes was also analyzed. The urban extraction results showed that urban areas and small towns could be well extracted using multitemporal SAR data with the KTH-Pavia Urban Extractor and UEXT. The fusion of SAR data at multiple scales from multiple sensors was proven to improve urban extraction. For urban land cover mapping, the results show that the fusion of multitemporal SAR and optical data could produce detailed land cover maps with improved accuracy compared with SAR or optical data alone. The pixel-based and object-based change detection algorithms developed within the project were effective at extracting urban changes. Comparing the urban land cover results from multitemporal multisensor data, the environmental impact analysis indicates major losses in food supply, noise reduction, runoff mitigation, waste treatment and global climate regulation services through landscape structural changes, in terms of decreases in service area, edge contamination and fragmentation. In terms of climate impact, the results indicate that land surface temperature can be related to land use/land cover classes.
NASA Astrophysics Data System (ADS)
Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid
2017-10-01
Identification of water extents in Landsat images is challenging due to surfaces with reflectance similar to that of water. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. To this end, seven Landsat images were selected from various environmental regions of Iran. Training of the algorithms was performed using 40 water pixels and 40 non-water pixels in Operational Land Imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire coefficients of optimal band combinations to extract water extents. Among the AI algorithms, the artificial neural network algorithm and the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. Index-based methods showed different performances in various regions. Among the AI methods, PSO had the best performance, with an average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicated the applicability of the acquired band combinations for accurately and stably extracting water extents in Landsat imagery.
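As an example of the index-based approach, McFeeters' NDWI, one of the common water indices of the kind the study compares, thresholds a normalized green/NIR difference; the 0.0 threshold below is a conventional default, not the study's tuned value.

```python
import numpy as np

def ndwi_water_mask(green, nir, thresh=0.0):
    """McFeeters NDWI water mask from Landsat green and NIR bands."""
    ndwi = (green - nir) / (green + nir + 1e-9)
    return ndwi > thresh
```

The AI approaches in the study generalize this idea by learning weighted combinations of all bands instead of a fixed two-band ratio.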
Yu, Li; Jin, Weifeng; Li, Xiaohong; Zhang, Yuyan
2018-01-01
The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from the Chinese herbal medicine Glycyrrhiza glabra. Based on the traditional single-variable approach, four extraction parameters, namely ammonia concentration, ethanol concentration, circumfluence time, and liquid-solid ratio, are adopted as the independent extraction variables. In the present work, a central composite design of four factors and five levels is applied to design the extraction experiments. Subsequently, prediction models based on response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the obtained experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the above well-established models. It is found that the optimal extraction conditions are an ammonia concentration of 0.595%, an ethanol concentration of 58.45%, a circumfluence time of 2.5 h, and a liquid-solid ratio of 11.065:1. Under these conditions, the model predictive value is 381.24 mg, the experimental average value is 376.46 mg, and the discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the independent extraction variables. Furthermore, it is demonstrated that the combined method of genetic algorithm and artificial neural networks provides a more reliable and more accurate strategy for the design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra. PMID:29887907
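The genetic-algorithm search over the four extraction variables can be sketched as a plain real-coded GA; `yield_model` below stands in for the trained ANN, and the variable bounds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
LO = np.array([0.2, 40.0, 1.0, 8.0])    # ammonia %, ethanol %, h, ratio
HI = np.array([1.0, 80.0, 4.0, 14.0])   # hypothetical bounds

def ga_maximize(yield_model, pop=40, gens=100, mut=0.1):
    """Real-coded GA maximizing a surrogate yield model over the
    four extraction variables (truncation selection, blend
    crossover, Gaussian mutation)."""
    x = rng.uniform(LO, HI, size=(pop, 4))
    for _ in range(gens):
        f = np.array([yield_model(xi) for xi in x])
        parents = x[np.argsort(f)[-pop // 2:]]        # keep best half
        kids = []
        while len(kids) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(4)
            child = w * a + (1 - w) * b               # blend crossover
            child += mut * (HI - LO) * rng.normal(size=4)
            kids.append(np.clip(child, LO, HI))
        x = np.vstack([parents, kids])
    return x[np.argmax([yield_model(xi) for xi in x])]
```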
Daytime Land Surface Temperature Extraction from MODIS Thermal Infrared Data under Cirrus Clouds
Fan, Xiwei; Tang, Bo-Hui; Wu, Hua; Yan, Guangjian; Li, Zhao-Liang
2015-01-01
Simulated data showed that cirrus clouds could lead to a maximum land surface temperature (LST) retrieval error of 11.0 K when using the generalized split-window (GSW) algorithm with a cirrus optical depth (COD) at 0.55 μm of 0.4 and in nadir view. A correction term, linear in the COD, was added to the GSW algorithm to extend it to cirrus cloudy conditions. The COD was acquired from a look-up table of the isolated cirrus bidirectional reflectance at 0.55 μm. Additionally, the slope k of the linear function was expressed as a multiple linear model of the top-of-atmosphere brightness temperatures of MODIS channels 31-34 and of the difference between the split-window channel emissivities. The simulated data showed that the LST error could be reduced from 11.0 to 2.2 K. The sensitivity analysis indicated that the total errors from all the uncertainties of input parameters, extension algorithm accuracy, and GSW algorithm accuracy were less than 2.5 K in nadir view. Finally, Great Lakes surface water temperatures measured by buoys showed that the retrieval accuracy of the GSW algorithm was improved by at least 1.5 K using the proposed extension algorithm for cirrus skies. PMID:25928059
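For reference, the extension has the shape of a standard generalized split-window retrieval plus an additive COD term. In the sketch below, the coefficient set and the slope k are placeholders that would be regressed from simulated data, as in the paper; the split-window form is the common Wan-and-Dozier-style parameterization, assumed here rather than quoted from the paper.

```python
def lst_cirrus_corrected(t31, t32, emis_mean, emis_diff, cod, coeffs, k):
    """Generalized split-window LST plus a linear cirrus correction.
    t31/t32: brightness temperatures (K); coeffs = (a0..a3, b1..b3)
    and slope k come from regression against simulated data."""
    a0, a1, a2, a3, b1, b2, b3 = coeffs
    e, de = emis_mean, emis_diff
    lst = (a0
           + (a1 + a2 * (1 - e) / e + a3 * de / e**2) * (t31 + t32) / 2
           + (b1 + b2 * (1 - e) / e + b3 * de / e**2) * (t31 - t32) / 2)
    return lst + k * cod          # additive COD correction term
```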
Road marking features extraction using the VIAPIX® system
NASA Astrophysics Data System (ADS)
Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.
2016-07-01
Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system for lane detection on marked urban roads and analysis of their features. The task is to georeference road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects on the road, the present algorithm enables us to examine these images automatically and rapidly and to obtain information on road markings, their surface condition, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate this algorithm and its robustness by applying it to a variety of relevant scenarios.
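The inverse perspective mapping step can be sketched with OpenCV's homography utilities; the four source points (image corners of a known flat road patch, ordered top-left, top-right, bottom-right, bottom-left) are assumed to be calibrated beforehand.

```python
import cv2
import numpy as np

def birds_eye_view(img, src_pts, out_size=(800, 1200)):
    """Inverse perspective mapping: warp a calibrated road patch to a
    top-down view in which markings keep their true geometry."""
    w, h = out_size
    dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(img, H, (w, h))

# white markings can then be isolated, e.g. by thresholding in HSV space
```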
NASA Astrophysics Data System (ADS)
Noh, Myoung-Jong; Howat, Ian M.
2018-02-01
The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
Genetic algorithms used for the optimization of light-emitting diodes and solar thermal collectors
NASA Astrophysics Data System (ADS)
Mayer, Alexandre; Bay, Annick; Gaouyat, Lucie; Nicolay, Delphine; Carletti, Timoteo; Deparis, Olivier
2014-09-01
We present a genetic algorithm (GA) we developed for the optimization of light-emitting diodes (LEDs) and solar thermal collectors. The surface of an LED can be covered by periodic structures whose geometrical and material parameters must be adjusted in order to maximize the extraction of light. The optimization of these parameters by the GA enabled us to obtain a light-extraction efficiency η of 11.0% from a GaN LED (for comparison, the flat material has a light-extraction efficiency η of only 3.7%). The solar thermal collector we considered consists of a waffle-shaped Al substrate with NiCrOx and SnO2 conformal coatings. In this case, we must maximize the solar absorption α while minimizing the thermal emissivity ɛ in the infrared, so a multi-objective genetic algorithm has to be implemented in order to determine optimal geometrical parameters. The parameters we obtained using the multi-objective GA enable α~97.8% and ɛ~4.8%, improving on results achieved previously with a flat substrate. These two applications demonstrate the value of genetic algorithms for addressing complex problems in physics.
NASA Astrophysics Data System (ADS)
Darlow, Luke Nicholas; Connan, James
2015-11-01
Surface fingerprint scanners are limited to a two-dimensional representation of the fingerprint topography and are thus vulnerable to fingerprint damage, distortion, and counterfeiting. Optical coherence tomography (OCT) scanners are able to image, in three dimensions, the internal structure of the fingertip skin, and techniques for obtaining the internal fingerprint from OCT scans have since been developed. This research presents an internal fingerprint extraction algorithm designed to extract high-quality internal fingerprints from touchless OCT fingertip scans. Furthermore, it serves as a correlation study between surface and internal fingerprints. Provided the scanned region contains sufficient fingerprint information, correlation to the surface topography is shown to be good (74% have true matches). The cross-correlation of internal fingerprints (96% have true matches) is sufficiently high that internal fingerprints can constitute a fingerprint database. The internal fingerprints' performance was also compared with that of cropped surface counterparts, to eliminate bias owing to the level of information present, showing that the internal fingerprints' performance is superior 63.6% of the time.
Automated real-time search and analysis algorithms for a non-contact 3D profiling system
NASA Astrophysics Data System (ADS)
Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.
2013-04-01
The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members, and the geometry of the wire is critical to the performance of the overall concrete structure. For this research, a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time provides significant opportunities for cost savings in both equipment protection and waste minimization.
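The template-fitting core, downhill simplex over a parametric geometry template, can be sketched with scipy's Nelder-Mead implementation; the template model and starting parameters here are hypothetical placeholders, not the paper's wire-geometry templates.

```python
import numpy as np
from scipy.optimize import minimize

def fit_template(profile, x, template, p0=(0.0, 1.0, 0.5)):
    """Downhill-simplex (Nelder-Mead) fit of a parametric template
    to a measured 1D surface profile; returns the best parameters
    and the residual sum of squares."""
    def cost(params):
        return np.sum((profile - template(x, params)) ** 2)
    res = minimize(cost, x0=np.array(p0), method="Nelder-Mead")
    return res.x, res.fun

# e.g. template = lambda x, p: p[0] + p[1] * np.exp(-(x - p[2]) ** 2)
```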
Estimating Advective Near-surface Currents from Ocean Color Satellite Images
2015-01-01
of surface current information. The present study uses the sequential ocean color products provided by the Geostationary Ocean Color Imager (GOCI) and...on the Suomi National Polar-Orbiting Partnership (S-NPP) satellite. The GOCI is the world's first geostationary orbit satellite sensor over the...used to extract the near-surface currents by the MCC algorithm. We not only demonstrate the retrieval of currents from the geostationary satellite ocean
Online aptitude automatic surface quality inspection system for hot rolled strips steel
NASA Astrophysics Data System (ADS)
Lin, Jin; Xie, Zhi-jiang; Wang, Xue; Sun, Nan-Nan
2005-12-01
Defects on the surface of hot-rolled steel strips are a main factor in evaluating strip quality. An improved image recognition algorithm is used to extract the features of defects on the strip surface, and, based on machine vision and artificial neural networks, a defect recognition method is established to identify surface defects. On this basis, a surface inspection system with advanced image processing algorithms for hot-rolled strips was developed. Two different lighting configurations were prepared, and line-scan CCD cameras were adopted for on-line imaging of the strip surface. An on-line surface diagnosis capability was developed to analyze and evaluate defects such as iron oxide scale, damage, and stamp marks on both the upper and lower strip surfaces. The false-detection and missed-detection rates do not exceed 5%. The system has been applied at several large domestic steel enterprises, and experiments proved that this approach is feasible and effective.
Venugopal, G; Deepak, P; Ghosh, Diptasree M; Ramakrishnan, S
2017-11-01
Surface electromyography is a non-invasive technique used for recording the electrical activity of neuromuscular systems. These signals are random, complex and multi-component. There are several techniques to extract information about the force exerted by muscles during any activity. This work attempts to generate surface electromyography signals for various magnitudes of force under isometric non-fatigue and fatigue conditions using a feedback model. The model is based on existing current distribution, volume conductor relations, the feedback control algorithm for rate coding and generation of firing pattern. The result shows that synthetic surface electromyography signals are highly complex in both non-fatigue and fatigue conditions. Furthermore, surface electromyography signals have higher amplitude and lower frequency under fatigue condition. This model can be used to study the influence of various signal parameters under fatigue and non-fatigue conditions.
An investigation of the observability of ocean-surface parameters using GEOS-3 backscatter data
NASA Technical Reports Server (NTRS)
Miller, L. S.; Priester, R. W.
1978-01-01
The degree to which ocean surface roughness can be synoptically observed through use of the information extracted from the GEOS-3 backscattered waveform data was evaluated. Algorithms are given for use in estimating the radar sensed waveheight distribution or ocean-surface impulse response. Other factors discussed include comparisons between theoretical and experimental radar cross section values, sea state bias effects, spatial variability of significant waveheight data, and sensor-related considerations.
NASA Astrophysics Data System (ADS)
Wu, Xue; Chang, Zhidong; Liu, Yao; Choe, Chol Ryong
2017-12-01
Solvent extraction is widely used in the chemical industry. Due to their amphiphilic character, a large amount of extractant remains in the water phase, which causes not only loss of reagent but also secondary contamination of the water phase. Novel fluorinated extractants with ultra-low solubility in water are regarded as an effective choice to reduce extractant loss in the aqueous phase; however, a trace amount of extractant still remains in the water. Based on the high tensioactive aptitude of fluorinated solvents, flotation was applied to separate the fluorinated extractant remaining in the raffinate. According to surface tension measurements, the surface tension of the solution decreased markedly with the addition of the fluorinated extractant tris(2,2,3,3,4,4,5,5-octafluoropentyl) phosphate (FTAP). After flotation, as much as 70% of the FTAP dissolved in water could be removed, which proved the feasibility of this key idea. The effects of operation time, gas velocity, pH and salinity of the bulk solution on flotation performance were discussed. The optimum operating parameters were determined as a gas velocity of 12 ml/min, an operating time of 15 min, a pH of 8.7, and a NaCl volume concentration of 1.5%. Moreover, the adsorption of FTAP on the bubble surface was simulated with an ANSYS VOF model using the SIMPLE algorithm. The dynamic mechanism of flotation was also theoretically investigated, which can be considered a supplement to the experimental results.
MorphoGraphX: A platform for quantifying morphogenesis in 4D.
Barbier de Reuille, Pierre; Routier-Kierzkowska, Anne-Lise; Kierzkowski, Daniel; Bassel, George W; Schüpbach, Thierry; Tauriello, Gerardo; Bajpai, Namrata; Strauss, Sören; Weber, Alain; Kiss, Annamaria; Burian, Agata; Hofhuis, Hugo; Sapala, Aleksandra; Lipowczan, Marcin; Heimlicher, Maria B; Robinson, Sarah; Bayer, Emmanuelle M; Basler, Konrad; Koumoutsakos, Petros; Roeder, Adrienne H K; Aegerter-Wilmsen, Tinri; Nakayama, Naomi; Tsiantis, Miltos; Hay, Angela; Kwiatkowska, Dorota; Xenarios, Ioannis; Kuhlemeier, Cris; Smith, Richard S
2015-05-06
Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX ( www.MorphoGraphX.org), a software that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries, or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth.
NASA Astrophysics Data System (ADS)
Noh, M. J.; Howat, I. M.
2017-12-01
Glaciers and ice sheets are changing rapidly. Digital Elevation Models (DEMs) and Velocity Maps (VMs) obtained from repeat satellite imagery provide critical measurements of changes in glacier dynamics and mass balance over large, remote areas. DEMs created from stereopairs obtained during the same satellite pass through sensor re-pointing (i.e. "in-track stereo") have been most commonly used. In-track stereo has the advantage of minimizing the time separation and, thus, surface motion between image acquisitions, so that the ice surface can be assumed motionless when collocating pixels between image pairs. Since the DEM extraction process assumes that all motion between collocated pixels is due to parallax or sensor model error, significant ice motion results in DEM quality loss or failure. In-track stereo, however, puts a greater demand on satellite tasking resources and, therefore, is much less abundant than single-scan imagery. Thus, if ice surface motion can be mitigated, the ability to extract surface elevation measurements from pairs of repeat single-scan "cross-track" imagery would greatly increase the extent and temporal resolution of ice surface change measurements. Additionally, the ice motion measured by the DEM extraction process would itself provide a useful velocity measurement. We develop a novel algorithm for generating high-quality DEMs and VMs from cross-track image pairs without any prior information, using the Surface Extraction through TIN-based Search-space Minimization (SETSM) algorithm and its sensor model bias correction capabilities. Using a test suite of repeat, single-scan imagery from WorldView and QuickBird sensors collected over fast-moving outlet glaciers, we develop a method by which RPC biases between images are first calculated and removed over ice-free surfaces. Subpixel displacements over the ice are then constrained and used to correct the parallax estimate. Initial tests yield DEM results with the same quality as in-track stereo for cases where snowfall has not occurred between the two images and when the images have similar ground sample distances. The resulting velocity map also closely matches independent measurements.
Image fusion for visualization of hepatic vasculature and tumors
NASA Astrophysics Data System (ADS)
Chou, Jin-Shin; Chen, Shiuh-Yung J.; Sudakoff, Gary S.; Hoffmann, Kenneth R.; Chen, Chin-Tu; Dachman, Abraham H.
1995-05-01
We have developed segmentation and simultaneous display techniques to facilitate the visualization of the three-dimensional spatial relationships between organ structures and organ vasculature. We concentrate on visualization of the liver based on spiral computed tomography images. Surface-based 3-D rendering and maximum intensity projection (MIP) algorithms are used for data visualization. To extract the liver from the series of images accurately and efficiently, we have developed a user-friendly interactive program with deformable-model segmentation. Surface rendering techniques are used to visualize the extracted structures; adjacent contours are aligned and fitted with a Bezier surface to yield a smooth surface. Visualization of the vascular structures, the portal and hepatic veins, is achieved by applying a MIP technique to the extracted liver volume. To integrate the extracted structures, they are surface-rendered, their MIP images are aligned, and a color table is designed for simultaneous display of the combined liver/tumor and vasculature images. By combining the 3-D surface rendering and MIP techniques, portal veins, hepatic veins, and hepatic tumors can be inspected simultaneously, and their spatial relationships can be more easily perceived. The proposed technique will be useful for visualization of both hepatic neoplasms and vasculature in surgical planning for tumor resection or living-donor liver transplantation.
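The MIP step itself reduces to a per-ray maximum; applied along one axis of the segmented liver volume, as in this minimal numpy sketch, it highlights the contrast-filled vessels.

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection of a 3D CT sub-volume along one
    axis; masking the volume to the segmented liver first restricts
    the projection to hepatic vasculature."""
    return volume.max(axis=axis)
```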
Krůček, Martin; Vrška, Tomáš; Král, Kamil
2017-01-01
Terrestrial laser scanning is a powerful technology for capturing the three-dimensional structure of forests with a high level of detail and accuracy. Over the last decade, many algorithms have been developed to extract various tree parameters from terrestrial laser scanning data. Here we present 3D Forest, an open-source non-platform-specific software application with an easy-to-use graphical user interface with the compilation of algorithms focused on the forest environment and extraction of tree parameters. The current version (0.42) extracts important parameters of forest structure from the terrestrial laser scanning data, such as stem positions (X, Y, Z), tree heights, diameters at breast height (DBH), as well as more advanced parameters such as tree planar projections, stem profiles or detailed crown parameters including convex and concave crown surface and volume. Moreover, 3D Forest provides quantitative measures of between-crown interactions and their real arrangement in 3D space. 3D Forest also includes an original algorithm of automatic tree segmentation and crown segmentation. Comparison with field data measurements showed no significant difference in measuring DBH or tree height using 3D Forest, although for DBH only the Randomized Hough Transform algorithm proved to be sufficiently resistant to noise and provided results comparable to traditional field measurements. PMID:28472167
Determining Surface Roughness in Urban Areas Using Lidar Data
NASA Technical Reports Server (NTRS)
Holland, Donald
2009-01-01
An automated procedure has been developed to derive relevant factors that can increase the ability to produce objective, repeatable methods for determining aerodynamic surface roughness. Aerodynamic surface roughness is used in many applications, such as atmospheric dispersion models and wind-damage models. This technique used existing lidar data, originally collected for terrain analysis, and demonstrated that surface roughness values can be automatically derived and then utilized in disaster-management and homeland-security models. The developed lidar-processing algorithm effectively distinguishes buildings from trees and characterizes their size, density, orientation, and spacing; all of these variables are parameters required to calculate the estimated surface roughness for a specified area. By using this algorithm, aerodynamic surface roughness values in urban areas can be extracted automatically. The user can also adjust the algorithm for local conditions and lidar characteristics, such as summer/winter vegetation and dense/sparse lidar point spacing. Additionally, the user can survey variations in surface roughness that occur due to wind direction; for example, during a hurricane, when wind direction can change dramatically, this variable can be extremely significant. In its current state, the algorithm calculates an estimated surface roughness for a square-kilometer area; techniques using the lidar data to calculate the surface roughness for a point, whereby only roughness elements upstream from the point of interest are used and the wind direction is a vital concern, are being investigated. This technological advancement will improve the reliability and accuracy of models that use and incorporate surface roughness.
NASA Astrophysics Data System (ADS)
Han, Xu; Xie, Guangping; Laflen, Brandon; Jia, Ming; Song, Guiju; Harding, Kevin G.
2015-05-01
In the real application environment of field engineering, a large variety of metrology tools are required by the technician to inspect part profile features. However, some of these tools are burdensome and only address a sole application or measurement. In other cases, standard tools lack the capability of accessing irregular profile features. Customers of field engineering want the next generation metrology devices to have the ability to replace the many current tools with one single device. This paper will describe a method based on the ring optical gage concept to the measurement of numerous kinds of profile features useful for the field technician. The ring optical system is composed of a collimated laser, a conical mirror and a CCD camera. To be useful for a wide range of applications, the ring optical system requires profile feature extraction algorithms and data manipulation directed toward real world applications in field operation. The paper will discuss such practical applications as measuring the non-ideal round hole with both off-centered and oblique axes. The algorithms needed to analyze other features such as measuring the width of gaps, radius of transition fillets, fall of step surfaces, and surface parallelism will also be discussed in this paper. With the assistance of image processing and geometric algorithms, these features can be extracted with a reasonable performance. Tailoring the feature extraction analysis to this specific gage offers the potential for a wider application base beyond simple inner diameter measurements. The paper will present experimental results that are compared with standard gages to prove the performance and feasibility of the analysis in real world field engineering. Potential accuracy improvement methods, a new dual ring design and future work will be discussed at the end of this paper.
NASA Astrophysics Data System (ADS)
D'Amore, M.; Le Scaon, R.; Helbert, J.; Maturilli, A.
2017-12-01
Machine learning has achieved unprecedented results in high-dimensional data processing tasks, with wide applications in various fields. Given the growing number of complex nonlinear systems that have to be investigated in science and the sheer size of the data now available, ML offers the unique ability to extract knowledge regardless of the specific application field. Examples are image segmentation; supervised, unsupervised, and semi-supervised classification; feature extraction; and data dimensionality analysis/reduction. The MASCS instrument mapped the surface of Mercury in the 400-1145 nm wavelength range during orbital observations by the MESSENGER spacecraft. We have conducted unsupervised k-means clustering to identify and characterize spectral units from MASCS observations. The results display a dichotomy: polar and equatorial units, possibly linked to compositional differences or weathering due to irradiation. To explore possible relations between composition and spectral behavior, we have compared the spectral provinces with elemental abundance maps derived from MESSENGER's X-Ray Spectrometer (XRS). For the Vesta application on DAWN Visible and InfraRed spectrometer (VIR) data, we explored several machine-learning techniques: an image segmentation method, a stream algorithm, and hierarchical clustering. The algorithm successfully separates the olivine outcrops around two craters on Vesta's surface [1]. New maps summarizing the spectral and chemical signature of the surface could be produced automatically. We conclude that, instead of hand-digging in the data, scientists could choose a subset of algorithms with well-known features (i.e., efficacy on the particular problem, speed, accuracy) and focus their effort on understanding what the important characteristics of the groups found in the data mean. [1] E. Ammannito et al. "Olivine in an unexpected location on Vesta's surface". In: Nature 504.7478 (2013), pp. 122-125.
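A minimal sketch of the clustering step described above (not the authors' pipeline), assuming scikit-learn; the file name and the choice of two clusters are illustrative:

```python
# Cluster spectra (rows = pixels, columns = wavelength channels) into
# spectral units with k-means; mapping labels back to pixel coordinates
# would reveal spatial provinces such as the reported polar/equatorial
# dichotomy. The input file is a hypothetical placeholder.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

spectra = np.load("mascs_spectra.npy")        # shape (n_pixels, n_channels)
X = StandardScaler().fit_transform(spectra)   # normalize each channel
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```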
SST algorithm based on radiative transfer model
NASA Astrophysics Data System (ADS)
Mat Jafri, Mohd Z.; Abdullah, Khiruddin; Bahari, Alui
2001-03-01
An algorithm for measuring sea surface temperature (SST) without recourse to in-situ data for calibration is proposed. The algorithm, which is based on the infrared signal recorded by the satellite sensor, is composed of three terms, namely, the surface emission, the up-welling radiance emitted by the atmosphere, and the down-welling atmospheric radiance reflected at the sea surface. This algorithm requires the transmittance values of the thermal bands. The angular dependence of the transmittance function was modeled using the MODTRAN code. Radiosonde data were used with the MODTRAN code. The expression of transmittance as a function of zenith view angle was obtained for each channel through regression of the MODTRAN output. The Ocean Color and Temperature Scanner (OCTS) data from the Advanced Earth Observing Satellite (ADEOS) were used in this study. The study area covers the seas northwest of Peninsular Malaysia. The in-situ data (ship-collected SST values) were used for verification of the results. Cloud-contaminated pixels were masked out using the standard procedures that have been applied to Advanced Very High Resolution Radiometer (AVHRR) data. The cloud-free pixels at the in-situ sites were extracted for analysis. The OCTS data were then substituted into the proposed algorithm. The appropriate transmittance value for each channel was then assigned in the calculation. Accuracy was assessed by observing the correlation and the rms deviations between the computed and the ship-collected values. The results were also compared with the results from the OCTS multi-channel sea surface temperature algorithm. The comparison produced high correlation values. The performance of this algorithm is comparable with the established OCTS algorithm. The effect of emissivity on the retrieved SST values was also investigated. An SST map was generated and contoured manually.
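A minimal sketch of the per-channel regression step, assuming the common parameterization τ(θ) = τ0^(sec θ), i.e. ln τ linear in sec θ (the paper does not give its regression form); the numbers are illustrative stand-ins for MODTRAN output:

```python
# Fit band transmittance vs. zenith view angle by least squares on
# ln(tau) = a * sec(theta) + b, then expose tau(theta) for the retrieval.
import numpy as np

theta = np.radians([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])  # view angles
tau = np.array([0.82, 0.81, 0.79, 0.76, 0.71, 0.63])     # illustrative values

sec = 1.0 / np.cos(theta)
a, b = np.polyfit(sec, np.log(tau), 1)

def transmittance(view_angle_rad):
    return np.exp(b) * np.exp(a) ** (1.0 / np.cos(view_angle_rad))
```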
NASA Astrophysics Data System (ADS)
Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong
2018-03-01
This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest-neighbour search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which directly reflects the trend of the curvature of the point cloud data in the damage region. The extracted damage region is divided into triangular prism elements by a method of triangulation. The volume and mass of a single element are calculated by the method of geometric segmentation. Finally, the total volume and mass of the damage region are obtained by the principle of superposition. The proposed method is applied to several typical damage cases and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which is useful for evaluating the damage in a further research stage.
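A minimal sketch of the superposition step: each triangle of the meshed damage region, with the depths of its vertices below the nominal rail surface, defines a triangular prism of volume (triangle area) × (mean vertex depth). The array layout and the steel density are assumptions for illustration:

```python
import numpy as np

def damage_volume_and_mass(tri_xy, depths, density_kg_m3=7850.0):
    """tri_xy: (n_tri, 3, 2) triangle vertices in the surface plane (m);
    depths: (n_tri, 3) vertex depths below the nominal surface (m)."""
    a = tri_xy[:, 1] - tri_xy[:, 0]
    b = tri_xy[:, 2] - tri_xy[:, 0]
    area = 0.5 * np.abs(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])  # triangle areas
    volume = np.sum(area * depths.mean(axis=1))  # superpose prism volumes
    return volume, volume * density_kg_m3        # mass assumes steel density
```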
Model-based conifer crown surface reconstruction from multi-ocular high-resolution aerial imagery
NASA Astrophysics Data System (ADS)
Sheng, Yongwei
2000-12-01
Tree crown parameters such as width, height, shape and crown closure are desirable in forestry and ecological studies, but they are time-consuming and labor-intensive to measure in the field. The stereoscopic capability of high-resolution aerial imagery provides a way to reconstruct crown surfaces. Existing photogrammetric algorithms designed to map terrain surfaces, however, cannot adequately extract crown surfaces, especially for steep conifer crowns. Considering crown surface reconstruction in the broader context of tree characterization from aerial images, we develop a rigorous perspective tree image formation model to bridge image-based tree extraction and crown surface reconstruction, and an integrated model-based approach to conifer crown surface reconstruction. Based on the fact that most conifer crowns take a solid geometric form, conifer crowns are modeled as a generalized hemi-ellipsoid. Both automatic and semi-automatic approaches to optimal tree model development from multi-ocular images are investigated. The semi-automatic 3D tree interpreter developed in this thesis is able to efficiently extract reliable tree parameters and tree models in complicated tree stands. This thesis starts with a sophisticated stereo matching algorithm, and incorporates tree models to guide stereo matching. The following critical problems are addressed in the model-based surface reconstruction process: (1) surface model composition from tree models; (2) the occlusion problem in disparity prediction from tree models; (3) integrating the predicted disparities into image matching; (4) reducing the tree-model edge effect on the disparity map; (5) the occlusion problem in orthophoto production; and (6) the foreshortening problem in image matching, which is very serious for conifer crown surfaces. Solutions to the above problems are necessary for successful crown surface reconstruction. The model-based approach was applied to recover the canopy surface of a dense redwood stand using tri-ocular high-resolution images scanned from 1:2,400 aerial photographs. The results demonstrate the approach's ability to reconstruct complicated stands. The model-based approach proposed in this thesis is potentially applicable to other surface-recovery problems with a priori knowledge about the objects.
NASA Astrophysics Data System (ADS)
Parise, M.
2018-01-01
A highly accurate analytical solution is derived for the electromagnetic problem of a short vertical wire antenna located on a stratified ground. The derivation consists of three steps. First, the integration path of the integrals describing the fields of the dipole is deformed and wrapped around the pole singularities and the two vertical branch cuts of the integrands located in the upper half of the complex plane. This makes it possible to decompose the radiated field into its three contributions, namely the above-surface ground wave, the lateral wave, and the trapped surface waves. Next, the square-root terms responsible for the branch cuts are extracted from the integrands of the branch-cut integrals. Finally, the extracted square roots are replaced with their rational representations according to Newton's square-root algorithm, and the residue theorem is applied to give explicit expressions, in series form, for the fields. The rigorous integration procedure and the convergence of the square-root algorithm ensure that the obtained formulas converge to the exact solution. Numerical simulations are performed to show the validity and robustness of the developed formulation, as well as its advantages in terms of time cost over standard numerical integration procedures.
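For illustration, a minimal sketch (not the paper's derivation) of how Newton's square-root iteration x ← (x + z/x)/2 generates the rational representations mentioned above; every iterate is a ratio of polynomials in z:

```python
# Each Newton step applied to a rational starting guess yields a rational
# function of z approximating sqrt(z); such rational substitutes make the
# branch-cut integrands amenable to the residue theorem.
import sympy as sp

z = sp.symbols('z')
x = sp.Integer(1)                        # rational starting guess
for _ in range(3):
    x = sp.cancel((x + z / x) / 2)       # still a rational function of z
print(x)                                 # ratio of polynomials ~ sqrt(z)
```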
Li, Xumeng; Wang, Xiaohui; Wei, Hailin; Zhu, Xinguang; Peng, Yulin; Li, Ming; Li, Tao; Huang, Huang
2017-01-01
This study developed a technique system for the measurement, reconstruction, and trait extraction of rice canopy architectures, which have challenged functional–structural plant modeling for decades and have become the foundation of the design of ideo-plant architectures. The system uses the location-separation-measurement method (LSMM) for the collection of data on the canopy architecture and the analytic geometry method for the reconstruction and visualization of the three-dimensional (3D) digital architecture of the rice plant. It also uses the virtual clipping method for extracting the key traits of the canopy architecture such as the leaf area, inclination, and azimuth distribution in spatial coordinates. To establish the technique system, we developed (i) simple tools to measure the spatial position of the stem axis and azimuth of the leaf midrib and to capture images of tillers and leaves; (ii) computer software programs for extracting data on stem diameter, leaf nodes, and leaf midrib curves from the tiller images and data on leaf length, width, and shape from the leaf images; (iii) a database of digital architectures that stores the measured data and facilitates the reconstruction of the 3D visual architecture and the extraction of architectural traits; and (iv) computation algorithms for virtual clipping to stratify the rice canopy, to extend the stratified surface from the horizontal plane to a general curved surface (including a cylindrical surface), and to implement the clipping in silico. Each component of the technique system was quantitatively validated and visually compared to images, and the sensitivity of the virtual clipping algorithms was analyzed. This technique is inexpensive and accurate and provides high throughput for the measurement, reconstruction, and trait extraction of rice canopy architectures. The technique provides a more practical method of data collection to serve functional–structural plant models of rice and for the optimization of rice canopy types. Moreover, the technique can be easily adapted to other cereal crops such as wheat, which has numerous stems and leaves sheltering each other. PMID:28558045
Geometric modeling of subcellular structures, organelles, and multiprotein complexes
Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei
2013-01-01
Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multi-protein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian triangle meshes are employed for the surface representation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and for curvature estimation. Methods for volumetric meshing are also presented. Because technological development in computer science and mathematics has led to multiple choices at each stage of the geometric modeling, we discuss the rationales in the design and selection of the various algorithms. Analytical models are designed to test the computational accuracy and convergence of the proposed algorithms. Finally, we select a set of six cryo-electron microscopy datasets representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes. PMID:23212797
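A minimal sketch of two of the mesh measurements discussed above (not the paper's code): surface area as a sum of triangle areas, and surface-enclosed volume via the divergence theorem, for a consistently oriented triangle mesh:

```python
import numpy as np

def area_and_volume(verts, faces):
    """verts: (n_v, 3) vertex coordinates; faces: (n_f, 3) vertex indices
    with consistent outward orientation."""
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    # Signed tetrahedron volumes against the origin, summed (divergence theorem).
    volume = abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0
    return area, volume
```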
NASA Astrophysics Data System (ADS)
Pinales, J. C.; Graber, H. C.; Hargrove, J. T.; Caruso, M. J.
2016-02-01
Previous studies have demonstrated the ability to detect and classify marine hydrocarbon films with spaceborne synthetic aperture radar (SAR) imagery. The damping effect of hydrocarbon discharges on small capillary-gravity surface waves renders the ocean surface "radar dark" compared with standard wind-roughened ocean surfaces. Given the scope and impact of events like the Deepwater Horizon oil spill, the need for improved, automated and expedient monitoring of hydrocarbon-related marine anomalies has become a pressing and complex issue for governments and the extraction industry. The research presented here describes the development, training, and utilization of an algorithm that detects marine oil spills in an automated, semi-supervised manner, utilizing X-, C-, or L-band SAR data as the primary input. Ancillary datasets include related radar-borne variables (incidence angle, etc.), environmental data (wind speed, etc.) and textural descriptors. Shapefiles produced by an experienced human analyst served as targets (validation) during the training portion of the investigation. Training and testing datasets were chosen for development and assessment of algorithm effectiveness as well as optimal conditions for oil detection in SAR data. The algorithm detects oil spills by following a three-step methodology: object detection, feature extraction, and classification. Previous oil spill detection and classification methodologies such as machine learning algorithms, artificial neural networks (ANN), and multivariate classification methods like partial least squares-discriminant analysis (PLS-DA) are evaluated and compared. Statistical, transform, and model-based image texture techniques, commonly used for object mapping directly or as inputs for more complex methodologies, are explored to determine optimal textures for an oil spill detection system. The influence of the ancillary variables is explored, with a particular focus on the role of strong vs. weak wind forcing.
A comparison of waveform processing algorithms for single-wavelength LiDAR bathymetry
NASA Astrophysics Data System (ADS)
Wang, Chisheng; Li, Qingquan; Liu, Yanxiong; Wu, Guofeng; Liu, Peng; Ding, Xiaoli
2015-03-01
Due to their low cost and light weight, single-wavelength LiDAR bathymetric systems are an ideal option for shallow-water (<12 m) bathymetry. However, one disadvantage of such systems is the lack of near-infrared and Raman channels, which results in difficulties in extracting the water surface. The choice of a suitable waveform processing method is therefore extremely important to guarantee the accuracy of the bathymetric retrieval. In this paper, we test six algorithms for single-wavelength bathymetric waveform processing, i.e. peak detection (PD), the average square difference function (ASDF), Gaussian decomposition (GD), quadrilateral fitting (QF), Richardson-Lucy deconvolution (RLD), and Wiener filter deconvolution (WD). To date, most of these algorithms have only been applied to topographic LiDAR waveforms captured over land. A simulated dataset and an Optech Aquarius dataset were used to assess the algorithms, with the focus being on their capability of extracting the depth and the bottom response. The influences of a number of water and equipment parameters were also investigated by the use of a Monte Carlo method. The results showed that the RLD method had a superior performance in terms of a high detection rate and low errors in the retrieved depth and magnitude. The attenuation coefficient, noise level, water depth, and bottom reflectance had significant influences on the measurement error of the retrieved depth, while the effects of scan angle and water surface roughness were less obvious.
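A minimal 1-D sketch of the Richardson-Lucy scheme that performed best above, treating the recorded waveform as the system response convolved with the target cross-section; the implementation below is generic, not the paper's:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """observed: recorded waveform; psf: transmitted/system pulse shape."""
    psf = psf / psf.sum()
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_rev = psf[::-1]              # correlation = convolution with flipped psf
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_rev, mode='same')
    return estimate                  # peaks mark the surface and bottom returns
```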
NASA Astrophysics Data System (ADS)
Yang, M.; Wang, J.; Zhang, Q.
2017-07-01
Vegetation coverage is one of the most important indicators of ecological environment change, and is also an effective index for the assessment of land degradation and desertification. Dry-hot valley regions have sparse surface vegetation, and the spectral information about the vegetation in such regions usually has a weak representation in remote sensing, so there are considerable limitations to applying the commonly used vegetation-index method to calculate vegetation coverage in dry-hot valley regions. Therefore, in this paper, the Alternating Angle Minimum (AAM) algorithm, a deterministic model, is adopted for endmember selection and pixel unmixing of a MODIS image in order to extract the vegetation coverage, and an accuracy test is carried out using a Landsat TM image from the same period. The results show that, in dry-hot valley regions with sparse vegetation, the AAM model has a high unmixing accuracy, and the extracted vegetation coverage is close to the actual situation, so it is promising to apply the AAM model to the extraction of vegetation coverage in dry-hot valley regions.
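The AAM algorithm itself is not reproduced here, but a minimal sketch of generic linear pixel unmixing conveys the idea: with endmember spectra in hand, per-pixel abundances follow from non-negative least squares, and the vegetation fraction is the vegetation endmember's abundance:

```python
import numpy as np
from scipy.optimize import nnls

def vegetation_fraction(pixel, endmembers, veg_index=0):
    """pixel: (n_bands,) reflectances; endmembers: (n_bands, n_endmembers)."""
    abundances, _ = nnls(endmembers, pixel)
    abundances /= max(abundances.sum(), 1e-12)  # approximate sum-to-one
    return abundances[veg_index]
```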
NASA Technical Reports Server (NTRS)
Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.
2017-01-01
Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets spectral responses and of changes in the surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that will be integrated in an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e. the craters) and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.
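A minimal sketch of a modified Hausdorff distance of the kind the fitness function builds on (directed distances averaged rather than maximized, then the larger direction kept); the paper's exact modification may differ:

```python
import numpy as np

def modified_hausdorff(A, B):
    """A: (n, d), B: (m, d) feature points, e.g. detected crater centers."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```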
Artificial bee colony algorithm for single-trial electroencephalogram analysis.
Hsu, Wei-Yen; Hu, Ya-Ping
2015-04-01
In this study, we propose an analysis system combined with feature selection to further improve the classification accuracy of single-trial electroencephalogram (EEG) data. Acquiring event-related brain potential data from the sensorimotor cortices, the system comprises artifact and background noise removal, feature extraction, feature selection, and feature classification. First, the artifacts and background noise are removed automatically by means of independent component analysis and a surface Laplacian filter, respectively. Several potential features, such as band power, autoregressive model coefficients, and coherence and phase-locking value, are then extracted for subsequent classification. Next, the artificial bee colony (ABC) algorithm is used to select features from the aforementioned feature combination. Finally, the selected subfeatures are classified by a support vector machine. In comparisons with and without artifact removal and feature selection, and against a genetic algorithm, on single-trial EEG data from six subjects, the results indicate that the proposed system is promising and suitable for brain-computer interface applications. © EEG and Clinical Neuroscience Society (ECNS) 2014.
Determination of the Earth's Plasmapause Location from the CE-3 EUVC Images
NASA Technical Reports Server (NTRS)
He, Fei; Zhang, Xiao-Xin; Chen, Bo; Fok, Mei-Ching; Nakano, Shinya
2016-01-01
The Moon-based Extreme Ultraviolet Camera (EUVC) aboard China's Chang'e-3 (CE-3) mission has successfully imaged the entire Earth's plasmasphere for the first time from side views on the lunar surface. An EUVC image from 21 April 2014 is used in this study to demonstrate the characteristics and configurations of Moon-based EUV imaging and to illustrate the algorithm for determining the plasmapause locations on the magnetic equator. The plasmapause locations determined from all the available EUVC images with the Minimum-L Algorithm are quantitatively compared with those extracted from in-situ observations (Defense Meteorological Satellite Program, Time History of Events and Macroscale Interactions during Substorms, and Radiation Belt Storm Probes). Excellent agreement between the plasmapauses determined from EUVC and those extracted from the other satellites indicates the reliability of the Moon-based EUVC images as well as of the determination algorithm. This preliminary study provides an important basis for future investigation of the dynamics of the plasmasphere with Moon-based EUVC imaging.
New DTM Extraction Approach from Airborne Images Derived Dsm
NASA Astrophysics Data System (ADS)
Mousa, Y. A.; Helmholz, P.; Belton, D.
2017-05-01
In this work, a new filtering approach is proposed for fully automatic Digital Terrain Model (DTM) extraction from Digital Surface Models (DSMs) derived from very high resolution airborne images. Our approach is an enhancement of the existing Multi-directional and Slope Dependent (MSD) DTM extraction algorithm, proposing parameters that are more reliable for the selection of ground pixels and for the pixelwise classification. To achieve this, four main steps are implemented. Firstly, 8 well-distributed scanlines are used to search for minima as ground points within a pre-defined filtering window size. These selected ground points are stored with their positions on a 2D surface to create a network of ground points. Then, an initial DTM is created using an interpolation method to fill the gaps in the 2D surface. Afterwards, a pixel-to-pixel comparison between the initial DTM and the original DSM is performed, utilising pixelwise classification of ground and non-ground pixels by applying a vertical height threshold. Finally, the pixels classified as non-ground are removed and the remaining holes are filled. The approach is evaluated using the Vaihingen benchmark dataset provided by the ISPRS working group III/4. The evaluation includes the comparison of our approach, denoted as the Network of Ground Points (NGPs) algorithm, with the DTM created based on MSD as well as with a reference DTM generated from LiDAR data. The results show that our proposed approach outperforms the MSD approach.
Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner
NASA Technical Reports Server (NTRS)
Tanis, Fred J.
1984-01-01
A series of experiments have been conducted in the Great Lakes designed to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm developed was designed to extract needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.
Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping.
Jaakkola, Anttoni; Hyyppä, Juha; Hyyppä, Hannu; Kukko, Antero
2008-09-01
Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the geometry of the scanning and the point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying road marking and kerbstone points and for modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.
An algorithm to extract more accurate stream longitudinal profiles from unfilled DEMs
NASA Astrophysics Data System (ADS)
Byun, Jongmin; Seong, Yeong Bae
2015-08-01
Morphometric features observed from a stream longitudinal profile (SLP) reflect channel responses to lithological variation and to changes in uplift or climate; they therefore constitute essential indicators in studies of the dynamics between tectonics, climate, and surface processes. The widespread availability of digital elevation models (DEMs) and their processing enable semi-automatic extraction of SLPs as well as of additional stream profile parameters, thus reducing the time spent extracting them and simultaneously allowing regional-scale studies of SLPs. However, careful consideration is required to extract SLPs directly from a DEM, because the DEM must be altered by a depression-filling process to ensure the continuity of flows across it. Such alteration inevitably introduces distortions to the SLP, such as stair steps, bias in elevation values, and inaccurate stream paths. This paper proposes a new algorithm, called the maximum depth tracing algorithm (MDTA), to extract more accurate SLPs from depression-unfilled DEMs. The MDTA supposes that depressions in DEMs are not necessarily artifacts to be removed, and that elevation values within them are useful for representing the real landscape more accurately. To ensure the continuity of flows even across the unfilled DEM, the MDTA first determines the outlet of each depression and then reverses the flow directions of the cells on the line of maximum depth within each depression, beginning from the outlet and proceeding toward the sink. It also calculates flow accumulation without disruption across the unfilled DEM. Comparative analysis with the profiles extracted by the hydrologic functions implemented in ArcGIS™ was performed to illustrate the benefits of the MDTA. It shows that the MDTA provides more accurate stream paths in depression areas, and consequently reduces distortions of the SLPs derived from those paths, such as the exaggerated elevation values and negatively biased slopes commonly observed in SLPs built using ArcGIS™. The algorithm proposed here could therefore aid all studies requiring more reliable stream paths and SLPs from DEMs.
MorphoGraphX: A platform for quantifying morphogenesis in 4D
Barbier de Reuille, Pierre; Routier-Kierzkowska, Anne-Lise; Kierzkowski, Daniel; Bassel, George W; Schüpbach, Thierry; Tauriello, Gerardo; Bajpai, Namrata; Strauss, Sören; Weber, Alain; Kiss, Annamaria; Burian, Agata; Hofhuis, Hugo; Sapala, Aleksandra; Lipowczan, Marcin; Heimlicher, Maria B; Robinson, Sarah; Bayer, Emmanuelle M; Basler, Konrad; Koumoutsakos, Petros; Roeder, Adrienne HK; Aegerter-Wilmsen, Tinri; Nakayama, Naomi; Tsiantis, Miltos; Hay, Angela; Kwiatkowska, Dorota; Xenarios, Ioannis; Kuhlemeier, Cris; Smith, Richard S
2015-01-01
Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX (www.MorphoGraphX.org), software that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries, or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth. DOI: http://dx.doi.org/10.7554/eLife.05864.001 PMID:25946108
Surface registration technique for close-range mapping applications
NASA Astrophysics Data System (ADS)
Habib, Ayman F.; Cheng, Rita W. T.
2006-08-01
Close-range mapping applications such as cultural heritage restoration, virtual reality modeling for the entertainment industry, and anatomical feature recognition for medical activities require 3D data that is usually acquired by high-resolution close-range laser scanners. Since these datasets are typically captured from different viewpoints and/or at different times, accurate registration is a crucial procedure for 3D modeling of the mapped objects. Several registration techniques are available that work directly with the raw laser points or with features extracted from the point cloud. Some examples include the commonly known Iterative Closest Point (ICP) algorithm and a recently proposed technique based on matching spin-images. This research focuses on developing a surface matching algorithm based on the Modified Iterated Hough Transform (MIHT) and ICP to register 3D data. The proposed algorithm works directly with the raw 3D laser points and does not assume point-to-point correspondence between two laser scans. The algorithm can simultaneously establish correspondence between two surfaces and estimate the transformation parameters relating them. An experiment with two partially overlapping laser scans of a small object was performed with the proposed algorithm and shows successful registration. A high quality of fit between the two scans is achieved, and an improvement is found when compared to the results obtained using the spin-image technique. The results demonstrate the feasibility of the proposed algorithm for registering 3D laser scanning data in close-range mapping applications to help with the generation of complete 3D models.
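A minimal sketch of the ICP stage alone (the MIHT coupling is not reproduced here): closest-point correspondences via a k-d tree, then the best-fit rigid transform from an SVD, iterated:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iter=30):
    """source, target: (n, 3) point clouds; returns the aligned source."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(n_iter):
        _, idx = tree.query(src)               # closest-point correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t        # apply rigid transform
    return src
```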
NASA Astrophysics Data System (ADS)
Hatayama, Ken; Fujiwara, Hiroyuki
1998-05-01
This paper aims to present a new method of calculating surface waves in 3-D sedimentary basin models, based on the direct boundary element method (BEM) with vertical boundaries and normal modes, and to evaluate the excitation of the secondary surface waves observed prominently in basins. Many authors have so far developed numerical techniques to calculate the total 3-D wavefield. However, calculating the total wavefield does not match our purpose, because the secondary surface waves excited on the basin boundaries will be contaminated by other, undesirable waves. In this paper, we prove that, in principle, it is possible to extract the surface waves excited on part of the basin boundaries from the total 3-D wavefield with a formulation that uses the reflection and transmission operators defined in the space domain. In realizing this extraction in the BEM algorithm, we encounter the problem arising from the lateral and vertical truncations of boundary surfaces extending infinitely in the half-space. To compensate for the truncations, we first introduce an approximate algorithm using 2.5-D and 1-D wavefields for reference media, where a 2.5-D wavefield means a 3-D wavefield with a 2-D subsurface structure, and we then demonstrate the extraction. Finally, we calculate the secondary surface waves excited on the arc shape (horizontal section) of a vertical basin boundary subject to incident SH and SV plane waves propagating perpendicularly to the chord of the arc. As a result, we find that in the SH-incidence case the Love waves are predominantly excited, rather than the Rayleigh waves, and that in the SV-incidence case the Love waves as well as the Rayleigh waves are excited. This suggests that the Love waves are more detectable than the Rayleigh waves in the horizontal components of observed recordings.
Analysis and suppression of passive noise in surface microseismic data
NASA Astrophysics Data System (ADS)
Forghani-Arani, Farnoush
Surface microseismic surveys are gaining popularity in monitoring the hydraulic fracturing process. The effectiveness of these surveys, however, is strongly dependent on the signal-to-noise ratio of the acquired data. Cultural and industrial noise generated during hydraulic fracturing operations usually dominates the data, thereby decreasing the effectiveness of using these data to identify and locate microseismic events. Hence, noise suppression is a critical step in surface microseismic monitoring. In this thesis, I focus on two important aspects of using surface-recorded microseismic data: first, I take advantage of the unwanted surface noise to understand its characteristics and to extract information about the propagation medium from the noise; second, I propose effective techniques to suppress the surface noise while preserving the waveforms that contain information about the source of the microseisms. Automated event identification on passive seismic data using only a few receivers is challenging, especially when the record lengths span long durations of time. I introduce an automatic event identification algorithm that is designed specifically for detecting events in passive data acquired with a small number of receivers. I demonstrate that the conventional STA/LTA (Short-Term Average/Long-Term Average) algorithm is not sufficiently effective for event detection in the common case of low signal-to-noise ratio. With a cross-correlation based method as an extension of the STA/LTA algorithm, even low signal-to-noise events (that were not detectable with conventional STA/LTA) were revealed. Surface microseismic data contain surface waves (generated primarily by hydraulic fracturing activities) and body waves in the form of microseismic events. It is challenging to analyze the surface waves in the recorded data directly because of the randomness of their sources and their unknown source signatures. I use seismic interferometry to extract the surface-wave arrivals. Interferometry is a powerful tool to extract waves (including body waves and surface waves) that propagate from any receiver in the array (called a pseudo source) to the other receivers across the array. Since most of the noise sources in surface microseismic data lie on the surface, seismic interferometry yields pseudo-source gathers dominated by surface-wave energy. The dispersive characteristics of these surface waves are important properties that can be used to extract the information necessary for suppressing these waves. I demonstrate the application of interferometry to surface passive data recorded during the hydraulic fracturing operation of a tight gas reservoir and extract the dispersion properties of surface waves corresponding to a pseudo-shot gather. Comparison of the dispersion characteristics of the surface waves from the pseudo-shot gather with those of an active shot gather shows interesting similarities and differences. The dispersion character (e.g. velocity change with frequency) of the fundamental mode was observed to behave the same for both the active and passive data. However, for the higher-mode surface waves, the dispersion properties are extracted at different frequency ranges.
Conventional noise suppression techniques for passive data are mostly stacking-based: they rely on enhancing the amplitude of the signal by stacking the waveforms at the receivers, and they are unable to preserve the waveforms at the individual receivers necessary for estimating the microseismic source location and source mechanism. Here, I introduce a technique based on the tau-p transform that effectively identifies and separates microseismic events from surface-wave noise in the tau-p domain. This technique is superior to conventional stacking-based noise suppression techniques because it preserves the waveforms at individual receivers. Application of this methodology to microseismic events with isotropic and double-couple source mechanisms shows substantial improvement in the signal-to-noise ratio. Imaging of the processed field data also shows improved imaging of the hypocenter location of the microseismic source. In the case of a double-couple source mechanism, I suggest two approaches for unifying the polarities at the receivers: a cross-correlation approach and a semblance-based prediction approach. The semblance-based approach is more effective at unifying the polarities, especially for low signal-to-noise ratio data.
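For reference, a minimal sketch of the conventional STA/LTA trigger discussed in the event-detection part of this work (the cross-correlation extension is not reproduced); window lengths are illustrative:

```python
import numpy as np

def sta_lta(trace, n_sta=50, n_lta=500, threshold=3.0):
    """Return indices (offset by n_lta - 1 samples) where the STA/LTA
    ratio exceeds the threshold; trace is a 1-D array of samples."""
    energy = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta     # short-term averages
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta     # long-term averages
    ratio = sta[n_lta - n_sta:] / np.maximum(lta, 1e-12)
    return np.flatnonzero(ratio > threshold)
```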
NASA Astrophysics Data System (ADS)
Noh, M. J.; Howat, I. M.; Porter, C. C.; Willis, M. J.; Morin, P. J.
2016-12-01
The Arctic is undergoing rapid change associated with climate warming. Digital Elevation Models (DEMs) provide critical information for change measurement and infrastructure planning in this vulnerable region, yet the existing quality and coverage of DEMs in the Arctic are poor. Low-contrast and repeatedly textured surfaces, such as snow, glacial ice and mountain shadows, all common in the Arctic, challenge existing stereo-photogrammetric techniques. Submeter-resolution stereoscopic satellite imagery with high geometric and radiometric quality and wide spatial coverage is becoming increasingly accessible to the scientific community. To utilize this imagery for extracting DEMs at a large scale over glaciated and high-latitude regions, we developed the Surface Extraction from TIN-based Searchspace Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the satellite rational polynomial coefficients (RPCs). Using SETSM, we have generated a large number of DEMs (>100,000 scene pairs) from WorldView, GeoEye and QuickBird stereo images collected by DigitalGlobe Inc. and archived by the Polar Geospatial Center (PGC) at the University of Minnesota through an academic licensing program maintained by the US National Geospatial-Intelligence Agency (NGA). SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM program, whose objective is to generate high-resolution (2-8 m) topography for the entire Arctic landmass, including seamless DEM mosaics and repeat DEM strips for change detection. ArcticDEM is a collaboration among multiple US universities, governmental agencies and private companies, as well as international partners assisting with quality control and registration. ArcticDEM is being produced using the petascale Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois. In this paper, we introduce the SETSM algorithm and the processing system used for the ArcticDEM project, and provide notable examples of ArcticDEM products.
Daytime sea fog retrieval based on GOCI data: a case study over the Yellow Sea.
Yuan, Yibo; Qiu, Zhongfeng; Sun, Deyong; Wang, Shengqiang; Yue, Xiaoyuan
2016-01-25
In this paper, a new daytime sea fog detection algorithm has been developed using Geostationary Ocean Color Imager (GOCI) data. Based on spectral analysis, differences in spectral characteristics were found over different underlying surfaces, which include land, sea, middle/high-level clouds, stratus clouds and sea fog. Statistical analysis showed that the Rrc (412 nm) (Rayleigh-corrected reflectance) of sea fog pixels is approximately 0.1-0.6. Similarly, various band combinations could be used to separate the different surfaces. Therefore, three indices (SLDI, MCDI and BSI) were defined to discern land/sea, middle/high-level clouds and fog/stratus clouds, respectively, from which it was generally easy to extract fog pixels. The remote sensing algorithm was verified using coastal sounding data, which demonstrated that the algorithm has the ability to detect sea fog. The algorithm was then used to monitor an 8-hour sea fog event, and the results were consistent with observational data from buoys deployed near the Sheyang coast (121°E, 34°N). The goal of this study was to establish a daytime sea fog detection algorithm based on GOCI data, which shows promise for detecting fog separately from stratus.
NASA Astrophysics Data System (ADS)
Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves
2015-04-01
Computer-aided planning for orthodontic treatment requires knowledge of the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and the occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.
Huang, Jie; Shi, Tielin; Tang, Zirong; Zhu, Wei; Liao, Guanglan; Li, Xiaoping; Gong, Bo; Zhou, Tengyuan
2017-08-01
We propose a bi-objective optimization model for extracting optical fiber background from the measured surface-enhanced Raman spectroscopy (SERS) spectrum of the target sample in the application of fiber optic SERS. The model is built using curve fitting to resolve the SERS spectrum into several individual bands, and simultaneously matching some resolved bands with the measured background spectrum. The Pearson correlation coefficient is selected as the similarity index and its maximum value is pursued during the spectral matching process. An algorithm is proposed, programmed, and demonstrated successfully in extracting optical fiber background or fluorescence background from the measured SERS spectra of rhodamine 6G (R6G) and crystal violet (CV). The proposed model not only can be applied to remove optical fiber background or fluorescence background for SERS spectra, but also can be transferred to conventional Raman spectra recorded using fiber optic instrumentation.
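A minimal sketch of the matching criterion described above: the Pearson correlation between the bands currently attributed to the fiber background and the separately measured background spectrum, to be maximized inside the curve-fitting loop (the full bi-objective fit is not reproduced):

```python
import numpy as np

def gaussian_band(x, center, width, amplitude):
    return amplitude * np.exp(-0.5 * ((x - center) / width) ** 2)

def background_similarity(x, band_params, measured_background):
    """band_params: iterable of (center, width, amplitude) tuples for the
    resolved bands attributed to the optical fiber background."""
    candidate = sum(gaussian_band(x, *p) for p in band_params)
    return np.corrcoef(candidate, measured_background)[0, 1]  # Pearson r
```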
Enhanced magneto-optical imaging of internal stresses in the removed surface layer
NASA Astrophysics Data System (ADS)
Agalidi, Yuriy; Kozhukhar, Pavlo; Levyi, Sergii; Turbin, Dmitriy
2015-10-01
The paper describes a software method of reconstructing the state of a removed surface layer by visualising the internal stresses in the underlying layers of the sample. Such a problem typically needs to be solved as part of a forensic investigation that aims to reveal the original marking of a sample whose surface layer has been removed. For example, one may be interested in the serial numbers of weapons or vehicles that had the surface layer of metal removed from the number plate. Experimental results of studying gradient internal stress fields in a ferromagnetic sample using the non-destructive inspection (NDI) method of magneto-optical imaging (MOI) are presented. Numerical modelling results for the internal stresses enclosed in the surface marking region are analysed and compared with the experimental MOI results. An MOI correction algorithm intended for reconstructing the internal stress fields in the removed surface layer, by extracting the stresses retained in the underlying layers, is described. Limiting ratios between the parameters of a marking font are defined for the considered correction algorithm. Enhanced recognition properties for hidden stresses left by marking symbols are experimentally verified and confirmed.
Anatomy-Based Algorithms for Detecting Oral Cancer Using Reflectance and Fluorescence Spectroscopy
McGee, Sasha; Mardirossian, Vartan; Elackattu, Alphi; Mirkovic, Jelena; Pistey, Robert; Gallagher, George; Kabani, Sadru; Yu, Chung-Chieh; Wang, Zimmern; Badizadegan, Kamran; Grillone, Gregory; Feld, Michael S.
2010-01-01
Objectives We used reflectance and fluorescence spectroscopy to noninvasively and quantitatively distinguish benign from dysplastic/malignant oral lesions. We designed diagnostic algorithms to account for differences in the spectral properties among anatomic sites (gingiva, buccal mucosa, etc). Methods In vivo reflectance and fluorescence spectra were collected from 71 patients with oral lesions. The tissue was then biopsied and the specimen evaluated by histopathology. Quantitative parameters related to tissue morphology and biochemistry were extracted from the spectra. Diagnostic algorithms specific for combinations of sites with similar spectral properties were developed. Results Discrimination of benign from dysplastic/malignant lesions was most successful when algorithms were designed for individual sites (area under the receiver operator characteristic curve [ROC-AUC], 0.75 for the lateral surface of the tongue) and was least accurate when all sites were combined (ROC-AUC, 0.60). The combination of sites with similar spectral properties (floor of mouth and lateral surface of the tongue) yielded an ROC-AUC of 0.71. Conclusions Accurate spectroscopic detection of oral disease must account for spectral variations among anatomic sites. Anatomy-based algorithms for single sites or combinations of sites demonstrated good diagnostic performance in distinguishing benign lesions from dysplastic/malignant lesions and consistently performed better than algorithms developed for all sites combined. PMID:19999369
Predicting nucleic acid binding interfaces from structural models of proteins
Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael
2011-01-01
The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from the surfaces of protein structural models obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models better predict the real nucleic acid binding interfaces than patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable to predicting other functional surfaces of proteins with unknown structure. PMID:22086767
Holm, Sven; Russell, Greg; Nourrit, Vincent; McLoughlin, Niall
2017-01-01
A database of retinal fundus images, the DR HAGIS database, is presented. This database consists of 39 high-resolution color fundus images obtained from a diabetic retinopathy screening program in the UK. The NHS screening program uses service providers that employ different fundus and digital cameras, which results in a range of image sizes and resolutions. Furthermore, patients enrolled in such programs often display other comorbidities in addition to diabetes. Therefore, in an effort to replicate the normal range of images examined by grading experts during screening, the DR HAGIS database consists of images of varying sizes and resolutions and four comorbidity subgroups: collectively defined as the diabetic retinopathy, hypertension, age-related macular degeneration, and glaucoma image set (DR HAGIS). For each image, the vasculature has been manually segmented to provide a realistic set of images on which to test automatic vessel extraction algorithms. Modified versions of two previously published vessel extraction algorithms were applied to this database to provide some baseline measurements. A method based purely on the intensity of image pixels resulted in a mean segmentation accuracy of 95.83%, whereas an algorithm based on Gabor filters generated an accuracy of 95.71%.
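As a sketch of the second baseline's main ingredient, assuming scikit-image: a Gabor-filter vessel response obtained by filtering the green channel over several orientations and keeping the maximum; the parameters are illustrative, not those of the published method:

```python
import numpy as np
from skimage.filters import gabor

def vessel_response(green_channel, frequency=0.1, n_orientations=8):
    """green_channel: 2-D float image; returns an orientation-pooled
    response map that can be thresholded to segment vessels."""
    responses = []
    for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
        real, _ = gabor(green_channel, frequency=frequency, theta=theta)
        responses.append(real)
    return np.max(responses, axis=0)
```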
NASA Astrophysics Data System (ADS)
Xue, D.; Yu, X.; Jia, S.; Chen, F.; Li, X.
2018-04-01
In this paper, L-band ALOS PALSAR image sequences and airborne SAR data acquired from June 5, 2008 to September 8, 2015 are used. Building on SAR data preprocessing and core algorithms such as geocoding, registration, filtering, phase unwrapping and baseline estimation, the improved Goldstein filtering algorithm and the branch-cut path-tracking algorithm are used for interferogram filtering and phase unwrapping. The DEM and surface deformation information of the experimental area were extracted. Combining SAR-specific geometry and differential interferometry, and on the basis of composite analysis of multi-source images, a landslide disaster detection method that incorporates SAR image coherence is developed, which makes up for the deficiency of single-sensor SAR and optical remote sensing acquisition capability. Especially in areas of bad weather and abnormal climate, the speed of disaster emergency response and the accuracy of extraction are improved. It is found that the deformation in this area is strongly affected by faults: there is a tendency toward uplift in the southeastern plain and the western mountainous area, while the southwestern part of the mountain area tends to subside. This result provides a basis for decision-making in local disaster prevention and control.
NASA Technical Reports Server (NTRS)
Lawton, Pat
2004-01-01
The objective of this work was to support the design of improved IUE NEWSIPS high-dispersion extraction algorithms. The purpose of this work was to evaluate the use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction approaches, and to design algorithms for the evaluation of IUE high-dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
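A minimal sketch of a Voigt cross-dispersion profile of the kind adopted above, using SciPy's voigt_profile (a Gaussian of width sigma convolved with a Lorentzian of half-width gamma); in the actual extraction, sigma and gamma would come from the per-detector masks:

```python
import numpy as np
from scipy.special import voigt_profile

def extraction_weights(offsets_px, sigma, gamma):
    """offsets_px: pixel distances from the order center."""
    w = voigt_profile(np.asarray(offsets_px, dtype=float), sigma, gamma)
    return w / w.sum()   # normalized weights for profile-weighted extraction
```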
Improving KPCA Online Extraction by Orthonormalization in the Feature Space.
Souza Filho, Joao B O; Diniz, Paulo S R
2018-04-01
Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use on large data sets, defining kernel components using concise dictionaries automatically extracted from the data. This brief proposes two new online KPCA extraction algorithms that exploit orthogonalized versions of the GHA rule. In both cases, the orthogonalization of the kernel components is achieved by adding some low-complexity steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of the components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H; Zhen, X; Zhou, L
2014-06-15
Purpose: To propose and validate a deformable point matching scheme for surface deformation to facilitate accurate bladder dose summation for fractionated HDR cervical cancer treatment. Method: A deformable point matching scheme based on the thin plate spline robust point matching (TPS-RPM) algorithm is proposed for bladder surface registration. The surface of bladders segmented from fractional CT images is extracted and discretized with a triangular surface mesh. Deformation between the two bladder surfaces is obtained by matching the two meshes' vertices via the TPS-RPM algorithm, and the deformation vector field (DVF) characteristic of this deformation is estimated by B-spline approximation. Numerically, the algorithm is quantitatively compared with the Demons algorithm using five clinical cervical cancer cases by several metrics: vertex-to-vertex distance (VVD), Hausdorff distance (HD), percent error (PE), and conformity index (CI). Experimentally, the algorithm is validated on a balloon phantom with 12 surface fiducial markers. The balloon is inflated with different amounts of water, and the displacement of the fiducial markers is benchmarked as ground truth to study the accuracy of the TPS-RPM-calculated DVFs. Results: In the numerical evaluation, the mean VVD is 3.7 (±2.0) mm after Demons and 1.3 (±0.9) mm after TPS-RPM. The mean HD is 14.4 mm after Demons and 5.3 mm after TPS-RPM. The mean PE is 101.7% after Demons and decreases to 18.7% after TPS-RPM. The mean CI is 0.63 after Demons and increases to 0.90 after TPS-RPM. In the phantom study, the mean Euclidean distance of the fiducials is 7.4±3.0 mm and 4.2±1.8 mm after Demons and TPS-RPM, respectively. Conclusions: The bladder wall deformation is more accurate using the feature-based TPS-RPM algorithm than the intensity-based Demons algorithm, indicating that TPS-RPM has the potential for accurate bladder dose deformation and dose summation for multi-fractional cervical HDR brachytherapy. This work is supported in part by the National Natural Science Foundation of China (no. 30970866 and no. 81301940).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bae, JangPyo; Kim, Namkug, E-mail: namkugkim@gmail.com; Lee, Sang Min
2014-04-15
Purpose: To develop and validate a semiautomatic segmentation method for thoracic cavity volumetry and mediastinum fat quantification of patients with chronic obstructive pulmonary disease. Methods: The thoracic cavity region was separated by segmenting multiple organs, namely, the rib, lung, heart, and diaphragm. To encompass various lung disease-induced variations, the inner thoracic wall and diaphragm were modeled by using a three-dimensional surface-fitting method. To improve the accuracy of the diaphragm surface model, the heart and its surrounding tissue were segmented by a two-stage level set method using a shape prior. To assess the accuracy of the proposed algorithm, the algorithm results of 50 patients were compared to the manual segmentation results of two experts with more than 5 years of experience (these manual results were confirmed by an expert thoracic radiologist). The proposed method was also compared to three state-of-the-art segmentation methods. The metrics used to evaluate segmentation accuracy were volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), false negative ratio on VOR (FNRV), average symmetric absolute surface distance (ASASD), average symmetric squared surface distance (ASSSD), and maximum symmetric surface distance (MSSD). Results: In terms of thoracic cavity volumetry, the mean ± SD VOR, FPRV, and FNRV of the proposed method were (98.17 ± 0.84)%, (0.49 ± 0.23)%, and (1.34 ± 0.83)%, respectively. The ASASD, ASSSD, and MSSD for the thoracic wall were 0.28 ± 0.12, 1.28 ± 0.53, and 23.91 ± 7.64 mm, respectively. The ASASD, ASSSD, and MSSD for the diaphragm surface were 1.73 ± 0.91, 3.92 ± 1.68, and 27.80 ± 10.63 mm, respectively. The proposed method performed significantly better than the other three methods in terms of VOR, ASASD, and ASSSD. Conclusions: The proposed semiautomatic thoracic cavity segmentation method, which extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart), performed with high accuracy and may be useful for clinical purposes.
Surface EMG signals based motion intent recognition using multi-layer ELM
NASA Astrophysics Data System (ADS)
Wang, Jianhui; Qi, Lin; Wang, Xiao
2017-11-01
The upper-limb rehabilitation robot is regarded as a useful tool to help hemiplegic patients perform repetitive exercises. The surface electromyography (sEMG) signal contains motion information, as these electrical signals are generated by and related to nerve-muscle activity. These sEMG signals, representing a person's intended active motions, are introduced into the rehabilitation robot system to recognize upper-limb movements. Traditionally, feature extraction is an indispensable step in drawing significant information from the original signals, and it is a tedious task requiring rich, domain-specific experience. This paper employs a deep learning scheme to extract the internal features of the sEMG signals using an advanced Extreme Learning Machine based auto-encoder (ELM-AE). The information contained in the multi-layer structure of the ELM-AE is used as the high-level representation of the internal features of the sEMG signals, and a simple ELM then post-processes the extracted features, forming the complete multi-layer ELM (ML-ELM) algorithm. The method is then employed for sEMG-based motion intent recognition. The case studies show that the adopted deep learning algorithm (ELM-AE) yields higher classification accuracy than a Principal Component Analysis (PCA) scheme on five different types of upper-limb motions. This indicates the effectiveness and learning capability of the ML-ELM in such motion intent recognition applications.
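A minimal NumPy sketch of the ELM-AE feature extraction idea, assuming tanh activations, a ridge-regularized closed-form solution for the output weights, and illustrative layer sizes; none of these choices are taken from the paper.

```python
# Minimal ELM auto-encoder sketch: random input weights, closed-form
# (ridge-regularized least-squares) output weights that reconstruct the
# input, and the transpose of those weights used to embed the data.
import numpy as np

def elm_ae_features(X, n_hidden=64, ridge=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    W /= np.linalg.norm(W, axis=0)          # unit-norm random projections
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                  # random hidden representation
    # Output weights beta reconstruct X from H (ridge-regularized LS).
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ X)
    return X @ beta.T                       # learned feature embedding

X = np.random.default_rng(1).standard_normal((100, 32))  # stand-in sEMG windows
print(elm_ae_features(X).shape)             # (100, 64)
```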
NASA Astrophysics Data System (ADS)
Su, Yuanchao; Sun, Xu; Gao, Lianru; Li, Jun; Zhang, Bing
2016-10-01
Endmember extraction is a key step in hyperspectral unmixing, and a new framework is proposed here for it. The proposed approach is based on swarm intelligence (SI) algorithms, which are discretized because pixels in a hyperspectral image are naturally defined on a discrete space. Moreover, a "distance" factor is introduced into the objective function to limit the number of endmembers, which is generally small in real scenarios, whereas traditional SI algorithms tend to produce superabundant spectral signatures that belong to the same classes. Three endmember extraction methods are proposed based on the artificial bee colony, ant colony optimization, and particle swarm optimization algorithms. Experiments with both simulated and real hyperspectral images indicate that the proposed framework can improve the accuracy of endmember extraction.
Fringe pattern demodulation with a two-frame digital phase-locked loop algorithm.
Gdeisat, Munther A; Burton, David R; Lalor, Michael J
2002-09-10
A novel technique called a two-frame digital phase-locked loop for fringe pattern demodulation is presented. In this scheme, two fringe patterns with different spatial carrier frequencies are grabbed for an object. A digital phase-locked loop algorithm tracks and demodulates the phase difference between the two fringe patterns by employing the wrapped phase components of one fringe pattern as a reference to demodulate the second. The desired phase information can be extracted from the demodulated phase difference. We tested the algorithm experimentally using real fringe patterns. The technique is shown to be suitable for noncontact measurement of objects with rapid surface variations, and it outperforms the Fourier fringe analysis technique in this respect. Phase maps produced with this algorithm are, however, noisier than phase maps generated with the Fourier fringe analysis technique.
The Comparison Between NMF and ICA in Pigment Mixture Identification of Ancient Chinese Paintings
NASA Astrophysics Data System (ADS)
Liu, Y.; Lyu, S.; Hou, M.; Yin, Q.
2018-04-01
Since the colour of painted cultural relics observed by the naked eye or by hyperspectral cameras is usually a mixture of several pigments, mixed-pigment analysis is an important subject in the field of ancient painting conservation and restoration. This paper aims to find a more effective method to identify each pure pigment within a mixture on the surface of paintings. First, we adopted two blind source separation algorithms, independent component analysis (ICA) and non-negative matrix factorization (NMF), to extract the pure pigment components from the mixed spectra. We then matched each separated pure spectrum against the pigment spectral library built by our team to determine the pigment type. Three kinds of data, namely simulated data, mixed-pigment spectra measured in the laboratory, and the spectral data of an ancient painting, were chosen to evaluate the performance of the two algorithms, and their accuracies were compared. The experimental results show that the non-negative matrix factorization method is more suitable for endmember extraction in the field of ancient painting conservation and restoration.
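A hedged sketch of the comparison, using scikit-learn's NMF and FastICA as stand-ins for the paper's implementations; the endmember spectra and mixing weights below are random placeholders, not real pigment data.

```python
# Sketch: separate "pure pigment" spectra from synthetic mixtures with NMF
# and ICA. scikit-learn's NMF/FastICA stand in for the paper's algorithms.
import numpy as np
from sklearn.decomposition import NMF, FastICA

rng = np.random.default_rng(0)
endmembers = rng.random((3, 120))                 # 3 pure spectra, 120 bands
weights = rng.dirichlet(np.ones(3), size=500)     # abundances sum to one
mixtures = weights @ endmembers                   # 500 observed mixed spectra

nmf = NMF(n_components=3, init='nndsvd', max_iter=500).fit(mixtures)
pure_nmf = nmf.components_                        # non-negative spectra estimates

ica = FastICA(n_components=3, random_state=0).fit(mixtures)
pure_ica = ica.mixing_.T                          # may be negative/sign-flipped
# NMF preserves the physical non-negativity of reflectance, one reason it
# can be better suited to pigment unmixing, as the paper concludes.
```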
NASA Astrophysics Data System (ADS)
Xie, Jing; Xu, Changhang; Chen, Guoming; Huang, Weiping
2018-06-01
Inductive thermography is a kind of infrared thermography (IRT) that is effective in detecting front-surface cracks in metal plates. However, rear-surface cracks are usually missed during inductive thermography because their indications are weak. Here we propose a novel approach (AET: AE Thermography) to improve the visibility of rear-surface cracks during inductive thermography by employing the autoencoder (AE) algorithm, an important building block of deep learning architectures. We construct an integrated framework for processing the raw inspection data of inductive thermography using the AE algorithm. Through this framework, underlying features of rear-surface cracks are efficiently extracted and new, clearer images are constructed. Inductive thermography experiments were conducted on steel specimens to verify the efficacy of the proposed approach. We visually compare the raw thermograms, the empirical orthogonal functions (EOFs) of the principal component thermography (PCT) technique, and the results of AET. We further evaluated AET quantitatively by calculating crack contrast and the signal-to-noise ratio (SNR). The results demonstrate that the proposed AET approach can remarkably improve the visibility of rear-surface cracks and thereby improve the capability of inductive thermography to detect rear-surface cracks in metal plates.
Monitoring Antarctic ice sheet surface melting with TIMESAT algorithm
NASA Astrophysics Data System (ADS)
Ye, Y.; Cheng, X.; Li, X.; Liang, L.
2011-12-01
The Antarctic ice sheet contributes significantly to the global heat budget by controlling the exchange of heat, moisture, and momentum at the surface-atmosphere interface, directly influencing global atmospheric circulation and climate change. Ice sheet melting increases snow moisture, which accelerates the disintegration and movement of the ice sheet. Detecting Antarctic ice sheet melting is therefore essential for global climate change research. In the past decades, various methods have been proposed for extracting snowmelt information from multi-channel satellite passive microwave data. Some are based on brightness temperature values or a composite index of them, and others are based on edge detection. TIMESAT (Time-series of Satellite sensor data) is an algorithm for extracting seasonality information from time series of satellite sensor data. With TIMESAT, the long brightness-temperature time series (SSM/I 19H) is fitted with a double logistic function. Snow is then classified into wet and dry snow with a generalized Gaussian model. The results were compared with those from a wavelet algorithm, and Antarctic automatic weather station data were used for ground verification. The comparison shows that this algorithm is effective in detecting ice sheet melting. The spatial distribution of melting areas (Fig. 1) shows that the majority of melting areas are located on the edge of the Antarctic ice shelf region, affected by land cover type, surface elevation, and geographic location (latitude). In addition, Antarctic ice sheet melting varies with the seasons: it is particularly acute in summer, peaking in December and January and staying low by March. In summary, from 1988 to 2008, the Ross Ice Shelf and the Ronne Ice Shelf had the greatest interannual variability in the amount of melting, which largely determines the overall interannual variability in Antarctica. Other regions, especially the Larsen Ice Shelf and the Wilkins Ice Shelf in the Antarctic Peninsula region, show relatively stable and consistent melt occurrence from year to year.
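A small sketch of the double-logistic fitting step with SciPy; the parameterization, starting values, and synthetic brightness-temperature series are illustrative assumptions, not TIMESAT's actual implementation.

```python
# Sketch: fit a double logistic function to a one-year brightness-temperature
# series, the core of the TIMESAT-style seasonality extraction described above.
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, t_on, r_on, t_off, r_off):
    rise = 1.0 / (1.0 + np.exp(-r_on * (t - t_on)))    # melt-season onset
    fall = 1.0 / (1.0 + np.exp(-r_off * (t - t_off)))  # melt-season end
    return base + amp * (rise - fall)

t = np.arange(365.0)
rng = np.random.default_rng(0)
tb = double_logistic(t, 180.0, 70.0, 120.0, 0.15, 220.0, 0.12)
tb += rng.normal(0.0, 2.0, t.size)                     # noisy Tb series (K)

p0 = [170.0, 60.0, 110.0, 0.1, 230.0, 0.1]             # rough initial guess
params, _ = curve_fit(double_logistic, t, tb, p0=p0, maxfev=20000)
print(params)                                          # fitted season timing
```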
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different kinds of dimension reduction are first used to obtain a subspace of the hyperspectral data: (1) unsupervised feature extraction methods, including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction methods, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); and (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm, in which the markers are extracted from the classification maps obtained by both an SVM and the watershed segmentation algorithm. To evaluate the proposed approach, the Pavia University hyperspectral data set is used. Experimental results show that the proposed approach using the GA achieves an overall accuracy approximately 8% higher than the original MSF-based algorithm.
NASA Astrophysics Data System (ADS)
Khan, F.; Enzmann, F.; Kersten, M.
2015-12-01
In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam hardening (BH) is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, i.e., the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least-squares support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high-dimensional training data set.
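A sketch of the first stage in NumPy (the paper's own code is in Matlab): fit a quadratic surface to a slice by linear least squares and keep the residual as the BH-corrected image. The synthetic slice below is a stand-in.

```python
# Sketch: remove beam-hardening "cupping" by subtracting a best-fit quadratic
# surface z = a + bx + cy + dx^2 + exy + fy^2 from a reconstructed slice.
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n] / (n - 1.0)
cupping = 0.3 * ((x - 0.5) ** 2 + (y - 0.5) ** 2)        # synthetic BH artefact
grains = (np.random.default_rng(0).random((n, n)) > 0.99) * 0.5
slice_img = 0.5 + cupping + grains                       # stand-in slice

# Linear least-squares fit of the six quadratic-surface coefficients.
A = np.column_stack([np.ones(n * n), x.ravel(), y.ravel(),
                     x.ravel() ** 2, (x * y).ravel(), y.ravel() ** 2])
coef, *_ = np.linalg.lstsq(A, slice_img.ravel(), rcond=None)
surface = (A @ coef).reshape(n, n)

corrected = slice_img - surface                          # BH-corrected residual
```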
NASA Astrophysics Data System (ADS)
Irsch, Kristina; Lee, Soohyun; Bose, Sanjukta N.; Kang, Jin U.
2018-02-01
We present an optical coherence tomography (OCT) imaging system that effectively compensates unwanted axial motion with micron-scale accuracy. The OCT system is based on a swept-source (SS) engine (1060-nm center wavelength, 100-nm full-width sweeping bandwidth, and 100-kHz repetition rate), with axial and lateral resolutions of about 4.5 and 8.5 microns, respectively. The SS-OCT system incorporates a distance sensing method utilizing an envelope-based surface detection algorithm. The algorithm locates the target surface from the B-scans, taking into account not just the first or highest peak but the entire signature of sequential A-scans. Subsequently, a Kalman filter is applied as a predictor to make up for system latencies before the calculated position information is sent to control a linear motor, which adjusts and maintains a fixed system-target distance. To test system performance, the motion-correction algorithm was compared to earlier, more basic peak-based surface detection methods and to performing no motion compensation. Results demonstrate increased robustness and reproducibility when utilizing the novel technique, particularly noticeable in multilayered tissues. Implementing such motion compensation into clinical OCT systems may thus improve the reliability of the objective and quantitative information that can be extracted from OCT measurements.
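To illustrate the prediction step, here is a minimal constant-velocity Kalman filter in NumPy whose one-step-ahead prediction could stand in for the latency compensation described above; the state model, noise covariances, and scan interval are all assumptions.

```python
# Minimal constant-velocity Kalman filter used as a one-step-ahead predictor
# of the surface's axial position (the value a motor controller would act on).
import numpy as np

dt = 1e-2                                    # inter-measurement interval (s)
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity state model
Hm = np.array([[1.0, 0.0]])                  # we observe position only
Q = 1e-4 * np.eye(2)                         # process noise (assumption)
R = np.array([[1e-2]])                       # measurement noise (assumption)

x, P = np.zeros((2, 1)), np.eye(2)
for z in np.sin(np.linspace(0.0, 3.0, 300)): # simulated axial positions
    x, P = F @ x, F @ P @ F.T + Q            # predict: latency compensation
    K = P @ Hm.T @ np.linalg.inv(Hm @ P @ Hm.T + R)
    x = x + K @ (np.array([[z]]) - Hm @ x)   # update with the new detection
    P = (np.eye(2) - K @ Hm) @ P
print(float((F @ x)[0, 0]))                  # predicted next-step position
```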
[An improved algorithm for electrohysterogram envelope extraction].
Lu, Yaosheng; Pan, Jie; Chen, Zhaoxia
2017-02-01
Extracting the uterine contraction signal from the abdominal uterine electromyogram (EMG) is considered the most promising way to replace the traditional tocodynamometer (TOCO) for detecting uterine contraction activity. The traditional root mean square (RMS) algorithm has only limited value in canceling impulsive noise. In our study, an improved algorithm for uterine EMG envelope extraction was proposed to overcome this problem. First, a zero-crossing detection method was used to separate the bursts of uterine electrical activity from the raw uterine EMG signal. After processing the separated signals with two filtering windows of different widths, we used the traditional RMS algorithm to extract the uterine EMG envelope. To assess its performance, the improved algorithm was compared with two existing intensity of uterine electromyogram (IEMG) extraction algorithms. The results showed that the improved algorithm was better than the traditional ones at eliminating the impulsive noise present in the uterine EMG signal. The measurement sensitivity and positive predictive value (PPV) of the improved algorithm were 0.952 and 0.922, respectively, which were not only significantly higher than the corresponding values (0.859 and 0.847) of the first comparison algorithm, but also higher than the values (0.928 and 0.877) of the second. Thus the new method is reliable and effective.
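A hedged sketch of the double-window RMS idea: a moving RMS with a short and a long window, combined here by taking the pointwise minimum so that isolated spikes are suppressed. The window lengths and the combination rule are assumptions; the abstract does not specify them.

```python
# Sketch: moving-RMS envelope with two window widths; taking the minimum of
# the two suppresses impulsive spikes that inflate the short-window estimate.
import numpy as np

def moving_rms(x, win):
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(x ** 2, kernel, mode='same'))

rng = np.random.default_rng(0)
t = np.linspace(0.0, np.pi, 2000)
emg = rng.standard_normal(2000) * np.sin(t) ** 2   # burst-like stand-in signal
emg[500] += 25.0                                   # impulsive artefact

envelope = np.minimum(moving_rms(emg, 51), moving_rms(emg, 251))
```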
Fusion of spectral models for dynamic modeling of sEMG and skeletal muscle force.
Potluri, Chandrasekhar; Anugolu, Madhavi; Chiu, Steve; Urfer, Alex; Schoen, Marco P; Naidu, D Subbaram
2012-01-01
In this paper, we present a method of combining spectral models using a Kullback Information Criterion (KIC) data fusion algorithm. Surface electromyographic (sEMG) signals and their corresponding skeletal muscle force signals are acquired from three sensors and pre-processed using a Half-Gaussian filter and a Chebyshev Type-II filter, respectively. Spectral models, namely Spectral Analysis (SPA), Empirical Transfer Function Estimate (ETFE), and Spectral Analysis with Frequency Dependent Resolution (SPFRD), are extracted with the sEMG signals as input and skeletal muscle force as the output signal. These signals are then employed in a System Identification (SI) routine to establish the dynamic models relating the input and output. After the individual models are extracted, they are fused by a probability-based KIC fusion algorithm. The results show that the SPFRD spectral models perform better than the SPA and ETFE models in modeling the frequency content of the sEMG/skeletal muscle force data.
Automatic Detection of Storm Damages Using High-Altitude Photogrammetric Imaging
NASA Astrophysics Data System (ADS)
Litkey, P.; Nurminen, K.; Honkavaara, E.
2013-05-01
The risks of storms that cause damage in forests are increasing due to climate change. Quickly detecting fallen trees, assessing their quantity, and collecting them efficiently are of great importance for economic and environmental reasons. Visually detecting and delineating storm damage is a laborious and error-prone process; thus, it is important to develop cost-efficient and highly automated methods. The objective of our research project is to investigate and develop a reliable and efficient method for automatic storm damage detection based on airborne imagery collected after a storm. The method requires before-storm and after-storm surface models: a difference surface is calculated from the two DSMs, and the locations where significant changes appear are automatically detected (see the sketch below). In our previous research we used a four-year-old airborne laser scanning surface model as the before-storm surface. The after-storm DSM was produced from the photogrammetric images using the Next Generation Automatic Terrain Extraction (NGATE) algorithm of the Socet Set software. We obtained 100% accuracy in detecting major storm damage. In this investigation we will further evaluate the sensitivity of the storm-damage detection process. We will investigate the potential of national airborne photography, collected during the leaf-off season, to automatically produce a before-storm DSM using image matching, and we will also compare the impact of the terrain extraction algorithm on the results. Our results will also promote the potential of national open-source data sets in the management of natural disasters.
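A toy sketch of the core differencing step, assuming co-registered before/after DSMs and an illustrative 10 m height-drop threshold (not a value from the paper):

```python
# Sketch: difference two co-registered DSMs and flag cells where the surface
# dropped by more than a tree-height threshold (candidate fallen-tree areas).
import numpy as np

rng = np.random.default_rng(0)
before = 20.0 + 2.0 * rng.random((512, 512))   # canopy-height DSM (m)
after = before.copy()
after[100:140, 200:260] = 2.0                  # a patch of fallen trees

drop = before - after
damage_mask = drop > 10.0                      # assumed threshold (m)
print(int(damage_mask.sum()), "candidate damage cells")
```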
Automated extraction of pleural effusion in three-dimensional thoracic CT images
NASA Astrophysics Data System (ADS)
Kido, Shoji; Tsunomori, Akinori
2009-02-01
It is important for the diagnosis of pulmonary diseases to quantitatively measure the volume of accumulating pleural effusion in three-dimensional thoracic CT images. However, correct automated extraction of pleural effusion is difficult. A conventional extraction algorithm using a gray-level threshold cannot correctly separate pleural effusion from the thoracic wall or mediastinum, because the density of pleural effusion in CT images is similar to theirs. We have therefore developed an automated extraction method that extracts the lung area together with the pleural effusion. Our method uses a lung template obtained from a normal lung for the segmentation of lungs with pleural effusions. The registration process consists of two steps. The first step is a global matching between the normal and abnormal lungs based on organs such as the bronchi, bones (ribs, sternum, and vertebrae), and the upper surfaces of the livers, which are extracted using a region-growing algorithm. The second step is a local matching between the normal and abnormal lungs, deformed by the parameters obtained from the global matching. Finally, we segment a lung with pleural effusion using the template deformed by the two sets of parameters obtained from the global and local matching. We compared our method with a conventional extraction method using a gray-level threshold and with two published methods. The extraction rates of pleural effusion obtained with our method were much higher than those obtained with the other methods. This automated extraction method is therefore promising for the diagnosis of pulmonary diseases, as it provides a quantitative volume of accumulating pleural effusion.
Three Dimensional Urban Characterization by IFSAR Measurements
NASA Technical Reports Server (NTRS)
Gamba, P.; Houshmand, B.
1998-01-01
In this paper a machine vision approach is applied to Interferometric Synthetic Aperture Radar (IFSAR) data to extract the most relevant built structures in a dense urban environment. The algorithm clusters primitives (line segments) into more complex surfaces (planes) to approximate the 3D shape of these objects. Very interesting results obtained from TOPSAR data recorded over Santa Monica are presented.
Contact-free palm-vein recognition based on local invariant features.
Kang, Wenxiong; Liu, Yang; Wu, Qiuxia; Yue, Xishun
2014-01-01
Contact-free palm-vein recognition is one of the most challenging and promising areas in hand biometrics. In view of the existing problems in contact-free palm-vein imaging, including projection transformation, uneven illumination and difficulty in extracting exact ROIs, this paper presents a novel recognition approach for contact-free palm-vein recognition that performs feature extraction and matching on all vein textures distributed over the palm surface, including finger veins and palm veins, to minimize the loss of feature information. First, a hierarchical enhancement algorithm, which combines a DOG filter and histogram equalization, is adopted to alleviate uneven illumination and to highlight vein textures. Second, RootSIFT, a more stable local invariant feature extraction method in comparison to SIFT, is adopted to overcome the projection transformation in contact-free mode. Subsequently, a novel hierarchical mismatching removal algorithm based on neighborhood searching and LBP histograms is adopted to improve the accuracy of feature matching. Finally, we rigorously evaluated the proposed approach using two different databases and obtained 0.996% and 3.112% Equal Error Rates (EERs), respectively, which demonstrate the effectiveness of the proposed approach.
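A small sketch of the first enhancement stage only, using a difference-of-Gaussians band-pass followed by histogram equalization; the sigma values and the stand-in image are assumptions for illustration.

```python
# Sketch: DoG band-pass to emphasize vein-scale structures, then global
# histogram equalization to flatten uneven illumination.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import exposure

img = np.random.default_rng(0).random((240, 320))    # stand-in palm image

dog = gaussian_filter(img, sigma=2) - gaussian_filter(img, sigma=6)
dog = (dog - dog.min()) / (np.ptp(dog) + 1e-12)      # rescale to [0, 1]
enhanced = exposure.equalize_hist(dog)               # illumination-robust map
```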
Component extraction on CT volumes of assembled products using geometric template matching
NASA Astrophysics Data System (ADS)
Muramatsu, Katsutoshi; Ohtake, Yutaka; Suzuki, Hiromasa; Nagai, Yukie
2017-03-01
As a method of non-destructive internal inspection, X-ray computed tomography (CT) is used not only in medical applications but also for product inspection. Some assembled products can be divided into separate components based on density, which is known to be approximately proportional to the CT values. However, components with similar densities cannot be distinguished by this CT-value-driven approach. In this study, we propose a new algorithm that extracts components from the CT volume, a set of voxels with assigned CT values, using the component's surface mesh rather than its density as the template. The method has two main stages: rough matching and fine matching. At the rough matching stage, the positions of candidate targets are identified roughly in the CT volume using the template of the target component. At the fine matching stage, these candidates are precisely matched against the templates, allowing the correct positions of the components to be detected in the CT volume. The results of two computational experiments show that the proposed algorithm is able to extract components with similar densities from CT volumes of assembled products.
The artificial object detection and current velocity measurement using SAR ocean surface images
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Strotov, Valery; Ershov, Maksim; Muraviev, Vadim; Feldman, Alexander; Smirnov, Sergey
2017-10-01
Because the water surface covers vast areas, remote sensing is the most appropriate way of getting information about the ocean environment for vessel tracking, security purposes, ecological studies, and others. Processing of synthetic aperture radar (SAR) images is extensively used for control and monitoring of the ocean surface. Image data can be acquired from Earth observation satellites such as TerraSAR-X, ERS, and COSMO-SkyMed. SAR image processing can thus be used to solve many problems arising in this field of research. This paper discusses some of them, including ship detection, oil pollution control, and ocean current mapping. Due to the complexity of the problem, several specialized algorithms need to be developed. The oil spill detection algorithm consists of the following main steps: image preprocessing, detection of dark areas, parameter extraction, and classification. The ship detection algorithm consists of the following main steps: prescreening, land masking, image segmentation combined with parameter measurement, ship orientation estimation, and object discrimination. The proposed approach to ocean current mapping is based on Doppler's law. The results of computer modeling on real SAR images are presented. Based on these results, it is concluded that the proposed approaches can be used in maritime applications.
Changes to the COS Extraction Algorithm for Lifetime Position 3
NASA Astrophysics Data System (ADS)
Proffitt, Charles R.; Bostroem, K. Azalee; Ely, Justin; Foster, Deatrick; Hernandez, Svea; Hodge, Philip; Jedrzejewski, Robert I.; Lockwood, Sean A.; Massa, Derck; Peeples, Molly S.; Oliveira, Cristina M.; Penton, Steven V.; Plesha, Rachel; Roman-Duval, Julia; Sana, Hugues; Sahnow, David J.; Sonnentrucker, Paule; Taylor, Joanna M.
2015-09-01
The COS FUV Detector Lifetime Position 3 (LP3) has been placed only 2.5" below the original lifetime position (LP1). This is sufficiently close to gain-sagged regions at LP1 that a revised extraction algorithm is needed to ensure good spectral quality. We provide an overview of this new "TWOZONE" extraction algorithm, discuss its strengths and limitations, describe new output columns in the X1D files that show the boundaries of the new extraction regions, and provide some advice on how to manually tune the algorithm for specialized applications.
[Application of genetic algorithm in blending technology for extractions of Cortex Fraxini].
Yang, Ming; Zhou, Yinmin; Chen, Jialei; Yu, Minying; Shi, Xiufeng; Gu, Xijun
2009-10-01
To explore the feasibility of a genetic algorithm (GA) for multi-objective blending of Cortex Fraxini extracts. Taking as the optimization objective a combination of fingerprint similarity and the root-mean-square error of multiple key constituents, a new multi-objective optimization model over 10 batches of Cortex Fraxini extracts was built, and the blending coefficients were obtained by the genetic algorithm. The quality of the 10 batches of Cortex Fraxini extracts after blending was evaluated with fingerprint similarity and root-mean-square error as indexes, and it was clearly improved: compared with the fingerprint of the control sample, the similarity increased while the degree of variation decreased, and the relative deviation of the key constituents was less than 10%. This proves that a genetic algorithm works well for multi-objective blending of Cortex Fraxini extracts. The method can serve as a reference for controlling the quality of Cortex Fraxini extracts, and genetic algorithms are advisable in blending technology for extracts of Chinese medicines.
Predicting nucleic acid binding interfaces from structural models of proteins.
Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael
2012-02-01
The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from the surfaces of protein structural models obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models better predict the real nucleic acid binding interfaces than patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. Copyright © 2011 Wiley Periodicals, Inc.
Novel imaging analysis system to measure the spatial dimension of engineered tissue construct.
Choi, Kyoung-Hwan; Yoo, Byung-Su; Park, So Ra; Choi, Byung Hyune; Min, Byoung-Hyun
2010-02-01
The measurement of the spatial dimensions of tissue-engineered constructs is very important for their clinical applications. In this study, a novel method to measure the volume of tissue-engineered constructs was developed using iterative mathematical computations. The method measures and analyzes three-dimensional (3D) parameters of a construct to estimate its actual volume using a sequence of software-based mathematical algorithms. The mathematical algorithm is composed of two stages: shape extraction and volume determination. The shape extraction utilized 3D images of a construct (length, width, and thickness) captured by a high-quality camera with a charge-coupled device. The surface of the 3D images was then divided into fine sections. The area of each section was measured and combined to obtain the total surface area. The 3D volume of the target construct was then mathematically obtained from its total surface area and thickness. The accuracy of the measurement method was verified by comparing the results with those obtained from the hydrostatic weighing method (Korea Research Institute of Standards and Science [KRISS], Korea). The mean difference in volume between the two methods was 0.0313 ± 0.0003% (n = 5, P = 0.523), with no statistically significant difference. In conclusion, our image-based spatial measurement system is a reliable and easy method to obtain an accurate 3D volume of a tissue-engineered construct.
Model-based Bayesian signal extraction algorithm for peripheral nerves
NASA Astrophysics Data System (ADS)
Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.
2017-10-01
Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method that operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of controlling a prosthetic limb.
Detection of small surface defects using DCT based enhancement approach in machine vision systems
NASA Astrophysics Data System (ADS)
He, Fuqiang; Wang, Wen; Chen, Zichen
2005-12-01
Utilizing a DCT-based enhancement approach, an improved small-defect detection algorithm for real-time leather surface inspection was developed. A two-stage decomposition procedure is proposed to extract an odd-odd frequency matrix after a digital image has been transformed to the DCT domain. A reverse cumulative sum algorithm is then proposed to detect the transition points of the gentle curves plotted from the odd-odd frequency matrix. The best radius of the cutting sector is computed from the transition points, and a high-pass filtering operation is applied. The filtered image is then inverse-transformed back to the spatial domain. Finally, the restored image is segmented by an entropy method and some defect features are calculated. Experimental results show that the proposed method achieves a small-defect detection rate of 94%.
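A sketch of the DCT-domain high-pass step with SciPy; here the cutting-sector radius is fixed by hand, whereas the paper estimates it from the transition points of the odd-odd frequency curves.

```python
# Sketch: zero a low-frequency sector of the 2-D DCT and invert, so that
# small defects stand out against the suppressed background texture.
import numpy as np
from scipy.fft import dctn, idctn

img = np.random.default_rng(0).random((256, 256)) * 0.2
img[120:124, 130:134] += 0.8                 # small synthetic defect

C = dctn(img, norm='ortho')
u, v = np.ogrid[0:256, 0:256]
C[np.hypot(u, v) < 40] = 0.0                 # hand-picked sector radius
restored = idctn(C, norm='ortho')            # high-pass filtered image
```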
NASA Astrophysics Data System (ADS)
Hildebrandt, Mario; Kiltz, Stefan; Krapyvskyy, Dmytro; Dittmann, Jana; Vielhauer, Claus; Leich, Marcus
2011-11-01
A machine-assisted analysis of traces from crime scenes might become possible with the advent of new high-resolution, non-destructive, contact-less acquisition techniques for latent fingerprints. This requires reliable techniques for the automatic extraction of fingerprint features from latent and exemplar fingerprints for matching purposes using pattern recognition approaches. We therefore evaluate the NIST Biometric Image Software for the feature extraction and verification of contact-lessly acquired latent fingerprints to determine potential error rates. Our exemplary test setup includes 30 latent fingerprints from 5 people in two test sets that are acquired from different surfaces using a chromatic white light sensor. The first test set includes 20 fingerprints on two different surfaces and is used to determine the feature extraction performance. The second test set includes one latent fingerprint on 10 different surfaces and an exemplar fingerprint to determine the verification performance. The sensing technique used here does not require a physical or chemical enhancement of the fingerprint residue's visibility, so the original trace remains unaltered for further investigations. No dedicated feature extraction and verification techniques have yet been applied to such data. Hence, we see the need for appropriate algorithms that are suitable to support forensic investigations.
Research and implementation of finger-vein recognition algorithm
NASA Astrophysics Data System (ADS)
Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin
2017-06-01
In finger-vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region according to a bidirectional gray projection method. Inspired by the fact that features in the vein areas have an appearance similar to valleys, a novel method is proposed to extract the center and width of the veins based on multi-directional gradients; it is easy to compute, quick, and stable. On this basis, an encoding method is designed to determine the gray-value distribution of the texture image; this algorithm effectively overcomes texture extraction errors at the edges. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray-value matching algorithm. Experimental results on pairs of matched images show that the proposed method achieves an EER of 3.21% and extracts features in 27 ms per image. It can be concluded that the proposed algorithm has clear advantages in texture extraction efficiency, matching accuracy, and algorithmic efficiency.
Automated road network extraction from high spatial resolution multi-spectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Qiaoping
For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
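To give a flavor of the centerline step, here is a sketch using scikit-image's Radon transform to recover the dominant line in a toy road-component image; the full method applies the transform iteratively and locally, with line-width estimation that is omitted here.

```python
# Sketch: Radon transform of a road-component image; the sinogram peak gives
# the dominant line's angle and offset (the iterative/local refinement and
# width estimation of the full method are omitted).
import numpy as np
from skimage.transform import radon

img = np.zeros((128, 128))
img[:, 60:64] = 1.0                          # a vertical road stripe

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(img, theta=theta, circle=False)
offset, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
print("dominant line angle:", theta[angle_idx], "degrees")
```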
Marri, Kiran; Swaminathan, Ramakrishnan
2016-06-23
Muscle contractions can be categorized into isometric, isotonic (concentric and eccentric), and isokinetic contractions. Eccentric contractions are very effective for promoting muscle hypertrophy and produce larger forces than concentric or isometric contractions. Surface electromyography signals are widely used for analyzing muscle activities. These signals are nonstationary and nonlinear, and exhibit self-similar multifractal behavior. Research on surface electromyography signals using multifractal analysis is not well established for concentric and eccentric contractions. In this study, an attempt has been made to analyze the concentric and eccentric contractions associated with the biceps brachii muscle using surface electromyography signals and the multifractal detrended moving average algorithm. Surface electromyography signals were recorded from 20 healthy individuals while performing a single curl exercise. The preprocessed signals were divided into concentric and eccentric cycles and in turn divided into phases based on the range of motion: lower (0°-90°) and upper (>90°). The segments of the surface electromyography signal were subjected to the multifractal detrended moving average algorithm, and multifractal features such as the strength of multifractality, peak exponent value, maximum exponent, and exponent index were extracted in addition to conventional linear features such as root mean square and median frequency. The results show that surface electromyography signals exhibit multifractal behavior in both concentric and eccentric cycles. The mean strength of multifractality increased by 15% in eccentric contraction compared to concentric contraction. The lowest and highest exponent index values are observed in the upper concentric and lower eccentric contractions, respectively. The multifractal features are found to be more helpful than root mean square and median frequency in differentiating surface electromyography signals along the range of motion. These multifractal features extracted from concentric and eccentric contractions may be useful for assessing surface electromyography signals in sports medicine, training, and rehabilitation programs. © IMechE 2016.
Automated Fault Interpretation and Extraction using Improved Supplementary Seismic Datasets
NASA Astrophysics Data System (ADS)
Bollmann, T. A.; Shank, R.
2017-12-01
During the interpretation of seismic volumes, it is necessary to interpret faults along with horizons of interest. With the improvement of technology, the interpretation of faults can be expedited with the aid of different algorithms that create supplementary seismic attributes, such as semblance and coherency. These products highlight discontinuities, but still need a large amount of human interaction to interpret faults and are plagued by noise and stratigraphic discontinuities. Hale (2013) presents a method to improve on these datasets by creating what is referred to as a Fault Likelihood volume. In general, these volumes contain less noise and do not emphasize stratigraphic features. Instead, planar features within a specified strike and dip range are highlighted. Once a satisfactory Fault Likelihood Volume is created, extraction of fault surfaces is much easier. The extracted fault surfaces are then exported to interpretation software for QC. Numerous software packages have implemented this methodology with varying results. After investigating these platforms, we developed a preferred Automated Fault Interpretation workflow.
Zhou, Pei-pei; Shan, Jin-feng; Jiang, Jian-lan
2015-12-01
To optimize the microwave-assisted extraction of curcuminoids from Curcuma longa. On the basis of single-factor experiments, the ethanol concentration, the liquid-to-solid ratio, and the microwave time were selected for further optimization. Support Vector Regression (SVR) and the Central Composite Design-Response Surface Methodology (CCD) algorithm were utilized to design and establish models, respectively, while Particle Swarm Optimization (PSO) was introduced to optimize the parameters of the SVR models and to search for the optimal points of the models. The evaluation indicator was the sum of curcumin, demethoxycurcumin, and bisdemethoxycurcumin determined by HPLC. The optimal parameters of microwave-assisted extraction were as follows: ethanol concentration of 69%, liquid-to-solid ratio of 21 : 1, and microwave time of 55 s. Under those conditions, the sum of the three curcuminoids was 28.97 mg/g (per gram of rhizome powder). Both the CCD model and the SVR model were credible, as they predicted similar process conditions and the deviations in yield were less than 1.2%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, Mark William
2007-09-01
Gait, or an individual's manner of walking, is one approach for recognizing people at a distance. Studies in psychophysics and medicine indicate that humans can recognize people by their gait, and have found twenty-four different components of gait that, taken together, make it a unique signature. Besides not requiring close sensor contact, gait also does not necessarily require a cooperative subject. Using video data of people walking in different scenarios and environmental conditions, we develop and test an algorithm that uses shape and motion to identify people from their gait. The algorithm uses dynamic time warping to match stored templates against an unknown sequence of silhouettes extracted from a person walking. While results under similar constraints and conditions are very good, the algorithm's performance quickly degrades under varying conditions such as surface and clothing.
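A minimal dynamic-time-warping distance in NumPy, scoring an unknown sequence against a stored template; the 1-D sequences here are stand-ins for the paper's silhouette-derived features.

```python
# Minimal DTW distance: align an unknown gait-feature sequence to a stored
# template, tolerating differences in walking pace (sequence length).
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))   # stored gait template
probe = np.sin(np.linspace(0.0, 2.0 * np.pi, 63))      # same gait, other pace
print(dtw_distance(template, probe))                   # small despite warping
```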
Automatic detection of Martian dark slope streaks by machine learning using HiRISE images
NASA Astrophysics Data System (ADS)
Wang, Yexin; Di, Kaichang; Xin, Xin; Wan, Wenhui
2017-07-01
Dark slope streaks (DSSs) on the Martian surface are among the active geologic features that can be observed on Mars today. The detection of DSSs is a prerequisite for studying their appearance, morphology, and distribution to reveal the underlying geological mechanisms. In addition, increasingly massive amounts of high-resolution Mars data are now available. Hence, an automatic detection method for locating DSSs is highly desirable. In this research, we present an automatic DSS detection method that combines interest region extraction and machine learning techniques. The interest region extraction combines gradient and regional grayscale information. Moreover, a novel recognition strategy is proposed that takes the normalized minimum bounding rectangles (MBRs) of the extracted regions, calculates the Local Binary Pattern (LBP) feature, and trains a DSS classifier using the AdaBoost machine learning algorithm. Comparative experiments using five different feature descriptors and three different machine learning algorithms show the superiority of the proposed method. Experimental results utilizing 888 extracted region samples from 28 HiRISE images show that the overall detection accuracy of our proposed method is 92.4%, with a true positive rate of 79.1% and a false positive rate of 3.7%, which in particular indicates the method's strong performance at eliminating non-DSS regions.
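A sketch of the feature/learner pairing named above, with scikit-image's LBP and scikit-learn's AdaBoost; the candidate regions and labels are random stand-ins, not real HiRISE extractions.

```python
# Sketch: uniform-LBP histogram per candidate region, then an AdaBoost
# classifier separating DSS from non-DSS regions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

def lbp_hist(patch, P=8, R=1.0):
    patch8 = (patch * 255).astype(np.uint8)
    lbp = local_binary_pattern(patch8, P, R, method='uniform')
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

X = np.array([lbp_hist(rng.random((48, 48))) for _ in range(200)])
y = rng.integers(0, 2, 200)                  # 1 = DSS, 0 = non-DSS (stand-in)
clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:5]))
```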
NASA Astrophysics Data System (ADS)
Lancaster, N.; LeBlanc, D.; Bebis, G.; Nicolescu, M.
2015-12-01
Dune-field patterns are believed to behave as self-organizing systems, but what causes the patterns to form is still poorly understood. The most obvious (and in many cases the most significant) aspect of a dune system is the pattern of dune crest lines. Extracting meaningful features such as crest length, orientation, spacing, bifurcations, and merging of crests from image data can reveal important information about a dune field's morphological properties, development, and response to changes in boundary conditions, but manual methods are labor-intensive and time-consuming. We are developing the capability to recognize and characterize patterns of sand dunes on planetary surfaces. Our goal is to develop a robust methodology and the necessary algorithms for automated or semi-automated extraction of dune morphometric information from image data. Our main approach uses image processing methods to extract gradient information from satellite images of dune fields. Typically, the gradients have a dominant magnitude and orientation; in many cases, the images have two dominant gradient orientations, for the sunny and shaded sides of the dunes. A histogram of the gradient orientations is used to determine the dominant orientation, and a threshold is applied to the image that keeps gradient orientations agreeing with the dominant one. The contours of the resulting binary image can then be used to determine the dune crest lines, based on pixel intensity values. Once the crest lines have been extracted, the morphological properties can be computed. We have tested our approach on a variety of images of linear and crescentic (transverse) dunes and compared the dune detection algorithms with manually digitized dune crest lines, achieving true positive rates of 0.57-0.99 and false positive rates of 0.30-0.67, indicating that our approach is generally robust.
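A small sketch of the orientation-histogram step in NumPy; the bin width and angular tolerance are illustrative assumptions.

```python
# Sketch: per-pixel gradient orientations, a histogram to find the dominant
# orientation, and a wrap-aware mask of pixels that agree with it.
import numpy as np

img = np.random.default_rng(0).random((256, 256))    # stand-in dune image
gy, gx = np.gradient(img)
angles = np.degrees(np.arctan2(gy, gx)) % 180.0      # fold to [0, 180)

hist, edges = np.histogram(angles, bins=36, range=(0.0, 180.0))
dominant = edges[np.argmax(hist)] + 2.5              # bin centre (degrees)

diff = np.abs(angles - dominant)
mask = np.minimum(diff, 180.0 - diff) < 10.0         # within 10 deg tolerance
```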
Vehicle Detection of Aerial Image Using TV-L1 Texture Decomposition
NASA Astrophysics Data System (ADS)
Wang, Y.; Wang, G.; Li, Y.; Huang, Y.
2016-06-01
Vehicle detection from high-resolution aerial images facilitates the study of public traveling behavior on a large scale. In the context of roads, a simple and effective algorithm is proposed to extract texture-salient vehicles from the pavement surface. Texturally speaking, the majority of the pavement surface varies little except in the neighborhood of vehicles and edges. Within a certain distance from a given vector of the road network, the aerial image is decomposed into a smoothly varying cartoon part and an oscillatory textural part containing the details. The variational model with a Total Variation regularization term and an L1 fidelity term (TV-L1) is adopted to obtain the salient texture of vehicles and the cartoon surface of the pavement. To eliminate the noise of the texture decomposition, regions of pavement surface are refined by seed growing and morphological operations. Based on a shape saliency analysis of the central objects in those regions, vehicles are detected as objects of salient rectangular shape. The proposed algorithm is tested with a diverse set of aerial images acquired at various resolutions and in various scenarios around China. Experimental results demonstrate that the proposed algorithm detects vehicles at a rate of 71.5% with a false alarm rate of 21.5%, and that processing takes 39.13 seconds for a 4656 x 3496 aerial image. It is promising for large-scale transportation management and planning.
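A hedged sketch of the cartoon-texture split: scikit-image ships an ROF (TV-L2) denoiser, used here as a stand-in for the paper's TV-L1 model; the smooth output plays the role of the pavement cartoon and the residual carries the vehicle-salient texture.

```python
# Sketch: total-variation cartoon/texture decomposition. Note this uses the
# TV-L2 (ROF) denoiser available in scikit-image, a stand-in for TV-L1.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

img = 0.5 + 0.1 * np.random.default_rng(0).random((256, 256))
img[100:120, 80:110] += 0.4                       # a bright "vehicle"

cartoon = denoise_tv_chambolle(img, weight=0.2)   # smooth pavement component
texture = img - cartoon                           # oscillatory, vehicle-salient
```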
NASA Astrophysics Data System (ADS)
Lu, Li; Sheng, Wen; Liu, Shihua; Zhang, Xianzhi
2014-10-01
Ballistic missile hyperspectral data from an imaging spectrometer on a near-space platform are generated by a numerical method. The characteristics of the ballistic missile hyperspectral data are extracted and matched using two different algorithms, called transverse counting and quantization coding, respectively. The simulation results show that both algorithms extract the characteristics of the ballistic missile adequately and accurately. The algorithm based on transverse counting has lower complexity and can be implemented more easily than the algorithm based on quantization coding. The transverse counting algorithm also shows good immunity to disturbance signals and speeds up the matching and recognition of subsequent targets.
NASA Astrophysics Data System (ADS)
Chen, Lei; Li, Dehua; Yang, Jie
2007-12-01
Constructing a virtual international strategy environment requires many kinds of information, such as economic, political, military, diplomatic, cultural, and scientific information. It is therefore very important to build a highly efficient management system for automatic information extraction, classification, recombination, and analysis as the foundation and a component of the military strategy hall. This paper first uses an improved boosting algorithm to classify the collected initial information, and then uses a strategy intelligence extraction algorithm to extract strategic intelligence from the initial information to help strategists analyze it.
Low complexity feature extraction for classification of harmonic signals
NASA Astrophysics Data System (ADS)
William, Peter E.
In this dissertation, feature extraction algorithms have been developed for extracting characteristic features from harmonic signals. The common theme of all the developed algorithms is the simplicity of generating a significant set of features directly from the time-domain harmonic signal. The features are a time-domain representation of the composite, yet sparse, harmonic signature in the spectral domain. The algorithms are adequate for low-power unattended sensors which perform sensing, feature extraction, and classification in a standalone scenario. The first algorithm generates the characteristic features using only the durations between successive zero-crossing intervals. The second algorithm estimates the harmonics' amplitudes of the harmonic structure, employing a simplified least-squares method without the need to estimate the true harmonic parameters of the source signal. The third algorithm, resulting from a collaborative effort with Daniel White at the DSP Lab, University of Nebraska-Lincoln, presents an analog front-end approach that utilizes multichannel analog projection and integration to extract the sparse spectral features from the analog time-domain signal. Classification is performed using a multilayer feedforward neural network. The proposed feature extraction algorithms were evaluated for classification on several acoustic and vibration data sets (including military vehicles and rotating electric machines) and compared against spectral features. For harmonic signals, the time-domain features are simpler to extract and provide equivalent or improved reliability over the spectral features in both detection probability and false alarm rate.
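A small sketch of the first algorithm's idea: durations between successive zero crossings, summarized as a fixed-length feature vector; the histogram binning is an illustrative assumption.

```python
# Sketch: zero-crossing interval features of a harmonic signal, summarized
# as a fixed-length histogram usable as a classifier input.
import numpy as np

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
sig = np.sin(2*np.pi*60*t) + 0.4*np.sin(2*np.pi*180*t)   # harmonic signal

crossings = np.nonzero(np.diff(np.signbit(sig)))[0]      # crossing indices
intervals = np.diff(crossings) / fs                      # durations (seconds)
features, _ = np.histogram(intervals, bins=16, range=(0.0, 0.02), density=True)
print(features)
```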
Recognition of fiducial marks applied to robotic systems. Thesis
NASA Technical Reports Server (NTRS)
Georges, Wayne D.
1991-01-01
The objective was to devise a method to determine the position and orientation of the links of a PUMA 560 using fiducial marks. To this end, it was necessary to design fiducial marks and a corresponding feature extraction algorithm. The marks used are composites of three basic shapes: a circle, an equilateral triangle, and a square. Once a mark is imaged, it is thresholded and the borders of each shape are extracted. These borders are subsequently used in a feature extraction algorithm. Two feature extraction algorithms are compared to determine which produces the more reliable results. The first algorithm is based on moment invariants and the second on the discrete version of the psi-s curve of the boundary. The latter algorithm is clearly superior for this application.
A novel feature extraction approach for microarray data based on multi-algorithm fusion
Jiang, Zhu; Xu, Rong
2015-01-01
Feature extraction is one of the most important and effective methods of reducing dimensionality in data mining, given the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capability. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are evaluated on an individual basis, generally without considering inter-relationships between features, while set-based feature extraction evaluates features based on their role in a feature set by taking into account dependencies between features. Just like learning methods, feature extraction has a problem with its generalization ability, namely robustness. However, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested on gene expression datasets including the Colon cancer, CNS, DLBCL, and Leukemia data. The testing results show that the performance of this algorithm is better than that of existing solutions. PMID:25780277
NASA Astrophysics Data System (ADS)
Chen, Xiang; Li, Jingchao; Han, Hui; Ying, Yulong
2018-05-01
Because of the limitations of the traditional fractal box-counting dimension algorithm in subtle feature extraction of radiation source signals, a dual improved generalized fractal box-counting dimension eigenvector algorithm is proposed. First, the radiation source signal is preprocessed and a Hilbert transform is performed to obtain the instantaneous amplitude of the signal. Then, the improved fractal box-counting dimension of the signal's instantaneous amplitude is extracted as the first eigenvector. At the same time, the improved fractal box-counting dimension of the signal without the Hilbert transform is extracted as the second eigenvector. Finally, the dual improved fractal box-counting dimension eigenvectors form the multi-dimensional eigenvectors used as signal subtle features, which are fed to the grey relation algorithm for radiation source signal recognition. The experimental results show that, compared with the traditional fractal box-counting dimension algorithm and the single improved fractal box-counting dimension algorithm, the proposed dual improved algorithm better extracts the signal's subtle distribution characteristics under different reconstruction phase spaces, and achieves better recognition with good real-time performance.
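For reference, the baseline the paper improves upon is the classical box-counting dimension. The sketch below estimates it for a waveform treated as a curve in the unit square, together with the Hilbert-transform amplitude step mentioned above; the scale set and normalization are illustrative assumptions, and this is not the paper's improved algorithm.

```python
import numpy as np
from scipy.signal import hilbert

def box_counting_dimension(y, scales=(2, 4, 8, 16, 32, 64)):
    """Classical box-counting dimension of a 1-D waveform treated as a
    planar curve: count occupied grid cells at several scales and fit
    the slope of log N(s) versus log s."""
    t = np.linspace(0.0, 1.0, y.size)
    y = (y - y.min()) / (np.ptp(y) + 1e-12)    # normalize into [0, 1]
    counts = []
    for s in scales:
        boxes = set(zip((t * s).astype(int), (y * s).astype(int)))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

sig = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 4096))
amplitude = np.abs(hilbert(sig))               # instantaneous amplitude
print(box_counting_dimension(sig), box_counting_dimension(amplitude))
```

The paper's dual eigenvector simply stacks the (improved) dimension of the raw signal with that of its instantaneous amplitude.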
Robert E. Kennedy; Zhiqiang Yang; Warren B. Cohen
2010-01-01
We introduce and test LandTrendr (Landsat-based detection of Trends in Disturbance and Recovery), a new approach to extract spectral trajectories of land surface change from yearly Landsat time-series stacks (LTS). The method brings together two themes in time-series analysis of LTS: capture of short-duration events and smoothing of long-term trends. Our strategy is...
NASA Astrophysics Data System (ADS)
Baillard, C.; Dissard, O.; Jamet, O.; Maître, H.
Above-ground analysis is key to the reconstruction of urban scenes, but it is a difficult task because of the diversity of the objects involved. We propose a new method for above-ground extraction from an aerial stereo pair which does not require any assumption about object shape or nature. A Digital Surface Model is first produced by a stereoscopic matching stage that preserves discontinuities, and then processed by a region-based Markovian classification algorithm. The extracted above-ground areas are finally characterized as man-made or natural according to the grey level information. The quality of the results is assessed and discussed.
NASA Astrophysics Data System (ADS)
Lavigne, Daniel A.; Breton, Mélanie; Fournier, Georges; Charette, Jean-François; Pichette, Mario; Rivet, Vincent; Bernier, Anne-Pier
2011-10-01
Surveillance operations and search and rescue missions regularly exploit electro-optic imaging systems to detect targets of interest in both the civilian and military communities. By incorporating the polarization of light as supplementary information in such electro-optic imaging systems, it is possible to increase their target discrimination capabilities, considering that man-made objects are known to depolarize light in a different manner than natural backgrounds. Since electro-magnetic radiation emitted and reflected from a smooth surface observed near a grazing angle becomes partially polarized in the visible and infrared wavelength bands, additional information about the shape, roughness, shading, and surface temperature of difficult targets can be extracted by effectively processing such reflected/emitted polarized signatures. This paper presents a set of polarimetric image processing algorithms devised to extract meaningful information from a broad range of man-made objects. Passive polarimetric signatures are acquired in the visible, shortwave infrared, midwave infrared, and longwave infrared bands using a fully automated imaging system developed at DRDC Valcartier. A fusion algorithm is used to enable the discrimination of objects lying in shadowed areas. Performance metrics, derived from the computed Stokes parameters, characterize the degree of polarization of man-made objects. Field experiments conducted during winter and summer demonstrate: 1) the utility of the imaging system for collecting polarized signatures of different objects in the visible and infrared spectral bands, and 2) the enhanced performance of target discrimination and fusion algorithms that exploit the polarized signatures of man-made objects against cluttered backgrounds.
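The degree-of-polarization metrics mentioned above follow directly from the linear Stokes parameters. The sketch below uses the standard textbook formulas (not DRDC's specific pipeline), given four intensity images taken through a linear polarizer at 0°, 45°, 90°, and 135°:

```python
import numpy as np

def degree_of_linear_polarization(I0, I45, I90, I135):
    """Linear Stokes parameters, degree of linear polarization (DoLP)
    and angle of polarization (AoP) from four polarizer-angle images."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)   # total intensity
    S1 = I0 - I90                        # horizontal vs. vertical
    S2 = I45 - I135                      # +45 vs. -45 degrees
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-9)
    aop = 0.5 * np.arctan2(S2, S1)       # radians
    return dolp, aop

# Man-made surfaces tend to show higher DoLP than natural clutter, so
# thresholding the DoLP map is a simple discrimination cue.
```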
Neural network explanation using inversion.
Saad, Emad W; Wunsch, Donald C
2007-01-01
An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion, i.e. calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method which extracts hyperplane rules from continuous or binary attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems, to a real aerospace problem, and compared with similar algorithms using benchmark problems.
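The gradient-descent inversion step that such methods build on can be sketched in a few lines. The toy network below (random weights, one tanh hidden layer, linear output) is purely illustrative and is not the HYPINV rule-extraction procedure itself:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # toy 4-8-1 network
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return (W2 @ h + b2)[0], h

def invert(target, steps=500, lr=0.1):
    """Gradient descent on the *input* so the network output approaches
    `target` -- the core network-inversion idea."""
    x = rng.normal(size=4)
    for _ in range(steps):
        y, h = forward(x)
        dy = y - target                  # d(loss)/dy for loss = 0.5*(y-t)^2
        dh = (W2.T * dy).ravel()         # back-prop through linear output
        dx = W1.T @ ((1.0 - h**2) * dh)  # back-prop through tanh layer
        x -= lr * dx
    return x

x_star = invert(target=0.75)
print(forward(x_star)[0])                # should be close to 0.75
```

Repeating the inversion from many starting points traces out the decision boundary, from which hyperplane rules can then be fitted.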
Automatic extraction of blocks from 3D point clouds of fractured rock
NASA Astrophysics Data System (ADS)
Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen
2017-12-01
This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved RANSAC shape detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved flood-fill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
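Step 1 rests on RANSAC plane detection. The sketch below shows the vanilla form of that idea (random three-point plane hypotheses scored by inlier count); the tolerance and iteration count are assumptions, and the paper's improved variant adds refinements not reproduced here:

```python
import numpy as np

def ransac_plane(points, n_iter=1000, tol=0.02):
    """Vanilla RANSAC plane detection on an (N, 3) point cloud:
    fit a plane to three random points, keep the most-supported one."""
    rng = np.random.default_rng(1)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((points - p0) @ n)  # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Extracting the dominant plane, removing its inliers, and repeating yields the discontinuity set from which block corners are then constructed.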
Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen
2016-06-01
High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents caused by human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, and vehicles) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extracting road features for HADMs.
In-flight photogrammetric camera calibration and validation via complementary lidar
NASA Astrophysics Data System (ADS)
Gneeniss, A. S.; Mills, J. P.; Miller, P. E.
2015-02-01
This research assumes lidar as a reference dataset against which in-flight camera system calibration and validation can be performed. The methodology utilises a robust least squares surface matching algorithm to align a dense network of photogrammetric points to the lidar reference surface, allowing for the automatic extraction of so-called lidar control points (LCPs). Adjustment of the photogrammetric data is then repeated using the extracted LCPs in a self-calibrating bundle adjustment with additional parameters. This methodology was tested using two different photogrammetric datasets, from a Microsoft UltraCamX large format camera and an Applanix DSS322 medium format camera. Systematic sensitivity testing explored the influence of the number and weighting of LCPs. For both camera blocks it was found that as the number of control points increases, the accuracy improves regardless of point weighting. The calibration results were compared with those obtained using ground control points, with good agreement found between the two.
Versatile and efficient pore network extraction method using marker-based watershed segmentation
NASA Astrophysics Data System (ADS)
Gostick, Jeff T.
2017-08-01
Obtaining structural information from tomographic images of porous materials is a critical component of porous media research. Extracting pore networks is particularly valuable since it enables pore network modeling simulations which can be useful for a host of tasks from predicting transport properties to simulating performance of entire devices. This work reports an efficient algorithm for extracting networks using only standard image analysis techniques. The algorithm was applied to several standard porous materials ranging from sandstone to fibrous mats, and in all cases agreed very well with established or known values for pore and throat sizes, capillary pressure curves, and permeability. In the case of sandstone, the present algorithm was compared to the network obtained using the current state-of-the-art algorithm, and very good agreement was achieved. Most importantly, the network extracted from an image of fibrous media correctly predicted the anisotropic permeability tensor, demonstrating the critical ability to detect key structural features. The highly efficient algorithm allows extraction on fairly large images of 500³ voxels in just over 200 s. The ability for one algorithm to match materials as varied as sandstone with 20% porosity and fibrous media with 75% porosity is a significant advancement. The source code for this algorithm is provided.
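The core of the reported approach is marker-based watershed segmentation of the pore-space distance transform. A minimal sketch of that idea using standard scipy/scikit-image calls follows; the peak spacing is an assumption, and the published algorithm's peak pruning and merging steps are omitted:

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def extract_pores(im):
    """Marker-based watershed pore partitioning: peaks of the distance
    transform seed one watershed region per pore body.
    `im` is a boolean image, True in the pore space."""
    dt = ndimage.distance_transform_edt(im)
    peaks = peak_local_max(dt, min_distance=5, labels=im)
    markers = np.zeros(im.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    regions = watershed(-dt, markers, mask=im)   # flood from pore centers
    return regions                               # one label per pore
```

Throats are then the shared faces between adjacent labeled regions, from which sizes and connectivity for the pore network model follow.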
Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an
2017-05-01
Minor components (MCs) play an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable work. Based on the concepts of weighted subspace and optimum theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum, attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of the input signal. Using the gradient ascent method and the recursive least squares (RLS) method, two algorithms are developed for multiple MC extraction. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have an advantage in dealing with high-dimensional matrices. Since the weighting matrix does not require an accurate value, this facilitates the system design of the proposed algorithms for practical applications. The speed and computation advantages of the proposed algorithms are verified through simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modified kernel-based nonlinear feature extraction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J.; Perkins, S. J.; Theiler, J. P.
2002-01-01
Feature Extraction (FE) techniques are widely used in many applications to pre-process data in order to reduce the complexity of subsequent processes. A group of kernel-based nonlinear FE (KFE) algorithms has attracted much attention due to their high performance. However, a serious limitation that is inherent in these algorithms -- the maximal number of features extracted by them is limited by the number of classes involved -- dramatically degrades their flexibility. Here we propose a modified version of those KFE algorithms (MKFE). This algorithm is developed from a special form of scatter-matrix, whose rank is not determined by the number of classes involved, and thus breaks the inherent limitation in those KFE algorithms. Experimental results suggest that the MKFE algorithm is especially useful when the training set is small.
Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan
2016-04-22
The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measurement and does not add any mass to the measured object, unlike traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
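The upsampled cross-correlation baseline that the two algorithms are benchmarked against is readily available in scikit-image. The following is a sketch of that baseline only (not the paper's faster refinement algorithms); the upsample factor is an assumption:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def track_displacement(frames, template):
    """Baseline subpixel tracking via upsampled phase cross-correlation
    (the reference method the faster algorithms are compared against)."""
    shifts = []
    for frame in frames:
        shift, _, _ = phase_cross_correlation(template, frame,
                                              upsample_factor=100)
        shifts.append(shift)              # (dy, dx) in subpixel units
    return np.array(shifts)

# A 1000 Hz frame sequence of a vibrating target yields the structure's
# displacement signal; an FFT of shifts[:, 0] then gives modal frequencies.
```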
NASA Astrophysics Data System (ADS)
Khan, Faisal; Enzmann, Frieder; Kersten, Michael
2016-03-01
Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or, equivalently, from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis performed on BH-corrected and uncorrected samples shows that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was used to successfully classify three multi-phase rock core samples of varying complexity.
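The paper supplies a Matlab implementation of the surface-fit step; a hedged numpy rendering of the same idea (plain least squares on a full quadratic in pixel coordinates, with no masking or weighting) might look like this:

```python
import numpy as np

def remove_beam_hardening(slice2d):
    """Best-fit quadratic surface subtraction: fit
    z = a + bx + cy + dx^2 + exy + fy^2 to the slice, keep the residual."""
    ny, nx = slice2d.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x, y, z = xx.ravel(), yy.ravel(), slice2d.ravel()
    A = np.column_stack(
        [np.ones_like(x), x, y, x * x, x * y, y * y]).astype(float)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    surface = (A @ coeffs).reshape(ny, nx)
    return slice2d - surface              # BH-corrected image
```

The residual removes the slowly varying cupping artefact while leaving sharp phase boundaries intact for the subsequent classification step.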
NASA Astrophysics Data System (ADS)
Okyay, U.; Glennie, C. L.; Khan, S.
2017-12-01
Owing to the advent of terrestrial laser scanners (TLS), high-density point cloud data have become increasingly available to the geoscience research community. Research groups have started producing their own point clouds for various applications, gradually shifting their emphasis from obtaining the data towards extracting more meaningful information from the point clouds. Extracting fracture properties from three-dimensional data in a (semi-)automated manner has been an active area of research in the geosciences. Several studies have developed processing algorithms for extracting only planar surfaces. In comparison, (semi-)automated identification of fracture traces at the outcrop scale, which could be used for mapping fracture distribution, has not been investigated as frequently. Understanding the spatial distribution and configuration of natural fractures is of particular importance, as they directly influence fluid flow through the host rock. Surface roughness, typically defined as the deviation of a natural surface from a reference datum, has become an important metric in geoscience research, especially with the increasing density and accuracy of point clouds. In the study presented herein, a surface roughness model was employed to identify fracture traces and their distribution on an ophiolite outcrop in Oman. Surface roughness calculations were performed using orthogonal distance regression over various grid intervals. The results demonstrated that surface roughness can identify outcrop-scale fracture traces from which fracture distribution and density maps can be generated. However, considering outcrop conditions and properties and the purpose of the application, the definition of an adequate grid interval for the surface roughness model and the selection of threshold values for distribution maps are not straightforward and require user intervention and interpretation.
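Per grid cell, the orthogonal-distance-regression roughness reduces to a total-least-squares plane fit, which the SVD gives directly. A minimal sketch for a single point neighborhood (the gridding and thresholding steps discussed above are omitted):

```python
import numpy as np

def plane_roughness(neighborhood):
    """RMS orthogonal distance of an (N, 3) point neighborhood to its
    total-least-squares (orthogonal-distance-regression) plane."""
    c = neighborhood.mean(axis=0)
    # Smallest principal direction of the centered points is the
    # plane normal -- the classical TLS/ODR plane fit.
    _, _, vt = np.linalg.svd(neighborhood - c, full_matrices=False)
    normal = vt[-1]
    d = (neighborhood - c) @ normal        # signed orthogonal distances
    return np.sqrt(np.mean(d ** 2))
```

Cells with anomalously high roughness then flag candidate fracture traces, from which distribution and density maps are built.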
[A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].
Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong
2011-10-01
Due to the problem that the convex cone analysis (CCA) method can only extract a limited number of endmembers from multispectral imagery, this paper proposes a new endmember extraction method using spatially adaptive spectral feature analysis in multispectral remote sensing images, based on spatial clustering and imagery slicing. Firstly, in order to remove spatial and spectral redundancies, the principal component analysis (PCA) algorithm is used to lower the dimensionality of the multispectral data. Secondly, the iterative self-organizing data analysis technique algorithm (ISODATA) is used for image clustering based on the similarity of pixel spectra. Then, through clustering post-processing and the merging of small clusters, the whole image is divided into several blocks (tiles). Lastly, according to the complexity of each image block's landscape and analysis of the scatter diagrams, the number of endmembers can be determined, and an hourglass algorithm extracts the endmembers. An endmember extraction experiment on TM multispectral imagery showed that the method can extract endmember spectra from multispectral imagery effectively. Moreover, the method resolves the limitation on the number of endmembers and improves the accuracy of endmember extraction. The method provides a new way to extract endmembers from multispectral images.
Multispectra CWT-based algorithm (MCWT) in mass spectra for peak extraction.
Hsueh, Huey-Miin; Kuo, Hsun-Chih; Tsai, Chen-An
2008-01-01
An important objective in mass spectrometry (MS) is to identify a set of biomarkers that can potentially distinguish patients between distinct treatments (or conditions) from tens or hundreds of spectra. A common two-step approach involving peak extraction and quantification is employed to identify the features of scientific interest. The selected features are then used for further investigation to understand the underlying biological mechanisms of individual proteins or to develop genomic biomarkers for early diagnosis. However, the use of inadequate or ineffective peak detection and peak alignment algorithms in the peak extraction step may lead to a high rate of false positives, and it is crucial to reduce the false positive rate when detecting biomarkers from tens or hundreds of spectra. Here a new procedure is introduced for feature extraction in mass spectrometry data that extends the continuous wavelet transform-based (CWT-based) algorithm to multiple spectra. The proposed multispectra CWT-based algorithm (MCWT) not only performs peak detection for multiple spectra but also carries out peak alignment at the same time. The authors' MCWT algorithm constructs a reference, which integrates information from multiple raw spectra, for feature extraction. The algorithm is applied to a SELDI-TOF mass spectra data set provided by CAMDA 2006 with known polypeptide m/z positions. This new approach is easy to implement and outperforms the existing peak extraction method from the Bioconductor PROcess package.
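The single-spectrum CWT detection that MCWT generalizes is available in scipy. A small self-contained example on a synthetic spectrum follows; the peak positions, widths, and noise level are all made up for illustration:

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic single spectrum: three Gaussian peaks on a noisy baseline.
mz = np.linspace(1000, 5000, 4000)
spectrum = sum(h * np.exp(-0.5 * ((mz - p) / 8.0) ** 2)
               for h, p in [(5, 1500), (3, 2200), (7, 3100)])
spectrum += 0.2 * np.random.default_rng(0).standard_normal(mz.size)

# CWT-based detection: a peak must persist across a range of wavelet
# widths, which suppresses narrow noise spikes.
peak_idx = find_peaks_cwt(spectrum, widths=np.arange(2, 20))
print(mz[peak_idx])
```

MCWT's contribution is to build one reference from many such spectra so that detection and alignment happen in a single pass.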
Park, Hyunjin; Park, Jun-Sung; Seong, Joon-Kyung; Na, Duk L; Lee, Jong-Min
2012-04-30
Analysis of cortical patterns requires accurate cortical surface registration. Many researchers map the cortical surface onto a unit sphere and perform registration of two images defined on the unit sphere. Here we have developed a novel registration framework for the cortical surface based on spherical thin-plate splines. Small-scale composition of spherical thin-plate splines was used as the geometric interpolant to avoid folding in the geometric transform. Using an automatic algorithm based on anisotropic skeletons, we extracted seven sulcal lines, which we then incorporated as landmark information. Mean curvature was chosen as an additional feature for matching between spherical maps. We employed a two-term cost function to encourage matching of both sulcal lines and the mean curvature between the spherical maps. Application of our registration framework to fifty pairwise registrations of T1-weighted MRI scans resulted in improved registration accuracy, which was computed from sulcal lines. Our registration approach was tested as an additional procedure to improve an existing surface registration algorithm. Our registration framework maintained an accurate registration over the sulcal lines while significantly increasing the cross-correlation of mean curvature between the spherical maps being registered. Copyright © 2012 Elsevier B.V. All rights reserved.
Shang, Yu; Yu, Guoqiang
2014-09-29
Conventional semi-infinite analytical solutions of the correlation diffusion equation may lead to errors when calculating the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in homogeneous tissues with arbitrary geometries for extraction of BFI (i.e., αD_B). The purpose of this study is to extend the capability of the Nth-order linear algorithm to extracting BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract BFIs in different types of tissues simultaneously by utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogeneous solution in a computer model of an adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in the deep brain, we assigned ten levels of αD_B in the brain layer with a step decrement of 10% while keeping αD_B values constant in the other layers. Simulation results demonstrate the accuracy (errors < 3%) of the high-order (N ≥ 5) linear algorithm in extracting BFIs in different tissue layers and rCBF in the deep brain. By contrast, the semi-infinite homogeneous solution resulted in substantial errors in rCBF (34.5% ≤ errors ≤ 60.2%) and in BFIs in the different layers. The Nth-order linear model simplifies data analysis, thus allowing for online data processing and display. A future study will test this linear algorithm in heterogeneous tissues with different levels of blood flow variation and noise.
Automated Recognition of 3D Features in GPIR Images
NASA Technical Reports Server (NTRS)
Park, Han; Stough, Timothy; Fijany, Amir
2007-01-01
A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
A General, Adaptive, Roadmap-Based Algorithm for Protein Motion Computation.
Molloy, Kevin; Shehu, Amarda
2016-03-01
Precious information on protein function can be extracted from a detailed characterization of protein equilibrium dynamics. This remains elusive in wet and dry laboratories, as function-modulating transitions of a protein between functionally-relevant, thermodynamically-stable and meta-stable structural states often span disparate time scales. In this paper we propose a novel, robotics-inspired algorithm that circumvents time-scale challenges by drawing analogies between protein motion and robot motion. The algorithm adapts the popular roadmap-based framework in robot motion computation to handle the more complex protein conformation space and its underlying rugged energy surface. Given known structures representing stable and meta-stable states of a protein, the algorithm yields a time- and energy-prioritized list of transition paths between the structures, with each path represented as a series of conformations. The algorithm balances computational resources between a global search aimed at obtaining a global view of the network of protein conformations and their connectivity and a detailed local search focused on realizing such connections with physically-realistic models. Promising results are presented on a variety of proteins that demonstrate the general utility of the algorithm and its capability to improve the state of the art without employing system-specific insight.
PDF text classification to leverage information extraction from publication reports.
Bui, Duy Duc An; Del Fiol, Guilherme; Jonnalagadda, Siddhartha
2016-06-01
Data extraction from original study reports is a time-consuming, error-prone process in systematic review development. Information extraction (IE) systems have the potential to assist humans in the extraction task; however, the majority of IE systems were not designed to work on Portable Document Format (PDF) documents, an important and common extraction source for systematic reviews. In a PDF document, narrative content is often mixed with publication metadata or semi-structured text, which adds challenges for the underlying natural language processing algorithm. Our goal is to categorize PDF texts for strategic use by IE systems. We used an open-source tool to extract raw texts from a PDF document and developed a text classification algorithm that follows a multi-pass sieve framework to automatically classify PDF text snippets (for brevity, texts) into TITLE, ABSTRACT, BODYTEXT, SEMISTRUCTURE, and METADATA categories. To validate the algorithm, we developed a gold standard of PDF reports that were included in the development of previous systematic reviews by the Cochrane Collaboration. In a two-step procedure, we evaluated (1) classification performance, compared with a machine learning classifier, and (2) the effects of the algorithm on an IE system that extracts clinical outcome mentions. The multi-pass sieve algorithm achieved an accuracy of 92.6%, which was 9.7% (p<0.001) higher than the best performing machine learning classifier, which used a logistic regression algorithm. F-measure improvements were observed in the classification of TITLE (+15.6%), ABSTRACT (+54.2%), BODYTEXT (+3.7%), SEMISTRUCTURE (+34%), and METADATA (+14.2%). In addition, use of the algorithm to filter semi-structured texts and publication metadata improved the performance of the outcome extraction system (F-measure +4.1%, p=0.002). It also reduced the number of sentences to be processed by 44.9% (p<0.001), which corresponds to a processing time reduction of 50% (p=0.005). The rule-based multi-pass sieve framework can be used effectively to categorize texts extracted from PDF documents. Text classification is an important prerequisite step to leverage information extraction from PDF documents. Copyright © 2016 Elsevier Inc. All rights reserved.
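A multi-pass sieve applies ordered tiers of high-precision rules, letting unlabeled snippets fall through to later, lower-precision tiers. The toy sketch below illustrates only the control flow; the regular expressions are invented placeholders, not the paper's rules:

```python
import re

RULES = [                                  # highest-confidence tiers first
    ("METADATA", re.compile(r"doi:|©|received.*accepted", re.I)),
    ("TITLE", re.compile(r"^[A-Z][^.!?]{10,150}$")),
    ("SEMISTRUCTURE", re.compile(r"(\t| {2,}).*\d")),   # column-like text
    ("ABSTRACT", re.compile(r"^abstract\b", re.I)),
]

def classify(text):
    """Each tier claims the snippets it matches; whatever survives every
    tier defaults to BODYTEXT."""
    for label, pattern in RULES:
        if pattern.search(text):
            return label
    return "BODYTEXT"

print(classify("Abstract: Data extraction from reports..."))  # ABSTRACT
```

Ordering the tiers by precision is what makes the cascade outperform a single flat rule set.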
An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.
Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D
2016-05-01
Over the past decades, many studies have been published on the extraction of the non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database, focusing on FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to depend heavily on the extraction techniques and signal-to-noise ratio. In particular, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be avoided. Data, extraction algorithms and evaluation routines were released as part of the fecgsyn toolbox on Physionet under a GNU GPL open-source license. This contribution provides a standard framework for benchmarking and regulatory testing of NI-FECG extraction algorithms.
NASA Astrophysics Data System (ADS)
Hong, Sanghyun; Erdogan, Gurkan; Hedrick, Karl; Borrelli, Francesco
2013-05-01
The estimation of the tyre-road friction coefficient is fundamental for vehicle control systems. Tyre sensors enable friction coefficient estimation based on signals extracted directly from tyres. This paper presents a tyre-road friction coefficient estimation algorithm based on the tyre lateral deflection obtained from lateral acceleration. The lateral acceleration is measured by wireless three-dimensional accelerometers embedded inside the tyres. The proposed algorithm first determines the contact patch using a radial acceleration profile. Then, the portion of the lateral acceleration profile inside the tyre-road contact patch is used to estimate the friction coefficient through a tyre brush model and a simple tyre model. The proposed strategy accounts for the orientation variation of the accelerometer body frame during tyre rotation. The effectiveness and performance of the algorithm are demonstrated through finite element model simulations and experimental tests with small tyre slip angles on different road surface conditions.
Automatic extraction of building boundaries using aerial LiDAR data
NASA Astrophysics Data System (ADS)
Wang, Ruisheng; Hu, Yong; Wu, Huayi; Wang, Jian
2016-01-01
Building extraction is one of the main research topics of the photogrammetry community. This paper presents automatic algorithms for building boundary extraction from aerial LiDAR data. First, by segmenting height information generated from the LiDAR data, the outer boundaries of aboveground objects are expressed as closed chains of oriented edge pixels. Then, building boundaries are distinguished from non-building ones by evaluating their shapes. The candidate building boundaries are reconstructed as rectangles or regular polygons by applying new algorithms, following the hypothesis verification paradigm. These algorithms include constrained searching in Hough space, an enhanced Hough transformation, and a sequential linking technique. The experimental results show that the proposed algorithms successfully extract building boundaries at rates of 97%, 85%, and 92% for three LiDAR datasets with varying scene complexities.
Exploiting spectral content for image segmentation in GPR data
NASA Astrophysics Data System (ADS)
Wang, Patrick K.; Morton, Kenneth D., Jr.; Collins, Leslie M.; Torrione, Peter A.
2011-06-01
Ground-penetrating radar (GPR) sensors provide an effective means for detecting changes in the sub-surface electrical properties of soils, such as changes indicative of landmines or other buried threats. However, most GPR-based pre-screening algorithms only localize target responses along the surface of the earth, and do not provide information regarding an object's position in depth. As a result, feature extraction algorithms are forced to process data from entire cubes of data around pre-screener alarms, which can reduce feature fidelity and hamper performance. In this work, spectral analysis is investigated as a method for locating subsurface anomalies in GPR data. In particular, a 2-D spatial/frequency decomposition is applied to pre-screener flagged GPR B-scans. Analysis of these spatial/frequency regions suggests that aspects (e.g. moments, maxima, mode) of the frequency distribution of GPR energy can be indicative of the presence of target responses. After translating a GPR image to a function of the spatial/frequency distributions at each pixel, several image segmentation approaches can be applied to perform segmentation in this new transformed feature space. To illustrate the efficacy of the approach, a performance comparison between feature processing with and without the image segmentation algorithm is provided.
Localization of Pathology on Complex Architecture Building Surfaces
NASA Astrophysics Data System (ADS)
Sidiropoulos, A. A.; Lakakis, K. N.; Mouza, V. K.
2017-02-01
The technology of 3D laser scanning is considered one of the most common methods for heritage documentation. The point clouds produced provide information of high detail, both geometric and thematic. Various studies have examined techniques for the best exploitation of this information. In this study, an algorithm for localizing pathology, such as cracks and fissures, on complex building surfaces is tested. The algorithm makes use of the points' positions in the point cloud and tries to distinguish them into two groups (patterns): pathology and non-pathology. The extraction of the geometric information used for recognizing the pattern of the points is accomplished via Principal Component Analysis (PCA) in user-specified neighborhoods across the whole point cloud. The implementation of PCA leads to the definition of the normal vector at each point of the cloud. Two tests that operate separately examine both local and global geometric criteria among the points and conclude which of them should be categorized as pathology. The proposed algorithm was tested on parts of the Gazi Evrenos Baths masonry, located in the city of Giannitsa in northern Greece.
3D Simulation Modeling of the Tooth Wear Process.
Dai, Ning; Hu, Jian; Liu, Hao
2015-01-01
Severe tooth wear is the most common non-caries dental disease, and it can seriously affect oral health. Studying the tooth wear process is time-consuming and difficult, and technological tools are frequently lacking. This paper presents a novel method of digital simulation modeling that represents a new way to study tooth wear. First, a feature extraction algorithm is used to obtain anatomical feature points of the tooth without attrition. Second, after the alignment of non-attrition areas, an initial homogeneous surface is generated by means of an RBF (Radial Basis Function) implicit surface and then deformed to the final homogeneous surface by the contraction and bounding algorithm. Finally, bilinear interpolation based on Laplacian coordinates between the tooth with and without attrition is used to inversely reconstruct the sequence of changes of the 3D tooth morphology during the gradual tooth wear process. This method can also be used to generate a process simulation of nonlinear tooth wear by fitting an attrition curve to statistical data on the attrition index in a given region. The effectiveness and efficiency of the attrition simulation algorithm are verified through experimental simulation.
NASA Astrophysics Data System (ADS)
Wang, Fei; Liu, Junyan; Mohummad, Oliullah; Wang, Yang
2018-04-01
In this paper, truncated-correlation photothermal coherence tomography (TC-PCT) was used as a nondestructive inspection technique to evaluate glass-fiber reinforced polymer (GFRP) composite surface cracks. A chirped-pulsed signal combining linear frequency modulation and pulse excitation was proposed as the excitation signal for detecting GFRP composite surface cracks. The basic principle of TC-PCT and the feature extraction algorithm for the thermal wave signal are described. Comparison experiments between lock-in thermography, thermal wave radar imaging, and chirped-pulsed photothermal radar for detecting artificial surface cracks in GFRP were carried out. The experimental results illustrate that chirped-pulsed photothermal radar offers a high signal-to-noise ratio in detecting GFRP composite surface cracks. TC-PCT, as a depth-resolved photothermal imaging modality, was employed to enable three-dimensional visualization of GFRP composite surface cracks. The results show that TC-PCT can effectively evaluate the crack depth in GFRP composites.
NASA Astrophysics Data System (ADS)
Campbell, B. D.; Higgins, S. R.
2008-12-01
Developing a method for bridging the gap between macroscopic and microscopic measurements of reaction kinetics at the mineral-water interface has important implications in geological and chemical fields. Investigating these reactions on the nanometer scale with SPM is often limited by image analysis and data extraction due to the large quantity of data usually obtained in SPM experiments. Here we present a computer algorithm for automated analysis of mineral-water interface reactions. This algorithm automates the analysis of sequential SPM images by identifying the kinetically active surface sites (i.e., step edges), and by tracking the displacement of these sites from image to image. The step edge positions in each image are readily identified and tracked through time by a standard edge detection algorithm followed by statistical analysis on the Hough Transform of the edge-mapped image. By quantifying this displacement as a function of time, the rate of step edge displacement is determined. Furthermore, the total edge length, also determined from analysis of the Hough Transform, combined with the computed step speed, yields the surface area normalized rate of the reaction. The algorithm was applied to a study of the spiral growth of the calcite(104) surface from supersaturated solutions, yielding results almost 20 times faster than performing this analysis by hand, with results being statistically similar for both analysis methods. This advance in analysis of kinetic data from SPM images will facilitate the building of experimental databases on the microscopic kinetics of mineral-water interface reactions.
Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira
2015-09-17
In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of contractile properties of MUs, and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image-processing-based approaches may be useful in S-EMG analysis for extracting different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV of individual MUs, localization and tracking of innervation zones, and study of MU recruitment strategies.
Landsat surface reflectance quality assurance extraction (version 1.7)
Jones, J.W.; Starbuck, M.J.; Jenkerson, Calli B.
2013-01-01
The U.S. Geological Survey (USGS) Land Remote Sensing Program is developing an operational capability to produce Climate Data Records (CDRs) and Essential Climate Variables (ECVs) from the Landsat Archive to support a wide variety of science and resource management activities from regional to global scale. The USGS Earth Resources Observation and Science (EROS) Center is charged with prototyping systems and software to generate these high-level data products. Various USGS Geographic Science Centers are charged with particular ECV algorithm development and (or) selection as well as the evaluation and application demonstration of various USGS CDRs and ECVs. Because it is a foundation for many other ECVs, the first CDR in development is the Landsat Surface Reflectance Product (LSRP). The LSRP incorporates data quality information in a bit-packed structure that is not readily accessible without postprocessing services performed by the user. This document describes two general methods of LSRP quality-data extraction for use in image processing systems. Helpful hints for the installation and use of software originally developed for manipulation of Hierarchical Data Format (HDF) produced through the National Aeronautics and Space Administration (NASA) Earth Observing System are first provided for users who wish to extract quality data into separate HDF files. Next, steps follow to incorporate these extracted data into an image processing system. Finally, an alternative example is illustrated in which the data are extracted within a particular image processing system.
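Once the quality band has been extracted, reading a packed flag is a simple bitwise operation. The sketch below is generic numpy bit unpacking; the bit position shown is purely hypothetical, and the actual flag assignments must be taken from the LSRP product documentation:

```python
import numpy as np

def qa_flag(qa_band, bit):
    """Return a boolean mask for one flag in a bit-packed QA band."""
    return (np.right_shift(qa_band, bit) & 1).astype(bool)

qa = np.array([[0b0010, 0b0000],            # toy 2x2 bit-packed QA values
               [0b0011, 0b0110]], dtype=np.uint16)
cloud = qa_flag(qa, bit=1)                   # hypothetical "cloud" bit
print(cloud)
```

Masks built this way can then be applied directly to the surface reflectance bands within any image processing system.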
Extracting hurricane eye morphology from spaceborne SAR images using morphological analysis
NASA Astrophysics Data System (ADS)
Lee, Isabella K.; Shamsoddini, Ali; Li, Xiaofeng; Trinder, John C.; Li, Zeyu
2016-07-01
Hurricanes are among the most destructive natural disasters globally, so recognizing and extracting their morphology is important for understanding their dynamics. Conventional optical sensors, due to the cloud cover associated with hurricanes, cannot reveal the intense air-sea interaction occurring at the sea surface. In contrast, the unique cloud-penetrating capability of spaceborne synthetic aperture radar (SAR) data and its backscattering signal characteristics enable extraction of the sea surface roughness. SAR images therefore enable the measurement of the size and shape of hurricane eyes, which reveal their evolution and strength. In this study, using six SAR hurricane images, we developed a mathematical morphology method for automatically extracting hurricane eyes from C-band SAR data. Skeleton pruning based on discrete skeleton evolution (DSE) was used to ensure global and local preservation of the hurricane eye shape. This distance-weighted algorithm, applied in a hierarchical structure for extracting the edges of the hurricane eyes, effectively avoids segmentation errors by reducing the redundant skeletons attributed to speckle noise along the edges of the hurricane eye. As a consequence, the skeleton pruning was accomplished without deficiencies in the key hurricane eye skeletons. A morphology-based analysis of the subsequent reconstructions of the hurricane eyes shows a high degree of agreement with the hurricane eye areas derived from reference data based on NOAA manual work.
Design of a backlighting structure for very large-area luminaires
NASA Astrophysics Data System (ADS)
Carraro, L.; Mäyrä, A.; Simonetta, M.; Benetti, G.; Tramonte, A.; Benedetti, M.; Randone, E. M.; Ylisaukko-Oja, A.; Keränen, K.; Facchinetti, T.; Giuliani, G.
2017-02-01
A novel approach to an RGB semiconductor LED-based backlighting system is developed to satisfy the requirements of the Project LUMENTILE, funded by the European Commission, whose aim is to develop a luminous electronic tile foreseen to be manufactured in millions of square meters each year. This unconventionally large area of uniform, high-brightness illumination requires a specific optical design to keep production costs low while maintaining high optical extraction efficiency and a reduced thickness of the structure, as imposed by architectural design constraints. The proposed solution is based on a light-guiding layer illuminated by LEDs in an edge configuration or in a planar arrangement. The light-guiding slab is finished with a reflective top interface and a diffusive or reflective bottom interface/layer. Patterning is used for both the top interface (punctual removal of reflection and generation of light scattering centers) and the bottom layer (using a dark/bright printed pattern). Computer-based optimization algorithms based on ray-tracing are used to find optimal solutions in terms of uniformity of illumination of the top surface and overall light extraction efficiency. Through a closed-loop optimization process that assesses the illumination uniformity of the top surface, the algorithm generates the desired optimized top and bottom patterns, depending on the number of LED sources used, their geometry, and the thickness of the guiding layer. Specific low-cost technologies for realizing the patterning are discussed, with the goal of keeping the production cost of these very large-area luminaires below $100 per square meter.
NASA Astrophysics Data System (ADS)
Huang, Chengjun; Chen, Xiang; Cao, Shuai; Qiu, Bensheng; Zhang, Xu
2017-08-01
Objective. To realize accurate muscle force estimation, a novel framework is proposed in this paper which extracts the input of the prediction model from the appropriate activation area of the skeletal muscle. Approach. Surface electromyographic (sEMG) signals from the biceps brachii muscle during isometric elbow flexion were collected with a high-density (HD) electrode grid (128 channels), and the external force at three contraction levels was measured at the wrist synchronously. The sEMG envelope matrix was factorized by a nonnegative matrix factorization (NMF) algorithm into a matrix of basis vectors, with each column representing an activation pattern, and a matrix of time-varying coefficients. The activation pattern with the highest activation intensity, defined as the sum of the absolute values of the time-varying coefficient curve, was considered the major activation pattern, and its channels with high weighting factors were selected to extract the input activation signal of a force estimation model based on the polynomial fitting technique. Main results. Compared with conventional methods using all channels of the grid, the proposed method significantly improved the quality of the force estimation while reducing the number of electrodes. Significance. The proposed method provides a way to find proper electrode placement for force estimation, which can be further employed in muscle heterogeneity analysis, myoelectric prostheses, and the control of exoskeleton devices.
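The decomposition step uses standard nonnegative matrix factorization. A hedged sketch with scikit-learn on random stand-in data follows; the component count, channel count, and the choice of 16 selected channels are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import NMF

# envelopes: (128 channels x T samples) sEMG envelope matrix (toy data).
rng = np.random.default_rng(0)
envelopes = np.abs(rng.normal(size=(128, 2000)))

model = NMF(n_components=4, init="nndsvd", max_iter=500)
W = model.fit_transform(envelopes)           # 128 x 4 activation patterns
H = model.components_                        # 4 x T time-varying coefficients

# Activation intensity of each pattern = sum of |coefficients|; channels
# weighted most heavily in the top pattern feed the force model.
intensity = np.abs(H).sum(axis=1)
major = np.argmax(intensity)
selected_channels = np.argsort(W[:, major])[-16:]   # 16 is an assumption
```

The envelope signal averaged over `selected_channels` would then serve as the input to the polynomial force model.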
NASA Astrophysics Data System (ADS)
Yan, L.; Roy, D. P.
2013-12-01
The spatial distribution of agricultural fields is a fundamental description of rural landscapes, and the location and extent of fields are important for establishing the area of land utilized for agricultural yield prediction, resource allocation, and economic planning. To date, field objects have not been extracted from satellite data over large areas because of computational constraints and because consistently processed appropriate resolution data have not been available or affordable. We present a fully automated computational methodology to extract agricultural fields from 30 m Web Enabled Landsat Data (WELD) time series, with results for approximately 250,000 square kilometers (eleven 150 x 150 km WELD tiles) encompassing all the major agricultural areas of California. The extracted fields, including rectangular, circular, and irregularly shaped fields, are evaluated by comparison with manually interpreted Landsat field objects. Validation results are presented in terms of standard confusion matrix accuracy measures and also the degree of field object over-segmentation, under-segmentation, fragmentation, and shape distortion. The apparent success of the presented field extraction methodology is due to several factors. First, the use of multi-temporal Landsat data, as opposed to single Landsat acquisitions, which enables crop rotations and inter-annual variability in the state of the vegetation to be accommodated and provides more opportunities for cloud-free, non-missing and atmospherically uncontaminated surface observations. Second, the adoption of an object-based approach, namely the variational region-based geometric active contour method, which enables robust segmentation with only a small number of parameters and requires no training data collection. Third, the use of a watershed algorithm to decompose connected segments belonging to multiple fields into coherent isolated field segments, and a geometry-based algorithm to detect and associate parts of circular fields together. Fourth, masking of non-agricultural vegetation using a recent WELD 30 m percent-tree-cover product and a multi-temporal spectral-angle-mapping-based grass extraction methodology. Implications and recommendations for algorithm refinement and application to decadal conterminous United States WELD data are discussed.
Spatial analysis of fluvial terraces in GRASS GIS accessing R functionality
NASA Astrophysics Data System (ADS)
Józsa, Edina
2017-04-01
Terrace research along the Danube is a major topic of traditional Hungarian geomorphology because of the socio-economic role of terrace surfaces and their importance in paleo-environmental reconstructions. Semi-automated mapping of fluvial landforms from a coherent digital elevation dataset allows objective analysis of hydrogeomorphic characteristics with low time and cost requirements. New results obtained with unified GIS-based algorithms can be integrated with previous findings regarding landscape evolution. The complementary functionality of GRASS GIS and R provides the possibility to develop a flexible terrain-analysis tool for the delineation and quantifiable analysis of terrace remnants. Using R as an intermediate analytical environment and visualisation tool adds great value to the algorithm, while GRASS GIS is capable of handling the large digital elevation datasets and performing the demanding computations to prepare the necessary raster derivatives (Bivand, R.S. et al. 2008). The proposed terrace mapping algorithm is based on the work of Demoulin, A. et al. (2007), but it is further improved in the form of a GRASS GIS script tool accessing R functionality. In the first step the hydrogeomorphic signatures of the given study site are explored and the area is divided along clearly recognizable structural-morphological boundaries. The algorithm then cuts the subregions into parallel sections in the flow direction and determines cells potentially belonging to terrace surfaces based on local slope characteristics and a minimum area size threshold. As a result, an output report is created that contains a histogram of altitudes, a swath profile of the landscape, scatter plots representing the relation of the relative elevations and slope values in the analysed sections, and a final plot showing the longitudinal profile of the river with the determined height ranges of terrace levels. The algorithm also produces a raster map of extracted terrace remnants. From this dataset it is possible to interpolate a new digital elevation model approximating the former terraced valley surface using the Ordinary Kriging method (Troiani, F. and Della Seta, M. 2011). The applicability of the algorithm was tested on the northern foreland of the Gerecse Mountains, an antecedent valley section of the Danube, with terrace remnants expected in 6 to 8 altitude ranges. Methodological issues arising from determining the optimal threshold values were explored using an artificial hillslope model, while the terrace profiles and terrace-top surface rasters generated from the digital elevation model were validated against previous findings of traditional geomorphological surveys. This research was supported by the Human Capacities Grant Management Office and the Hungarian Ministry of Human Capacities in the framework of the NTP-NFTÖ-16 project. References: Bivand, R.S. et al. (2008). Applied Spatial Data Analysis with R. New York: Springer. 378 p. Demoulin, A. et al. (2007). An automated method to extract fluvial terraces from digital elevation models: The Vesdre valley, a case study in eastern Belgium. - Geomorphology 91 (1-2): 51-64. Troiani, F. and Della Seta, M. (2011). Geomorphological response of fluvial and coastal terraces to Quaternary tectonics and climate as revealed by geostatistical topographic analysis. - Earth Surface Processes and Landforms 36: 1193-1208.
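As a rough illustration of the slope-threshold step in GRASS GIS's Python scripting API (map names, the slope cutoff, and the minimum-area value are hypothetical, and an active GRASS session with a DEM raster is assumed):

```python
import grass.script as gs

# Slope raster needed for the local-slope criterion
gs.run_command('r.slope.aspect', elevation='dem', slope='slope_deg')

# Keep cells flat enough to be terrace candidates (cutoff is illustrative)
gs.mapcalc('candidates = if(slope_deg < 12, 1, null())')

# Enforce a minimum area size threshold (value in hectares)
gs.run_command('r.reclass.area', input='candidates',
               output='terrace_remnants', value=0.5, mode='greater')
```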
Shahbeig, Saleh; Pourghassem, Hossein
2013-01-01
Optic disc or optic nerve (ON) head extraction in retinal images has widespread applications in retinal disease diagnosis and human identification in biometric systems. This paper introduces a fast and automatic algorithm for detecting and extracting the ON region accurately from retinal images without the use of blood-vessel information. In this algorithm, to compensate for destructive changes in illumination and to enhance the contrast of the retinal images, we estimate the illumination of the background and apply an adaptive correction function to the curvelet transform coefficients of the retinal images. In other words, we eliminate the confounding factors and pave the way for exact extraction of the ON region. Then, we detect the ON region from the retinal images using morphology operators based on geodesic conversions, by applying a proper adaptive correction function to the reconstructed image's curvelet transform coefficients and a novel powerful criterion. Finally, using local thresholding on the detected area of the retinal images, we extract the ON region. The proposed algorithm is evaluated on available images from the DRIVE and STARE databases. The experimental results indicate that the proposed algorithm obtains accuracy rates of 100% and 97.53% for ON extraction on the DRIVE and STARE databases, respectively.
Analysis and Application of Lineaments Extraction Using GF-1 Satellite Images in Loess Covered
NASA Astrophysics Data System (ADS)
Han, L.; Liu, Z.; Zhao, Z.; Ning, Y.
2018-04-01
Faults, folds and other tectonic features are geologically weak zones that erode into linear landforms, which appear as lineaments on the earth's surface. Lineaments control the distribution of regional formations, groundwater, geothermal resources, etc., so they are an important indicator for evaluating the strength and stability of geological structures. Current approaches mostly rely on visual interpretation and semi-automatic computer extraction, which are both time-consuming and labour-intensive. It is also difficult to guarantee accuracy because of the dependence on expert knowledge and experience and on the computer hardware and software. Therefore, an integrated algorithm is proposed based on GF-1 satellite image data, taking the loess area in the northern part of the Jinlinghe basin as an example. Firstly, the best band combination, 4-3-2, is chosen using the optimum index factor (OIF). Secondly, line edges are highlighted by a Gaussian high-pass filter and tensor voting. Finally, the Hough Transform is used to detect the geologic lineaments. Thematic maps of the geological structure in this area are produced from the extracted lineaments. The experimental results show that, influenced by the northern margin of the Qinling Mountains and the subsiding Weihe Basin, the lineaments are mostly distributed over the terrain lines, mainly in the NW, NE, NNE, and ENE directions. The agreement with the existing regional geological survey provides a reliable basis for analysing tectonic stress trends. The algorithm is practical, robust, and less disturbed by human factors.
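Leaving the tensor-voting stage aside, the high-pass filtering and Hough detection steps can be sketched with OpenCV as below; the file name, kernel size, and thresholds are illustrative assumptions:

```python
import cv2
import numpy as np

img = cv2.imread('gf1_composite.tif', cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Gaussian high-pass: subtract a blurred copy to emphasise linear edges
low = cv2.GaussianBlur(img, (15, 15), 0)
highpass = cv2.subtract(img, low)

edges = cv2.Canny(highpass, 50, 150)

# Probabilistic Hough transform to collect lineament candidates
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=40, maxLineGap=5)
```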
NASA Astrophysics Data System (ADS)
Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien
2007-03-01
The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach were experimentally validated using an MR database of fat-suppressed spoiled gradient recalled images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.
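The Dice similarity coefficient used to score the bone segmentations is straightforward to compute; a minimal NumPy version for binary volumes:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A intersect B| / (|A| + |B|) for boolean volumes."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```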
NASA Astrophysics Data System (ADS)
Tang, Chaoqing; Tian, Gui Yun; Chen, Xiaotian; Wu, Jianbo; Li, Kongjing; Meng, Hongying
2017-12-01
Active thermography provides infrared images that contain sub-surface defect information, while visible images only reveal surface information. Mapping infrared information onto visible images offers more comprehensive visualization for decision-making in rail inspection. However, the common information available for registration is limited because the modalities differ at both the local and global levels. For example, the rail track, which has low temperature contrast, reveals rich details in visible images but turns blurry in the infrared counterparts. This paper proposes a registration algorithm called Edge-Guided Speeded-Up-Robust-Features (EG-SURF) to address this issue. Rather than sequentially integrating local and global information in the matching stage, which suffers from the bucket effect, this algorithm adaptively integrates local and global information into a descriptor to gather more common information before matching. This adaptability consists of two facets: an adaptable weighting factor between local and global information, and an adaptable main direction accuracy. The local information is extracted using SURF while the global information is represented by shape context from edges. Meanwhile, in the shape context generation process, edges are weighted according to local scale and decomposed into bins in a vector decomposition manner to provide a more accurate descriptor. The proposed algorithm is qualitatively and quantitatively validated on an eddy current pulsed thermography scene in the experiments. In comparison with other algorithms, better performance has been achieved.
Three-dimensional digital mapping of the optic nerve head cupping in glaucoma
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Ramirez, Manuel; Morales, Jose
1992-08-01
Visualization of the optic nerve head cupping is clinically achieved by stereoscopic viewing of a fundus image pair of the suspected eye. A novel algorithm for three-dimensional digital surface representation of the optic nerve head, using fusion of a stereo depth map with a linearly stretched intensity image of a stereo fundus image pair, is presented. Prior to depth map acquisition, a number of preprocessing tasks including feature extraction, registration by cepstral analysis, and correction for intensity variations are performed. The depth map is obtained by using a coarse-to-fine strategy for obtaining disparities between corresponding areas. The matching technique used to obtain the translational differences at each step relies on cepstral analysis, with a correlation-like scanning technique in the spatial domain for the finest details. The quantitative and precise representation of the optic nerve head surface topography following this algorithm is not computationally intensive and should provide more useful information than qualitative stereoscopic viewing of the fundus alone as one of the criteria for diagnosis of glaucoma.
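The paper's matching relies on cepstral analysis; the closely related phase-correlation technique sketched below illustrates how an FFT can recover the translational difference between two image patches (a stand-in for illustration, not the authors' exact method):

```python
import numpy as np

def shift_estimate(a: np.ndarray, b: np.ndarray):
    """Estimate the (row, col) translation between two equally sized patches."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above the midpoint around to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```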
A novel blinding digital watermark algorithm based on lab color space
NASA Astrophysics Data System (ADS)
Dong, Bing-feng; Qiu, Yun-jie; Lu, Hong-tao
2010-02-01
A blind digital image watermark algorithm must extract the watermark information without any extra information except the watermarked image itself. However, most current blind watermark algorithms share the same disadvantage: besides the watermarked image, they also need the size and other information about the original image when extracting the watermark. This paper presents an innovative blind color image watermark algorithm based on the Lab color space which does not have the disadvantage mentioned above. The algorithm first marks the watermark region size and position by embedding some regular blocks called anchor points in the image spatial domain, and then embeds the watermark into the image. In doing so, the watermark information can be easily extracted even after cropping and scale changes to the image. Experimental results show that the algorithm is particularly robust against color adjustment and geometric transformation. This algorithm has already been used in a copyright protection project and works very well.
A novel image retrieval algorithm based on PHOG and LSH
NASA Astrophysics Data System (ADS)
Wu, Hongliang; Wu, Weimin; Peng, Jiajin; Zhang, Junyuan
2017-08-01
PHOG can describe the local shape of an image and its spatial relationships. The use of the PHOG algorithm to extract image features has achieved good results in image recognition, retrieval, and other applications. In recent years, the locality sensitive hashing (LSH) algorithm has been superior to traditional algorithms in solving near-nearest-neighbor problems on large-scale data. This paper presents a novel image retrieval algorithm based on PHOG and LSH. First, we use PHOG to extract the feature vector of the image, then use L different LSH hash tables to reduce the dimension of the PHOG feature to index values and map them to different buckets, and finally extract the corresponding values of the images in the bucket for a second-stage retrieval using the Manhattan distance. The algorithm scales to massive image retrieval, ensuring high retrieval accuracy while reducing the time complexity of the retrieval.
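A minimal random-projection LSH sketch over PHOG-style feature vectors; the table count L, hash width, and feature dimension are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
L, bits, dim = 4, 16, 680      # table count, hash width, PHOG length (assumed)
planes = [rng.standard_normal((bits, dim)) for _ in range(L)]

def bucket_keys(x):
    """One integer bucket key per hash table, from sign bits of projections."""
    return [int(''.join('1' if v > 0 else '0' for v in P @ x), 2) for P in planes]

def manhattan(a, b):
    """Second-stage re-ranking metric for candidates sharing a bucket."""
    return np.abs(a - b).sum()
```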
Analysis of atomic force microscopy data for surface characterization using fuzzy logic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Mousa, Amjed, E-mail: aalmousa@vt.edu; Niemann, Darrell L.; Niemann, Devin J.
2011-07-15
In this paper we present a methodology to characterize surface nanostructures of thin films. The methodology identifies and isolates nanostructures using Atomic Force Microscopy (AFM) data and extracts quantitative information, such as their size and shape. The fuzzy logic based methodology relies on a Fuzzy Inference Engine (FIE) to classify the data points as being top, bottom, uphill, or downhill. The resulting data sets are then further processed to extract quantitative information about the nanostructures. In the present work we introduce a mechanism which can consistently distinguish crowded surfaces from those with sparsely distributed structures and present an omni-directional search technique to improve the structural recognition accuracy. In order to demonstrate the effectiveness of our approach we present a case study which uses our approach to quantitatively identify particle sizes of two specimens, each with a unique gold nanoparticle size distribution. Research Highlights: (1) A fuzzy logic analysis technique capable of characterizing AFM images of thin films. (2) The technique is applicable to different surfaces regardless of their densities. (3) The fuzzy logic technique does not require manual adjustment of the algorithm parameters. (4) The technique can quantitatively capture differences between surfaces. (5) This technique yields more realistic structure boundaries compared to other methods.
Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data
NASA Astrophysics Data System (ADS)
Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.
2016-06-01
Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the most important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data to extract ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly needed to handle the large amount of data collected, because the manual process is time-consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification is used on the one hand to identify the upper and lower parts of each building in an urban scene, needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains roofs, roads and ground, used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads and ground; this second phase of classification is likewise based on topological relationships and height variation analysis. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. The proposed approach led to successful classification results for the building, vegetation and road classes.
NASA Astrophysics Data System (ADS)
Bredfeldt, Jeremy S.; Liu, Yuming; Pehlke, Carolyn A.; Conklin, Matthew W.; Szulczewski, Joseph M.; Inman, David R.; Keely, Patricia J.; Nowak, Robert D.; Mackie, Thomas R.; Eliceiri, Kevin W.
2014-01-01
Second-harmonic generation (SHG) imaging can help reveal interactions between collagen fibers and cancer cells. Quantitative analysis of SHG images of collagen fibers is challenged by the heterogeneity of collagen structures and low signal-to-noise ratio often found while imaging collagen in tissue. The role of collagen in breast cancer progression can be assessed post acquisition via enhanced computation. To facilitate this, we have implemented and evaluated four algorithms for extracting fiber information, such as number, length, and curvature, from a variety of SHG images of collagen in breast tissue. The image-processing algorithms included a Gaussian filter, SPIRAL-TV filter, Tubeness filter, and curvelet-denoising filter. Fibers are then extracted using an automated tracking algorithm called fiber extraction (FIRE). We evaluated the algorithm performance by comparing length, angle and position of the automatically extracted fibers with those of manually extracted fibers in twenty-five SHG images of breast cancer. We found that the curvelet-denoising filter followed by FIRE, a process we call CT-FIRE, outperforms the other algorithms under investigation. CT-FIRE was then successfully applied to track collagen fiber shape changes over time in an in vivo mouse model for breast cancer.
NASA Astrophysics Data System (ADS)
Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li
2014-09-01
This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
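SciPy exposes a damped (regularized) LSQR solver that matches the role described here; a minimal sketch in which the PSF matrix `A`, flux vector `b`, and damping weight are placeholders:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

A = sparse_random(10000, 4000, density=1e-3, format='csr')  # sparse PSF matrix
b = np.random.rand(10000)                  # observed 2D flux, flattened

# `damp` adds the Tikhonov-style regularization term to the least squares
result = lsqr(A, b, damp=0.1)
x = result[0]                              # extracted spectrum estimate
```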
[Research on Control System of an Exoskeleton Upper-limb Rehabilitation Robot].
Wang, Lulu; Hu, Xin; Hu, Jie; Fang, Youfang; He, Rongrong; Yu, Hongliu
2016-12-01
In order to help patients with upper-limb dysfunction undergo rehabilitation training, this paper proposes an upper-limb exoskeleton rehabilitation robot with four degrees of freedom (DOF) and realizes two control schemes, i.e., voice control and electromyography control. The hardware and software design of the voice control system was completed based on RSC-4128 chips, which realized speaker-dependent speech recognition. In addition, this study adopted self-made surface electromyogram (sEMG) signal extraction electrodes to collect sEMG signals and realized pattern recognition by processing the sEMG signals, extracting time-domain features and applying a fixed-threshold algorithm. A pulse-width modulation (PWM) algorithm was used to realize speed adjustment of the system. Voice control and electromyography control experiments were then carried out, and the results showed that the mean recognition rates of voice control and electromyography control reached 93.1% and 90.9%, respectively. The results proved the feasibility of the control system. This study is expected to lay a theoretical foundation for the further improvement of the control system of the upper-limb rehabilitation robot.
Yi, Chucai; Tian, Yingli
2012-09-01
In this paper, we propose a novel framework to extract text regions from scene images with complex backgrounds and multiple text appearances. This framework consists of three main steps: boundary clustering (BC), stroke segmentation, and string fragment classification. In BC, we propose a new bigram-color-uniformity-based method to model both text and attachment surface, and cluster edge pixels based on color pairs and spatial positions into boundary layers. Then, stroke segmentation is performed at each boundary layer by color assignment to extract character candidates. We propose two algorithms to combine the structural analysis of text stroke with color assignment and filter out background interferences. Further, we design a robust string fragment classification based on Gabor-based text features. The features are obtained from feature maps of gradient, stroke distribution, and stroke width. The proposed framework of text localization is evaluated on scene images, born-digital images, broadcast video images, and images of handheld objects captured by blind persons. Experimental results on respective datasets demonstrate that the framework outperforms state-of-the-art localization algorithms.
Carroll, John A; Smith, Helen E; Scott, Donia; Cassell, Jackie A
2016-01-01
Background Electronic medical records (EMRs) are revolutionizing health-related research. One key issue for study quality is the accurate identification of patients with the condition of interest. Information in EMRs can be entered as structured codes or unstructured free text. The majority of research studies have used only coded parts of EMRs for case-detection, which may bias findings, miss cases, and reduce study quality. This review examines whether incorporating information from text into case-detection algorithms can improve research quality. Methods A systematic search returned 9659 papers, 67 of which reported on the extraction of information from free text of EMRs with the stated purpose of detecting cases of a named clinical condition. Methods for extracting information from text and the technical accuracy of case-detection algorithms were reviewed. Results Studies mainly used US hospital-based EMRs, and extracted information from text for 41 conditions using keyword searches, rule-based algorithms, and machine learning methods. There was no clear difference in case-detection algorithm accuracy between rule-based and machine learning methods of extraction. Inclusion of information from text resulted in a significant improvement in algorithm sensitivity and area under the receiver operating characteristic in comparison to codes alone (median sensitivity 78% (codes + text) vs 62% (codes), P = .03; median area under the receiver operating characteristic 95% (codes + text) vs 88% (codes), P = .025). Conclusions Text in EMRs is accessible, especially with open source information extraction algorithms, and significantly improves case detection when combined with codes. More harmonization of reporting within EMR studies is needed, particularly standardized reporting of algorithm accuracy metrics like positive predictive value (precision) and sensitivity (recall). PMID:26911811
Algorithm of pulmonary emphysema extraction using thoracic 3D CT images
NASA Astrophysics Data System (ADS)
Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki
2007-03-01
Recently, due to aging populations and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desirable. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images and follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images
NASA Astrophysics Data System (ADS)
Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.
2006-03-01
Recently, due to aging populations and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desirable. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to 100 thoracic 3-D CT images and follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
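Extraction of low attenuation area candidates of this kind typically reduces to thresholding within a lung mask; a minimal sketch, where the -950 HU cutoff is a commonly used value assumed here rather than these papers' parameter:

```python
import numpy as np

def laa_mask(ct_hu: np.ndarray, lung_mask: np.ndarray, cutoff: float = -950.0):
    """Voxels inside the lung below the HU cutoff are LAA candidates."""
    laa = (ct_hu < cutoff) & lung_mask.astype(bool)
    laa_percent = 100.0 * laa.sum() / max(lung_mask.sum(), 1)
    return laa, laa_percent
```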
A novel approach to segmentation and measurement of medical image using level set methods.
Chen, Yao-Tien
2017-06-01
The study proposes a novel approach for segmentation and visualization, with value-added surface area and volume measurements, for brain medical image analysis. The proposed method contains edge detection and Bayesian based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or the whole brain). Two extensions based on edge detection and the Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of the medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated by the techniques of linear algebra and surface integration. Experimental results are reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
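For the surface extraction and measurement stage, scikit-image offers a marching cubes implementation plus a mesh surface-area helper; a minimal sketch that stands in for, rather than reproduces, the paper's modified algorithm (the file name and iso-level are placeholders):

```python
import numpy as np
from skimage import measure

volume = np.load('segmented_brain_object.npy')   # binary 3D mask (hypothetical)

verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
surface_area = measure.mesh_surface_area(verts, faces)   # in voxel units
object_volume = volume.sum()       # voxel count; multiply by the voxel volume
```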
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diegert, C.; Sanders, J.A.; Orrison, W.W. Jr.
1992-12-31
Researchers working with MR observations generally agree that far more information is available in a volume (3D) observation than is considered for diagnosis. The key to the new alignment method is basing it on available information on surfaces. Using the skin surface is effective: a robust algorithm can reliably extract this surface from almost any scan of the head, and a human operator's exquisite sensitivity to facial features allows him to manually align skin surfaces with precision. Following the definitions, we report on a preliminary experiment in which we align three MR observations taken during a single MR examination, each weighting arterial, venous, and tissue features. When accurately aligned, a neurosurgeon can use these features as anatomical landmarks for planning and executing interventional procedures.
The comparison and analysis of extracting video key frame
NASA Astrophysics Data System (ADS)
Ouyang, S. Z.; Zhong, L.; Luo, R. Q.
2018-05-01
Video key frame extraction is an important part of large-scale data processing. Building on previous work in key frame extraction, we summarize four important key frame extraction algorithms; these methods largely work by comparing the differences between pairs of frames, and if the difference exceeds a threshold value, the corresponding frames are taken as distinct key frames. We then propose a key frame extraction method based on mutual information: information entropy is introduced, appropriate threshold values are selected to form initial classes, and frames with similar mean mutual information are finally taken as candidate key frames. In this paper, these algorithms are used to extract key frames from tunnel traffic videos. An analysis of the experimental results and a comparison of the pros and cons of these algorithms then provide a basis for practical applications.
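A minimal frame-difference key-frame selector of the kind the compared methods build on; the video file name and the mean-difference threshold are assumptions:

```python
import cv2

cap = cv2.VideoCapture('tunnel_traffic.mp4')   # hypothetical input
keyframes, prev = [], None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Keep the frame when the mean absolute difference exceeds the threshold
    if prev is None or cv2.absdiff(gray, prev).mean() > 12.0:
        keyframes.append(frame)
    prev = gray
cap.release()
```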
NASA Astrophysics Data System (ADS)
Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun
2014-01-01
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications for evaluation: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the requirements of on-board real-time feature extraction.
a Hadoop-Based Algorithm of Generating dem Grid from Point Cloud Data
NASA Astrophysics Data System (ADS)
Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.
2015-04-01
Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information of terrain and surface objects within a short time, and from it a Digital Elevation Model (DEM) of high quality can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms so as to distinguish terrain points from other points, followed by a procedure that interpolates the selected points to turn them into DEM data. The whole procedure takes a long time and huge computing resources due to the high density, which has been the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner was utilized as the original data to generate a DEM by a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is big enough, but the multi-node Hadoop implementation achieves a higher performance-cost ratio when the point set is of vast quantities.
Automated extraction and classification of time-frequency contours in humpback vocalizations.
Ou, Hui; Au, Whitlow W L; Zurk, Lisa M; Lammers, Marc O
2013-01-01
A time-frequency contour extraction and classification algorithm was created to analyze humpback whale vocalizations. The algorithm automatically extracted contours of whale vocalization units by searching for gray-level discontinuities in the spectrogram images. The unit-to-unit similarity was quantified by cross-correlating the contour lines. A library of distinctive humpback units was then generated by applying an unsupervised, cluster-based learning algorithm. The purpose of this study was to provide a fast and automated feature selection tool to describe the vocal signatures of animal groups. This approach could benefit a variety of applications such as species description, identification, and the evolution of song structures. The algorithm was tested on humpback whale song data recorded at various locations in Hawaii from 2002 to 2003. Results presented in this paper showed a low probability of false alarm (0%-4%) under noisy environments with small boats and snapping shrimp. The classification algorithm was tested on a controlled set of 30 units forming six unit types, and all the units were correctly classified. In a case study on humpback data collected in the Auau Channel, Hawaii, in 2002, the algorithm extracted 951 units, which were classified into 12 distinctive types.
Zhang, Baolin; Tong, Xinglin; Hu, Pan; Guo, Qian; Zheng, Zhiyuan; Zhou, Chaoran
2016-12-26
Optical fiber Fabry-Perot (F-P) sensors have been used in various on-line monitoring applications for physical parameters such as acoustics, temperature and pressure. In this paper, a wavelet phase extracting demodulation algorithm for optical fiber F-P sensing is first proposed. In this demodulation algorithm, the search range of the scale factor is determined by an estimated cavity length obtained by a fast Fourier transform (FFT) algorithm. The phase information of each point on the optical interference spectrum can be directly extracted through the continuous complex wavelet transform without de-noising, and the cavity length of the optical fiber F-P sensor is calculated from the slope of the fitted phase curve. Theoretical analysis and experimental results show that this algorithm can greatly reduce the amount of computation and improve demodulation speed and accuracy.
GPU-accelerated phase extraction algorithm for interferograms: a real-time application
NASA Astrophysics Data System (ADS)
Zhu, Xiaoqiang; Wu, Yongqian; Liu, Fengwei
2016-11-01
Optical testing, having the merits of non-destructiveness and high sensitivity, provides a vital guideline for optical manufacturing. But the testing process is often computationally intensive and expensive, usually taking up to a few seconds, which is unacceptable for dynamic testing. In this paper, a GPU-accelerated phase extraction algorithm is proposed, which is based on the advanced iterative algorithm. The accelerated algorithm can extract the correct phase distribution from thirteen 1024x1024 fringe patterns with arbitrary phase shifts in 233 milliseconds on average using an NVIDIA Quadro 4000 graphics card, a 12.7x speedup over the same algorithm executed on the CPU and a 6.6x speedup over the Matlab implementation on a DWANING W5801 workstation. The performance improvement can fulfill the demands of computational accuracy and real-time application.
A preliminary study on ice shape tracing with a laser light sheet
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.; Vargas, Mario; Oldenburg, John R.
1993-01-01
Preliminary work towards the development of an automated method of measuring the shape of ice forming on an airfoil during wind tunnel tests has been completed. A thin sheet of light illuminated the front surfaces of rime, glaze, and mixed ice shapes and a solid-state camera recorded images of each. A maximum intensity algorithm extracted the profiles of the ice shapes and the results were compared to hand tracings. Very good general agreement was found in each case.
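The maximum intensity step reduces to a per-column argmax over the light-sheet image; a minimal sketch:

```python
import numpy as np

def ice_profile(img: np.ndarray) -> np.ndarray:
    """img: 2D grayscale frame; returns the row index of the brightest
    pixel in each column, tracing the illuminated ice surface."""
    return np.argmax(img, axis=0)
```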
Auzias, G; Brun, L; Deruelle, C; Coulon, O
2015-05-01
Recent interest has been growing concerning points of maximum depth within folds, the sulcal pits, that can be used as reliable cortical landmarks. These remarkable points on the cortical surface are defined algorithmically as the outcome of an automatic extraction procedure. The influence of several crucial parameters of the reference technique (Im et al., 2010) has not been evaluated extensively, and no optimization procedure has been proposed so far. Designing an appropriate optimization framework for these parameters is mandatory to guarantee the reproducibility of results across studies and to ensure the feasibility of sulcal pit extraction and analysis on large cohorts. In this work, we propose a framework specifically dedicated to the optimization of the parameters of the method. This optimization framework relies on new measures for better quantifying the reproducibility of the number of sulcal pits per region across individuals, in line with the assumptions of one-to-one correspondence of sulcal roots across individuals which is an explicit aspect of the sulcal roots model (Régis et al., 2005). Our procedure benefits from a combination of improvements, including the use of a convenient sulcal depth estimation and is methodologically sound. Our experiments on two different groups of individuals, with a total of 137 subjects, show an increased reliability across subjects in deeper sulcal pits, as compared to the previous approach, and cover the entire cortical surface, including shallower and more variable folds that were not considered before. The effectiveness of our method ensures the feasibility of a systematic study of sulcal pits on large cohorts. On top of these methodological advances, we quantify the relationship between the reproducibility of the number of sulcal pits per region across individuals and their respective depth and demonstrate the relatively high reproducibility of several pits corresponding to shallower folds. Finally, we report new results regarding the local pit asymmetry, providing evidence that the algorithmic and conceptual approach defended here may contribute to better understanding of the key role of sulcal pits in neuroanatomy. Copyright © 2015 Elsevier Inc. All rights reserved.
Delhaye, Benoit P; Schluter, Erik W; Bensmaia, Sliman J
2016-01-01
Efforts are underway to restore sensorimotor function in amputees and tetraplegic patients using anthropomorphic robotic hands. For this approach to be clinically viable, sensory signals from the hand must be relayed back to the patient. To convey tactile feedback necessary for object manipulation, behaviorally relevant information must be extracted in real time from the output of sensors on the prosthesis. In the present study, we recorded the sensor output from a state-of-the-art bionic finger during the presentation of different tactile stimuli, including punctate indentations and scanned textures. Furthermore, the parameters of stimulus delivery (location, speed, direction, indentation depth, and surface texture) were systematically varied. We developed simple decoders to extract behaviorally relevant variables from the sensor output and assessed the degree to which these algorithms could reliably extract these different types of sensory information across different conditions of stimulus delivery. We then compared the performance of the decoders to that of humans in analogous psychophysical experiments. We show that straightforward decoders can extract behaviorally relevant features accurately from the sensor output and most of them outperform humans.
Automatic extraction of planetary image features
NASA Technical Reports Server (NTRS)
LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)
2013-01-01
A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
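A sketch of watershed segmentation applied on top of Canny edges with scikit-image, in the spirit of the rock-extraction embodiment; the seeding strategy and file name are assumptions:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, segmentation

img = np.load('lunar_patch.npy')           # 2D grayscale patch (hypothetical)
edges = feature.canny(img, sigma=2.0)

# Seed regions from the distance transform of the non-edge area, then flood
distance = ndi.distance_transform_edt(~edges)
markers, _ = ndi.label(distance > 0.7 * distance.max())
labels = segmentation.watershed(-distance, markers, mask=~edges)
```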
NASA Astrophysics Data System (ADS)
Bay, Annick; Mayer, Alexandre
2014-09-01
The efficiency of light-emitting diodes (LED) has increased significantly over the past few years, but the overall efficiency is still limited by total internal reflections due to the high dielectric-constant contrast between the incident and emergent media. The bioluminescent organ of fireflies gave incentive for light-extraction enhancement studies. A specific factory-roof shaped structure was shown, by means of light-propagation simulations and measurements, to enhance light extraction significantly. In order to achieve a similar effect for light-emitting diodes, the structure needs to be adapted to the specific set-up of LEDs. In this context simulations were carried out to determine the best geometrical parameters. In the present work, the search for a geometry that maximizes the extraction of light has been conducted by using a genetic algorithm. The idealized structure considered previously was generalized to a broader variety of shapes. The genetic algorithm makes it possible to search simultaneously over a wider range of parameters. It is also significantly less time-consuming than the previous approach that was based on a systematic scan on parameters. The results of the genetic algorithm show that (1) the calculations can be performed in a smaller amount of time and (2) the light extraction can be enhanced even more significantly by using optimal parameters determined by the genetic algorithm for the generalized structure. The combination of the genetic algorithm with the Rigorous Coupled Waves Analysis method constitutes a strong simulation tool, which provides us with adapted designs for enhancing light extraction from light-emitting diodes.
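A generic genetic-algorithm loop of the kind used for this geometry search; the fitness function below is a placeholder for the RCWA-evaluated light-extraction efficiency, and the population size, crossover scheme, and mutation rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(params):                        # placeholder for the RCWA-evaluated
    return -np.sum((params - 0.3) ** 2)     # light-extraction efficiency

pop = rng.uniform(0, 1, size=(40, 5))       # 40 candidate geometries, 5 parameters
for generation in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]               # selection
    children = parents[rng.integers(0, 20, size=40)].copy()
    mask = rng.random(children.shape) < 0.5               # uniform crossover
    children[mask] = parents[rng.integers(0, 20, size=40)][mask]
    children += rng.normal(0, 0.02, size=children.shape)  # mutation
    pop = np.clip(children, 0, 1)

best = pop[np.argmax([fitness(p) for p in pop])]
```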
Highly Scalable Matching Pursuit Signal Decomposition Algorithm
NASA Technical Reports Server (NTRS)
Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.
2009-01-01
Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce considerable performance gains while extracting only slightly less energy than the standard algorithm. When the utmost accuracy must be achieved, the modified algorithm extracts atoms more conservatively but still exhibits computational gains over classical MPD. The MPD++ algorithm was demonstrated using an over-complete dictionary on real-life data. Computational times were reduced by factors of 1.9 and 44 for the emphases of accuracy and performance, respectively. The modified algorithm extracted similar amounts of energy compared to classical MPD. The degree of improvement in computational time depends on the complexity of the data, the initialization parameters, and the breadth of the dictionary. The results of the research confirm that the three modifications successfully improved the scalability and computational efficiency of the MPD algorithm. Correlation Thresholding decreased the time complexity by reducing the dictionary size. Multiple Atom Extraction also reduced the time complexity by decreasing the number of iterations required for a stopping criterion to be reached. The Coarse-Fine Grids technique enabled complicated atoms with numerous variable parameters to be effectively represented in the dictionary. Due to the nature of the three proposed modifications, they are capable of being stacked and have cumulative effects on the reduction of the time complexity.
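Classical matching pursuit with a correlation threshold for pruning weak atoms can be sketched in a few lines; this illustrates the idea behind Correlation Thresholding, not the MPD++ implementation itself:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=50, corr_thresh=0.05):
    """dictionary: (n_atoms, n_samples) array with unit-norm rows."""
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_iter):
        corr = dictionary @ residual          # cross-correlation with residual
        k = int(np.argmax(np.abs(corr)))
        if np.abs(corr[k]) < corr_thresh * np.linalg.norm(residual):
            break                             # remaining atoms are insignificant
        atoms.append(k)
        coeffs.append(corr[k])
        residual -= corr[k] * dictionary[k]   # subtract the best-fit atom
    return atoms, coeffs, residual
```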
Feature detection on 3D images of dental imprints
NASA Astrophysics Data System (ADS)
Mokhtari, Marielle; Laurendeau, Denis
1994-09-01
A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The position of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
Lee, Chia-Yen; Wang, Hao-Jen; Lai, Jhih-Hao; Chang, Yeun-Chung; Huang, Chiun-Sheng
2017-01-01
Long-term comparisons of infrared images can facilitate the assessment of breast cancer tissue growth and early tumor detection, for which longitudinal infrared image registration is a necessary step. However, it is hard to keep markers attached to a body surface for weeks, and rather difficult to detect anatomic fiducial markers and match them in the infrared images during the registration process. The proposed automatic longitudinal infrared registration algorithm develops an automatic vascular intersection detection method and establishes feature descriptors by shape context to achieve robust matching, as well as to obtain control points for the deformation model. In addition, a competitive winner-guided mechanism is developed for optimal correspondence. The proposed algorithm is evaluated in two ways. Results show that the algorithm quickly leads to accurate image registration and that its effectiveness is superior to manual registration, with a mean error of 0.91 pixels. These findings demonstrate that the proposed registration algorithm is reasonably accurate and provides a novel method of extracting a greater amount of useful data from infrared images. PMID:28145474
A compressed sensing method with analytical results for lidar feature classification
NASA Astrophysics Data System (ADS)
Allen, Josef D.; Yuan, Jiangbo; Liu, Xiuwen; Rahmes, Mark
2011-04-01
We present an innovative way to autonomously classify LiDAR points into bare earth, building, vegetation, and other categories. One desirable product of LiDAR data is the automatic classification of the points in the scene. Our algorithm automatically classifies scene points with compressed sensing methods, via Orthogonal Matching Pursuit algorithms utilizing a generalized K-Means clustering algorithm, to extract buildings and foliage from Digital Surface Models (DSM). This technology reduces manual editing while being cost effective for large-scale automated global scene modeling. Quantitative analyses are provided using Receiver Operating Characteristic (ROC) curves to show the probability of detection and false alarm for building vs. vegetation classification. Histograms are shown with sample-size metrics. Our inpainting algorithms then fill the voids where buildings and vegetation were removed, utilizing Computational Fluid Dynamics (CFD) techniques and Partial Differential Equations (PDE) to create an accurate Digital Terrain Model (DTM) [6]. Inpainting preserves building height contour consistency and the edge sharpness of identified inpainted regions. Qualitative results illustrate other benefits such as terrain inpainting's unique ability to minimize or eliminate undesirable terrain data artifacts.
Kupas, Katrin; Ultsch, Alfred; Klebe, Gerhard
2008-05-15
A new method to discover similar substructures in protein binding pockets, independently of sequence and folding patterns or secondary structure elements, is introduced. The solvent-accessible surface of a binding pocket, automatically detected as a depression on the protein surface, is divided into a set of surface patches. Each surface patch is characterized by its shape as well as by its physicochemical characteristics. Wavelets defined on surfaces are used for the description of the shape, as they have the great advantage of allowing a comparison at different resolutions. The number of coefficients to describe the wavelets can be chosen with respect to the size of the considered data set. The physicochemical characteristics of the patches are described by the assignment of the exposed amino acid residues to one or more of five different properties determinant for molecular recognition. A self-organizing neural network is used to project the high-dimensional feature vectors onto a two-dimensional layer of neurons, called a map. To find similarities between the binding pockets, in both geometrical and physicochemical features, a clustering of the projected feature vectors is performed using an automatic distance- and density-based clustering algorithm. The method was validated with a small training data set of 109 binding cavities originating from a set of enzymes covering 12 different EC numbers. A second test data set of 1378 binding cavities, extracted from enzymes of 13 different EC numbers, was then used to prove the discriminating power of the algorithm and to demonstrate its applicability to large scale analyses. In all cases, members of the data set with the same EC number were placed into coherent regions on the map, with small distances between them. Different EC numbers are separated by large distances between the feature vectors. A third data set comprising three subfamilies of endopeptidases is used to demonstrate the ability of the algorithm to detect similar substructures between functionally related active sites. The algorithm can also be used to predict the function of novel proteins not considered in the training data set. 2007 Wiley-Liss, Inc.
Lefranc, Sandrine; Roca, Pauline; Perrot, Matthieu; Poupon, Cyril; Le Bihan, Denis; Mangin, Jean-François; Rivière, Denis
2016-05-01
Segregating the human cortex into distinct areas based on structural connectivity criteria is of widespread interest in neuroscience. This paper presents a groupwise connectivity-based parcellation framework for the whole cortical surface using a new high quality diffusion dataset of 79 healthy subjects. Our approach performs gyrus by gyrus to parcellate the whole human cortex. The main originality of the method is to compress for each gyrus the connectivity profiles used for the clustering without any anatomical prior information. This step takes into account the interindividual cortical and connectivity variability. To this end, we consider intersubject high density connectivity areas extracted using a surface-based watershed algorithm. A wide validation study has led to a fully automatic pipeline which is robust to variations in data preprocessing (tracking type, cortical mesh characteristics and boundaries of initial gyri), data characteristics (including number of subjects), and the main algorithmic parameters. A remarkable reproducibility is achieved in parcellation results for the whole cortex, leading to clear and stable cortical patterns. This reproducibility has been tested across non-overlapping subgroups and the validation is presented mainly on the pre- and postcentral gyri. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gupta, Mousumi; Chatterjee, Somenath
2018-04-01
Surface texture is an important issue in understanding the nature (crests and troughs) of surfaces. Atomic force microscopy (AFM) imaging is a key analysis for surface topography. However, at the nano-scale, the nature (i.e., deflections or cracks) as well as the quantification (i.e., height or depth) of deposited layers is essential information for materials scientists. In this paper, a gradient-based K-means algorithm is used to differentiate the layered surfaces depending on their color contrast in the as-obtained AFM images. A transformation using wavelet decomposition is applied to extract information about deflections or cracks on the material surfaces from the same images. Z-axis depth analysis of the wavelet coefficients provides information about cracks present in the material. Using the above methods, the corresponding surface information for the material is obtained. In addition, a Gaussian filter is applied to remove unwanted lines introduced during AFM scanning. A few known samples are taken as input, and the validity of the above approaches is shown.
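A minimal 2D wavelet-decomposition sketch with PyWavelets standing in for the z-axis depth analysis; the wavelet choice, level, and file name are assumptions:

```python
import numpy as np
import pywt

afm = np.load('afm_height_map.npy')        # 2D height data (hypothetical)

# wavedec2 returns [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)], coarsest first
coeffs = pywt.wavedec2(afm, wavelet='db2', level=2)
cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = coeffs

# Large fine-scale detail coefficients flag abrupt z-axis steps (possible cracks)
crack_score = np.abs(cD1).max()
```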
Reconstruction of 3d Models from Point Clouds with Hybrid Representation
NASA Astrophysics Data System (ADS)
Hu, P.; Dong, Z.; Yuan, P.; Liang, F.; Yang, B.
2018-05-01
The three-dimensional (3D) reconstruction of urban buildings from point clouds has long been an active topic in applications related to human activities. However, because building structures differ significantly in complexity, 3D reconstruction remains a challenging issue, especially for freeform surfaces. In this paper, we present a new reconstruction algorithm that represents the 3D models of buildings as a combination of regular structures and irregular surfaces, where the regular structures are parameterized plane primitives and the irregular surfaces are expressed as meshes. The extraction of irregular surfaces starts with an over-segmentation of the unstructured point data; a region growing approach based on the adjacency graph of super-voxels is then applied to collapse these super-voxels, and the freeform surfaces are clustered from the voxels filtered by a thickness threshold. To extract the regular planar primitives, the remaining voxels with larger flatness are further divided into multiscale super-voxels as basic units, and the final segmented planes are enriched and refined in a mutually reinforcing manner within the framework of a global energy optimization. We implemented the proposed algorithms and tested them mainly on two point clouds that differ in point density and urban character, and experimental results on complex building structures illustrate the efficacy of the proposed framework.
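The paper's plane-primitive extraction is driven by a global energy optimization; as a simpler stand-in that conveys the idea of separating regular planar structure from the freeform remainder, the sketch below fits a single plane to a 3D point set with plain RANSAC. All names and thresholds are assumptions.

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.05, seed=0):
    """Fit one plane to an (n, 3) point set by RANSAC.
    Plane model: normal . p + d = 0; returns ((normal, d), inlier mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < tol   # distance-to-plane test
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

Repeatedly extracting the best plane and removing its inliers yields a crude set of planar primitives; what remains would be handled as mesh-represented freeform surface.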
A novel retinal vessel extraction algorithm based on matched filtering and gradient vector flow
NASA Astrophysics Data System (ADS)
Yu, Lei; Xia, Mingliang; Xuan, Li
2013-10-01
The microvasculature network of the retina plays an important role in the study and diagnosis of retinal diseases (age-related macular degeneration and diabetic retinopathy, for example). Although it is possible to noninvasively acquire high-resolution retinal images with modern retinal imaging technologies, non-uniform illumination, the low contrast of thin vessels, and background noise all make diagnosis difficult. In this paper, we introduce a novel retinal vessel extraction algorithm based on gradient vector flow and matched filtering to segment retinal vessels at different likelihoods. Firstly, we use an isotropic Gaussian kernel and adaptive histogram equalization to smooth and enhance the retinal images, respectively. Secondly, a multi-scale matched filtering method is adopted to extract the retinal vessels. Then, the gradient vector flow algorithm is introduced to locate the edges of the retinal vessels. Finally, we combine the results of the matched filtering method and the gradient vector flow algorithm to extract the vessels at different likelihood levels. The experiments demonstrate that our algorithm is efficient and that the intensities of the vessel images accurately represent the likelihood of the vessels.
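A hedged sketch of the matched-filtering stage only: vessels have an approximately Gaussian cross-section, so a zero-mean Gaussian-profile kernel rotated over several orientations gives a vessel likelihood map when the maximum response is taken. Kernel sizes, angle counts, and function names are illustrative; the gradient vector flow stage is not shown.

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def matched_filter_bank(sigma=2.0, length=9, n_angles=12):
    """Gaussian-profile matched-filter kernels at several orientations
    (vessels appear as dark ridges with a Gaussian cross-section)."""
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    profile = -np.exp(-x**2 / (2 * sigma**2))   # inverted Gaussian cross-section
    profile -= profile.mean()                    # zero mean suppresses background
    kernel = np.tile(profile, (length, 1))       # extend along the vessel direction
    return [rotate(kernel, ang, reshape=True, order=1)
            for ang in np.linspace(0, 180, n_angles, endpoint=False)]

def vessel_response(image, kernels):
    """Maximum response over all orientations approximates vessel likelihood."""
    return np.max([convolve(image.astype(float), k) for k in kernels], axis=0)
```

Running `vessel_response` at several `sigma` values and combining the maps corresponds to the multi-scale aspect described in the abstract.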
Man-Made Object Extraction from Remote Sensing Imagery by Graph-Based Manifold Ranking
NASA Astrophysics Data System (ADS)
He, Y.; Wang, X.; Hu, X. Y.; Liu, S. H.
2018-04-01
The automatic extraction of man-made objects from remote sensing imagery is useful in many applications. This paper proposes an algorithm for extracting man-made objects automatically by integrating a graph model with the manifold ranking algorithm. Initially, we estimate an a priori value for the man-made objects using symmetry and contrast features. The graph model is established to represent the spatial relationships among pre-segmented superpixels, which are used as the graph nodes. Multiple characteristics, namely colour, texture and main direction, are used to compute the weights of the adjacent nodes. Manifold ranking effectively explores the relationships among all the nodes in the feature space as well as the initial query assignment; thus, it is applied to generate a ranking map, which indicates the scores of the man-made objects. The man-made objects are then segmented on the basis of the ranking map. Two typical segmentation algorithms are compared with the proposed algorithm. Experimental results show that the proposed algorithm can extract man-made objects with a high recognition rate and a low omission rate.
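Manifold ranking has a well-known closed form, f = (I − αS)⁻¹ y, with S the symmetrically normalized affinity matrix and y the initial query scores; the sketch below shows that computation on a superpixel graph. The affinity construction itself (colour, texture, and main-direction weights) is omitted, and the names are illustrative.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """Closed-form manifold ranking on a superpixel graph.
    W: symmetric (n, n) affinity matrix; y: (n,) initial query scores."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt                  # normalized affinity
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)  # ranking scores
```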
Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao
2017-01-01
To address the problem of image texture feature extraction, a direction measure statistic based on the directionality of image texture is constructed, and a new method of texture feature extraction, based on the fusion of the direction measure and the gray level co-occurrence matrix (GLCM), is proposed in this paper. This method applies the GLCM to extract the texture feature values of an image and integrates the weight factor introduced by the direction measure to obtain the final texture feature of the image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and GLCM fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved better image recognition, and the classification accuracy based on this method was significantly improved. PMID:28640181
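A minimal sketch of the GLCM feature-value computation using scikit-image; the fusion with the direction-measure weight factor is not shown, and the chosen distances, angles, and properties are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in skimage < 0.19

def glcm_features(patch, distances=(1,), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """GLCM texture statistics for one 2-D uint8 image patch."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    # average each property over the distance/angle combinations
    return np.array([graycoprops(glcm, p).mean() for p in props])
```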
Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS)
NASA Technical Reports Server (NTRS)
Masek, Jeffrey G.
2006-01-01
The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) project is creating a record of forest disturbance and regrowth for North America from the Landsat satellite record, in support of carbon modeling activities. LEDAPS relies on the decadal Landsat GeoCover data set supplemented by dense image time series for selected locations. Imagery is first atmospherically corrected to surface reflectance, and then change detection algorithms are used to extract disturbance area, type, and frequency. Reuse of the MODIS Land processing system (MODAPS) architecture allows rapid throughput of over 2200 MSS, TM, and ETM+ scenes. Initial ("Beta") surface reflectance products are currently available for testing, and initial continental disturbance products will be available by the middle of 2006.
Frequency domain surface EMG sensor fusion for estimating finger forces.
Potluri, Chandrasekhar; Kumar, Parmod; Anugolu, Madhavi; Urfer, Alex; Chiu, Steve; Naidu, D; Schoen, Marco P
2010-01-01
Extracting or estimating skeletal hand/finger forces using surface electromyographic (sEMG) signals poses many challenges due to cross-talk, noise, and temporally and spatially modulated signal characteristics. Conventional sEMG measurements are based on single-sensor data. In this paper, array sensors are used along with a proposed sensor fusion scheme that results in a simple Multi-Input-Single-Output (MISO) transfer function. Experimental data are used along with system identification to find this MISO system. A Genetic Algorithm (GA) approach is employed to optimize the characteristics of the MISO system. The proposed fusion-based approach is tested experimentally and indicates improvement in finger/hand force estimation.
[Identification of Pummelo Cultivars Based on Hyperspectral Imaging Technology].
Li, Xun-lan; Yi, Shi-lai; He, Shao-lan; Lü, Qiang; Xie, Rang-jin; Zheng, Yong-qiang; Deng, Lie
2015-09-01
Existing methods for the identification of pummelo cultivars are usually time-consuming and costly, and are therefore inconvenient in cases where rapid identification is needed. This research aimed to identify different pummelo cultivars by hyperspectral imaging technology, which enables rapid and highly sensitive measurement. A total of 240 leaf samples, 60 for each of the four cultivars, were investigated. Samples were divided into a calibration set (48 samples of each cultivar) and a validation set (12 samples of each cultivar) by a Kennard-Stone-based algorithm. Hyperspectral images of both the adaxial and abaxial surfaces of each leaf were obtained and segmented into a region of interest (ROI) using a simple threshold. Spectra of the leaf samples were extracted from the ROI. To remove obvious noise from the spectra, only the data in the spectral range 400~1000 nm were used for analysis. Multiplicative scatter correction (MSC) and standard normal variate (SNV) were utilized for data preprocessing. Principal component analysis (PCA) was used to extract the best principal components, and the successive projections algorithm (SPA) was used to extract the effective wavelengths. A least squares support vector machine (LS-SVM) was used to build the discrimination model for the four pummelo cultivars. To find the optimal values of σ2 and γ, which are important parameters in LS-SVM modeling, a grid-search technique and cross-validation were applied. The first 10 and 11 principal components were extracted by PCA for the hyperspectral data of the adaxial and abaxial surfaces, respectively, and 31 and 21 effective wavelengths were selected by SPA for the adaxial and abaxial surfaces, respectively. The best principal components and the effective wavelengths were used as inputs to the LS-SVM models, and the PCA-LS-SVM and SPA-LS-SVM models were built. The results showed that identification accuracies of 99.46% and 98.44% were achieved in the calibration set for the PCA-LS-SVM and SPA-LS-SVM models, respectively, and an identification accuracy of 95.83% was achieved in the validation set for both models built on the hyperspectral data of the adaxial surface. Comparatively, the PCA-LS-SVM and SPA-LS-SVM models built on the hyperspectral data of the abaxial surface both achieved identification accuracies of 100% for the calibration and validation sets. The overall results demonstrated that hyperspectral data of the adaxial and abaxial leaf surfaces, coupled with PCA-LS-SVM and SPA-LS-SVM, could achieve accurate identification of pummelo cultivars. It is thus feasible to use hyperspectral imaging technology to identify different pummelo cultivars, and the technology provides an alternative way of rapid cultivar identification. Moreover, the results demonstrated that data from the abaxial leaf surface were more sensitive for identifying pummelo cultivars. This study provides a new method for the fast discrimination of pummelo cultivars.
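Scikit-learn has no LS-SVM implementation, so the sketch below substitutes a standard RBF-kernel SVM; the PCA compression and the grid search over kernel parameters mirror the (σ², γ) grid-search-plus-cross-validation procedure described above. A sketch under those assumptions, not the authors' pipeline; parameter grids are illustrative.

```python
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fit_cultivar_model(X_cal, y_cal, n_components=10):
    """PCA compression followed by an RBF-kernel SVM with grid-searched
    (C, gamma), cross-validated on the calibration set."""
    pipe = make_pipeline(StandardScaler(), PCA(n_components=n_components),
                         SVC(kernel="rbf"))
    grid = {"svc__C": [1, 10, 100, 1000], "svc__gamma": [1e-4, 1e-3, 1e-2, 1e-1]}
    search = GridSearchCV(pipe, grid, cv=5)
    search.fit(X_cal, y_cal)          # X_cal: spectra, y_cal: cultivar labels
    return search.best_estimator_
```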
Quantitative analysis of airway abnormalities in CT
NASA Astrophysics Data System (ADS)
Petersen, Jens; Lo, Pechin; Nielsen, Mads; Edula, Goutham; Ashraf, Haseem; Dirksen, Asger; de Bruijne, Marleen
2010-03-01
A coupled surface graph cut algorithm for airway wall segmentation from Computed Tomography (CT) images is presented. Using cost functions that highlight both inner and outer wall borders, the method combines the search for both borders into one graph cut. The proposed method is evaluated on 173 manually segmented images extracted from 15 different subjects and shown to give accurate results, with 37% fewer errors than the Full Width at Half Maximum (FWHM) algorithm and 62% fewer than a similar graph cut method without coupled surfaces. Common measures of airway wall thickness, such as the Interior Area (IA) and Wall Area percentage (WA%), were measured by the proposed method on a total of 723 CT scans from a lung cancer screening study. These measures were significantly different for participants with Chronic Obstructive Pulmonary Disease (COPD) compared to asymptomatic participants. Furthermore, reproducibility was good, as confirmed by repeat scans, and the measures correlated well with the outcomes of pulmonary function tests, demonstrating the potential of the algorithm as a COPD diagnostic tool. Additionally, a new measure of airway wall thickness is proposed, the Normalized Wall Intensity Sum (NWIS). NWIS is shown to correlate better with lung function test values and to be more reproducible than the previous measures IA, WA%, and airway wall thickness at a lumen perimeter of 10 mm (PI10).
NASA Technical Reports Server (NTRS)
Brumfield, J. O.; Bloemer, H. H. L.; Campbell, W. J.
1981-01-01
Two unsupervised classification procedures for analyzing Landsat data used to monitor land reclamation in a surface mining area in east central Ohio are compared for agreement with data collected from the corresponding locations on the ground. One procedure is based on a traditional unsupervised-clustering/maximum-likelihood algorithm sequence that assumes spectral groupings in the Landsat data in n-dimensional space; the other is based on a nontraditional unsupervised-clustering/canonical-transformation/clustering algorithm sequence that not only assumes spectral groupings in n-dimensional space but also includes an additional feature-extraction technique. It is found that the nontraditional procedure provides an appreciable improvement in spectral groupings and apparently increases the level of accuracy in the classification of land cover categories.
Estimation of tool wear length in finish milling using a fuzzy inference algorithm
NASA Astrophysics Data System (ADS)
Ko, Tae Jo; Cho, Dong Woo
1993-10-01
The geometric accuracy and surface roughness are mainly affected by flank wear at the minor cutting edge in finish machining. A fuzzy estimator, obtained by a fuzzy inference algorithm with a max-min composition rule, is introduced to evaluate the minor flank wear length in finish milling. The features sensitive to minor flank wear are extracted from dispersion analysis of a time-series AR model of the feed-directional acceleration of the spindle housing. Linguistic rules for fuzzy estimation are constructed using these features, and fuzzy inferences are then carried out with test data sets under various cutting conditions. The proposed system proves effective for estimating minor flank wear length, with a mean error of less than 12%.
NASA Astrophysics Data System (ADS)
Silvestri, Malvina; Musacchio, Massimo; Fabrizia Buongiorno, Maria
2017-04-01
The Geohazards Exploitation Platform (GEP) is one of six Thematic Exploitation Platforms developed by ESA to serve data user communities. As a new element of the ground segment delivering satellite results to users, these cloud-based platforms provide an online environment to access information, processing tools, and computing resources for community collaboration. The aim is to enable the easy extraction of valuable knowledge from the vast quantities of satellite-sensed data now being produced by Europe's Copernicus programme and other Earth observation satellites. In this context, the estimation of surface temperature on active volcanoes around the world is considered. End-to-end (E2E) processing chains have been developed for different satellite data (the ASTER, Landsat 8, and Sentinel-3 missions) using thermal infrared (TIR) channels and applying specific algorithms. These chains have been implemented on the GEP platform, enabling non-expert users to exploit EO missions and generate added-value products such as surface temperature maps. This solution will enhance the use of satellite data and improve the dissemination of results, saving valuable time (no manual browsing, downloading, or processing is needed) and producing time-series data that can be speedily extracted from a single co-registered pixel to highlight gradual trends within a narrow area. Moreover, thanks to the high-resolution optical imagery of Sentinel-2 (MSI), lava maps can be obtained automatically during an eruption. The proposed lava detection method is based on a contextual algorithm applied to Sentinel-2 NIR (band 8, 0.8 micron) and SWIR (band 12, 2.25 micron) data. Examples derived from recent eruptions of active volcanoes are shown.
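The contextual lava-detection algorithm is not specified in detail in the abstract; the sketch below shows one plausible band-logic reading, flagging pixels whose SWIR radiance is anomalously high relative to the scene background and whose NIR/SWIR ratio is low, as is typical of hot lava. The thresholds and the background statistics used for "context" are assumptions.

```python
import numpy as np

def lava_mask(nir, swir, k=3.0):
    """Flag thermally anomalous pixels in co-registered NIR/SWIR arrays:
    high SWIR relative to the scene background, low NIR/SWIR ratio."""
    ratio = nir / np.maximum(swir, 1e-6)
    swir_bg, swir_sd = np.median(swir), swir.std()
    return (swir > swir_bg + k * swir_sd) & (ratio < np.median(ratio))
```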
Ukwatta, Eranga; Yuan, Jing; Qiu, Wu; Rajchl, Martin; Chiu, Bernard; Fenster, Aaron
2015-12-01
Three-dimensional (3D) measurements of peripheral arterial disease (PAD) plaque burden extracted from fast black-blood magnetic resonance (MR) images have been shown to be more predictive of clinical outcomes than PAD stenosis measurements. To this end, accurate segmentation of the femoral artery lumen and outer wall is required for generating volumetric measurements of PAD plaque burden. Here, we propose a semi-automated algorithm to jointly segment the femoral artery lumen and outer wall surfaces from 3D black-blood MR images, which are reoriented and reconstructed along the medial axis of the femoral artery to obtain improved spatial coherence between slices of the long, thin femoral artery and to reduce computation time. The developed segmentation algorithm enforces two priors in a global optimization manner: the spatial consistency between adjacent 2D slices and the anatomical region order between the femoral artery lumen and outer wall surfaces. The formulated combinatorial optimization problem for segmentation is solved globally and exactly by means of convex relaxation using a coupled continuous max-flow (CCMF) model, which is a dual formulation of the convex relaxed optimization problem. In addition, the CCMF model directly derives an efficient duality-based algorithm based on a modern multiplier-augmented optimization scheme, which has been implemented on a GPU for fast computation. The computed segmentations from the developed algorithm were compared to manual delineations from experts using 20 black-blood MR images. The developed algorithm yielded both high accuracy (Dice similarity coefficients ≥ 87% for both the lumen and outer wall surfaces) and high reproducibility (intra-class correlation coefficient of 0.95 for generating vessel wall area), while outperforming the state-of-the-art method in computational time by a factor of ≈ 20. Copyright © 2015 Elsevier B.V. All rights reserved.
A Novel Extraction Approach of Extrinsic and Intrinsic Parameters of InGaAs/GaN pHEMTs
2015-07-01
... presented; for the first time, an artificial bee colony algorithm is applied to global-optimization-based parameter extraction, and a novel intrinsic ... conservation of the gate charge is well satisfied, which further validates this novel extraction method. Index Terms — InGaAs/GaN pHEMTs, artificial bee ... increase the uniqueness of the extraction. The artificial bee colony (ABC) algorithm is adopted as the optimizer due to its excellent ability to escape ...
Cippitelli, Enea; Gasparrini, Samuele; Spinsante, Susanna; Gambi, Ennio
2015-01-01
The Microsoft Kinect sensor has gained attention as a tool for gait analysis for several years. Despite the many advantages the sensor provides, the lack of a native capability to extract joints from the side view of a human body still limits the adoption of the device in a number of relevant applications. This paper presents an algorithm to locate and estimate the trajectories of up to six joints extracted from the side depth view of a human body captured by the Kinect device. The algorithm is then applied to extract data that can be exploited to provide an objective score for the "Get Up and Go Test", which is typically adopted for gait analysis in rehabilitation. Starting from the depth-data stream provided by the Microsoft Kinect sensor, the proposed algorithm relies on anthropometric models only to locate and identify the positions of the joints. Unlike machine learning approaches, this solution avoids complex computations, which usually require significant resources. The reliability of the joint positions output by the algorithm is evaluated by comparison to a marker-based system. Tests show that the trajectories extracted by the proposed algorithm adhere to the reference curves better than those obtained from the skeleton generated by the native applications provided within the Microsoft Kinect (Microsoft Corporation, Redmond, WA, USA, 2013) and OpenNI (OpenNI organization, Tel Aviv, Israel, 2013) Software Development Kits. PMID:25594588
Terrain-driven unstructured mesh development through semi-automatic vertical feature extraction
NASA Astrophysics Data System (ADS)
Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.
2015-12-01
A semi-automated vertical feature terrain extraction algorithm is described and applied to a two-dimensional, depth-integrated, shallow water equation inundation model. The extracted features describe what are commonly sub-mesh-scale elevation details (ridges and valleys), which may be ignored in standard practice because adequate mesh resolution cannot be afforded. The extraction algorithm is semi-automated, requires minimal human intervention, and is reproducible. A lidar-derived digital elevation model (DEM) of coastal Mississippi and Alabama serves as the source data for the vertical feature extraction. Unstructured mesh nodes and element edges are aligned to the vertical features, and an interpolation algorithm aimed at minimizing topographic elevation error assigns elevations to mesh nodes via the DEM. The end result is a mesh that accurately represents the bare earth surface as derived from lidar, with element resolution in the floodplain ranging from 15 m to 200 m. To examine the influence of the inclusion of vertical features on overland flooding, two additional meshes were developed, one without crest elevations of the features and another with vertical features withheld. All three meshes were incorporated into a SWAN+ADCIRC model simulation of Hurricane Katrina. Each of the three models resulted in similar validation statistics when compared to observed time-series water levels at gages and post-storm collected high water marks (HWMs). Simulated water level peaks yielded an R² of 0.97 and upper and lower 95% confidence intervals of ∼ ±0.60 m. From the validation at the gages and HWM locations, it was not clear which of the three model experiments performed best in terms of accuracy. Inundation extents from the three model results were therefore compared to debris lines derived from NOAA post-event aerial imagery, and the mesh including vertical features showed higher accuracy. The comparison of model results to debris lines demonstrates that additional validation techniques are necessary for state-of-the-art flood inundation models. In addition, the semi-automated, unstructured mesh generation process presented herein increases the overall accuracy of simulated storm surge across the floodplain without reliance on hand digitization or sacrificing computational cost.
NASA Astrophysics Data System (ADS)
Castro-Mateos, Isaac; Pozo, Jose M.; Lazary, Aron; Frangi, Alejandro F.
2016-03-01
Computational medicine aims at developing patient-specific models to help physicians in diagnosis and treatment selection. The spine, like other skeletal structures, is an articulated object composed of rigid bones (vertebrae) and non-rigid parts (intervertebral discs (IVDs), ligaments and muscles). These components are usually extracted from different image modalities, involving patient repositioning. In the case of the spine, such models require the segmentation of IVDs from MR and vertebrae from CT. In the literature, there exists a vast selection of segmentation methods, but there is a lack of approaches to align the vertebrae and IVDs. This paper presents a method to create patient-specific finite element meshes for biomechanical simulations, integrating the rigid and non-rigid parts of articulated objects. First, the different parts are aligned into a complete surface model. Vertebrae extracted from CT are rigidly repositioned in between the IVDs, initially using the IVD locations and then refining the alignment on the MR image with a rigid active shape model algorithm. Finally, a mesh morphing algorithm based on B-splines is employed to map a template finite-element (volumetric) mesh to the patient-specific surface mesh. This morphing reduces possible misalignments and guarantees the convexity of the model elements. Results show that the accuracy of the method in aligning vertebrae with IVDs in MR is similar to that of human observers. Thus, this method is a step forward towards the automation of patient-specific finite element models for biomechanical simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu
2014-05-12
Conventional semi-infinite solutions for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noise to the DCS data resulted in αD_B variations, the mean errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
The algorithms for rational spline interpolation of surfaces
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1986-01-01
Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.
NASA Astrophysics Data System (ADS)
Ghulam Saber, Md; Arif Shahriar, Kh; Ahmed, Ashik; Hasan Sagor, Rakibul
2016-10-01
Particle swarm optimization (PSO) and invasive weed optimization (IWO) algorithms are used for extracting the modeling parameters of materials of interest to the optics and photonics research community. To the best of our knowledge, these two bio-inspired algorithms are used here for the first time in this particular field. The algorithms are applied to modeling graphene oxide, and the performances of the two are compared. Two objective functions are used for different boundary values. The root mean square (RMS) deviation is determined and compared.
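A minimal global-best PSO of the kind used here, minimizing a user-supplied RMS-deviation objective over box-constrained model parameters; the inertia and acceleration coefficients are common textbook defaults, not values from the paper.

```python
import numpy as np

def pso_fit(objective, bounds, n_particles=30, n_iter=200,
            w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO. bounds: (dim, 2) array of (low, high) per parameter;
    objective: callable mapping a parameter vector to a scalar (e.g. RMS error)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)          # keep particles inside the box
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()  # update the global best
    return g, pbest_f.min()
```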
NASA Astrophysics Data System (ADS)
Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip
2018-06-01
Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining its tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. To address the blurring or disconnection of grain boundaries in metallographic images, this study develops a region-separation algorithm for automatically extracting grain boundaries using an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.
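The paper's improved mean shift is not spelled out in the abstract; the sketch below applies plain mean shift (scikit-learn) to joint position-intensity pixel features and marks label transitions as candidate grain boundaries. At this level of naivety it is practical only for small images; the spatial weight and bandwidth quantile are assumptions.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def segment_grains(image, spatial_weight=0.5, quantile=0.1):
    """Cluster pixels by (x, y, intensity) with mean shift; label transitions
    between neighbouring pixels are candidate grain boundaries."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.column_stack([spatial_weight * xx.ravel(),
                             spatial_weight * yy.ravel(),
                             image.ravel().astype(float)])
    bw = estimate_bandwidth(feats, quantile=quantile, n_samples=2000)
    labels = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(feats)
    labels = labels.reshape(h, w)
    boundary = np.zeros_like(labels, dtype=bool)
    boundary[:, 1:] |= labels[:, 1:] != labels[:, :-1]   # vertical transitions
    boundary[1:, :] |= labels[1:, :] != labels[:-1, :]   # horizontal transitions
    return labels, boundary
```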
Iris recognition based on key image feature extraction.
Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y
2008-01-01
In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.
NASA Astrophysics Data System (ADS)
Zhao, Yue; Zhu, Dianwen; Baikejiang, Reheman; Li, Changqing
2015-03-01
We have developed a new fluorescence molecular tomography (FMT) imaging system in which we utilize a phase shifting method to extract the mouse surface geometry optically and a rotary laser scanning approach to excite fluorescent molecules and acquire fluorescence measurements over the whole mouse body. Nine fringe patterns with a phase shift of 2π/9 are projected onto the mouse surface by a projector. The fringe patterns are captured using a webcam to calculate a phase map, which is converted to the geometry of the mouse surface with our algorithms. We used a DigiWarp approach to warp a finite element mesh of a standard digital mouse to the measured mouse surface; thus, the tedious and time-consuming conversion from a point cloud to a mesh is avoided. Experimental results indicated that the proposed method is accurate, with errors of less than 0.5 mm. In the FMT imaging system, the mouse is placed inside a conical mirror and scanned with a line-pattern laser mounted on a rotation stage. After being reflected by the conical mirror, the emitted fluorescence photons travel through the central hole of the rotation stage and the band pass filters in a motorized filter wheel, and are collected by a CCD camera. Phantom experiments show that the proposed FMT imaging system can reconstruct the target accurately.
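With N equally spaced phase steps of 2π/N (here N = 9), the wrapped phase follows from the standard N-step phase-shifting formula; a sketch, with the sign convention depending on the direction of the projected shift.

```python
import numpy as np

def n_step_phase(frames):
    """Wrapped phase from N fringe images with equal phase steps of 2*pi/N.
    frames: array of shape (N, H, W); for the nine-pattern case, N = 9.
    The sign may need flipping depending on the shift direction."""
    N = frames.shape[0]
    delta = 2 * np.pi * np.arange(N) / N
    num = np.tensordot(np.sin(delta), frames, axes=(0, 0))   # sum_n I_n sin(d_n)
    den = np.tensordot(np.cos(delta), frames, axes=(0, 0))   # sum_n I_n cos(d_n)
    return np.arctan2(-num, den)   # wrapped to (-pi, pi]; unwrap before use
```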
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu
Conventional semi-infinite analytical solutions of the correlation diffusion equation may lead to errors when calculating the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in homogeneous tissues with arbitrary geometries for extraction of BFI (i.e., αD_B). The purpose of this study is to extend the capability of the Nth-order linear algorithm to extracting BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract BFIs in different types of tissues simultaneously by utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogeneous solution in a computer model of the adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in deep brain, we assigned ten levels of αD_B in the brain layer with a step decrement of 10% while maintaining αD_B values constant in the other layers. Simulation results demonstrate the accuracy (errors < 3%) of the high-order (N ≥ 5) linear algorithm in extracting BFIs in different tissue layers and rCBF in deep brain. By contrast, the semi-infinite homogeneous solution resulted in substantial errors in rCBF (34.5% ≤ errors ≤ 60.2%) and in BFIs in different layers. The Nth-order linear model simplifies data analysis, thus allowing for online data processing and display. Future studies will test this linear algorithm in heterogeneous tissues with different levels of blood flow variations and noise.
Drop shape visualization and contact angle measurement on curved surfaces.
Guilizzoni, Manfredo
2011-12-01
The shapes and contact angles of drops on curved surfaces are experimentally investigated. Image processing, spline fitting, and numerical integration are used to extract the drop contour in a number of cross-sections. The three-dimensional surfaces that describe the surface-air and drop-air interfaces can be visualized, and a simple procedure to determine the equilibrium contact angle starting from measurements on curved surfaces is proposed. Contact angles on flat surfaces serve as a reference term, and a procedure to measure them is proposed. Such a procedure is not as accurate as axisymmetric drop shape analysis algorithms, but it has the advantage of requiring only a side view of the drop-surface couple and no further information. It can therefore be used for fluids with unknown surface tension, and there is no need to measure the drop volume. Examples of application of the proposed techniques to distilled water drops on gemstones confirm that they can be useful for drop shape analysis and contact angle measurement on three-dimensional sculptured surfaces. Copyright © 2011 Elsevier Inc. All rights reserved.
Large Oil Spill Classification Using SAR Images Based on Spatial Histogram
NASA Astrophysics Data System (ADS)
Schvartzman, I.; Havivi, S.; Maman, S.; Rotman, S. R.; Blumberg, D. G.
2016-06-01
Among the different types of marine pollution, oil spills are a major threat to sea ecosystems. Remote sensing is used in oil spill response. Synthetic Aperture Radar (SAR) is an active microwave sensor that operates under all weather conditions, provides information about surface roughness, and covers large areas at high spatial resolution. SAR is widely used to identify and track pollutants in the sea, which may be a secondary effect of a large natural disaster or the result of a man-made one. The detection of oil spills in SAR imagery relies on the decrease of backscattering from the sea surface due to the increased viscosity, resulting in a dark formation that contrasts with the brightness of the surrounding area. Most use of SAR images for oil spill detection is done by visual interpretation: trained interpreters scan the image and mark areas of low backscatter and asymmetrical shape. It is very difficult to apply this method over a wide area. In contrast to visual interpretation, automatic detection algorithms have been suggested, mainly based on scanning dark formations, extracting features, and applying big data analysis. We propose a new algorithm that applies a nonlinear spatial filter that detects dark formations and is not susceptible to noise, such as internal waves or speckle. The advantages of this algorithm lie both in run time and in the results retrieved. The algorithm was tested in simulations as well as on COSMO-SkyMed images, detecting the Deepwater Horizon oil spill in the Gulf of Mexico (which occurred on 20/4/2010). The simulation results show that oil spills are detected even in a noisy environment. Applied to the Deepwater Horizon event, the algorithm classified the oil spill better than an algorithm focusing on dark formations alone. Furthermore, the results were validated against National Oceanic and Atmospheric Administration (NOAA) data.
Research on Palmprint Identification Method Based on Quantum Algorithms
Zhang, Zhanzhan
2014-01-01
Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
A graph-Laplacian-based feature extraction algorithm for neural spike sorting.
Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos
2009-01-01
Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
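A sketch of graph-Laplacian feature extraction in the spirit of GLF: build a kNN heat-kernel affinity over the spike waveforms, then embed using the eigenvectors of the normalized Laplacian with the smallest non-zero eigenvalues. The paper's exact objective also maximizes weighted variance, which this generic Laplacian-eigenmap core omits; names and parameters are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def laplacian_features(X, k=10, dim=3, sigma=1.0):
    """Laplacian-eigenmap-style embedding of spike waveforms X (n x samples)."""
    D2 = cdist(X, X, "sqeuclidean")
    W = np.exp(-D2 / (2 * sigma**2))                 # heat-kernel affinity
    idx = np.argsort(D2, axis=1)[:, k + 1:]         # all but self + k nearest
    for i, far in enumerate(idx):
        W[i, far] = 0.0                              # sparsify to kNN graph
    W = np.maximum(W, W.T)                           # symmetrize
    d = W.sum(axis=1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))  # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]   # skip the trivial constant eigenvector
```

Clustering (e.g. for spike sorting) would then run on the rows of the returned embedding.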
Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Xiaojia; Mao Qirong; Zhan Yongzhao
There are many emotion features. If all of these features are employed to recognize emotions, redundant features may exist; furthermore, the recognition result may be unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on a contribution analysis algorithm of neural networks (NN) is presented. The emotion features are selected from the 95 extracted features using the contribution analysis algorithm of the NN. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.
NASA Astrophysics Data System (ADS)
Cheng, Jun; Zhang, Jun; Tian, Jinwen
2015-12-01
Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is carried out on the input image, and the boundary is extracted on the low-resolution image obtained by the wavelet transform. Secondly, the LiveWire shortest path is calculated using a direction search over the control point set, utilizing the spatial relationship between the two control points the user provides in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool for the points when optimizing their shortest-path values, thus reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantages of the Haar wavelet transform and of the optimal path search based on a direction search over the control point set: the former offers fast image decomposition and reconstruction and is more consistent with the texture features of the image, while the latter reduces the time complexity of the original algorithm. The algorithm thus improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All of the methods mentioned above play a large role in improving the execution efficiency and robustness of the algorithm.
Piccinelli, Marina; Faber, Tracy L; Arepalli, Chesnal D; Appia, Vikram; Vinten-Johansen, Jakob; Schmarkey, Susan L; Folks, Russell D; Garcia, Ernest V; Yezzi, Anthony
2014-02-01
Accurate alignment between cardiac CT angiographic (CTA) studies and nuclear perfusion images is crucial for improved diagnosis of coronary artery disease. This study evaluated, in an animal model, the accuracy of a fully automated CTA biventricular segmentation algorithm, a necessary step for automatic and thus efficient PET/CT alignment. Twelve pigs with acute infarcts were imaged using Rb-82 PET and 64-slice CTA. Post-mortem myocardium mass measurements were obtained. Endocardial and epicardial myocardial boundaries were manually and automatically detected on the CTA, and both segmentations were used to perform PET/CT alignment. To assess the segmentation performance, image-based myocardial masses were compared to the experimental data; the hand-traced profiles were used as a reference standard to assess the global and slice-by-slice robustness of the automated algorithm in extracting the myocardium, LV, and RV. Mean distances between the automated and the manual 3D segmented surfaces were computed. Finally, differences in rotations and translations between the manual and automatic surfaces were estimated after PET/CT alignment. The largest, smallest, and median distances between interactive and automatic surfaces averaged 1.2 ± 2.1, 0.2 ± 1.6, and 0.7 ± 1.9 mm. The average angular and translational differences in CT/PET alignments were 0.4°, -0.6°, and -2.3° about the x, y, and z axes, and 1.8, -2.1, and 2.0 mm in the x, y, and z directions. Our automatic myocardial boundary detection algorithm creates surfaces from CTA that are similar in accuracy to those obtained from interactive tracing and provide similar alignments with PET. Specific difficulties in reliably segmenting the apex and base regions will require further improvements to the automated technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, Neal R; Ruggiero, Christy E; Pawley, Norma H
2009-01-01
Detecting complex targets, such as facilities, in commercially available satellite imagery is a difficult problem that human analysts try to solve by applying world knowledge. Often there are known observables that can be extracted by pixel-level feature detectors that can assist in the facility detection process. Individually, each of these observables is not sufficient for an accurate and reliable detection, but in combination, these auxiliary observables may provide sufficient context for detection by a machine learning algorithm. We describe an approach for automatic detection of facilities that uses an automated feature extraction algorithm to extract auxiliary observables, and a semi-supervised assisted target recognition algorithm to then identify facilities of interest. We illustrate the approach using an example of finding schools in Quickbird image data of Albuquerque, New Mexico. We use Los Alamos National Laboratory's Genie Pro automated feature extraction algorithm to find a set of auxiliary features that should be useful in the search for schools, such as parking lots, large buildings, sports fields and residential areas, and then combine these features using Genie Pro's assisted target recognition algorithm to learn a classifier that finds schools in the image data.
NASA Astrophysics Data System (ADS)
Mola Ebrahimi, S.; Arefi, H.; Rasti Veis, H.
2017-09-01
This paper presents a new approach to identify and extract building footprints using aerial images and LiDAR data. Employing an edge detector algorithm, our method first extracts the outer boundary of buildings; then, by taking advantage of the Hough transform and extracting the boundaries of connected buildings in a building block, it extracts the building footprints located in each block. The proposed method first recognizes the predominant orientation of a building block using the Hough transform and then rotates the block according to the inverted complement of the dominant line's angle, so that the block is oriented horizontally. Afterwards, using another Hough transform, vertical lines, which might be the building boundaries of interest, are extracted and the final building footprints within a block are obtained. The proposed algorithm was implemented and tested on the urban area of Zeebruges, Belgium (IEEE Contest, 2015). The areas of the extracted footprints were compared to the corresponding areas in the reference data, giving a mean error of 7.43 m². Moreover, qualitative and quantitative evaluations suggest that the proposed algorithm yields acceptable results in the automated, precise extraction of building footprints.
Shirodkar, Priyanka V; Muraleedharan, Usha Devi
2017-11-26
Amylases are a group of enzymes with a wide variety of industrial applications. Enhancement of α-amylase production from thraustochytrids, marine protists, has been attempted for the first time by applying statistics-based experimental designs using response surface methodology (RSM) and a genetic algorithm (GA) for optimization of the most influential process variables. A full factorial central composite experimental design was used to study the cumulative interactive effect of the nutritional components, viz., glucose, corn starch, and yeast extract. RSM was performed on two objectives, that is, growth of Ulkenia sp. AH-2 (ATCC® PRA-296) and α-amylase activity. When GA was conducted for maximization of the enzyme activity, the optimal α-amylase activity was found to be 71.20 U/mL, close to that obtained by RSM (71.93 U/mL), both of which were in agreement with the predicted value of 72.37 U/mL. Optimal growth at the optimized process variables was found to be an absorbance (A at 660 nm) of 1.89. The optimized medium increased α-amylase production by 1.2-fold.
Extracting DNA words based on the sequence features: non-uniform distribution and integrity.
Li, Zhi; Cao, Hongyan; Cui, Yuehua; Zhang, Yanbo
2016-01-25
DNA sequence can be viewed as an unknown language with words as its functional units. Given that most sequence alignment algorithms, such as motif discovery algorithms, depend on the quality of background information about the sequences, it is necessary to develop an ab initio algorithm for extracting the "words" based only on the DNA sequences. We considered non-uniform distribution and integrity to be two important features of a word, based on which we developed an ab initio algorithm to extract "DNA words" that have potential functional meaning. A Kolmogorov-Smirnov test was used to test the uniformity of the positional distribution within the DNA sequences, and integrity was judged by sequence and position alignment. Two random base sequences were adopted as negative controls, and an English book was used as a positive control to verify our algorithm. We applied our algorithm to the genomes of Saccharomyces cerevisiae and 10 strains of Escherichia coli to show the utility of the method. The results provide strong evidence that the algorithm is a promising tool for ab initio building of a DNA dictionary. Our method provides a fast way to screen important DNA elements at large scale and offers potential insights into the understanding of a genome.
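A sketch of the non-uniform-distribution test, assuming the occurrence positions of each candidate word have already been collected: one Kolmogorov-Smirnov test against the uniform distribution per word, flagging words whose positional distribution deviates. The significance level and the data layout are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kstest

def nonuniform_words(word_positions, genome_length, alpha=0.01):
    """word_positions: dict mapping each candidate word to a list of its
    occurrence positions. Flags words whose positions are non-uniform."""
    flagged = {}
    for word, pos in word_positions.items():
        # normalize positions to [0, 1] and compare against Uniform(0, 1)
        stat, p = kstest(np.asarray(pos) / genome_length, "uniform")
        if p < alpha:            # reject uniformity -> candidate functional word
            flagged[word] = (stat, p)
    return flagged
```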
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2008-08-01
This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
NASA Astrophysics Data System (ADS)
Paino, A.; Keller, J.; Popescu, M.; Stone, K.
2014-06-01
In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared Imagery (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by said candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.
NASA Astrophysics Data System (ADS)
Lin, Chuang; Wang, Binghui; Jiang, Ning; Farina, Dario
2018-04-01
Objective. This paper proposes a novel simultaneous and proportional multiple degree of freedom (DOF) myoelectric control method for active prostheses. Approach. The approach is based on non-negative matrix factorization (NMF) of surface EMG signals with the inclusion of sparseness constraints. By applying a sparseness constraint to the control signal matrix, it is possible to extract the basis information from arbitrary movements (quasi-unsupervised approach) for multiple DOFs concurrently. Main Results. In online testing based on target hitting, able-bodied subjects reached a greater throughput (TP) when using sparse NMF (SNMF) than with classic NMF or with linear regression (LR). Accordingly, the completion time (CT) was shorter for SNMF than NMF or LR. The same observations were made in two patients with unilateral limb deficiencies. Significance. The addition of sparseness constraints to NMF allows for a quasi-unsupervised approach to myoelectric control with superior results with respect to previous methods for the simultaneous and proportional control of multi-DOF. The proposed factorization algorithm allows robust simultaneous and proportional control, is superior to previous supervised algorithms, and, because of minimal supervision, paves the way to online adaptation in myoelectric control.
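A sketch of sparseness-constrained NMF using scikit-learn's L1-regularized NMF as a stand-in for the paper's factorization (the exact sparseness constraint used there may differ): nonnegative EMG envelopes are factored into a synergy matrix W and sparse per-DOF control signals H. Parameter names follow scikit-learn ≥ 1.0; values are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

def sparse_nmf_control(emg_envelopes, n_dof=2, l1=0.1):
    """Factor rectified, low-pass-filtered EMG envelopes V (channels x time)
    into V ~ W @ H, with an L1 penalty pushing the control matrix H toward
    sparseness. emg_envelopes must be nonnegative."""
    model = NMF(n_components=n_dof, init="nndsvda", max_iter=500,
                alpha_H=l1, alpha_W=0.0, l1_ratio=1.0)  # pure L1 on H only
    W = model.fit_transform(emg_envelopes)   # channel synergy matrix
    H = model.components_                    # sparse per-DOF control signals
    return W, H
```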
Membership-degree preserving discriminant analysis with applications to face recognition.
Yang, Zhangjing; Liu, Chuancai; Huang, Pu; Qian, Jianjun
2013-01-01
In pattern recognition, feature extraction techniques have been widely employed to reduce the dimensionality of high-dimensional data. In this paper, we propose a novel feature extraction algorithm called membership-degree preserving discriminant analysis (MPDA) based on the fisher criterion and fuzzy set theory for face recognition. In the proposed algorithm, the membership degree of each sample to particular classes is firstly calculated by the fuzzy k-nearest neighbor (FKNN) algorithm to characterize the similarity between each sample and class centers, and then the membership degree is incorporated into the definition of the between-class scatter and the within-class scatter. The feature extraction criterion via maximizing the ratio of the between-class scatter to the within-class scatter is applied. Experimental results on the ORL, Yale, and FERET face databases demonstrate the effectiveness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong
2018-06-01
An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting technology is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase shift error is corrected and a general ellipse form is derived. The background intensity error and the correction error can then be compensated by the least-squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms with environmental disturbance, low fringe number, or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
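A sketch of the Gram-Schmidt core for the two-interferogram case, before the ellipse-fitting correction: high-pass both frames to remove the DC term, orthonormalize the second frame against the first, and take the arctangent of the pair. The filter width and function name are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gs_two_frame_phase(I1, I2, hp_sigma=20):
    """Gram-Schmidt two-frame phase demodulation (without the subsequent
    ellipse-fitting compensation)."""
    u1 = I1 - gaussian_filter(I1, hp_sigma)      # high-pass: remove background
    u2 = I2 - gaussian_filter(I2, hp_sigma)
    u1 = u1 / np.linalg.norm(u1)                 # normalize frame 1
    u2 = u2 - (u1.ravel() @ u2.ravel()) * u1     # remove the u1 component
    u2 = u2 / np.linalg.norm(u2)                 # orthonormal quadrature frame
    return np.arctan2(u2, u1)                    # wrapped phase map
```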
Mesin, Luca
2015-02-01
Developing a real time method to estimate generation, extinction and propagation of muscle fibre action potentials from bi-dimensional and high density surface electromyogram (EMG). A multi-frame generalization of an optical flow technique including a source term is considered. A model describing generation, extinction and propagation of action potentials is fit to epochs of surface EMG. The algorithm is tested on simulations of high density surface EMG (inter-electrode distance equal to 5 mm) from finite length fibres generated using a multi-layer volume conductor model. The flow and source term estimated from interference EMG reflect the anatomy of the muscle, i.e. the direction of the fibres (2° of average estimation error) and the positions of the innervation zone and tendons under the electrode grid (mean errors of about 1 and 2 mm, respectively). The global conduction velocity of the action potentials from motor units under the detection system is also obtained from the estimated flow. The processing time is about 1 ms per channel for an epoch of EMG of duration 150 ms. A new real time image processing algorithm is proposed to investigate muscle anatomy and activity. Potential applications are proposed in prosthesis control, automatic detection of optimal channels for EMG index extraction and biofeedback. Copyright © 2014 Elsevier Ltd. All rights reserved.
Semi-automated surface mapping via unsupervised classification
NASA Astrophysics Data System (ADS)
D'Amore, M.; Le Scaon, R.; Helbert, J.; Maturilli, A.
2017-09-01
Due to the increasing volume of data returned from space missions, human search for correlations and identification of interesting features is becoming more and more unfeasible. Statistical extraction of features via machine learning methods will increase the scientific output of remote sensing missions and aid the discovery of yet unknown features hidden in datasets. Those methods exploit algorithms trained on features from multiple instruments, returning classification maps that explore intra-dataset correlations and allow for the discovery of unknown features. We present two applications, one for Mercury and one for Vesta.
Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video
Lee, Gil-beom; Lee, Myeong-jin; Lee, Woo-Kyung; Park, Joo-heon; Kim, Tae-Hwan
2017-01-01
Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object’s vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos. PMID:28327515
Li, Yang; Li, Guoqing; Wang, Zhenhao
2015-01-01
To overcome the poor interpretability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on an extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of the ELM and Ant-miner algorithms are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by application results on the New England 39-bus power system and a practical power system, the southern power system of Hebei province.
Online particle detection with Neural Networks based on topological calorimetry information
NASA Astrophysics Data System (ADS)
Ciodaro, T.; Deva, D.; de Seixas, J. M.; Damazio, D.
2012-06-01
This paper presents the latest results from the Ringer algorithm, which is based on artificial neural networks, for electron identification at the online filtering system of the ATLAS particle detector, in the context of the LHC experiment at CERN. The algorithm performs topological feature extraction using the ATLAS calorimetry information (energy measurements). The extracted information is presented to a neural network classifier. Studies showed that the Ringer algorithm achieves high detection efficiency while keeping the false alarm rate low. Optimizations, guided by detailed analysis, reduced the algorithm execution time by 59%. Also, the total memory necessary to store the Ringer algorithm information represents less than 6.2% of the filtering system total.
Improve threshold segmentation using features extraction to automatic lung delimitation.
França, Cleunio; Vasconcelos, Germano; Diniz, Paula; Melo, Pedro; Diniz, Jéssica; Novaes, Magdala
2013-01-01
With the consolidation of PACS and RIS systems, algorithms for tissue segmentation and disease detection have evolved rapidly in recent years. These algorithms have advanced in accuracy and specificity; however, there is still some way to go before they achieve satisfactory error rates and the reduced processing time required for daily diagnosis. The objective of this study is to propose an algorithm for lung segmentation in x-ray computed tomography images that uses feature extraction, such as centroid and orientation measures, to improve basic threshold segmentation. As a result we found an accuracy of 85.5%.
Gandola, Emanuele; Antonioli, Manuela; Traficante, Alessio; Franceschini, Simone; Scardi, Michele; Congestri, Roberta
2016-05-01
Toxigenic cyanobacteria are one of the main health risks associated with water resources worldwide, as their toxins can affect humans and fauna exposed via drinking water, aquaculture and recreation. Microscopy monitoring of cyanobacteria in water bodies and massive growth systems is a routine operation for cell abundance and growth estimation. Here we present ACQUA (Automated Cyanobacterial Quantification Algorithm), a new fully automated image analysis method designed for filamentous genera in bright-field microscopy. A pre-processing algorithm has been developed to highlight filaments of interest against background signals due to other phytoplankton and dust. A spline-fitting algorithm has been designed to recombine interrupted and crossing filaments in order to perform accurate morphometric analysis and to extract the surface pattern information of highlighted objects. In addition, 17 specific pattern indicators have been developed and used as input data for a machine-learning algorithm dedicated to discriminating among five widespread toxic or potentially toxic filamentous genera in freshwater: Aphanizomenon, Cylindrospermopsis, Dolichospermum, Limnothrix and Planktothrix. The method was validated using freshwater samples from three Italian volcanic lakes, comparing automated vs. manual results. ACQUA proved to be a fast and accurate tool to rapidly assess freshwater quality and to characterize cyanobacterial assemblages in aquatic environments. Copyright © 2016 Elsevier B.V. All rights reserved.
Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data
NASA Astrophysics Data System (ADS)
Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.
2018-04-01
With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. First, the road surface is segmented by edge detection on scan lines. Then the intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted by adaptive threshold segmentation based on an integral image, without intensity calibration; noise is further reduced by removing small patches of pixels from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms, including template matching and feature-attribute filtering, classifies linear markings, arrow markings and guidelines. Processing point cloud data collected by a RIEGL VUX-1 in the case area, the results show that the F-score of marking extraction is 0.83 and the average classification rate is 0.90.
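Adaptive thresholding with an integral image is a standard O(1)-per-pixel construction (Bradley-style); the sketch below shows that step applied to the interpolated intensity image. The window size and the +15% offset are illustrative, and the assumption that markings are brighter than the surrounding asphalt is ours, not a detail stated in the abstract.

```python
import numpy as np

def adaptive_threshold(img, win=31, t=0.15):
    """Integral-image adaptive threshold of an intensity image.

    A pixel is marked as road marking when it exceeds (1 + t) times the
    local mean over a win x win window, computed in O(1) per pixel."""
    img = img.astype(np.float64)
    # Integral image padded with a leading zero row/column so that
    # ii[y, x] = sum of img[0:y, 0:x].
    ii = np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    r = win // 2
    ys, xs = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(ys - r, 0, h), np.clip(ys + r + 1, 0, h)
    x0, x1 = np.clip(xs - r, 0, w), np.clip(xs + r + 1, 0, w)
    area = (y1 - y0) * (x1 - x0)
    local_sum = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return img * area > local_sum * (1 + t)   # binary marking mask
```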
Information extraction and transmission techniques for spaceborne synthetic aperture radar images
NASA Technical Reports Server (NTRS)
Frost, V. S.; Yurovsky, L.; Watson, E.; Townsend, K.; Gardner, S.; Boberg, D.; Watson, J.; Minden, G. J.; Shanmugan, K. S.
1984-01-01
Information extraction and transmission techniques for synthetic aperture radar (SAR) imagery were investigated. Four interrelated problems were addressed. An optimal tonal SAR image classification algorithm was developed and evaluated. A data compression technique was developed for SAR imagery which is simple and provides a 5:1 compression with acceptable image quality. An optimal textural edge detector was developed. Several SAR image enhancement algorithms have been proposed. The effectiveness of each algorithm was compared quantitatively.
The algorithm of fast image stitching based on multi-feature extraction
NASA Astrophysics Data System (ADS)
Yang, Chunde; Wu, Ge; Shi, Jing
2018-05-01
This paper proposes an improved image registration method combining Hu invariant moment contour information with feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as a time-consuming feature point extraction process, an overload of redundant invalid information, and inefficiency. First, pixel neighborhoods are used to extract contour information, and the Hu invariant moments are employed as a similarity measure so that SIFT feature points are extracted only in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel to improve the initial matching efficiency and obtain fewer mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm fuses the mosaic images to realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
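Replacing the Euclidean distance with the Hellinger kernel is equivalent to the well-known "root-SIFT" remapping, after which any standard Euclidean nearest-neighbour matcher can be reused unchanged; a minimal sketch (the function name is ours):

```python
import numpy as np

def hellinger_normalize(desc):
    """Map SIFT descriptors so that their dot product equals the
    Hellinger kernel: L1-normalize each descriptor, then take sqrt."""
    desc = desc / (np.abs(desc).sum(axis=1, keepdims=True) + 1e-12)
    return np.sqrt(desc)

# After this mapping, Euclidean distance between descriptors is a
# monotonic function of the Hellinger kernel, so the existing SIFT
# matching pipeline needs no other change.
```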
Combined rule extraction and feature elimination in supervised classification.
Liu, Sheng; Patel, Ronak Y; Daga, Pankaj R; Liu, Haining; Fu, Gang; Doerksen, Robert J; Chen, Yixin; Wilkins, Dawn E
2012-09-01
A vast number of biology-related research problems involve combining multiple sources of data to achieve a better understanding of the underlying problems. It is important to select and interpret the most important information from these sources, so it is beneficial to have an algorithm that simultaneously extracts rules and selects features for better interpretation of the predictive model. We propose an efficient algorithm, Combined Rule Extraction and Feature Elimination (CRF), based on 1-norm regularized random forests. CRF simultaneously extracts a small number of rules generated by random forests and selects important features. We applied CRF to several drug activity prediction and microarray data sets. CRF is capable of producing performance comparable with state-of-the-art prediction algorithms using a small number of decision rules. Some of the decision rules are biologically significant.
A new morphology algorithm for shoreline extraction from DEM data
NASA Astrophysics Data System (ADS)
Yousef, Amr H.; Iftekharuddin, Khan; Karim, Mohammad
2013-03-01
Digital elevation models (DEMs) are a digital representation of elevations at regularly spaced points. They provide an accurate tool for extracting shoreline profiles. One of the emerging sources for creating them is light detection and ranging (LiDAR), which can capture highly dense point clouds, with resolutions reaching 15 cm vertically and 100 cm horizontally, in short periods of time. In this paper we present a multi-step morphological algorithm to extract shoreline locations from DEM data and a predefined tidal datum. Unlike similar approaches, it utilizes LOWESS nonparametric regression to estimate missing values within the DEM file. It also detects and eliminates outliers and errors that result from waves, ships, etc., by means of an anomaly test with neighborhood constraints. Because there may be significant broken regions such as branches and islands, it utilizes constrained morphological opening and closing to reduce artifacts that can affect the extracted shorelines. In addition, it eliminates docks, bridges and fishing piers along the extracted shorelines by means of the Hough transform. Based on a specific tidal datum, the algorithm segments the DEM data into water and land objects. Without sacrificing the accuracy and spatial detail of the extracted boundaries, the algorithm then smooths and extracts the shoreline profiles by tracing the boundary pixels between the land and water segments. For given tidal values, we qualitatively assess the visual quality of the extracted shorelines by superimposing them on available aerial photographs.
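A minimal SciPy sketch of the segment-then-clean core (thresholding at the tidal datum, opening/closing, boundary tracing) is below; the simple square structuring element and its size are our assumptions, and the paper's constrained variants of the operators are not reproduced.

```python
import numpy as np
from scipy import ndimage

def clean_water_mask(dem, tidal_datum, size=5):
    """Segment a DEM into water/land at a tidal datum, then apply
    morphological opening and closing to suppress small artifacts
    (branches, islets) before tracing the shoreline."""
    water = dem <= tidal_datum
    st = np.ones((size, size), dtype=bool)
    water = ndimage.binary_opening(water, structure=st)   # remove thin spurs
    water = ndimage.binary_closing(water, structure=st)   # fill small gaps
    # Boundary pixels between land and water approximate the shoreline.
    shoreline = water ^ ndimage.binary_erosion(water)
    return water, shoreline
```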
NASA Astrophysics Data System (ADS)
Englander, J. G.; Brodrick, P. G.; Brandt, A. R.
2015-12-01
Fugitive emissions from oil and gas extraction have become a greater concern with the recent increases in development of shale hydrocarbon resources. There are significant gaps in the tools and research used to estimate fugitive emissions from oil and gas extraction. Two approaches exist for quantifying these emissions: atmospheric (or 'top down') studies, which measure methane fluxes remotely, and inventory-based ('bottom up') studies, which aggregate leakage rates on an equipment-specific basis. Bottom-up studies require counting or estimating how many devices might be leaking (called an 'activity count'), as well as how much each device might leak on average (an 'emissions factor'). In a real-world inventory, there is uncertainty in both activity counts and emissions factors. Even at the well level there are significant disagreements in data reporting. For example, some prior studies noted a ~5x difference in the number of reported well completions in the United States between EPA and private data sources. The purpose of this work is to address activity count uncertainty by using machine learning algorithms to classify oilfield surface facilities in high-resolution spatial imagery. This method can help estimate venting and fugitive emission sources in regions where reporting of oilfield equipment is incomplete or non-existent. This work utilizes high-resolution satellite imagery to count well pads in the Bakken oil field of North Dakota. The initial study examines an area of ~2,000 km² containing ~1,000 well pads. We compare different machine learning classification techniques, and explore the impact of training set size, input variables, and image segmentation settings to develop efficient and robust techniques for identifying well pads. We discuss the tradeoffs inherent to different classification algorithms, and determine the optimal algorithms for oilfield feature detection. In the future, the results of this work will be leveraged to provide activity counts of oilfield surface equipment including tanks, pumpjacks, and holding ponds.
Wen, Tingxi; Zhang, Zhongnan; Qiu, Ming; Zeng, Ming; Luo, Weizhen
2017-01-01
The computer mouse is an important human-computer interaction device, but patients with physical finger disabilities are unable to operate it. Surface EMG (sEMG) can be monitored by electrodes on the skin surface and reflects neuromuscular activity, so limb auxiliary equipment can be controlled through sEMG classification to help physically disabled patients operate the mouse. The aim was to develop a new method to extract the sEMG generated by finger motion and apply novel features to classify it. A window-based data acquisition method was presented to extract signal samples from the sEMG electrodes. Afterwards, a two-dimensional matrix-image-based feature extraction method, which differs from classical methods based on the time or frequency domain, was employed to transform signal samples into feature maps used for classification. In the experiments, sEMG data samples produced by the index and middle fingers at the click of a mouse button were separately acquired. Then, characteristics of the samples were analyzed to generate a feature map for each sample. Finally, machine learning classification algorithms (SVM, KNN, RBF-NN) were employed to classify these feature maps on a GPU. The study demonstrated that all classifiers can identify and classify sEMG samples effectively; in particular, the accuracy of the SVM classifier reached 100%. The signal separation method is a convenient, efficient and quick way to extract the sEMG samples produced by the fingers. In addition, unlike the classical methods, the new method enables feature extraction by appropriately enlarging the energy of the sample signals. All the classical machine learning classifiers performed well using these features.
Analysis on optical heterodyne frequency error of full-field heterodyne interferometer
NASA Astrophysics Data System (ADS)
Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli
2017-06-01
Full-field heterodyne interferometric measurement is becoming more widely applicable as low-frequency heterodyne acousto-optical modulators replace complex electro-mechanical scanning devices. The optical element surface can be acquired directly by synchronously detecting the received signal phase at each pixel, because standard matrix detectors such as CCD and CMOS cameras can be used in a heterodyne interferometer. Instead of the traditional four-step phase-shifting calculation, a Fourier spectral analysis method is used for phase extraction, which lowers sensitivity to sources of uncertainty and raises measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described, and their advantages and disadvantages are specified. A heterodyne interferometer has to combine two beams of different frequency to produce interference, which introduces a variety of optical heterodyne frequency errors; frequency mixing error and beat frequency error are two unavoidable kinds. In this paper, the effects of frequency mixing error on surface measurement are derived, and the relationship between phase extraction accuracy and the errors is calculated. The tolerances on the extinction ratio of the polarization splitting prism and the signal-to-noise ratio of stray light are given. The phase extraction error of Fourier analysis caused by beat frequency shifting is derived and calculated. We also propose an improved phase extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm using a Hanning window corrects the heterodyne signal phase extraction. Simulation results show that this method can effectively suppress the degradation of phase extraction caused by beat frequency error and reduce the measurement uncertainty of the full-field heterodyne interferometer.
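The Fourier-based per-pixel phase readout can be sketched as below. Without the paper's spectrum-correction step, spectral leakage occurs whenever the beat frequency falls between FFT bins, which is exactly the degradation the Hanning-window amplitude-ratio correction addresses; names and parameters here are illustrative.

```python
import numpy as np

def heterodyne_phase(frames, fs, f_beat):
    """Per-pixel phase of a full-field heterodyne signal by Fourier analysis.

    frames: (T, H, W) stack of camera frames sampled at fs (Hz);
    f_beat: nominal beat frequency (Hz). The phase is read out at the
    spectral bin nearest the beat frequency instead of 4-step shifting."""
    T = frames.shape[0]
    spec = np.fft.rfft(frames, axis=0)       # temporal FFT at every pixel
    k = int(round(f_beat * T / fs))          # bin nearest the beat frequency
    return np.angle(spec[k])                 # wrapped phase map, shape (H, W)
```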
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1976-01-01
A computer algorithm for extracting a quaternion from a direction-cosine matrix (DCM) is described. The quaternion provides a four-parameter representation of rotation, as against the nine-parameter representation afforded by a DCM. Commanded attitude in space shuttle steering is conveniently computed by DCM, while actual attitude is computed most compactly as a quaternion, as is attitude error. The unit length of the rotation quaternion, and the interchangeability of a quaternion and its negative, are used to advantage in the extraction algorithm. Protection of the algorithm against square-root failure and division overflow is considered. Necessary and sufficient conditions for handling the rotation-vector element of largest magnitude are discussed.
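The safeguards the abstract describes correspond to the now-standard largest-pivot extraction; the sketch below is that generic method (a minimal sketch assuming the DCM convention in which the vector part is built from differences like C[1,2] - C[2,1]), not necessarily Klumpp's exact code.

```python
import numpy as np

def quat_from_dcm(C):
    """Quaternion [w, x, y, z] from a direction-cosine matrix.

    Uses the element of largest magnitude as the pivot so the square
    root never acts on a small (or negative) argument and no division
    overflows -- the protections the abstract refers to."""
    tr = np.trace(C)
    q = np.empty(4)
    if tr >= max(C[0, 0], C[1, 1], C[2, 2]):
        q[0] = 0.5 * np.sqrt(1.0 + tr)
        s = 0.25 / q[0]
        q[1] = s * (C[1, 2] - C[2, 1])
        q[2] = s * (C[2, 0] - C[0, 2])
        q[3] = s * (C[0, 1] - C[1, 0])
    else:
        i = int(np.argmax([C[0, 0], C[1, 1], C[2, 2]]))
        j, k = (i + 1) % 3, (i + 2) % 3
        q[i + 1] = 0.5 * np.sqrt(1.0 + C[i, i] - C[j, j] - C[k, k])
        s = 0.25 / q[i + 1]
        q[0] = s * (C[j, k] - C[k, j])
        q[j + 1] = s * (C[i, j] + C[j, i])
        q[k + 1] = s * (C[i, k] + C[k, i])
    return q / np.linalg.norm(q)   # unit length; q and -q are interchangeable
```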
New solutions and applications of 3D computer tomography image processing
NASA Astrophysics Data System (ADS)
Effenberger, Ira; Kroll, Julia; Verl, Alexander
2008-02-01
As industry today aims at fast, high-quality product development and manufacturing processes, modern and efficient quality inspection is essential. Compared to conventional measurement technologies, industrial computer tomography (CT) is a non-destructive technology for 3D image data acquisition that overcomes their disadvantages by offering the possibility to scan complex parts with all outer and inner geometric features. In this paper new and optimized methods for 3D image processing are presented, including innovative approaches to surface reconstruction and automatic geometric feature detection of complex components; in particular, we describe our work on smart online data processing and data handling with integrated intelligent online mesh reduction, which guarantees the processing of huge, high-resolution data sets. In addition, new approaches to surface reconstruction and segmentation based on statistical methods are demonstrated. On the extracted 3D point cloud or surface triangulation, automated and precise algorithms for geometric inspection are deployed. All algorithms are applied to different real data sets generated by computer tomography in order to demonstrate the capabilities of the new tools. Since CT is an emerging technology for non-destructive testing and inspection, more and more industrial application fields will use and profit from it.
NASA Astrophysics Data System (ADS)
Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps of robotic motion estimation and largely influences its precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features, considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and a Brute Force (BF) matcher is used to find correspondences between the two images for space intersection. The EDC and RANSAC algorithms are then applied to eliminate mismatches whose distances exceed a predefined threshold. Similarly, when the left image at the next time step is matched against the current left image, EDC and RANSAC are performed iteratively. Because exceptional mismatched points sometimes remain after these steps, RANSAC is applied a third time to eliminate the effect of those outliers on the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
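A minimal OpenCV sketch of the ORB + brute-force + RANSAC chain follows. The Hamming-distance cutoff stands in for the paper's EDC step, and the fundamental-matrix model replaces their full orientation estimation, so treat the thresholds and model choice as assumptions.

```python
import cv2
import numpy as np

def match_stereo(img_left, img_right, max_dist=60.0):
    """ORB matching between a synchronous stereo pair, followed by a
    distance filter (EDC stand-in) and RANSAC outlier rejection."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = [m for m in bf.match(des1, des2) if m.distance < max_dist]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC on a fundamental-matrix model removes remaining mismatches
    # (needs at least 8 surviving matches).
    F, inlier = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inlier = inlier.ravel().astype(bool)
    return pts1[inlier], pts2[inlier]
```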
Algorithm of pulmonary emphysema extraction using thoracic 3-D CT images
NASA Astrophysics Data System (ADS)
Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki
2008-03-01
The number of emphysema patients tends to increase with aging and smoking. Emphysematous disease destroys the alveoli, and repair is impossible, so early detection is essential. The CT value of lung tissue decreases with the destruction of lung structure, falling below that of normal lung; such low-density absorption regions are referred to as Low Attenuation Areas (LAA). So far, the conventional approach has been to extract LAA by simple thresholding. However, the CT values of a CT image fluctuate with the measurement conditions, with various bias components such as inspiration, expiration and congestion, so these bias components must be considered in the extraction of LAA. We remove these bias components and propose an LAA extraction algorithm, which has been applied to a phantom image. Then, using low-dose CT (normal: 30 cases, obstructive lung disease: 26 cases), we extracted early-stage LAA and quantitatively analyzed the lung lobes using lung structure.
NASA Astrophysics Data System (ADS)
Bunai, Tasya; Rokhmatuloh; Wibowo, Adi
2018-05-01
In this paper, two methods of retrieving the Land Surface Temperature (LST) from the thermal infrared data supplied by bands 10 and 11 of the Thermal Infrared Sensor (TIRS) onboard Landsat 8 are compared. The first is the mono-window algorithm developed by Qin et al. and the second is the split-window algorithm of Rozenstein et al. The purpose of this study is to map the spatial distribution of land surface temperature and to determine the more accurate retrieval algorithm by computing the root mean square error (RMSE). Finally, we compare the spatial distributions of land surface temperature produced by the two algorithms; the split-window algorithm proves more accurate, with an RMSE of 7.69 °C.
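Both algorithms start from the same radiometric step: converting TIRS digital numbers to at-sensor brightness temperature. A minimal sketch follows; the rescaling and thermal constants are the band-10 values published in Landsat 8 MTL metadata (in practice, read them from your own scene's MTL file), and the subsequent atmospheric and emissivity corrections, where the mono-window and split-window algorithms actually differ, are omitted.

```python
import numpy as np

# Band-10 constants as published in Landsat 8 MTL files (assumed here).
ML, AL = 3.342e-4, 0.1           # radiance rescaling gain / offset
K1, K2 = 774.8853, 1321.0789     # band-10 thermal conversion constants

def brightness_temperature(dn):
    """At-sensor brightness temperature (Kelvin) from band-10 DNs."""
    radiance = ML * dn.astype(np.float64) + AL   # TOA spectral radiance
    return K2 / np.log(K1 / radiance + 1.0)      # inverse Planck relation
```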
A portable foot-parameter-extracting system
NASA Astrophysics Data System (ADS)
Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan
2016-03-01
In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry and heterodyne multiple-frequency phase-shift technology is proposed and implemented. The key technologies applied in the system are studied, including calibration of the projector, alignment of point clouds, and foot measurement. First, a new projector calibration algorithm based on a plane model is put forward to obtain the initial calibration parameters, and a feature point detection scheme for the calibration board image is developed. Then, an almost perfect match of the two point clouds is achieved by performing a first alignment using the Sampled Consensus-Initial Alignment (SAC-IA) algorithm and refining the alignment using the Iterative Closest Point (ICP) algorithm. Finally, the approaches used for foot-parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot parameter extraction experiment shows the feasibility of the extraction algorithm. Compared with the traditional measurement method, the system is more portable, accurate and robust.
Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed
2017-01-01
Decoding human brain activity from the electroencephalogram (EEG) is challenging owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants, and the results from the proposed method were compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method in current use, showed an accuracy of 65.7%, whereas the proposed method predicted the novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.
Endmember extraction from hyperspectral image based on discrete firefly algorithm (EE-DFA)
NASA Astrophysics Data System (ADS)
Zhang, Chengye; Qin, Qiming; Zhang, Tianyuan; Sun, Yuanheng; Chen, Chao
2017-04-01
This study proposes a novel method to extract endmembers from hyperspectral images based on the discrete firefly algorithm (EE-DFA). Endmembers are the input of many spectral unmixing algorithms. Hence, in this paper, endmember extraction from a hyperspectral image is regarded as a combinatorial optimization problem aimed at the best spectral unmixing results, which can be solved by the discrete firefly algorithm. Two series of experiments were conducted, on synthetic hyperspectral datasets with different SNR and on the AVIRIS Cuprite dataset, respectively. The experimental results were compared with the endmembers extracted by four popular methods: the sequential maximum angle convex cone (SMACC), N-FINDR, Vertex Component Analysis (VCA), and Minimum Volume Constrained Nonnegative Matrix Factorization (MVC-NMF). Moreover, the effect of the parameters in the proposed method was tested on both the synthetic hyperspectral datasets and the AVIRIS Cuprite dataset, and a recommended parameter setting is proposed. The results demonstrate that the proposed EE-DFA method performs better than the existing popular methods, and that EE-DFA is robust under different SNR conditions.
Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao
2017-03-02
Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, performance on existing low resolution isotope identifiers should be able to be improved by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas andmore » uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.« less
Zhao, Yue; Zhu, Dianwen; Baikejiang, Reheman; Li, Changqing
2015-11-10
This work introduces a fast, low-cost, robust method based on fringe pattern and phase shifting to obtain three-dimensional (3D) mouse surface geometry for fluorescence molecular tomography (FMT) imaging. We used two pico projector/webcam pairs to project and capture fringe patterns from different views. We first calibrated the pico projectors and the webcams to obtain their system parameters. Each pico projector/webcam pair had its own coordinate system. We used a cylindrical calibration bar to calculate the transformation matrix between these two coordinate systems. After that, the pico projectors projected nine fringe patterns with a phase-shifting step of 2π/9 onto the surface of a mouse-shaped phantom. The deformed fringe patterns were captured by the corresponding webcam respectively, and then were used to construct two phase maps, which were further converted to two 3D surfaces composed of scattered points. The two 3D point clouds were further merged into one with the transformation matrix. The surface extraction process took less than 30 seconds. Finally, we applied the Digiwarp method to warp a standard Digimouse into the measured surface. The proposed method can reconstruct the surface of a mouse-sized object with an accuracy of 0.5 mm, which we believe is sufficient to obtain a finite element mesh for FMT imaging. We performed an FMT experiment using a mouse-shaped phantom with one embedded fluorescence capillary target. With the warped finite element mesh, we successfully reconstructed the target, which validated our surface extraction approach.
An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Y.; Hu, X.; Guan, H.; Liu, P.
2016-06-01
Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows, and the elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has disadvantages for object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering to separate road points from ground points, (2) local principal component analysis with least-squares fitting to extract the primitives of road centerlines, and (3) hierarchical grouping to connect the primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same road extraction performance from LiDAR data in less time.
Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs
Chen, Haijian; Han, Dongmei; Zhao, Lina
2015-01-01
In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms are not very effective on online course material, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and the weights are designed to optimize the TF-IDF algorithm output values; the highest-scoring terms are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates. PMID:26448738
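As a toy illustration of the TF-IDF ranking step only (not the full AECKP pipeline, which adds VSM similarity weighting, Chinese word segmentation, and POS tagging), here is a scikit-learn sketch; the documents are hypothetical.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical course documents after segmentation/POS filtering.
docs = ["pointer and array in c",
        "for loop and while loop",
        "function pointer and callback"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)                       # documents x terms TF-IDF matrix
scores = np.asarray(X.max(axis=0).todense()).ravel()
terms = np.array(vec.get_feature_names_out())
top = terms[np.argsort(scores)[::-1][:5]]         # highest-scoring candidate knowledge points
print(top)
```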
NASA Astrophysics Data System (ADS)
Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue
2018-04-01
The prosperity of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed, based on a combined feature extraction model and a BPNN (Back Propagation Neural Network) algorithm. First, a method for detecting and segmenting the candidate license plate region is developed. Second, a new feature extraction model is designed that combines three sets of features. Third, a license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method increases to 95.7% and the processing time decreases to 51.4 ms.
Gallina, Alessio; Garland, S Jayne; Wakeling, James M
2018-05-22
In this study, we investigated whether principal component analysis (PCA) and non-negative matrix factorization (NMF) perform similarly for the identification of regional activation within the human vastus medialis. EMG signals from 64 locations over the VM were collected from twelve participants while performing a low-force isometric knee extension. The envelope of the EMG signal of each channel was calculated by low-pass filtering (8 Hz) the monopolar EMG signal after rectification. The data matrix was factorized using PCA and NMF, and up to 5 factors were considered for each algorithm. Association between explained variance, spatial weights and temporal scores between the two algorithms were compared using Pearson correlation. For both PCA and NMF, a single factor explained approximately 70% of the variance of the signal, while two and three factors explained just over 85% or 90%. The variance explained by PCA and NMF was highly comparable (R > 0.99). Spatial weights and temporal scores extracted with non-negative reconstruction of PCA and NMF were highly associated (all p < 0.001, mean R > 0.97). Regional VM activation can be identified using high-density surface EMG and factorization algorithms. Regional activation explains up to 30% of the variance of the signal, as identified through both PCA and NMF. Copyright © 2018 Elsevier Ltd. All rights reserved.
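To make the comparison concrete, here is a minimal sketch of how the two factorizations can be run side by side with scikit-learn; the synthetic `env` matrix stands in for the rectified, low-pass filtered 64-channel envelope data, and the VAF-style reconstruction metric used for NMF is our choice, not necessarily the paper's.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF

# Synthetic non-negative envelopes: (time samples x 64 channels).
rng = np.random.default_rng(0)
env = np.abs(rng.normal(size=(2000, 64)))

for n in range(1, 6):                      # the paper considers up to 5 factors
    pca = PCA(n_components=n).fit(env)
    nmf = NMF(n_components=n, init="nndsvda", max_iter=500).fit(env)
    rec = nmf.inverse_transform(nmf.transform(env))
    vaf = 1 - ((env - rec) ** 2).sum() / ((env - env.mean(0)) ** 2).sum()
    print(n, pca.explained_variance_ratio_.sum().round(3), round(vaf, 3))
```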
Defect-detection algorithm for noncontact acoustic inspection using spectrum entropy
NASA Astrophysics Data System (ADS)
Sugimoto, Kazuko; Akamatsu, Ryo; Sugimoto, Tsuneyoshi; Utagawa, Noriyuki; Kuroda, Chitose; Katakura, Kageyoshi
2015-07-01
In recent years, the detachment of concrete from bridges and tunnels and the degradation of concrete structures have become serious social problems, and the importance of inspection, repair, and updating is recognized in measures against degradation. We have been studying a noncontact acoustic inspection method using airborne sound and a laser Doppler vibrometer. In this method, depending on the surface state (reflectance, dirt, etc.), the amount of returning laser light decreases and optical noise arises from light-reception leakage. Influencing factors include the stability of the laser Doppler vibrometer output, the low reflectivity and diffuse reflection characteristics of the measurement surface, the measurement distance, and the laser irradiation angle. Because the frequency characteristic of this optical noise resembles white noise, defect detection based only on the vibration energy ratio can mistake optical noise from light-reception leakage for a defective part. Therefore, in this work, the combination of the vibration energy ratio and spectrum entropy is used to judge whether a measured point is healthy, defective, or an abnormal measurement point, and an algorithm that enables clearer detection of defective parts is proposed. When our technique was applied in an experiment on real concrete structures, the defective parts could be extracted more clearly, confirming the validity of the proposed algorithm.
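Spectrum entropy here acts as a whiteness measure; a minimal sketch of one common normalized definition follows (the paper's exact definition, and its combination rule with the vibration energy ratio, are not given in the abstract, so treat this as an assumption).

```python
import numpy as np

def spectral_entropy(x):
    """Normalized spectral entropy of a vibration signal.

    White, noise-like spectra (optical noise) give values near 1;
    resonant responses of a defective part concentrate power in a
    few bins and give lower values."""
    p = np.abs(np.fft.rfft(x)) ** 2       # power spectrum
    p = p / p.sum()                       # normalize to a probability mass
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(p.size)
```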
A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.
Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang
2016-12-07
The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
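A software model of the feature extraction idea (not the VLSI datapath itself) can be stated in a few lines; the function name and the use of peak magnitude as the split point are our reading of the abstract.

```python
import numpy as np

def peak_area_features(spike):
    """Two-feature spike descriptor: split the waveform at its peak
    and use the absolute area of each portion as a feature."""
    p = int(np.argmax(np.abs(spike)))               # index of peak magnitude
    return np.abs(spike[:p + 1]).sum(), np.abs(spike[p + 1:]).sum()
```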
LETTER TO THE EDITOR: Free-response operator characteristic models for visual search
NASA Astrophysics Data System (ADS)
Hutchinson, T. P.
2007-05-01
Computed tomography of diffraction enhanced imaging (DEI-CT) is a novel x-ray phase-contrast computed tomography technique applied to inspect weakly absorbing, low-Z samples. Refraction-angle images, which are extracted from a series of raw DEI images measured at different positions on the rocking curve of the analyser, can be regarded as the projections of DEI-CT; based on them, the distribution of the refractive index decrement in the sample can be reconstructed according to the principles of CT. How to combine extraction methods and reconstruction algorithms to obtain the most accurate reconstructed results is investigated in detail in this paper. Two comparisons, one among different extraction methods and one between 'two-step' algorithms and the Hilbert filtered backprojection (HFBP) algorithm, lead to the conclusion that the HFBP algorithm based on the maximum refraction-angle (MRA) method may be the best combination at present. Though all current extraction methods, including the MRA method, are approximate and cannot calculate very large refraction-angle values, the HFBP algorithm based on the MRA method provides quite acceptable estimates of the distribution of the refractive index decrement of the sample. The conclusion is supported by experimental results from the Beijing Synchrotron Radiation Facility.
Parallel, stochastic measurement of molecular surface area.
Juba, Derek; Varshney, Amitabh
2008-08-01
Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering. We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy.
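A minimal sequential sketch of such a stochastic estimator, in the spirit of Shrake-Rupley sampling, is below; the paper's GPU algorithm and its progressive refinement schedule differ in detail, so treat the function and parameters as illustrative. Each atom's loop is independent, which is what makes the approach map well to parallel hardware.

```python
import numpy as np

def stochastic_sasa(centers, radii, probe=1.4, n_samples=256, rng=None):
    """Stochastic estimate of solvent-accessible surface area: sample
    points on each probe-expanded atom sphere and count the fraction
    not buried inside any neighbouring sphere. More samples per atom
    refine the estimate, giving the progressive behaviour described."""
    rng = np.random.default_rng(rng)
    centers = np.asarray(centers, float)
    R = np.asarray(radii, float) + probe
    area = 0.0
    for i, (c, r) in enumerate(zip(centers, R)):
        v = rng.normal(size=(n_samples, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)     # uniform directions
        pts = c + r * v                                    # points on sphere i
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        d[:, i] = np.inf                                   # ignore the host atom
        exposed = (d >= R[None, :]).all(axis=1)            # outside all neighbours
        area += 4 * np.pi * r * r * exposed.mean()
    return area
```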
On-line determination of pork color and intramuscular fat by computer vision
NASA Astrophysics Data System (ADS)
Liao, Yi-Tao; Fan, Yu-Xia; Wu, Xue-Qian; Xie, Li-juan; Cheng, Fang
2010-04-01
In this study, the potential of computer vision for on-line determination of CIE L*a*b* color and intramuscular fat (IMF) content of pork was evaluated. Images of pork chops from 211 pig carcasses were captured while the samples were on a conveyor belt moving at 0.25 m·s⁻¹ to simulate an on-line environment. CIE L*a*b* and IMF content were measured with a colorimeter and by chemical extraction as references. The KSW algorithm combined with region selection was employed to eliminate the fat surrounding the longissimus dorsi muscle (MLD). The RGB values of the pork were counted, and five methods were applied to transform RGB values to CIE L*a*b* values. A region growing algorithm with multiple seed points was applied to mask out the IMF pixels within the intensity-corrected images. The performance of the proposed algorithms was verified by comparing the measured reference values with the quality characteristics obtained by image processing. The MLD region of six samples could not be identified using the KSW algorithm. Intensity nonuniformity of the pork surface in the image was eliminated efficiently, though the IMF regions of three corrected images failed to be extracted. Given the considerable variety of color and the complexity of the pork surface, the CIE L*, a* and b* color of the MLD could be predicted with correlation coefficients of 0.84, 0.54 and 0.47, respectively, and the IMF content could be determined with a correlation coefficient of more than 0.70. The study demonstrates that it is feasible to evaluate CIE L*a*b* values and IMF content on-line using computer vision.
Integrated feature extraction and selection for neuroimage classification
NASA Astrophysics Data System (ADS)
Fan, Yong; Shen, Dinggang
2009-02-01
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
The contour-buildup algorithm to calculate the analytical molecular surface.
Totrov, M; Abagyan, R
1996-01-01
A new algorithm is presented to calculate the analytical molecular surface, defined as a smooth envelope traced out by the surface of a probe sphere rolled over the molecule. The core of the algorithm is the sequential buildup of multi-arc contours on the van der Waals spheres. This algorithm yields a substantial reduction in both the memory and time requirements of surface calculations. Further, the contour-buildup principle is intrinsically "local", which makes calculations of partial molecular surfaces even more efficient. Additionally, the algorithm is equally applicable not only to convex patches but also to concave triangular patches, which may have complex multiple intersections. The algorithm permits the rigorous calculation of the full analytical molecular surface for a 100-residue protein in about 2 seconds on an SGI Indigo with an R4400 processor at 150 MHz, with the performance scaling almost linearly with protein size. The contour-buildup algorithm is an order of magnitude faster than the original Connolly algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parekh, V; Jacobs, MA
Purpose: Multiparametric radiological imaging is used for diagnosis in patients. The ability to automatically extract useful features specific to a patient's pathology would be a crucial step towards personalized medicine and assessing treatment options. In order to extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called multidimensional imaging radiomics-geodesics (MIRaGe). Methods: Seventy-six breast tumor patients who underwent 3T breast MRI were used for this study. We tested the MIRaGe algorithm's ability to extract features for classifying breast tumors as benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast enhanced MR imaging (DCE-MRI) and diffusion weighted imaging (DWI). The MIRaGe algorithm extracted radiomics-geodesics features (RGFs) from the multiparametric MRI datasets, enabling the method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using an SVM. The predictive power of the RGFs was tested and validated using k-fold cross-validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic curve characterizing RGFs, wash-in rate characterizing RGFs, wash-out rate characterizing RGFs and morphology characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging. The results demonstrate the power of the MIRaGe algorithm at automatically discovering useful feature representations directly from raw multiparametric MRI data. The MIRaGe informatics model provides a powerful tool with applicability in cancer diagnosis and possible extension to other pathologies. NIH (P50CA103175, 5P30CA006973 (IRAT), R01CA190299, U01CA140204), Siemens Medical Systems (JHU-2012-MR-86-01) and Nvidia Graphics Corporation.
Asteroid (21) Lutetia: Semi-Automatic Impact Craters Detection and Classification
NASA Astrophysics Data System (ADS)
Jenerowicz, M.; Banaszkiewicz, M.
2018-05-01
The need for an automated method, independent of lighting and surface conditions, for the identification and measurement of impact craters, and for a reliable and efficient tool implementing it, motivates our study. This paper presents a methodology for the detection of impact craters based on their spectral and spatial features. The analysis aims at evaluating the algorithm's ability to determine the spatial parameters of impact craters presented in a time series; in this way, time-consuming visual interpretation of images is reduced to special cases. The developed algorithm is tested on a set of OSIRIS high-resolution images of the asteroid Lutetia's surface, which is characterized by varied landforms and an abundance of craters created by collisions with smaller bodies of the solar system. The proposed methodology consists of three main steps: characterization of objects of interest on a limited set of data, semi-automatic extraction of impact craters performed on the full data set by applying Mathematical Morphology image processing (Serra, 1988; Soille, 2003), and finally, creation of libraries of spatial and spectral parameters for the extracted impact craters, i.e. the coordinates of the crater center, semi-major and semi-minor axes, shadow length and cross-section. The overall accuracy of the proposed method is 98%, the Kappa coefficient is 0.84, the correlation coefficient is ~0.80, the omission error is 24.11%, and the commission error is 3.45%. The obtained results show that methods based on Mathematical Morphology operators are effective even with a limited amount of data and low-contrast images.
Quantum algorithms for topological and geometric analysis of data
Lloyd, Seth; Garnerone, Silvano; Zanardi, Paolo
2016-01-01
Extracting useful information from large data sets can be a daunting task. Topological methods for analysing data sets provide a powerful technique for extracting such information. Persistent homology is a sophisticated tool for identifying topological features and for determining how such features persist as the data is viewed at different scales. Here we present quantum machine learning algorithms for calculating Betti numbers—the numbers of connected components, holes and voids—in persistent homology, and for finding eigenvectors and eigenvalues of the combinatorial Laplacian. The algorithms provide an exponential speed-up over the best currently known classical algorithms for topological data analysis. PMID:26806491
Research on improved edge extraction algorithm of rectangular piece
NASA Astrophysics Data System (ADS)
He, Yi-Bin; Zeng, Ya-Jun; Chen, Han-Xin; Xiao, San-Xia; Wang, Yan-Wei; Huang, Si-Yu
Traditional edge detection operators such as the Prewitt operator, the LOG operator and the Canny operator cannot meet the requirements of modern industrial measurement. This paper proposes an image edge detection algorithm based on an improved morphological gradient. It detects edges using structuring elements, which operate directly on the characteristic information of the image. By combining structuring elements of different shapes and sizes, the desired edge information can be extracted. Experimental results show that the algorithm extracts image edges well in the presence of noise, producing clearer and more detailed edges than previous edge detection algorithms.
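A minimal OpenCV sketch of the multi-structuring-element idea follows; the fusion rule (a plain average) and the element shapes and sizes are our assumptions, since the abstract does not specify them.

```python
import cv2
import numpy as np

def multi_se_gradient(img):
    """Edge map from a morphological gradient computed with several
    structuring elements and fused; combining shapes/sizes keeps thin
    detail while averaging suppresses single-pixel noise responses."""
    ses = [cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)),
           cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3)),
           cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))]
    grads = [cv2.morphologyEx(img, cv2.MORPH_GRADIENT, se) for se in ses]
    return np.mean(grads, axis=0).astype(np.uint8)   # fused edge strength
```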
Point coordinates extraction from localized hyperbolic reflections in GPR data
NASA Astrophysics Data System (ADS)
Ristić, Aleksandar; Bugarinović, Željko; Vrtunski, Milan; Govedarica, Miro
2017-09-01
In this paper, we propose an automated detection algorithm for localizing the apexes and prong points of the hyperbolic reflections that arise in GPR scanning. The objects of interest are cylindrical underground utilities, which have a distinctive hyperbolic reflection signature in the radargram. The algorithm applies a trained neural network to the radargram as a raster image, extracting segments of interest that contain hyperbolic reflections; this significantly reduces the amount of data for further analysis. The extracted segments define the zones for localizing the apexes. This is followed by extraction of points on the prongs of the hyperbolic reflections, carried out until a stopping criterion is satisfied, regardless of the borders of the segment of interest. In the final step, false hyperbolic reflections caused by constructive interference are classified and removed. The algorithm is implemented in the MATLAB environment and has several advantages. It successfully recognizes true hyperbolic reflections in radargram images and extracts their coordinates, with a very low rate of false detections and without prior knowledge of the number of hyperbolic reflections or buried utilities. It can be applied to radargrams containing single and multiple hyperbolic reflections, including intersected, distorted, and incomplete (asymmetric) ones, even in the presence of higher noise levels. A special feature of the algorithm is the procedure developed for analyzing and removing false hyperbolic reflections generated by constructive interference from reflectors associated with the utilities. The algorithm was tested on a number of synthetic radargrams and radargrams acquired in field surveys. To illustrate its performance, we present its characteristics on five representative radargrams obtained under real conditions; these examples cover different acquisition scenarios by varying the number of buried objects, their disposition, size, and noise level. The most complex example was also tested as a synthetic radargram generated by gprMax. The processing time is 0.1 to 0.3 s for examples with one or two hyperbolic reflections and 2.2 to 4.9 s for the most complex examples. Overall, the experimental results show that the proposed algorithm exhibits promising performance in terms of both utility detection and processing speed.
ID card number detection algorithm based on convolutional neural network
NASA Astrophysics Data System (ADS)
Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan
2018-04-01
In this paper, a new detection algorithm based on a convolutional neural network is presented in order to realize fast and convenient ID information extraction in multiple scenarios. The algorithm uses a mobile device running the Android operating system to locate and extract the ID number. It exploits the characteristic color distribution of the ID card to select an appropriate channel component; applies threshold segmentation, noise processing and morphological processing to binarize the image; and corrects tilted images using rotation and projection. Finally, single characters are extracted by the projection method and recognized using a convolutional neural network. Tests show that processing a single ID number image, from extraction to identification, takes about 80 ms with an accuracy of about 99%, so the method can be applied in real production and living environments.
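The final projection-based character extraction step can be sketched as follows; `split_characters` is a hypothetical helper operating on the binarized, tilt-corrected number strip.

```python
import numpy as np

def split_characters(binary_row):
    """Segment single characters from a binarized ID-number strip by
    vertical projection: column sums drop to zero in the gaps between
    digits, and each nonzero run is one character."""
    proj = (binary_row > 0).sum(axis=0)         # vertical projection profile
    on = proj > 0
    # Pad so every run of nonzero columns has a detectable start and end.
    d = np.diff(np.r_[0, on.astype(int), 0])
    starts, ends = np.flatnonzero(d == 1), np.flatnonzero(d == -1)
    return [binary_row[:, s:e] for s, e in zip(starts, ends)]
```

Each returned slice can then be resized to the CNN's input shape and classified independently.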
Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data
NASA Astrophysics Data System (ADS)
Parida, G.; Rajan, K. S.
2017-05-01
The current methods of object segmentation, extraction and classification of aerial LiDAR data are manual and tedious tasks. This work proposes a technique for object segmentation from LiDAR data. A bottom-up geometric rule-based approach was initially used to segment buildings from LiDAR datasets. For curved wall surfaces, localized surface normals were compared to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. An advantage of the proposed approach is that it depends on no data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, unlike supervised techniques. It focuses on extracting the walls of buildings to construct the footprint, rather than on the roof; this focus on walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of buildings, with further scope to generate 3D models. The proposed method can thus serve as a tool to derive building footprints in urban landscapes, supporting urban planning and the smart-cities endeavour.
NASA Astrophysics Data System (ADS)
AllahTavakoli, Yahya; Safari, Abdolreza; Vaníček, Petr
2016-12-01
This paper resurrects a version of Poisson's Partial Differential Equation (PDE) associated with the gravitational field at the Earth's surface and illustrates how the PDE can be used to extract the mass density of the Earth's topography from land-based gravity data. We first propound a theorem that mathematically introduces this version of Poisson's PDE adapted to the Earth's surface, and then use the PDE to develop a method for approximating the terrain mass density. We also carry out a real case study showing how the proposed approach can be applied to a set of land-based gravity data. In the case study, the method is summarized as an algorithm and applied to a set of gravity stations located along a part of the north coast of the Persian Gulf in southern Iran. The results were numerically validated against rock samplings as well as a geological map. The method was also compared with two conventional methods of mass density reduction. The numerical experiments indicate that the Poisson PDE at the Earth's surface can extract the mass density from land-based gravity data and provides an alternative and somewhat more precise method of estimating the terrain mass density.
Feature extraction and classification algorithms for high dimensional data
NASA Technical Reports Server (NTRS)
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from the decision boundaries. A characteristic of the proposed method arises from noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances, as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used for both parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second-order statistics in analyzing high dimensional data is recognized, and by investigating the characteristics of such data, an explanation is suggested for why the second-order statistics must be taken into account. This creates a need to represent the second-order statistics, and a method to visualize statistics using a color code is proposed. By representing statistics with color coding, one can easily extract and compare the first- and second-order statistics.
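The decision-boundary idea can be illustrated in its simplest special case: for two Gaussian classes with equal covariance, the boundary is a hyperplane whose normal Σ⁻¹(μ₁ − μ₂) carries all the discriminant information, so one feature suffices. The sketch below demonstrates that special case only, not the full decision boundary feature extraction algorithm; the data are synthetic.

```python
# Illustration of the boundary-normal idea for two equal-covariance Gaussian
# classes (a special case, not the general DBFE algorithm); synthetic data.
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[2.0, 0.6], [0.6, 1.0]])
x1 = rng.multivariate_normal([0.0, 0.0], cov, 500)
x2 = rng.multivariate_normal([2.0, 1.0], cov, 500)

mu1, mu2 = x1.mean(axis=0), x2.mean(axis=0)
sigma = np.cov(np.vstack([x1 - mu1, x2 - mu2]).T)
w = np.linalg.solve(sigma, mu1 - mu2)        # normal to the decision boundary

z1, z2 = x1 @ w, x2 @ w                      # one feature instead of two
cut = 0.5 * (z1.mean() + z2.mean())
acc = 0.5 * ((z1 > cut).mean() + (z2 <= cut).mean())
print(f"accuracy with the single boundary-normal feature: {acc:.3f}")
```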
Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data
NASA Astrophysics Data System (ADS)
Spore, N.; Brodie, K. L.; Swann, C.
2014-12-01
Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe, compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make dunes more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20 km of coastline near Duck, North Carolina, which encompassed a variety of foredune forms in close proximity to each other. This abstract focuses on the tools developed for the automated extraction of morphological features from terrestrial lidar data, while the response of the dune is presented by Brodie and Spore in an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to the time and personnel required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and bare-earth points are then interpolated onto a rectilinear 0.5 m grid, creating a high-resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe; beach and berm morphology is then extracted shoreward of the dune toe, and foredune morphology landward of it. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face, and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
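A minimal sketch of the curvature-based dune-toe step on a single transect follows; the profile, grid spacing and the choice of the curvature maximum are illustrative assumptions, not the authors' exact criteria.

```python
# Sketch: locate the dune toe on one cross-shore transect as the point of
# maximum profile curvature (profile and parameters are hypothetical).
import numpy as np

dx = 0.5                                          # grid spacing (m)
x = np.arange(0, 60, dx)
z = 1.5 + 4.0 / (1.0 + np.exp(-(x - 35) / 3.0))   # synthetic beach-to-dune profile

dz = np.gradient(z, dx)
d2z = np.gradient(dz, dx)
curvature = d2z / (1.0 + dz ** 2) ** 1.5          # signed curvature of z(x)

toe = np.argmax(curvature)                        # concave-up bend at the toe
print(f"dune toe at x = {x[toe]:.1f} m, z = {z[toe]:.2f} m")
```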
NASA Astrophysics Data System (ADS)
Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas
2016-10-01
In the past two decades Object-Based Image Analysis (OBIA) established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image based sources such as Airborne Laser Scanner (ALS) point clouds. ALS data is represented in the form of a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data in order to identify and extract buildings as 2D polygons representing roof outlines in a top-down mapping approach. We rasterized the ALS data into a height raster to generate a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Further objects were generated in conjunction with point statistics from the linked point cloud. With the use of class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). In order to demonstrate adaptation-free transferability to another data set, the algorithm was applied "as is" to the ISPRS Benchmarking data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy above 80%). The very high performance within the ISPRS Benchmark without any modification of the algorithm and without any adaptation of parameters is particularly noteworthy.
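The rasterization step can be sketched as follows: grid the ALS points and keep the highest return per cell to form a DSM. The file name and cell size are hypothetical; this is a minimal stand-in for the paper's OBIA workflow, not its implementation.

```python
# Sketch of rasterizing an ALS point cloud into a DSM (highest return per
# cell); file name and resolution are hypothetical.
import numpy as np

pts = np.loadtxt("als_points.xyz")           # hypothetical x, y, z columns
cell = 1.0                                   # raster resolution (m), assumed
ix = ((pts[:, 0] - pts[:, 0].min()) / cell).astype(int)
iy = ((pts[:, 1] - pts[:, 1].min()) / cell).astype(int)

dsm = np.full((iy.max() + 1, ix.max() + 1), -np.inf)
np.maximum.at(dsm, (iy, ix), pts[:, 2])      # highest return per cell
dsm[np.isinf(dsm)] = np.nan                  # cells without returns
# Subtracting a ground DEM from this DSM gives object heights (nDSM), from
# which building candidates can be thresholded.
```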
Least significant qubit algorithm for quantum images
NASA Astrophysics Data System (ADS)
Sang, Jianzhi; Wang, Shen; Li, Qiong
2016-11-01
To study the feasibility of the classical image least significant bit (LSB) information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). Firstly, by designing a three-qubit comparator and unitary operators, the reasonability and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. The quantum circuit of the LSQb information hiding algorithm is also illustrated. Furthermore, the secret-extraction algorithm and circuit are illustrated using controlled-swap gates. The two merits of our algorithm are: (1) it is completely blind, and (2) when extracting the secret binary qubits, it needs neither quantum measurement operations nor any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.
NASA Astrophysics Data System (ADS)
Namani, Ravi
Mechanical properties are essential for understanding diseases that afflict various soft tissues, such as osteoarthritic cartilage and hypertension, which alters cardiovascular arteries. Although the linear elastic modulus is routinely measured for hard materials, standard methods are not available for extracting the nonlinear elastic, linear elastic and time-dependent properties of soft tissues. Consequently, the focus of this work is to develop indentation methods for soft biological tissues; since analytical solutions are not available in the general context, finite element simulations are used. First, parametric studies of finite indentation of hyperelastic layers are performed to examine whether indentation has the potential to identify nonlinear elastic behavior. To answer this, spherical, flat-ended conical and cylindrical tips are examined, and the influence of thickness is exploited. The influence of the specimen/substrate boundary condition (slip or no-slip) is also clarified. Second, a new inverse method, the hyperelastic extraction algorithm (HPE), was developed to extract two nonlinear elastic parameters from the indentation force-depth data, which is the basic measurement in an indentation test. The accuracy of the extracted parameters and the influence of measurement noise on this accuracy were obtained. This showed that the standard Berkovich tip can extract only one parameter with sufficient accuracy, since the indentation force-depth curve has limited sensitivity to both nonlinear elastic parameters. Third, indentation methods for testing tissues from small animals were explored. New methods for flat-ended conical tips are derived; these account for practical test issues such as the difficulty of locating the surface of soft specimens. Finite element simulations are also used to elucidate the influence of specimen curvature on the indentation force-depth curve. Fourth, the influence of inhomogeneity and material anisotropy on the extracted "average" linear elastic modulus was studied. The focus here is on murine tibial cartilage, since recent experiments have shown that the modulus measured with a 15 μm tip is considerably larger than that obtained with a 90 μm tip. It is shown that a depth-dependent modulus could give rise to such a size effect. Lastly, parametric studies were performed within the small-strain setting to understand the influence of permeability and viscoelastic properties on the indentation stress-relaxation response. The focus here is on cartilage, and specific test protocols (single-step vs. multi-step stress relaxation) are explored. An inverse algorithm was developed to extract the poroviscoelastic parameters. A sensitivity study using this algorithm shows that the instantaneous elastic modulus (a measure of the viscous relaxation) can be extracted with very good accuracy, but the permeability and long-time relaxation constant cannot. The thesis concludes with the implications of these studies, and the potential and limitations of indentation tests for studying cartilage and other soft tissues are discussed.
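The general flavor of the inverse step, fitting a force-depth curve to recover a modulus, can be illustrated with the classical Hertz spherical-contact model F = (4/3)·E*·√R·d^{3/2}. Note this is a small-strain linear-elastic stand-in, not the FEM-based hyperelastic (HPE) algorithm of the thesis; all values below are synthetic.

```python
# Generic illustration of inverse parameter extraction: fit a spherical-tip
# force-depth curve with the Hertz model (a linear-elastic stand-in for the
# thesis's hyperelastic FEM-based algorithm; synthetic data).
import numpy as np
from scipy.optimize import curve_fit

R = 0.5e-3                                   # spherical tip radius (m), assumed

def hertz(d, e_star):
    return (4.0 / 3.0) * e_star * np.sqrt(R) * d ** 1.5

depth = np.linspace(0, 50e-6, 25)            # indentation depth (m)
rng = np.random.default_rng(1)
force = hertz(depth, 5e3) + rng.normal(0, 1e-6, depth.size)  # ~5 kPa "tissue"

(e_star,), _ = curve_fit(hertz, depth, force, p0=[1e3])
print(f"fitted reduced modulus E* = {e_star:.0f} Pa")
```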
Buried landmine detection using multivariate normal clustering
NASA Astrophysics Data System (ADS)
Duston, Brian M.
2001-10-01
A Bayesian classification algorithm is presented for discriminating buried land mines from buried and surface clutter in Ground Penetrating Radar (GPR) signals. This algorithm is based on multivariate normal (MVN) clustering, where feature vectors are used to identify populations (clusters) of mines and clutter objects. The features are extracted from two-dimensional images created from ground penetrating radar scans. MVN clustering is used to determine the number of clusters in the data and to create probability density models for target and clutter populations, producing the MVN clustering classifier (MVNCC). The Bayesian Information Criteria (BIC) is used to evaluate each model to determine the number of clusters in the data. An extension of the MVNCC allows the model to adapt to local clutter distributions by treating each of the MVN cluster components as a Poisson process and adaptively estimating the intensity parameters. The algorithm is developed using data collected by the Mine Hunter/Killer Close-In Detector (MH/K CID) at prepared mine lanes. The Mine Hunter/Killer is a prototype mine detecting and neutralizing vehicle developed for the U.S. Army to clear roads of anti-tank mines.
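MVN clustering with BIC-based selection of the number of clusters can be sketched with scikit-learn's GaussianMixture; this is a generic stand-in, not the MVNCC implementation, and the feature vectors below are synthetic.

```python
# Sketch of MVN clustering with BIC model selection (generic scikit-learn
# stand-in for the paper's classifier; feature vectors are synthetic).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
mines = rng.multivariate_normal([3.0, 1.0], [[0.3, 0.1], [0.1, 0.2]], 80)
clutter = rng.multivariate_normal([0.5, 0.4], [[0.6, 0.0], [0.0, 0.5]], 200)
features = np.vstack([mines, clutter])       # stand-in GPR image features

# Fit MVN mixtures with 1..6 components and keep the one minimizing BIC.
models = [GaussianMixture(k, covariance_type="full", random_state=0).fit(features)
          for k in range(1, 7)]
best = min(models, key=lambda m: m.bic(features))
print(f"BIC selects {best.n_components} clusters")
```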
Zhang, Liguo; Sun, Jianguo; Yin, Guisheng; Zhao, Jing; Han, Qilong
2015-01-01
In non-destructive testing (NDT) of metal welds, weld line tracking is usually performed outdoors, where the structured light sources are always disturbed by various noises, such as sunlight, shadows, and reflections from the weld line surface. In this paper, we design a cross structured light (CSL) to detect the weld line and propose a robust laser stripe segmentation algorithm to overcome the noises in structured light images. An adaptive monochromatic space is applied to preprocess the image with ambient noises. In the monochromatic image, the laser stripe obtained is recovered as a multichannel signal by minimum entropy deconvolution. Lastly, the stripe centre points are extracted from the image. In experiments, the CSL sensor and the proposed algorithm are applied to guide a wall climbing robot inspecting the weld line of a wind power tower. The experimental results show that the CSL sensor can capture the 3D information of the welds with high accuracy, and the proposed algorithm contributes to the weld line inspection and the robot navigation. PMID:26110403
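A common way to extract stripe centre points, once the image has been preprocessed, is a per-column intensity centroid (the gray-gravity method). The sketch below shows that generic step only; it is not the paper's minimum-entropy-deconvolution pipeline, and the intensity threshold is an assumption.

```python
# Sketch of stripe-centre extraction by per-column intensity centroid
# (gray-gravity method); threshold value is illustrative.
import numpy as np

def stripe_centers(img, min_intensity=30):
    """Sub-pixel stripe centre (row, col) per column of a stripe image."""
    rows = np.arange(img.shape[0], dtype=float)
    centers = []
    for col in range(img.shape[1]):
        w = img[:, col].astype(float)
        w[w < min_intensity] = 0.0         # suppress background pixels
        if w.sum() > 0:
            centers.append((w @ rows / w.sum(), col))
    return np.array(centers)
```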
Survey and analysis of multiresolution methods for turbulence data
Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; ...
2015-11-10
This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and Dual-tree wavelets, curvelets and surfacelets, to capture the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a Direct Numerical Simulation of buoyancy-driven turbulence on a 512³ mesh, with an Atwood number A = 0.05 and turbulent Reynolds number Re_t = 1800, and the methods are tested against quantities pertaining to both the velocity and active scalar (density) fields and their derivatives, spectra, and the properties of constant-density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for handling large datasets, as well as simulation algorithms based on multi-resolution methods. The final section provides recommendations for the best decomposition algorithms based on several metrics related to computational efficiency and preservation of turbulence properties using a reduced set of coefficients.
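The truncated-coefficient idea can be sketched in two dimensions with PyWavelets: decompose a field, keep only the largest coefficients, and reconstruct. This is a 2-D illustrative stand-in for the paper's 3-D comparisons; the field, wavelet and retention fraction are assumptions.

```python
# Sketch of truncated wavelet compression with PyWavelets (2-D stand-in for
# the paper's 3-D turbulence fields); parameters are illustrative.
import numpy as np
import pywt

x = np.linspace(0.0, 1.0, 256)
field = np.sin(8 * np.pi * x)[:, None] * np.cos(6 * np.pi * x)[None, :]

coeffs = pywt.wavedec2(field, "db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)

keep = 0.05                                   # retain the largest 5% of coeffs
thresh = np.quantile(np.abs(arr), 1.0 - keep)
arr = np.where(np.abs(arr) >= thresh, arr, 0.0)

rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                    "db4")
print(f"relative L2 error: {np.linalg.norm(rec - field) / np.linalg.norm(field):.3e}")
```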
A satellite AOT derived from the ground sky transmittance measurements
NASA Astrophysics Data System (ADS)
Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Tan, K. C.; Wong, C. J.; Saleh, N. Mohd.
2008-10-01
The optical properties of aerosols such as smoke from burning vary due to aging processes, and these particles reach larger sizes at high concentrations. The objectives of this study are to develop and evaluate an algorithm for estimating atmospheric optical thickness from Landsat TM imagery. This study measured the sky transmittance at the ground using a handheld spectroradiometer over a wide wavelength spectrum to retrieve atmospheric optical thickness. The in situ measurements of atmospheric transmittance were collected simultaneously with the acquisition of the remotely sensed satellite data. The digital numbers for the three visible bands corresponding to the in situ locations were extracted and converted into reflectance values. The surface reflectance was subtracted from the reflectance measured by the satellite to obtain the atmospheric reflectance, and these atmospheric reflectance values were used to calibrate the AOT algorithm. This study developed an empirical method to estimate AOT values from the sky transmittance values. Finally, an AOT map was generated using the proposed algorithm and colour-coded for visual interpretation.
NASA Astrophysics Data System (ADS)
Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie
2018-01-01
The wearing degree of the wheel-set tread is one of the main factors influencing the safety and stability of a running train. The geometrical parameters mainly include flange thickness and flange height. Line-structured laser light was projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit, with image acquisition performed in hardware-interrupt mode. A highly efficient parallel segmentation algorithm based on CUDA is proposed. The algorithm first divides the image into smaller squares and then extracts the squares belonging to the target by fusing the k-means and STING clustering image segmentation algorithms. The segmentation time is less than 0.97 ms, a considerable speed-up over serial CPU computation, which greatly improves the real-time image processing capacity. When a wheel set runs past at limited speed, the system placed along the railway line can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.
Bio-Optics Based Sensation Imaging for Breast Tumor Detection Using Tissue Characterization
Lee, Jong-Ha; Kim, Yoon Nyun; Park, Hee-Jun
2015-01-01
A tissue inclusion parameter estimation method is proposed to measure stiffness as well as geometric parameters. The estimation is performed based on tactile data obtained at the surface of the tissue using an optical tactile sensation imaging system (TSIS). A forward algorithm is designed to comprehensively predict the tactile data from the mechanical properties of a tissue inclusion using finite element modeling (FEM). This forward information is used to develop an inversion algorithm for extracting the size, depth, and Young's modulus of a tissue inclusion from the tactile data. We utilize an artificial neural network (ANN) for the inversion algorithm. The proposed estimation method was validated on a realistic tissue phantom with stiff inclusions. The experimental results showed that the proposed method can measure the size, depth, and Young's modulus of a tissue inclusion with 0.58%, 3.82%, and 2.51% relative errors, respectively. These results suggest that the proposed method has the potential to become a useful screening and diagnostic method for breast cancer. PMID:25785306
Feature extraction algorithm for space targets based on fractal theory
NASA Astrophysics Data System (ADS)
Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin
2007-11-01
To extend the life of satellites and reduce launch and operating costs, satellite servicing, including on-orbit repairs, upgrades and refueling, is becoming much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking reliability requirements of image tracking for space surveillance systems. Machine vision has been applied to relative-pose estimation for spacecraft, for which feature extraction is the basis. This paper presents a fractal-geometry-based edge extraction algorithm that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method computes the fractal-dimension distribution of the gray-level image using the Differential Box-Counting (DBC) approach of fractal theory to suppress noise. After this, consecutive edges are detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target but also preserves the inner details. Meanwhile, edge extraction is performed only in the moving area, greatly reducing computation. Simulation results compare edge detection using the presented method with other detection methods; the results indicate that the presented algorithm is a valid approach to the relative-pose problem for spacecraft.
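The Differential Box-Counting estimate itself is standard and can be sketched compactly; the box sizes and test image below are illustrative, and this is not the paper's full edge-extraction pipeline.

```python
# Minimal sketch of the Differential Box-Counting (DBC) fractal-dimension
# estimate for a gray-level image; box sizes and test image are illustrative.
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    img = img.astype(float)
    M = min(img.shape)
    G = img.max() + 1.0                      # gray-level range
    log_n, log_s = [], []
    for s in sizes:
        h = s * G / M                        # box height in gray levels
        n = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                n += int(block.max() // h) - int(block.min() // h) + 1
        log_n.append(np.log(n))
        log_s.append(np.log(M / s))
    slope, _ = np.polyfit(log_s, log_n, 1)   # slope is the fractal dimension
    return slope

img = np.random.default_rng(0).integers(0, 256, (128, 128))
print(f"estimated fractal dimension: {dbc_fractal_dimension(img):.2f}")
```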
Multimodal correlation and intraoperative matching of virtual models in neurosurgery
NASA Technical Reports Server (NTRS)
Ceresole, Enrico; Dalsasso, Michele; Rossi, Aldo
1994-01-01
The multimodal correlation between different diagnostic exams, the intraoperative calibration of pointing tools and the correlation of the patient's virtual models with the patient himself are examples, taken from the biomedical field, of a single problem: determining the relationship linking representations of the same object in different reference frames. Several methods have been developed to determine this relationship; among them, the surface matching method gives the patient minimum discomfort, with errors compatible with the required precision. The surface matching method has been successfully applied to the multimodal correlation of diagnostic exams such as CT, MR, PET and SPECT. Algorithms for automatic segmentation of diagnostic images have been developed to extract the reference surfaces from the diagnostic exams, whereas the surface of the patient's skull has been monitored, in our approach, by means of a laser sensor mounted on the end effector of an industrial robot. An integrated system for virtual planning and real-time execution of surgical procedures has been realized.
NASA Technical Reports Server (NTRS)
Watson, Ken; Hummer-Miller, Susanne; Kruse, Fred A.
1986-01-01
A theoretical radiance model was employed together with laboratory data on a suite of igneous rocks to evaluate various algorithms for processing Thermal Infrared Multispectral Scanner (TIMS) data. Two aspects of the general problem were examined: extraction of emissivity information from the observed TIMS radiance data, and how to use emissivity data in a geologically meaningful way. Four algorithms were evaluated for appropriate band combinations of TIMS data acquired on both day and night overflights of the Tuscarora Mountains, including the Carlin gold deposit, in north-central Nevada. Analysis of a color-composited PC-decorrelated image (Bands 3, 4, 5--blue/green/red) of the Northern Grapevine Mountains, Nevada, area showed some useful correlation with the regional geology. The thermal infrared region provides fundamental spectral information that can be used to discriminate the major rock types occurring on the Earth's surface.
An FBG acoustic emission source locating system based on PHAT and GA
NASA Astrophysics Data System (ADS)
Shen, Jing-shi; Zeng, Xiao-dong; Li, Wei; Jiang, Ming-shun
2017-09-01
Using acoustic emission locating technology to monitor the health of a structure is important for ensuring the continuous and healthy operation of complex engineering structures and large mechanical equipment. In this paper, four fiber Bragg grating (FBG) sensors are used to establish a sensor array for locating the acoustic emission source. Firstly, nonlinear locating equations are established based on the principle of acoustic emission, and their solution is transformed into an optimization problem. Secondly, a time-difference extraction algorithm based on phase transform (PHAT) weighted generalized cross-correlation provides the necessary conditions for accurate localization. Finally, a genetic algorithm (GA) is used to solve the optimization model. Twenty points were tested on a marble plate surface, and the results show that the absolute locating error is within 10 mm, demonstrating the accuracy of this locating method.
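The PHAT-weighted generalized cross-correlation step is a standard time-difference estimator and can be sketched as follows; the sampling rate and test signals are hypothetical stand-ins for FBG channel data.

```python
# Sketch of the PHAT-weighted generalized cross-correlation (GCC-PHAT) TDOA
# estimate that feeds the locating equations; signals are synthetic.
import numpy as np

def gcc_phat(sig, ref, fs):
    """PHAT-weighted generalized cross-correlation TDOA estimate (seconds)."""
    n = sig.size + ref.size
    S = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    S /= np.abs(S) + 1e-12                 # PHAT weighting keeps phase only
    cc = np.fft.irfft(S, n)
    cc = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))
    return (np.argmax(np.abs(cc)) - n // 2) / fs

fs = 1_000_000                             # hypothetical 1 MHz sampling
rng = np.random.default_rng(0)
burst = rng.standard_normal(200)
s1 = np.concatenate([np.zeros(100), burst, np.zeros(300)])
s2 = np.concatenate([np.zeros(130), burst, np.zeros(270)])   # 30-sample delay
print(f"estimated time difference: {gcc_phat(s2, s1, fs) * 1e6:.1f} microseconds")
```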
NASA Astrophysics Data System (ADS)
Xu, Xiaoqing; Wang, Yawei; Ji, Ying; Xu, Yuanyuan; Xie, Ming
2018-01-01
A new method to extract quantitative phases for each wavelength from three-wavelength in-line phase-shifting interferograms is proposed. Firstly, seven interferograms with positive and negative 2π phase shifts are sequentially captured using the phase-shifting technique. Secondly, six dc-term-suppressed intensities are obtained by means of an algebraic algorithm. Finally, the wrapped phases at the three wavelengths are acquired simultaneously by adding and subtracting these six intensities and employing the trigonometric function method. Combining this method with a three-wavelength phase-unwrapping method, the surface morphology can be obtained with an increased ambiguity-free range at the synthetic beat wavelength, while maintaining the low-noise precision of a single-wavelength measurement. We illustrate the principle of this algorithm, and simulated experiments on a spherical cap and a HeLa cell are conducted to validate the proposed method.
Modeling complex biological flows in multi-scale systems using the APDEC framework
NASA Astrophysics Data System (ADS)
Trebotich, David
2006-09-01
We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.
Citrus fruit recognition using color image analysis
NASA Astrophysics Data System (ADS)
Xu, Huirong; Ying, Yibin
2004-10-01
An algorithm for the automatic recognition of citrus fruit on the tree was developed. Citrus fruits differ in color from the leaf and branch portions. Fifty-three color images of natural citrus-grove scenes were digitized and analyzed for red, green, and blue (RGB) color content. The color characteristics of target surfaces (fruits, leaves, or branches) were extracted using a region of interest (ROI) tool. Several types of contrast color indices were designed and tested. In this study, the fruit image was enhanced using the (R-B) contrast color index, because results show that the fruit has the highest color difference among the objects in the image. A dynamic threshold function was derived from this color model and used to distinguish citrus fruit from the background. The results show that the algorithm worked well under frontlighting or backlighting conditions. However, misclassifications occur when the fruit or the background is under brighter sunlight.
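The (R-B) index and a simple dynamic threshold can be sketched in a few lines of OpenCV; the input file name and the mean-plus-one-standard-deviation threshold are illustrative assumptions, not the paper's derived threshold function.

```python
# Sketch of (R-B) contrast-index fruit masking; the threshold rule here is
# illustrative, not the paper's derived dynamic threshold function.
import cv2
import numpy as np

img = cv2.imread("citrus_scene.png")            # hypothetical grove image
b, g, r = cv2.split(img.astype(np.int16))
index = r - b                                   # fruit shows the largest R-B

t = index.mean() + index.std()                  # illustrative dynamic threshold
mask = (index > t).astype(np.uint8) * 255
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```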
Multi scales based sparse matrix spectral clustering image segmentation
NASA Astrophysics Data System (ADS)
Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin
2018-04-01
In image segmentation, spectral clustering algorithms must adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract image features at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
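One common way to keep the similarity matrix sparse is a k-nearest-neighbor affinity graph, as in the scikit-learn sketch below. This is a generic stand-in, not the authors' multi-scale construction; the features are synthetic placeholders for per-pixel descriptors.

```python
# Sketch of sparse-affinity spectral clustering via a k-NN graph (generic
# scikit-learn stand-in; features are synthetic per-pixel descriptors).
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0.0, 0.3, (300, 4)),   # stand-in multi-scale
                      rng.normal(2.0, 0.3, (300, 4))])  # per-pixel features

sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                        n_neighbors=10, assign_labels="kmeans", random_state=0)
labels = sc.fit_predict(features)
print(np.bincount(labels))                               # two balanced clusters
```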
Skeletonization with hollow detection on gray image by gray weighted distance transform
NASA Astrophysics Data System (ADS)
Bhattacharya, Prabir; Qian, Kai; Cao, Siqi; Qian, Yi
1998-10-01
A skeletonization algorithm that can process non-uniformly distributed gray-scale images with hollows is presented. The algorithm is based on the Gray Weighted Distance Transform. The process includes a preliminary phase investigating the hollows in the gray-scale image; these hollows are treated as topological constraints for the skeleton structure depending on whether their depth is statistically significant. We then extract the resulting skeleton, which carries meaningful information for understanding the object in the image. This improved algorithm can overcome the possible misinterpretation of some complicated images in the extracted skeleton, especially images with asymmetric hollows and asymmetric features. The algorithm can be executed on a parallel machine, as all operations are executed locally. Some examples are discussed to illustrate the algorithm.
A Spiking Neural Network in sEMG Feature Extraction.
Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor
2015-11-03
We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated; results showed roughly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control.
Cai, Tianxi; Karlson, Elizabeth W.
2013-01-01
Objectives To test whether data extracted from full text patient visit notes from an electronic medical record (EMR) would improve the classification of PsA compared to an algorithm based on codified data. Methods From the > 1,350,000 adults in a large academic EMR, all 2318 patients with a billing code for PsA were extracted and 550 were randomly selected for chart review and algorithm training. Using codified data and phrases extracted from narrative data using natural language processing, 31 predictors were extracted and three random forest algorithms trained using coded, narrative, and combined predictors. The receiver operator curve (ROC) was used to identify the optimal algorithm and a cut point was chosen to achieve the maximum sensitivity possible at a 90% positive predictive value (PPV). The algorithm was then used to classify the remaining 1768 charts and finally validated in a random sample of 300 cases predicted to have PsA. Results The PPV of a single PsA code was 57% (95%CI 55%–58%). Using a combination of coded data and NLP the random forest algorithm reached a PPV of 90% (95%CI 86%–93%) at sensitivity of 87% (95% CI 83% – 91%) in the training data. The PPV was 93% (95%CI 89%–96%) in the validation set. Adding NLP predictors to codified data increased the area under the ROC (p < 0.001). Conclusions Using NLP with text notes from electronic medical records improved the performance of the prediction algorithm significantly. Random forests were a useful tool to accurately classify psoriatic arthritis cases to enable epidemiological research. PMID:20701955
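The cut-point selection described, maximizing sensitivity subject to a 90% PPV, is a generic operating-point choice and can be sketched with scikit-learn; the data below are synthetic and the in-sample evaluation is for illustration only, not the study's training/validation protocol.

```python
# Sketch of choosing a probability cut-point that maximizes sensitivity
# subject to PPV >= 90% (generic recipe; synthetic data, in-sample only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(550, 31))                  # 31 coded + NLP predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=550)) > 0.8

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
prob = clf.predict_proba(X)[:, 1]

precision, recall, thresh = precision_recall_curve(y, prob)
ok = precision[:-1] >= 0.90                     # enforce PPV >= 90%
cut = thresh[ok][np.argmax(recall[:-1][ok])]    # maximize sensitivity
print(f"chosen probability cut-point: {cut:.2f}")
```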
Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data
Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin
2014-01-01
Burn scar extraction using remote sensing data is an efficient way to precisely evaluate the burned area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method utilizes the advantages of different features in remote sensing images and considers the practical need to extract burn scars rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Level Set Chan-Vese (C-V) model with a new initial curve obtained from a binary image produced by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm and the Fuzzy C-means (FCM) algorithm show that the proposed approach extracts the outline curve of the fire burn scar effectively and exactly. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563
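The NBR component of the difference image is a standard band ratio and can be sketched directly; the band arrays are assumed to be NIR and SWIR reflectances, and the small epsilon is added only to avoid division by zero.

```python
# Sketch of the Normalized Burn Ratio and its pre/post-fire difference
# (band arrays are hypothetical Landsat NIR and SWIR reflectances).
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance arrays."""
    return (nir - swir) / (nir + swir + 1e-6)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Pre-fire minus post-fire NBR; high values indicate burn scars."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
```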
An improved dehazing algorithm of aerial high-definition image
NASA Astrophysics Data System (ADS)
Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying
2016-01-01
For unmanned aerial vehicle (UAV) imaging, the sensor cannot acquire high-quality images in fog and haze. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts edges from the crudely estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold to divide the crude transmission map into different areas and applies different guided filtering to each area to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filter, while its average computation time is around 40% of that algorithm's and the detection ability for UAV images in fog and haze is effectively improved.
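The dark channel prior on which the method builds can be sketched as a per-pixel RGB minimum followed by a local minimum filter; the patch size and the omega value in the comment are conventional assumptions, not parameters from this paper.

```python
# Sketch of the dark channel prior underlying the improved algorithm
# (patch size assumed; not the paper's full pipeline).
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """img: H x W x 3 float array scaled to [0, 1]."""
    return minimum_filter(img.min(axis=2), size=patch)

# The atmospheric light A is commonly estimated from the brightest pixels of
# the dark channel, and a crude transmission map then follows as
#   t = 1 - omega * dark_channel(img / A),   with omega around 0.95.
```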
Rohl, Sebastian; Bodenstedt, Sebastian; Suwelack, Stefan; Dillmann, Rudiger; Speidel, Stefanie; Kenngott, Hannes; Muller-Stich, Beat P
2012-03-01
In laparoscopic surgery, soft tissue deformations substantially change the surgical site, impeding the use of preoperative planning during intraoperative navigation. Extracting depth information from endoscopic images and building a surface model of the surgical field of view is one way to represent this constantly deforming environment; the information can then be used for intraoperative registration. Stereo reconstruction is a typical problem within computer vision. However, most available methods do not fulfill the specific requirements of a minimally invasive setting, such as real-time performance, view-dependent specular reflections, and large curved areas with partly homogeneous or periodic textures and occlusions. In this paper, the authors present an approach toward intraoperative surface reconstruction based on stereo endoscopic images, comprising correspondence analysis, disparity correction and refinement, 3D reconstruction, point cloud smoothing and meshing. Real-time performance is achieved by implementing the algorithms on the GPU. The authors also present a new hybrid CPU-GPU algorithm that unifies the advantages of the CPU and GPU versions. In a comprehensive evaluation using in vivo data, in silico data from the literature and virtual data from a newly developed simulation environment, the CPU, GPU, and hybrid CPU-GPU versions of the surface reconstruction are compared to a CPU and a GPU algorithm from the literature. The recommended approach can run in real time depending on the image resolution (20 fps for the GPU and 14 fps for the hybrid CPU-GPU version at a resolution of 640 × 480). It is robust to homogeneous regions without texture, large image changes, noise and camera calibration errors, and it reconstructs the surface down to sub-millimeter accuracy. In all experiments within the simulation environment, the mean distance to ground truth is between 0.05 and 0.6 mm for the hybrid CPU-GPU version, which clearly outperforms its CPU and GPU counterparts (mean distance reductions of 26% and 45%, respectively, in the simulation-environment experiments). The recommended approach for surface reconstruction is fast, robust, and accurate; it can represent changes in the intraoperative environment and can be used to adapt a preoperative model within the surgical site by registering the two models.
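The correspondence-analysis stage can be illustrated with OpenCV's semi-global block matching; this is a CPU stand-in for the paper's GPU pipeline, with hypothetical file names and illustrative parameters.

```python
# Sketch of stereo disparity via OpenCV SGBM (a CPU stand-in for the paper's
# GPU correspondence analysis; file names and parameters are illustrative).
import cv2

left = cv2.imread("endo_left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical pair
right = cv2.imread("endo_right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5,
                             P1=8 * 5 * 5, P2=32 * 5 * 5,
                             uniquenessRatio=10, speckleWindowSize=100)
disparity = sgbm.compute(left, right).astype("float32") / 16.0  # SGBM is fixed-point
# Given the calibration matrix Q, cv2.reprojectImageTo3D(disparity, Q) yields
# the point cloud that would then be smoothed and meshed.
```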
3D shape decomposition and comparison for gallbladder modeling
NASA Astrophysics Data System (ADS)
Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen
2011-03-01
This paper presents an approach to gallbladder shape comparison by using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and model comparison and selection in image guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation based voxel learning and classification. To better extract the shape feature, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by applying an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for the robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomaly of a gallbladder. The features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some shown in normal shape and some in abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.
2017-01-01
Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input from multichannel EEG time-series, which is also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants, and the results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular current feature extraction and prediction method, showed an accuracy of 65.7%, whereas the proposed method predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods. PMID:28558002
Kernel-based discriminant feature extraction using a representative dataset
NASA Astrophysics Data System (ADS)
Li, Honglin; Sancho Gomez, Jose-Luis; Ahalt, Stanley C.
2002-07-01
Discriminant Feature Extraction (DFE) is widely recognized as an important pre-processing step in classification applications. Most DFE algorithms are linear and thus can only explore the linear discriminant information among the different classes. Recently, there have been several promising attempts to develop nonlinear DFE algorithms, among which is Kernel-based Feature Extraction (KFE). The efficacy of KFE has been experimentally verified on both synthetic data and real problems. However, KFE has some known limitations. First, KFE does not work well for strongly overlapped data. Second, KFE employs all of the training samples during the feature extraction phase, which can entail significant computation when applied to very large datasets. Finally, KFE can result in overfitting. In this paper, we propose a substantial improvement to KFE that overcomes the above limitations by using a representative dataset, which consists of critical points generated by data-editing techniques and centroid points determined by the Frequency Sensitive Competitive Learning (FSCL) algorithm. Experiments show that this new KFE algorithm performs well on significantly overlapped datasets and also reduces computational complexity. Further, by controlling the number of centroids, the overfitting problem can be effectively alleviated.
NASA Astrophysics Data System (ADS)
Yin, Xin; Liu, Aiping; Thornburg, Kent L.; Wang, Ruikang K.; Rugonyi, Sandra
2012-09-01
Recent advances in optical coherence tomography (OCT) and the development of image reconstruction algorithms have enabled four-dimensional (4-D) imaging (three-dimensional imaging over time) of the embryonic heart. To further analyze and quantify the dynamics of cardiac beating, segmentation procedures that can extract the shape of the heart and its motion are needed. Most previous studies analyzed cardiac image sequences using manually extracted shapes and measurements; however, this is time consuming and subject to inter-operator variability. Automated or semi-automated analyses of 4-D cardiac OCT images, although very desirable, are also extremely challenging. This work proposes a robust algorithm to semi-automatically detect and track cardiac tissue layers from 4-D OCT images of early (tubular) embryonic hearts. Our algorithm uses a two-dimensional (2-D) deformable double-line model (DLM) to detect target cardiac tissues. The detection algorithm uses a maximum-likelihood estimator and was successfully applied to 4-D in vivo OCT images of the heart outflow tract of day-3 chicken embryos. The extracted shapes captured the dynamics of the chick embryonic heart outflow tract wall, enabling further analysis of cardiac motion.
NASA Astrophysics Data System (ADS)
Hild, Kenneth E.; Alleva, Giovanna; Nagarajan, Srikantan; Comani, Silvia
2007-01-01
In this study we compare the performance of six independent components analysis (ICA) algorithms on 16 real fetal magnetocardiographic (fMCG) datasets for the application of extracting the fetal cardiac signal. We also compare the extraction results for real data with the results previously obtained for synthetic data. The six ICA algorithms are FastICA, CubICA, JADE, Infomax, MRMI-SIG and TDSEP. The results obtained using real fMCG data indicate that the FastICA method consistently outperforms the others in regard to separation quality and that the performance of an ICA method that uses temporal information suffers in the presence of noise. These two results confirm the previous results obtained using synthetic fMCG data. There were also two notable differences between the studies based on real and synthetic data. The differences are that all six ICA algorithms are independent of gestational age and sensor dimensionality for synthetic data, but depend on gestational age and sensor dimensionality for real data. It is possible to explain these differences by assuming that the number of point sources needed to completely explain the data is larger than the dimensionality used in the ICA extraction.
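The extraction task itself, separating independent sources from multichannel mixtures, can be sketched with scikit-learn's FastICA; the channel mixture below is synthetic and merely mimics the maternal/fetal frequency difference, it is not real fMCG data.

```python
# Sketch of source separation with FastICA (scikit-learn stand-in; the
# mixture is synthetic, not real fMCG data).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)
maternal = np.sign(np.sin(2 * np.pi * 1.2 * t))   # ~72 bpm square-wave stand-in
fetal = np.sign(np.sin(2 * np.pi * 2.3 * t))      # ~138 bpm stand-in
noise = rng.standard_normal(5000)

S = np.column_stack([maternal, fetal, noise])
A = rng.normal(size=(8, 3))                       # 8 hypothetical sensor channels
X = S @ A.T

sources = FastICA(n_components=3, random_state=0).fit_transform(X)
# One recovered column should match the fetal trace up to sign and scale.
```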
Bueschl, Christoph; Kluger, Bernhard; Berthiller, Franz; Lirk, Gerald; Winkler, Stephan; Krska, Rudolf; Schuhmacher, Rainer
2012-03-01
Liquid chromatography-mass spectrometry (LC/MS) is a key technique in metabolomics. Since the efficient assignment of MS signals to true biological metabolites becomes feasible in combination with in vivo stable isotopic labelling, our aim was to provide a new software tool for this purpose. An algorithm and a program (MetExtract) have been developed to search for metabolites in in vivo labelled biological samples. The algorithm makes use of the chromatographic characteristics of the LC/MS data and detects MS peaks fulfilling the criteria of stable isotopic labelling. As a result of all calculations, the algorithm specifies a list of m/z values, the corresponding number of atoms of the labelling element (e.g. carbon), together with retention time and extracted adduct, fragment and polymer ions. Its function was evaluated using native ¹²C- and uniformly ¹³C-labelled standard substances. MetExtract is available free of charge and warranty at http://code.google.com/p/metextract/. Precompiled executables are available for Windows operating systems. Supplementary data are available at Bioinformatics online.
Impact of JPEG2000 compression on spatial-spectral endmember extraction from hyperspectral data
NASA Astrophysics Data System (ADS)
Martín, Gabriel; Ruiz, V. G.; Plaza, Antonio; Ortiz, Juan P.; García, Inmaculada
2009-08-01
Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure spectral substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both the spectral and the spatial information (useful for incorporating contextual information in the spectral endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial spectral endmember extraction (SSEE) techniques. Experimental results are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada and with detailed ground-truth information available from U. S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.
Agricultural mapping using Support Vector Machine-Based Endmember Extraction (SVM-BEE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archibald, Richard K; Filippi, Anthony M; Bhaduri, Budhendra L
Extracting endmembers from remotely sensed images of vegetated areas can present difficulties. In this research, we applied a recently developed endmember-extraction algorithm based on Support Vector Machines (SVMs) to the problem of semi-autonomous estimation of vegetation endmembers from a hyperspectral image. This algorithm, referred to as Support Vector Machine-Based Endmember Extraction (SVM-BEE), accurately and rapidly yields a computed representation of hyperspectral data that can accommodate multiple distributions. The number of distributions is identified without prior knowledge, based upon this representation. Prior work established that SVM-BEE is robustly noise-tolerant and can semi-automatically and effectively estimate endmembers; synthetic data and a geologic scene were previously analyzed. Here we compared the efficacies of the SVM-BEE and N-FINDR algorithms in extracting endmembers from a predominantly agricultural scene. SVM-BEE was able to estimate vegetation and other endmembers for all classes in the image, which N-FINDR failed to do. Classifications based on SVM-BEE endmembers were markedly more accurate compared with those based on N-FINDR endmembers.
NASA Astrophysics Data System (ADS)
Greenway, D. P.; Hackett, E.
2017-12-01
Under certain atmospheric refractivity conditions, propagated electromagnetic (EM) waves can become trapped between the surface and the bottom of the atmosphere's mixed layer, which is referred to as surface duct propagation. Being able to predict the presence of these surface ducts offers many benefits to users and developers of sensing technologies and communication systems, because ducts significantly influence the performance of these systems. However, directly measuring or modeling a surface ducting layer is challenging due to the high spatial resolution and large spatial coverage needed to make accurate refractivity estimates for EM propagation; thus, inverse methods have become an increasingly popular way of determining atmospheric refractivity. This study uses data from the Coupled Ocean/Atmosphere Mesoscale Prediction System developed by the Naval Research Laboratory and instrumented helicopter (helo) measurements taken during the Wallops Island Field Experiment to evaluate the use of ensemble forecasts in refractivity inversions. Helo measurements and ensemble forecasts are optimized to a parametric refractivity model, and three experiments are performed to evaluate whether incorporating ensemble forecast data aids in more timely and accurate inverse solutions using genetic algorithms. The results suggest that using optimized ensemble members as an initial population for the genetic algorithms generally enhances the accuracy and speed of the inverse solution; however, using the ensemble data to restrict the parameter search space yields mixed results. Inaccurate results are related to the parameterization of the ensemble members' refractivity profiles and the subsequent extraction of the parameter ranges used to limit the search space.
Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds
NASA Astrophysics Data System (ADS)
Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.
2016-04-01
A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimate of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided in steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and principal vector similarity criteria. Poles to points are assigned to individual discontinuity objects using easy custom vector clustering and Jaccard distance approaches, and each object is segmented into planar clusters using an improved version of the DBSCAN algorithm. Modal set orientations are then recomputed by cluster-based orientation statistics to avoid the effects of biases related to cluster size and density heterogeneity of the point cloud. Finally, spacing values are measured between individual discontinuity clusters along scanlines parallel to modal pole vectors, whereas individual feature size (persistence) is measured using 3D convex hull bounding boxes. Spacing and size are provided both as raw population data and as summary statistics. The tool is optimized for parallel computing on 64bit systems, and a Graphic User Interface (GUI) has been developed to manage data processing, provide several outputs, including reclassified point clouds, tables, plots, derived fracture intensity parameters, and export to modelling software tools. We present test applications performed both on synthetic 3D data (simple 3D solids) and real case studies, validating the results with existing geomechanical datasets.
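The first step described, identifying coplanar surfaces via K-Nearest Neighbor search and Principal Component Analysis, can be sketched as per-point normal estimation from local PCA; the neighborhood size is illustrative and this is a generic fragment, not the Matlab tool itself.

```python
# Sketch of per-point normal estimation from PCA over k nearest neighbours,
# the building block for discontinuity-set identification (k is illustrative).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_pca_normals(points, k=20):
    """points: N x 3 array; returns N x 3 unit normals from local PCA."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    normals = np.empty_like(points, dtype=float)
    for i, nn in enumerate(idx):
        q = points[nn] - points[nn].mean(axis=0)
        w, v = np.linalg.eigh(q.T @ q)     # ascending eigenvalues
        normals[i] = v[:, 0]               # smallest-variance direction
    return normals
```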
Wang, Wensheng; Nie, Ting; Fu, Tianjiao; Ren, Jianyue; Jin, Longxu
2017-05-06
In target detection of optical remote sensing images, two main obstacles for aircraft target detection are how to extract the candidates in complex gray-scale-multi backgrounds and how to confirm the targets in case the target shapes are deformed, irregular or asymmetric, such as those caused by natural conditions (low signal-to-noise ratio, illumination conditions or swaying photographing) and occlusion by surrounding objects (boarding bridge, equipment). To solve these issues, an improved active contours algorithm, namely region-scalable fitting energy based threshold (TRSF), and a corner-convex hull based segmentation algorithm (CCHS) are proposed in this paper. Firstly, the maximal variance between-cluster algorithm (Otsu's algorithm) and the region-scalable fitting energy (RSF) algorithm are combined to solve the difficulty of target extraction in complex gray-scale-multi backgrounds. Secondly, based on inherent shapes and prominent corners, aircraft are divided into five fragments by utilizing convex hulls and Harris corner points. Furthermore, a series of new structure features, which describe the proportion of the target part in a fragment to the whole fragment and the proportion of a fragment to the whole hull, are identified to judge whether the targets are true or not. Experimental results show that the TRSF algorithm improves extraction accuracy in complex backgrounds, and that it is faster than some traditional active contours algorithms. The CCHS is effective in suppressing the detection difficulties caused by irregular shapes.
Apriori Versions Based on MapReduce for Mining Frequent Patterns on Big Data.
Luna, Jose Maria; Padillo, Francisco; Pechenizkiy, Mykola; Ventura, Sebastian
2017-09-27
Pattern mining is one of the most important tasks for extracting meaningful and useful information from raw data. This task aims to extract item-sets that represent any type of homogeneity and regularity in data. Although many efficient algorithms have been developed in this regard, the growing volume of data has caused the performance of existing pattern mining techniques to drop. The goal of this paper is to propose new efficient pattern mining algorithms to work on big data. To this aim, a series of algorithms based on the MapReduce framework and the Hadoop open-source implementation have been proposed. The proposed algorithms can be divided into three main groups. First, two algorithms [Apriori MapReduce (AprioriMR) and iterative AprioriMR] with no pruning strategy are proposed, which extract any existing item-set in data. Second, two algorithms (space pruning AprioriMR and top AprioriMR) that prune the search space by means of the well-known anti-monotone property are proposed. Finally, a last algorithm (maximal AprioriMR) is also proposed for mining condensed representations of frequent patterns. To test the performance of the proposed algorithms, a varied collection of big data datasets has been considered, comprising up to 3 · 10¹⁸ transactions and more than 5 million distinct single items. The experimental stage includes comparisons against highly efficient and well-known pattern mining algorithms. Results reveal the interest of applying MapReduce versions when complex problems are considered, and also the unsuitability of this paradigm when dealing with small data.
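For intuition, a single-machine sketch of the map/reduce counting pattern behind the AprioriMR family, with anti-monotone pruning between passes; Hadoop distribution details are omitted and the tiny dataset is invented for illustration:

```python
# Map phase emits (item-set, 1) per transaction; reduce phase sums counts;
# the anti-monotone property prunes items that cannot appear in larger
# frequent item-sets.
from itertools import combinations
from collections import Counter

def map_phase(transactions, size):
    for t in transactions:
        for itemset in combinations(sorted(t), size):
            yield itemset, 1

def reduce_phase(pairs):
    counts = Counter()
    for itemset, n in pairs:
        counts[itemset] += n
    return counts

transactions = [{'a', 'b', 'c'}, {'a', 'c'}, {'a', 'd'}, {'b', 'c'}]
min_support = 2
frequent_1 = {k for k, v in reduce_phase(map_phase(transactions, 1)).items()
              if v >= min_support}
# prune: keep only frequent single items before the size-2 pass
pruned = [{i for i in t if (i,) in frequent_1} for t in transactions]
frequent_2 = {k: v for k, v in reduce_phase(map_phase(pruned, 2)).items()
              if v >= min_support}
print(frequent_1, frequent_2)
```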
A novel algorithm for determining contact area between a respirator and a headform.
Lei, Zhipeng; Yang, James; Zhuang, Ziqing
2014-01-01
The contact area, as well as the contact pressure, is created when a respiratory protection device (a respirator or surgical mask) contacts a human face. A computer-based algorithm for determining the contact area between a headform and N95 filtering facepiece respirator (FFR) was proposed. Six N95 FFRs were applied to five sizes of standard headforms (large, medium, small, long/narrow, and short/wide) to simulate respirator donning. After the contact simulation between a headform and an N95 FFR was conducted, a contact area was determined by extracting the intersection surfaces of the headform and the N95 FFR. Using computer-aided design tools, a superimposed contact area and an average contact area, which are non-uniform rational basis spline (NURBS) surfaces, were developed for each headform. Experiments that directly measured dimensions of the contact areas between headform prototypes and N95 FFRs were used to validate the simulation results. Headform sizes influenced all contact area dimensions (P < 0.0001), and N95 FFR sizing systems influenced all contact area dimensions (P < 0.05) except the left and right chin regions. The medium headform produced the largest contact area, while the large and small headforms produced the smallest.
Fast simulated annealing inversion of surface waves on pavement using phase-velocity spectra
Ryden, N.; Park, C.B.
2006-01-01
The conventional inversion of surface waves depends on modal identification of measured dispersion curves, which can be ambiguous. It is possible to avoid mode-number identification and extraction by inverting the complete phase-velocity spectrum obtained from a multichannel record. We use the fast simulated annealing (FSA) global search algorithm to minimize the difference between the measured phase-velocity spectrum and that calculated from a theoretical layer model, including the field setup geometry. Results show that this algorithm can help one avoid getting trapped in local minima while searching for the best-matching layer model. The entire procedure is demonstrated on synthetic and field data for asphalt pavement. The viscoelastic properties of the top asphalt layer are taken into account, and the inverted asphalt stiffness as a function of frequency compares well with laboratory tests on core samples. The thickness and shear-wave velocity of the deeper embedded layers are resolved within 10% deviation from those values measured separately during pavement construction. The proposed method may be equally applicable to normal soil site investigation and in the field of ultrasonic testing of materials. © 2006 Society of Exploration Geophysicists.
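A hedged sketch of a fast-simulated-annealing search over bounded layer parameters; the `misfit` callable stands in for the comparison between measured and theoretical phase-velocity spectra, which the paper's forward model would supply:

```python
# Generic FSA minimizer: Cauchy-distributed steps with a fast cooling schedule,
# accepting uphill moves with Boltzmann probability to escape local minima.
import numpy as np

def fsa(misfit, x0, lower, upper, t0=1.0, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    x, e = x0.copy(), misfit(x0)
    best_x, best_e = x.copy(), e
    for k in range(n_iter):
        t = t0 / (1.0 + k)                                     # fast cooling
        step = t * np.tan(np.pi * (rng.random(x.size) - 0.5))  # Cauchy visiting
        cand = np.clip(x + step * (upper - lower), lower, upper)
        e_cand = misfit(cand)
        if e_cand < e or rng.random() < np.exp(-(e_cand - e) / max(t, 1e-12)):
            x, e = cand, e_cand
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x, best_e
```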
Oosugi, Naoya; Kitajo, Keiichi; Hasegawa, Naomi; Nagasaka, Yasuo; Okanoya, Kazuo; Fujii, Naotaka
2017-09-01
Blind source separation (BSS) algorithms extract neural signals from electroencephalography (EEG) data. However, it is difficult to quantify source separation performance because there is no criterion to dissociate neural signals and noise in EEG signals. This study develops a method for evaluating BSS performance. The idea is that the neural signals in EEG can be estimated by comparison with simultaneously measured electrocorticography (ECoG), because the ECoG electrodes cover the majority of the lateral cortical surface and should capture most of the original neural sources in the EEG signals. We measured real EEG and ECoG data and developed an algorithm for evaluating BSS performance. First, EEG signals are separated into EEG components using the BSS algorithm. Second, the EEG components are ranked using the correlation coefficients of the ECoG regression, and the components are grouped into subsets based on their ranks. Third, canonical correlation analysis estimates how much information is shared between the subsets of the EEG components and the ECoG signals. We used our algorithm to compare the performance of BSS algorithms (PCA, AMUSE, SOBI, JADE, fastICA) via the EEG and ECoG data of anesthetized nonhuman primates. The results (best case > JADE = fastICA > AMUSE = SOBI ≥ PCA > random separation) were common to the two subjects. To encourage the further development of better BSS algorithms, our EEG and ECoG data are available on our Web site (http://neurotycho.org/) as a common testing platform. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
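A minimal sketch of the two evaluation steps, assuming numpy and scikit-learn; array shapes and function names are illustrative, not from the paper's implementation:

```python
# Rank BSS components by how well ECoG regression reproduces them, then use
# CCA to measure information shared between a component subset and the ECoG.
# Shapes assumed: components (n_samples, n_comp), ecog (n_samples, n_chan).
import numpy as np
from sklearn.cross_decomposition import CCA

def rank_components(components, ecog):
    beta, *_ = np.linalg.lstsq(ecog, components, rcond=None)  # ECoG regression
    predicted = ecog @ beta
    r = [np.corrcoef(components[:, i], predicted[:, i])[0, 1]
         for i in range(components.shape[1])]
    return np.argsort(r)[::-1]                                # best-explained first

def shared_information(subset, ecog, n_cc=3):
    cca = CCA(n_components=n_cc).fit(subset, ecog)
    u, v = cca.transform(subset, ecog)
    return [np.corrcoef(u[:, i], v[:, i])[0, 1] for i in range(n_cc)]
```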
Automated extraction of subdural electrode grid from post-implant MRI scans for epilepsy surgery
NASA Astrophysics Data System (ADS)
Pozdin, Maksym A.; Skrinjar, Oskar
2005-04-01
This paper presents an automated algorithm for extraction of a Subdural Electrode Grid (SEG) from post-implant MRI scans for epilepsy surgery. Post-implant MRI scans are corrupted by image artifacts caused by the implanted electrodes. The artifacts appear as dark spherical voids, and given that the cerebrospinal fluid is also dark in T1-weighted MRI scans, it is a difficult and time-consuming task to manually locate the SEG position relative to brain structures of interest. The proposed algorithm reliably and accurately extracts the SEG from a post-implant MRI scan, i.e. finds its shape and position relative to brain structures of interest. The algorithm was validated against manually determined electrode locations, and the average error was 1.6 mm for the three tested subjects.
Povšič, K; Jezeršek, M; Možina, J
2015-07-01
Real-time 3D visualization of breathing displacements can be a useful diagnostic tool in order to immediately observe the most active regions on the thoraco-abdominal surface. The developed method is capable of separating non-relevant torso movement and deformations from the deformations that are solely related to breathing. This makes it possible to visualize only the breathing displacements. The system is based on the structured laser triangulation principle, with simultaneous spatial and color data acquisition of the thoraco-abdominal region. Based on the tracking of the attached passive markers, the torso movement and deformation are compensated using rigid and non-rigid transformation models on the three-dimensional (3D) data. The total time of 3D data processing together with visualization equals 20 ms per cycle. In vitro verification of the rigid movement extraction was performed using the iterative closest point algorithm as a reference. Furthermore, a volumetric evaluation on a live subject was performed to establish the accuracy of the rigid and non-rigid model. The root mean square deviation between the measured and the reference volumes shows an error of ±0.08 dm³ for rigid movement extraction. Similarly, the error was calculated to be ±0.02 dm³ for torsional deformation extraction and ±0.11 dm³ for lateral bending deformation extraction. The results confirm that during torso movement and deformation, the proposed method is sufficiently accurate to visualize only the displacements related to breathing. The method can be used, for example, during breathing exercise on an indoor bicycle or a treadmill.
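The rigid part of such marker-based compensation is commonly computed with a Kabsch/SVD fit; a sketch under that assumption (the paper's actual transformation models may differ):

```python
# Least-squares rigid alignment of tracked markers to a reference frame;
# applying the inverse transform to the measured surface removes the rigid
# torso movement, leaving breathing plus residual non-rigid deformation.
import numpy as np

def rigid_transform(ref, cur):
    """Rotation R and translation t with cur ~ ref @ R.T + t (points as rows)."""
    c_ref, c_cur = ref.mean(axis=0), cur.mean(axis=0)
    h = (ref - c_ref).T @ (cur - c_cur)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, c_cur - r @ c_ref

def compensate(surface_points, r, t):
    """Map measured surface points back into the reference frame."""
    return (surface_points - t) @ r               # inverse of the rigid transform
```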
[Skeleton extractions and applications].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quadros, William Roshan
2010-05-01
This paper focuses on the extraction of skeletons of CAD models and its applications in finite element (FE) mesh generation. The term 'skeleton of a CAD model' can be visualized as analogous to the 'skeleton of a human body'. The skeletal representations covered in this paper include the medial axis transform (MAT), Voronoi diagram (VD), chordal axis transform (CAT), mid surface, digital skeletons, and disconnected skeletons. In the literature, the properties of a skeleton have been utilized in developing various algorithms for extracting skeletons. Three main approaches include: (1) the bisection method, where the skeleton is equidistant from at least two points on the boundary; (2) the grassfire propagation method, in which the skeleton exists where the opposing fronts meet; and (3) the duality method, where the skeleton is a dual of the object. In the last decade, the author has applied different skeletal representations in all-quad meshing, hex meshing, mid-surface meshing, mesh size function generation, defeaturing, and decomposition. A brief discussion on the related work from other researchers in the area of tri meshing, tet meshing, and anisotropic meshing is also included. This paper concludes by summarizing the strengths and weaknesses of the skeleton-based approaches in solving various geometry-centered problems in FE mesh generation. The skeletons have proved to be a great shape abstraction tool in analyzing the geometric complexity of CAD models as they are symmetric, simpler (reduced dimension), and provide local thickness information. However, skeletons generally require some cleanup, and the stability and sensitivity of the skeletons should be controlled during extraction. Also, selecting a suitable application-specific skeleton and a computationally efficient method of extraction is critical.
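As a concrete 2D example of the bisection approach, scikit-image exposes a digital medial axis together with the distance map that yields the local thickness mentioned above; a minimal sketch on a toy region:

```python
# Medial axis of a binary region: the set of pixels equidistant from at least
# two boundary points, plus the inscribed-disk distance at each skeleton pixel.
import numpy as np
from skimage.morphology import medial_axis

shape = np.zeros((64, 64), dtype=bool)
shape[8:56, 16:48] = True                     # a simple rectangular test region
skeleton, distance = medial_axis(shape, return_distance=True)
local_thickness = 2 * distance * skeleton     # twice the inscribed-disk radius
print(skeleton.sum(), local_thickness.max())
```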
Dil, Ebrahim Alipanahpour; Ghaedi, Mehrorang; Asfaram, Arash; Mehrabi, Fatemeh; Bazrafshan, Ali Akbar; Ghaedi, Abdol Mohammad
2016-11-01
In this study, an ultrasound assisted dispersive solid-phase micro extraction combined with spectrophotometry (USA-DSPME-UV) method based on activated carbon modified with Fe2O3 nanoparticles (Fe2O3-NPs-AC) was developed for pre-concentration and determination of safranin O (SO). The efficiency of the USA-DSPME-UV method may be affected by pH, amount of adsorbent, ultrasound time and eluent volume, and the extent and magnitude of their contribution to the response (in terms of main and interaction effects) were studied by using central composite design (CCD) and artificial neural network-genetic algorithms (ANN-GA). Adjustment of the experimental conditions to the values suggested by ANN-GA (pH 6.5, 1.1 mg of adsorbent, 10 min of ultrasound and 150 μL of eluent) led to the best operating performance, with a low LOD (6.3 ng mL⁻¹) and LOQ (17.5 ng mL⁻¹) over the range of 25-3500 ng mL⁻¹. In the following stage, the SO content in real water and wastewater samples was successfully determined, with recoveries between 93.27% and 99.41% and RSD lower than 3%. Copyright © 2016 Elsevier B.V. All rights reserved.
Vibration extraction based on fast NCC algorithm and high-speed camera.
Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an
2015-09-20
In this study, a high-speed camera system is developed to complete vibration measurement in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. In order to process the captured images in the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can rapidly accomplish one displacement extraction 10 times faster than traditional template matching without installing any target panel onto the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system performance in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
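A hedged sketch of NCC tracking with a restricted search window, which is the essence of the speed-up; subpixel refinement is omitted and OpenCV is assumed (not necessarily the authors' implementation):

```python
# Correlate the template only inside a small window around the last known
# position instead of scanning the whole frame.
import cv2

def track_ncc(frame, template, last_xy, search=32):
    th, tw = template.shape
    x0 = max(last_xy[0] - search, 0)
    y0 = max(last_xy[1] - search, 0)
    roi = frame[y0:y0 + th + 2 * search, x0:x0 + tw + 2 * search]
    score = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score)
    return x0 + max_loc[0], y0 + max_loc[1]   # new top-left corner, pixel accuracy
```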
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao
Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, performance on existing low resolution isotope identifiers should be able to be improved by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.
2007-01-01
Zhou and Cess [2001] developed an algorithm for retrieving surface downwelling longwave radiation (SDLW) based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear sky SDLW with surface upwelling longwave flux and column precipitable water vapor. For cloudy sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite the simplicity of their algorithm, it performed very well for most geographical regions except for those regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for scenes that were covered with ice clouds. An improved version of the algorithm prevents the large errors in the SDLW at low water vapor amounts by taking into account that under such conditions the SDLW and water vapor amount are nearly linear in their relationship. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths available from the Cloud and the Earth's Radiant Energy System (CERES) single scanner footprint (SSF) product to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better or comparable to more sophisticated algorithms currently implemented in the CERES processing and will be incorporated as one of the CERES empirical surface radiation algorithms.
Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V
2018-04-17
Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of the precursor ions, has been one of the most common approaches used during nontarget screening. However, the data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). These algorithms did not produce any cases of false identification while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and limitations of both algorithms are further discussed.
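A minimal sketch of a library-search scorer in the spirit of the ULSA, assuming binned spectra compared by cosine similarity; the bin width and top-3 cut are illustrative assumptions, not the published algorithm:

```python
# Bin each spectrum onto a common m/z axis, then rank library entries by
# cosine similarity against the deconvoluted query spectrum.
import numpy as np

def bin_spectrum(mz, intensity, mz_min=50.0, mz_max=500.0, width=0.01):
    bins = np.zeros(int((mz_max - mz_min) / width) + 1)
    idx = ((np.asarray(mz) - mz_min) / width).astype(int)
    ok = (idx >= 0) & (idx < bins.size)
    np.add.at(bins, idx[ok], np.asarray(intensity)[ok])
    return bins

def search(query, library):
    q = bin_spectrum(*query)
    q /= np.linalg.norm(q) + 1e-12
    scores = []
    for name, (mz, inten) in library.items():
        v = bin_spectrum(mz, inten)
        scores.append((name, float(q @ v / (np.linalg.norm(v) + 1e-12))))
    return sorted(scores, key=lambda s: s[1], reverse=True)[:3]   # top three
```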
An algorithm for extraction of periodic signals from sparse, irregularly sampled data
NASA Technical Reports Server (NTRS)
Wilcox, J. Z.
1994-01-01
Temporal gaps in discrete sampling sequences produce spurious Fourier components at the intermodulation frequencies of an oscillatory signal and the temporal gaps, thus significantly complicating spectral analysis of such sparsely sampled data. A new fast Fourier transform (FFT)-based algorithm has been developed, suitable for spectral analysis of sparsely sampled data with a relatively small number of oscillatory components buried in background noise. The algorithm's principal idea has its origin in the so-called 'clean' algorithm used to sharpen images of scenes corrupted by atmospheric and sensor aperture effects. It identifies as the signal's 'true' frequency that oscillatory component which, when passed through the same sampling sequence as the original data, produces a Fourier image that is the best match to the original Fourier space. The algorithm has generally met with success in trials with simulated data with a low signal-to-noise ratio, including those of a type similar to hourly residuals for Earth orientation parameters extracted from VLBI data. For eight oscillatory components in the diurnal and semidiurnal bands, all components with an amplitude-noise ratio greater than 0.2 were successfully extracted for all sequences and duty cycles (greater than 0.1) tested; the amplitude-noise ratios of the extracted signals were as low as 0.05 for high duty cycles and long sampling sequences. When, in addition to these high frequencies, strong low-frequency components are present in the data, the low-frequency components are generally eliminated first, by employing a version of the algorithm that searches for non-integer multiples of the discrete FFT minimum frequency.
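A hedged sketch of the 'clean'-style iteration on an irregular time grid: repeatedly fit and subtract the single sinusoid with the largest least-squares power, so each extracted component carries its sampling sidelobes away with it. The frequency grid and component count are illustrative:

```python
# Iteratively extract the strongest sinusoid from gapped, irregular samples.
import numpy as np

def clean_extract(t, y, freqs, n_components=3):
    resid, found = y.astype(float).copy(), []
    for _ in range(n_components):
        best = None
        for f in freqs:
            # least-squares fit of a*cos + b*sin at trial frequency f
            a = np.column_stack([np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t)])
            coef, *_ = np.linalg.lstsq(a, resid, rcond=None)
            power = float(coef @ coef)
            if best is None or power > best[0]:
                best = (power, f, a, coef)
        _, f, a, coef = best
        found.append((f, np.hypot(*coef)))       # frequency and amplitude
        resid -= a @ coef                        # subtract the matched component
    return found, resid
```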
NASA Astrophysics Data System (ADS)
Fubiani, G.; Garrigues, L.; Boeuf, J. P.
2018-02-01
We model the extraction of negative ions from a high brightness high power magnetized negative ion source. The model is a Particle-In-Cell (PIC) algorithm with Monte-Carlo Collisions. The negative ions are generated only on the plasma grid surface (which separates the plasma from the electrostatic accelerator downstream). The scope of this work is to derive scaling laws for the negative ion beam properties versus the extraction voltage (potential of the first grid of the accelerator) and plasma density and investigate the origins of aberrations on the ion beam. We show that a given value of the negative ion beam perveance correlates rather well with the beam profile on the extraction grid independent of the simulated plasma density. Furthermore, the extracted beam current may be scaled to any value of the plasma density. The scaling factor must be derived numerically but the overall gain of computational cost compared to performing a PIC simulation at the real plasma density is significant. Aberrations appear for a meniscus curvature radius of the order of the radius of the grid aperture. These aberrations cannot be cancelled out by switching to a chamfered grid aperture (as in the case of positive ions).
Wege, H A; Holgado-Terriza, J A; Cabrerizo-Vílchez, M A
2002-05-15
A new constant pressure pendant-drop penetration surface balance has been developed combining a pendant-drop surface balance, a rapid-subphase-exchange technique, and a fuzzy logic control algorithm. Besides the determination of insoluble monolayer compression-expansion isotherms, it allows performance of noninvasive kinetic studies of the adsorption of surfactants added to the new subphase onto the free surface and of the adsorption/penetration/reaction of the former onto/into/with surface layers, respectively. The interfacial pressure pi is a fundamental parameter in these studies: by working at constant pi one controls the height of the energy barrier to adsorption/penetration and can select different regimes and steps of the adsorption/penetration process. In our device a solution drop is formed at the tip of a coaxial double capillary, connected to a double microinjector. Drop profiles are extracted from digital drop micrographs and fitted to the equation of capillarity, yielding pi, the drop volume V, and the interfacial area A. pi is varied by changing V (and hence A) with the microinjector. Control is based on a case-adaptable modulated fuzzy-logic PID algorithm able to maintain constant pi (or A) under a wide range of experimental conditions. The drop subphase liquid can be exchanged quantitatively by the coaxial capillaries. The adsorption/penetration/reaction kinetics at constant pi are then studied monitoring A(t), i.e., determining the relative area change necessary at each instant to compensate the pressure variation due to the interaction of the surfactant in the subsurface with the surface layer. A fully Windows-integrated program manages the whole setup. Examples of experimental protein adsorption and monolayer penetration kinetics are presented.
Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data
NASA Astrophysics Data System (ADS)
Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.
2016-06-01
Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g. tree height and crown radii, based on the generated rasterized CHMs, and to perform an accuracy assessment for the extraction of volumetric parameters of single trees via manual measurement using corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by a Leica ALS70-HP is used in this study. Two algorithms, i.e. a traditional one with the subtraction of a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used for individual tree delineation. Finally, the experimental results of the two automatic estimation methods for individual trees are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by a simple subtraction is full of empty pixels (called "pits") that have a serious impact on the subsequent analysis for individual tree delineation. The experimental results indicated that more individual trees can be extracted and tree crown shapes become more complete in the CHM data after the pit-free process.
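A minimal sketch of a variable window filter for treetop detection on a rasterized CHM; the height bands and window radii are illustrative assumptions rather than the VWF parameters used in the study:

```python
# Taller trees have wider crowns, so the local-maximum window grows with
# canopy height; a pixel is a treetop if it is the maximum within its window.
import numpy as np
from scipy.ndimage import maximum_filter

def treetops_vwf(chm, min_height=2.0):
    tops = []
    bands = [(min_height, 6.0, 1), (6.0, 10.0, 2), (10.0, np.inf, 3)]
    for lo, hi, radius in bands:              # taller band -> wider search window
        local_max = chm == maximum_filter(chm, size=2 * radius + 1)
        in_band = (chm >= lo) & (chm < hi)
        for r, c in zip(*np.nonzero(local_max & in_band)):
            tops.append((r, c, float(chm[r, c])))
    return tops
```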
An image-guided tool to prevent hospital acquired infections
NASA Astrophysics Data System (ADS)
Nagy, Melinda; Szilágyi, László; Lehotsky, Ákos; Haidegger, Tamás; Benyó, Balázs
2011-03-01
Hospital Acquired Infections (HAI) represent the fourth leading cause of death in the United States, and claim hundreds of thousands of lives annually in the rest of the world. This paper presents a novel low-cost mobile device, called Stery-Hand, that helps to avoid HAI by improving hand hygiene control through providing an objective evaluation of the quality of hand washing. The use of the system is intuitive: having performed hand washing with a soap mixed with UV reflective powder, the skin appears brighter under UV illumination on the disinfected surfaces. Washed hands are inserted into the Stery-Hand box, where a digital image is taken under UV lighting. Automated image processing algorithms are employed in three steps to evaluate the quality of hand washing. First, the contour of the hand is extracted in order to distinguish the hand from the background. Next, a semi-supervised clustering algorithm classifies the pixels of the hand into three groups, corresponding to clean, partially clean and dirty areas. The clustering algorithm is derived from the histogram-based quick fuzzy c-means approach, using a priori information extracted from reference images evaluated by experts. Finally, the identified areas are adjusted to suppress shading effects, and quantified in order to give a verdict on hand disinfection quality. The proposed methodology was validated through tests using hundreds of images recorded in our laboratory. The proposed system was found robust and accurate, producing correct estimations for over 98% of the test cases. Stery-Hand may be employed in general practice, and it may also serve educational purposes.
Accurate 3-D Profile Extraction of Skull Bone Using an Ultrasound Matrix Array.
Hajian, Mehdi; Gaspar, Robert; Maev, Roman Gr
2017-12-01
The present study investigates the feasibility, accuracy, and precision of 3-D profile extraction of the human skull bone using a custom-designed ultrasound matrix transducer in pulse-echo mode. Due to the attenuative scattering properties of the skull, the backscattered echoes from the inner surface of the skull are severely degraded, attenuated, and at some points overlapped. Furthermore, the speed of sound (SOS) in the skull varies significantly in different zones and also from case to case; if considered constant, it introduces significant error to the profile measurement. A new method for simultaneous estimation of the skull profiles and the sound speed value is presented. The proposed method is a twofold procedure: first, the arrival times of the backscattered echoes from the skull bone are estimated using multi-lag phase delay (MLPD) and modified space alternating generalized expectation maximization (SAGE) algorithms. Next, these arrival times are fed into an adaptive sound speed estimation algorithm to compute the optimal SOS value and subsequently, the skull bone thickness. For quantitative evaluation, the estimated bone phantom thicknesses were compared with mechanical measurements. The accuracies of the bone thickness measurements using the MLPD and modified SAGE algorithms combined with the adaptive SOS estimation were 7.93% and 4.21%, respectively. These values were 14.44% and 10.75% for the autocorrelation and cross-correlation methods. Additionally, the Bland-Altman plots showed the modified SAGE outperformed the other methods with -0.35 and 0.44 mm limits of agreement. No systematic error that could be related to the skull bone thickness was observed for this method.
NASA Technical Reports Server (NTRS)
Smith, Michael D.; Bandfield, Joshua L.; Christensen, Philip R.
2000-01-01
We present two algorithms for the separation of spectral features caused by atmospheric and surface components in Thermal Emission Spectrometer (TES) data. One algorithm uses radiative transfer and successive least squares fitting to find spectral shapes first for atmospheric dust, then for water-ice aerosols, and then, finally, for surface emissivity. A second independent algorithm uses a combination of factor analysis, target transformation, and deconvolution to simultaneously find dust, water ice, and surface emissivity spectral shapes. Both algorithms have been applied to TES spectra, and both find very similar atmospheric and surface spectral shapes. For TES spectra taken during aerobraking and science phasing periods in nadir-geometry these two algorithms give meaningful and usable surface emissivity spectra that can be used for mineralogical identification.
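The least-squares step of the first algorithm can be illustrated with a non-negative fit of known spectral shapes, attributing the residual to the surface; the matrix layout and the use of scipy's NNLS solver are assumptions for illustration:

```python
# Model each observed spectrum as a non-negative combination of dust and
# water-ice shapes; what the atmospheric model cannot explain is assigned
# to surface emissivity.
import numpy as np
from scipy.optimize import nnls

def unmix(observed, shapes):
    """observed: (n_wavenumbers,); shapes: (n_wavenumbers, n_components)."""
    coeffs, residual_norm = nnls(shapes, observed)
    modeled_atmosphere = shapes @ coeffs
    surface_residual = observed - modeled_atmosphere
    return coeffs, surface_residual, residual_norm
```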
I/O efficient algorithms and applications in geographic information systems
NASA Astrophysics Data System (ADS)
Danner, Andrew
Modern remote sensing methods such as laser altimetry (lidar) and Interferometric Synthetic Aperture Radar (IfSAR) produce georeferenced elevation data at unprecedented rates. Many Geographic Information System (GIS) algorithms designed for terrain modelling applications cannot process these massive data sets. The primary problem is that these data sets are too large to fit in the main internal memory of modern computers and must therefore reside on larger, but considerably slower disks. In these applications, the transfer of data between disk and main memory, or I/O, becomes the primary bottleneck. Working in a theoretical model that more accurately represents this two-level memory hierarchy, we can develop algorithms that are I/O-efficient and reduce the amount of disk I/O needed to solve a problem. In this thesis we aim to modernize GIS algorithms and develop a number of I/O-efficient algorithms for processing geographic data derived from massive elevation data sets. For each application, we convert a geographic question to an algorithmic question, develop an I/O-efficient algorithm that is theoretically efficient, implement our approach and verify its performance using real-world data. The applications we consider include constructing a gridded digital elevation model (DEM) from an irregularly spaced point cloud, removing topological noise from a DEM, modeling surface water flow over a terrain, extracting river networks and watershed hierarchies from the terrain, and locating polygons containing query points in a planar subdivision. We initially developed solutions to each of these applications individually. However, we also show how to combine individual solutions to form a scalable geo-processing pipeline that seamlessly solves a sequence of sub-problems with little or no manual intervention. We present experimental results that demonstrate orders of magnitude improvement over previously known algorithms.
Scalable clustering algorithms for continuous environmental flow cytometry.
Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill
2016-02-01
Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach in cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how available algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. hyrkas@cs.washington.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
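A minimal sketch of the core classification step with scikit-learn's Gaussian mixture model on synthetic two-population data; the paper's Hadoop-scale implementation and the actual SeaFlow features are not reproduced here:

```python
# Fit a GMM to per-particle features from one homogeneous section of the
# stream, then label particles by their most probable mixture component.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0, 0.3, (500, 2)),    # stand-in population A
                      rng.normal(2, 0.4, (700, 2))])   # stand-in population B
gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
labels = gmm.fit_predict(features)
print(np.bincount(labels))                             # particles per population
```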
Kim, Won Hwa; Chung, Moo K; Singh, Vikas
2013-01-01
The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
NASA Astrophysics Data System (ADS)
Iqtait, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Biometric is a pattern recognition system which is used for automatic recognition of persons based on the characteristics and features of an individual. Face recognition with a high recognition rate is still a challenging task and is usually accomplished in three phases consisting of face detection, feature extraction, and expression classification. Precise and robust location of trait points is a complicated and difficult issue in face recognition. Cootes proposed the Multi Resolution Active Shape Models (ASM) algorithm, which can extract a specified shape accurately and efficiently. Furthermore, as an improvement of ASM, the Active Appearance Models (AAM) algorithm was proposed to extract both the shape and texture of a specified object simultaneously. In this paper we give more details about the two algorithms and give the results of experiments, testing their performance on one dataset of faces. We found that the ASM is faster and achieves more accurate trait point location than the AAM, but the AAM achieves a better match to the texture.
Forest Road Identification and Extraction Through Advanced Log Matching Techniques
NASA Astrophysics Data System (ADS)
Zhang, W.; Hu, B.; Quist, L.
2017-10-01
A novel algorithm for forest road identification and extraction was developed. The algorithm utilized a Laplacian of Gaussian (LoG) filter and slope calculation on high resolution multispectral imagery and LiDAR data, respectively, to extract both primary and secondary road segments in the forest area. The proposed method used road shape features to extract the road segments, which were further processed as objects with orientation preserved. The road network was generated after post-processing with tensor voting. The proposed method was tested on Hearst Forest, located in central Ontario, Canada. Based on visual examination against manually digitized roads, the majority of roads in the test area were identified and extracted by the process.
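A hedged sketch of the LoG step on a single image band, assuming scipy; the scale and threshold values are illustrative, not those tuned for Hearst Forest:

```python
# Laplacian-of-Gaussian response at a scale matched to road width; bright
# ridge-like structures respond positively, and the top fraction is kept
# as road candidates.
import numpy as np
from scipy.ndimage import gaussian_laplace

def road_candidates(band, sigma=2.0, q=0.02):
    response = -gaussian_laplace(band.astype(float), sigma=sigma)  # ridges > 0
    return response > np.quantile(response, 1.0 - q)               # top q fraction
```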
Machine learning in soil classification.
Bhattacharya, B; Solomatine, D P
2006-03-01
In a number of engineering problems, e.g. in geotechnics, petroleum engineering, etc., intervals of measured series data (signals) are to be attributed a class maintaining the constraint of contiguity, and standard classification methods could be inadequate. Classification in this case needs involvement of an expert who observes the magnitude and trends of the signals in addition to any a priori information that might be available. In this paper, an approach for automating this classification procedure is presented. Firstly, a segmentation algorithm is developed and applied to segment the measured signals. Secondly, the salient features of these segments are extracted using the boundary energy method. Based on the measured data and extracted features, classifiers are built to assign classes to the segments; they employ Decision Trees, ANN and Support Vector Machines. The methodology was tested in classifying sub-surface soil using measured data from Cone Penetration Testing, and satisfactory results were obtained.
Computational screening of biomolecular adsorption and self-assembly on nanoscale surfaces.
Heinz, Hendrik
2010-05-01
The quantification of binding properties of ions, surfactants, biopolymers, and other macromolecules to nanometer-scale surfaces is often difficult experimentally and a recurring challenge in molecular simulation. A simple and computationally efficient method is introduced to compute quantitatively the energy of adsorption of solute molecules on a given surface. Highly accurate summation of Coulomb energies as well as precise control of temperature and pressure is required to extract the small energy differences in complex environments characterized by a large total energy. The method involves the simulation of four systems, the surface-solute-solvent system, the solute-solvent system, the solvent system, and the surface-solvent system under consideration of equal molecular volumes of each component under NVT conditions using standard molecular dynamics or Monte Carlo algorithms. Particularly in chemically detailed systems including thousands of explicit solvent molecules and specific concentrations of ions and organic solutes, the method takes into account the effect of complex nonbond interactions and rotational isomeric states on the adsorption behavior on surfaces. As a numerical example, the adsorption of a dodecapeptide on the Au {111} and mica {001} surfaces is described in aqueous solution. Copyright 2009 Wiley Periodicals, Inc.
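The four-system bookkeeping described above reduces to simple arithmetic once the average total energies are in hand; a sketch under the stated equal-volume NVT assumption, with invented energy values purely for illustration:

```python
# Adsorption energy from the four simulated systems:
# E_ads = E(surface+solute+solvent) + E(solvent)
#       - E(solute+solvent) - E(surface+solvent)
def adsorption_energy(e_surf_solute_solv, e_solute_solv, e_surf_solv, e_solv):
    """Negative values indicate favorable adsorption (units follow the inputs)."""
    return (e_surf_solute_solv + e_solv) - (e_solute_solv + e_surf_solv)

# Illustrative numbers only, in kcal/mol:
print(adsorption_energy(-1250.0, -830.0, -510.0, -120.0))   # -> -30.0
```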
Javidi, Soroush; Mandic, Danilo P.; Took, Clive Cheong; Cichocki, Andrzej
2011-01-01
A new class of complex domain blind source extraction algorithms suitable for the extraction of both circular and non-circular complex signals is proposed. This is achieved through sequential extraction based on the degree of kurtosis and in the presence of non-circular measurement noise. The existence and uniqueness analysis of the solution is followed by a study of fast converging variants of the algorithm. The performance is first assessed through simulations on well understood benchmark signals, followed by a case study on real-time artifact removal from EEG signals, verified using both qualitative and quantitative metrics. The results illustrate the power of the proposed approach in real-time blind extraction of general complex-valued sources. PMID:22319461
Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred from storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which utilizes the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but the amount of I/O required to access the indices from the disk at different time steps is also substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% disk-space savings when compared with existing techniques, while the isosurface extraction time is nearly optimal.
Filter bank common spatial patterns in mental workload estimation.
Arvaneh, Mahnaz; Umilta, Alberto; Robertson, Ian H
2015-01-01
EEG-based workload estimation technology provides a real-time means of assessing mental workload. Such technology can effectively enhance the performance of human-machine interaction and the learning process. When designing workload estimation algorithms, a crucial signal processing component is the feature extraction step. Despite several studies in this field, the spatial properties of the EEG signals have mostly been neglected. Since EEG inherently has a poor spatial resolution, features extracted individually from each EEG channel may not be sufficiently efficient. This problem becomes more pronounced when we use low-cost but convenient EEG sensors with limited stability, which is the case in practical scenarios. To address this issue, in this paper, we introduce a filter bank common spatial patterns algorithm combined with a feature selection method to extract spatio-spectral features discriminating different mental workload levels. To evaluate the proposed algorithm, we carry out a comparative analysis between two representative types of working memory tasks using data recorded from an Emotiv EPOC headset, which is a mobile low-cost EEG recording device. The experimental results showed that the proposed spatial filtering algorithm outperformed the state-of-the-art algorithms in terms of classification accuracy.
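One CSP block of such a filter bank can be written as a generalized eigendecomposition of the two class covariance matrices; a hedged numpy/scipy sketch with illustrative array shapes:

```python
# CSP spatial filters: generalized eigenvectors of the class covariances;
# log-variance of the extreme filtered channels forms the feature vector.
import numpy as np
from scipy.linalg import eigh

def csp_features(trials_a, trials_b, n_pairs=2):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    cov = lambda tr: np.mean([x @ x.T / np.trace(x @ x.T) for x in tr], axis=0)
    ca, cb = cov(trials_a), cov(trials_b)
    _, w = eigh(ca, ca + cb)                     # generalized eigendecomposition
    filters = np.hstack([w[:, :n_pairs], w[:, -n_pairs:]])   # extreme eigenvectors

    def features(trial):
        z = filters.T @ trial
        var = z.var(axis=1)
        return np.log(var / var.sum())
    return features
```

In a filter bank CSP, this block is repeated per frequency band and the resulting features are pooled before selection.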
A method for velocity signal reconstruction of AFDISAR/PDV based on crazy-climber algorithm
NASA Astrophysics Data System (ADS)
Peng, Ying-cheng; Guo, Xian; Xing, Yuan-ding; Chen, Rong; Li, Yan-jie; Bai, Ting
2017-10-01
The resolution of the continuous wavelet transform (CWT) varies with frequency. Exploiting this property, the time-frequency signal of the coherent signal obtained by the All Fiber Displacement Interferometer System for Any Reflector (AFDISAR) is extracted. The crazy-climber algorithm is adopted to extract the wavelet ridge, from which the velocity history curve of the measured object is obtained. A numerical simulation is carried out; the reconstructed signal is completely consistent with the original signal, which verifies the accuracy of the algorithm. The vibration of a loudspeaker and of the free end of a Hopkinson incident bar under impact loading are measured by AFDISAR, and the measured coherent signals are processed. Velocity signals of the loudspeaker and of the free end of the Hopkinson incident bar are reconstructed respectively. Compared with the theoretical calculation, the error in the arrival time difference of the particle vibration at the free end of the Hopkinson incident bar is 2 μs. The results indicate that the algorithm is highly accurate and highly adaptable to signals with different time-frequency features. The algorithm overcomes the limitation of manually adjusting the time window according to the signal variation when adopting the STFT, and is suitable for extracting signals measured by AFDISAR.
An IoT-Enabled Stroke Rehabilitation System Based on Smart Wearable Armband and Machine Learning.
Yang, Geng; Deng, Jia; Pang, Gaoyang; Zhang, Hao; Li, Jiayi; Deng, Bin; Pang, Zhibo; Xu, Juan; Jiang, Mingzhe; Liljeberg, Pasi; Xie, Haibo; Yang, Huayong
2018-01-01
Surface electromyography signals play an important role in hand function recovery training. In this paper, an IoT-enabled stroke rehabilitation system is introduced which is based on a smart wearable armband (SWA), machine learning (ML) algorithms, and a 3-D printed dexterous robot hand. User comfort is one of the key issues which should be addressed for wearable devices. The SWA was developed by integrating a low-power and tiny-sized IoT sensing device with textile electrodes, which can measure, pre-process, and wirelessly transmit bio-potential signals. By evenly distributing surface electrodes over the user's forearm, the drawback of poor classification accuracy can be mitigated. A new method was put forward to find the optimal feature set. ML algorithms were leveraged to analyze and discriminate features of different hand movements, and their performances were appraised by classification complexity estimating algorithms and principal components analysis. According to the verification results, all nine gestures can be successfully identified with an average accuracy of up to 96.20%. In addition, a 3-D printed five-finger robot hand was implemented for hand rehabilitation training purposes. Correspondingly, the user's hand movement intentions were extracted and converted into a series of commands which were used to drive the motors assembled inside the dexterous robot hand. As a result, the dexterous robot hand can mimic the user's gesture in a real-time manner, which shows the proposed system can be used as a training tool to facilitate the rehabilitation process for patients after stroke.
Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data
NASA Astrophysics Data System (ADS)
Du, L.; Zhong, R.; Sun, H.; Wu, Q.
2017-09-01
An automated method for tunnel deformation monitoring using high density point clouds data is presented. Firstly, the 3D point clouds data are converted to a two-dimensional surface by projection on the XOY plane; the projection point set of the central axis on the XOY plane, named Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sampling Consistency) algorithm, and then the projection point set of the central axis on the YOZ plane, named Uyoz, is obtained from the highest and lowest points, which are extracted by intersecting straight lines that pass through each point of Uxoy perpendicular to the two-dimensional surface with the tunnel point clouds; Uxoy and Uyoz together form the 3D central axis. Secondly, the buffer of each cross section is calculated by the K-Nearest Neighbor algorithm, and the initial cross-sectional point set is quickly constructed by the projection method. Finally, the cross sections are denoised and the section lines are fitted using the method of iterative ellipse fitting. In order to improve the accuracy of the cross section, a fine adjustment method is proposed to rotate the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is used in the Shanghai subway tunnel, and the deformation of each section in the direction of 0 to 360 degrees is calculated. The result shows that the cross sections have deformed from regular circles into flattened circles due to the great pressure at the top of the tunnel.
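The per-section ellipse fit can start from a direct least-squares conic fit; a sketch of that non-iterative starting point (the paper's iterative fitting and denoising loop builds on something like this):

```python
# Fit the general conic a*x^2 + b*xy + c*y^2 + d*x + e*y + f = 0 in least
# squares with ||params|| = 1 via the smallest right-singular vector.
import numpy as np

def fit_conic(x, y):
    design = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(design, full_matrices=False)
    return vt[-1]           # conic coefficients, up to scale

theta = np.linspace(0, 2 * np.pi, 200)
x = 5.0 * np.cos(theta) + 0.01 * np.random.default_rng(1).normal(size=200)
y = 4.8 * np.sin(theta)                        # a slightly flattened test ring
print(fit_conic(x, y))
```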
Road detection in SAR images using a tensor voting algorithm
NASA Astrophysics Data System (ADS)
Shen, Dajiang; Hu, Chun; Yang, Bing; Tian, Jinwen; Liu, Jian
2007-11-01
In this paper, the problem of the detection of road networks in Synthetic Aperture Radar (SAR) images is addressed. Most of the previous methods extract the road by line detection and network reconstruction. Traditional algorithms such as MRFs, GA, and Level Set, used in the process of reconstruction, are iterative. The tensor voting methodology we propose is non-iterative and non-sensitive to initialization. Furthermore, the only free parameter is the size of the neighborhood, related to the scale. The algorithm we present is verified to be effective when applied to road extraction from real Radarsat images.
An improved feature extraction algorithm based on KAZE for multi-spectral image
NASA Astrophysics Data System (ADS)
Yang, Jianping; Li, Jun
2018-02-01
Multi-spectral images contain abundant spectral information and are widely used in fields like resource exploration, meteorological observation and modern military applications. Image preprocessing, such as image feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although feature matching algorithms based on linear scale space, such as SIFT and SURF, are strongly robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, which is based on nonlinear scale space, to raise the number of features and to enhance the matching rate by using the adjusted-cosine vector. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.
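For reference, baseline KAZE detection and L2 matching are available in OpenCV; in this sketch the Lowe ratio test stands in for the paper's adjusted-cosine matching measure, which is the part the authors modify. File names are hypothetical:

```python
# Baseline (unmodified) KAZE feature detection and descriptor matching.
import cv2

img1 = cv2.imread('band_a.png', cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
img2 = cv2.imread('band_b.png', cv2.IMREAD_GRAYSCALE)
kaze = cv2.KAZE_create()
kp1, des1 = kaze.detectAndCompute(img1, None)
kp2, des2 = kaze.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test
print(len(kp1), len(kp2), len(good))
```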
Lee, Jinseok; Chon, Ki H
2010-09-01
We present particle filtering (PF) algorithms for an accurate respiratory rate extraction from pulse oximeter recordings over a broad range: 12-90 breaths/min. These methods are based on an autoregressive (AR) model, where the aim is to find the pole angle with the highest magnitude as it corresponds to the respiratory rate. However, when SNR is low, the pole angle with the highest magnitude may not always lead to accurate estimation of the respiratory rate. To circumvent this limitation, we propose a probabilistic approach, using a sequential Monte Carlo method, named PF, which is combined with the optimal parameter search (OPS) criterion for an accurate AR model-based respiratory rate extraction. The PF technique has been widely adopted in many tracking applications, especially for nonlinear and/or non-Gaussian problems. We examine the performances of five different likelihood functions of the PF algorithm: the strongest neighbor, nearest neighbor (NN), weighted nearest neighbor (WNN), probability data association (PDA), and weighted probability data association (WPDA). The performance of these five combined OPS-PF algorithms was measured against a solely OPS-based AR algorithm for respiratory rate extraction from pulse oximeter recordings. The pulse oximeter data were collected from 33 healthy subjects with breathing rates ranging from 12 to 90 breaths/min. It was found that significant improvement in accuracy can be achieved by employing particle filters, and that the combined OPS-PF employing either the NN or WNN likelihood function achieved the best results for all respiratory rates considered in this paper. The main advantage of the combined OPS-PF with either the NN or WNN likelihood function is that for the first time, respiratory rates as high as 90 breaths/min can be accurately extracted from pulse oximeter recordings.
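A sketch of the underlying AR-pole idea, assuming a detrended, evenly resampled waveform; the Yule-Walker fit and the naive largest-magnitude pole pick shown here are exactly what the OPS-PF machinery above is designed to make robust:

```python
# Fit an AR model, find the dominant pole, convert its angle to breaths/min.
import numpy as np

def ar_respiratory_rate(x, fs, order=10):
    x = np.asarray(x, float) - np.mean(x)
    # Yule-Walker: solve the Toeplitz system built from the autocorrelation
    r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) + order] / len(x)
    toeplitz = np.array([[r[abs(i - j)] for j in range(order)]
                         for i in range(order)])
    a = np.linalg.solve(toeplitz, r[1:order + 1])
    poles = np.roots(np.concatenate(([1.0], -a)))
    dominant = poles[np.argmax(np.abs(poles))]       # naive pick; see text
    return abs(np.angle(dominant)) * fs / (2 * np.pi) * 60.0   # breaths/min
```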
Sensor feature fusion for detecting buried objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, G.A.; Sengupta, S.K.; Sherwood, R.J.
1993-04-01
Given multiple registered images of the earth's surface from dual-band sensors, our system fuses information from the sensors to reduce the effects of clutter and improve the ability to detect buried or surface target sites. The sensor suite currently includes two sensors (5 micron and 10 micron wavelengths) and one ground penetrating radar (GPR) of the wide-band pulsed synthetic aperture type. We use a supervised learning pattern recognition approach to detect metal and plastic land mines buried in soil. The overall process consists of four main parts: preprocessing, feature extraction, feature selection, and classification. These parts are used in a two-step process to classify a subimage. The first step, referred to as feature selection, determines the features of sub-images which result in the greatest separability among the classes. The second step, image labeling, uses the selected features and the decisions from a pattern classifier to label the regions in the image which are likely to correspond to buried mines. We extract features from the images, and use feature selection algorithms to select only the most important features according to their contribution to correct detections. This allows us to save computational complexity and determine which of the sensors add value to the detection system. The most important features from the various sensors are fused using supervised learning pattern classifiers (including neural networks). We present results of experiments to detect buried land mines from real data, and evaluate the usefulness of fusing feature information from multiple sensor types, including dual-band infrared and ground penetrating radar. The novelty of the work lies mostly in the combination of the algorithms and their application to the very important and currently unsolved operational problem of detecting buried land mines from an airborne standoff platform.
CAD system for footwear design based on whole real 3D data of last surface
NASA Astrophysics Data System (ADS)
Song, Wanzhong; Su, Xianyu
2000-10-01
Two major parts of the application of CAD in footwear design are studied: the development of the last surface and the computer-aided design of the planar shoe-template. A new quasi-experiential development algorithm for the last surface based on triangulation approximation is presented. This development algorithm consumes less time and does not need any interactive operation for precise development compared with other development algorithms for last surfaces. Based on this algorithm, a software package, SHOEMAKER™, which contains computer-aided automatic measurement, automatic development of the last surface, and computer-aided design of the shoe-template, has been developed.
Improved classification accuracy by feature extraction using genetic algorithms
NASA Astrophysics Data System (ADS)
Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.
2003-05-01
A feature extraction algorithm has been developed for the purpose of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators crossover, mutation, and deletion / reactivation - the last of these effects dimensionality reduction. The feature extractor is supervised, and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom, and with images of multiple sclerosis patients, classification with features derived by the feature extractor yielded lower error rates than classification using standard pulse sequences or features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error on pure tissues.
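A minimal sketch of this kind of hybrid follows: a population of linear recombination matrices is evolved with crossover and mutation, and each child is locally refined by accepting small perturbations only when they improve a Fisher-style separability score. The fitness function, operators and all parameters are illustrative assumptions, not the paper's settings.

```python
# GA-plus-hill-climber sketch for learning a linear feature recombination W.
import numpy as np

rng = np.random.default_rng(1)

def fisher_score(W, X, y):
    Z = X @ W.T                                    # recombined features
    classes = np.unique(y)
    mu = Z.mean(axis=0)
    between = sum((Z[y == c].mean(0) - mu) ** 2 for c in classes).sum()
    within = sum(Z[y == c].var(0).sum() for c in classes) + 1e-9
    return between / within

def evolve(X, y, out_dim=2, pop=20, gens=50):
    d = X.shape[1]
    P = [rng.normal(size=(out_dim, d)) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda W: -fisher_score(W, X, y))
        elite = P[: pop // 2]
        children = []
        for W in elite:
            mate = elite[rng.integers(len(elite))]
            mask = rng.random(W.shape) < 0.5       # uniform crossover
            child = np.where(mask, W, mate)
            child = child + rng.normal(0, 0.1, W.shape) * (rng.random(W.shape) < 0.1)
            # hill-climb step: keep a small perturbation only if it helps
            trial = child + rng.normal(0, 0.01, W.shape)
            if fisher_score(trial, X, y) > fisher_score(child, X, y):
                child = trial
            children.append(child)
        P = elite + children
    return max(P, key=lambda W: fisher_score(W, X, y))
```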
NASA Astrophysics Data System (ADS)
Kamangir, H.; Momeni, M.; Satari, M.
2017-09-01
This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. The present paper addresses the automated extraction of roads covered with multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. In order to achieve precise road extraction, the method comprises three stages: classification of images based on the maximum likelihood algorithm to categorize images into the classes of interest, modification of the classified images by connected-component and morphological operators to extract the pixels of desired objects by removing undesirable pixels of each class, and finally line extraction based on the RANSAC algorithm. In order to evaluate the performance of the proposed method, the generated results are compared with a ground truth road map as a reference. Evaluation on representative test images shows completeness values ranging between 77% and 93%.
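For the final stage, a generic RANSAC line fit over candidate road pixels might look like the sketch below; the iteration count and distance tolerance are illustrative assumptions, not the paper's settings.

```python
# RANSAC line extraction sketch over candidate road-pixel coordinates.
import numpy as np

def ransac_line(points, n_iter=500, tol=2.0, seed=2):
    """points: (N, 2) array; returns a boolean inlier mask for the best line."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        dx, dy = p2 - p1
        norm = np.hypot(dx, dy)
        if norm < 1e-9:
            continue
        # perpendicular distance of every point to the candidate line
        dist = np.abs(dx * (points[:, 1] - p1[1]) - dy * (points[:, 0] - p1[0])) / norm
        inliers = dist < tol
        if inliers.sum() > best.sum():              # keep the best-supported line
            best = inliers
    return best
```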
Bousquet, P-J; Caillet, P; Coeuret-Pellicer, M; Goulard, H; Kudjawu, Y C; Le Bihan, C; Lecuyer, A I; Séguret, F
2017-10-01
The development and use of healthcare databases accentuates the need for dedicated tools, including validated selection algorithms of cancer diseased patients. As part of the development of the French National Health Insurance System data network REDSIAM, the tumor taskforce established an inventory of national and internal published algorithms in the field of cancer. This work aims to facilitate the choice of a best-suited algorithm. A non-systematic literature search was conducted for various cancers. Results are presented for lung, breast, colon, and rectum. Medline, Scopus, the French Database in Public Health, Google Scholar, and the summaries of the main French journals in oncology and public health were searched for publications until August 2016. An extraction grid adapted to oncology was constructed and used for the extraction process. A total of 18 publications were selected for lung cancer, 18 for breast cancer, and 12 for colorectal cancer. Validation studies of algorithms are scarce. When information is available, the performance and choice of an algorithm are dependent on the context, purpose, and location of the planned study. Accounting for cancer disease specificity, the proposed extraction chart is more detailed than the generic chart developed for other REDSIAM taskforces, but remains easily usable in practice. This study illustrates the complexity of cancer detection through sole reliance on healthcare databases and the lack of validated algorithms specifically designed for this purpose. Studies that standardize and facilitate validation of these algorithms should be developed and promoted. Copyright © 2017. Published by Elsevier Masson SAS.
NASA Astrophysics Data System (ADS)
Elmore, K. L.
2016-12-01
The Meteorological Phenomena Identification Near the Ground (mPING) project is an example of a crowd-sourced, citizen science effort to gather data of sufficient quality and quantity needed by new post-processing methods that use machine learning. Transportation and infrastructure are particularly sensitive to precipitation type in winter weather. We extract attributes from operational numerical forecast models and use them in a random forest to generate forecast winter precipitation types. We find that random forests applied to forecast soundings are effective at generating skillful forecasts of surface precipitation type (ptype), with considerably more skill than the current algorithms, especially for ice pellets and freezing rain. We also find that three very different forecast models yield similar overall results, showing that random forests are able to extract essentially equivalent information from different forecast models. We also show that the random forest for each model and each profile type is unique to the particular forecast model, and that the random forests developed using a particular model suffer significant degradation when given attributes derived from a different model. This implies that no single algorithm can perform well across all forecast models. Clearly, random forests extract information unavailable to "physically based" methods because the physical information in the models does not appear as we expect. One interesting result is that results from the classic "warm nose" sounding profile are, by far, the most sensitive to the particular forecast model, but this profile is also the one for which random forests are most skillful. Finally, a method for calibrating probabilities for each different ptype using multinomial logistic regression is shown.
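A hedged sketch of this pipeline is given below: a random forest over sounding-derived attributes, with its class probabilities recalibrated by multinomial logistic regression on a held-out set. The feature dimensions, labels and random data are hypothetical stand-ins.

```python
# Random forest ptype forecast with multinomial logistic calibration (sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# X: per-sounding attributes (e.g., layer temperatures, wet-bulb depths);
# y: observed ptype label (0=rain, 1=snow, 2=ice pellets, 3=freezing rain)
X_train, y_train = rng.normal(size=(1000, 12)), rng.integers(0, 4, 1000)
X_cal, y_cal = rng.normal(size=(300, 12)), rng.integers(0, 4, 300)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
# Calibrate the forest's class probabilities with multinomial logistic regression
cal = LogisticRegression(max_iter=1000).fit(rf.predict_proba(X_cal), y_cal)
calibrated_probs = cal.predict_proba(rf.predict_proba(X_cal))
```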
NASA Astrophysics Data System (ADS)
Chaa, Mourad; Boukezzoula, Naceur-Eddine; Attia, Abdelouahab
2017-01-01
Two types of scores, extracted from two-dimensional (2-D) and three-dimensional (3-D) palmprints, are merged for personal recognition systems, introducing a local image descriptor for 2-D palmprint-based recognition systems named bank of binarized statistical image features (B-BSIF). The main idea of B-BSIF is that the histograms extracted from the binarized statistical image features (BSIF) code images (the results of applying different BSIF descriptor sizes with length 12) are concatenated into one to produce a large feature vector. The 3-D palmprint contains the depth information of the palm surface. The self-quotient image (SQI) algorithm is applied for reconstructing illumination-invariant 3-D palmprint images. To extract discriminative Gabor features from SQI images, Gabor wavelets are defined and used. Indeed, dimensionality reduction methods have shown their value in biometric systems. Given this, a principal component analysis (PCA) + linear discriminant analysis (LDA) technique is employed. For the matching process, the cosine Mahalanobis distance is applied. Extensive experiments were conducted on a 2-D and 3-D palmprint database with 10,400 range images from 260 individuals. Then, a comparison was made between the proposed algorithm and other existing methods in the literature. Results clearly show that the proposed framework provides a higher correct recognition rate. Furthermore, the best results were obtained by merging the score of the B-BSIF descriptor with the score of the SQI + Gabor wavelets + PCA + LDA method, yielding an equal error rate of 0.00% and a rank-1 recognition rate of 100.00%.
Alsbou, Nesreen; Ahmad, Salahuddin; Ali, Imad
2016-05-17
A motion algorithm has been developed to extract the length, CT number level and motion amplitude of a mobile target from cone-beam CT (CBCT) images. The algorithm uses measurable parameters - the apparent length and the blurred CT number distribution of the mobile target obtained from CBCT images - to determine the length, the CT number of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different, well-known sizes, made from tissue-equivalent gel inserted into a thorax phantom. The phantom moves sinusoidally in one direction to simulate respiratory motion, using eight amplitudes ranging from 0 to 20 mm. Using this motion algorithm, three unknown parameters are extracted from CBCT images: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solves for the three unknown parameters using the measured length, CT number level and gradient of a well-defined mobile target obtained from CBCT images. The motion model agrees with the measured lengths, which depend on the target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the target length and the motion amplitude. Motion frequency and phase do not affect the elongation and CT number distribution of the mobile target and could not be determined. In summary, a motion algorithm has been developed to extract three parameters - length, CT number level, and motion amplitude or speed - of mobile targets directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without the requirement of motion tracking and sorting of the images into different breathing phases. The motion model developed here works well for tumors that have simple shapes and high contrast relative to surrounding tissues, and that move in a nearly regular motion pattern that can be approximated with a simple sinusoidal function. This algorithm has potential applications in diagnostic CT imaging and radiotherapy in terms of motion management.
Segmentation of financial seals and its implementation on a DSP-based system
NASA Astrophysics Data System (ADS)
He, Jin; Liu, Tiegen; Guo, Jingjing; Zhang, Hao
2009-11-01
Automatic seal imprint identification is an important part of modern financial security. Accurate segmentation is the basis of correct identification. In this paper, a DSP (digital signal processor) based identification system was designed, and an adaptive algorithm was proposed to extract binary seal images from financial instruments. As the kernel of the identification system, a TMS320DM642 DSP chip was used to implement the image processing and to control and coordinate the work of each system module. The proposed algorithm consists of three stages: extraction of the grayscale seal image, denoising, and binarization. A grayscale seal image is extracted by color transform from a financial instrument image. Adaptive morphological operations are used to highlight details of the extracted grayscale seal image and smooth the background. After median filtering for noise elimination, the filtered seal image is binarized by Otsu's method. The algorithm was developed in the DSP development environment CCS with the real-time operating system DSP/BIOS. To simplify the implementation of the proposed algorithm, the white balance calibration and the coarse positioning of the seal imprint were implemented by the TMS320DM642 controlling image acquisition. The IMGLIB library of the TMS320DM642 was used to improve efficiency. The experimental results showed that financial seal imprints, even with intricate and dense strokes, can be correctly segmented by the proposed algorithm. Adhesion and incompleteness distortions in the segmentation results were reduced, even when the original seal imprint had poor quality.
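An OpenCV sketch of the three-stage idea (color-based extraction, denoising, Otsu binarization) follows. The red-channel color transform is an illustrative assumption standing in for the paper's transform, and the kernel sizes are hypothetical.

```python
# Seal-image segmentation sketch: color transform, morphology + median
# filtering for denoising, then Otsu thresholding.
import cv2
import numpy as np

def segment_seal(bgr):
    b, g, r = cv2.split(bgr.astype(np.int16))
    # emphasize red seal strokes against text/background (assumed transform)
    gray = np.clip(r - (b + g) // 2, 0, 255).astype(np.uint8)
    gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
    gray = cv2.medianBlur(gray, 3)                 # impulse-noise removal
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```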
Analysis on Difference of Forest Phenology Extracted from EVI and LAI Based on PhenoCams
NASA Astrophysics Data System (ADS)
Wang, C.; Jing, L.; Qinhuo, L.
2017-12-01
Land surface phenology can make up for the deficiencies of field observation, with the advantage of capturing the continuous expression of phenology on a large scale. However, there is some variability in phenological metrics derived from different satellite time-series data of vegetation parameters. This paper aims at assessing the difference between phenology information extracted from EVI and LAI time series. To achieve this, several web-camera sites were selected to analyze the characteristics of MODIS-EVI and MODIS-LAI time series from 2010 to 2014 for different forest types, including evergreen coniferous forest, evergreen broadleaf forest, deciduous coniferous forest and deciduous broadleaf forest. At the same time, satellite-based phenological metrics were extracted by a logistic fitting algorithm and compared with camera-based phenological metrics. Results show that the SOS and EOS extracted from LAI are close to bud burst and leaf defoliation respectively, while the SOS and EOS extracted from EVI are close to leaf unfolding and leaf coloring respectively. Thus the SOS extracted from LAI is earlier than that from EVI, while the EOS extracted from LAI is later than that from EVI at deciduous forest sites. Although the seasonal variation characteristics of evergreen forests are not apparent, significant discrepancies exist between LAI time series and EVI time series. In addition, satellite- and camera-based phenological metrics generally agree well, but EVI has a higher correlation with the camera-based canopy greenness (green chromatic coordinate, gcc) than LAI.
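A common way to realize such logistic-based metric extraction is to fit a double-logistic curve to the annual series and read SOS/EOS from its inflection parameters; the sketch below does this on synthetic EVI data. The parameterization, noise level and initial guess are assumptions, not the paper's configuration.

```python
# Double-logistic phenology fit: SOS/EOS taken from the inflection parameters.
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, sos, k1, eos, k2):
    return base + amp * (1 / (1 + np.exp(-k1 * (t - sos)))
                         - 1 / (1 + np.exp(-k2 * (t - eos))))

doy = np.arange(1, 366, 8)                         # 8-day composites
evi = double_logistic(doy, 0.2, 0.4, 120, 0.1, 280, 0.08) \
      + np.random.default_rng(4).normal(0, 0.02, doy.size)
p0 = [evi.min(), np.ptp(evi), 100, 0.1, 270, 0.1]  # rough initial guess
params, _ = curve_fit(double_logistic, doy, evi, p0=p0, maxfev=10000)
sos, eos = params[2], params[4]                    # start/end of season (DOY)
```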
Wang, Wensheng; Nie, Ting; Fu, Tianjiao; Ren, Jianyue; Jin, Longxu
2017-01-01
In target detection of optical remote sensing images, two main obstacles for aircraft target detection are how to extract the candidates in complex gray-scale-multi background and how to confirm the targets in case the target shapes are deformed, irregular or asymmetric, such as that caused by natural conditions (low signal-to-noise ratio, illumination condition or swaying photographing) and occlusion by surrounding objects (boarding bridge, equipment). To solve these issues, an improved active contours algorithm, namely region-scalable fitting energy based threshold (TRSF), and a corner-convex hull based segmentation algorithm (CCHS) are proposed in this paper. Firstly, the maximal variance between-cluster algorithm (Otsu’s algorithm) and region-scalable fitting energy (RSF) algorithm are combined to solve the difficulty of targets extraction in complex and gray-scale-multi backgrounds. Secondly, based on inherent shapes and prominent corners, aircrafts are divided into five fragments by utilizing convex hulls and Harris corner points. Furthermore, a series of new structure features, which describe the proportion of targets part in the fragment to the whole fragment and the proportion of fragment to the whole hull, are identified to judge whether the targets are true or not. Experimental results show that TRSF algorithm could improve extraction accuracy in complex background, and that it is faster than some traditional active contours algorithms. The CCHS is effective to suppress the detection difficulties caused by the irregular shape. PMID:28481260
Making adjustments to event annotations for improved biological event extraction.
Baek, Seung-Cheol; Park, Jong C
2016-09-16
Current state-of-the-art approaches to biological event extraction train statistical models in a supervised manner on corpora annotated with event triggers and event-argument relations. Inspecting such corpora, we observe that there is ambiguity in the span of event triggers (e.g., "transcriptional activity" vs. "transcriptional"), leading to inconsistencies across event trigger annotations. Such inconsistencies make it quite likely that similar phrases are annotated with different spans of event triggers, suggesting the possibility that a statistical learning algorithm misses an opportunity to generalize from such event triggers. We anticipate that adjustments to the spans of event triggers to reduce these inconsistencies would meaningfully improve the present performance of event extraction systems. In this study, we look into this possibility with the corpora provided by the 2009 BioNLP shared task as a proof of concept. We propose an Informed Expectation-Maximization (EM) algorithm, which trains models using the EM algorithm with a posterior regularization technique that consults the gold-standard event trigger annotations in the form of constraints. We further propose four constraints on the possible event trigger annotations to be explored by the EM algorithm. The algorithm is shown to outperform the state-of-the-art algorithm on the development corpus in a statistically significant manner, and on the test corpus by a narrow margin. The analysis of the annotations generated by the algorithm shows that there are various types of ambiguity in event annotations, even though they may be small in number.
Blind column selection protocol for two-dimensional high performance liquid chromatography.
Burns, Niki K; Andrighetto, Luke M; Conlan, Xavier A; Purcell, Stuart D; Barnett, Neil W; Denning, Jacquie; Francis, Paul S; Stevenson, Paul G
2016-07-01
The selection of two orthogonal columns for two-dimensional high performance liquid chromatography (LC×LC) separation of natural product extracts can be a labour-intensive and time-consuming process, and in many cases is an entirely trial-and-error approach. This paper introduces a blind optimisation method for column selection from a black box of constituent components. A data processing pipeline, created in the open source application OpenMS®, was developed to map the components within a mixture of equal mass across a library of HPLC columns; LC×LC separation space utilisation was compared by measuring the fractional surface coverage, fcoverage. It was found that for a test mixture from an opium poppy (Papaver somniferum) extract, the combination of diphenyl and C18 stationary phases provided a predicted fcoverage of 0.48, which was matched by an actual usage of 0.43. OpenMS®, in conjunction with algorithms designed in house, allowed for a significantly quicker selection of two orthogonal columns, optimised for the LC×LC separation of crude extracts of plant material. Copyright © 2016 Elsevier B.V. All rights reserved.
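As a rough illustration of the coverage metric, the sketch below grids the normalized 2-D retention-time plane and counts occupied bins; the 10×10 grid and normalized retention times are assumptions, not the paper's settings.

```python
# Fractional surface coverage of the 2-D separation space (sketch).
import numpy as np

def f_coverage(rt1, rt2, bins=10):
    """rt1, rt2: normalized (0-1) retention times of detected components."""
    H, _, _ = np.histogram2d(rt1, rt2, bins=bins, range=[[0, 1], [0, 1]])
    return (H > 0).sum() / H.size                  # fraction of occupied bins

rng = np.random.default_rng(5)
print(f_coverage(rng.random(50), rng.random(50)))  # e.g. ~0.4
```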
Kim, Eon; Ehrmann, Klaus; Uhlhorn, Stephen; Borja, David; Arrieta-Quintero, Esdras; Parel, Jean-Marie
2011-01-01
Presbyopia is an age-related, gradual loss of accommodation, mainly due to changes in the crystalline lens. As part of research efforts to understand and cure this condition, ex vivo, cross-sectional optical coherence tomography images of crystalline lenses were obtained using the Ex-Vivo Accommodation Simulator (EVAS II) instrument and analyzed to extract their physical and optical properties. Various filters and edge detection methods were applied to isolate the edge contour. An ellipse is fitted to the lens outline to obtain a central reference point for transforming the pixel data into the analysis coordinate system. This allows the fitting of a high-order equation to obtain a mathematical description of the edge contour, which obeys constraints of continuity as well as zero-to-infinite surface slopes from apex to equator. Geometrical parameters of the lens were determined from lens images captured at different accommodative states. Various curve fitting functions were developed to mathematically describe the anterior and posterior surfaces of the lens. Their differences were evaluated and their suitability for extracting the optical performance of the lens was assessed. The robustness of these algorithms was tested by analyzing the same images repeatedly.
A digital system for surface reconstruction
Zhou, Weiyang; Brock, Robert H.; Hopkins, Paul F.
1996-01-01
A digital photogrammetric system, STEREO, was developed to determine three dimensional coordinates of points of interest (POIs) defined with a grid on a textureless and smooth-surfaced specimen. Two CCD cameras were set up with unknown orientation and recorded digital images of a reference model and a specimen. Points on the model were selected as control or check points for calibrating or assessing the system. A new algorithm for edge-detection called local maximum convolution (LMC) helped extract the POIs from the stereo image pairs. The system then matched the extracted POIs and used a least squares “bundle” adjustment procedure to solve for the camera orientation parameters and the coordinates of the POIs. An experiment with STEREO found that the standard deviation of the residuals at the check points was approximately 24%, 49% and 56% of the pixel size in the X, Y and Z directions, respectively. The average of the absolute values of the residuals at the check points was approximately 19%, 36% and 49% of the pixel size in the X, Y and Z directions, respectively. With the graphical user interface, STEREO demonstrated a high degree of automation and its operation does not require special knowledge of photogrammetry, computers or image processing.
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.
2006-01-01
Retrieving surface longwave radiation from space has been a difficult task, since the surface downwelling longwave radiation (SDLW) is an integration of radiation emitted by the entire atmosphere, while radiation emitted from the upper atmosphere is absorbed before reaching the surface. It is particularly problematic when thick clouds are present, since thick clouds virtually block all the longwave radiation from above, while satellites observe atmospheric emissions mostly from above the clouds. Zhou and Cess developed an algorithm for retrieving SDLW based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear-sky SDLW with the surface upwelling longwave flux and column precipitable water vapor. For cloudy-sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite its simplicity, their algorithm performed very well for most geographical regions, except for those regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for areas covered with ice clouds. An improved version of the algorithm was developed that prevents the large errors in the SDLW at low water vapor amounts. The new algorithm also utilizes the cloud fraction and the cloud liquid and ice water paths measured from the Cloud and the Earth's Radiant Energy System (CERES) satellites to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for the Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better than or comparable to more sophisticated algorithms currently implemented in the CERES processing. It will be incorporated in the CERES project as one of the empirical surface radiation algorithms.
Du, Cheng-Jin; Sun, Da-Wen; Jackman, Patrick; Allen, Paul
2008-12-01
An automatic method for estimating the content of intramuscular fat (IMF) in beef M. longissimus dorsi (LD) was developed using a sequence of image processing algorithms. To extract IMF particles within the LD muscle from the structural features of the intermuscular fat surrounding the muscle, a three-step image processing algorithm was developed: bilateral filtering for noise removal, kernel fuzzy c-means clustering (KFCM) for segmentation, and vector confidence connected and flood fill for IMF extraction. The technique of bilateral filtering was first applied to reduce the noise and enhance the contrast of the beef image. KFCM was then used to segment the filtered beef image into lean, fat, and background. The IMF was finally extracted from the original beef image by using the techniques of vector confidence connected and flood filling. The performance of the developed algorithm was verified by correlation analysis between the IMF characteristics and the percentage of chemically extractable IMF content (P<0.05). Five IMF features are very significantly correlated with the fat content (P<0.001), including the count densities of middle (CDMiddle) and large (CDLarge) fat particles, the area densities of middle and large fat particles, and the total fat area per unit LD area. The highest coefficient is 0.852, for CDLarge.
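A simplified sketch of the first two steps is given below: bilateral filtering followed by fuzzy c-means clustering of pixel intensities into lean/fat/background. Plain FCM stands in for the paper's kernel FCM, the synthetic image is a hypothetical stand-in, and all parameters are assumptions.

```python
# Bilateral filter + fuzzy c-means segmentation sketch (plain FCM).
import numpy as np
import cv2

def fcm_gray(values, c=3, m=2.0, n_iter=30, rng=np.random.default_rng(6)):
    centers = rng.choice(values, c)
    for _ in range(n_iter):
        d = np.abs(values[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
        centers = (u ** m * values[:, None]).sum(0) / (u ** m).sum(0)
    return centers, u

# stand-in for a grayscale LD muscle image
img = np.random.default_rng(6).integers(0, 256, (128, 128), dtype=np.uint8)
smooth = cv2.bilateralFilter(img, d=9, sigmaColor=50, sigmaSpace=50)
centers, u = fcm_gray(smooth.ravel().astype(np.float64))
labels = u.argmax(axis=1).reshape(img.shape)
fat_mask = (labels == centers.argmax()).astype(np.uint8) * 255  # brightest class
```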
Atom based grain extraction and measurement of geometric properties
NASA Astrophysics Data System (ADS)
Martine La Boissonière, Gabriel; Choksi, Rustum
2018-04-01
We introduce an accurate, self-contained and automatic atom based numerical algorithm to characterize grain distributions in two dimensional Phase Field Crystal (PFC) simulations. We compare the method with hand segmented and known test grain distributions to show that the algorithm is able to extract grains and measure their area, perimeter and other geometric properties with high accuracy. Four input parameters must be set by the user and their influence on the results is described. The method is currently tuned to extract data from PFC simulations in the hexagonal lattice regime but the framework may be extended to more general problems.
Learning Behavior Characterization with Multi-Feature, Hierarchical Activity Sequences
ERIC Educational Resources Information Center
Ye, Cheng; Segedy, James R.; Kinnebrew, John S.; Biswas, Gautam
2015-01-01
This paper discusses Multi-Feature Hierarchical Sequential Pattern Mining, MFH-SPAM, a novel algorithm that efficiently extracts patterns from students' learning activity sequences. This algorithm extends an existing sequential pattern mining algorithm by dynamically selecting the level of specificity for hierarchically-defined features…
NASA Astrophysics Data System (ADS)
Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra
2016-07-01
Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosondes, and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to the transmitted frequency. The critical frequencies of ionospheric layers and the virtual heights, which provide useful information about ionospheric structure, can be extracted from ionograms. Ionograms also indicate the amount of variability or disturbance in the ionosphere. With special inversion algorithms and tomographic methods, electron density profiles can also be estimated from ionograms. Although structural pictures of the ionosphere in the vertical direction can be observed from ionosonde measurements, some errors may arise due to inaccuracies in signal propagation, modeling, data processing and tomographic reconstruction algorithms. Recently the IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of the electron density profile from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate the parameters of any existing analytical function that defines electron density with respect to height, using ionogram measurement data. The process of reconstructing electron density with respect to height is known as ionogram scaling or true height analysis. The IONOLAB-RAY algorithm is a tool to investigate the propagation path and parameters of HF waves in the ionosphere. The algorithm models the wave propagation using ray representation under the geometrical optics approximation. In the algorithm, the structural ionospheric characteristics are represented as realistically as possible, including anisotropy, inhomogeneity and time dependence, in a 3-D voxel structure. The algorithm is also used for various purposes, including calculation of the actual height and generation of ionograms. In this study, the performance of the electron density reconstruction algorithm of the IONOLAB group and the standard electron density profile algorithms of ionosondes are compared with IONOLAB-RAY wave propagation simulations at near-vertical incidence. The electron density reconstruction and parameter extraction algorithms of ionosondes are validated against the IONOLAB-RAY results both for quiet and disturbed ionospheric states in Central Europe, using ionosonde stations such as Pruhonice and Juliusruh. It is observed that the IONOLAB ionosonde parameter extraction and electron density reconstruction algorithm performs significantly better than the standard algorithms, especially for disturbed ionospheric conditions. IONOLAB-RAY provides an efficient and reliable tool to investigate and validate ionosonde electron density reconstruction algorithms, especially in the determination of the reflection height (true height) of signals and the critical parameters of the ionosphere. This study is supported by TUBITAK 114E541, 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.
Zhang, Xiaoheng; Wang, Lirui; Cao, Yao; Wang, Pin; Zhang, Cheng; Yang, Liuyang; Li, Yongming; Zhang, Yanling; Cheng, Oumei
2018-02-01
Diagnosis of Parkinson's disease (PD) based on speech data has been proved to be an effective approach in recent years. However, current research focuses on feature extraction and classifier design, and does not consider instance selection. Previous research by the authors showed that instance selection can lead to improvements in classification accuracy. However, no attention has been paid to the relationship between speech samples and features until now. Therefore, a new diagnosis algorithm for PD is proposed in this paper that simultaneously selects speech samples and features, based on a relevant feature weighting algorithm and a multiple kernel method, so as to find their synergy effects and thereby improve classification accuracy. Experimental results showed that the proposed algorithm obtains an apparent improvement in classification accuracy. It can obtain a mean classification accuracy of 82.5%, which is 30.5% higher than that of the relevant algorithm. Besides, the proposed algorithm detected the synergy effects of speech samples and features, which is valuable for speech marker extraction.
Automatic facial animation parameters extraction in MPEG-4 visual communication
NASA Astrophysics Data System (ADS)
Yang, Chenggen; Gong, Wanwei; Yu, Lu
2002-01-01
Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulder object with a complex background. This paper presents an algorithm that automatically extracts all FAPs needed to animate a generic facial model and estimates the 3D motion of the head from corresponding points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit facial features and extract part of the FAPs. A special data structure is proposed to describe the deformable templates in order to reduce the time consumed in computing the energy functions. The remaining FAPs, the 3D rigid head motion vectors, are estimated by the corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of the 3D rigid object motion estimation.
Video Shot Boundary Detection Using QR-Decomposition and Gaussian Transition Detection
NASA Astrophysics Data System (ADS)
Amiri, Ali; Fathy, Mahmood
2010-12-01
This article explores the problem of video shot boundary detection and examines a novel shot boundary detection algorithm by using QR-decomposition and modeling of gradual transitions by Gaussian functions. Specifically, the authors attend to the challenges of detecting gradual shots and extracting appropriate spatiotemporal features that affect the ability of algorithms to efficiently detect shot boundaries. The algorithm utilizes the properties of QR-decomposition and extracts a block-wise probability function that illustrates the probability of video frames to be in shot transitions. The probability function has abrupt changes in hard cut transitions, and semi-Gaussian behavior in gradual transitions. The algorithm detects these transitions by analyzing the probability function. Finally, we will report the results of the experiments using large-scale test sets provided by the TRECVID 2006, which has assessments for hard cut and gradual shot boundary detection. These results confirm the high performance of the proposed algorithm.
Zhu, Bohui; Ding, Yongsheng; Hao, Kuangrong
2013-01-01
This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for the automatic diagnosis of electrocardiogram (ECG) arrhythmias. This diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms, and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias by the IEMMC algorithm. Three types of performance evaluation indicators are used to assess the effect of the IEMMC method for ECG arrhythmias: sensitivity, specificity, and accuracy. Compared with the K-means and iterSVR algorithms, the IEMMC algorithm shows better performance not only in clustering results but also in terms of global search ability and convergence, which proves its effectiveness for the detection of ECG arrhythmias.
Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas.
Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui
2017-03-29
In the pattern recognition domain, deep architectures are currently widely used and have achieved fine results. However, these deep architectures make particular demands, especially in terms of their requirements for big datasets and GPUs. Aiming to gain better results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces. Furthermore, the proposed algorithm has achieved better results than some deep architectures. For extracting more effective features, this paper first defines the salient areas on the faces. This paper normalizes the salient areas of the same location in different faces to the same size; therefore, it can extract more similar features from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fusion features is reduced by Principal Component Analysis (PCA), and we apply several classifiers to classify the six basic expressions at once. This paper proposes a salient-area definition method which uses peak expression frames compared with neutral faces. This paper also proposes and applies the idea of normalizing the salient areas to align the specific areas which express the different expressions. As a result, the salient areas found from different subjects are the same size. In addition, a gamma correction method is applied to the LBP features in our algorithm framework, which improves our recognition rates significantly. By applying this algorithm framework, our research has gained state-of-the-art performance on the CK+ database and the JAFFE database.
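A hedged sketch of the fusion-feature step follows: LBP and HOG descriptors from salient facial patches, concatenated and reduced with PCA. The patch size, descriptor settings, gamma value, and the exact placement of the gamma correction are illustrative assumptions.

```python
# LBP + HOG fusion features with PCA reduction (sketch).
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.decomposition import PCA

def fusion_features(patches, gamma=0.5):
    """patches: iterable of 2-D uint8 grayscale salient-area patches."""
    feats = []
    for p in patches:
        lbp = local_binary_pattern(p, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        lbp_hist = lbp_hist ** gamma               # gamma correction on LBP stats
        hog_vec = hog(p, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2))
        feats.append(np.concatenate([lbp_hist, hog_vec]))
    X = np.asarray(feats)
    return PCA(n_components=min(50, len(X))).fit_transform(X)
```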
NASA Astrophysics Data System (ADS)
Cao, Qiong; Gu, Lingjia; Ren, Ruizhi; Wang, Lang
2016-09-01
Building extraction is currently important in applications of high-resolution remote sensing imagery. At present, quite a few algorithms are available for detecting building information; however, most of them still have some obvious disadvantages, such as ignoring spectral information and the trade-off between extraction rate and extraction accuracy. The purpose of this research is to develop an effective method to detect building information in Chinese GF-1 data. Firstly, image preprocessing is used to normalize the image and image enhancement is used to highlight the useful information. Secondly, multi-spectral information is analyzed. Subsequently, an improved morphological building index (IMBI) based on remote sensing imagery is proposed to get the candidate building objects. Furthermore, in order to refine the building objects and further remove false objects, post-processing (e.g., shape features, the vegetation index and the water index) is employed. To validate the effectiveness of the proposed algorithm, the omission errors (OE), commission errors (CE), overall accuracy (OA) and Kappa are used. The proposed method can not only effectively use spectral information and other basic features, but also avoid extracting excessive interference details from high-resolution remote sensing images. Compared to the original MBI algorithm, the proposed method reduces the OE by 33.14%; at the same time, the Kappa increases by 16.09%. In the experiments, IMBI achieved satisfactory results and outperformed other algorithms in terms of both accuracy and visual inspection.
Human (Homo sapiens) facial attractiveness in relation to skin texture and color.
Fink, B; Grammer, K; Thornhill, R
2001-03-01
The notion that surface texture may provide important information about the geometry of visible surfaces has attracted considerable attention for a long time. The present study shows that skin texture plays a significant role in the judgment of female facial beauty. Following research in clinical dermatology, the authors developed a computer program that implemented an algorithm based on co-occurrence matrices for the analysis of facial skin texture. Homogeneity and contrast features as well as color parameters were extracted out of stimulus faces. Attractiveness ratings of the images made by male participants relate positively to parameters of skin homogeneity. The authors propose that skin texture is a cue to fertility and health. In contrast to some previous studies, the authors found that dark skin, not light skin, was rated as most attractive.
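Co-occurrence-based texture analysis of this kind is straightforward with scikit-image; the sketch below computes homogeneity and contrast from a gray-level co-occurrence matrix of a skin patch. The distances, angles, and random patch are illustrative assumptions, not the study's configuration.

```python
# GLCM homogeneity and contrast of a skin patch (sketch).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = np.random.default_rng(7).integers(0, 256, (64, 64), dtype=np.uint8)
glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
homogeneity = graycoprops(glcm, "homogeneity").mean()  # smoother skin -> higher
contrast = graycoprops(glcm, "contrast").mean()        # coarser texture -> higher
```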
Ma, Xu; Cheng, Yongmei; Hao, Shuai
2016-12-10
Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to the illumination and noise that easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture feature are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
Texture orientation-based algorithm for detecting infrared maritime targets.
Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai
2015-05-20
Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter such as ocean waves, clouds or sea fog usually has high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the inter-subband correlation between the horizontal and vertical wavelet subbands of the original IMI at the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm can suppress background clutter much better and achieve better single-frame detection of infrared maritime targets. Besides, in order to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.
Luo, Junhai; Fu, Liang
2017-06-09
With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS) collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on signal stability to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the test data to determine the position area, and the Maximum Likelihood (ML) estimate is employed for precise positioning. Finally, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy while reducing computational complexity.
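The sketch below shows one plausible realization of the offline phase (KPCA on RSS fingerprints, affinity-propagation clustering) and a simple in-cluster matching step standing in for the ML estimate. The synthetic radio map, the RBF kernel settings, and the nearest-fingerprint fine stage are assumptions for illustration.

```python
# KPCA + affinity propagation fingerprinting sketch.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(8)
rss = rng.normal(-70, 8, size=(200, 6))            # 200 fingerprints, 6 APs
positions = rng.uniform(0, 50, size=(200, 2))      # known fingerprint coords (m)

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.01).fit(rss)
features = kpca.transform(rss)
clusters = AffinityPropagation(random_state=0).fit(features)

def locate(sample):
    """Coarse stage: pick a cluster; fine stage: nearest in-cluster fingerprint."""
    z = kpca.transform(sample[None, :])
    members = clusters.labels_ == clusters.predict(z)[0]
    d = np.linalg.norm(features[members] - z, axis=1)
    return positions[members][d.argmin()]

print(locate(rss[0]))                              # should land near positions[0]
```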
Hong, Zhiling; Lin, Fan; Xiao, Bin
2017-01-01
Based on the traditional Fast Retina Keypoint (FREAK) feature description algorithm, this paper proposes a Gravity-FREAK feature description algorithm based on Micro-electromechanical Systems (MEMS) sensors to overcome the limited computing performance and memory resources of mobile devices and to further improve clients' interaction experience with digital information added to the real world by augmented reality technology. The algorithm takes the gravity projection vector corresponding to each feature point as its feature orientation, which saves the time of calculating the neighborhood gray gradient of each feature point, reduces the cost of computation and improves the accuracy of feature extraction. For the registration method of matching and tracking natural features, adaptive and generic corner detection based on the Gravity-FREAK matching purification algorithm was used to eliminate abnormal matches, and a Gravity Kanade-Lucas Tracking (KLT) algorithm based on the MEMS sensor can be used for tracking registration of targets and improving the robustness of the tracking registration algorithm in mobile environments.
Forensic surface metrology: tool mark evidence.
Gambino, Carol; McLaughlin, Patrick; Kuo, Loretta; Kammerman, Frani; Shenkin, Peter; Diaczuk, Peter; Petraco, Nicholas; Hamby, James; Petraco, Nicholas D K
2011-01-01
Over the last several decades, forensic examiners of impression evidence have come under scrutiny in the courtroom due to analysis methods that rely heavily on subjective morphological comparisons. Currently, there is no universally accepted system that generates numerical data to independently corroborate visual comparisons. Our research attempts to develop such a system for tool mark evidence, proposing a methodology that objectively evaluates the association of striated tool marks with the tools that generated them. In our study, 58 primer shear marks on 9 mm cartridge cases, fired from four Glock model 19 pistols, were collected using high-resolution white light confocal microscopy. The resulting three-dimensional surface topographies were filtered to extract all "waviness surfaces" - the essential "line" information that firearm and tool mark examiners view under a microscope. Extracted waviness profiles were processed with principal component analysis (PCA) for dimension reduction. Support vector machines (SVM) were used to make the profile-gun associations, and conformal prediction theory (CPT) was used to establish confidence levels. At the 95% confidence level, CPT coupled with PCA-SVM yielded an empirical error rate of 3.5%. Complementary bootstrap-based computations estimated error rates of 0%, indicating that the error rate for the algorithmic procedure is likely to remain low on larger data sets. Finally, suggestions are made for practical courtroom application of CPT for assigning levels of confidence to SVM identifications of tool marks recorded with confocal microscopy. Copyright © 2011 Wiley Periodicals, Inc.
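The core statistical pipeline can be sketched in a few lines: PCA-reduced waviness profiles classified by an SVM, with cross-validated accuracy shown here as a simple stand-in for the paper's conformal-prediction analysis. The profile length, component count and random data are assumptions for illustration.

```python
# PCA + SVM association of waviness profiles to source firearms (sketch).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
profiles = rng.normal(size=(58, 2000))             # one waviness profile per mark
gun_id = rng.integers(0, 4, 58)                    # four source firearms

clf = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
print(cross_val_score(clf, profiles, gun_id, cv=5).mean())
```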
4 Vesta in Color: High Resolution Mapping from Dawn Framing Camera Images
NASA Technical Reports Server (NTRS)
Reddy, V.; LeCorre, L.; Nathues, A.; Sierks, H.; Christensen, U.; Hoffmann, M.; Schroeder, S. E.; Vincent, J. B.; McSween, H. Y.; Denevi, B. W.;
2011-01-01
Rotational surface variations on asteroid 4 Vesta have been known from ground-based and HST observations, and they have been interpreted as evidence of compositional diversity. NASA's Dawn mission entered orbit around Vesta on July 16, 2011 for a year-long global characterization. The framing cameras (FC) onboard the Dawn spacecraft image the asteroid in one clear (broad) and seven narrow-band filters covering the wavelength range between 0.4-1.0 microns. We present color mapping results from the Dawn FC observations of Vesta obtained during the Survey orbit (approx. 3000 km) and the High-Altitude Mapping Orbit (HAMO) (approx. 950 km). Our aim is to create global color maps of Vesta using multispectral FC images to identify the spatial extent of compositional units and link them with other available data sets to extract the basic mineralogy. While the VIR spectrometer onboard Dawn has higher spectral resolution (864 channels), allowing precise mineralogical assessment of Vesta's surface, the FC has three times higher spatial resolution in any given orbital phase. In an effort to extract maximum information from the FC data we have developed algorithms using laboratory spectra of pyroxenes and HED meteorites to derive parameters associated with the 1-micron absorption band wing. These parameters will help map the global distribution of compositionally related units on Vesta's surface. Interpretation of these units will involve the integration of FC and VIR data.
Techniques of Acceleration for Association Rule Induction with Pseudo Artificial Life Algorithm
NASA Astrophysics Data System (ADS)
Kanakubo, Masaaki; Hagiwara, Masafumi
Frequent pattern mining is one of the important problems in data mining. Generally, the number of potential rules grows rapidly as the size of the database increases. It is therefore hard for a user to extract the association rules. To avoid this difficulty, we propose a new method for association rule induction with a pseudo artificial life approach. The proposed method decides whether there exists an item set containing N or more items in two transactions. If one exists, the item sets contained in that pair of transactions are recorded. Iterating this step yields the extracted association rules, without the need to compute a huge number of candidate rules. In an evaluation test, we compared the association rules extracted by our method with the rules extracted by other algorithms such as the Apriori algorithm. On a large retail market basket data set, our method was approximately 10 to 20 times faster than the Apriori algorithm and many of its variants.
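The sampling idea can be sketched as below: repeatedly pick two random transactions and record their common itemset when it has at least N items, instead of enumerating Apriori-style candidates. The iteration count, minimum size and toy baskets are illustrative assumptions.

```python
# Pairwise transaction sampling sketch for association-rule candidates.
import random

def sample_common_itemsets(transactions, n_min=2, n_iter=10000, seed=11):
    rnd = random.Random(seed)
    found = {}
    for _ in range(n_iter):
        t1, t2 = rnd.sample(transactions, 2)
        common = frozenset(t1) & frozenset(t2)
        if len(common) >= n_min:                   # itemset shared by both
            found[common] = found.get(common, 0) + 1
    return found  # frequently re-sampled itemsets suggest strong rules

baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"beer", "chips"},
           {"milk", "eggs"}, {"bread", "eggs", "milk"}]
print(sample_common_itemsets(baskets))
```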
A Survey on the Feasibility of Sound Classification on Wireless Sensor Nodes
Salomons, Etto L.; Havinga, Paul J. M.
2015-01-01
Wireless sensor networks are suitable for gaining context awareness in indoor environments. As sound waves form a rich source of context information, equipping the nodes with microphones can be of great benefit. The algorithms for extracting features from sound waves are often highly computationally intensive. This can be problematic, as wireless nodes are usually restricted in resources. In order to be able to make a proper decision about which features to use, we survey how sound is used in the literature for global sound classification, age and gender classification, emotion recognition, person verification and identification, and indoor and outdoor environmental sound classification. The results of the surveyed algorithms are compared with respect to accuracy and computational load. The accuracies are taken from the surveyed papers; the computational loads are determined by benchmarking the algorithms on an actual sensor node. We conclude that for indoor context awareness, the low-cost algorithms for feature extraction perform as well as the more computationally intensive variants. As feature extraction still requires a large amount of processing time, we present four possible strategies to deal with this problem.
NASA Technical Reports Server (NTRS)
Jentz, R. R.; Wackerman, C. C.; Shuchman, R. A.; Onstott, R. G.; Gloersen, Per; Cavalieri, Don; Ramseier, Rene; Rubinstein, Irene; Comiso, Joey; Hollinger, James
1991-01-01
Previous research studies have focused on producing algorithms for extracting geophysical information from passive microwave data regarding ice floe size, sea ice concentration, open water lead locations, and sea ice extent. These studies have resulted in four separate algorithms for extracting these geophysical parameters. Sea ice concentration estimates generated from each of these algorithms (i.e., NASA/Team, NASA/Comiso, AES/York, and Navy) are compared to ice concentration estimates produced from coincident high-resolution synthetic aperture radar (SAR) data. The SAR concentration estimates are produced from data collected in both the Beaufort Sea and the Greenland Sea in March 1988 and March 1989, respectively. The SAR data are coincident to the passive microwave data generated by the Special Sensor Microwave/Imager (SSM/I).
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performance of the involved mobile platforms: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S
2018-02-01
Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach, adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high-inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes and is entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need information from an atlas, which is not available for T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorate voxel intensity structures, making segmentation tasks difficult. The proposed method overcomes all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. This work should lead to improved automatic segmentation of brain lesions in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical multiple sclerosis need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.
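A minimal sketch of a threshold-with-morphology extraction of the kind described for the first method is shown below: Otsu threshold, morphological opening, and largest-connected-component selection. The footprint size is an assumption, and this omits the paper's adaptation to incomplete volumes and inhomogeneity handling.

```python
# Threshold + morphology brain extraction sketch for a 3-D MRI volume.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, ball
from skimage.measure import label

def extract_brain(volume):
    """volume: 3-D float array of voxel intensities; returns a brain mask."""
    mask = volume > threshold_otsu(volume)
    mask = binary_opening(mask, ball(2))           # detach skull/scalp bridges
    lab = label(mask)
    sizes = np.bincount(lab.ravel())
    sizes[0] = 0                                   # ignore background
    return lab == sizes.argmax()                   # keep largest component
```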
Randomized Hough transform filter for echo extraction in DLR
NASA Astrophysics Data System (ADS)
Liu, Tong; Chen, Hao; Shen, Ming; Gao, Pengqi; Zhao, You
2016-11-01
The signal-to-noise ratio (SNR) of debris laser ranging (DLR) data is extremely low, and the valid returns in the DLR range residuals are distributed along a curve over a long observation time. It is therefore hard to extract the signals from noise in the Observed-minus-Calculated (O-C) residuals at low SNR. In order to extract the valid returns autonomously, we propose a new algorithm based on the randomized Hough transform (RHT). We first pre-process the data using a histogram method to find the zonal area that contains all the possible signals, which removes a large amount of noise. Then the data are processed with the RHT algorithm to find the curve along which the signal points are distributed. A new parameter update strategy is introduced into the RHT to obtain the best parameters, and we analyze the values of the parameters in the algorithm. We test our algorithm on 10 Hz repetition rate DLR data from Yunnan Observatory and 100 Hz repetition rate DLR data from the Graz SLR station. For the 10 Hz DLR data, with a relatively large and similar range gate, we can process the data in real time and extract all the signals autonomously with few false readings. For the 100 Hz DLR data, with longer observation times, we autonomously post-process DLR data with 0.9%, 2.7%, 8% and 33% return rates with high reliability. The extracted points contain almost all the signals and a low percentage of noise. Additional noise was added to the 10 Hz DLR data to obtain lower return rate data; the valid returns can also be well extracted for DLR data with 0.18% and 0.1% return rates.
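A hedged RHT sketch for the O-C residual curve follows: repeatedly sample three residual points, fit a quadratic trend through them, and accumulate votes for similar parameter triples; the best-voted curve then selects the valid returns. The quadratic model, parameter quantization and tolerance are illustrative assumptions, not the authors' settings.

```python
# Randomized Hough transform sketch for a curve in O-C residuals.
import numpy as np
from collections import Counter

def rht_curve(t, r, n_iter=5000, tol=1.0, seed=10):
    """t: epochs; r: O-C range residuals. Returns curve coeffs and inlier mask."""
    rng = np.random.default_rng(seed)
    votes = Counter()
    for _ in range(n_iter):
        idx = rng.choice(len(t), 3, replace=False)
        coef = np.polyfit(t[idx], r[idx], 2)       # quadratic through 3 points
        votes[tuple(np.round(coef, 3))] += 1       # quantize to accumulate votes
    best = np.array(max(votes, key=votes.get))
    inliers = np.abs(np.polyval(best, t) - r) < tol
    return best, inliers
```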
NASA Technical Reports Server (NTRS)
Gao, Feng; DeColstoun, Eric Brown; Ma, Ronghua; Weng, Qihao; Masek, Jeffrey G.; Chen, Jin; Pan, Yaozhong; Song, Conghe
2012-01-01
Cities have been expanding rapidly worldwide, especially over the past few decades. Mapping the dynamic expansion of impervious surface in both space and time is essential for an improved understanding of the urbanization process, land-cover and land-use change, and their impacts on the environment. Landsat and other medium-resolution satellites provide the necessary spatial details and temporal frequency for mapping impervious surface expansion over the past four decades. Since the US Geological Survey opened the historical record of the Landsat image archive for free access in 2008, the decades-old bottleneck of data limitation is gone. Remote-sensing scientists are now rich with data, and the challenge is how to make the best use of this precious resource. In this article, we develop an efficient algorithm to map the continuous expansion of impervious surface using a time series of four decades of medium-resolution satellite images. The algorithm is based on a supervised classification of the time-series image stack using a decision tree. Each impervious class represents urbanization starting in a different image. The algorithm also allows us to remove inconsistent training samples, because impervious expansion is not reversible during the study period. The objective is to extract a time series of complete and consistent impervious surface maps from a corresponding time series of images collected from multiple sensors, with a minimal amount of image preprocessing effort. The approach was tested in the lower Yangtze River Delta region, one of the fastest urban growth areas in China. Results from nearly four decades of medium-resolution satellite data from the Landsat Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+) and China-Brazil Earth Resources Satellite (CBERS) show an urbanization process that is consistent with economic development plans and policies. The time-series impervious spatial extent maps derived from this study agree well with an existing urban extent polygon data set that was previously developed independently. The overall mapping accuracy was estimated at about 92.5%, with 3% commission error and 12% omission error for the impervious type across all images regardless of image quality and initial spatial resolution.
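A minimal sketch of the core classification step - a supervised decision tree applied to a per-pixel stack of multi-date observations, where each class label encodes the date at which a pixel first becomes impervious - is shown below using scikit-learn. The array shapes, training labels, and tree depth are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# stack: (n_dates, height, width) spectral-index observations over time.
# All values below are synthetic stand-ins for the Landsat/CBERS stack.
n_dates, height, width = 10, 100, 100
stack = np.random.rand(n_dates, height, width)
X = stack.reshape(n_dates, -1).T          # one row per pixel, one column per date

# Training labels: 0 = never impervious, k = first impervious in image k.
# In the study these come from training sites, cleaned for irreversibility.
train_idx = np.random.choice(height * width, 500, replace=False)
y_train = np.random.randint(0, n_dates + 1, 500)

tree = DecisionTreeClassifier(max_depth=8).fit(X[train_idx], y_train)
change_date = tree.predict(X).reshape(height, width)
```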
Phase shifting diffraction interferometer
Sommargren, Gary E.
1996-01-01
An interferometer which has the capability of measuring optical elements and systems with an accuracy of λ/1000, where λ is the wavelength of visible light. Whereas current interferometers employ a reference surface, which inherently limits the accuracy of the measurement to about λ/50, this interferometer uses an essentially perfect spherical reference wavefront generated by the fundamental process of diffraction. This interferometer is adjustable to give unity fringe visibility, which maximizes the signal-to-noise ratio, and has the means to introduce a controlled prescribed relative phase shift between the reference wavefront and the wavefront from the optics under test, which permits analysis of the interference fringe pattern using standard phase extraction algorithms.
NASA Technical Reports Server (NTRS)
Liu, Zhong; Heo, Gil
2015-01-01
Data quality (DQ) has many attributes or facets (e.g., errors, biases, systematic differences, uncertainties, benchmarks, false trends, false alarm ratio, etc.). The sources of DQ issues can be complicated (measurements, environmental conditions, surface types, algorithms, etc.) and difficult to identify, especially for multi-sensor and multi-satellite products with bias correction (TMPA, IMERG, etc.). Open questions include how to obtain DQ information quickly and easily, especially quantified information in a region of interest, beyond existing parameters (random error), the literature, and do-it-yourself analyses, and how to apply that knowledge in research and applications. Here, we focus on online systems for integration of products and parameters, visualization and analysis, as well as investigation and extraction of DQ information.
Automated detection of new impact sites on Martian surface from HiRISE images
NASA Astrophysics Data System (ADS)
Xin, Xin; Di, Kaichang; Wang, Yexin; Wan, Wenhui; Yue, Zongyu
2017-10-01
In this study, an automated method for detecting new Martian impact sites from single images is presented. It first extracts dark areas in the full high-resolution image, then detects new impact craters within the dark areas using a cascade classifier that combines local binary pattern features and Haar-like features, trained with an AdaBoost machine learning algorithm. Experimental results on 100 HiRISE images show that the overall detection rate of the proposed method is 84.5%, with a true positive rate of 86.9%. The detection rate and true positive rate in flat regions are 93.0% and 91.5%, respectively.
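The detection side of such a pipeline can be sketched with OpenCV's cascade classifier, which supports LBP- and Haar-trained models. The cascade file and threshold below are hypothetical placeholders; training the cascade itself (e.g., with opencv_traincascade on labelled crater patches) happens offline and is not shown.

```python
import cv2

# 'crater_lbp_cascade.xml' is a hypothetical model trained offline (e.g., with
# opencv_traincascade) on labelled crater / non-crater patches.
cascade = cv2.CascadeClassifier('crater_lbp_cascade.xml')
image = cv2.imread('hirise_tile.png', cv2.IMREAD_GRAYSCALE)

# Restrict the search to dark areas first, as in the paper: keep a mask of
# low-intensity pixels (the grey-level cut of 60 is an illustrative value).
_, dark_mask = cv2.threshold(image, 60, 255, cv2.THRESH_BINARY_INV)
masked = cv2.bitwise_and(image, image, mask=dark_mask)

# Multi-scale sliding-window detection inside the dark regions.
hits = cascade.detectMultiScale(masked, scaleFactor=1.1, minNeighbors=4,
                                minSize=(16, 16))
for (x, y, w, h) in hits:
    cv2.rectangle(image, (x, y), (x + w, y + h), 255, 1)
```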
NASA Astrophysics Data System (ADS)
Wright, N.; Polashenski, C. M.
2017-12-01
Snow, ice, and melt ponds cover the surface of the Arctic Ocean in fractions that change throughout the seasons. These surfaces exert tremendous influence over the energy balance of the Arctic Ocean by controlling the absorption of solar radiation. Here we demonstrate the use of a newly released, open-source image classification algorithm designed to identify surface features in high-resolution optical satellite imagery of sea ice. By explicitly resolving individual features on the surface, the algorithm can determine the percentage of ice that is covered by melt ponds with a high degree of certainty. We then compare observations of melt pond fraction extracted from these images with an established method of estimating melt pond fraction from medium-resolution satellite images (e.g., MODIS). Because high-resolution satellite imagery does not provide the spatial footprint needed to examine the entire Arctic basin, we propose a method of synthesizing both high- and medium-resolution satellite imagery for an improved determination of melt pond fraction across the whole Arctic. We assess the historical trends of melt pond fraction in the Arctic Ocean and address the question: is pond coverage changing in response to changing ice conditions? Furthermore, we explore the image area that must be observed in order to obtain a locally representative sample (i.e., the aggregate scale), and show that it is possible to determine accurate estimates of melt pond fraction by observing sample areas significantly smaller than the typical footprint of high-resolution satellite imagery.
Hou, Ying-Yu; He, Yan-Bo; Wang, Jian-Lin; Tian, Guo-Liang
2009-10-01
Based on the time-series 10-day composite NOAA Pathfinder AVHRR Land (PAL) dataset (8 km x 8 km), and by using the land surface energy balance equation and the "VI-Ts" (vegetation index-land surface temperature) method, a new algorithm for land surface evapotranspiration (ET) was constructed. This new algorithm does not need support from meteorological observation data; all of its parameters and variables are directly inverted or derived from remote sensing data. A widely accepted remote sensing ET model, i.e., the SEBS model, was chosen to validate the new algorithm. The validation test showed that the ET and its seasonal variation trend estimated by the SEBS model and by the new algorithm accorded well, suggesting that the ET estimated by the new algorithm is reliable and able to reflect actual land surface ET. The new remote sensing ET algorithm is practical and operational, offering a new approach to studying the spatiotemporal variation of ET at continental and global scales based on long-term time series of satellite remote sensing images.
Stereo reconstruction from multiperspective panoramas.
Li, Yin; Shum, Heung-Yeung; Tang, Chi-Keung; Szeliski, Richard
2004-01-01
A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images and, thus, the problems that arise in conventional multibaseline stereo can be avoided. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to a first-order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximate horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multibaseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that both algorithms are capable of producing comparably high-quality depth maps, which can be used for applications such as view interpolation.
Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia
2014-01-01
To meet forecasting targets for key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual texture features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. The isometric mapping method is then used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy. PMID:25133210
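For the texture part of the feature set, gray-level co-occurrence features can be computed with scikit-image as sketched below (in older scikit-image releases the functions are spelled greycomatrix/greycoprops). The offsets, angles, and the random stand-in image are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# froth: an 8-bit grayscale froth image patch (random stand-in here).
froth = np.random.randint(0, 256, (128, 128), dtype=np.uint8)

# Co-occurrence matrices at two distances and two angles, reduced to the
# scalar texture descriptors used as soft-sensor inputs.
glcm = graycomatrix(froth, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
texture = [graycoprops(glcm, prop).mean()
           for prop in ('contrast', 'homogeneity', 'energy', 'correlation')]
```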
Fluctuating snow line altitudes in the Hunza basin (Karakoram) using Landsat OLI imagery
NASA Astrophysics Data System (ADS)
Racoviteanu, Adina; Rittger, Karl; Brodzik, Mary J.; Painter, Thomas H.; Armstrong, Richard
2016-04-01
Snowline altitudes (SLAs) on glacier surfaces are needed for separating snow and ice as input for melt models. When measured at the end of the ablation season, SLAs are used for inferring stable-state glacier equilibrium line altitudes (ELAs). Direct measurements of snowlines are rarely possible, particularly in remote, high-altitude glacierized terrain, but remote sensing data can be used to separate these snow and ice surfaces. Snow lines are commonly visible on optical satellite images acquired at the end of the ablation season if the images have enough contrast, and they are manually digitized on screen using various satellite band combinations for visual interpretation, which is a time-consuming, subjective process. Here we use Landsat OLI imagery at 30 m resolution to estimate glacier SLAs for a subset of the Hunza basin in the Upper Indus in the Karakoram. Clean glacier ice surfaces are delineated using a standardized semi-automated band ratio algorithm with image segmentation. Within the glacier surface, snow and ice are separated using supervised classification schemes based on regions of interest, and glacier SLAs are extracted on the basis of these areas. SLAs are compared with estimates from a new automated method that relies on fractional snow covered area (fSCA) rather than on band ratio algorithms for delineating clean glacier ice surfaces, and on grain size (instead of supervised classification) for separating snow from glacier ice on the glacier surface. The two methods produce comparable snow/ice outputs. The fSCA-derived glacierized areas are slightly larger than the band ratio estimates. Some of the additional area is the result of better detection in shadows from spectral mixture analysis (true positives), while the rest is shallow water, which is spectrally similar to snow/ice (false positives). On the glacier surface, thresholding the snow grain size image (grain size > 500 μm) yields glacier ice areas similar to those derived from the supervised classification, but there is noise (snow) on the edges of dirty ice/moraines at the glacier termini and around rock outcrops on the glacier surface. Neither of the two methods distinguishes debris-covered ice, so it was mapped separately using a combination of topographic indices (slope, terrain curvature) along with remote sensing surface temperature and texture data. Using the average elevation of snow and ice areas, we calculate an ELA of 5260 m for 2013. We construct yearly time series of ELAs around the centerlines of selected glaciers in the Hunza for the period 2000-2014 using Landsat imagery. We explore spatial trends in glacier ELAs within the region, as well as relationships between ELA and topographic characteristics extracted on a glacier-by-glacier basis from a digital elevation model.
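A schematic of the two thresholding steps discussed above - a NIR/SWIR band ratio for the ice-and-snow mask and a grain-size cut for snow versus ice - might look like the following; both threshold values are illustrative assumptions, not the study's calibrated settings.

```python
import numpy as np

def glacier_masks(nir, swir, grain_size, ratio_thresh=2.0, grain_thresh=500.0):
    """Illustrative band-ratio ice mask plus grain-size snow/ice split.

    nir, swir: reflectance bands (e.g., Landsat OLI bands 5 and 6);
    grain_size: retrieved snow grain size in micrometres. Both thresholds
    are assumed demonstration values, not calibrated ones.
    """
    ice_and_snow = nir / np.clip(swir, 1e-6, None) > ratio_thresh
    snow = ice_and_snow & (grain_size <= grain_thresh)   # fine grains: snow
    ice = ice_and_snow & (grain_size > grain_thresh)     # coarse: glacier ice
    return snow, ice
```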
Fast segmentation of satellite images using SLIC, WebGL and Google Earth Engine
NASA Astrophysics Data System (ADS)
Donchyts, Gennadii; Baart, Fedor; Gorelick, Noel; Eisemann, Elmar; van de Giesen, Nick
2017-04-01
Google Earth Engine (GEE) is a parallel geospatial processing platform that harmonizes access to petabytes of freely available satellite images. It provides a very rich API, allowing the development of dedicated algorithms to extract useful geospatial information from these images. At the same time, modern GPUs provide thousands of computing cores, which are mostly not utilized in this context. In recent years, WebGL has become a popular and well-supported API allowing fast image processing directly in web browsers. In this work, we evaluate the applicability of WebGL for fast segmentation of satellite images. A new implementation of the Simple Linear Iterative Clustering (SLIC) algorithm using GPU shaders is presented. SLIC is a simple and efficient method to decompose an image into visually homogeneous regions; it adapts a k-means clustering approach to generate superpixels efficiently. While this approach will be hard to scale, due to the significant amount of data that must be transferred to the client, it should significantly improve exploratory possibilities and simplify the development of dedicated algorithms for geoscience applications. Our prototype implementation will be used to improve surface water detection of reservoirs using multispectral satellite imagery.
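Although the paper's contribution is a GPU shader implementation, the SLIC decomposition itself can be prototyped on the CPU with scikit-image, as in this sketch; the segment count and compactness are illustrative choices.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

# chip: a small multispectral chip reduced to 3 bands and scaled to [0, 1].
chip = np.random.rand(256, 256, 3)
labels = slic(chip, n_segments=400, compactness=10.0, start_label=1)

# Per-superpixel mean intensity, e.g. as input to a water/land classifier.
stats = regionprops(labels, intensity_image=chip.mean(axis=2))
means = [region.mean_intensity for region in stats]
```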
Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.
Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi
2018-03-24
In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature extracted from aerial images is used to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification with the digital surface models of the two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.
Sub-pixel mineral mapping using EO-1 Hyperion hyperspectral data
NASA Astrophysics Data System (ADS)
Kumar, C.; Shetty, A.; Raval, S.; Champatiray, P. K.; Sharma, R.
2014-11-01
This study describes the utility of Earth Observation (EO)-1 Hyperion data for sub-pixel mineral investigation using the Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm in the hostile mountainous terrain of Rajsamand district, Rajasthan, which hosts economic mineralization such as lead, zinc, and copper. The study encompasses pre-processing, data reduction, Pixel Purity Index (PPI) computation, and endmember extraction from the reflectance image for surface minerals such as illite, montmorillonite, phlogopite, dolomite, and chlorite. These endmembers were then assessed against the USGS mineral spectral library and lab spectra of rock samples collected from the field for spectral inspection. Subsequently, the MTTCIMF algorithm was applied to the processed image to obtain a distribution map of each detected mineral. A virtual verification method, which uses image information directly to evaluate the result, was adopted to assess the classified image, yielding an overall accuracy of 68% and a kappa coefficient of 0.6. Sub-pixel mineral information with reasonable accuracy can be a valuable guide for the geological and exploration community before committing to expensive ground and/or lab experiments to discover economic deposits. The study thus demonstrates the feasibility of Hyperion data for sub-pixel mineral mapping using the MTTCIMF algorithm in a cost- and time-effective way.
NASA Astrophysics Data System (ADS)
Ren, Zhi Ying; Gao, ChengHui; Han, GuoQiang; Ding, Shen; Lin, JianXing
2014-04-01
The dual-tree complex wavelet transform (DT-CWT) offers shift invariance, directional selectivity, perfect reconstruction (PR), and limited redundancy, and can effectively separate various surface components. However, at the nanoscale the morphology contains pits and convexities and is more complex to characterize. This paper presents an improved approach that can simultaneously separate the reference and waviness components while keeping the image robust against abnormal signals: a bilateral filtering (BF) stage is included in the DT-CWT to address these imaging problems. To verify the feasibility of the new method and test its performance, we ran computer simulations comparing three wavelet generations with the improved DT-CWT, and we conducted two case studies. The results show that the improved DT-CWT not only makes the filtering more robust under abnormal interference, but also extracts the reference and waviness accurately and reliably from 3-D nanoscale surfaces.
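A rough sketch of the BF-plus-DT-CWT combination, using OpenCV's bilateral filter and the open-source dtcwt package as stand-ins for the paper's implementation, is given below. The filter settings, level count, and random surface are assumptions, and reconstructing only the lowpass band is a simplification of the paper's reference/waviness separation.

```python
import cv2
import numpy as np
import dtcwt

# surface: a measured height map (float32, arbitrary units; random here).
surface = np.random.rand(256, 256).astype(np.float32)

# Bilateral pre-filtering suppresses spike-like outliers while preserving
# edges; d, sigmaColor, and sigmaSpace are illustrative settings.
smoothed = cv2.bilateralFilter(surface, d=9, sigmaColor=0.1, sigmaSpace=5)

# Dual-tree complex wavelet decomposition: the lowpass band approximates the
# reference/waviness components, the highpass bands hold finer detail.
transform = dtcwt.Transform2d()
pyramid = transform.forward(smoothed, nlevels=4)
lowpass_only = dtcwt.Pyramid(
    pyramid.lowpass, tuple(np.zeros_like(h) for h in pyramid.highpasses))
reference = transform.inverse(lowpass_only)
```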
NASA Technical Reports Server (NTRS)
Zhu, Zhifan; Gridnev, Sergei; Windhorst, Robert D.
2015-01-01
This User Guide describes the SOSS (Surface Operations Simulator and Scheduler) software build and graphical user interface. SOSS is a desktop application that simulates airport surface operations in fast time using traffic management algorithms. It moves aircraft on the airport surface based on information provided by scheduling algorithm prototypes, monitors separation violations and scheduling conformance, and produces scheduling algorithm performance data.
Lining seam elimination algorithm and surface crack detection in concrete tunnel lining
NASA Astrophysics Data System (ADS)
Qu, Zhong; Bai, Ling; An, Shi-Quan; Ju, Fang-Rong; Liu, Ling
2016-11-01
Due to the particular nature of the surface of concrete tunnel lining and the diversity of detection environments, such as uneven illumination, smudges, localized rock falls, water leakage, and the inherent seams of the lining structure, existing crack detection algorithms cannot detect real cracks accurately. This paper proposes an algorithm that combines lining seam elimination with an improved percolation detection algorithm based on grid cell analysis for surface crack detection in concrete tunnel lining. First, the characteristics of pixels within overlapping grid cells are checked to remove background noise and generate the percolation seed map (PSM). Second, cracks are detected from the PSM by an accelerated percolation algorithm, so that fracture unit areas can be scanned and connected. Finally, the real surface cracks in the concrete tunnel lining are obtained by removing the lining seams and performing percolation denoising. Experimental results show that the proposed algorithm can detect real surface cracks accurately, quickly, and effectively. Furthermore, it fills a gap in existing concrete tunnel lining surface crack detection by removing the lining seams.
Method for Location of An External Dump in Surface Mining Using the A-Star Algorithm
NASA Astrophysics Data System (ADS)
Zajączkowski, Maciej; Kasztelewicz, Zbigniew; Sikora, Mateusz
2014-10-01
The construction of a surface mine always involves accessing deposits by removing the overburden above them. In the initial phase of exploitation, the masses of overburden are located outside the perimeter of the excavation site, on an external dump, until internal dumping becomes possible. In the case of lignite surface mines, these dumps can cover a ground surface of several dozen to a few thousand hectares. This results from the high concentration of lignite extraction, counted in millions of Mg per year, and the relatively large depth of the deposits. Determining the best location for an external dump requires a detailed analysis of existing options, followed by a choice of the most favorable one. This article, using the case study of an open-cast lignite mine, presents a selection method for an external dump location based on graph theory and the A-star algorithm. The algorithm, based on the spatial distribution of individual intersections on the graph, searches specified graph states, continually expanding them with additional elementary fields until the required surface area for the external dump - defined by the lowest value of the occupied site - is achieved. To do this, it is necessary to accurately identify the factors affecting the choice of dump location. On that basis, it is then possible to specify the target function, which reflects the individual costs of dump construction on a given site; this is discussed further in chapter 3. The area of potential dump locations is divided into elementary fields, each represented by a corresponding geometrical locus. Ascribed to each locus, in addition to its geodesic coordinates, are attributes reflecting the degree of development of its elementary field. These tasks can be carried out automatically thanks to the integration of the method with the geospatial data management system for the given area. The collection of loci, together with their geodesic coordinates, constitutes the points of the graph explored by the A-star algorithm, which uses a heuristic function to identify the optimal solution: the collection of elementary fields occupying a potential dump construction area that is characterized by the lowest total cost of site occupation and overburden dumping (a sketch of the A-star kernel follows below). The precision of the boundary generated by the algorithm depends on the chosen size of the elementary field, and should be refined each time by the designer of the surface mine. This article presents the application of the above method using the example of "Tomisławice," a lignite surface mine owned by PAK KWB Konin S.A. The method made it possible to identify the most favorable dump location on the northeast side of the initial pit, within 2 kilometers of its surrounding area (discussed further in chapter 3). The method is universal in nature and, after certain modifications, can be implemented for other surface mines as well.
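The A-star kernel underlying such a search can be sketched as follows for a grid of per-cell occupation costs; this is the textbook shortest-path form with a Manhattan heuristic (admissible when every cell costs at least one unit), not the paper's region-growing variant over elementary fields.

```python
import heapq

def a_star(cost, start, goal):
    """Textbook A* over a 4-connected grid of per-cell occupation costs.

    cost[r][c] >= 1 is the cost of adding cell (r, c) to the footprint, so
    the Manhattan distance to the goal is an admissible heuristic.
    """
    rows, cols = len(cost), len(cost[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0, start, [start])]   # (f, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and nxt not in visited:
                ng = g + cost[nxt[0]][nxt[1]]
                heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None, float('inf')

# Toy cost grid: higher numbers model more developed (costlier) land.
grid = [[1, 1, 5, 1],
        [1, 9, 5, 1],
        [1, 1, 1, 1]]
print(a_star(grid, (0, 0), (2, 3)))
```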
Scene text recognition in mobile applications by character descriptor and structure configuration.
Yi, Chucai; Tian, Yingli
2014-07-01
Text characters and strings in natural scenes can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and variant background interference. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from a scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure at each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction on smart mobile devices. An Android-based demo system was developed to show the effectiveness of our proposed method for extracting scene text information from nearby objects. The demo system also provides insight into algorithm design and performance improvement for scene text extraction. Evaluation results on benchmark data sets demonstrate that our proposed scheme for text recognition is comparable with the best existing methods.
Decorating surfaces with bidirectional texture functions.
Zhou, Kun; Du, Peng; Wang, Lifeng; Matsushita, Yasuyuki; Shi, Jiaoying; Guo, Baining; Shum, Heung-Yeung
2005-01-01
We present a system for decorating arbitrary surfaces with bidirectional texture functions (BTF). Our system generates BTFs in two steps. First, we automatically synthesize a BTF over the target surface from a given BTF sample. Then, we let the user interactively paint BTF patches onto the surface such that the painted patches seamlessly integrate with the background patterns. Our system is based on a patch-based texture synthesis approach known as quilting. We present a graphcut algorithm for BTF synthesis on surfaces; the algorithm works well for a wide variety of BTF samples, including those which present problems for existing algorithms. We also describe a graphcut texture painting algorithm for creating new surface imperfections (e.g., dirt, cracks, scratches) from existing imperfections found in input BTF samples. Using these algorithms, we can decorate surfaces with real-world textures that have spatially variant reflectance, fine-scale geometry details, and surface imperfections. A particularly attractive feature of BTF painting is that it allows us to capture imperfections of real materials and paint them onto geometry models. We demonstrate the effectiveness of our system with examples.
NASA Astrophysics Data System (ADS)
Zhang, Hai; Kondragunta, Shobha; Laszlo, Istvan; Liu, Hongqing; Remer, Lorraine A.; Huang, Jingfeng; Superczynski, Stephen; Ciren, Pubu
2016-09-01
The Visible/Infrared Imager Radiometer Suite (VIIRS) on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite has been retrieving aerosol optical thickness (AOT), operationally and globally, over ocean and land since shortly after S-NPP launch in 2011. However, the current operational VIIRS AOT retrieval algorithm over land has two limitations in its assumptions for land surfaces: (1) it only retrieves AOT over dark surfaces, and (2) it assumes that the global surface reflectance ratios between VIIRS bands are constants. In this work, we develop a surface reflectance ratio database over land with a spatial resolution of 0.1° × 0.1° using 2 years of VIIRS top-of-atmosphere reflectances. We enhance the current operational VIIRS AOT retrieval algorithm by applying the surface reflectance ratio database in the algorithm. The enhanced algorithm is able to retrieve AOT over both dark and bright surfaces. Over bright surfaces, the VIIRS AOT retrievals from the enhanced algorithm have a correlation of 0.79, a mean bias of -0.008, and a standard deviation (STD) of error of 0.139 when compared against ground-based observations at the global AERONET (Aerosol Robotic Network) sites. Over dark surfaces, the VIIRS AOT retrievals using the surface reflectance ratio database improve the root-mean-square error from 0.150 to 0.123. The use of the surface reflectance ratio database also increases the data coverage by more than 20% over dark surfaces. The AOT retrievals over bright surfaces are comparable to MODIS Deep Blue AOT retrievals.
Finding topological center of a geographic space via road network
NASA Astrophysics Data System (ADS)
Gao, Liang; Miao, Yanan; Qin, Yuhao; Zhao, Xiaomei; Gao, Zi-You
2015-02-01
Previous studies show that the center of a geographic space is of great importance in urban and regional studies, including studies of population distribution, urban growth modeling, and scaling properties of urban systems. But how to properly define and efficiently extract the center of a geographic space remains largely unknown. Recently, Jiang et al. presented a definition of the topological center together with a block detection (BD) algorithm. Although they first introduced the definition and discovered the 'true center' in human minds, their algorithm leaves several redundancies in its traversal process. Here, we propose an alternative road-cycle detection (RCD) algorithm to find the topological center, which extracts the outermost road cycle recursively. To foster the application of the topological center in related research fields, we first reproduce the BD algorithm in Python (pyBD), then implement the RCD algorithm in two ways: an ArcPy implementation (arcRCD) and a pure Python implementation (pyRCD). After experiments on twenty-four typical road networks, we find that the results of our RCD algorithm are consistent with those of Jiang's BD algorithm. We also find that the RCD algorithm is at least seven times more efficient than the BD algorithm on all of the typical road networks tested.
Preparing near-surface heavy oil for extraction using microbial degradation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busche, Frederick D.; Rollins, John B.; Noyes, Harold J.
In one embodiment, the invention provides a system including at least one computing device for enhancing the recovery of heavy oil in an underground, near-surface crude oil extraction environment by performing a method comprising sampling and identifying microbial species (bacteria and/or fungi) that reside in the underground, near-surface crude oil extraction environment; collecting rock and fluid property data from the underground, near-surface crude oil extraction environment; collecting nutrient data from the underground, near-surface crude oil extraction environment; identifying a preferred microbial species from the underground, near-surface crude oil extraction environment that can transform the heavy oil into a lighter oil; identifying a nutrient from the underground, near-surface crude oil extraction environment that promotes a proliferation of the preferred microbial species; and introducing the nutrient into the underground, near-surface crude oil extraction environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Ahmad, S; Alsbou, N
Purpose: To develop a 4D cone-beam CT (CBCT) algorithm based on motion modeling that extracts the actual length, CT number level, and motion amplitude of a mobile target retrospectively from reconstructed images. Methods: The algorithm used three measurable parameters - the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images - to determine the actual length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different well-known sizes made from tissue-equivalent gel inserted into a thorax phantom. The phantom moved sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging from 0 to 20 mm. Results: Using this 4D-CBCT algorithm, three unknown parameters were extracted retrospective to image reconstruction: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solved for the three unknown parameters using the measurable apparent length, CT number level, and gradient of a well-defined mobile target obtained from CBCT images. The motion model agreed with the measured apparent lengths, which depend on the actual target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the actual target length, and the motion amplitude. Motion frequency and phase did not affect the elongation and CT number distribution of the mobile target and could not be determined. Conclusion: A 4D-CBCT motion algorithm was developed to extract three parameters - actual length, CT number level, and motion amplitude or speed of mobile targets - directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking or sorting of the images into different breathing phases, and has potential applications in diagnostic CT imaging and radiotherapy.
Scanlon, John M; Sherony, Rini; Gabler, Hampton C
2016-09-01
Intersection crashes resulted in over 5,000 fatalities in the United States in 2014. Intersection Advanced Driver Assistance Systems (I-ADAS) are active safety systems that seek to help drivers safely traverse intersections. I-ADAS uses onboard sensors to detect oncoming vehicles and, in the event of an imminent crash, can either alert the driver or take autonomous evasive action. The objective of this study was to develop and evaluate a predictive model for detecting whether a stop sign violation was imminent. Passenger vehicle intersection approaches were extracted from a data set of typical driver behavior (100-Car Naturalistic Driving Study) and violations (event data recorders downloaded from real-world crashes) and were assigned weighting factors based on real-world frequency. A k-fold cross-validation procedure was then used to develop and evaluate 3 hypothetical stop sign warning algorithms (i.e., early, intermediate, and delayed) for detecting an impending violation during the intersection approach. Violation detection models were developed using logistic regression models that evaluate the likelihood of a violation at various locations along the intersection approach. Two potential indicators of driver intent to stop, the required deceleration parameter (RDP) and brake application, were used to develop the predictive models. The earliest violation detection opportunity was then evaluated for each detection algorithm in order to (1) evaluate the violation detection accuracy and (2) compare braking demand versus maximum braking capabilities. A total of 38 violating and 658 nonviolating approaches were used in the analysis. All 3 algorithms were able to detect a violation at some point during the intersection approach. The early detection algorithm, as designed, detected violations earlier than the other algorithms during the intersection approach but gave false alarms for 22.3% of approaches. In contrast, the delayed detection algorithm sacrificed some time for detecting violations but substantially reduced false alarms, to only 3.3% of all nonviolating approaches. Given good surface conditions (maximum braking capability = 0.8 g) and maximum effort, most drivers (55.3-71.1%) would be able to stop the vehicle regardless of the detection algorithm. However, given poor surface conditions (maximum braking capability = 0.4 g), few drivers (10.5-26.3%) would be able to stop the vehicle. Automatic emergency braking (AEB) would allow for early braking prior to driver reaction. If equipped with an AEB system, the results suggest that, even for the poor surface conditions scenario, over one half (55.3-65.8%) of the vehicles could have been stopped. This study demonstrates the potential of I-ADAS to incorporate a stop sign violation detection algorithm. Repeating the analysis on a larger, more extensive data set will allow for the development of a more comprehensive algorithm to further validate the findings.
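A minimal sketch of such a violation-detection model - a logistic regression over RDP and brake-application features evaluated at a fixed point on the approach - is shown below with scikit-learn. The synthetic data and the toy labelling rule are assumptions; the study's models were fit to weighted naturalistic and EDR data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic per-approach features sampled at a fixed distance from the stop
# bar: required deceleration parameter (RDP, in g) and a brake-applied flag.
rng = np.random.default_rng(0)
n = 700
rdp = rng.uniform(0.0, 0.9, n)
brake = rng.integers(0, 2, n)
X = np.column_stack([rdp, brake])
y = ((rdp > 0.45) & (brake == 0)).astype(int)   # toy rule: 1 = violation

# The study additionally weighted samples by real-world frequency, which
# fit() supports via its sample_weight argument.
model = LogisticRegression().fit(X, y)
p_violation = model.predict_proba(X)[:, 1]      # alert above some threshold
```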
A novel automated spike sorting algorithm with adaptable feature extraction.
Bestel, Robert; Daus, Andreas W; Thielemann, Christiane
2012-10-15
To study the electrophysiological properties of neuronal networks, in vitro studies based on microelectrode arrays have become a viable tool for analysis. Although in constant progress, a challenging task still remains in this area: the development of an efficient spike sorting algorithm that allows accurate signal analysis at the single-cell level. Most sorting algorithms currently available extract only a specific feature type, such as the principal components or wavelet coefficients of the measured spike signals, in order to separate different spike shapes generated by different neurons. However, due to the great variety of obtained spike shapes, the derivation of an optimal feature set is still a very complex issue that current algorithms struggle with. To address this problem, we propose a novel algorithm that (i) extracts a variety of geometric, wavelet, and principal component-based features and (ii) automatically derives the feature subset most suitable for sorting an individual set of spike signals. The approach evaluates the probability distribution of the obtained spike features and determines the candidates most suitable for the actual spike sorting. These candidates form an individually adjusted set of spike features, allowing a separation of the various shapes present in the obtained neuronal signal by a subsequent expectation-maximisation clustering algorithm. Test results with simulated data files and data obtained from chick embryonic neurons cultured on microelectrode arrays showed an excellent classification result, indicating the superior performance of the described approach. Copyright © 2012 Elsevier B.V. All rights reserved.
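As a simplified illustration of the feature-extraction-plus-EM-clustering backbone (using only the principal-component feature family rather than the paper's adaptive feature selection), a scikit-learn sketch might look like this; the synthetic waveforms and cluster count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# spikes: one aligned waveform per row; three synthetic units here.
rng = np.random.default_rng(1)
spikes = np.vstack([rng.normal(mu, 0.3, (200, 48)) for mu in (-1.0, 0.0, 1.0)])

features = PCA(n_components=3).fit_transform(spikes)  # one feature family only
gmm = GaussianMixture(n_components=3, covariance_type='full', random_state=0)
units = gmm.fit_predict(features)                     # EM clustering into units
```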
NASA Astrophysics Data System (ADS)
Vijay Alagappan, A.; Narasimha Rao, K. V.; Krishna Kumar, R.
2015-02-01
Tyre models are a prerequisite for any vehicle dynamics simulation. Tyre models range from the simplest mathematical models that consider only the cornering stiffness to a complex set of formulae. Among all the steady-state tyre models that are in use today, the Magic Formula tyre model is unique and most popular. Though the Magic Formula tyre model is widely used, obtaining the model coefficients from either the experimental or the simulation data is not straightforward due to its nonlinear nature and the presence of a large number of coefficients. A common procedure used for this extraction is the least-squares minimisation that requires considerable experience for initial guesses. Various researchers have tried different algorithms, namely, gradient and Newton-based methods, differential evolution, artificial neural networks, etc. The issues involved in all these algorithms are setting bounds or constraints, sensitivity of the parameters, the features of the input data such as the number of points, noisy data, experimental procedure used such as slip angle sweep or tyre measurement (TIME) procedure, etc. The extracted Magic Formula coefficients are affected by these variants. This paper highlights the issues that are commonly encountered in obtaining these coefficients with different algorithms, namely, least-squares minimisation using trust region algorithms, Nelder-Mead simplex, pattern search, differential evolution, particle swarm optimisation, cuckoo search, etc. A key observation is that not all the algorithms give the same Magic Formula coefficients for a given data. The nature of the input data and the type of the algorithm decide the set of the Magic Formula tyre model coefficients.
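A concrete instance of the least-squares fitting problem discussed above, using the basic four-coefficient Magic Formula and SciPy's trust-region least-squares solver, is sketched below. The synthetic data, initial guess, and bounds are illustrative; as the paper notes, the recovered coefficients depend on exactly these choices.

```python
import numpy as np
from scipy.optimize import least_squares

def magic_formula(p, alpha):
    """Basic Magic Formula: F = D*sin(C*arctan(B*a - E*(B*a - arctan(B*a))))."""
    B, C, D, E = p
    ba = B * alpha
    return D * np.sin(C * np.arctan(ba - E * (ba - np.arctan(ba))))

# Synthetic lateral-force data generated from known coefficients plus noise.
alpha = np.linspace(-0.3, 0.3, 200)                    # slip angle, rad
truth = (8.0, 1.6, 4000.0, 0.5)
rng = np.random.default_rng(0)
force = magic_formula(truth, alpha) + rng.normal(0, 50, alpha.size)

# The initial guess and bounds strongly influence the recovered coefficients.
fit = least_squares(lambda p: magic_formula(p, alpha) - force,
                    x0=(5.0, 1.3, 3000.0, 0.0),
                    bounds=([0, 0.5, 0, -2], [30, 3, 10000, 2]))
print(fit.x)   # should land near `truth`
```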
NASA Astrophysics Data System (ADS)
Jusman, Yessi; Ng, Siew-Cheok; Hasikin, Khairunnisa; Kurnia, Rahmadi; Osman, Noor Azuan Bin Abu; Teoh, Kean Hooi
2016-10-01
The capability of field emission scanning electron microscopy and energy dispersive x-ray spectroscopy (FE-SEM/EDX) to scan material structures at the microlevel and characterize the material with its elemental properties has inspired this research, which has developed an FE-SEM/EDX-based cervical cancer screening system. The developed computer-aided screening system consisted of two parts, which were the automatic features of extraction and classification. For the automatic features extraction algorithm, the image and spectra of cervical cells features extraction algorithm for extracting the discriminant features of FE-SEM/EDX data was introduced. The system automatically extracted two types of features based on FE-SEM/EDX images and FE-SEM/EDX spectra. Textural features were extracted from the FE-SEM/EDX image using a gray level co-occurrence matrix technique, while the FE-SEM/EDX spectra features were calculated based on peak heights and corrected area under the peaks using an algorithm. A discriminant analysis technique was employed to predict the cervical precancerous stage into three classes: normal, low-grade intraepithelial squamous lesion (LSIL), and high-grade intraepithelial squamous lesion (HSIL). The capability of the developed screening system was tested using 700 FE-SEM/EDX spectra (300 normal, 200 LSIL, and 200 HSIL cases). The accuracy, sensitivity, and specificity performances were 98.2%, 99.0%, and 98.0%, respectively.
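The final classification stage can be illustrated with scikit-learn's linear discriminant analysis on a combined texture-plus-spectra feature matrix, as below; the synthetic features and class separations are stand-ins for the FE-SEM/EDX data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Rows: one sample per measured cell; columns: GLCM texture features plus
# EDX peak-height/peak-area features (all synthetic stand-ins here).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(mu, 1.0, (100, 6)) for mu in (0.0, 2.0, 4.0)])
y = np.repeat(['normal', 'LSIL', 'HSIL'], 100)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict(X[:5]))   # predicted precancerous stage per sample
```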
Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun
2017-11-10
In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial in guaranteeing a high-convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in the numerical dwell time algorithms. In this paper, these constraints on dwell time distribution are analyzed, and a model of the equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
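Setting aside the machine-dynamics constraints that are the paper's focus, the underlying positivity-constrained dwell-time problem can be posed as non-negative least squares, as in this sketch; the influence matrix and error map are random stand-ins.

```python
import numpy as np
from scipy.optimize import nnls

# A[i, j]: material removed at surface point i per unit dwell at tool
# position j (a discretized influence function); e: error map to remove.
# Both are random stand-ins for a magnetorheological finishing setup.
rng = np.random.default_rng(0)
A = np.maximum(0.0, rng.random((120, 100)) - 0.7)
e = rng.random(120)

dwell, residual = nnls(A, e)   # dwell times constrained to be non-negative
```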
Morphology-based three-dimensional segmentation of coronary artery tree from CTA scans
NASA Astrophysics Data System (ADS)
Banh, Diem Phuc T.; Kyprianou, Iacovos S.; Paquerault, Sophie; Myers, Kyle J.
2007-03-01
We developed an algorithm based on a rule-based threshold framework to segment the coronary arteries from computed tomography angiography (CTA) data. Computerized segmentation of the coronary arteries is a challenging procedure due to the presence of diverse anatomical structures surrounding the heart in cardiac CTA data. The proposed algorithm incorporates various levels of image processing and organ information, including region, connectivity, and morphology operations. It consists of three successive stages. The first stage involves the extraction of the three-dimensional scaffold of the heart envelope. This stage is semi-automatic, requiring a reader to review the CTA scans and manually select points along the heart envelope in slices. These points are further processed using a surface spline-fitting technique to automatically generate the heart envelope. The second stage consists of segmenting the left heart chambers and coronary arteries using grayscale threshold, size, and connectivity criteria. This is followed by applying morphology operations to further detach the left and right coronary arteries from the aorta. In the final stage, the 3D vessel tree is reconstructed and labeled using an Isolated Connected Threshold technique. The algorithm was developed and tested on a patient coronary artery CTA that was graciously shared by the Department of Radiology of the Massachusetts General Hospital. The test showed that our method consistently segmented the vessels above 79% of the maximum gray level and automatically extracted 55 of the 58 coronary segments that can be seen in the CTA scan by a reader. These results are an encouraging step toward our objective of generating high-resolution models of the male and female heart that will subsequently be used as phantoms for medical imaging system optimization studies.
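The grayscale-threshold, size, and connectivity step of the second stage can be sketched with SciPy's connected-component labelling as below; the threshold, seed, and size cut are illustrative assumptions, and the Isolated Connected Threshold refinement is not shown.

```python
import numpy as np
from scipy import ndimage

def threshold_connect(volume, thresh, seed, min_size=50):
    """Grayscale threshold followed by connectivity and size criteria.

    Keeps the bright component containing `seed` plus any other components
    of at least `min_size` voxels; all parameter values are illustrative.
    """
    binary = volume > thresh
    labels, n = ndimage.label(binary)                  # 3-D connected components
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    keep = {int(labels[seed])} | {i + 1 for i, s in enumerate(sizes) if s >= min_size}
    keep.discard(0)                                    # drop the background label
    return np.isin(labels, list(keep))

mask = threshold_connect(np.random.rand(40, 40, 40), 0.6, seed=(20, 20, 20))
```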
Synthetic aperture radar target detection, feature extraction, and image formation techniques
NASA Technical Reports Server (NTRS)
Li, Jian
1994-01-01
This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.
Probabilistic 3D data fusion for multiresolution surface generation
NASA Technical Reports Server (NTRS)
Manduchi, R.; Johnson, A. E.
2002-01-01
In this paper we present an algorithm for adaptive-resolution integration of 3D data collected from multiple distributed sensors. The input to the algorithm is a set of 3D surface points and associated sensor models. Using a probabilistic rule, a surface probability function is generated that represents the probability that a particular volume of space contains the surface. The surface probability function is represented using an octree data structure; regions of space with samples of large covariance are stored at a coarser level than regions of space containing samples with smaller covariance. The algorithm outputs an adaptive-resolution surface generated by connecting points that lie on the ridge of surface probability with triangles scaled to match the local discretization of space given by the octree. Finally, we present results from 3D data generated by scanning lidar and structure from motion.
Light extraction block with curved surface
Levermore, Peter; Krall, Emory; Silvernail, Jeffrey; Rajan, Kamala; Brown, Julia J.
2016-03-22
Light extraction blocks, and OLED lighting panels using light extraction blocks, are described, in which the light extraction blocks include various curved shapes that provide improved light extraction properties compared to a parallel emissive surface, and a thinner form factor and better light extraction than a hemisphere. Lighting systems described herein may include a light source with an OLED panel. A light extraction block with a three-dimensional light emitting surface may be optically coupled to the light source. The three-dimensional light emitting surface of the block may include a substantially curved surface, with further characteristics related to the curvature of the surface at given points. A first radius of curvature corresponding to a maximum principal curvature k1 at a point p on the substantially curved surface may be greater than a maximum height of the light extraction block. A maximum height of the light extraction block may be less than 50% of a maximum width of the light extraction block. Surfaces with cross sections made up of line segments and inflection points may also be fit to approximated curves for calculating the radius of curvature.
Behavior Based Social Dimensions Extraction for Multi-Label Classification
Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin
2016-01-01
Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849
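A minimal sketch of the LDA step - treating each node's interaction counts as a "document" and extracting a per-node topic mixture to use as social dimensions - is given below with scikit-learn; the count matrix and component number are synthetic assumptions.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# counts[i, j]: how often node i interacts with node/group j; rows play the
# role of "documents" and columns of "words" (synthetic counts here).
rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(500, 200))

lda = LatentDirichletAllocation(n_components=10, random_state=0)
social_dims = lda.fit_transform(counts)   # per-node behavior mixture
# social_dims then feeds a standard multi-label classifier (e.g., one-vs-rest).
```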
An Automatic Cloud Mask Algorithm Based on Time Series of MODIS Measurements
NASA Technical Reports Server (NTRS)
Lyapustin, Alexei; Wang, Yujie; Frey, R.
2008-01-01
Quality of aerosol retrievals and atmospheric correction depends strongly on the accuracy of the cloud mask (CM) algorithm. The heritage CM algorithms developed for AVHRR and MODIS use the latest sensor measurements of spectral reflectance and brightness temperature and perform processing at the pixel level. The algorithms are threshold-based and empirically tuned, and they do not explicitly address the classical problem of cloud search, wherein a baseline clear-skies scene is defined for comparison. Here, we report on a new CM algorithm that explicitly builds and maintains a reference clear-skies image of the surface (refcm) using a time series of MODIS measurements. The new algorithm, developed as part of the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm for MODIS, relies on the fact that clear-skies images of the same surface area share a common textural pattern, defined by the surface topography, the boundaries of rivers and lakes, the distribution of soils and vegetation, etc. This pattern changes slowly given the daily rate of global Earth observations, whereas clouds introduce high-frequency random disturbances. Under clear skies, consecutive gridded images of the same surface area have a high covariance, whereas in the presence of clouds the covariance is usually low. This idea is central to the initialization of the refcm, which is used to derive the cloud mask in combination with spectral and brightness temperature tests. The refcm is continuously updated with the latest clear-skies MODIS measurements, thus adapting to seasonal and rapid surface changes. The algorithm is enhanced by an internal dynamic land-water-snow classification coupled with a surface change mask. An initial comparison shows that the new algorithm offers the potential to perform better than the MODIS MOD35 cloud mask in situations where the land surface is changing rapidly, and over Earth regions covered by snow and ice.
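The covariance test at the heart of the refcm comparison can be sketched as a normalized covariance (correlation) between a reference block and the latest observation of the same area, as below; the threshold value is an assumption, not the MAIAC setting.

```python
import numpy as np

def is_clear(refcm_block, new_block, threshold=0.7):
    """Flag an image block as clear when its spatial pattern matches refcm.

    refcm_block: clear-skies reference block; new_block: latest gridded
    measurement of the same area. The 0.7 threshold is an assumed value.
    """
    a = refcm_block.astype(float).ravel()
    b = new_block.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    corr = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return corr > threshold   # clouds disrupt the shared surface texture
```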
Aorta and pulmonary artery segmentation using optimal surface graph cuts in non-contrast CT
NASA Astrophysics Data System (ADS)
Sedghi Gamechi, Zahra; Arias-Lorza, Andres M.; Pedersen, Jesper Holst; de Bruijne, Marleen
2018-03-01
Accurate measurements of the size and shape of the aorta and pulmonary arteries are important as risk factors for cardiovascular diseases and for Chronic Obstructive Pulmonary Disease (COPD) [1]. The aim of this paper is to propose an automated method for segmenting the aorta and pulmonary arteries in low-dose non-ECG-gated non-contrast CT scans. Low contrast and a high noise level make automatic segmentation in such images a challenging task. In the proposed method, a minimum cost path tracking algorithm first traces the centerline between user-defined seed points. The cost function is based on a multi-directional medialness filter and a lumen intensity similarity metric; the vessel radius is also estimated from the medialness filter. The extracted centerlines are then smoothed and dilated non-uniformly according to the extracted local vessel radius and subsequently used as initialization for a graph-cut segmentation. The algorithm is evaluated on 225 low-dose non-ECG-gated non-contrast CT scans from a lung cancer screening trial. Quantitatively analyzing 25 scans with full manual annotations, we obtain a Dice overlap of 0.94 ± 0.01 for the aorta and 0.92 ± 0.01 for the pulmonary arteries. Qualitative validation by visual inspection of 200 scans shows successful segmentation in 93% of all cases for the aorta and 94% for the pulmonary arteries.
Manifold learning in machine vision and robotics
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-02-01
Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classification of image content, navigation of mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space", with a smaller intrinsic dimensionality. A generally accepted model for such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from "natural" sources meet this model as a rule. The use of manifold learning techniques in machine vision and robotics, which discover the low-dimensional structure of high-dimensional data and result in effective algorithms for solving a large number of subject-oriented tasks, is the content of the conference plenary speech, some topics of which are presented in this paper.
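As a small, self-contained illustration of the manifold model - high-dimensional observations that actually live on a low-dimensional surface - the classic swiss-roll example with scikit-learn's Isomap is sketched below; the data set and neighborhood size are conventional demo choices, not anything from the paper.

```python
import numpy as np
from sklearn.manifold import Isomap

# A "swiss roll": 3-D observations that actually lie on a 2-D manifold.
rng = np.random.default_rng(0)
t = 1.5 * np.pi * (1 + 2 * rng.random(1000))
X = np.column_stack([t * np.cos(t), 20 * rng.random(1000), t * np.sin(t)])

# Isomap unrolls the surface, recovering 2-D intrinsic coordinates.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
```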
A Novel Algorithm for Determining Contact Area Between a Respirator and a Headform
Lei, Zhipeng; Yang, James; Zhuang, Ziqing
2016-01-01
The contact area, as well as the contact pressure, is created when a respiratory protection device (a respirator or surgical mask) contacts a human face. A computer-based algorithm for determining the contact area between a headform and N95 filtering facepiece respirator (FFR) was proposed. Six N95 FFRs were applied to five sizes of standard headforms (large, medium, small, long/narrow, and short/wide) to simulate respirator donning. After the contact simulation between a headform and an N95 FFR was conducted, a contact area was determined by extracting the intersection surfaces of the headform and the N95 FFR. Using computer-aided design tools, a superimposed contact area and an average contact area, which are non-uniform rational basis spline (NURBS) surfaces, were developed for each headform. Experiments that directly measured dimensions of the contact areas between headform prototypes and N95 FFRs were used to validate the simulation results. Headform sizes influenced all contact area dimensions (P < 0.0001), and N95 FFR sizing systems influenced all contact area dimensions (P < 0.05) except the left and right chin regions. The medium headform produced the largest contact area, while the large and small headforms produced the smallest. PMID:24579752
Mars Pathfinder Atmospheric Entry Navigation Operations
NASA Technical Reports Server (NTRS)
Braun, R. D.; Spencer, D. A.; Kallemeyn, P. H.; Vaughan, R. M.
1997-01-01
On July 4, 1997, after traveling close to 500 million km, the Pathfinder spacecraft successfully completed entry, descent, and landing, coming to rest on the surface of Mars just 27 km from its target point. In the present paper, the atmospheric entry and approach navigation activities required in support of this mission are discussed. In particular, the flight software parameter update and landing site prediction analyses performed by the Pathfinder operations navigation team are described. A suite of simulation tools developed during Pathfinder's design cycle, but extendible to Pathfinder operations, is also presented. Data regarding the accuracy of the primary parachute deployment algorithm are extracted from the Pathfinder flight data, demonstrating that this algorithm performed as predicted. The increased probability of mission success through the software parameter update process is discussed. This paper also demonstrates the importance of modeling atmospheric flight uncertainties in the estimation of an accurate landing site. With these atmospheric effects included, the final landed ellipse prediction differs from the post-flight determined landing site by less than 0.5 km in the downtrack direction.
Multichannel Doppler Processing for an Experimental Low-Angle Tracking System
1990-05-01
... estimation techniques at sea. Because of clutter and noise, it is necessary to use a number of different processing algorithms to extract the required information. Consequently, the ELAT radar system is composed of multiple ... corresponding to RF frequencies f1 and f2. For mode 3, the ambiguities occur at vb1 = 15.186 knots and vb2 = 16.96 knots. The sea clutter, with a spectrum ...
Yi, Faliu; Moon, Inkyu; Javidi, Bahram
2017-10-01
In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm.
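A minimal sketch of the marker-controlled watershed step used by the second model, assuming the per-pixel foreground probability produced by the trained FCN is available (a synthetic stand-in is used here); confident FCN foreground serves as the internal markers:

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.segmentation import watershed

    prob = np.zeros((128, 128))
    yy, xx = np.mgrid[:128, :128]
    for cy, cx in [(40, 40), (60, 55), (90, 90)]:       # three touching "cells"
        prob = np.maximum(prob, np.exp(-((yy - cy)**2 + (xx - cx)**2) / 200.0))

    mask = prob > 0.3                                   # coarse foreground from the FCN
    markers, _ = ndi.label(prob > 0.8)                  # confident cores as internal markers
    distance = ndi.distance_transform_edt(mask)
    labels = watershed(-distance, markers, mask=mask)   # split touching cells
    print(labels.max(), "cells separated")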
Novel method of extracting motion from natural movies.
Suzuki, Wataru; Ichinohe, Noritaka; Tani, Toshiki; Hayami, Taku; Miyakawa, Naohisa; Watanabe, Satoshi; Takeichi, Hiroshige
2017-11-01
The visual system in primates can be segregated into motion and shape pathways. Interaction occurs at multiple stages along these pathways. Processing of shape-from-motion and biological motion is considered to be a higher-order integration process involving motion and shape information. However, relatively limited types of stimuli have been used in previous studies on these integration processes. We propose a new algorithm to extract object motion information from natural movies and to move random dots in accordance with the information. The object motion information is extracted by estimating the dynamics of local normal vectors of the image intensity projected onto the x-y plane of the movie. An electrophysiological experiment on two adult common marmoset monkeys (Callithrix jacchus) showed that the natural and random dot movies generated with this new algorithm yielded comparable neural responses in the middle temporal visual area. In principle, this algorithm provided random dot motion stimuli containing shape information for arbitrary natural movies. This new method is expected to expand the neurophysiological and psychophysical experimental protocols to elucidate the integration processing of motion and shape information in biological systems. The novel algorithm proposed here was effective in extracting object motion information from natural movies and provided new motion stimuli to investigate higher-order motion information processing. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
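A rough sketch of the underlying idea, not the authors' algorithm: under brightness constancy, the motion component along the local intensity normal can be estimated from the spatial gradient and the temporal difference of two consecutive frames, and could then be used to drive random dots:

    import numpy as np

    def normal_flow(frame0, frame1, eps=1e-6):
        Iy, Ix = np.gradient(frame0.astype(float))   # spatial gradient (local normal direction)
        It = frame1.astype(float) - frame0           # temporal derivative
        mag2 = Ix**2 + Iy**2 + eps
        vx = -It * Ix / mag2                         # per-pixel motion along the normal
        vy = -It * Iy / mag2
        return vx, vy

    f0 = np.random.rand(64, 64)
    f1 = np.roll(f0, 1, axis=1)                      # frame shifted right by one pixel
    vx, vy = normal_flow(f0, f1)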
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sang, Yuanrui; Karayaka, H. Bora; Yan, Yanjun
The slider crank is a proven mechanical linkage system with a long history of successful applications, and the slider-crank ocean wave energy converter (WEC) is a type of WEC that converts linear motion into rotation. This paper presents a control algorithm for a slider-crank WEC. In this study, a time-domain hydrodynamic analysis is adopted, and an AC synchronous machine is used in the power take-off system to achieve relatively high system performance. Also, a rule-based phase control strategy is applied to maximize energy extraction, making the system suitable for not only regular sinusoidal waves but also irregular waves. Simulations are carried out under regular sinusoidal wave and synthetically produced irregular wave conditions; performance validations are also presented with high-precision, real ocean wave surface elevation data. The influences of significant wave height and peak period upon the energy extraction of the system are studied. Energy extraction results using the proposed method are compared to those of the passive loading and complex conjugate control strategies; results show that the level of energy extraction is between those of the passive loading and complex conjugate control strategies, and the suboptimal nature of this control strategy is verified.
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Yang, Yanxi
2018-05-01
We present a new wavelet ridge extraction method employing a novel cost function in two-dimensional wavelet transform profilometry (2-D WTP). First, the maximum value point is extracted from the two-dimensional wavelet transform coefficient modulus, and the local extreme value points above 90% of the maximum value are also obtained; together they constitute the wavelet ridge candidates. Then, the gradient of the rotation factor is introduced into Abid's cost function, and the logarithmic Logistic model is used to adjust and improve the cost function weights so as to obtain a more reasonable value estimation. Finally, dynamic programming is used to accurately find the optimal wavelet ridge, and the wrapped phase is obtained by extracting the phase at the ridge. The advantage is that fringe patterns with low signal-to-noise ratio can be demodulated accurately, with better noise immunity. Meanwhile, only one fringe pattern needs to be projected onto the measured object, so dynamic three-dimensional (3-D) measurement in harsh environments can be realized. Computer simulation and experimental results show that, for fringe patterns with noise pollution, the 3-D surface recovery accuracy of the proposed algorithm is increased. In addition, the demodulation phase accuracies of the Morlet, Fan, and Cauchy mother wavelets are compared.
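A minimal sketch of dynamic-programming ridge extraction over a coefficient modulus array (rows: scales, columns: pixel positions); the paper's full cost also involves the rotation-factor gradient and logistic weighting, which are replaced here by a simple jump penalty:

    import numpy as np

    def extract_ridge(modulus, jump_penalty=0.1):
        n_scales, n_cols = modulus.shape
        score = np.zeros_like(modulus)
        back = np.zeros(modulus.shape, dtype=int)
        score[:, 0] = modulus[:, 0]
        rows = np.arange(n_scales)
        for j in range(1, n_cols):
            # cost of moving from scale k at column j-1 to scale i at column j
            trans = score[:, j - 1][None, :] - jump_penalty * np.abs(rows[:, None] - rows[None, :])
            back[:, j] = np.argmax(trans, axis=1)
            score[:, j] = modulus[:, j] + np.max(trans, axis=1)
        ridge = np.empty(n_cols, dtype=int)
        ridge[-1] = np.argmax(score[:, -1])
        for j in range(n_cols - 1, 0, -1):   # backtrack the optimal path
            ridge[j - 1] = back[ridge[j], j]
        return ridge                          # optimal scale index per column

    modulus = np.abs(np.random.rand(50, 200))
    ridge = extract_ridge(modulus)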
Antimicrobial thin films based on ayurvedic plants extracts embedded in a bioactive glass matrix
NASA Astrophysics Data System (ADS)
Floroian, L.; Ristoscu, C.; Candiani, G.; Pastori, N.; Moscatelli, M.; Mihailescu, N.; Negut, I.; Badea, M.; Gilca, M.; Chiesa, R.; Mihailescu, I. N.
2017-09-01
Ayurvedic medicine is one of the oldest medical systems. It is an example of a coherent traditional system with a time-tested and precise algorithm for medicinal plant selection, based on several ethnopharmacophore descriptors whose knowledge enables the user to choose the optimal plant for the treatment of a certain pathology. This work aims to link traditional knowledge with biomedical science by using traditional ayurvedic plant extracts with antimicrobial effect in the form of thin films for implant protection. We report on the transfer of novel composites of bioactive glass mixed with antimicrobial plant extracts and polymer by matrix-assisted pulsed laser evaporation into uniform thin layers on stainless steel implant-like surfaces. The comprehensive characterization of the deposited films was performed by complementary analyses: Fourier transform infrared spectroscopy, glow discharge optical emission spectroscopy, scanning electron microscopy, atomic force microscopy, electrochemical impedance spectroscopy, UV-VIS absorption spectroscopy, and antimicrobial tests. The results emphasize the multifunctionality of these coatings, which halt the leakage of metal and metal oxides into the biological fluids and eventually to inner organs (by the use of the polymer), speed up osseointegration (due to the bioactive glass), exert antimicrobial effects (by the ayurvedic plant extracts), and decrease the implant price (by the use of cheaper stainless steel).
Research of facial feature extraction based on MMC
NASA Astrophysics Data System (ADS)
Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun
2017-07-01
Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction are proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after the projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method of statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. Besides, the relations between the maximum margin criterion and the Fisher criterion for feature extraction are revealed.
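A minimal sketch of basic MMC feature extraction, projecting onto the leading eigenvectors of Sb - Sw (inter-class minus intra-class scatter); the statistically uncorrelated and orthogonal variants proposed in the paper impose additional constraints not reproduced here:

    import numpy as np

    def mmc_projection(X, y, n_components):
        classes = np.unique(y)
        mean_all = X.mean(axis=0)
        d = X.shape[1]
        Sb = np.zeros((d, d))                       # inter-class scatter
        Sw = np.zeros((d, d))                       # intra-class scatter
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
            Sw += (Xc - mc).T @ (Xc - mc)
        vals, vecs = np.linalg.eigh(Sb - Sw)        # symmetric matrix, so eigh
        W = vecs[:, np.argsort(vals)[::-1][:n_components]]
        return X @ W                                # extracted features

    X = np.random.randn(100, 20)
    y = np.repeat([0, 1, 2, 3], 25)
    features = mmc_projection(X, y, n_components=3)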
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela M.; Bianchi, Andrea G. C.; DeBianchi, Christina
We introduce a computational analysis workflow to access properties of solid objects using nondestructive imaging techniques that rely on X-ray imaging. The goal is to process and quantify structures from material science sample cross sections. The algorithms can differentiate the porous media (high density material) from the void (background, low density media) using a Boolean classifier, so that we can extract features such as volume, surface area, granularity spectrum, and porosity, among others. Our workflow, Quant-CT, leverages several algorithms from ImageJ, such as statistical region merging and the 3D object counter. It also includes schemes for bilateral filtering that use a 3D kernel, for parallel processing of sub-stacks, and for handling over-segmentation using histogram similarities. Quant-CT supports fast user interaction, providing the ability for the user to train the algorithm via subsamples to feed its core algorithms with automated parameterization. The Quant-CT plugin is currently available for testing by personnel at the Advanced Light Source and Earth Sciences Divisions and the Energy Frontier Research Center (EFRC), LBNL, as part of their research on porous materials. The goal is to understand the processes in fluid-rock systems for the geologic sequestration of CO2, and to develop technology for the safe storage of CO2 in deep subsurface rock formations. We describe our implementation, and demonstrate our plugin on porous material images. This paper targets end-users, with relevant information for developers to extend its current capabilities.
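A minimal sketch of the Boolean solid/void classification and two of the listed features (porosity and grain count) on a synthetic grayscale volume; the Otsu threshold is an assumption, not necessarily what Quant-CT uses internally:

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu

    volume = np.random.normal(loc=0.4, scale=0.1, size=(64, 64, 64))
    volume[20:40, 20:40, 20:40] += 0.4               # a denser "solid" block

    solid = volume > threshold_otsu(volume)          # Boolean classifier: solid vs void
    porosity = 1.0 - solid.mean()                    # void fraction of the volume
    labels, n_grains = ndi.label(solid)              # 3D object counting
    print(f"porosity={porosity:.3f}, grains={n_grains}")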
Zikmund, T; Kvasnica, L; Týč, M; Křížová, A; Colláková, J; Chmelík, R
2014-11-01
Transmitted light holographic microscopy is particularly used for quantitative phase imaging of transparent microscopic objects such as living cells. The study of the cell is based on extracting dynamic data on cell behaviour from the time-lapse sequence of phase images. However, the phase images are affected by phase aberrations that make the analysis particularly difficult, because the phase deformation is prone to change during long-term experiments. Here, we present a novel algorithm for sequential processing of phase images of living cells in a time-lapse sequence. The algorithm compensates for the deformation of a phase image using weighted least-squares surface fitting. Moreover, it identifies and segments the individual cells in the phase image. All these procedures are performed automatically and applied immediately after every single phase image is obtained. This property of the algorithm is important for real-time cell quantitative phase imaging and for instantaneous control of the course of the experiment by playback of the recorded sequence up to the actual time. Such operator intervention is a forerunner of process automation derived from image analysis. The efficiency of the proposed algorithm is demonstrated on images of rat fibrosarcoma cells acquired with an off-axis holographic microscope. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
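A minimal sketch of background compensation by weighted least-squares fitting of a low-order polynomial surface to a phase image; in practice the weights would down-weight cell pixels (they are uniform here), and the polynomial order is an illustrative choice:

    import numpy as np

    def fit_background(phase, weights, order=2):
        h, w = phase.shape
        y, x = np.mgrid[:h, :w]
        x = x.ravel() / w
        y = y.ravel() / h
        # design matrix of polynomial terms up to the requested order
        cols = [x**i * y**j for i in range(order + 1)
                for j in range(order + 1 - i)]
        A = np.stack(cols, axis=1)
        sw = np.sqrt(weights.ravel())
        coef, *_ = np.linalg.lstsq(A * sw[:, None], phase.ravel() * sw, rcond=None)
        return (A @ coef).reshape(h, w)

    phase = np.random.rand(128, 128) * 0.05
    phase += np.fromfunction(lambda i, j: 1e-4 * (i - 64)**2, (128, 128))  # parabolic aberration
    background = fit_background(phase, weights=np.ones_like(phase))
    compensated = phase - background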
Novel Semi-Parametric Algorithm for Interference-Immune Tunable Absorption Spectroscopy Gas Sensing
Michelucci, Umberto; Venturini, Francesca
2017-01-01
One of the most common limits to gas sensor performance is the presence of unwanted interference fringes arising, for example, from multiple reflections between surfaces in the optical path. Additionally, since the amplitude and the frequency of these interferences depend on the distance and alignment of the optical elements, they are affected by temperature changes and mechanical disturbances, giving rise to a drift of the signal. In this work, we present a novel semi-parametric algorithm that allows the extraction of a signal, like the spectroscopic absorption line of a gas molecule, from a background containing arbitrary disturbances, without having to make any assumption on the functional form of these disturbances. The algorithm is applied first to simulated data and then to oxygen absorption measurements in the presence of strong fringes. To the best of the authors’ knowledge, the algorithm enables an unprecedented accuracy, particularly if the fringes have a free spectral range and amplitude comparable to those of the signal to be detected. The described method presents the advantage of being based purely on post processing, and of being extremely straightforward to implement if the functional form of the Fourier transform of the signal is known. Therefore, it has the potential to enable interference-immune absorption spectroscopy. Finally, its relevance goes beyond absorption spectroscopy for gas sensing, since it can be applied to any kind of spectroscopic data. PMID:28991161
12 CFR Appendix F to Part 360 - Customer File Structure
Code of Federal Regulations, 2013 CFR
2013-01-01
... extract. The files will be encrypted using an FDIC-supplied algorithm. The FDIC will transmit the encryption algorithm over FDICconnect. Note: Each record must contain the customer's name and permanent legal...
12 CFR Appendix F to Part 360 - Customer File Structure
Code of Federal Regulations, 2012 CFR
2012-01-01
... extract. The files will be encrypted using an FDIC-supplied algorithm. The FDIC will transmit the encryption algorithm over FDICconnect. Note: Each record must contain the customer's name and permanent legal...
12 CFR Appendix F to Part 360 - Customer File Structure
Code of Federal Regulations, 2011 CFR
2011-01-01
... extract. The files will be encrypted using an FDIC-supplied algorithm. The FDIC will transmit the encryption algorithm over FDICconnect. Note: Each record must contain the customer's name and permanent legal...
12 CFR Appendix F to Part 360 - Customer File Structure
Code of Federal Regulations, 2014 CFR
2014-01-01
... extract. The files will be encrypted using an FDIC-supplied algorithm. The FDIC will transmit the encryption algorithm over FDICconnect. Note: Each record must contain the customer's name and permanent legal...
12 CFR Appendix F to Part 360 - Customer File Structure
Code of Federal Regulations, 2010 CFR
2010-01-01
... extract. The files will be encrypted using an FDIC-supplied algorithm. The FDIC will transmit the encryption algorithm over FDICconnect. Note: Each record must contain the customer's name and permanent legal...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Ahmad, S; Alsbou, N
Purpose: A motion algorithm was developed to extract the actual length, CT numbers, and motion amplitude of a mobile target imaged with cone-beam CT (CBCT) retrospective to image reconstruction. Methods: The motion model considered a mobile target moving with a sinusoidal motion and employed three measurable parameters (apparent length, CT number level, and gradient of the mobile target obtained from CBCT images) to extract information about the actual length and CT number value of the stationary target and the motion amplitude. The algorithm was verified experimentally with a mobile phantom setup that has three targets of different sizes manufactured from homogeneous tissue-equivalent gel material embedded in a thorax phantom. The phantom moved sinusoidally in one direction using eight amplitudes (0–20 mm) and a frequency of 15 cycles per minute. The model required imaging parameters such as slice thickness and imaging time. Results: The motion algorithm extracted three unknown parameters (length of the target, CT number level, and motion amplitude) for a mobile target retrospective to CBCT image reconstruction. The algorithm relates the three unknown parameters to the measurable apparent length, CT number level, and gradient of well-defined mobile targets obtained from CBCT images. The motion model agreed with the measured apparent lengths, which were dependent on the actual length of the target and the motion amplitude. The cumulative CT number for a mobile target was dependent on the CT number level of the stationary target and the motion amplitude. The gradient of the CT distribution of a mobile target is dependent on the stationary CT number level, the actual target length along the direction of motion, and the motion amplitude. Motion frequency and phase did not affect the elongation and CT number distributions of mobile targets when the imaging time included several motion cycles. Conclusion: The motion algorithm developed in this study has potential applications in diagnostic CT imaging and radiotherapy to extract the actual length, size, and CT numbers distorted by motion in CBCT imaging. The model provides further information about the motion of the target.
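A toy simulation (not the authors' code) of the effect the model describes: for a sinusoidally moving target imaged over several motion cycles, the reconstructed profile is the time average of the shifted target, which elongates the apparent length and dilutes the CT number level; all numbers below are illustrative:

    import numpy as np

    grid = np.linspace(-40, 40, 801)                     # mm
    length, amplitude, ct_level = 20.0, 15.0, 100.0      # actual length, motion amplitude, CT level

    n_phases = 400
    profile = np.zeros_like(grid)
    for t in np.linspace(0.0, 2.0 * np.pi, n_phases, endpoint=False):
        shift = amplitude * np.sin(t)                    # sinusoidal target position
        profile += ct_level * (np.abs(grid - shift) <= length / 2)
    profile /= n_phases                                  # time-averaged reconstruction

    covered = grid[profile > 0.5]
    print(f"apparent length ~ {covered[-1] - covered[0]:.1f} mm vs actual {length:.0f} mm")
    print(f"peak CT level ~ {profile.max():.1f} vs stationary {ct_level:.0f}")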
NASA Astrophysics Data System (ADS)
Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada
2010-07-01
Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spatial information, and the automatic morphological endmember extraction (AMEE) and spatial spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the resulting abundance estimation based on the endmembers derived by different methods is also evaluated. Experimental results are conducted using a hyperspectral data set collected by NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The experimental results are quantitatively analyzed using reference information available from U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.
NASA Astrophysics Data System (ADS)
Maboudi, Mehdi; Amini, Jalal; Malihi, Shirin; Hahn, Michael
2018-04-01
Updated road networks, as a crucial part of the transportation database, play an important role in various applications. Thus, increasing the automation of road extraction approaches from remote sensing images has been the subject of extensive research. In this paper, we propose an object-based road extraction approach from very high resolution satellite images. Based on object-based image analysis, our approach incorporates various spatial, spectral, and textural object descriptors; the capabilities of the fuzzy logic system for handling the uncertainties in road modelling; and the effectiveness and suitability of the ant colony algorithm for the optimization of network-related problems. Four VHR optical satellite images acquired by the Worldview-2 and IKONOS satellites are used in order to evaluate the proposed approach. Evaluation of the extracted road networks shows that the average completeness, correctness, and quality of the results can reach 89%, 93% and 83% respectively, indicating that the proposed approach is applicable for urban road extraction. We also analyzed the sensitivity of our algorithm to different ant colony optimization parameter values. Comparison of the achieved results with the results of four state-of-the-art algorithms and quantifying the robustness of the fuzzy rule set demonstrate that the proposed approach is both efficient and transferable to other comparable images.
Detecting the red tide based on remote sensing data in optically complex East China Sea
NASA Astrophysics Data System (ADS)
Xu, Xiaohui; Pan, Delu; Mao, Zhihua; Tao, Bangyi; Liu, Qiong
2012-09-01
Red tide not only destroys marine fishery production, deteriorates the marine environment, and affects the coastal tourist industry, but also causes human poisoning, even death, through the eating of toxic seafood contaminated by red tide organisms. Remote sensing technology has the characteristics of large-scale, synchronized, rapid monitoring, so it is one of the most important and most effective means of red tide monitoring. This paper selects the high-frequency red tide areas of the East China Sea as the study area and MODIS/Aqua L2 data as the data source, and analyzes and compares the spectral differences between red tide water bodies and non-red tide water bodies across many historical events. Based on these spectral differences, this paper develops the algorithm Rrs555/Rrs488 > 1.5 to extract red tide information. Applying the algorithm to the red tide event that happened in the East China Sea on May 28, 2009, we found that the method can effectively determine the location of the occurrence of red tide; there is a good correspondence between the red tide extraction result and the chlorophyll a concentration derived by remote sensing, which shows that the algorithm can effectively determine the location of red tide and extract red tide information.
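The decision rule itself is a simple band ratio, so a direct sketch is straightforward; the reflectance arrays below are synthetic stand-ins for MODIS Rrs(555) and Rrs(488) swaths:

    import numpy as np

    rrs_555 = np.random.uniform(0.001, 0.01, size=(500, 500))   # stand-in Rrs(555) swath
    rrs_488 = np.random.uniform(0.001, 0.01, size=(500, 500))   # stand-in Rrs(488) swath

    red_tide_mask = (rrs_555 / rrs_488) > 1.5                   # True where red tide is flagged
    print(f"flagged pixels: {red_tide_mask.sum()}")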
Edge detection of optical subaperture image based on improved differential box-counting method
NASA Astrophysics Data System (ADS)
Li, Yi; Hui, Mei; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin
2018-01-01
Optical synthetic aperture imaging technology is an effective approach to improving imaging resolution. Compared with a monolithic mirror system, the image of an optical synthetic aperture system is often more complex at the edge and, as a result of the gaps between segments, stitching becomes a difficult problem. It is therefore necessary to extract the edges of subaperture images to achieve effective stitching. Fractal dimension, as a measurable feature, can describe image surface texture characteristics, which provides a new approach for edge detection. In our research, an improved differential box-counting method is used to calculate the fractal dimension of the image, and the obtained fractal dimension is then mapped to a grayscale image to detect edges. Compared with the original differential box-counting method, this method has two improvements: by modifying the box-counting mechanism, a box with a fixed height is replaced by a box with adaptive height, which solves the problem of over-counting the number of boxes covering the image intensity surface; and an image reconstruction method based on a super-resolution convolutional neural network is used to enlarge small images, which solves the problem that the fractal dimension cannot be calculated accurately for small images while maintaining the scale invariance of the fractal dimension. The experimental results show that the proposed algorithm can effectively eliminate noise and has a lower false detection rate than traditional edge detection algorithms. In addition, this algorithm maintains the integrity and continuity of image edges while retaining important edge information.
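A minimal sketch of the classic differential box-counting estimate of fractal dimension that the paper improves upon; the adaptive box height and the super-resolution CNN enlargement are not reproduced here, and the box sizes are illustrative:

    import numpy as np

    def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
        img = img.astype(float)
        M = img.shape[0]                     # assumes a square image, side divisible by sizes
        G = img.max() + 1                    # gray-level range
        counts = []
        for s in sizes:
            h = s * G / M                    # box height in gray levels
            n = 0
            for i in range(0, M, s):
                for j in range(0, M, s):
                    block = img[i:i + s, j:j + s]
                    # differential count: boxes spanned between block min and max
                    n += int(np.ceil((block.max() + 1) / h) - np.ceil((block.min() + 1) / h)) + 1
            counts.append(n)
        # fractal dimension = slope of log N(s) against log(1/s)
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    img = np.random.randint(0, 256, (64, 64))
    print(f"estimated fractal dimension: {box_count_dimension(img):.2f}")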
An IoT-Enabled Stroke Rehabilitation System Based on Smart Wearable Armband and Machine Learning
Yang, Geng; Pang, Gaoyang; Zhang, Hao; Li, Jiayi; Deng, Bin; Pang, Zhibo; Xu, Juan; Jiang, Mingzhe; Liljeberg, Pasi; Xie, Haibo; Yang, Huayong
2018-01-01
Surface electromyography signal plays an important role in hand function recovery training. In this paper, an IoT-enabled stroke rehabilitation system was introduced which was based on a smart wearable armband (SWA), machine learning (ML) algorithms, and a 3-D printed dexterous robot hand. User comfort is one of the key issues which should be addressed for wearable devices. The SWA was developed by integrating a low-power and tiny-sized IoT sensing device with textile electrodes, which can measure, pre-process, and wirelessly transmit bio-potential signals. By evenly distributing surface electrodes over the user’s forearm, the drawback of poor classification accuracy can be mitigated. A new method was put forward to find the optimal feature set. ML algorithms were leveraged to analyze and discriminate features of different hand movements, and their performances were appraised by classification complexity estimating algorithms and principal components analysis. According to the verification results, all nine gestures can be successfully identified with an average accuracy up to 96.20%. In addition, a 3-D printed five-finger robot hand was implemented for hand rehabilitation training purposes. Correspondingly, the user’s hand movement intentions were extracted and converted into a series of commands which were used to drive the motors assembled inside the dexterous robot hand. As a result, the dexterous robot hand can mimic the user’s gesture in a real-time manner, which shows the proposed system can be used as a training tool to facilitate the rehabilitation process for patients after stroke. PMID:29805919
Fongaro, Lorenzo; Ho, Doris Mer Lin; Kvaal, Knut; Mayer, Klaus; Rondinella, Vincenzo V
2016-05-15
The identification of interdicted nuclear or radioactive materials requires the application of dedicated techniques. In this work, a new approach for characterizing powders of uranium ore concentrates (UOCs) is presented. It is based on image texture analysis and multivariate data modelling. 26 different UOC samples were evaluated by applying the Angle Measure Technique (AMT) algorithm to extract textural features from sample images acquired at 250× and 1000× magnification by Scanning Electron Microscope (SEM). At both magnifications, this method proved effective in classifying the different types of UOC powder based on the surface characteristics that depend on particle size, homogeneity, and graininess and are related to the composition and processes used in the production facilities. Using the outcome data from the application of the AMT algorithm, the total explained variance was higher than 90% with Principal Component Analysis (PCA), while partial least-squares discriminant analysis (PLS-DA), applied only to the 14 black UOC powder samples, allowed their classification on the basis of their surface texture features alone (sensitivity > 0.6; specificity > 0.6). This preliminary study shows that the method was able to distinguish samples with similar composition but obtained from different facilities. The mean angle spectral data obtained by image texture analysis using the AMT algorithm can be considered a specific fingerprint or signature of UOCs and could be used for nuclear forensic investigation. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Gauge invariant spectral Cauchy characteristic extraction
NASA Astrophysics Data System (ADS)
Handmer, Casey J.; Szilágyi, Béla; Winicour, Jeffrey
2015-12-01
We present gauge invariant spectral Cauchy characteristic extraction. We compare gravitational waveforms extracted from a head-on black hole merger simulated in two different gauges by two different codes. We show rapid convergence, demonstrating both gauge invariance of the extraction algorithm and consistency between the legacy Pitt null code and the much faster spectral Einstein code (SpEC).
Hardjojo, Antony; Gunachandran, Arunan; Pang, Long; Abdullah, Mohammed Ridzwan Bin; Wah, Win; Chong, Joash Wen Chen; Goh, Ee Hui; Teo, Sok Huang; Lim, Gilbert; Lee, Mong Li; Hsu, Wynne; Lee, Vernon; Chen, Mark I-Cheng; Wong, Franco; Phang, Jonathan Siung King
2018-06-11
Free-text clinical records provide a source of information that complements traditional disease surveillance. To electronically harness these records, they need to be transformed into codified fields by natural language processing algorithms. The aim of this study was to develop, train, and validate Clinical History Extractor for Syndromic Surveillance (CHESS), a natural language processing algorithm to extract clinical information from free-text primary care records. CHESS is a keyword-based natural language processing algorithm to extract 48 signs and symptoms suggesting respiratory infections, gastrointestinal infections, constitutional symptoms, as well as other signs and symptoms potentially associated with infectious diseases. The algorithm also captured the assertion status (affirmed, negated, or suspected) and symptom duration. Electronic medical records from the National Healthcare Group Polyclinics, a major public sector primary care provider in Singapore, were randomly extracted and manually reviewed by 2 human reviewers, with a third reviewer as the adjudicator. The algorithm was evaluated based on 1680 notes against the human-coded result as the reference standard, with half of the data used for training and the other half for validation. The symptoms most commonly present within the 1680 clinical records at the episode level were those typically present in respiratory infections such as cough (744/7703, 9.66%), sore throat (591/7703, 7.67%), rhinorrhea (552/7703, 7.17%), and fever (928/7703, 12.04%). At the episode level, CHESS had an overall performance of 96.7% precision and 97.6% recall on the training dataset and 96.0% precision and 93.1% recall on the validation dataset. Symptoms suggesting respiratory and gastrointestinal infections were all detected with more than 90% precision and recall. CHESS correctly assigned the assertion status in 97.3%, 97.9%, and 89.8% of affirmed, negated, and suspected signs and symptoms, respectively (97.6% overall accuracy). Symptom episode duration was correctly identified in 81.2% of records with known duration status. We have developed a natural language processing algorithm dubbed CHESS that achieves good performance in extracting signs and symptoms from primary care free-text clinical records. In addition to the presence of symptoms, our algorithm can also accurately distinguish affirmed, negated, and suspected assertion statuses and extract symptom durations. ©Antony Hardjojo, Arunan Gunachandran, Long Pang, Mohammed Ridzwan Bin Abdullah, Win Wah, Joash Wen Chen Chong, Ee Hui Goh, Sok Huang Teo, Gilbert Lim, Mong Li Lee, Wynne Hsu, Vernon Lee, Mark I-Cheng Chen, Franco Wong, Jonathan Siung King Phang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 11.06.2018.
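A highly simplified sketch in the spirit of a keyword-based extractor with assertion status; the keyword lists, cue words, and clause-based negation scope below are illustrative, not those of CHESS:

    import re

    SYMPTOMS = {"cough", "fever", "sore throat", "rhinorrhea", "diarrhea"}
    NEGATION_CUES = {"no", "denies", "without"}
    SUSPECT_CUES = {"possible", "suspected", "query"}

    def extract_symptoms(note):
        findings = []
        for clause in re.split(r"[.,;]", note.lower()):   # limit cue scope to the clause
            for symptom in SYMPTOMS:
                m = re.search(r"\b" + re.escape(symptom) + r"\b", clause)
                if not m:
                    continue
                before = set(clause[:m.start()].split())  # tokens preceding the symptom
                if before & NEGATION_CUES:
                    status = "negated"
                elif before & SUSPECT_CUES:
                    status = "suspected"
                else:
                    status = "affirmed"
                findings.append((symptom, status))
        return findings

    print(extract_symptoms("Patient reports fever for 2 days, denies cough. Possible rhinorrhea."))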
Two frameworks for integrating knowledge in induction
NASA Technical Reports Server (NTRS)
Rosenbloom, Paul S.; Hirsh, Haym; Cohen, William W.; Smith, Benjamin D.
1994-01-01
The use of knowledge in inductive learning is critical for improving the quality of the concept definitions generated, reducing the number of examples required in order to learn effective concept definitions, and reducing the computation needed to find good concept definitions. Relevant knowledge may come in many forms (such as examples, descriptions, advice, and constraints) and from many sources (such as books, teachers, databases, and scientific instruments). How to extract the relevant knowledge from this plethora of possibilities, and then to integrate it so as to appropriately affect the induction process, is perhaps the key issue at this point in inductive learning. Here the focus is on the integration part of this problem; that is, how induction algorithms can, and do, utilize a range of extracted knowledge. Preliminary work on a transformational framework for defining knowledge-intensive inductive algorithms out of relatively knowledge-free algorithms is described, as is a more tentative problem-space framework that attempts to cover all induction algorithms within a single general approach. These frameworks help to organize what is known about current knowledge-intensive induction algorithms, and to point towards new algorithms.
Text Extraction from Scene Images by Character Appearance and Structure Modeling
Yi, Chucai; Tian, Yingli
2012-01-01
In this paper, we propose a novel algorithm to detect text information from natural scene images. Scene text classification and detection are still open research topics. Our proposed algorithm is able to model both character appearance and structure to generate representative and discriminative text descriptors. The contributions of this paper include three aspects: 1) a new character appearance model by a structure correlation algorithm which extracts discriminative appearance features from detected interest points of character samples; 2) a new text descriptor based on structons and correlatons, which model character structure by structure differences among character samples and structure component co-occurrence; and 3) a new text region localization method by combining color decomposition, character contour refinement, and string line alignment to localize character candidates and refine detected text regions. We perform three groups of experiments to evaluate the effectiveness of our proposed algorithm, including text classification, text detection, and character identification. The evaluation results on benchmark datasets demonstrate that our algorithm achieves the state-of-the-art performance on scene text classification and detection, and significantly outperforms the existing algorithms for character identification. PMID:23316111
NASA Astrophysics Data System (ADS)
Lee, Chulsung; Hsu, Dennis J.; Le, Michael H.; Darling, Cynthia L.; Fried, Daniel
2009-02-01
Previous studies have demonstrated that Polarization Sensitive Optical Coherence Tomography (PS-OCT) can be used to image the remineralization of early artificial caries lesions on smooth enamel surfaces of human and bovine teeth. However, most new dental decay is found in the pits and fissures of the occlusal surfaces of posterior dentition, and it is in these high-risk areas where the performance of new caries imaging devices needs to be investigated. The purpose of this study was to demonstrate that PS-OCT can be used to measure the subsequent remineralization of artificial lesions produced in the pits and fissures of extracted 3rd molars. A PS-OCT system operating at 1310-nm was used to acquire polarization resolved images of occlusal surfaces exposed to a demineralizing solution at pH 4.5 followed by a remineralizing solution at pH 7.0 containing 2-ppm fluoride. The integrated reflectivity was calculated to a depth of 200-µm over the entire lesion area using an automated image processing algorithm. Although a well-defined surface zone was clearly resolved in only a few of the samples that underwent remineralization, the PS-OCT measurements indicated a significant (p<0.05) reduction in the integrated reflectivity of the lesions that were exposed to the remineralization solution compared with those that were not. The lesion depth and mineral loss were also measured with polarized light microscopy and transverse microradiography after sectioning the teeth. These results show that PS-OCT can be used to non-destructively monitor the remineralization potential of anti-caries agents in the important pits and fissures of the occlusal surface.
Bejan, Cosmin Adrian; Wei, Wei-Qi; Denny, Joshua C
2015-01-01
Objective To evaluate the contribution of the MEDication Indication (MEDI) resource and SemRep for identifying treatment relations in clinical text. Materials and methods We first processed clinical documents with SemRep to extract the Unified Medical Language System (UMLS) concepts and the treatment relations between them. Then, we incorporated MEDI into a simple algorithm that identifies treatment relations between two concepts if they match a medication-indication pair in this resource. For a better coverage, we expanded MEDI using ontology relationships from RxNorm and UMLS Metathesaurus. We also developed two ensemble methods, which combined the predictions of SemRep and the MEDI algorithm. We evaluated our selected methods on two datasets, a Vanderbilt corpus of 6864 discharge summaries and the 2010 Informatics for Integrating Biology and the Bedside (i2b2)/Veteran's Affairs (VA) challenge dataset. Results The Vanderbilt dataset included 958 manually annotated treatment relations. A double annotation was performed on 25% of relations with high agreement (Cohen's κ = 0.86). The evaluation consisted of comparing the manual annotated relations with the relations identified by SemRep, the MEDI algorithm, and the two ensemble methods. On the first dataset, the best F1-measure results achieved by the MEDI algorithm and the union of the two resources (78.7 and 80, respectively) were significantly higher than the SemRep results (72.3). On the second dataset, the MEDI algorithm achieved better precision and significantly lower recall values than the best system in the i2b2 challenge. The two systems obtained comparable F1-measure values on the subset of i2b2 relations with both arguments in MEDI. Conclusions Both SemRep and MEDI can be used to extract treatment relations from clinical text. Knowledge-based extraction with MEDI outperformed use of SemRep alone, but superior performance was achieved by integrating both systems. The integration of knowledge-based resources such as MEDI into information extraction systems such as SemRep and the i2b2 relation extractors may improve treatment relation extraction from clinical text. PMID:25336593
Fetal ECG extraction via Type-2 adaptive neuro-fuzzy inference systems.
Ahmadieh, Hajar; Asl, Babak Mohammadzadeh
2017-04-01
We proposed a noninvasive method for separating the fetal ECG (FECG) from the maternal ECG (MECG) by using Type-2 adaptive neuro-fuzzy inference systems. The method can extract FECG components from the abdominal signal by using one abdominal channel, including maternal and fetal cardiac signals and other environmental noise signals, and one chest channel. The proposed algorithm detects the nonlinear dynamics of the mother's body, so the components of the MECG are estimated from the abdominal signal. By subtracting the estimated maternal cardiac signal from the abdominal signal, the fetal cardiac signal can be extracted. This algorithm was applied to synthetic ECG signals generated based on the models developed by McSharry et al. and Behar et al. and also to the DaISy real database. In environments with high uncertainty, our method performs better than the Type-1 fuzzy method. Specifically, in the evaluation of the algorithm with the synthetic data based on the McSharry model, for input signals with an SNR of -5dB, the SNR of the extracted FECG was improved by 38.38% in comparison with the Type-1 fuzzy method. Also, the results show that increasing the uncertainty or decreasing the input SNR leads to an increase in the percentage improvement in SNR of the extracted FECG. For instance, when the SNR of the input signal decreases to -30dB, our proposed algorithm improves the SNR of the extracted FECG by 71.06% with respect to the Type-1 fuzzy method. The same results were obtained on synthetic data based on the Behar model. Our results on the real database reflect the success of the proposed method in separating the maternal and fetal heart signals even if their waves overlap in time. Moreover, the proposed algorithm was applied to simulated fetal ECG with ectopic beats and achieved good results in separating the FECG from the MECG. The results show the superiority of the proposed Type-2 neuro-fuzzy inference method over the Type-1 neuro-fuzzy inference and the polynomial networks methods, which is due to its capability to capture the nonlinearities of the model better. Copyright © 2017 Elsevier B.V. All rights reserved.
A practical deconvolution algorithm in multi-fiber spectra extraction
NASA Astrophysics Data System (ADS)
Zhang, Haotong; Li, Guangwei; Bai, Zhongrui
2015-08-01
The deconvolution algorithm is a very promising method in multi-fiber spectroscopy data reduction: it can extract spectra to the photon noise level as well as improve the spectral resolution, but, as mentioned in Bolton & Schlegel (2010), it is limited by its huge computation requirement and thus cannot be implemented directly in actual data reduction. We develop a practical algorithm to solve the computation problem. The new algorithm can deconvolve a 2D fiber spectral image of any size with actual PSFs, which may vary with position. We further consider the influence of noise, which is an intrinsic ill-posed aspect of deconvolution algorithms. We modify our method with a Tikhonov regularization term to suppress the noise induced by the method. A series of simulations based on LAMOST data are carried out to test our method under realistic conditions with Poisson noise and extreme cross talk, i.e., where the fiber-to-fiber distance is comparable to the FWHM of the fiber profile. Compared with the results of traditional extraction methods, i.e., the Aperture Extraction Method and the Profile Fitting Method, our method shows both higher S/N and higher spectral resolution. The computation time for a noise-added image with 250 fibers and 4k pixels in the wavelength direction is about 2 hours when the fiber cross talk is not extreme and 3.5 hours for extreme fiber cross talk. We finally apply our method to real LAMOST data. We find that the 1D spectra extracted by our method have both higher SNR and resolution than those from the traditional methods, but there are still some suspicious weak features, possibly caused by the noise sensitivity of the method, around the strong emission lines. How to further attenuate the noise influence will be the topic of our future work. As we have demonstrated, multi-fiber spectra extracted by our method have higher resolution and signal-to-noise ratio and thus provide more accurate information (such as higher radial velocity and metallicity measurement accuracy in stellar physics) to astronomers than traditional methods.
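A toy sketch of Tikhonov-regularized deconvolution extraction for a single CCD column: the cross-dispersion profile is modeled as A f, where the columns of A are the fiber profiles, and the fluxes f are recovered from a regularized normal equation. The profile width, fiber pitch, and regularization strength are illustrative values only, not LAMOST's:

    import numpy as np

    n_pix, n_fibers, pitch, sigma = 500, 50, 10.0, 4.0   # severe cross talk: pitch ~ FWHM
    pix = np.arange(n_pix)
    centers = pitch * (np.arange(n_fibers) + 1)
    A = np.exp(-0.5 * ((pix[:, None] - centers[None, :]) / sigma) ** 2)
    A /= A.sum(axis=0)                                    # normalize each fiber profile

    flux_true = np.random.uniform(50, 150, n_fibers)
    data = np.random.poisson(A @ flux_true * 10) / 10.0   # noisy observed column

    lam = 1e-2                                            # Tikhonov regularization strength
    lhs = A.T @ A + lam * np.eye(n_fibers)                # (A^T A + lam I) f = A^T b
    flux_hat = np.linalg.solve(lhs, A.T @ data)
    print(np.abs(flux_hat - flux_true).mean())            # mean extraction error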
Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness
NASA Astrophysics Data System (ADS)
Feng, Albert
2002-05-01
Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]
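A sketch of the frequency-domain minimum-variance idea for a two-microphone (binaural) array: in each frequency bin, steer to the known target direction and minimize output power. The geometry and the noise covariance below are synthetic, and the weight formula is the textbook minimum-variance (MVDR) solution rather than Feng's exact implementation:

    import numpy as np

    fs, n_fft, mic_dist, c = 16000, 256, 0.18, 343.0   # sample rate, FFT size, mic spacing (m), speed of sound
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)

    def mvdr_weights(R, f, target_angle_rad):
        # steering vector of a 2-mic array for a plane wave from the target direction
        tau = mic_dist * np.sin(target_angle_rad) / c
        d = np.array([1.0, np.exp(-2j * np.pi * f * tau)])
        R_inv_d = np.linalg.solve(R + 1e-6 * np.eye(2), d)
        return R_inv_d / (d.conj() @ R_inv_d)          # w = R^-1 d / (d^H R^-1 d)

    # per-bin weights against a stand-in noise covariance estimate
    R = np.eye(2) + 0.1 * np.ones((2, 2))
    for f in freqs[1:4]:
        w = mvdr_weights(R, f, target_angle_rad=0.0)
        print(f, np.round(w, 3))                        # bin output would be y = w.conj() @ x_bin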
Study of Fission Barrier Heights of Uranium Isotopes by the Macroscopic-Microscopic Method
NASA Astrophysics Data System (ADS)
Zhong, Chun-Lai; Fan, Tie-Shuan
2014-09-01
Potential energy surfaces of uranium nuclei in the range of mass numbers 229 through 244 are investigated in the framework of the macroscopic-microscopic model and the heights of static fission barriers are obtained in terms of a double-humped structure. The macroscopic part of the nuclear energy is calculated according to the Lublin-Strasbourg drop (LSD) model. Shell and pairing corrections as the microscopic part are calculated with a folded-Yukawa single-particle potential. The calculation is carried out in a five-dimensional parameter space of the generalized Lawrence shapes. In order to extract saddle points on the potential energy surface, a new algorithm which can effectively find an optimal fission path leading from the ground state to the scission point is developed. The comparison of our results with available experimental data and others' theoretical results confirms the reliability of our calculations.
Surface enhanced Raman spectroscopy for urinary tract infection diagnosis and antibiogram
NASA Astrophysics Data System (ADS)
Kastanos, Evdokia; Hadjigeorgiou, Katerina; Kyriakides, Alexandros; Pitris, Constantinos
2010-02-01
Urinary tract infection diagnosis and antibiogram require a minimum of 48 hours using standard laboratory practice. This long waiting period contributes to an increase in recurrent infections, rising health care costs, and a growing number of bacterial strains developing resistance to antibiotics. In this work, Surface Enhanced Raman Spectroscopy (SERS) was used as a novel method for classifying bacteria and determining their antibiogram. Five species of bacteria were classified with > 90% accuracy using their SERS spectra and a classification algorithm involving novel feature extraction and discriminant analysis. Antibiotic resistance or sensitivity was determined after just a two-hour exposure of bacteria to ciprofloxacin (sensitive) and amoxicillin (resistant) and analysis of their SERS spectra. These results can become the basis for the development of a novel method that would provide same day diagnosis and selection of the most appropriate antibiotic for most effective treatment of a urinary tract infection.
Measurement of Vibrated Bulk Density of Coke Particle Blends Using Image Texture Analysis
NASA Astrophysics Data System (ADS)
Azari, Kamran; Bogoya-Forero, Wilinthon; Duchesne, Carl; Tessier, Jayson
2017-09-01
A rapid and nondestructive machine vision sensor was developed for predicting the vibrated bulk density (VBD) of petroleum coke particles based on image texture analysis. It could be used for making corrective adjustments to a paste plant operation to reduce green anode variability (e.g., changes in binder demand). Wavelet texture analysis (WTA) and gray level co-occurrence matrix (GLCM) algorithms were used jointly for extracting the surface textural features of coke aggregates from images. These were correlated with the VBD using partial least-squares (PLS) regression. Coke samples of several sizes and from different sources were used to test the sensor. Variations in the coke surface texture introduced by coke size and source allowed for making good predictions of the VBD of individual coke samples and mixtures of them (blends involving two sources and different sizes). Promising results were also obtained for coke blends collected from an industrial-baked carbon anode manufacturer.
Geometry modeling and grid generation using 3D NURBS control volume
NASA Technical Reports Server (NTRS)
Yu, Tzu-Yi; Soni, Bharat K.; Shih, Ming-Hsin
1995-01-01
The algorithms for volume grid generation using NURBS geometric representation are presented. The parameterization algorithm is enhanced to yield a desired physical distribution on the curve, surface and volume. This approach bridges the gap between CAD surface/volume definition and surface/volume grid generation. Computational examples associated with practical configurations have shown the utilization of these algorithms.
Historical feature pattern extraction based network attack situation sensing algorithm.
Zeng, Yong; Liu, Dacheng; Lei, Zhou
2014-01-01
The situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain, and whose principles are difficult for traditional algorithms to recognize and describe. To address these problems, estimating the parameters of very long situation sequences is essential, but very difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between each observed indication and its subsequent effect. Then it calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. Meanwhile, the HFPE method provides an evolution algorithm to derive the prediction deviation in terms of both pattern and accuracy. This algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as far as possible, does not need data preprocessing, and can continuously track and adapt to variations in the situation sequence.
Crosswalk navigation for people with visual impairments on a wearable device
NASA Astrophysics Data System (ADS)
Cheng, Ruiqi; Wang, Kaiwei; Yang, Kailun; Long, Ningbo; Hu, Weijian; Chen, Hao; Bai, Jian; Liu, Dong
2017-09-01
Detecting crosswalks at urban intersections and reminding users of them is one of the most important demands of people with visual impairments. A real-time crosswalk detection algorithm, adaptive extraction and consistency analysis (AECA), is proposed. Compared with existing algorithms, which detect crosswalks in ideal scenarios, the AECA algorithm performs better in challenging scenarios, such as crosswalks at far distances, low-contrast crosswalks, pedestrian occlusion, various illuminances, and the limited resources of portable PCs. Bright stripes of crosswalks are extracted by adaptive thresholding and are gathered to form crosswalks by consistency analysis. On the testing dataset, the proposed algorithm achieves a precision of 84.6% and a recall of 60.1%, which are higher than those of the bipolarity-based algorithm. The position and orientation of crosswalks are conveyed to users by voice prompts so that they can align themselves with crosswalks and walk along them. Field tests carried out in various practical scenarios prove the effectiveness and reliability of the proposed navigation approach.
Automated spike sorting algorithm based on Laplacian eigenmaps and k-means clustering.
Chah, E; Hok, V; Della-Chiesa, A; Miller, J J H; O'Mara, S M; Reilly, R B
2011-02-01
This study presents a new automatic spike sorting method based on feature extraction by Laplacian eigenmaps combined with k-means clustering. The performance of the proposed method was compared against previously reported algorithms such as principal component analysis (PCA) and amplitude-based feature extraction. Two types of classifier (namely k-means and classification expectation-maximization) were incorporated within the spike sorting algorithms in order to find a suitable classifier for the feature sets. Simulated data sets and in-vivo tetrode multichannel recordings were employed to assess the performance of the spike sorting algorithms. The results show that the proposed algorithm yields significantly improved performance, with a mean sorting accuracy of 73% and a sorting error of 10%, compared to PCA, which combined with k-means had a sorting accuracy of 58% and a sorting error of 10%. A correction was made to this article on 22 February 2011: the spacing of the title was amended on the abstract page; no changes were made to the article PDF and the print version was unaffected.
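A minimal sketch of the proposed pipeline on synthetic spike waveforms, using scikit-learn's SpectralEmbedding (an implementation of Laplacian eigenmaps) followed by k-means; the waveform simulation and all parameters are illustrative:

    import numpy as np
    from sklearn.manifold import SpectralEmbedding
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 48)
    templates = [np.exp(-((t - 0.30) / 0.05) ** 2),          # two distinct spike shapes
                 np.exp(-((t - 0.45) / 0.08) ** 2) * 0.8]
    spikes = np.vstack([tpl + 0.05 * rng.standard_normal((200, t.size))
                        for tpl in templates])               # 400 noisy waveforms

    # Laplacian-eigenmaps feature extraction, then k-means clustering
    features = SpectralEmbedding(n_components=3, n_neighbors=15).fit_transform(spikes)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    print(np.bincount(labels))                               # cluster sizes, ideally ~200 each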