Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.
Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun
2016-06-17
Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. State-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data. PMID:27322279
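The abstract's dynamic window median filter lends itself to a compact illustration. Below is a minimal Python sketch of smoothing scan-line intensities with a per-point window; the rule for sizing the window (widening it where point spacing is small so the filter spans a roughly constant ground distance) is an assumption, since the abstract does not spell it out.

```python
# Minimal sketch of a dynamic-window median filter on one scan line.
# The window-sizing rule below is an assumption, not the paper's formula.
import numpy as np

def dynamic_median_filter(intensity, spacing, base_win=5, ref_spacing=0.05):
    """Median-filter a 1D scan line of intensities with a per-point window.

    intensity : (n,) intensity values ordered along the scan line
    spacing   : (n,) distance to the previous point (metres)
    base_win  : half-window size at the reference point spacing
    """
    n = len(intensity)
    smoothed = np.empty(n)
    for i in range(n):
        # Assumed rule: shrink the window as local point spacing grows,
        # so the filter covers a roughly constant ground distance.
        half = max(1, int(round(base_win * ref_spacing / max(spacing[i], 1e-6))))
        lo, hi = max(0, i - half), min(n, i + half + 1)
        smoothed[i] = np.median(intensity[lo:hi])
    return smoothed
```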
Joint classification and contour extraction of large 3D point clouds
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2017-08-01
We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with more than 10⁹ points.
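As a rough illustration of the multi-scale neighborhood idea credited here for both expressiveness and efficiency, the following sketch computes covariance-based shape features (linearity, planarity, scatter) per point at several radii. The radii and the specific feature set are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of per-point multi-scale covariance features.
# Radii and feature choices are assumptions for illustration only.
import numpy as np
from scipy.spatial import cKDTree

def multiscale_features(points, radii=(0.25, 0.5, 1.0, 2.0)):
    """points: (n, 3) array; returns (n, 3 * len(radii)) feature matrix."""
    tree = cKDTree(points)
    feats = []
    for r in radii:
        per_scale = []
        for p in points:
            nbrs = points[tree.query_ball_point(p, r)]
            if len(nbrs) < 3:
                per_scale.append((0.0, 0.0, 0.0))
                continue
            # Eigenvalues of the local covariance describe how linear,
            # planar or scattered the neighborhood is.
            ev = np.linalg.eigvalsh(np.cov(nbrs.T))[::-1]
            ev = np.clip(ev, 1e-12, None)
            l1, l2, l3 = ev
            per_scale.append(((l1 - l2) / l1,   # linearity
                              (l2 - l3) / l1,   # planarity
                              l3 / l1))         # scatter
        feats.append(np.asarray(per_scale))
    return np.hstack(feats)
```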
Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction
NASA Astrophysics Data System (ADS)
Zang, Y.; Yang, B.
2018-04-01
3D laser technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. Most existing methods extract important points based on a fixed scale; however, the geometric features of a 3D object arise at various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the perceptual metric Just-Noticeable-Difference to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction of objects.
Hierarchical extraction of urban objects from mobile laser scanning data
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia
2015-01-01
Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.
Csf Based Non-Ground Points Extraction from LIDAR Data
NASA Astrophysics Data System (ADS)
Shen, A.; Zhang, W.; Shi, H.
2017-09-01
Region growing is a classical method of point cloud segmentation. Based on the idea of collecting pixels with similar properties to form regions, region growing is widely used in many fields such as medicine, forestry and remote sensing. The algorithm has two core problems: the selection of seed points and the setting of the growth constraints, of which the selection of seed points is the foundation. In this paper, we propose a CSF (Cloth Simulation Filtering) based method to extract non-ground seed points effectively. The experiments have shown that this method can obtain a reliable group of seed points compared with traditional methods. It is a new attempt at extracting seed points.
LIDAR Point Cloud Data Extraction and Establishment of 3D Modeling of Buildings
NASA Astrophysics Data System (ADS)
Zhang, Yujuan; Li, Xiuhai; Wang, Qiang; Liu, Jiang; Liang, Xin; Li, Dan; Ni, Chundi; Liu, Yan
2018-01-01
This paper applies Shepard's method to the original LiDAR point cloud data to generate a regular-grid DSM, filters the ground and non-ground point clouds with a double least squares method, and obtains a regularized DSM. The regularized DSM is segmented by a region growing method to remove non-building point clouds and obtain the building point cloud information. The Canny operator is then used to extract the edges of the segmented buildings, and Hough transform line detection is used to extract regular, smooth and uniform building edges. Finally, the E3De3 software is used to establish the 3D model of the buildings.
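A minimal sketch of the first step, resampling irregular LiDAR returns onto a regular DSM grid with Shepard's inverse-distance weighting, is given below. The cell size, search radius and distance power are placeholder assumptions, not values from the paper.

```python
# Minimal sketch of Shepard's inverse-distance weighting (IDW) to build a
# regular-grid DSM from scattered LiDAR returns. Parameters are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def idw_dsm(xyz, cell=1.0, radius=3.0, power=2.0):
    """xyz: (n, 3) LiDAR points; returns a 2D DSM array (NaN where empty)."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    xi = np.arange(x.min(), x.max(), cell)
    yi = np.arange(y.min(), y.max(), cell)
    tree = cKDTree(np.c_[x, y])
    dsm = np.full((len(yi), len(xi)), np.nan)
    for r, gy in enumerate(yi):
        for c, gx in enumerate(xi):
            idx = tree.query_ball_point([gx, gy], radius)
            if not idx:
                continue
            d = np.hypot(x[idx] - gx, y[idx] - gy)
            w = 1.0 / np.maximum(d, 1e-6) ** power   # Shepard weights
            dsm[r, c] = np.sum(w * z[idx]) / np.sum(w)
    return dsm
```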
An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Y.; Hu, X.; Guan, H.; Liu, P.
2016-06-01
Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows, and elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time in road extraction from LiDAR data.
Automatic drawing for traffic marking with MMS LIDAR intensity
NASA Astrophysics Data System (ADS)
Takahashi, G.; Takeda, H.; Shimano, Y.
2014-05-01
Upgrading the database of CYBER JAPAN has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms a framework in this database, so road inventory mapping has to be accurate and free of the variation caused by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed requires an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing for mapping traffic markings. However, this method is not sufficiently productive, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings using MMS LIDAR data. The key idea is extracting lines with a Hough transform strategically focused on changes in local reflection intensity along scan lines; note that this method processes every traffic marking. In this paper, we discuss a highly accurate and operator-independent method that applies the following steps: (1) binarizing LIDAR points by intensity and extracting higher-intensity points; (2) generating a Triangulated Irregular Network (TIN) from the higher-intensity points; (3) deleting arcs by length and generating outline polygons on the TIN; (4) generating buffers from the outline polygons; (5) extracting points within the buffers from the original LIDAR points; (6) extracting local-intensity-changing points along scan lines from the extracted points; (7) extracting lines from the intensity-changing points through a Hough transform; and (8) connecting lines to generate automated traffic marking mapping data.
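Step (7) above, recovering straight marking edges from the intensity-changing points, can be illustrated with a standard rho-theta Hough transform. The sketch below is a generic implementation under assumed bin sizes and vote threshold, not the paper's tuned configuration.

```python
# Minimal sketch of a rho-theta Hough transform over 2D candidate points.
# Bin resolutions and the vote threshold are illustrative assumptions.
import numpy as np

def hough_lines(pts, rho_res=0.05, theta_res=np.deg2rad(1.0), min_votes=30):
    """pts: (n, 2) intensity-changing points; returns (rho, theta, votes)."""
    thetas = np.arange(0.0, np.pi, theta_res)
    rho_max = np.hypot(*np.ptp(pts, axis=0))
    rhos = np.arange(-rho_max, rho_max, rho_res)
    acc = np.zeros((len(rhos), len(thetas)), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in pts:
        r = x * cos_t + y * sin_t            # rho for every theta
        ri = np.searchsorted(rhos, r)
        ok = (ri > 0) & (ri < len(rhos))
        np.add.at(acc, (ri[ok], np.nonzero(ok)[0]), 1)
    peaks = np.argwhere(acc >= min_votes)
    return [(rhos[i], thetas[j], acc[i, j]) for i, j in peaks]
```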
Applications of 3D-EDGE Detection for ALS Point Cloud
NASA Astrophysics Data System (ADS)
Ni, H.; Lin, X. G.; Zhang, J. X.
2017-09-01
Edge detection has been one of the major issues in the fields of remote sensing and photogrammetry. With the fast development of laser scanning sensor technology, dense point clouds have become increasingly common. Precise 3D edges can be detected from these point clouds, and a great number of edge or feature line extraction methods have been proposed. Among these methods is an easy-to-use 3D edge detection method, AGPN (Analyzing Geometric Properties of Neighborhoods), which detects edges based on the analysis of the geometric properties of a query point's neighbourhood. The AGPN method detects two kinds of 3D edges, boundary elements and fold edges, and it has many applications. This paper presents three applications of AGPN: 3D line segment extraction, ground point filtering, and ground breakline extraction. Experiments show that the AGPN method gives a straightforward solution to these applications.
A method for automatic feature points extraction of human vertebrae three-dimensional model
NASA Astrophysics Data System (ADS)
Wu, Zhen; Wu, Junsheng
2017-05-01
A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.
Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud
NASA Astrophysics Data System (ADS)
Chen, Jianqin; Zhu, Hehua; Li, Xiaojun
2016-10-01
This paper presents a new method for automatically extracting discontinuity orientation from rock mass surface 3D point clouds. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity plane. The method is first validated on the point cloud of a small piece of a rock slope acquired by photogrammetry, with the extracted discontinuity orientations compared against ones measured in the field. It is then applied to publicly available LiDAR data of a road cut rock slope from the Rockbench repository, and the extracted discontinuity orientations are compared with those of the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable, of high accuracy, and able to meet engineering needs.
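Step (3), RANSAC plane fitting followed by conversion of the plane normal to a discontinuity orientation, can be sketched as follows. The iteration count and inlier tolerance are assumptions; the dip/dip-direction conversion uses the usual geological convention with +y as north and +x as east.

```python
# Minimal sketch of RANSAC plane fitting on a discontinuity patch, then
# converting the plane normal to dip / dip direction. Parameters assumed.
import numpy as np

def ransac_plane(pts, n_iter=500, tol=0.02, rng=np.random.default_rng(0)):
    """pts: (n, 3) points of one discontinuity; returns (normal, inlier mask)."""
    best_inliers, best_n = None, None
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:          # degenerate sample, skip
            continue
        n = n / np.linalg.norm(n)
        d = np.abs((pts - p0) @ n)             # point-to-plane distances
        inliers = d < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_n = inliers, n
    return best_n, best_inliers

def dip_and_direction(normal):
    """Upward-pointing normal -> (dip angle, dip direction) in degrees."""
    nx, ny, nz = normal if normal[2] >= 0 else -normal
    dip = np.degrees(np.arccos(nz))            # angle from horizontal
    dip_dir = (np.degrees(np.arctan2(nx, ny)) + 360) % 360
    return dip, dip_dir
```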
Research on Methods of High Coherent Target Extraction in Urban Area Based on Psinsar Technology
NASA Astrophysics Data System (ADS)
Li, N.; Wu, J.
2018-04-01
PSInSAR technology has been widely applied in ground deformation monitoring. Accurate identification of Persistent Scatterers (PS) is key to the success of PSInSAR data processing. In this paper, the theoretical models and specific algorithms of PS point extraction methods are summarized, and the characteristics and applicable conditions of each method, such as the Coherence Coefficient Threshold method, Amplitude Threshold method, Dispersion of Amplitude method, and Dispersion of Intensity method, are analyzed. Based on the merits and demerits of the different methods, an improved method for PS point extraction in urban areas is proposed that simultaneously uses backscattering characteristics and amplitude and phase stability to find PS points among all pixels. Shanghai is chosen as an example area for checking the improvements of the new method. The results show that the PS points extracted by the new method have high quality and high stability and meet the strong scattering characteristics. Based on these high quality PS points, the deformation rate along the line-of-sight (LOS) in the central urban area of Shanghai is obtained using 35 COSMO-SkyMed X-band SAR images acquired from 2008 to 2010; it varies from -14.6 mm/year to 4.9 mm/year. There is a large sedimentation funnel at the boundary of the Hongkou and Yangpu districts with a maximum sedimentation rate of more than 14 mm per year. The obtained ground subsidence rates are also compared with the results of spirit leveling and show good consistency. Our new method for PS point extraction is more reasonable and can improve the accuracy of the obtained deformation results.
Galhiane, Mário S; Rissato, Sandra R; Chierice, Gilberto O; Almeida, Marcos V; Silva, Letícia C
2006-09-15
This work was developed using a sylvestral fruit tree native to the Brazilian forest, Eugenia uniflora L., of the Myrtaceae family. The main goal of the analytical study was focused on the extraction methods themselves. The method development pointed to Clevenger extraction as giving the best yield in relation to SFE and Soxhlet. The SFE method presented a good yield but showed a large number of components in the final extract, demonstrating low selectivity. The extracted essential oil was analyzed by GC/FID, showing compounds over a large range of polarity and boiling point, among which linalool, a widely used compound, was identified. Furthermore, an analytical solid phase extraction method was used to clean up the extract and obtain separated classes of compounds that were fractionated and studied by GC/FID and GC/MS.
The algorithm of fast image stitching based on multi-feature extraction
NASA Astrophysics Data System (ADS)
Yang, Chunde; Wu, Ge; Shi, Jing
2018-05-01
This paper proposes an improved image registration method combining Hu invariant moment contour information and feature point detection, aiming to solve problems of the traditional image stitching algorithm such as a time-consuming feature point extraction process, overload of redundant invalid information, and inefficiency. First, the neighborhood of pixels is used to extract contour information, employing the Hu invariant moments as a similarity measure to extract SIFT feature points in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve initial matching efficiency and reduce mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to solve uneven exposure, and an improved multiresolution fusion algorithm is used to fuse the mosaic images and realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
GPU surface extraction using the closest point embedding
NASA Astrophysics Data System (ADS)
Kim, Mark; Hansen, Charles
2015-01-01
Isosurface extraction is a fundamental technique used for both surface reconstruction and mesh generation. One method to extract well-formed isosurfaces is a particle system; unfortunately, particle systems can be slow. In this paper, we introduce an enhanced parallel particle system that uses the closest point embedding as the surface representation to speed up the particle system for isosurface extraction. The closest point embedding is used in the Closest Point Method (CPM), a technique that uses a standard three-dimensional numerical PDE solver on two-dimensional embedded surfaces. To fully take advantage of the closest point embedding, it is coupled with a Barnes-Hut tree code on the GPU. This new technique produces well-formed, conformal unstructured triangular and tetrahedral meshes from labeled multi-material volume datasets. Further, this new parallel implementation of the particle system is faster than any known method for conformal multi-material mesh extraction. The resulting speed-ups can reduce the time from labeled data to mesh from hours to minutes, benefiting users, such as bioengineers, who employ triangular and tetrahedral meshes.
NASA Astrophysics Data System (ADS)
Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia
2018-05-01
Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.
Exact extraction method for road rutting laser lines
NASA Astrophysics Data System (ADS)
Hong, Zhiming
2018-02-01
This paper analyzes the importance of asphalt pavement rutting detection for pavement maintenance and administration in today's society, presents the shortcomings of existing rutting detection methods, and proposes a new rutting line-laser extraction method based on peak intensity characteristics and peak continuity. The peak intensity characteristic is enhanced by a designed transverse mean filter, and an intensity map of the peak characteristic, based on peak intensity calculation for the whole road image, is obtained to determine the seed point of the rutting laser line. Taking the seed point as the starting point, the light points of the rutting line-laser are extracted based on peak continuity, which provides exact basic data for the subsequent calculation of pavement rutting depths.
A method of PSF generation for 3D brightfield deconvolution.
Tadrous, P J
2010-02-01
This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.
A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.
Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun
2016-07-19
Existing automatic building extraction methods are not effective in extracting buildings that are small in size or have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation, and the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes, with transparent or opaque roofs, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees; this stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, using comparatively few empirically set parameters. The performance of the proposed GBE method is evaluated on two benchmark data sets using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics. PMID:27447631
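The gradient cue that GBE relies on, near-constant height change along a roof slope versus erratic change in trees, can be illustrated with a small sketch on a height image. The window size and variance threshold below are illustrative assumptions, not the paper's automatically derived parameters.

```python
# Minimal sketch of the roof-vs-tree gradient cue: roof planes have a
# near-constant gradient magnitude, trees an erratic one. Thresholds assumed.
import numpy as np

def constant_slope_mask(height_img, win=5, var_thresh=0.05):
    """height_img: 2D array of LiDAR heights; returns a roof-like mask."""
    gy, gx = np.gradient(height_img)
    mag = np.hypot(gx, gy)                 # local slope magnitude
    h, w = mag.shape
    out = np.zeros_like(mag, dtype=bool)
    r = win // 2
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = mag[i - r:i + r + 1, j - r:j + r + 1]
            out[i, j] = patch.var() < var_thresh   # smooth slope => roof-like
    return out
```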
Automatic Extraction of Road Markings from Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.
2017-09-01
Road markings, as critical features in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain the 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method, whose basic assumption is that the road surface is smooth: points with a small elevation difference to their neighborhood are considered ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a certain threshold, which is inversely proportional to the laser distance. The separated points are used as seed points for region growing based on intensity so as to obtain complete road markings. We use a point cloud template-matching method to refine the road marking candidates by removing noise clusters with a low correlation coefficient. In an experiment with an MLS point set of about 2 kilometres in a city center, our method provides a promising solution to road markings extraction from MLS data.
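The seed-then-grow stage described above can be sketched as follows: seeds are points whose intensity exceeds a range-dependent threshold, and regions grow through nearby points of similar intensity. The linear threshold model and the growth tolerances are assumptions made for illustration.

```python
# Minimal sketch of range-dependent intensity seeding plus region growing.
# The threshold model (k0 + k1 * range) and tolerances are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def grow_markings(xyz, intensity, laser_range, k0=180.0, k1=-20.0,
                  grow_radius=0.15, tol=12.0):
    """xyz: (n, 3) ground points; returns a boolean marking mask."""
    # Assumed linear threshold, decreasing with laser range as the
    # abstract indicates ("inversely proportional to the laser distance").
    seeds = np.nonzero(intensity > k0 + k1 * laser_range)[0]
    tree = cKDTree(xyz)
    labels = np.zeros(len(xyz), dtype=bool)
    stack = list(seeds)
    while stack:
        i = stack.pop()
        if labels[i]:
            continue
        labels[i] = True
        # Grow through nearby points of similar intensity.
        for j in tree.query_ball_point(xyz[i], grow_radius):
            if not labels[j] and abs(intensity[j] - intensity[i]) < tol:
                stack.append(j)
    return labels
```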
Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area
NASA Astrophysics Data System (ADS)
Min, Li; Xin, Yang; Liyang, Xiong
2016-06-01
The shoulder line is a significant line in the hilly areas of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, there is an imperative need for shoulder line extraction. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter to remove most noisy points; (ii) based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method; (iii) the common boundary between the two slope classes is extracted as the shoulder line candidate; (iv) the filter grid size is adjusted and steps i-iii repeated until the shoulder line candidate matches its real location; (v) the shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County of Shaanxi Province, China. A total of 600 million points were acquired in the 0.23 km² test area using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for optimizing the filter grid size. The experimental results show that the optimal filter grid size varies across sample areas, and that a power function relation exists between filter grid size and point density. The optimal grid size was determined by the above relation and the shoulder lines of all 60 blocks were then extracted. Compared with manual interpretation results, the accuracy of the whole result reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high accuracy DEM generation.
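Step (i), the grid filter used to select ground points, is commonly realized by keeping the lowest return per grid cell; a minimal sketch under that assumption follows. The grid size is exactly the parameter the paper tunes against point density, so the value below is only a starting guess.

```python
# Minimal sketch of a grid filter keeping the lowest return per cell as a
# ground candidate. The cell size is the parameter the paper optimizes.
import numpy as np

def grid_lowest_points(xyz, cell=0.5):
    """xyz: (n, 3) points; returns the lowest point in each occupied cell."""
    ij = np.floor((xyz[:, :2] - xyz[:, :2].min(axis=0)) / cell).astype(int)
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]   # one key per cell
    order = np.lexsort((xyz[:, 2], keys))               # by cell, then height
    keys_sorted = keys[order]
    first = np.r_[True, keys_sorted[1:] != keys_sorted[:-1]]
    return xyz[order[first]]                            # lowest point per cell
```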
NASA Astrophysics Data System (ADS)
Sun, Z.; Xu, Y.; Hoegner, L.; Stilla, U.
2018-05-01
In this work, we propose a classification method designed for the labeling of MLS point clouds, with detrended geometric features extracted from the points of the supervoxel-based local context. To analyze complex 3D urban scenes, acquired points of the scene should be tagged with individual labels of different classes; thus, assigning a unique label to the points of an object that belong to the same category plays an essential role in the entire 3D scene analysis workflow. Although plenty of studies in this field have been reported, this remains a challenging task. Specifically, in this work: 1) a novel geometric feature extraction method, detrending the redundant and non-salient information in the local context, is proposed and proved to be effective for extracting local geometric features from the 3D scene; 2) instead of using an individual point as the basic element, the supervoxel-based local context is designed to encapsulate the geometric characteristics of points, providing a flexible and robust solution for feature extraction; 3) experiments using a complex urban scene with manually labeled ground truth are conducted, and the performance of the proposed method is analyzed with respect to different methods. With the testing dataset, we obtained an overall accuracy of 0.92 for assigning eight semantic classes.
NASA Astrophysics Data System (ADS)
Cong, Chao; Liu, Dingsheng; Zhao, Lingjun
2008-12-01
This paper discusses a new method for the automatic matching of ground control points (GCPs) between satellite remote sensing images and digital raster graphics (DRGs) in urban areas. The key of this method is to automatically extract tie point pairs according to geographic characteristics from such heterogeneous images. Since there are big differences between such heterogeneous images with respect to texture and corner features, more detailed analyses are performed to find similarities and differences between high-resolution remote sensing images and DRGs. Furthermore, a new algorithm based on the fuzzy c-means (FCM) method is proposed to extract linear features in remote sensing images. Based on these linear features, crossings and corners extracted from them are chosen as GCPs. A similar method is used to find the same features in DRGs. Finally, the Hausdorff distance is adopted to pick matching GCPs from the above two GCP groups. Experiments have shown that the method can extract GCPs from such images with a reasonable RMS error.
Zhu, Hai-Zhen; Liu, Wei; Mao, Jian-Wei; Yang, Ming-Min
2008-04-28
4-Amino-4'-nitrobiphenyl, which is formed by the catalytic effect of trichlorfon on the oxidation of benzidine by sodium perborate, is extracted with a cloud point extraction method and then detected using high performance liquid chromatography with ultraviolet detection (HPLC-UV). Under the optimum experimental conditions, there was a linear relationship between trichlorfon concentration in the range of 0.01-0.2 mg L⁻¹ and the peak areas of 4-amino-4'-nitrobiphenyl (r = 0.996). The limit of detection was 2.0 μg L⁻¹, and recoveries from spiked water and cabbage samples ranged between 95.4-103% and 85.2-91.2%, respectively. It was proved that the cloud point extraction (CPE) method was simpler, cheaper, and more environmentally friendly than extraction with organic solvents and had a more effective extraction yield.
Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information
NASA Astrophysics Data System (ADS)
Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.
2015-10-01
The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only geometrical data to discriminate terrain points from non-terrain points. Point cloud filtering can be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some Terrestrial Laser Scanners (TLS) for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes is extracted for spectral analysis. Coloured points that fall within the corresponding preset spectral threshold are identified as points of that specific feature class. This terrain extraction process is implemented using developed Matlab code. Results demonstrate that a passive image of higher spectral resolution is required to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information is capable of being used as a parameter for terrain extraction.
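The core spectral test, keeping coloured points whose RGB values fall inside a preset per-class threshold box, can be written in a few lines. The study's processing was done in Matlab; the Python sketch below with placeholder thresholds only illustrates the idea.

```python
# Minimal sketch of per-class RGB threshold filtering of coloured points.
# The threshold box values are placeholders, not the study's calibrated ones.
import numpy as np

def spectral_filter(points_rgb, low=(60, 40, 20), high=(140, 110, 80)):
    """points_rgb: (n, 6) array of x, y, z, r, g, b; returns matching points."""
    rgb = points_rgb[:, 3:6]
    # A point matches the class if every channel lies inside the box.
    mask = np.all((rgb >= low) & (rgb <= high), axis=1)
    return points_rgb[mask]
```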
Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System
NASA Astrophysics Data System (ADS)
Chan, T. O.; Lichti, D. D.; Belton, D.
2013-10-01
At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require lower sensor size/weight and cost. For high accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, the calibrations are complicated by the Velodyne LiDAR's narrow vertical field of view and the very highly time-variant nature of its measurements. In this paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method, which is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method utilizes the Velodyne point cloud's slice-like nature and first decomposes the point clouds into 2D layers. The layers are then treated as 2D images and processed with the Generalized Hough Transform, which extracts the points distributed in circular patterns from the point cloud layers. Subsequently, the vertical cylindrical features can be readily extracted from the whole point clouds based on the previously extracted points. The points are passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model in such a way that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automatic, allowing end users to obtain the time-variant additional parameters instantly and frequently whenever vertical cylindrical features are present in a scene. The methods were verified with two different real datasets, and the results suggest that up to 78.43% accuracy improvement for the HDL-32E can be achieved using the proposed calibration method.
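The layer-wise circle extraction at the heart of the segmentation can be sketched as a Hough-style vote for circle centres within one horizontal slice, assuming a known cylinder radius (the paper's Generalized Hough Transform is more general). Slice handling, radius, bin size and vote threshold below are all illustrative assumptions.

```python
# Minimal sketch of circle-centre voting in one 2D point cloud layer,
# assuming a known pole radius. All parameters are assumptions.
import numpy as np

def circle_centres_in_slice(slice_xy, radius=0.15, bin_size=0.05, min_votes=20):
    """slice_xy: (n, 2) points of one horizontal layer; returns centre estimates."""
    angles = np.linspace(0.0, 2 * np.pi, 36, endpoint=False)
    offsets = radius * np.stack([np.cos(angles), np.sin(angles)], axis=-1)
    # Every point votes for all candidate centres lying at distance
    # `radius` from it; true centres accumulate many votes.
    cand = (slice_xy[:, None, :] + offsets[None, :, :]).reshape(-1, 2)
    origin = cand.min(axis=0)
    ij = np.floor((cand - origin) / bin_size).astype(int)
    acc = np.zeros(tuple(ij.max(axis=0) + 1), dtype=int)
    np.add.at(acc, (ij[:, 0], ij[:, 1]), 1)
    peaks = np.argwhere(acc >= min_votes)
    return peaks * bin_size + origin          # approximate centre coordinates
```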
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Billen, R.
2017-08-01
Reasoning from the information extracted by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including the sensor's biased data, each tessera in the high-density point cloud of the 3D-captured complex mosaics of Germigny-des-Prés (France) is segmented via a colour multi-scale abstraction-based feature extracting connectivity. A 2D surface and outline polygon of each tessera is generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
Current Nucleic Acid Extraction Methods and Their Implications to Point-of-Care Diagnostics.
Ali, Nasir; Rampazzo, Rita de Cássia Pontello; Costa, Alexandre Dias Tavares; Krieger, Marco Aurelio
2017-01-01
Nucleic acid extraction (NAE) plays a vital role in molecular biology as the primary step for many downstream applications. Many modifications have been introduced to the original 1869 method. Modern processes are categorized into chemical or mechanical, each with peculiarities that influence their use, especially in point-of-care diagnostics (POC-Dx). POC-Dx is a new approach aiming to replace sophisticated analytical machinery with microanalytical systems able to be used near the patient, at the point of care or point of need. Although notable efforts have been made, a simple and effective extraction method is still a major challenge for widespread use of POC-Dx. In this review, we dissect the working principle of each of the most common NAE methods, overviewing their advantages and disadvantages, as well as their potential for integration in POC-Dx systems. At present, it seems difficult, if not impossible, to establish a procedure which can be universally applied to POC-Dx. We also discuss the effects of the NAE chemicals upon the main plastic polymers used to mass produce POC-Dx systems. We end our review discussing the limitations and challenges that should guide the quest for an efficient extraction method that can be integrated in a POC-Dx system. PMID:28785592
Reference point detection for camera-based fingerprint image based on wavelet transformation.
Khalil, Mohammed S
2015-04-30
Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core point is used as a reference point to align the fingerprint with a template database; when processing a larger fingerprint database, it is necessary to consider the core point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper explores the feasibility of applying a core-point detection method to a fingerprint image obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency, two indicators calculated automatically by comparing the method's output with defined core points. The proposed method is tested on two data sets, collected from 13 different subjects in controlled and uncontrolled environments. In the controlled environment, the proposed method achieved a detection rate of 82.98%; in the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method yields promising results on the collected image database and outperforms an existing method.
Analysis of separation test for automatic brake adjuster based on linear Radon transformation
NASA Astrophysics Data System (ADS)
Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi
2015-01-01
The linear Radon transform is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transform has a strong ability of anti-noise and anti-interference by fitting the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transform to the separation test system to solve the separating clearance of an automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient maximum optimal method is approximately ±0.100, while that of the linear Radon transform method can reach ±0.010, a lower error than the former. In addition, the linear Radon transform is robust.
Instantaneous Coastline Extraction from LIDAR Point Cloud and High Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Li, Y.; Zhoing, L.; Lai, Z.; Gan, Z.
2018-04-01
A new method is proposed in this paper for instantaneous waterline extraction that combines point cloud geometry features and image spectral characteristics of the coastal zone. The proposed method consists of the following steps: the Mean Shift algorithm is used to segment the coastal zone of high resolution remote sensing images into small regions containing semantic information; region features are extracted by integrating the LiDAR data and the image surface area; initial waterlines are extracted by the α-shape algorithm; a region growing algorithm is applied for coastline refinement, with a growth rule integrating the intensity and topography of the LiDAR data; and finally the coastline is smoothed. Experiments are conducted to demonstrate the efficiency of the proposed method.
Cloud point extraction of Δ9-tetrahydrocannabinol from cannabis resin.
Ameur, S; Haddou, B; Derriche, Z; Canselier, J P; Gourdon, C
2013-04-01
A cloud point extraction method coupled with high performance liquid chromatography (HPLC/UV) was developed for the determination of Δ(9)-tetrahydrocannabinol (THC) in the micellar phase. The nonionic surfactant "Dowfax 20B102" was used to extract and pre-concentrate THC from cannabis resin, prior to its determination with an HPLC-UV system (diode array detector) with isocratic elution. The parameters and variables affecting the extraction were investigated. Under optimum conditions (1 wt.% Dowfax 20B102, 1 wt.% Na2SO4, T = 318 K, t = 30 min), this method yielded a quite satisfactory recovery rate (~81%). The limit of detection was 0.04 μg mL⁻¹, and the relative standard deviation was less than 2%. Compared with conventional solid-liquid extraction, this new method avoids the use of volatile organic solvents and is therefore environmentally safer.
The Segmentation of Point Clouds with K-Means and ANN (artifical Neural Network)
NASA Astrophysics Data System (ADS)
Kuçak, R. A.; Özdemir, E.; Erol, S.
2017-05-01
Segmentation of point clouds has recently been used in many geomatics engineering applications, such as building extraction in urban areas, Digital Terrain Model (DTM) generation, and road or urban furniture extraction. Segmentation is a process of dividing point clouds according to their special characteristic layers. The present paper discusses K-means and the self-organizing map (SOM), a type of ANN (Artificial Neural Network) algorithm, for the segmentation of point clouds. Point clouds generated with the photogrammetric method and with a Terrestrial Lidar System (TLS) were segmented according to surface normal, intensity and curvature, and the results were evaluated. LIDAR (Light Detection and Ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications; with either method, it is possible to obtain a point cloud from terrestrial or airborne systems. In this study, the LIDAR measurements were made with a Leica C10 laser scanner, and in the photogrammetric method the point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.
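The K-means half of the comparison can be sketched directly on the attributes the paper names (surface normal, intensity, curvature). The standardization step and the number of clusters below are assumptions; the SOM branch is omitted.

```python
# Minimal sketch of K-means segmentation on per-point attributes.
# Feature scaling and k are assumptions; the SOM variant is not shown.
import numpy as np
from sklearn.cluster import KMeans

def segment_points(normals, intensity, curvature, k=5):
    """normals: (n, 3); intensity, curvature: (n,); returns (n,) labels."""
    feats = np.column_stack([normals, intensity, curvature]).astype(float)
    # Standardize each attribute so no single one dominates the distance.
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
```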
Detection and Classification of Pole-Like Objects from Mobile Mapping Data
NASA Astrophysics Data System (ADS)
Fukano, K.; Masuda, H.
2015-08-01
Laser scanners on a vehicle-based mobile mapping system can capture 3D point-clouds of roads and roadside objects. Since roadside objects have to be maintained periodically, their 3D models are useful for planning maintenance tasks. In our previous work, we proposed a method for detecting cylindrical poles and planar plates in a point-cloud. However, it is often required to further classify pole-like objects into utility poles, streetlights, traffic signals and signs, which are managed by different organizations. In addition, our previous method may fail to extract low pole-like objects, which are often observed in urban residential areas. In this paper, we propose new methods for extracting and classifying pole-like objects. In our method, we robustly extract a wide variety of poles by converting point-clouds into wireframe models and calculating cross-sections between wireframe models and horizontal cutting planes. For classifying pole-like objects, we subdivide a pole-like object into five subsets by extracting poles and planes, and calculate feature values of each subset. Then we apply a supervised machine learning method using feature variables of subsets. In our experiments, our method could achieve excellent results for detection and classification of pole-like objects.
Method for contour extraction for object representation
Skourikhine, Alexei N.; Prasad, Lakshman
2005-08-30
Contours are extracted for representing a pixelated object in a background pixel field. An object pixel is located that is the start of a new contour for the object and identifying that pixel as the first pixel of the new contour. A first contour point is then located on the mid-point of a transition edge of the first pixel. A tracing direction from the first contour point is determined for tracing the new contour. Contour points on mid-points of pixel transition edges are sequentially located along the tracing direction until the first contour point is again encountered to complete tracing the new contour. The new contour is then added to a list of extracted contours that represent the object. The contour extraction process associates regions and contours by labeling all the contours belonging to the same object with the same label.
Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR
NASA Astrophysics Data System (ADS)
Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin
2017-08-01
Pavement markings provide an important foundation for keeping road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus defining the spatial data and intensity of 3D objects in a fast and efficient way. The RGB attribute information of data points can be obtained from the panoramic camera in the system. In this paper, we present a novel processing method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method utilizes a differential grayscale of RGB color, laser pulse reflection intensity, and differential intensity to identify and extract pavement markings. We utilize point cloud density to remove noise and use morphological operations to eliminate errors. In application, we tested our method on different road sections in Beijing, China, and Buffalo, NY, USA. The results indicate that both correctness (p) and completeness (r) were higher than 90%. The method can be applied to extract pavement markings from the huge point cloud data produced by mobile LiDAR.
Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis.
Zheng, Yi; Peter, Michael; Zhong, Ruofei; Oude Elberink, Sander; Zhou, Quan
2018-06-05
Indoor space subdivision is an important aspect of scene analysis that provides essential information for many applications, such as indoor navigation and evacuation route planning. Until now, most proposed scene understanding algorithms have been based on whole point clouds, which has led to complicated operations, high computational loads and low processing speed. This paper presents novel methods to efficiently extract the location of openings (e.g., doors and windows) and to subdivide space by analyzing scanlines. An opening detection method is demonstrated that analyses the local geometric regularity in scanlines to refine the extracted openings. Moreover, a space subdivision method based on the extracted openings and the scanning system trajectory is described. Finally, the opening detection and space subdivision results are saved as point cloud labels for further investigations. The method has been tested on a real dataset collected by ZEB-REVO. The experimental results validate the completeness and correctness of the proposed method for different indoor environments and scanning paths.
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method
Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu
2016-01-01
A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. Either targets or virtual points corresponding to some reconstructable feature in the scene are used as feature points. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis. PMID:28029121
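The core of the baseline idea fits in a few lines: pairwise distances between feature points are computed inside each epoch's own coordinate frame, so the two scans never need to be registered. A minimal sketch, with made-up coordinates and an assumed 5 mm significance threshold:

```python
# Sketch: registration-free change detection via within-scan baselines.
import numpy as np
from scipy.spatial.distance import pdist

epoch1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
epoch2 = np.array([[0.0, 0.0, 0.0], [1.02, 0.0, 0.0], [0.0, 1.99, 0.0]])

# pdist gives every baseline length inside one scan's own coordinate frame,
# so the two epochs are compared without ever aligning them.
change = pdist(epoch2) - pdist(epoch1)
print("baseline changes (m):", np.round(change, 3))
print("significant (>5 mm): ", np.abs(change) > 0.005)
```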
Extraction of Extended Small-Scale Objects in Digital Images
NASA Astrophysics Data System (ADS)
Volkov, V. Y.
2015-05-01
The problem of detecting and localizing extended small-scale objects with different shapes arises in radio observation systems that use SAR, infrared, lidar and television cameras. An intense non-stationary background is the main difficulty for processing. Another challenge is the low quality of the images, with blobs and blurred boundaries; in addition, SAR images suffer from serious intrinsic speckle noise. The background statistics are not normal, showing evident skewness and heavy tails in the probability density, so the background is hard to identify. The extraction of small-scale objects is solved here on the basis of directional filtering, adaptive thresholding and morphological analysis. A new kind of mask is used that is open-ended at one side, making it possible to extract the ends of line segments of unknown length. An advanced method of dynamic adaptive threshold setting is investigated, based on the extraction of isolated fragments after thresholding. A hierarchy of isolated fragments in the binary image is proposed for the analysis of segmentation results; it includes small-scale objects of different shape, size and orientation. The method extracts isolated fragments in the binary image and counts the points in these fragments. The number of points in the extracted fragments, normalized to the total number of points for a given threshold, is used as the effectiveness of extraction for these fragments. The new method for adaptive threshold setting and control maximises this effectiveness. It has optimality properties for object extraction in a normal noise field and shows effective results for real SAR images.
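A sketch of the adaptive-threshold loop as described: for each candidate threshold, label the isolated fragments in the binary image and use the fraction of above-threshold points falling in multi-pixel fragments as the effectiveness score. The scoring rule is a plausible reading of the abstract, not the author's exact formula.

```python
# Sketch: score candidate thresholds by the share of detected points that
# fall in multi-pixel isolated fragments, and keep the best threshold.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, (100, 100))
img[40:42, 10:60] += 4.0                   # a faint extended line segment

best_t, best_eff = None, -1.0
for t in np.linspace(1.0, 3.5, 11):
    binary = img > t
    labels, n = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    in_fragments = sizes[sizes > 1].sum()  # points inside multi-pixel fragments
    eff = in_fragments / max(binary.sum(), 1)
    if eff > best_eff:
        best_t, best_eff = t, eff
print(f"selected threshold {best_t:.2f}, effectiveness {best_eff:.2f}")
```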
Robust digital image watermarking using distortion-compensated dither modulation
NASA Astrophysics Data System (ADS)
Li, Mianjie; Yuan, Xiaochen
2018-04-01
In this paper, we propose a robust feature extraction based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies filtering based on entropy calculation. The experimental results show that the proposed method achieves satisfactory robustness while preserving watermark imperceptibility, compared to other existing methods.
Multiple-Primitives Hierarchical Classification of Airborne Laser Scanning Data in Urban Areas
NASA Astrophysics Data System (ADS)
Ni, H.; Lin, X. G.; Zhang, J. X.
2017-09-01
A hierarchical classification method for Airborne Laser Scanning (ALS) data of urban areas is proposed in this paper. This method is composed of three stages, across which three types of primitives are utilized: smooth surfaces, rough surfaces, and individual points. In the first stage, the input ALS data is divided into smooth surfaces and rough surfaces by employing a step-wise point cloud segmentation method. In the second stage, classification based on smooth surfaces and rough surfaces is performed. Points in the smooth surfaces are first classified into ground and buildings based on semantic rules. Next, features of rough surfaces are extracted. Then, points in rough surfaces are classified into vegetation and vehicles based on the derived features and Random Forests (RF). In the third stage, point-based features are extracted for the ground points, and an individual point classification procedure is then performed to classify the ground points into bare land, artificial ground and greenbelt. Moreover, the shortcomings of existing studies are analyzed, and experiments show that the proposed method overcomes these shortcomings and handles more types of objects.
Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.
Pang, Xufang; Song, Zhan; Xie, Wuyuan
2013-01-01
3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
Line segment extraction for large scale unorganized point clouds
NASA Astrophysics Data System (ADS)
Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan
2015-04-01
Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
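The geometric core of intersecting-plane line extraction can be shown compactly: for fitted planes n1·x = d1 and n2·x = d2, the line direction is n1 × n2, and a point on the line follows from the two plane equations plus one anchoring constraint. A minimal sketch (the paper's LSHP fitting itself is not reproduced here):

```python
# Sketch: direction and a point of the intersection line of two planes.
import numpy as np

n1, d1 = np.array([0.0, 0.0, 1.0]), 0.0   # fitted plane 1: n1·x = d1
n2, d2 = np.array([0.0, 1.0, 0.0]), 2.0   # fitted plane 2: n2·x = d2

direction = np.cross(n1, n2)
direction /= np.linalg.norm(direction)

# Solve [n1; n2; direction]·x = [d1; d2; 0] for the line point nearest the origin.
A = np.vstack([n1, n2, direction])
point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
print("line:", point, "+ t *", direction)
```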
Automatic extraction of blocks from 3D point clouds of fractured rock
NASA Astrophysics Data System (ADS)
Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen
2017-12-01
This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved RANSAC shape detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved flood-fill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud
NASA Astrophysics Data System (ADS)
Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.
2018-04-01
To address the lack of an applicable analysis method when applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. Firstly, a kd-tree is used to establish the topological relation. Datum points are detected by tracking point cloud normal vectors determined from the normals of local planes. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial points are calculated from the fitted curve, and the deformation information is analyzed. The proposed approach was verified on a real large-scale tank dataset captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum features.
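A hedged sketch of the kd-tree/normal-vector preprocessing: query each point's k nearest neighbours and take the eigenvector of the smallest covariance eigenvalue as the local normal. The synthetic plane and k = 10 are assumptions.

```python
# Sketch: kd-tree neighbourhoods and PCA-based normal estimation.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
pts = rng.uniform(0, 10, (2000, 2))
cloud = np.column_stack([pts, 0.01 * rng.normal(size=2000)])  # noisy plane z≈0

tree = cKDTree(cloud)                 # topological relation via kd-tree
_, idx = tree.query(cloud, k=10)      # k nearest neighbours per point

normals = np.empty_like(cloud)
for i, nb in enumerate(idx):
    nbh = cloud[nb] - cloud[nb].mean(axis=0)
    # Eigenvector of the smallest covariance eigenvalue = local plane normal.
    _, vecs = np.linalg.eigh(nbh.T @ nbh)
    normals[i] = vecs[:, 0]
print("mean |n_z|:", np.abs(normals[:, 2]).mean())  # ≈ 1 for a horizontal plane
```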
Automatic building extraction from LiDAR data fusion of point and grid-based features
NASA Astrophysics Data System (ADS)
Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang
2017-08-01
This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the used features and account for neighborhood context. As grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retaining DSM interpolation method is also proposed. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results at both the area level and the object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method shows good potential for large LiDAR datasets.
A novel method of measuring the melting point of animal fats.
Lloyd, S S; Dawkins, S T; Dawkins, R L
2014-10-01
The melting point (TM) of fat is relevant to health, but available methods of determining TM are cumbersome. One of the standard methods of measuring TM for animal and vegetable fats is the slip point, also known as the open capillary method. This method is imprecise and not amenable to automation or mass testing. We have developed a technique for measuring TM of animal fat using the Rotor-Gene Q (Qiagen, Hilden, Germany). The assay has an intra-assay SD of 0.08°C. A single operator can extract and assay up to 250 samples of animal fat in 24 h, including the time to extract the fat from the adipose tissue. This technique will improve the quality of research into genetic and environmental contributions to fat composition of meat.
Study on the traditional pattern retrieval method of minorities in Gansu province
NASA Astrophysics Data System (ADS)
Zheng, Gang; Wang, Beizhan; Sun, Yuchun; Xu, Jin
2018-03-01
The traditional patterns of the ethnic minorities in Gansu province are folk arts with strong ethnic characteristics, the crystallization of the diligence and wisdom of these minority nationalities shaped by the geographical environment of Gansu. Using the SURF feature point identification algorithm, the feature point extractor in OpenCV is applied to extract feature points, and these feature points are compared to find patterns with similar artistic features. The application of this method can quickly and efficiently retrieve pattern information from a database.
Innovations in the Analysis of Chandra-ACIS Observations
NASA Astrophysics Data System (ADS)
Broos, Patrick S.; Townsley, Leisa K.; Feigelson, Eric D.; Getman, Konstantin V.; Bauer, Franz E.; Garmire, Gordon P.
2010-05-01
As members of the instrument team for the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we have developed a wide variety of data analysis methods that we believe are useful to the Chandra community, and have constructed a significant body of publicly available software (the ACIS Extract package) addressing important ACIS data and science analysis tasks. This paper seeks to describe these data analysis methods for two purposes: to document the data analysis work performed in our own science projects and to help other ACIS observers judge whether these methods may be useful in their own projects (regardless of what tools and procedures they choose to implement those methods). The ACIS data analysis recommendations we offer here address much of the workflow in a typical ACIS project, including data preparation, point source detection via both wavelet decomposition and image reconstruction, masking point sources, identification of diffuse structures, event extraction for both point and diffuse sources, merging extractions from multiple observations, nonparametric broadband photometry, analysis of low-count spectra, and automation of these tasks. Many of the innovations presented here arise from several, often interwoven, complications that are found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission.
Section Curve Reconstruction and Mean-Camber Curve Extraction of a Point-Sampled Blade Surface
Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping
2014-01-01
The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamic performance of the aviation engine. Rapid advancements in 3D scanning techniques have enabled the inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method to achieve two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction from a point cloud representation. Mathematical morphology is extended and applied to restrain the effect of measuring defects and generate an ordered sequence of 2D measured points in the section plane. Then, energy and distance are minimized to iteratively smooth the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the validity of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and vision-guided robot grinding localization. PMID:25551467
NASA Astrophysics Data System (ADS)
Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie
2018-04-01
The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterizations of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the widely used classification methods. In geometry-based methods, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how the scale(s) used affect classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and classified the points as leaf or wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up of single-scale classifiers over the multi-scale classifier is higher than 30 for each tree.
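The single- versus multi-scale contrast can be made concrete with standard covariance eigenvalue features (linearity, planarity, scattering) computed at several neighbourhood radii. The feature definitions follow common usage in the TLS literature and the radii are assumptions; this is not the paper's exact feature set.

```python
# Sketch: eigenvalue features (linearity/planarity/scattering) at 3 scales.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
branch = np.column_stack([np.linspace(0, 1, 200), np.zeros(200), np.zeros(200)])
branch += 0.005 * rng.normal(size=branch.shape)          # thin linear "wood"
leaves = rng.normal(0, 0.2, (200, 3)) + [0.5, 0.5, 0.5]  # diffuse "foliage"
cloud = np.vstack([branch, leaves])

tree = cKDTree(cloud)
radii = [0.05, 0.1, 0.2]                                 # assumed scales
features = np.zeros((len(cloud), 3 * len(radii)))
for s, r in enumerate(radii):
    for i, p in enumerate(cloud):
        nb = cloud[tree.query_ball_point(p, r)]
        if len(nb) < 3:
            continue
        lam = np.linalg.eigvalsh(np.cov((nb - nb.mean(0)).T))[::-1]
        lam = np.maximum(lam, 1e-12)                     # λ1 ≥ λ2 ≥ λ3 ≥ 0
        features[i, 3*s:3*s+3] = [(lam[0] - lam[1]) / lam[0],  # linearity
                                  (lam[1] - lam[2]) / lam[0],  # planarity
                                  lam[2] / lam[0]]             # scattering
print("wood mean linearity (scale 1):", features[:200, 0].mean().round(2))
print("leaf mean linearity (scale 1):", features[200:, 0].mean().round(2))
```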
Automatic extraction of protein point mutations using a graph bigram association.
Lee, Lawrence C; Horn, Florence; Cohen, Fred E
2007-02-02
Protein point mutations are an essential component of the evolutionary and experimental analysis of protein structure and function. While many manually curated databases attempt to index point mutations, most experimentally generated point mutations and the biological impacts of the changes are described in the peer-reviewed published literature. We describe an application, Mutation GraB (Graph Bigram), that identifies, extracts, and verifies point mutations from biomedical literature. The principal problem of point mutation extraction is to link the point mutation with its associated protein and organism of origin. Our algorithm uses a graph-based bigram traversal to identify these relevant associations and exploits the Swiss-Prot protein database to verify this information. The graph bigram method is different from other models for point mutation extraction in that it incorporates frequency and positional data of all terms in an article to drive the point mutation-protein association. Our method was tested on 589 articles describing point mutations from the G protein-coupled receptor (GPCR), tyrosine kinase, and ion channel protein families. We evaluated our graph bigram metric against a word-proximity metric for term association on datasets of full-text literature in these three different protein families. Our testing shows that the graph bigram metric achieves a higher F-measure for the GPCRs (0.79 versus 0.76), protein tyrosine kinases (0.72 versus 0.69), and ion channel transporters (0.76 versus 0.74). Importantly, in situations where more than one protein can be assigned to a point mutation and disambiguation is required, the graph bigram metric achieves a precision of 0.84 compared with the word distance metric precision of 0.73. We believe the graph bigram search metric to be a significant improvement over previous search metrics for point mutation extraction and to be applicable to text-mining application requiring the association of words.
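A toy sketch of the recognition half of the task: point mutations are conventionally written as wild-type residue, position, mutant residue (e.g., K296E), which a regular expression plus one-letter amino acid validation can capture. The protein/organism association logic of Mutation GraB is far richer than this.

```python
# Sketch: regex recognition of point-mutation mentions like "K296E".
import re

AMINO = set("ACDEFGHIKLMNPQRSTVWY")            # one-letter residue codes
PATTERN = re.compile(r"\b([A-Z])(\d{1,4})([A-Z])\b")

text = ("The rhodopsin mutant K296E is constitutively active, "
        "whereas A293E shows a weaker effect.")

for wt, pos, mut in PATTERN.findall(text):
    if wt in AMINO and mut in AMINO and wt != mut:
        print(f"candidate mutation: {wt}{pos}{mut}")
```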
NASA Astrophysics Data System (ADS)
Zhang, Yuanyuan; Gao, Zhiqiang; Liu, Xiangyang; Xu, Ning; Liu, Chaoshun; Gao, Wei
2016-09-01
Reclamation has caused significant dynamic change in the coastal zone. The tidal flat is an unstable reserve land resource, and studying it is of great significance. To efficiently extract tidal flat area information, this paper takes Rudong County in Jiangsu Province as the research area and uses HJ1A/1B images as the data source. Based on previous research experience and a literature review, object-oriented classification is chosen as a semi-automatic extraction method to generate waterlines. The waterlines are then analyzed with the DSAS software to obtain tide points, and the outer boundary points are extracted automatically using Python to determine the extent of the tidal flats of Rudong County in 2014; the extracted area was 55182 hm2. A confusion matrix is used to verify the accuracy, and the result shows a kappa coefficient of 0.945. The method addresses deficiencies of previous studies, and the free availability of the data and tools on the Internet makes it easy to generalize.
A method for the solvent extraction of low-boiling-point plant volatiles.
Xu, Ning; Gruber, Margaret; Westcott, Neil; Soroka, Julie; Parkin, Isobel; Hegedus, Dwayne
2005-01-01
A new method has been developed for the extraction of volatiles from plant materials and tested on seedling tissue and mature leaves of Arabidopsis thaliana, pine needles and commercial mixtures of plant volatiles. Volatiles were extracted with n-pentane and then subjected to quick distillation at a moderate temperature. Under these conditions, compounds such as pigments, waxes and non-volatile compounds remained undistilled, while short-chain volatile compounds were distilled into a receiving flask using a high-efficiency condenser. Removal of the n-pentane and concentration of the volatiles in the receiving flask was carried out using a Vigreux column condenser prior to GC-MS. The method is ideal for the rapid extraction of low-boiling-point volatiles from small amounts of plant material, such as is required when conducting metabolic profiling or defining biological properties of volatile components from large numbers of mutant lines.
Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds
NASA Astrophysics Data System (ADS)
Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan
2017-06-01
Obtaining accurate information on rock mass discontinuities for deformation analysis and the evaluation of rock mass stability is important. Obtaining measurements for high and steep zones with the traditional compass method is difficult. Photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become mainstream methods. In this study, a method based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-treated prior to segmentation by removing outlier points. The next step is to segment the point cloud into different point subsets. Various parameters, such as the normal vector, dip direction and dip, can be calculated for each point subset after obtaining the equation of the best-fit plane for the relevant point subset. A cluster analysis (a point subset that satisfies some conditions and thus forms a cluster) is performed on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system is developed based on this method to extract the points of rock discontinuities from a 3D point cloud. A comparison with existing software shows that this method is feasible. This method can provide a reference for rock mechanics, 3D geological modelling and other related fields.
Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...
2014-08-05
A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. We found that the results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.
Marker Registration Technique for Handwritten Text Marker in Augmented Reality Applications
NASA Astrophysics Data System (ADS)
Thanaborvornwiwat, N.; Patanukhom, K.
2018-04-01
Marker registration is a fundamental process for estimating camera poses in marker-based Augmented Reality (AR) systems. We developed an AR system that creates corresponding virtual objects on handwritten text markers. This paper presents a new method for registration that is robust to low-content text markers, camera pose variation, and variation in handwriting styles. The proposed method uses Maximally Stable Extremal Regions (MSER) and polygon simplification for feature point extraction. The experiments show that extracting only five feature points per image provides the best registration results. An exhaustive search is used to find the best matching pattern of the feature points in two images. We also compared the performance of the proposed method to some existing registration methods and found that the proposed method provides better accuracy and time efficiency.
Comparison of results from simple expressions for MOSFET parameter extraction
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Lin, Y.-S.
1988-01-01
In this paper results are compared from a parameter extraction procedure applied to the linear, saturation, and subthreshold regions for enhancement-mode MOSFETs fabricated in a 3-micron CMOS process. The results indicate that the extracted parameters differ significantly depending on the extraction algorithm and the distribution of I-V data points. It was observed that KP values vary by 30 percent, VT values differ by 50 mV, and Delta L values differ by 1 micron. Thus for acceptance of wafers from foundries and for modeling purposes, the extraction method and data point distribution must be specified. In this paper measurement and extraction procedures that will allow a consistent evaluation of measured parameters are discussed.
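One common linear-region extraction can be sketched directly: with a small fixed V_DS, I_D ≈ KP·(W/L)·(V_GS − V_T)·V_DS, so a straight-line fit of I_D against V_GS yields KP from the slope and V_T from the intercept. The device values below are synthetic; the point, echoing the paper, is that the estimates shift with which I-V data points enter the fit.

```python
# Sketch: linear-region fit; estimates shift with the chosen data points.
import numpy as np

W_over_L, VDS = 10.0, 0.1
KP_true, VT_true = 50e-6, 0.7                  # synthetic device values
vgs = np.linspace(1.0, 3.0, 9)                 # gate-voltage data point grid
rng = np.random.default_rng(4)
ids = KP_true * W_over_L * (vgs - VT_true) * VDS + rng.normal(0, 2e-7, vgs.size)

slope, intercept = np.polyfit(vgs, ids, 1)
print(f"KP = {slope / (W_over_L * VDS) * 1e6:.1f} uA/V^2")
print(f"VT = {-intercept / slope * 1e3:.0f} mV")

# Re-fitting on a subset of the I-V points shifts the estimates, which is
# the sensitivity to data point distribution discussed in the paper.
s2, i2 = np.polyfit(vgs[:5], ids[:5], 1)
print(f"VT (low-VGS points only) = {-i2 / s2 * 1e3:.0f} mV")
```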
Fast title extraction method for business documents
NASA Astrophysics Data System (ADS)
Katsuyama, Yutaka; Naoi, Satoshi
1997-04-01
Conventional electronic document filing systems are inconvenient because the user must specify the keywords in each document for later searches. To solve this problem, automatic keyword extraction methods using natural language processing and character recognition have been developed. However, these methods are slow, especially for Japanese documents. To develop a practical electronic document filing system, we focused on extracting keyword areas from a document by image processing. Our fast title extraction method can automatically extract titles as keywords from business documents. All character strings are evaluated by rating points associated with title similarity. We classified these points into four items: character string size, position of character strings, relative position among character strings, and string attributes. Finally, the character string with the highest rating is selected as the title area. The character recognition process is carried out on the selected area. It is fast because this process must recognize a small number of patterns in the restricted area only, and not throughout the entire document. The mean performance of this method is an accuracy of about 91 percent and a processing time of 1.8 s for an examination of 100 Japanese business documents.
Detailed Hydrographic Feature Extraction from High-Resolution LiDAR Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danny L. Anderson
Detailed hydrographic feature extraction from high-resolution light detection and ranging (LiDAR) data is investigated. Methods for quantitatively evaluating and comparing such extractions are presented, including the use of sinuosity and longitudinal root-mean-square-error (LRMSE). These metrics are then used to quantitatively compare stream networks in two studies. The first study examines the effect of raster cell size on watershed boundaries and stream networks delineated from LiDAR-derived digital elevation models (DEMs). The study confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes generally yielded better stream network delineations, based on sinuosity and LRMSE. The second study demonstrates a new method of delineating a stream directly from LiDAR point clouds, without the intermediate step of deriving a DEM. Direct use of LiDAR point clouds could improve efficiency and accuracy of hydrographic feature extractions. The direct delineation method developed herein, termed "mDn", is an extension of the D8 method that has been used for several decades with gridded raster data. The method divides the region around a starting point into sectors, using the LiDAR data points within each sector to determine an average slope, and selecting the sector with the greatest downward slope to determine the direction of flow. An mDn delineation was compared with a traditional grid-based delineation, using TauDEM, and other readily available, common stream data sets. Although the TauDEM delineation yielded a sinuosity that more closely matches the reference, the mDn delineation yielded a sinuosity that was higher than either the TauDEM method or the existing published stream delineations. Furthermore, stream delineation using the mDn method yielded the smallest LRMSE.
Extracting cross sections and water levels of vegetated ditches from LiDAR point clouds
NASA Astrophysics Data System (ADS)
Roelens, Jennifer; Dondeyne, Stefaan; Van Orshoven, Jos; Diels, Jan
2016-12-01
The hydrologic response of a catchment is sensitive to the morphology of the drainage network. The dimensions of larger channels are usually well known, but geometrical data for man-made ditches are often missing because the ditches are numerous and small. Aerial LiDAR data offer the possibility to extract these small geometrical features, and analysing the three-dimensional point clouds directly retains the highest degree of information. A longitudinal and a cross-sectional buffer were used to extract the cross-sectional profile points from the LiDAR point cloud. The profile was represented by spline functions fitted through the minimum envelope of the extracted points. The cross-sectional ditch profiles were classified for the presence of water and vegetation based on the normalized difference water index and the spatial characteristics of the points along the profile. The normalized difference water index was created using the RGB and intensity data coupled to the LiDAR points. The mean vertical deviation of 0.14 m found between the extracted and reference cross sections could mainly be attributed to the occurrence of water, and partly to vegetation on the banks. In contrast to the cross-sectional area, the extracted width was not influenced by the environment (coefficient of determination R2 = 0.87). Water and vegetation influenced the extracted ditch characteristics, but the proposed method is robust and therefore facilitates input data acquisition and improves the accuracy of spatially explicit hydrological models.
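A hedged sketch of the profile-fitting step: bin the buffered points across the ditch axis, take the minimum envelope (lowest return per bin), and fit a smoothing spline through it. The synthetic V-shaped ditch, bin width and smoothing factor are assumptions.

```python
# Sketch: minimum-envelope extraction and spline fit for one cross section.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(7)
across = rng.uniform(-2, 2, 1500)                      # distance across ditch (m)
ground = 0.5 * np.clip(np.abs(across) - 0.5, 0, None)  # V-shaped ditch profile
z = ground + rng.exponential(0.05, 1500)               # vegetation sits above ground

bins = np.arange(-2.0, 2.01, 0.1)
centers = 0.5 * (bins[:-1] + bins[1:])
envelope = np.array([z[(across >= lo) & (across < hi)].min()
                     for lo, hi in zip(bins[:-1], bins[1:])])

spline = UnivariateSpline(centers, envelope, s=0.01)   # smoothing factor assumed
print("bottom elevation at centre:", round(float(spline(0.0)), 3), "m")
print("estimated depth:", round(envelope.max() - envelope.min(), 2), "m")
```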
Kachangoon, Rawikan; Vichapong, Jitlada; Burakham, Rodjana; Santaladchaiyakit, Yanawath; Srijaranai, Supalax
2018-05-12
An effective pre-concentration method, namely amended cloud point extraction (CPE), has been developed for the extraction and pre-concentration of neonicotinoid insecticide residues. The studied analytes, including clothianidin, imidacloprid, acetamiprid, thiamethoxam and thiacloprid, were chosen as model compounds. The amended-CPE procedure included two cloud point processes. Triton X-114 was used to extract neonicotinoid residues into the surfactant-rich phase, and the analytes were then transferred into an alkaline solution with the help of ultrasound energy. The extracts were analyzed by high-performance liquid chromatography (HPLC) coupled with a monolithic column. Several factors influencing the extraction efficiency were studied, such as the kind and concentration of surfactant, the type and content of salts, the kind and concentration of back-extraction agent, and the incubation temperature and time. Enrichment factors (EFs) were found in the range of 20-333-fold. The limits of detection of the studied neonicotinoids were in the range of 0.0003-0.002 µg mL(-1), which is below the maximum residue limits (MRLs) established by the European Union (EU). Good repeatability was obtained, with relative standard deviations lower than 1.92% and 4.54% for retention time (tR) and peak area, respectively. The developed extraction method was successfully applied to the analysis of water samples. No detectable neonicotinoid residues were found in the studied samples.
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu
2017-01-01
The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithreaded registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slice extraction method is presented for extracting feature point clouds. Finally, multithreaded Iterative Closest Point is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
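A minimal rigid ICP sketch in the spirit of the registration step: alternate nearest-neighbour correspondence with a least-squares (Kabsch) update. The paper drives a multithreaded affine ICP on trunk-slice contour clouds; this single-threaded rigid toy shows only the core loop.

```python
# Sketch: nearest-neighbour + Kabsch iteration, the core loop of rigid ICP.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)               # nearest-neighbour matches
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
        if np.linalg.det(Vt.T @ U.T) < 0:      # keep a proper rotation
            Vt[-1] *= -1
        R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t        # apply the rigid update
    return src

rng = np.random.default_rng(5)
target = rng.uniform(0, 1, (300, 3))
theta = np.deg2rad(10)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.01])

aligned = icp(source, target)
print("RMS error after ICP:", np.sqrt(((aligned - target) ** 2).sum(1).mean()))
```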
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A fast learning method for multi-class classification SVMs (Support Vector Machines) based on a binary tree is presented, to address the low learning efficiency of SVMs when processing large-scale multi-class samples. A bottom-up method is adopted to set up the binary tree hierarchy, and according to the achieved hierarchy, each node's sub-classifier learns from the corresponding samples. During learning, several class clusters are generated after a first clustering of the training samples. Central points are first extracted from those class clusters that contain only one type of sample. For clusters that contain two types of samples, the cluster numbers for their positive and negative samples are set according to their degree of mixture, and a secondary clustering is undertaken, after which central points are extracted from the resulting sub-class clusters. Sub-classifiers are obtained by learning from the reduced sample sets formed by integrating the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, can guarantee high classification accuracy while greatly reducing the number of samples and effectively improving learning efficiency.
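The sample-reduction idea can be sketched by replacing each class's training points with k-means centres and training the SVM on the reduced set. The binary-tree hierarchy and the mixture-degree rule for choosing cluster counts are not reproduced; the fixed cluster count of 20 is an assumption.

```python
# Sketch: k-means centres stand in for each class's full training set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=3000, centers=4, random_state=6)

Xr, yr = [], []
for label in np.unique(y):
    km = KMeans(n_clusters=20, n_init=10, random_state=6).fit(X[y == label])
    Xr.append(km.cluster_centers_)             # reduced samples for this class
    yr.extend([label] * 20)
Xr = np.vstack(Xr)

print("training points:", len(X), "->", len(Xr))
print("full-set accuracy:   ", SVC().fit(X, y).score(X, y))
print("reduced-set accuracy:", SVC().fit(Xr, yr).score(X, y))
```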
Shi, Zhihong; Zhu, Xiaomin; Zhang, Hongyi
2007-08-15
In this paper, a micelle-mediated extraction and cloud point preconcentration method was developed for the determination of the less hydrophobic compounds aesculin and aesculetin in Cortex fraxini by HPLC. The non-ionic surfactant oligoethylene glycol monoalkyl ether (Genapol X-080) was employed as the extraction solvent. Various experimental conditions were investigated to optimize the extraction process. Under optimum conditions, i.e. 5% Genapol X-080 (w/v), pH 1.0, a liquid/solid ratio of 400:1 (ml/g), and ultrasonic-assisted extraction for 30 min, the extraction yield reached its highest value. For the preconcentration of aesculin and aesculetin by cloud point extraction (CPE), the solution was incubated in a thermostatic water bath at 55 degrees C for 30 min, and 20% NaCl (w/v) was added to the solution to facilitate the phase separation and increase the preconcentration factor during the CPE process. Compared with methanol, which is used in the Chinese Pharmacopoeia (2005 edition) for the extraction of C. fraxini, 5% Genapol X-080 reached a higher extraction efficiency.
Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery
NASA Astrophysics Data System (ADS)
Zhang, Ming
Because of the low-expense, high-efficiency image collection process and the rich 3D and texture information presented in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has a promising market for future commercial usage such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To figure out these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix presents the parameters describing the translation, rotation and scale requirements. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of the test data because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups can be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.
The Extraction of Terrace in the Loess Plateau Based on radial method
NASA Astrophysics Data System (ADS)
Liu, W.; Li, F.
2016-12-01
The terraces of the Loess Plateau are a typical kind of artificial landform and an important measure for soil and water conservation; locating them and extracting them automatically would simplify the work of land use investigation. Existing methods of terrace extraction mainly include visual interpretation and automatic extraction. The manual method is used in land use investigation, but it is time-consuming and laborious. Researchers have put forward several automatic extraction methods. For example, the Fourier transform method can recognize terraces and find accurate positions in the frequency-domain image, but it is strongly affected by linear objects oriented in the same direction as the terraces. Texture analysis is simple and widely applied in image processing, but it cannot recognize terrace edges. Object-oriented classification is a newer method of image classification, but when introduced to terrace extraction, fractured polygons become a serious problem and their geological meaning is difficult to explain. To locate the terraces, we use high-resolution remote sensing images and extract and analyze the gray values of the pixels that radial lines pass through. During recognition, we first roughly determine the positions of peak points using DEM analysis or manual selection; second, we take each peak point as a center and cast radial lines in all directions; finally, we extract the gray values of the pixels that the radial lines pass through and analyze their variation to decide whether a terrace exists. To obtain accurate terrace positions, the discontinuity of terraces, their extension direction, ridge width, the image processing algorithm, remote sensing image illumination and other influencing factors were fully considered when designing the algorithms.
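A sketch of the radial sampling step on a synthetic "terraced" image: cast rays from a candidate peak and count gray-level transitions along each ray, with regular oscillation as the terrace cue. The peak position, ray count and ring-pattern image are illustrative assumptions, not the paper's data.

```python
# Sketch: sample gray values along rays from a peak and count transitions.
import numpy as np

h, w = 200, 200
yy, xx = np.mgrid[0:h, 0:w]
radius = np.hypot(yy - 100, xx - 100)
img = (np.sin(radius / 4.0) > 0).astype(float)  # rings mimic terrace steps

peak = np.array([100.0, 100.0])                 # assumed peak position
n_rays, length = 36, 80
counts = []
for angle in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
    t = np.arange(length)
    rows = np.clip((peak[0] + t * np.sin(angle)).astype(int), 0, h - 1)
    cols = np.clip((peak[1] + t * np.cos(angle)).astype(int), 0, w - 1)
    profile = img[rows, cols]
    counts.append(int(np.abs(np.diff(profile)).sum()))  # light/dark transitions

# Many regular transitions along most rays suggest a terraced slope.
print("mean transitions per ray:", float(np.mean(counts)))
```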
Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation
NASA Astrophysics Data System (ADS)
Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.
2017-05-01
In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters. Therefore, both the relative and the absolute orientation of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is influenced by several factors, such as their multiplicity, measurement precision, and distribution in the 2D images as well as in the 3D scene. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited when only image information can be exploited. Hence, we propose a method that improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of the first iteration is used as a priori information to guide the extraction of new tie points of better quality. Evaluated on multiple case studies, the proposed method shows its validity and high potential for precision improvement.
Text extraction method for historical Tibetan document images based on block projections
NASA Astrophysics Data System (ADS)
Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian
2017-11-01
Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is treated as a text area detection and location problem. The images are divided equally into blocks, and the blocks are filtered using the categories of connected components and the corner point density. By analyzing the projections of the filtered blocks, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have a large overlapping region, which provides much redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiometric pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiometric pre-processing is used to reduce the effects of the inherent radiometric problems and optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales with different landcover types. The accuracy evaluation is based on the comparison between the DSMs automatically extracted using the precise exterior orientation parameters (EOPs) and those extracted using the POS.
Heydari, Rouhollah; Hosseini, Mohammad; Zarabi, Sanaz
2015-01-01
In this paper, a simple and cost-effective method was developed for the extraction and pre-concentration of carmine in food samples using cloud point extraction (CPE) prior to its spectrophotometric determination. Carmine was extracted from aqueous solution using Triton X-100 as the extracting solvent. The effects of the main parameters, such as solution pH, surfactant and salt concentrations, and incubation time and temperature, were investigated and optimized. The calibration graph was linear in the range of 0.04-5.0 μg mL(-1) of carmine in the initial solution, with a regression coefficient of 0.9995. The limit of detection (LOD) and limit of quantification were 0.012 and 0.04 μg mL(-1), respectively. The relative standard deviation (RSD) at a low concentration level (0.05 μg mL(-1)) of carmine was 4.8% (n=7). Recovery values at different concentration levels were in the range of 93.7-105.8%. The obtained results demonstrate that the proposed method can be applied satisfactorily to determine carmine in food samples.
Baig, Jameel A; Kazi, Tasneem G; Shah, Abdul Q; Arain, Mohammad B; Afridi, Hassan I; Kandhro, Ghulam A; Khan, Sumaira
2009-09-28
Simple and rapid pre-concentration techniques, viz. cloud point extraction (CPE) and solid phase extraction (SPE), were applied for the determination of As(3+) and total inorganic arsenic (iAs) in surface and ground water samples. As(3+) formed a complex with ammonium pyrrolidinedithiocarbamate (APDC) and was extracted into the surfactant-rich phase of the non-ionic surfactant Triton X-114; after centrifugation, the surfactant-rich phase was diluted with 0.1 mol L(-1) HNO(3) in methanol. Total iAs in water samples was adsorbed on titanium dioxide (TiO(2)); after centrifugation, the solid phase was prepared as a slurry for determination. The extracted As species were determined by electrothermal atomic absorption spectrometry. A multivariate strategy was applied to estimate the optimum values of the experimental factors for the recovery of As(3+) and total iAs by CPE and SPE. The standard addition method was used to validate the optimized methods. The obtained results showed sufficient recoveries for As(3+) and iAs (>98.0%). The concentration factor in both cases was found to be 40.
NASA Astrophysics Data System (ADS)
Yang, C. H.; Kenduiywo, B. K.; Soergel, U.
2016-06-01
Persistent Scatterer Interferometry (PSI) is a technique to detect a network of extracted persistent scatterer (PS) points which feature temporal phase stability and a strong radar signal throughout a time series of SAR images. The small surface deformations at such PS points are estimated. PSI works particularly well in monitoring human settlements because the regular substructures of man-made objects give rise to a large number of PS points. If such structures and/or substructures are substantially altered or even vanish due to a big change like construction, their PS points are discarded without further exploration during the standard PSI procedure. Such rejected points are called big change (BC) points. On the other hand, incoherent change detection (ICD) relies on local comparison of multi-temporal images (e.g., image difference, image ratio) to highlight scene modifications at a coarser level rather than in detail. However, image noise inevitably degrades ICD accuracy. We propose a change detection approach based on PSI to combine the benefits of PSI and ICD. PS points are extracted by the PSI procedure. A local change index is introduced to quantify the probability of a big change for each point. We propose an automatic thresholding method that adopts the change index to extract BC points, along with a clue to the period in which they emerge. In the end, PS and BC points are integrated into a change detection image. Our method is tested at a site north of Berlin main station, where steady, demolished, and erected building substructures are successfully detected. The results are consistent with ground truth derived from a time series of aerial images provided by Google Earth. In addition, we apply our technique to traffic infrastructure, business district, and sports playground monitoring.
NASA Astrophysics Data System (ADS)
Cheng, Jun; Zhang, Jun; Tian, Jinwen
2015-12-01
Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of LiveWire is proposed in this paper. Firstly, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained from the wavelet transform. Secondly, the LiveWire shortest path is calculated using a direction search based on the control point set, exploiting the spatial relationship between the two control points the user provides in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest path values, reducing the complexity of the algorithm from O(n^2) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantages of the Haar wavelet transform, which offers fast image decomposition and reconstruction and is consistent with the texture features of the image, with those of the optimal path search based on the control point set direction search, which reduces the time complexity of the original algorithm. The algorithm thus improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All of the above methods play a large role in improving the execution efficiency and robustness of the algorithm.
Automated control of robotic camera tacheometers for measurements of industrial large scale objects
NASA Astrophysics Data System (ADS)
Heimonen, Teuvo; Leinonen, Jukka; Sipola, Jani
2013-04-01
Modern robotic tacheometers equipped with digital cameras (also called imaging total stations) and capable of reflectorless measurement offer new possibilities for gathering 3D data. In this paper an automated approach for the tacheometer measurements needed in the dimensional control of industrial large-scale objects is proposed. The approach makes two new contributions: the automated extraction of the vital points (i.e., the points to be measured) and the automated fine aiming of the tacheometer. The proposed approach proceeds through the following steps: First, the coordinates of the vital points are automatically extracted from computer-aided design (CAD) data. The extracted design coordinates are then used to aim the tacheometer at the designed location of each point, one after another. However, due to deviations between the designed and the actual locations of the points, the aiming needs to be adjusted. An automated dynamic image-based look-and-move servoing architecture is proposed for this task. After successful fine aiming, the actual coordinates of the point in question are measured automatically using the measuring functionality of the tacheometer. The approach was validated experimentally and found to be feasible. On average, 97% of the points actually measured in four different shipbuilding measurement cases were indeed proposed as vital points by the automated extraction algorithm. The accuracy of the results obtained with the automatic control method of the tacheometer was comparable to that obtained with manual control, and the reliability of the image processing step of the method was found to be high in the laboratory experiments.
Rahimi, Marzieh; Hashemi, Payman; Nazari, Fariba
2014-05-15
A cold column trapping-cloud point extraction (CCT-CPE) method coupled to high performance liquid chromatography (HPLC) was developed for the preconcentration and determination of curcumin in human urine. A nonionic surfactant, Triton X-100, was used as the extraction medium. In the proposed method, a low surfactant concentration of 0.4% v/v and a short heating time of only 2 min at 70 °C were sufficient for quantitative extraction of the analyte. For the separation of the extraction phase, the resulting cloudy solution was passed through a packed trapping column cooled to 0 °C. The temperature of the CCT column was then increased to 25 °C and the surfactant-rich phase was desorbed with 400 μL of ethanol to be injected directly into the HPLC for analysis. The effects of different variables such as pH, surfactant concentration, cloud point temperature and time were investigated, and optimum conditions were established by a central composite design (response surface) method. A limit of detection of 0.066 mg L(-1) curcumin and a linear range of 0.22-100 mg L(-1) with a determination coefficient of 0.9998 were obtained for the method. The average recovery and relative standard deviation for six replicated analyses were 101.0% and 2.77%, respectively. The CCT-CPE technique was faster than a conventional CPE method, requiring a lower concentration of the surfactant and lower temperatures, with no need for centrifugation. The proposed method was successfully applied to the analysis of curcumin in human urine samples. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Arain, Salma Aslam; Kazi, Tasneem G.; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal
2014-12-01
An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE), has been developed for the extraction and preconcentration of copper (Cu2+) in serum samples of different viral hepatitis patients prior to coupling with flame atomic absorption spectrometry (FAAS). The d-CPE procedure was based on forming complexes of the elemental ions with the complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN), and subsequently entrapping the complexes in a nonionic surfactant (Triton X-114). The surfactant-rich phase containing the metal complexes was then treated with an aqueous nitric acid solution, and the metal ions were back-extracted into the aqueous phase as a second cloud point extraction stage, and finally determined by flame atomic absorption spectrometry using conventional nebulization. A multivariate strategy was applied to estimate the optimum values of the experimental variables for the recovery of Cu2+ using d-CPE. Under optimum experimental conditions, the limit of detection and the enrichment factor were 0.046 μg L-1 and 78, respectively. The validity and accuracy of the proposed method were checked by analysis of Cu2+ in a certified serum sample (CRM) by both d-CPE and the conventional CPE procedure on the same CRM. The proposed method was successfully applied to the determination of Cu2+ in serum samples of different viral hepatitis patients and healthy controls.
A quantitative evaluation of two methods for preserving hair samples
Roon, David A.; Waits, L.P.; Kendall, K.C.
2003-01-01
Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and -20 °C freezing. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.
Automatic pole-like object modeling via 3D part-based analysis of point cloud
NASA Astrophysics Data System (ADS)
He, Liu; Yang, Haoxiang; Huang, Yuchun
2016-10-01
Pole-like objects, including trees, lampposts and traffic signs, are an indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point clouds of roadside urban areas have become widely applied in 3D digital city modeling. Based on the property that different pole-like objects have various canopy parts but similar trunk parts, this paper proposes a 3D part-based shape analysis to robustly extract, identify and model pole-like objects. The proposed method includes 3D clustering and recognition of trunks, voxel growing, and part-based 3D modeling. After preprocessing, the trunk center is identified as the point that has a local density peak and the largest minimum inter-cluster distance. Starting from the trunk centers, the remaining points are iteratively assigned to the cluster of their nearest higher-density point. To eliminate noisy points, cluster borders are refined by trimming boundary outliers. Candidate trunks are then extracted from the clustering results by shape analysis in three orthogonal planes. Voxel growing recovers the complete pole-like objects regardless of overlap. Finally, the entire trunk, branch and crown parts are analyzed to obtain seven feature parameters, which are used to model the three parts respectively and obtain a single part-assembled 3D model. The proposed method is tested using the VLS-based point cloud of Wuhan University, China. The point cloud includes many kinds of trees, lampposts and other pole-like posts under different occlusion and overlap conditions. Experimental results show that the proposed method can extract the exact attributes and model roadside pole-like objects efficiently.
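The trunk-center criterion above (a local density peak combined with the largest minimum distance to any denser point) closely resembles density-peak clustering; a minimal Python sketch under that assumption follows, where the radius and the number of centers are illustrative parameters rather than values from the paper.

    import numpy as np

    def density_peak_centers(points, radius, n_centers):
        """Pick cluster centers as points with high local density and a
        large minimum distance to any denser point (density-peak style)."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
        rho = (d < radius).sum(axis=1)              # local density
        delta = np.empty(len(points))
        for i in range(len(points)):
            denser = np.where(rho > rho[i])[0]      # all denser points
            delta[i] = d[i, denser].min() if len(denser) else d[i].max()
        # centers maximize both density and separation from denser points
        return np.argsort(rho * delta)[::-1][:n_centers]

The paper then assigns each remaining point to the cluster of its nearest higher-density point; the same distance matrix can be reused for that step.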
Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y
2018-03-08
Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independently of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated on eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
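A plain (non-robust) version of the eigenvalue analysis underlying such coplanar/collinear classification can be sketched as follows; the paper itself uses a robust PCA procedure to resist outliers, so this is only the classical baseline, with illustrative names.

    import numpy as np

    def dimensionality_features(neighborhood):
        """Eigenvalue-based linearity/planarity of a local 3D point
        neighborhood given as a (k, 3) array."""
        centered = neighborhood - neighborhood.mean(axis=0)
        # eigenvalues of the 3x3 covariance matrix, sorted descending
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
        linearity = (l1 - l2) / l1    # close to 1 for collinear points
        planarity = (l2 - l3) / l1    # close to 1 for coplanar points
        return linearity, planarity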
NASA Astrophysics Data System (ADS)
Yang, Hongxin; Su, Fulin
2018-01-01
We propose a moving target analysis algorithm using speeded-up robust features (SURF) and the regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Unlike traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce erroneous matching pairs. After that, the target centroid is detected by the regular moment. Consequently, a cost function based on the correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
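A minimal sketch of SURF interest-point extraction and matching with OpenCV is given below; it assumes an opencv-contrib build in which SURF is available, and it approximates the paper's bilateral registering model with a simple Lowe-style ratio test, which is not the authors' scheme.

    import cv2

    def match_surf_features(img1, img2, ratio=0.75):
        """Detect SURF keypoints in two grayscale images and keep matches
        passing a ratio test (assumes at least 2 candidates per query)."""
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        kp1, des1 = surf.detectAndCompute(img1, None)
        kp2, des2 = surf.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < ratio * n.distance]
        return kp1, kp2, good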
[Determination of biphenyl ether herbicides in water using HPLC with cloud-point extraction].
He, Cheng-Yan; Li, Yuan-Qian; Wang, Shen-Jiao; Ouyang, Hua-Xue; Zheng, Bo
2010-01-01
To determine residues of multiple biphenyl ether herbicides simultaneously in water using high performance liquid chromatography (HPLC) with cloud-point extraction. The residues of eight biphenyl ether herbicides (bentazone, fomesafen, acifluorfen, aclonifen, bifenox, fluoroglycofen-ethyl, nitrofen, and oxyfluorfen) in water samples were extracted by cloud-point extraction with Triton X-114. The analytes were separated and determined using reverse-phase HPLC with an ultraviolet detector at 300 nm. Optimized conditions were applied for the pretreatment of the water samples and for the chromatographic separation. There was a good linear correlation between the concentration and the peak area of the analytes in the range of 0.05-2.00 mg/L (r = 0.9991-0.9998). Except for bentazone, the spiked recoveries of the biphenyl ether herbicides in the water samples ranged from 80.1% to 100.9%, with relative standard deviations ranging from 2.70% to 6.40%. The detection limit of the method ranged from 0.10 microg/L to 0.50 microg/L. The proposed method is simple, rapid and sensitive, and can meet the requirements for the simultaneous determination of multiple biphenyl ether herbicides in natural waters.
De Los Ríos, F. A.; Paluszny, M.
2015-01-01
We consider some methods to extract information about the rotator cuff from magnetic resonance images; the study aims to define an alternative display method that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are textured and displayed with the information of the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees a uniform distribution of points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower than that of deterministic and other standard statistical techniques. PMID:25650281
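Trilinear interpolation itself is standard; a compact Python sketch is shown below. It assumes the volume is a 3D scalar array indexed [x, y, z] and that the query point lies at least one voxel inside the boundary.

    import numpy as np

    def trilinear(volume, x, y, z):
        """Trilinearly interpolate a scalar 3D volume at a fractional
        coordinate (x, y, z); blends the 8 surrounding voxel values."""
        x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
        dx, dy, dz = x - x0, y - y0, z - z0
        value = 0.0
        for i in (0, 1):
            for j in (0, 1):
                for k in (0, 1):
                    w = ((dx if i else 1 - dx) *
                         (dy if j else 1 - dy) *
                         (dz if k else 1 - dz))
                    value += w * volume[x0 + i, y0 + j, z0 + k]
        return value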
MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.
Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel
2016-01-01
We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments on both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
Sinhal, Tapati Manohar; Shah, Ruchi Rani Purvesh; Jais, Pratik Subhas; Shah, Nimisha Chinmay; Hadwani, Krupali Dhirubhai; Rothe, Tushar; Sinhal, Neha Nilesh
2018-01-01
The aim of this study is to compare and evaluate the sealing ability of the newly introduced C-point system, cold lateral condensation, and the thermoplasticized gutta-percha obturating technique using a dye extraction method. Sixty extracted maxillary central incisors were decoronated below the cementoenamel junction. Working length was established, and biomechanical preparation was done using K3 rotary files with a standard irrigation protocol. Teeth were divided into three groups according to the obturation protocol: Group I, cold lateral condensation; Group II, thermoplasticized gutta-percha; and Group III, C-point obturating system. After obturation, all samples were subjected to microleakage assessment using the dye extraction method. The obtained scores were statistically analyzed using the ANOVA test and post hoc Tukey's test. One-way analysis of variance revealed a significant difference among the three groups (P = 0.000 < 0.05). Tukey's HSD post hoc test for multiple comparisons showed that Groups II and III performed significantly better than Group I. Group III performed better than Group II, with no significant difference. All obturating techniques showed some degree of microleakage. Root canals filled with the C-point system showed the least microleakage, followed by the thermoplasticized obturating technique, with no significant difference between them. The C-point obturation system could be an alternative to the cold lateral condensation technique.
Yue, Chun-Hua; Zheng, Li-Tao; Guo, Qi-Ming; Li, Kun-Ping
2014-05-01
To establish a new method for the extraction and separation of curcuminoids from Curcuma longa rhizome by cloud-point preconcentration using microemulsions as solvent. Spectrophotometry was used to determine the solubility of curcumin in different oil phases, emulsifiers and auxiliary emulsifiers, and the microemulsion prescription was optimized using a pseudo-ternary phase diagram. The extraction process was optimized by uniform experimental design, and the curcuminoids were separated from the microemulsion extract by cloud-point preconcentration. The oil phase was oleic acid ethyl ester; the emulsifier was OP emulsifier; the auxiliary emulsifier was polyethylene glycol (PEG) 400; the ratio of emulsifier to auxiliary emulsifier was 5:1; the microemulsion prescription was water-oleic acid ethyl ester-mixed emulsifier (0.45:0.1:0.45). The optimum extraction conditions were: time 12.5 min, temperature 52 °C, power 360 W, frequency 400 kHz, and a liquid-solid ratio of 40:1. The extraction rate of curcuminoids was 92.17% in the microemulsion and 86.85% in the oil phase. Curcuminoids are soluble in this microemulsion prescription with a good extraction rate. The method is simple and suitable for curcuminoid extraction from Curcuma longa rhizome.
Extraction and analysis of neuron firing signals from deep cortical video microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerekes, Ryan A; Blundon, Jay
We introduce a method for extracting and analyzing neuronal activity time signals from video of the cortex of a live animal. The signals correspond to the firing activity of individual cortical neurons. Activity signals are based on the changing fluorescence of calcium indicators in the cells over time. We propose a cell segmentation method that relies on a user-specified center point, from which the signal extraction method proceeds. A stabilization approach is used to reduce tissue motion in the video. The extracted signal is then processed to flatten the baseline and detect action potentials. We show results from applying the method to a cortical video of a live mouse.
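As a rough illustration of the final processing stage (baseline flattening and event detection), a Python sketch follows. The running-percentile baseline, window length, and MAD-based threshold are assumptions for the sketch, not the authors' exact pipeline.

    import numpy as np
    from scipy.ndimage import percentile_filter

    def dff_and_events(trace, fs, win_s=30.0, k=3.0):
        """Flatten the baseline of a fluorescence trace with a running
        low-percentile filter, then flag putative firing events."""
        win = max(3, int(win_s * fs))
        baseline = percentile_filter(trace, percentile=10, size=win)
        dff = (trace - baseline) / baseline            # delta-F over F
        mad = np.median(np.abs(dff - np.median(dff)))  # robust spread
        events = np.where(dff > k * mad / 0.6745)[0]   # samples above threshold
        return dff, events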
Naeemullah; Kazi, Tasneem G; Shah, Faheem; Afridi, Hassan I; Baig, Jameel Ahmed; Soomro, Abdul Sattar
2013-01-01
A simple method for the preconcentration of cadmium (Cd) and nickel (Ni) in drinking and wastewater samples was developed. Cloud point extraction has been used for the preconcentration of both metals, after formation of complexes with 8-hydroxyquinoline (8-HQ) and extraction with the surfactant octylphenoxypolyethoxyethanol (Triton X-114). Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the Cd and Ni contents were measured by flame atomic absorption spectrometry. The experimental variables, such as pH, amounts of reagents (8-HQ and Triton X-114), temperature, incubation time, and sample volume, were optimized. After optimization of the complexation and extraction conditions, enhancement factors of 80 and 61, with LOD values of 0.22 and 0.52 microg/L, were obtained for Cd and Ni, respectively. The proposed method was applied satisfactorily for the determination of both elements in drinking and wastewater samples.
Real-time implementation of camera positioning algorithm based on FPGA & SOPC
NASA Astrophysics Data System (ADS)
Yang, Mingcao; Qiu, Yuehong
2014-09-01
In recent years, with the development of positioning algorithms and FPGAs, real-time, rapid and accurate camera positioning has become possible. Through an in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marked points in space. The completed work includes: (1) a CMOS sensor, driven through the FPGA hardware, captures the pixels of three target objects; visible-light LEDs are used here as the target points of the instrument. (2) Prior to extraction of the feature point coordinates, the image is filtered to suppress effects introduced by the physical properties of the platform; median filtering is used here. (3) The marker coordinates are extracted by the FPGA hardware circuit using a new iterative threshold selection method for image segmentation; the binary image is then labelled, and the coordinates of the feature points are computed by the center-of-gravity method. (4) The direct linear transformation (DLT) and epipolar constraint methods are applied to the three-dimensional reconstruction of space coordinates from the planar-array CMOS system. An SOPC system-on-a-chip is used, taking advantage of its dual-core architecture to run the matching and coordinate operations separately, thus increasing processing speed.
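A software sketch of step (3), iterative threshold selection followed by center-of-gravity extraction, is given below. The paper implements this in an FPGA hardware circuit, so this Python version, using a Ridler-Calvard-style iteration, is only an algorithmic illustration.

    import numpy as np

    def iterative_threshold(img, eps=0.5):
        """Ridler-Calvard style iterative threshold selection: repeatedly
        average the means of the two classes until convergence."""
        t = img.mean()
        while True:
            lo, hi = img[img <= t], img[img > t]
            if lo.size == 0 or hi.size == 0:
                return t
            t_new = 0.5 * (lo.mean() + hi.mean())
            if abs(t_new - t) < eps:
                return t_new
            t = t_new

    def marker_centroid(img):
        """Intensity-weighted centre of gravity of the bright marker
        pixels (assumes at least one pixel exceeds the threshold)."""
        ys, xs = np.nonzero(img > iterative_threshold(img))
        w = img[ys, xs].astype(float)
        return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()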
Investigation of cloud point extraction for the analysis of metallic nanoparticles in a soil matrix
Hadri, Hind El; Hackley, Vincent A.
2017-01-01
The characterization of manufactured nanoparticles (MNPs) in environmental samples is necessary to assess their behavior, fate and potential toxicity. Several techniques are available, but the limit of detection (LOD) is often too high for environmentally relevant concentrations. Therefore, pre-concentration of MNPs is an important component in the sample preparation step, in order to apply analytical tools with a LOD higher than the ng kg−1 level. The objective of this study was to explore cloud point extraction (CPE) as a viable method to pre-concentrate gold nanoparticles (AuNPs), as a model MNP, spiked into a soil extract matrix. To that end, different extraction conditions and surface coatings were evaluated in a simple matrix. The CPE method was then applied to soil extract samples spiked with AuNPs. Total gold, determined by inductively coupled plasma mass spectrometry (ICP-MS) following acid digestion, yielded a recovery greater than 90%. The first known application of single particle ICP-MS and asymmetric flow field-flow fractionation to evaluate the preservation of the AuNP physical state following CPE extraction is demonstrated. PMID:28507763
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warner-Schmid, D.; Hoshi, Suwaru; Armstrong, D.W.
Aqueous solutions of nonionic surfactants are known to undergo phase separation at elevated temperatures. This phenomenon is known as 'clouding', and the temperature at which it occurs is referred to as the cloud point. Permethylhydroxypropyl-β-cyclodextrin (PMHP-β-CD) was synthesized, and aqueous solutions containing it were found to undergo similar cloud-point behavior. Factors that affect the phase separation of PMHP-β-CD were investigated. Subsequently, the cloud-point extractions of several aromatic compounds (i.e., acetanilide, aniline, 2,2′-dihydroxybiphenyl, N-methylaniline, 2-naphthol, o-nitroaniline, m-nitroaniline, p-nitroaniline, nitrobenzene, o-nitrophenol, m-nitrophenol, p-nitrophenol, 4-phenazophenol, 3-phenylphenol, and 2-phenylbenzimidazole) from dilute aqueous solution were evaluated. Although the extraction efficiency of the compounds varied, most can be quantitatively extracted if sufficient PMHP-β-CD is used. For those few compounds that are not extracted (e.g., o-nitroacetanilide), the cloud-point procedure may be an effective one-step isolation or purification method. 18 refs., 2 figs., 3 tabs.
Heydari, Rouhollah; Elyasi, Najmeh S
2014-10-01
A novel, simple, and effective ion-pair cloud-point extraction coupled with a gradient high-performance liquid chromatography method was developed for determination of thiamine (vitamin B1), niacinamide (vitamin B3), pyridoxine (vitamin B6), and riboflavin (vitamin B2) in plasma and urine samples. The extraction and separation of vitamins were achieved based on an ion-pair formation approach between these ionizable analytes and 1-heptanesulfonic acid sodium salt as an ion-pairing agent. Influential variables on the ion-pair cloud-point extraction efficiency, such as the ion-pairing agent concentration, ionic strength, pH, volume of Triton X-100, extraction temperature, and incubation time have been fully evaluated and optimized. Water-soluble vitamins were successfully extracted by 1-heptanesulfonic acid sodium salt (0.2% w/v) as ion-pairing agent with Triton X-100 (4% w/v) as surfactant phase at 50°C for 10 min. The calibration curves showed good linearity (r(2) > 0.9916) and precision in the concentration ranges of 1-50 μg/mL for thiamine and niacinamide, 5-100 μg/mL for pyridoxine, and 0.5-20 μg/mL for riboflavin. The recoveries were in the range of 78.0-88.0% with relative standard deviations ranging from 6.2 to 8.2%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Convex Hull Aided Registration Method (CHARM).
Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian
2017-09-01
Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve non-rigid deformation in the form of thin-plate spline of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.
Chen, Ligang; Zhao, Qi; Jin, Haiyan; Zhang, Xiaopan; Xu, Yang; Yu, Aimin; Zhang, Hanqi; Ding, Lan
2010-04-15
A method based on coupling of cloud point extraction (CPE) with high performance liquid chromatography separation and ultraviolet detection was developed for determination of xanthohumol in beer. The nonionic surfactant Triton X-114 was chosen as the extraction medium. The parameters affecting the CPE were evaluated and optimized. The highest extraction yield of xanthohumol was obtained with 2.5% of Triton X-114 (v/v) at pH 5.0, 15% of sodium chloride (w/v), 70 degrees C of equilibrium temperature and 10 min of equilibrium time. Under these conditions, the limit of detection of xanthohumol is 0.003 mg L(-1). The intra- and inter-day precisions expressed as relative standard deviations are 4.6% and 6.3%, respectively. The proposed method was successfully applied for determination of xanthohumol in various beer samples. The contents of xanthohumol in these samples are in the range of 0.052-0.628 mg L(-1), and the recoveries ranging from 90.7% to 101.9% were obtained. The developed method was demonstrated to be efficient, green, rapid and inexpensive for extraction and determination of xanthohumol in beer. (c) 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ghasemi, Elham; Kaykhaii, Massoud
2016-07-01
A novel, green, simple and fast method was developed for the spectrophotometric determination of Malachite green, Crystal violet, and Rhodamine B in water samples based on micro-cloud point extraction (MCPE) at room temperature. This is the first report on the application of MCPE to dyes. In this method, to reach the cloud point at room temperature, the MCPE procedure was carried out in brine using Triton X-114 as a non-ionic surfactant. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized conditions, calibration curves were found to be linear in the concentration ranges of 0.06-0.60 mg/L, 0.10-0.80 mg/L, and 0.03-0.30 mg/L, with enrichment factors of 29.26, 85.47 and 28.36, respectively, for Malachite green, Crystal violet, and Rhodamine B. Limits of detection were between 2.2 and 5.1 μg/L.
Human Body 3D Posture Estimation Using Significant Points and Two Cameras
Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin
2014-01-01
This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine (SVM) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also the included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations from the 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method performs better than other gray-level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422
Speech-Message Extraction from Interference Introduced by External Distributed Sources
NASA Astrophysics Data System (ADS)
Kanakov, V. A.; Mironov, N. A.
2017-08-01
The problem considered in this study is the extraction of a speech signal originating from a certain spatial point and the calculation of the intelligibility of the extracted voice message. It is solved by a method that decreases the influence of interfering speech-message sources on the extracted signal. This method is based on introducing time delays, which depend on the spatial coordinates, into the recording channels. Audio recordings of the voices of eight different people were used as test objects during the studies. It is shown that an increase in the number of microphones improves the intelligibility of the speech message extracted from the interference.
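The delay-based extraction described above is close in spirit to delay-and-sum beamforming; a minimal Python sketch under that reading follows, with sample-accurate (rather than fractional) delays and illustrative parameter names.

    import numpy as np

    def delay_and_sum(channels, mic_pos, focus, fs, c=343.0):
        """Steer a microphone array towards a spatial focus point by
        compensating per-channel propagation delays, then averaging.
        channels: (n_mics, n_samples); mic_pos, focus: metres."""
        dists = np.linalg.norm(mic_pos - focus, axis=1)
        delays = (dists - dists.min()) / c            # relative delays, s
        shifts = np.round(delays * fs).astype(int)    # delays in samples
        n = channels.shape[1] - shifts.max()
        aligned = np.stack([ch[s:s + n] for ch, s in zip(channels, shifts)])
        return aligned.mean(axis=0)   # in-phase speech adds, interference averages out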
Sub-Pixel Extraction of Laser Stripe Center Using an Improved Gray-Gravity Method †
Li, Yuehua; Zhou, Jingbo; Huang, Fengshan; Liu, Lijian
2017-01-01
Laser stripe center extraction is a key step in the profile measurement of line structured light sensors (LSLS). To obtain the center coordinates accurately at the sub-pixel level, an improved gray-gravity method (IGGM) is proposed. First, the center points of the stripe are computed using the gray-gravity method (GGM) for all columns of the image. By fitting these points with the moving least squares algorithm, the tangential vector, the normal vector and the radius of curvature can be robustly obtained. A rectangular region is then defined around each center point; its two sides parallel to the tangential vector adapt their lengths to the radius of curvature. The coordinate of each center point is then recalculated within the rectangular region and along the direction of the normal vector. The center uncertainty is also analyzed based on the Monte Carlo method. The experimental results indicate that the IGGM is suitable both for smooth stripes and for those with sharp corners, and that high-accuracy center points can be obtained at a relatively low computational cost. The measured results for stair and screw surfaces further demonstrate the effectiveness of the method. PMID:28394288
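The GGM baseline that the IGGM refines is simply a per-column intensity-weighted centroid; a minimal Python sketch is shown below (the IGGM's curvature-adaptive rectangle and normal-direction recomputation are not reproduced here).

    import numpy as np

    def gray_gravity_centers(img, min_sum=1e-6):
        """Per-column intensity-weighted centroid (gray-gravity method),
        giving a sub-pixel stripe row for every image column."""
        rows = np.arange(img.shape[0], dtype=float)[:, None]
        w = img.astype(float)
        s = w.sum(axis=0)
        centers = (rows * w).sum(axis=0) / np.maximum(s, min_sum)
        return np.where(s > min_sum, centers, np.nan)  # NaN where no stripe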
NASA Astrophysics Data System (ADS)
Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong
2018-03-01
This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest-neighbour search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which directly reflects the trend of the curvature within the damage region. The extracted damage region is divided into triangular prism elements by triangulation. The volume and mass of a single element are calculated by geometric segmentation. Finally, the total volume and mass of the damage region are obtained by the principle of superposition. The proposed method is applied to several typical damage cases and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which is useful for evaluating the damage in a further research stage.
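The volume-by-superposition step can be illustrated with a short Python sketch that sums triangular-prism elements over a triangulated damage region; the data layout and the optional density argument are assumptions for the sketch.

    import numpy as np

    def damage_volume(vertices, triangles, depths, density=None):
        """Sum triangular-prism elements over a triangulated damage region.
        vertices: (n, 2) planar coordinates; triangles: (m, 3) indices;
        depths: per-vertex damage depth below the nominal surface."""
        total = 0.0
        for tri in triangles:
            (x1, y1), (x2, y2), (x3, y3) = vertices[tri]
            area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
            total += area * depths[tri].mean()   # prism: area x mean depth
        mass = total * density if density is not None else None
        return total, mass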
Determination of Cd in urine by cloud point extraction-tungsten coil atomic absorption spectrometry.
Donati, George L; Pharr, Kathryn E; Calloway, Clifton P; Nóbrega, Joaquim A; Jones, Bradley T
2008-09-15
Cadmium concentrations in human urine are typically at or below the 1 microgL(-1) level, so only a handful of techniques may be appropriate for this application. These include sophisticated methods such as graphite furnace atomic absorption spectrometry and inductively coupled plasma mass spectrometry. While tungsten coil atomic absorption spectrometry is a simpler and less expensive technique, its practical detection limits often prohibit the detection of Cd in normal urine samples. In addition, the nature of the urine matrix often necessitates accurate background correction techniques, which would add expense and complexity to the tungsten coil instrument. This manuscript describes a cloud point extraction method that reduces matrix interference while preconcentrating Cd by a factor of 15. Ammonium pyrrolidinedithiocarbamate and Triton X-114 are used as complexing agent and surfactant, respectively, in the extraction procedure. Triton X-114 forms an extractant coacervate surfactant-rich phase that is denser than water, so the aqueous supernatant is easily removed leaving the metal-containing surfactant layer intact. A 25 microL aliquot of this preconcentrated sample is placed directly onto the tungsten coil for analysis. The cloud point extraction procedure allows for simple background correction based either on the measurement of absorption at a nearby wavelength, or measurement of absorption at a time in the atomization step immediately prior to the onset of the Cd signal. Seven human urine samples are analyzed by this technique and the results are compared to those found by the inductively coupled plasma mass spectrometry analysis of the same samples performed at a different institution. The limit of detection for Cd in urine is 5 ngL(-1) for cloud point extraction tungsten coil atomic absorption spectrometry. The accuracy of the method is determined with a standard reference material (toxic metals in freeze-dried urine) and the determined values agree with the reported levels at the 95% confidence level.
Li, Wei; Zhao, Li-Chun; Sun, Yin-Shi; Lei, Feng-Jie; Wang, Zi; Gui, Xiong-Bin; Wang, Hui
2012-01-01
In this work, pressurized liquid extraction (PLE) of three acetophenones (4-hydroxyacetophenone, baishouwubenzophenone, and 2,4-dihydroxyacetophenone) from Cynanchum bungei (ACB) was investigated. The optimal conditions for extraction of ACB were obtained using a Box-Behnken design consisting of 17 experimental points, as follows: ethanol (100%) as the extraction solvent at a temperature of 120 °C and an extraction pressure of 1500 psi, using one extraction cycle with a static extraction time of 17 min. The extracted samples were analyzed by high-performance liquid chromatography using a UV detector. Under these optimal conditions, the experimental values agreed with the values predicted by analysis of variance. The ACB extraction yield with optimal PLE was higher than that obtained by Soxhlet extraction and heat-reflux extraction methods. The results suggest that the PLE method provides a good alternative for acetophenone extraction. PMID:23203079
Mapping Urban Tree Canopy Cover Using Fused Airborne LIDAR and Satellite Imagery Data
NASA Astrophysics Data System (ADS)
Parmehr, Ebadat G.; Amati, Marco; Fraser, Clive S.
2016-06-01
Urban green spaces, particularly urban trees, play a key role in enhancing the liveability of cities. The availability of accurate and up-to-date maps of tree canopy cover is important for sustainable development of urban green spaces. LiDAR point clouds are widely used for the mapping of buildings and trees, and several LiDAR point cloud classification techniques have been proposed for automatic mapping. However, the effectiveness of point cloud classification techniques for automated tree extraction from LiDAR data can be impacted to the point of failure by the complexity of tree canopy shapes in urban areas. Multispectral imagery, which provides complementary information to LiDAR data, can improve point cloud classification quality. This paper proposes a reliable method for the extraction of tree canopy cover from fused LiDAR point cloud and multispectral satellite imagery data. The proposed method initially associates each LiDAR point with spectral information from the co-registered satellite imagery data. It calculates the normalised difference vegetation index (NDVI) value for each LiDAR point and corrects tree points which have been misclassified as buildings. Then, region growing of tree points, taking the NDVI value into account, is applied. Finally, the LiDAR points classified as tree points are utilised to generate a canopy cover map. The performance of the proposed tree canopy cover mapping method is experimentally evaluated on a data set of airborne LiDAR and WorldView 2 imagery covering a suburb in Melbourne, Australia.
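The NDVI-based correction of misclassified building points can be sketched in a few lines of Python; the class codes and the NDVI threshold below are hypothetical, not values from the paper.

    import numpy as np

    BUILDING, TREE = 6, 5   # hypothetical class codes (ASPRS-like)

    def correct_building_points(labels, red, nir, ndvi_min=0.3):
        """Relabel LiDAR points misclassified as buildings when the
        co-registered imagery indicates vegetation (NDVI above threshold).
        labels: per-point class codes; red, nir: per-point reflectance."""
        ndvi = (nir - red) / np.maximum(nir + red, 1e-9)
        fixed = labels.copy()
        fixed[(labels == BUILDING) & (ndvi > ndvi_min)] = TREE
        return fixed, ndvi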
Xin, Li-Ping; Chai, Xin-Sheng; Hu, Hui-Chao; Barnes, Donald G
2014-09-05
This work demonstrates a novel method for the rapid determination of total solid content in viscous liquid (polymer-enriched) samples. The method is based on multiple headspace extraction gas chromatography (MHE-GC) from a headspace vial at a temperature above the boiling point of water, so the progression of water loss from the tested liquid due to evaporation can be followed. With limited MHE-GC testing (e.g., 5 extractions) and a one-point calibration procedure (i.e., recording the weight difference before and after analysis), the total amount of water in the sample can be determined, from which the total solid content of the liquid can be calculated. A number of black liquors were analyzed by the new method, which yielded results that closely matched those of the reference method; i.e., the results of the two methods differed by no more than 2.3%. Compared with the reference method, the MHE-GC method is much simpler and more practical. Therefore, it is suitable for the rapid determination of the solid content in many polymer-containing liquid samples. Copyright © 2014 Elsevier B.V. All rights reserved.
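One plausible reading of the MHE-GC principle is that successive extraction peak areas decay geometrically, so the full (infinite-extraction) area sum, and hence the total water, can be extrapolated from a few extractions and anchored by the weighed mass loss; the Python sketch below follows that assumption and is not the authors' calibration code.

    import numpy as np

    def mhe_total_water(areas, mass_loss_during_run):
        """Estimate total extractable water from a few MHE-GC peak areas.
        Assumes geometric decay A_i = A_1 * q**(i-1) with q < 1, so the
        infinite sum is A_1 / (1 - q); the vial's measured mass loss
        calibrates grams per unit peak area (one-point calibration)."""
        areas = np.asarray(areas, dtype=float)
        i = np.arange(len(areas))
        q = np.exp(np.polyfit(i, np.log(areas), 1)[0])  # fitted decay ratio
        total_area = areas[0] / (1.0 - q)               # extrapolated sum
        k = mass_loss_during_run / areas.sum()          # g per area unit
        return k * total_area                           # grams of water

The total solid content then follows by subtracting the estimated water mass from the sample mass.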
Detection of enteric viruses in shellfish
USDA-ARS?s Scientific Manuscript database
Norovirus and hepatitis A virus contamination are significant threats to the safety of shellfish and other foods. Methods for the extraction and assay of these viruses from shellfish are complex, time consuming, and technically challenging. Here, we itemize some of the salient points in extracting...
NASA Astrophysics Data System (ADS)
Wang, Weixing; Wang, Zhiwei; Han, Ya; Li, Shuang; Zhang, Xin
2015-03-01
In order to ensure safety, long-term stability and quality control in modern tunneling operations, the acquisition of geotechnical information about encountered rock conditions and detailed installed-support information is required. The limited space and time in an operational tunnel environment make acquiring such data challenging; laser scanning in a tunneling environment, however, shows great potential. The surveying and mapping of tunnels are crucial for their optimal use after construction and in routine inspections. Most applications focus on the geometric information of the tunnels extracted from the laser scanning data, and two kinds are widely discussed: deformation measurement and feature extraction. Traditional deformation measurement in an underground environment is performed with a series of permanent control points installed around the profile of an excavation, which is unsuitable for a global consideration of the investigated area. Using laser scanning for deformation analysis provides many benefits compared to traditional monitoring techniques. The change in profile can be fully characterized, and areas of anomalous movement can easily be separated from overall trends due to the high density of the point cloud data. Furthermore, monitoring with a laser scanner does not require the permanent installation of control points, so the monitoring can be completed more quickly after excavation, and the scanning is non-contact, hence no damage is done during the installation of temporary control points. The main drawback of using laser scanning for deformation monitoring is that the point accuracy of the original data is generally of the same magnitude as the smallest deformations to be measured. To overcome this, statistical techniques and three-dimensional image processing techniques for the point clouds must be developed. To control over/underbreak detection of the roadway safely, effectively and easily, and to address the difficulties of roadway data collection, this paper presents a new method for continuous section extraction and over/underbreak detection based on 3D laser scanning technology and image processing. The method comprises three steps: Canny edge detection, local axis fitting, and continuous section extraction with over/underbreak detection of each section. First, after Canny edge detection, least-squares curve fitting is used to fit the axis locally. Then the attitude of the local roadway is adjusted so that the roadway axis is consistent with the extraction reference direction, and sections are extracted along that direction. Finally, the actual cross-section is compared with the cross-sectional design to complete the over/underbreak detection. Experimental results show that the proposed method has a great advantage in computational cost and ensures that cross-sections are intercepted orthogonally, compared with traditional detection methods.
Cloud Point Extraction for Electroanalysis: Anodic Stripping Voltammetry of Cadmium.
Rusinek, Cory A; Bange, Adam; Papautsky, Ian; Heineman, William R
2015-06-16
Cloud point extraction (CPE) is a well-established technique for the preconcentration of hydrophobic species from water without the use of organic solvents. Subsequent analysis is then typically performed via atomic absorption spectroscopy (AAS), UV-vis spectroscopy, or high performance liquid chromatography (HPLC). However, the suitability of CPE for electroanalytical methods such as stripping voltammetry has not been reported. We demonstrate the use of CPE for electroanalysis using the determination of cadmium (Cd(2+)) by anodic stripping voltammetry (ASV). Rather than using the chelating agents which are commonly used in CPE to form a hydrophobic, extractable metal complex, we used iodide and sulfuric acid to neutralize the charge on Cd(2+) to form an extractable ion pair. This offers good selectivity for Cd(2+) as no interferences were observed from other heavy metal ions. Triton X-114 was chosen as the surfactant for the extraction because its cloud point temperature is near room temperature (22-25 °C). Bare glassy carbon (GC), bismuth-coated glassy carbon (Bi-GC), and mercury-coated glassy carbon (Hg-GC) electrodes were compared for the CPE-ASV. A detection limit for Cd(2+) of 1.7 nM (0.2 ppb) was obtained with the Hg-GC electrode. ASV with CPE gave a 20x decrease (4.0 ppb) in the detection limit compared to ASV without CPE. The suitability of this procedure for the analysis of tap and river water samples was demonstrated. This simple, versatile, environmentally friendly, and cost-effective extraction method is potentially applicable to a wide variety of transition metals and organic compounds that are amenable to detection by electroanalytical methods.
Complex eigenvalue extraction in NASTRAN by the tridiagonal reduction (FEER) method
NASA Technical Reports Server (NTRS)
Newman, M.; Mann, F. I.
1977-01-01
An extension of the Tridiagonal Reduction (FEER) method to complex eigenvalue analysis in NASTRAN is described. As in the case of real eigenvalue analysis, the eigensolutions closest to a selected point in the eigenspectrum are extracted from a reduced, symmetric, tridiagonal eigenmatrix whose order is much lower than that of the full size problem. The reduction process is effected automatically, and thus avoids the arbitrary lumping of masses and other physical quantities at selected grid points. The statement of the algebraic eigenvalue problem admits mass, damping and stiffness matrices which are unrestricted in character, i.e., they may be real, complex, symmetric or unsymmetric, singular or non-singular.
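The same idea, extracting the eigensolutions nearest a selected point of the eigenspectrum through a reduced Krylov (Lanczos/Arnoldi) problem, can be illustrated with SciPy's shift-invert mode; this is of course not the FEER code itself, and the sparse matrix below is a random stand-in for a structural eigenmatrix.

    import numpy as np
    from scipy.sparse import diags, random as sprandom
    from scipy.sparse.linalg import eigs

    n = 500
    # unsymmetric sparse matrix: diagonal spectrum plus random perturbation
    A = (diags(np.linspace(1.0, 100.0, n))
         + 0.01 * sprandom(n, n, density=0.01, random_state=0)).tocsc()
    shift = 30.0                            # selected point in the eigenspectrum
    vals, vecs = eigs(A, k=6, sigma=shift)  # shift-invert: eigenvalues nearest the shift
    print(np.sort(vals.real))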
Sun, Mei; Wu, Qianghua
2010-04-15
A cloud point extraction (CPE) method for the preconcentration of ultra-trace aluminum in human albumin prior to its determination by graphite furnace atomic absorption spectrometry (GFAAS) has been developed in this paper. The CPE method was based on the complex of Al(III) with 1-(2-pyridylazo)-2-naphthol (PAN), with Triton X-114 used as the non-ionic surfactant. The main factors affecting cloud point extraction efficiency, such as pH of the solution, concentration and kind of complexing agent, concentration of non-ionic surfactant, and equilibration temperature and time, were investigated in detail. An enrichment factor of 34.8 was obtained for the preconcentration of Al(III) from a 10 mL solution. Under the optimal conditions, the detection limit of Al(III) was 0.06 ng mL(-1). The relative standard deviation (n=7) for the sample was 3.6%, and recoveries of aluminum ranged from 92.3% to 94.7% for three samples. This method is simple, accurate, sensitive and can be applied to the determination of ultra-trace aluminum in human albumin. 2009 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap
2018-04-01
In machine vision, typical heuristic methods to extract parameterized objects from raw data points are the Hough transform and RANSAC. Bayesian models carry the promise of optimally extracting such parameterized objects given a correct definition of the model and of the type of noise at hand. One category of solvers for Bayesian models is Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. To address this, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, that performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it is able to propose a new cluster composed of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in a machine vision context, its applications extend to the very general domain of statistical inference.
Extraction of Features from High-resolution 3D LiDaR Point-cloud Data
NASA Astrophysics Data System (ADS)
Keller, P.; Kreylos, O.; Hamann, B.; Kellogg, L. H.; Cowgill, E. S.; Yikilmaz, M. B.; Hering-Bertram, M.; Hagen, H.
2008-12-01
Airborne and tripod-based LiDaR scans are capable of producing new insight into geologic features by providing high-quality 3D measurements of the landscape. High-resolution LiDaR is a promising method for studying slip on faults, erosion, and other landscape-altering processes. LiDaR scans can produce up to several billion individual point returns associated with the reflection of a laser from natural and engineered surfaces; these point clouds are typically used to derive a high-resolution digital elevation model (DEM). Currently, only a few methods exist that support the analysis of the data at full resolution and in the natural 3D perspective in which it was collected by working directly with the points. We are developing new algorithms for extracting features from LiDaR scans, and present a method for determining the local curvature of a LiDaR data set, working directly with the individual point returns of a scan. Computing the curvature enables us to rapidly and automatically identify key features such as ridge-lines, stream beds, and edges of terraces. We fit polynomial surface patches via a moving least squares (MLS) approach to local point neighborhoods, determining curvature values for each point. The size of the local point neighborhood is defined by the user. Since both terrestrial and airborne LiDaR scans suffer from high noise, we apply additional pre- and post-processing smoothing steps to eliminate unwanted features. LiDaR data also capture objects like buildings and trees, greatly complicating the task of extracting reliable curvature values. Hence, we use a stochastic approach to determine whether a point can be reliably used to estimate curvature or not. Additionally, we have developed a graph-based approach to establish connectivity among points that correspond to regions of high curvature. The result is an explicit description of, for example, ridge-lines. We have applied our method to the raw point cloud data collected as part of the GeoEarthScope B-4 project on a section of the San Andreas Fault (Segment SA09). This section provides an excellent test site for our method as it exposes the fault clearly, contains few extraneous structures, and exhibits multiple dry stream-beds that have been offset by motion on the fault.
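A simplified, unweighted stand-in for the MLS curvature estimate can be sketched as follows: fit a quadratic height field in a PCA plane-fit frame and read off the mean curvature at the centroid. The true MLS fit adds distance weighting, and the paper's stochastic reliability test is not reproduced.

    import numpy as np

    def local_curvature(neighborhood):
        """Estimate mean curvature at the centroid of a (k, 3) point
        neighborhood via a least-squares quadratic height-field fit."""
        p = neighborhood - neighborhood.mean(axis=0)
        # local frame from PCA: the last right-singular vector is the normal
        _, _, vt = np.linalg.svd(p, full_matrices=False)
        u, v, h = p @ vt[0], p @ vt[1], p @ vt[2]
        # fit h(u, v) = a u^2 + b uv + c v^2 + d u + e v + f
        A = np.column_stack([u * u, u * v, v * v, u, v, np.ones_like(u)])
        a, b, c, *_ = np.linalg.lstsq(A, h, rcond=None)[0]
        return a + c   # mean curvature at the origin for small slopes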
Mixed micelle cloud point-magnetic dispersive μ-solid phase extraction of doxazosin and alfuzosin
NASA Astrophysics Data System (ADS)
Gao, Nannan; Wu, Hao; Chang, Yafen; Guo, Xiaozhen; Zhang, Lizhen; Du, Liming; Fu, Yunlong
2015-01-01
Mixed micelle cloud point extraction (MM-CPE) combined with magnetic dispersive μ-solid phase extraction (MD-μ-SPE) has been developed as a new approach for the extraction of doxazosin (DOX) and alfuzosin (ALF) prior to fluorescence analysis. A mixed micelle of the anionic surfactant sodium dodecyl sulfate and the non-ionic polyoxyethylene(7.5)nonylphenylether was used as the extraction solvent in MM-CPE, and diatomite-bonded Fe3O4 magnetic nanoparticles were used as the adsorbent in MD-μ-SPE. The method was based on MM-CPE of DOX and ALF into the surfactant-rich phase. Magnetic materials were used to retrieve the surfactant-rich phase, which was easily separated from the aqueous phase under a magnetic field. At optimum conditions, linear responses for DOX and ALF were obtained in the range of 5-300 ng mL(-1), and the limits of detection were 0.21 and 0.16 ng mL(-1), respectively. The proposed method was successfully applied for the determination of the drugs in pharmaceutical preparations, urine samples, and plasma samples.
NASA Astrophysics Data System (ADS)
Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li
2014-09-01
This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
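The core numerical step of the spectrum-extraction abstract above can be sketched with SciPy's damped LSQR; here A is a stand-in for the sparse convolution matrix built from the PSF model and b for the flattened image, both synthetic placeholders:

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import lsqr

    A = sparse_random(10000, 2000, density=1e-3, format="csr", random_state=0)
    b = A @ np.ones(2000)            # synthetic "observed" image vector
    x = lsqr(A, b, damp=0.1)[0]      # damp adds Tikhonov-style regularization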
Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data
NASA Astrophysics Data System (ADS)
Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.
2018-04-01
With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. First, the road surface is segmented through edge detection on scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation, and road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration; noise is reduced by removing small pixel patches from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms, including template matching and feature attribute filtering, is used to classify linear markings, arrow markings and guidelines. Processing point cloud data collected by a RIEGL VUX-1 in the test area shows that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.
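A minimal sketch of adaptive thresholding with an integral image, in the spirit of the segmentation step (window size and ratio are illustrative, not the paper's settings):

    import numpy as np

    def adaptive_threshold(img, win=25, ratio=0.85):
        """Binarize an intensity image against the local mean computed per
        pixel from an integral image."""
        ii = np.pad(img.astype(np.float64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)
        h, w = img.shape
        r = win // 2
        ys, xs = np.mgrid[0:h, 0:w]
        y0, y1 = np.clip(ys - r, 0, h), np.clip(ys + r + 1, 0, h)
        x0, x1 = np.clip(xs - r, 0, w), np.clip(xs + r + 1, 0, w)
        area = (y1 - y0) * (x1 - x0)
        local_sum = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
        return img > ratio * (local_sum / area)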
Development of a point-kinetic verification scheme for nuclear reactor applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demazière, C., E-mail: demaz@chalmers.se; Dykin, V.; Jareteg, K.
In this paper, a new method that can be used for checking the proper implementation of time- or frequency-dependent neutron transport models and for verifying their ability to recover some basic reactor physics properties is proposed. This method makes use of the application of a stationary perturbation to the system at a given frequency and extraction of the point-kinetic component of the system response. Even for strongly heterogeneous systems for which an analytical solution does not exist, the point-kinetic component follows, as a function of frequency, a simple analytical form. The comparison between the extracted point-kinetic component and its expected analytical form provides an opportunity to verify and validate neutron transport solvers. The proposed method is tested on two diffusion-based codes, one working in the time domain and the other working in the frequency domain. As long as the applied perturbation has a non-zero reactivity effect, it is demonstrated that the method can be successfully applied to verify and validate time- or frequency-dependent neutron transport solvers. Although the method is demonstrated in the present paper in a diffusion theory framework, higher order neutron transport methods could be verified based on the same principles.
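For reference, the point-kinetic component is expected to follow the standard zero-power transfer function G0(iω) = 1 / (iω (Λ + Σ_i β_i/(iω + λ_i))); a minimal sketch, with delayed-neutron data supplied by the user rather than taken from the paper:

    import numpy as np

    def zero_power_transfer(omega, beta, lam, Lam):
        """G0(i*omega) from standard point kinetics; beta and lam are the
        delayed-neutron fractions and decay constants, Lam the neutron
        generation time (all user-supplied reactor data)."""
        s = 1j * np.asarray(omega, dtype=float)
        return 1.0 / (s * (Lam + np.sum(beta / (s[:, None] + lam), axis=1)))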
Welding deviation detection algorithm based on extremum of molten pool image contour
NASA Astrophysics Data System (ADS)
Zou, Yong; Jiang, Lipei; Li, Yunhua; Xue, Long; Huang, Junfen; Huang, Jiqiang
2016-01-01
Welding deviation detection is the basis of robotic tracking welding, but on-line real-time measurement of welding deviation is still not well solved by existing methods. Gas metal arc welding (GMAW) molten pool images contain abundant information that is very important for welding seam tracking control. By studying the molten pool images, the physical meaning of the curvature extrema of the molten pool contour is revealed: the points carrying deviation information, the welding wire center and the molten tip center, are the maximum and a local maximum of the contour curvature, and the horizontal welding deviation is the position difference between these two extremum points. A new method of weld deviation detection is presented, comprising preprocessing of molten pool images, extraction and segmentation of the contours, location of the contour extremum points, and calculation of the welding deviation. Extracting the contours is the premise, segmenting the contour lines is the foundation, and obtaining the contour extremum points is the key. The contour image can be extracted with a discrete dyadic wavelet transform and divided into two sub-contours, one for the welding wire and one for the molten tip. The curvature at each point of the two sub-contour lines is calculated with an approximate multi-point curvature formula for plane curves, and the two curvature extremum points are the characteristics needed for the welding deviation calculation. Tests and analyses show that the maximum error of the obtained on-line welding deviation is 2 pixels (0.16 mm), and the algorithm is stable enough to meet real-time control requirements on the pipeline at speeds below 500 mm/min. The method can be applied to on-line automatic welding deviation detection.
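A minimal sketch of a multi-point curvature estimate along a sampled contour (a Menger-style three-point formula; the step size is an assumption, not the paper's exact scheme):

    import numpy as np

    def contour_curvature(pts, step=5):
        """Approximate curvature along an (N, 2) contour using points
        `step` samples apart; returns one value per interior point."""
        p0, p1, p2 = pts[:-2*step], pts[step:-step], pts[2*step:]
        d1, d2 = p1 - p0, p2 - p1
        cross = d1[:, 0]*d2[:, 1] - d1[:, 1]*d2[:, 0]
        denom = (np.linalg.norm(d1, axis=1) * np.linalg.norm(d2, axis=1)
                 * np.linalg.norm(p2 - p0, axis=1))
        return 2 * np.abs(cross) / np.maximum(denom, 1e-12)

    # Per the paper, the deviation is the horizontal offset between the two
    # curvature extrema (wire centre = global max, molten tip = local max).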
Video shot boundary detection using region-growing-based watershed method
NASA Astrophysics Data System (ADS)
Wang, Jinsong; Patel, Nilesh; Grosky, William
2004-10-01
In this paper, a novel shot boundary detection approach is presented, based on the popular region-growing segmentation method, watershed segmentation. In image processing, gray-scale pictures can be considered as topographic reliefs, in which the numerical value of each pixel represents the elevation at that point. The watershed method segments images by filling up basins with water starting at local minima; at points where water coming from different basins meets, dams are built. In our method, each frame in the video sequence is first transformed from the feature space into a topographic space based on a density function. Low-level features are extracted from frame to frame, and each frame is treated as a point in the feature space. The density of each point is defined as the sum of the influence functions of all neighboring data points. The height function originally used in watershed segmentation is then replaced by the inverted density at the point, so that all the highest density values are transformed into local minima. Subsequently, watershed segmentation is performed in the topographic space. The intuitive idea behind our method is that frames within a shot are highly agglomerative in the feature space and have a higher possibility of being merged together, while frames between shots, representing the shot changes, are not; hence they have lower density values and are less likely to be clustered, given careful extraction of the markers and choice of the stopping criterion.
Semantic Information Extraction of Lanes Based on Onboard Camera Videos
NASA Astrophysics Data System (ADS)
Tang, L.; Deng, T.; Ren, C.
2018-04-01
In the field of autonomous driving, semantic information about lanes is very important. This paper proposes a method for automatic detection of lanes and extraction of semantic information from onboard camera videos. The proposed method first detects the edges of lanes by the grayscale gradient direction and fits them with an improved Probabilistic Hough transform; it then uses the vanishing point principle to calculate the geometric position of lanes, and uses lane characteristics to extract lane semantic information through decision tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
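A minimal front-end sketch using OpenCV's standard edge detection and probabilistic Hough transform (the file name and thresholds are hypothetical, and the paper's improved Hough fitting and decision-tree classification are not reproduced):

    import numpy as np
    import cv2

    frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    edges = cv2.Canny(frame, 50, 150)                      # gradient-based edges
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi/180, threshold=50,
                            minLineLength=40, maxLineGap=20)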
Tenax extraction as a simple approach to improve environmental risk assessments.
Harwood, Amanda D; Nutile, Samuel A; Landrum, Peter F; Lydy, Michael J
2015-07-01
It is well documented that using exhaustive chemical extractions is not an effective means of assessing exposure of hydrophobic organic compounds in sediments and that bioavailability-based techniques are an improvement over traditional methods. One technique that has shown special promise as a method for assessing the bioavailability of hydrophobic organic compounds in sediment is the use of Tenax-extractable concentrations. A 6-h or 24-h single-point Tenax-extractable concentration correlates to both bioaccumulation and toxicity. This method has demonstrated effectiveness for several hydrophobic organic compounds in various organisms under both field and laboratory conditions. In addition, a Tenax bioaccumulation model was developed for multiple compounds relating 24-h Tenax-extractable concentrations to oligochaete tissue concentrations exposed in both the laboratory and field. This model has demonstrated predictive capacity for additional compounds and species. Use of Tenax-extractable concentrations to estimate exposure is rapid, simple, straightforward, and relatively inexpensive, as well as accurate. Therefore, this method would be an invaluable tool if implemented in risk assessments. © 2015 SETAC.
Vanishing Point Extraction and Refinement for Robust Camera Calibration
Tsai, Fuan
2017-01-01
This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experiment results indicate that the vanishing point refinement process can significantly improve camera calibration parameters and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966
Tie Points Extraction for SAR Images Based on Differential Constraints
NASA Astrophysics Data System (ADS)
Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.
2018-04-01
Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correctness ratio of the image matching need to be improved. This paper proposes an automatic TP extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then corresponding pyramid layers are matched from top to bottom. In this process, similarity is measured by the normalized cross correlation (NCC) algorithm, computed over a rectangular window with its long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which applies strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with a local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correctness ratio and accuracy of the proposed method.
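A minimal NCC sketch for one pair of windows (per the paper the window would be a rectangle elongated in azimuth; sizes are the caller's choice):

    import numpy as np

    def ncc(a, b):
        """Normalized cross correlation of two equal-sized image windows."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0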
Comparative analysis of methods for extracting vessel network on breast MRI images
NASA Astrophysics Data System (ADS)
Gaizer, Bence T.; Vassiou, Katerina G.; Lavdas, Eleftherios; Arvanitis, Dimitrios L.; Fezoulidis, Ioannis V.; Glotsos, Dimitris T.
2017-11-01
Digital processing of MRI images aims to provide an automated diagnostic evaluation of regular health screenings. Cancerous lesions are proven to cause an alteration in the vessel structure of the diseased organ. Currently there are several methods for extracting the vessel network in order to quantify its properties. In this work, MRI images (Signa HDx 3.0T, GE Healthcare, courtesy of University Hospital of Larissa) of 30 female breasts were subjected to three different vessel extraction algorithms to determine the location of their vascular network. The first method is an experiment to build a graph over known points of the vessel network; the second algorithm aims to determine the direction and diameter of vessels at these points; the third approach is a seed-growing algorithm, spreading the selection to neighbors of known vessel pixels. The possibilities offered by the different methods were analyzed, and quantitative measurements were performed. The data provided by these measurements showed no clear correlation with the presence or malignancy of tumors, based on the radiological diagnosis of skilled physicians.
Pole-Like Street Furniture Decomposition in Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Li, F.; Oude Elberink, S.; Vosselman, G.
2016-06-01
Automatic semantic interpretation of street furniture has become a popular topic in recent years. Current studies detect street furniture as connected components of points above street level. Street furniture classification based on properties of such components suffers from large intra-class variability of shapes and cannot deal with mixed classes like traffic signs attached to light poles. In this paper, we focus on the decomposition of point clouds of pole-like street furniture. A novel street furniture decomposition method is proposed, which consists of three steps: (i) acquisition of prior knowledge, (ii) pole extraction, (iii) component separation. For the pole extraction, a novel global pole extraction approach is proposed to handle three different cases of street furniture. In an evaluation involving the decomposition of 27 different instances of street furniture, we demonstrate that our method decomposes mixed-class street furniture into poles and components with different functionalities.
Ulusoy, Halil Ibrahim
2014-01-01
A new micelle-mediated extraction method was developed for preconcentration of ultratrace Hg(II) ions prior to spectrophotometric determination. 2-(2'-Thiazolylazo)-p-cresol (TAC) and Ponpe 7.5 were used as the chelating agent and nonionic surfactant, respectively. Hg(II) ions form a hydrophobic complex with TAC in a micelle medium. The main factors affecting cloud point extraction efficiency, such as pH of the medium, concentrations of TAC and Ponpe 7.5, and equilibration temperature and time, were investigated in detail. An overall preconcentration factor of 33.3 was obtained upon preconcentration of a 50 mL sample. The LOD obtained under the optimal conditions was 0.86 microg/L, and the RSD for five replicate measurements of 100 microg/L Hg(II) was 3.12%. The method was successfully applied to the determination of Hg in environmental water samples.
Sun, Mei; Liu, Guijian; Wu, Qianghua
2013-11-01
A new method was developed for the determination of organic and inorganic selenium in selenium-enriched rice by graphite furnace atomic absorption spectrometry detection after cloud point extraction. Effective separation of organic and inorganic selenium in selenium-enriched rice was achieved by sequentially extracting with water and cyclohexane. Under the optimised conditions, the limit of detection (LOD) was 0.08 μg L(-1), the relative standard deviation (RSD) was 2.1% (c=10.0 μg L(-1), n=11), and the enrichment factor for selenium was 82. Recoveries of inorganic selenium in the selenium-enriched rice samples were between 90.3% and 106.0%. The proposed method was successfully applied for the determination of organic and inorganic selenium as well as total selenium in selenium-enriched rice. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Han, H. H.; Wang, Y. L.; Ren, G. L.; LI, J. Q.; Gao, T.; Yang, M.; Yang, J. L.
2016-11-01
Remote sensing plays an important role in mineral exploration under the "One Belt One Road" plan. One of its applications is extracting and locating hydrothermal alteration zones that are related to mines. At present, methods for extracting alteration anomalies from principal component images mainly rely on the data's normal distribution, without considering the nonlinear characteristics of geological anomalies. In this study, a Fractal Dimension Change Point Model (FDCPM), calculated from the self-similarity and mutability of alteration anomalies, is employed to quantitatively acquire the critical threshold of alteration anomalies. The realization theory and access mechanism of the model are elaborated in an experiment with ASTER data in the Beishan mineralization belt, and the results are compared with a traditional method (De-Interfered Anomalous Principal Component Thresholding Technique, DIAPCTT). The findings produced by FDCPM agree well with a mounting body of evidence from different perspectives, with an extraction accuracy over 80%, indicating that FDCPM is an effective method for extracting remote sensing alteration anomalies and could be used as a useful tool for mineral exploration in similar areas in the Silk Road Economic Belt.
Extracting the Essential Cartographic Functionality of Programs on the Web
NASA Astrophysics Data System (ADS)
Ledermann, Florian
2018-05-01
Following Aristotle, F. P. Brooks (1987) emphasizes the distinction between "essential difficulties" and "accidental difficulties" as a key challenge in software engineering. From the point of view of cartography, it would be desirable to identify the cartographic essence of a program and subject it to additional scrutiny, while its accidental properties, again from the point of view of cartography, are usually of lesser relevance to cartographic analysis. In this paper, two methods that facilitate extracting the cartographic essence of programs are presented: close reading of their source code, and automated analysis of their runtime behavior. The advantages and shortcomings of both methods are discussed, followed by an outlook on future developments and potential applications.
Shyamala, B N; Naidu, M Madhava; Sulochanamma, G; Srinivas, P
2007-09-19
Vanilla extract was prepared by extraction of cured vanilla beans with aqueous ethyl alcohol (60%). The extract was profiled by HPLC, wherein major compounds, viz., vanillic acid, 4-hydroxybenzyl alcohol, 4-hydroxy-3-methoxybenzyl alcohol, 4-hydroxybenzaldehyde and vanillin, could be identified and separated. Extract and pure standard compounds were screened for antioxidant activity using beta-carotene-linoleate and DPPH in vitro model systems. At a concentration of 200 ppm, the extract showed 26% and 43% of antioxidant activity by beta-carotene-linoleate and DPPH methods, respectively, in comparison to corresponding values of 93% and 92% for BHA. Interestingly, 4-hydroxy-3-methoxybenzyl alcohol and 4-hydroxybenzyl alcohol exhibited antioxidant activity of 65% and 45% by beta-carotene-linoleate method and 90% and 50% by DPPH methods, respectively. In contrast, pure vanillin exhibited much lower antioxidant activity. The present study points toward the potential use of vanilla extract components as antioxidants for food preservation and in health supplements as nutraceuticals.
Road extraction from aerial images using a region competition algorithm.
Amo, Miriam; Martínez, Fernando; Torre, Margarita
2006-05-01
In this paper, we present a user-guided method based on the region competition algorithm to extract roads, and we also provide some clues concerning the placement of the points required by the algorithm. The initial points are analyzed in order to find out whether it is necessary to add more initial points, a process based on image information. Not only is the algorithm able to obtain the road centerline, it also recovers the road sides. An initial simple model is deformed using region growing techniques to obtain a rough road approximation. This model is then refined by region competition. The result of this approach is that it delivers the simplest output vector information, fully recovering the road details as they are on the image, without performing any kind of symbolization. We thus refine a general road model by using a reliable method to detect transitions between regions. The method is proposed in order to obtain information for feeding large-scale Geographic Information Systems.
NASA Astrophysics Data System (ADS)
Uchidate, M.
2018-09-01
In this study, with the aim of establishing systematic knowledge of the impact of summit extraction methods and stochastic model selection in rough contact analysis, the contact area ratio (Ar/Aa) obtained by statistical contact models with different summit extraction methods was compared with a direct simulation using the boundary element method (BEM). Fifty areal topography datasets with different autocorrelation functions, in terms of the power index and correlation length, were used for the investigation. The non-causal 2D auto-regressive model, which can generate datasets with specified parameters, was employed in this research. Three summit extraction methods, Nayak's theory, 8-point analysis and watershed segmentation, were examined. With regard to the stochastic model, Bhushan's model and the BGT (Bush-Gibson-Thomas) model were applied. The values of Ar/Aa from the stochastic models tended to be smaller than those from BEM. The discrepancy between Bhushan's model with the 8-point analysis and BEM was slightly smaller than with Nayak's theory. The results with watershed segmentation were similar to those with the 8-point analysis. The impact of Wolf pruning on the discrepancy between the stochastic analysis and BEM was not very clear. In the case of the BGT model, which employs surface gradients, good quantitative agreement with BEM was obtained when Nayak's bandwidth parameter was large.
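A minimal sketch of the 8-point summit definition on a height grid (strict inequality and border handling are assumptions):

    import numpy as np

    def summits_8point(z):
        """Boolean mask of grid points strictly higher than all 8 neighbours
        (border rows/columns are excluded by construction)."""
        h, w = z.shape
        c = z[1:-1, 1:-1]
        higher = np.ones_like(c, dtype=bool)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy or dx:
                    higher &= c > z[1+dy:h-1+dy, 1+dx:w-1+dx]
        out = np.zeros_like(z, dtype=bool)
        out[1:-1, 1:-1] = higher
        return out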
First Steps to Automated Interior Reconstruction from Semantically Enriched Point Clouds and Imagery
NASA Astrophysics Data System (ADS)
Obrock, L. S.; Gülch, E.
2018-05-01
The automated generation of a BIM model from sensor data is a huge challenge for the modeling of existing buildings. Currently the measurements and analyses are time consuming, allow little automation and require expensive equipment. An automated acquisition of semantic information about objects in a building is still lacking. We present first results of our approach, based on imagery and derived products, aiming at a more automated modeling of interiors for a BIM building model. We examine the building parts and objects visible in the collected images using deep learning methods based on Convolutional Neural Networks. For localization and classification of building parts we apply the FCN8s model for pixel-wise semantic segmentation. So far, we reach a pixel accuracy of 77.2% and a mean Intersection over Union of 44.2%. We then use the network for further reasoning on the images of the interior room. We combine the segmented images with the original images and use photogrammetric methods to produce a three-dimensional point cloud. We encode the extracted object types as colours of the 3D points and are thus able to uniquely classify the points in three-dimensional space. We also make a preliminary investigation of a simple extraction method for the colour and material of building parts. It is shown that the combined images are very well suited to further extract semantic information for the BIM model. With the presented methods we see a sound basis for further automation of the acquisition and modeling of semantic and geometric information of interior rooms for a BIM model.
Nong, Chunyan; Niu, Zongliang; Li, Pengyao; Wang, Chunping; Li, Wanyu; Wen, Yingying
2017-04-15
Dual-cloud point extraction (dCPE) was successfully developed for simultaneous extraction of trace sulfonamides (SAs), including sulfamerazine (SMZ), sulfadoxin (SDX) and sulfathiazole (STZ), in urine and water samples. Several parameters affecting the extraction were optimized, such as sample pH, concentration of Triton X-114, extraction temperature and time, centrifugation rate and time, back-extraction solution pH, back-extraction temperature and time, and back-extraction centrifugation rate and time. High performance liquid chromatography (HPLC) was applied for the SA analysis. Under the optimum extraction and detection conditions, successful separation of the SAs was achieved within 9 min, and excellent analytical performance was attained. Good linear relationships (R2 ≥ 0.9990) between peak area and concentration were obtained in the range of 0.02-10 μg/mL for SMZ and STZ, and 0.01-10 μg/mL for SDX. Detection limits of 3.0-6.2 ng/mL were achieved. Satisfactory recoveries ranging from 85 to 108% were determined for urine, lake and tap water spiked at 0.2, 0.5 and 1 μg/mL, respectively, with relative standard deviations (RSDs, n=6) of 1.5-7.7%. This method was demonstrated to be convenient, rapid, cost-effective and environmentally benign, and could be used as an alternative tool to existing methods for analysing trace residues of SAs in urine and water samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Contextual Classification of Point Cloud Data by Exploiting Individual 3d Neighbourhoods
NASA Astrophysics Data System (ADS)
Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B.
2015-03-01
The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been extensively, but separately, investigated in recent years, the connection between them, via sharing the results of crucial tasks across all components, has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
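A minimal sketch of the standard eigenvalue-based geometric features for one 3D neighborhood (the per-point neighborhood size optimization described in the abstract is not shown):

    import numpy as np

    def eigen_features(neighborhood):
        """Linearity, planarity and scattering from the ordered eigenvalues
        of an (N, 3) neighborhood's covariance matrix."""
        cov = np.cov((neighborhood - neighborhood.mean(axis=0)).T)
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
        l1 = max(l1, 1e-12)                                   # guard degenerate case
        return {"linearity": (l1 - l2) / l1,
                "planarity": (l2 - l3) / l1,
                "scattering": l3 / l1}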
Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo
2016-03-12
Facial palsy or paralysis (FP) is a symptom involving the loss of voluntary muscle movement in one side of the human face, which can be devastating for patients. Traditional assessment methods are solely dependent on the clinician's judgment and are therefore time consuming and subjective in nature. Hence, a quantitative assessment system becomes invaluable for physicians beginning the rehabilitation process, and producing a reliable and robust method is challenging and still underway. We introduce a novel approach for quantitative assessment of facial paralysis that tackles the classification problem for FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and the Localized Active Contour (LAC) model is proposed to efficiently extract the iris and facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e., rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, FP type classification, and facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and the experiments demonstrate its efficiency. Facial movement feature extraction based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation and the key point-based method has several merits that are essential for this real application. Aside from the facial key points, iris segmentation provides a significant contribution as it describes the changes in iris exposure while performing facial expressions. It reveals the significant difference between the healthy side and the severe palsy side when raising the eyebrows with both eyes directed upward, and can model the typical changes in the iris region.
Classification of Aerial Photogrammetric 3d Point Clouds
NASA Astrophysics Data System (ADS)
Becker, C.; Häni, N.; Rosinskaya, E.; d'Angelo, E.; Strecha, C.
2017-05-01
We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labelling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.
Wu, Yongjiang; Jin, Ye; Ding, Haiying; Luan, Lianjun; Chen, Yong; Liu, Xuesong
2011-09-01
The application of near-infrared (NIR) spectroscopy for in-line monitoring of the extraction process of scutellarein from Erigeron breviscapus (vant.) Hand-Mazz was investigated. For NIR measurements, two fiber optic probes designed to transmit NIR radiation through a 2 mm pathlength flow cell were utilized to collect spectra in real-time. High performance liquid chromatography (HPLC) was used as a reference method to determine scutellarein in the extract solution. A partial least squares regression (PLSR) calibration model built on Savitzky-Golay-smoothed NIR spectra in the 5450-10,000 cm(-1) region gave satisfactory predictive results for scutellarein. The results showed that the correlation coefficients of calibration and cross validation were 0.9967 and 0.9811, respectively, and the root mean square errors of calibration and cross validation were 0.044 and 0.105, respectively. Furthermore, both the moving block standard deviation (MBSD) method and a conformity test were used to identify the end point of the extraction process, providing real-time data and instant feedback about the extraction course. The results obtained in this study indicated that the NIR spectroscopy technique provides an efficient and environmentally friendly approach for fast determination of scutellarein and end point control of the extraction process. Copyright © 2011 Elsevier B.V. All rights reserved.
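A minimal moving block standard deviation (MBSD) sketch over a sequence of spectra (block size and the stopping threshold are assumptions):

    import numpy as np

    def mbsd(spectra, block=5):
        """MBSD across consecutive spectra (rows of a 2D array); the end
        point is flagged when the curve levels off below a user-chosen
        threshold."""
        return np.array([spectra[i:i + block].std(axis=0).mean()
                         for i in range(len(spectra) - block + 1)])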
NCC-RANSAC: a fast plane extraction method for 3-D range data segmentation.
Qian, Xiangfei; Ye, Cang
2014-12-01
This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in the case of a multistep scene, where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate. However, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair wall planes; these connect together and form a plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions contradict that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entirety. The RANSAC plane-fitting and the recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and is validated with real data from a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than the existing RANSAC-based methods.
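A minimal sketch of the normal coherence check at the heart of NCC-RANSAC (the cone angle is an assumption):

    import numpy as np

    def normal_coherence(normals, plane_normal, angle_deg=30.0):
        """Mask of inlier points whose normals agree with the fitted plane
        normal within a cone angle (sign-invariant)."""
        n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
        p = plane_normal / np.linalg.norm(plane_normal)
        return np.abs(n @ p) > np.cos(np.radians(angle_deg))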
[Extraction and recognition of attractors in three-dimensional Lorenz plot].
Hu, Min; Jang, Chengfan; Wang, Suxia
2018-02-01
The Lorenz plot (LP) method, which gives a global view of long-time electrocardiogram signals, is a simple and efficient visualization tool for analyzing cardiac arrhythmias, and the morphologies and positions of the extracted attractors may reveal the underlying mechanisms of the onset and termination of arrhythmias. But automatic diagnosis has remained impossible because no method for extracting the attractors has been available so far. We present here a methodology for attractor extraction and recognition based upon homogeneous statistical properties of the location parameters of scatter points in the three-dimensional LP (3DLP), which is constructed with three successive RR intervals as the X, Y and Z axes in a Cartesian coordinate system. Validation experiments were run on a group of RR-interval time series and tag data with frequent unifocal premature complexes exported from a 24-hour Holter system. The results showed that this method is highly effective not only for the extraction of attractors, but also for automatic recognition of attractors by location parameters such as the azimuth of the point of peak frequency (APF) of eccentric attractors after stereographic projection of the 3DLP along the space diagonal. Besides, APF is a powerful index for the differential diagnosis of atrial and ventricular extrasystoles. Additional experiments proved that this method is also applicable to several other arrhythmias. Moreover, there are close relationships between the 3DLP and two-dimensional LPs, which indicates that conventional LP results could be transplanted into the 3DLP. It would have broad application prospects to integrate this method into conventional long-time electrocardiogram monitoring and analysis systems.
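A minimal sketch of the 3DLP construction from an RR-interval series:

    import numpy as np

    def lorenz_plot_3d(rr):
        """Map an RR-interval series to 3DLP points (RR_i, RR_i+1, RR_i+2)."""
        rr = np.asarray(rr, dtype=float)
        return np.column_stack([rr[:-2], rr[1:-1], rr[2:]])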
NCC-RANSAC: A Fast Plane Extraction Method for 3-D Range Data Segmentation
Qian, Xiangfei; Ye, Cang
2015-01-01
This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in the case of a multistep scene, where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate. However, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair wall planes; these connect together and form a plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions contradict that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entirety. The RANSAC plane-fitting and the recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and is validated with real data from a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than the existing RANSAC-based methods. PMID:24771605
Extraction of fault component from abnormal sound in diesel engines using acoustic signals
NASA Astrophysics Data System (ADS)
Dayong, Ning; Changle, Sun; Yongjun, Gong; Zengmeng, Zhang; Jiaoyi, Hou
2016-06-01
In this paper, a method for extracting fault components from abnormal acoustic signals and automatically diagnosing diesel engine faults is presented. The method, named the dislocation superimposed method (DSM), is based on the improved random decrement technique (IRDT), a differential function (DF) and correlation analysis (CA). The aim of DSM is to linearly superpose multiple segments of abnormal acoustic signals, exploiting the waveform similarity of the faulty components. The method uses the sample points at the moment when the abnormal sound appears as the starting position of each segment. In this study, the abnormal sound was of the shock fault type; thus, a starting-position search based on gradient variance was adopted. A similarity coefficient between two equally sized signals is presented; by comparing against this similarity measure, the extracted fault component can be judged automatically. The results show that this method is capable of accurately extracting the fault component from abnormal acoustic signals induced by shock-type faults, and the extracted component can be used to identify the fault type.
Zhou, Jun; Sun, Jiang Bing; Xu, Xin Yu; Cheng, Zhao Hui; Zeng, Ping; Wang, Feng Qiao; Zhang, Qiong
2015-03-25
A simple, inexpensive and efficient method based on mixed cloud point extraction (MCPE) combined with high performance liquid chromatography was developed for the simultaneous separation and determination of six flavonoids (rutin, hyperoside, quercetin-3-O-sophoroside, isoquercitrin, astragalin and quercetin) in Apocynum venetum leaf samples. The non-ionic surfactant Genapol X-080 and cetyl-trimethyl ammonium bromide (CTAB) were chosen as the mixed extracting solvent. Parameters that affect the MCPE process, such as the content of Genapol X-080 and CTAB, pH, salt content, extraction temperature and time, were investigated and optimized. Under the optimized conditions, the calibration curves for the six flavonoids were all linear, with correlation coefficients greater than 0.9994. The intra-day and inter-day precision (RSD) were below 8.1%, and the limits of detection (LOD) for the six flavonoids were 1.2-5.0 ng mL(-1) (S/N=3). The proposed method was successfully used to separate and determine the six flavonoids in A. venetum leaf samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs
Chen, Haijian; Han, Dongmei; Zhao, Lina
2015-01-01
In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms are not effective on online courses, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF output values; the highest-scoring terms are selected as knowledge points. Course documents of "C programming language" were selected for the experiment in this study. The results show that the proposed approach achieves a satisfactory accuracy rate and recall rate. PMID:26448738
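A minimal sketch of the VSM/TF-IDF scoring idea (English toy documents; the AECKP weighting scheme, Chinese word segmentation and POS tagging are not reproduced):

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = ["pointers and arrays", "loops and branching",
            "pointer arithmetic basics"]            # hypothetical course docs
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(docs)                 # document-term matrix
    sim = cosine_similarity(tfidf)                  # VSM pairwise similarity
    scores = np.asarray(tfidf.sum(axis=0)).ravel()  # crude per-term scores
    top_terms = vec.get_feature_names_out()[scores.argsort()[::-1][:3]]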
Active surface model improvement by energy function optimization for 3D segmentation.
Azimifar, Zohreh; Mohaddesi, Mahsa
2015-04-01
This paper proposes an optimized and efficient active surface model obtained by improving the energy functions, search method, neighborhood definition and resampling criterion. Extracting an accurate surface of the desired object from a number of 3D images using active surface and deformable models plays an important role in computer vision, especially medical image processing. Different powerful segmentation algorithms have been suggested to address the limitations associated with model initialization, poor convergence to surface concavities and slow convergence rate. This paper proposes a method to improve one of the strongest and most recent segmentation algorithms, namely the Decoupled Active Surface (DAS) method. We consider the gradient of a wavelet-edge-extracted image and local phase coherence as external energy to extract more information from images, and we use a curvature integral as internal energy to focus on extracting high-curvature regions. Similarly, we use resampling of points and a line search for point selection to improve the accuracy of the algorithm. We further employ an estimate of the desired object as the initialization for the active surface model. A number of tests and experiments have been carried out, and the results show improvements in the extracted surface accuracy and the computational time of the presented algorithm compared with the best recent active surface models. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Deng, Zhipeng; Lei, Lin; Zhou, Shilin
2015-10-01
Automatic image registration is a vital yet challenging task, particularly for non-rigidly deformed images, which are more complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanned images deformed by flutter. Traditional non-rigid image registration methods are based on correctly matched corresponding landmarks, which usually require artificial markers. It is a rather challenging task to locate the points accurately and obtain accurate homonymous point sets. In this paper, we propose an automatic non-rigid image registration algorithm which mainly consists of three steps: to begin with, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points that are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed by the K-nearest-neighbor algorithm. Based on the accurate homonymous point sets, the two images are registered with the TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments. The first two evaluate the distribution of the point sets and the correct matching rate on synthetic data and real data, respectively. The last experiment is designed on non-rigidly deformed remote sensing images, and the three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.
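A minimal thin-plate-spline warp sketch with SciPy's RBFInterpolator, using synthetic control points in place of the matched homonymous point sets:

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Synthetic control points stand in for the matched point sets.
    src = np.random.rand(50, 2)
    dst = src + 0.01 * np.random.randn(50, 2)
    tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")
    warped = tps(src)          # maps any (x, y) into the target image frame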
The theory behind the full scattering profile
NASA Astrophysics Data System (ADS)
Feder, Idit; Duadi, Hamootal; Fixler, Dror
2018-02-01
Optical methods for extracting properties of tissues are commonly used. These methods are non-invasive, cause no harm to the patient and are characterized by high speed. Human tissue is a turbid medium, hence it poses a challenge for different optical methods. In addition, the analysis of the emitted light requires calibration to achieve accurate information. Most methods analyze the reflected light based on its phase and amplitude, or the transmitted light. We suggest a new optical method for extracting the optical properties of cylindrical tissues based on their full scattering profile (FSP), that is, the angular distribution of the reemitted light. The FSP of cylindrical tissues is relevant for biomedical measurement of fingers, earlobes or pinched tissues. We found the iso-pathlength (IPL) point, a point on the surface of the cylindrical medium where the light intensity remains constant and does not depend on the reduced scattering coefficient of the medium, but rather on the spatial structure and the cylindrical geometry. However, a similar behavior was also previously reported in reflection from a semi-infinite medium. Moreover, we present a linear dependency between the radius of the tissue and the point's location. This point can be used as a self-calibration point and thus improve the accuracy of optical tissue measurements. This natural phenomenon has not been investigated before. We show this phenomenon theoretically, based on the diffusion theory, supported by our simulation results using Monte Carlo simulation.
NASA Astrophysics Data System (ADS)
Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook
2004-04-01
We propose a stable extraction algorithm for threshold voltage using the transconductance change method by optimizing the node interval. With the algorithm, noise-free gm2 (= dgm/dVGS) profiles can be extracted within one-percent error, which leads to a more physically meaningful threshold voltage calculation by the transconductance change method. The extracted threshold voltage predicts the gate-to-source voltage at which the surface potential φs is within kT/q of 2φf + VSB. Our algorithm makes the transconductance change method more practical by overcoming the noise problem. This threshold voltage extraction algorithm yields the threshold roll-off behavior of nanoscale metal oxide semiconductor field effect transistors (MOSFETs) accurately and makes it possible to calculate the surface potential φs at any other point on the drain-to-source current (IDS) versus gate-to-source voltage (VGS) curve. It will provide a useful analysis tool in the field of device modeling, simulation and characterization.
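A minimal sketch of the transconductance change criterion, with Savitzky-Golay differentiation standing in for the paper's node-interval optimization (window and order are assumptions):

    import numpy as np
    from scipy.signal import savgol_filter

    def vt_transconductance_change(vgs, ids, win=15, poly=3):
        """V_T as the V_GS at the maximum of d(gm)/dVGS; assumes a uniform
        V_GS grid with at least `win` samples."""
        dv = vgs[1] - vgs[0]
        gm = savgol_filter(ids, win, poly, deriv=1, delta=dv)   # gm = dIDS/dVGS
        gm2 = savgol_filter(gm, win, poly, deriv=1, delta=dv)   # dgm/dVGS
        return vgs[np.argmax(gm2)]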
Characterization of rice starch and protein obtained by a fast alkaline extraction method.
Souza, Daiana de; Sbardelotto, Arthur Francisco; Ziegler, Denize Righetto; Marczak, Ligia Damasceno Ferreira; Tessaro, Isabel Cristina
2016-01-15
This study evaluated the characteristics of rice starch and protein obtained by a fast alkaline extraction method applied to rice flour (RF) derived from broken rice. The extraction was conducted using 0.18% NaOH at 30°C for 30 min, followed by centrifugation to separate the starch-rich and protein-rich fractions. This fast extraction method made it possible to obtain an isoelectric precipitation protein concentrate (IPPC) with 79% protein and a starchy product with low protein content. The amino acid content of the IPPC was practically unchanged compared to the protein in RF. The proteins of the IPPC underwent denaturation during extraction, and some of the starch underwent the cold gelatinization phenomenon due to the alkaline treatment. With some modifications, the fast method can be interesting from a technological point of view, as it reduces process cost and yields useful ingredients for the food and chemical industries. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maulina, Hervin; Santoso, Iman, E-mail: iman.santoso@ugm.ac.id; Subama, Emmistasega
2016-04-19
The extraction of the dielectric constant of nanostructured graphene on SiC substrates from spectroscopic ellipsometry measurements using the Gauss-Newton inversion (GNI) method has been carried out. This study aims to calculate the dielectric constant and refractive index of graphene by extracting the values of ψ and Δ from the spectroscopic ellipsometry measurement using the GNI method and comparing them with a previous result extracted using the Drude-Lorentz (DL) model. The results show that the GNI method can be used to calculate the dielectric constant and refractive index of nanostructured graphene on SiC substrates faster than the DL model. Moreover, the imaginary part of the dielectric constant and the extinction coefficient increase drastically at 4.5 eV, similar to the values extracted using the known DL fitting. The increase is known to be due to the interband transition process and the interaction between electrons and electron-holes at the M-points in the Brillouin zone of graphene.
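A generic Gauss-Newton iteration sketch (the ellipsometric model mapping parameters to (ψ, Δ) residuals and its Jacobian are user-supplied; nothing here is specific to the paper):

    import numpy as np

    def gauss_newton(residual, jacobian, x0, n_iter=20, tol=1e-10):
        """Minimize ||residual(x)||^2; residual returns the misfit between
        measured and modelled quantities, jacobian its derivative matrix."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            r = residual(x)
            J = jacobian(x)
            step = np.linalg.lstsq(J, -r, rcond=None)[0]  # normal-equation step
            x = x + step
            if np.linalg.norm(step) < tol:
                break
        return x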
Gürkan, Ramazan; Korkmaz, Sema; Altunay, Nail
2016-08-01
A new ultrasonic-thermostatic-assisted cloud point extraction procedure (UTA-CPE) was developed for preconcentration at trace levels of vanadium (V) and molybdenum (Mo) in milk, vegetables and foodstuffs prior to determination via flame atomic absorption spectrometry (FAAS). The method is based on the ion-association of stable anionic oxalate complexes of V(V) and Mo(VI) with [9-(diethylamino)benzo[a]phenoxazin-5-ylidene]azanium; sulfate (Nile blue A) at pH 4.5, and then extraction of the formed ion-association complexes into the micellar phase of polyoxyethylene(7.5)nonylphenyl ether (PONPE 7.5). The UTA-CPE is greatly simplified and accelerated compared to traditional cloud point extraction (CPE). The analytical parameters optimized are solution pH, the concentrations of complexing reagents (oxalate and Nile blue A), the PONPE 7.5 concentration, electrolyte concentration, sample volume, temperature and ultrasonic power. Under the optimum conditions, the calibration curves for Mo(VI) and V(V) are obtained in the concentration ranges of 3-340 µg L(-1) and 5-250 µg L(-1), with high sensitivity enhancement factors (EFs) of 145 and 115, respectively. The limits of detection (LODs) for Mo(VI) and V(V) are 0.86 and 1.55 µg L(-1), respectively. The proposed method demonstrated good performance, with relative standard deviations (RSD %) of ≤3.5% and spiked recoveries of 95.7-102.3%. The accuracy of the method was assessed by analysis of two standard reference materials (SRMs) and recoveries of spiked solutions. The method was successfully applied to the determination of trace amounts of Mo(VI) and V(V) in milk, vegetables and foodstuffs with satisfactory results. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Niemeijer, Meindert; Dumitrescu, Alina V.; van Ginneken, Bram; Abrámoff, Michael D.
2011-03-01
Parameters extracted from the vasculature on the retina are correlated with various conditions such as diabetic retinopathy and cardiovascular diseases such as stroke. Segmentation of the vasculature on the retina has been a topic that has received much attention in the literature over the past decade. Analysis of the segmentation result, however, has received only limited attention, with most works describing methods to accurately measure the width of the vessels. Analyzing the connectedness of the vascular network is an important step towards the characterization of the complete vascular tree. The retinal vascular tree, from an image interpretation point of view, originates at the optic disc and spreads out over the retina. The tree bifurcates and the vessels also cross each other. The points where this happens form the key to determining the connectedness of the complete tree. We present a supervised method to detect the bifurcations and crossing points of the vasculature of the retina. The method uses features extracted from the vasculature as well as the image in a location regression approach to find those locations of the segmented vascular tree where a bifurcation or crossing occurs (hereafter POI: points of interest). We evaluate the method on the publicly available DRIVE database, in which an ophthalmologist has marked the POI.
Hierarchical Regularization of Polygons for Photogrammetric Point Clouds of Oblique Images
NASA Astrophysics Data System (ADS)
Xie, L.; Hu, H.; Zhu, Q.; Wu, B.; Zhang, Y.
2017-05-01
Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines, which are topologically defect-laden, free of semantic information and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds is still a non-trivial task, given the fact that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, cannot be preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for the photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level is used to consolidate the original points by refining their orientation and position using linear priors. The points are then grouped into local segments by forward searching. In the global level, regularities are enforced through a labeling process, which encourages segments to share the same label, where a shared label indicates that the segments are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.
NASA Astrophysics Data System (ADS)
Farahmand, Farnaz; Ghasemzadeh, Bahar; Naseri, Abdolhossein
2018-01-01
An air-assisted liquid-liquid microextraction applying the solidification of a floating organic droplet method (AALLME-SFOD), coupled with a multivariate calibration method, namely partial least squares (PLS), was introduced for the fast and easy determination of Atenolol (ATE), Propranolol (PRO) and Carvedilol (CAR) in biological samples via a spectrophotometric approach. The analytes were extracted from neutral aqueous solution into 1-dodecanol as an organic solvent using AALLME. In this approach, a low-density solvent with a melting point close to room temperature was applied as the extraction solvent. The emulsion was formed by repeatedly pulling in and pushing out the aqueous sample solution and extraction solvent mixture with a 10-mL glass syringe ten times. After centrifugation, the extractant droplet could be simply collected from the aqueous samples by solidifying the emulsion at a temperature below the melting point. In the next step, the analytes were back-extracted simultaneously into an acidic aqueous solution. Derringer and Suich multi-response optimization was utilized for the simultaneous optimization of the parameters for the three analytes. This method combines the benefits of AALLME and dispersive liquid-liquid microextraction with solidification of floating organic droplets (DLLME-SFOD). Calibration graphs under optimized conditions were linear in the ranges of 0.30-6.00, 0.32-2.00 and 0.30-1.40 μg/mL for ATE, CAR and PRO, respectively. Other analytical parameters were as follows: enrichment factors (EFs) were 11.24, 16.55 and 14.90, and limits of detection (LODs) were 0.09, 0.10 and 0.08 μg/mL for ATE, CAR and PRO, respectively. The proposed method requires neither a highly toxic chlorinated solvent for extraction nor an organic dispersive solvent; hence, it is more environmentally friendly.
Automatic facial animation parameters extraction in MPEG-4 visual communication
NASA Astrophysics Data System (ADS)
Yang, Chenggen; Gong, Wanwei; Yu, Lu
2002-01-01
Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulder object with a complex background. This paper addresses an algorithm to automatically extract all FAPs needed to animate a generic facial model and to estimate the 3D motion of the head from feature points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit the facial features and extract a subset of the FAPs. A special data structure is proposed to describe the deformable templates in order to reduce the time consumed in computing the energy functions. The remaining FAPs, the 3D rigid head motion vectors, are estimated by a corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.
Estimating the number of people in crowded scenes
NASA Astrophysics Data System (ADS)
Kim, Minjin; Kim, Wonjun; Kim, Changick
2011-01-01
This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on the space-time interest points, and (3) estimating the crowd density based on multiple regression. In the experimental results, the efficiency and robustness of the proposed method are demonstrated using the PETS 2009 dataset.
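A minimal sketch of step (1) is given below, assuming a uniform averaging window and an illustrative threshold; points whose local 3x3 spatio-temporal structure tensor has a large smallest eigenvalue exhibit significant variation in all directions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def space_time_interest_points(video, win=5, thresh=1e-3):
    """video: (T, H, W) float array. Flags points whose local spatio-temporal
    gradient matrix has a large smallest eigenvalue (variation in all
    directions); win and thresh are illustrative."""
    It, Iy, Ix = np.gradient(video.astype(float))
    grads = (Ix, Iy, It)
    J = np.empty(video.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            # Window-averaged gradient products form the structure tensor.
            J[..., i, j] = uniform_filter(grads[i] * grads[j], size=win)
    smallest = np.linalg.eigvalsh(J)[..., 0]   # eigenvalues in ascending order
    return smallest > thresh
```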
Zhao, Jiao; Lu, Yunhui; Fan, Chongyang; Wang, Jun; Yang, Yaling
2015-02-05
A novel and simple method for the sensitive determination of trace amounts of nitrite in human urine and blood has been developed by combining cloud point extraction (CPE) and a microplate assay. The method is based on the Griess reaction, with the reaction product extracted into the nonionic surfactant Triton X-114 using the CPE technique. In this study, a decolorization treatment of urine and blood was applied to overcome matrix interference and enhance the sensitivity of nitrite detection. Multiple samples can be detected simultaneously thanks to a 96-well microplate technique. The effects of different operating parameters, such as the type of decolorizing agent, the concentration of surfactant (Triton X-114), the addition of (NH4)2SO4, extraction temperature and time, and interfering elements, were studied and optimum conditions were obtained. Under the optimum conditions, a linear calibration graph was obtained in the range of 10-400 ng/mL of nitrite, with a limit of detection (LOD) of 2.5 ng/mL. The relative standard deviation (RSD) for the determination of 100 ng/mL of nitrite was 2.80%. The proposed method was successfully applied to the determination of nitrite in urine and blood samples with recoveries of 92.6-101.2%.
Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures
NASA Astrophysics Data System (ADS)
Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.
2013-05-01
An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow the linearization of the forward- and back-projection formulae. The algorithm processes the data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating for (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history of the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data and extracts the absolute phase value, removing in this way phase ambiguities, reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated by applying it to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets, which are then used to focus the scene and determine relative target-target distances.
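The first (phase-gradient) step can be sketched as follows for a single patch, assuming the reconstructed phase history is an (n_pulses, n_range) complex array; the dominant-scatterer selection and the integration of phase increments follow the standard phase-gradient recipe rather than the authors' exact implementation.

```python
import numpy as np

def phase_gradient_correct(phase_history):
    """phase_history: (n_pulses, n_range) complex array for one patch.
    Estimates a per-pulse phase error from the dominant scatterer and
    returns the compensated data."""
    energy = np.abs(phase_history).sum(axis=0)
    dom = phase_history[:, np.argmax(energy)]           # dominant range bin
    increments = np.angle(dom[1:] * np.conj(dom[:-1]))  # pulse-to-pulse phase
    phase_err = np.concatenate([[0.0], np.cumsum(increments)])
    return phase_history * np.exp(-1j * phase_err)[:, None]
```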
Duester, Lars; Fabricius, Anne-Lena; Jakobtorweihen, Sven; Philippe, Allan; Weigl, Florian; Wimmer, Andreas; Schuster, Michael; Nazar, Muhammad Faizan
2016-11-01
Coacervate-based techniques are intensively used in environmental analytical chemistry to enrich and extract different kinds of analytes. Most methods focus on the total content or the speciation of inorganic and organic substances. Size fractionation is less commonly addressed. Within coacervate-based techniques, cloud point extraction (CPE) is characterized by a phase separation of non-ionic surfactants dispersed in an aqueous solution when the respective cloud point temperature is exceeded. In this context, this feature article raises the following question: may CPE in future studies serve as a key tool (i) to enrich and extract nanoparticles (NPs) from complex environmental matrices prior to analysis and (ii) to preserve the colloidal status of unstable environmental samples? With respect to engineered NPs, a significant gap between environmental concentrations and size- and element-specific analytical capabilities is still visible. CPE may support efforts to overcome this "concentration gap" via analyte enrichment. In addition, most environmental colloidal systems are known to be unstable, dynamic, and sensitive to changes in environmental conditions during sampling and sample preparation. This creates a so far unsolved "sample preparation dilemma" in the analytical process. The authors are of the opinion that CPE-based methods have the potential to preserve the colloidal status of these unstable samples. Focusing on NPs, this feature article aims to support the discussion on the creation of a convention called the "CPE extractable fraction" by connecting current knowledge on CPE mechanisms and available applications, via the visible uncertainties and available modeling approaches, with potential future benefits of CPE protocols.
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Yang, Yanxi
2018-05-01
We present a new wavelet ridge extraction method employing a novel cost function in two-dimensional wavelet transform profilometry (2-D WTP). First, the maximum-value point is extracted from the two-dimensional wavelet transform coefficient modulus, and the local extreme points above 90% of the maximum value are also obtained; together they constitute the wavelet ridge candidates. Then, the gradient of the rotation factor is introduced into Abid's cost function, and a logarithmic Logistic model is used to adjust and improve the cost function weights so as to obtain a more reasonable value estimation. Finally, a dynamic programming method is used to accurately find the optimal wavelet ridge, and the wrapped phase can be obtained by extracting the phase at the ridge. The advantage is that fringe patterns with a low signal-to-noise ratio can be demodulated accurately, with better noise immunity. Meanwhile, only one fringe pattern needs to be projected onto the measured object, so dynamic three-dimensional (3-D) measurement in harsh environments can be realized. Computer simulation and experimental results show that, for fringe patterns with noise pollution, the 3-D surface recovery accuracy of the proposed algorithm is increased. In addition, the demodulation phase accuracies of the Morlet, Fan and Cauchy mother wavelets are compared.
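The dynamic programming search can be sketched as below with a simplified cost (modulus reward plus a scale-jump penalty) in place of the paper's improved Abid cost function.

```python
import numpy as np

def extract_ridge(modulus, smooth=1.0):
    """modulus: (n_scales, n_cols) wavelet coefficient modulus. Finds one
    scale index per column maximizing modulus minus a jump penalty via
    dynamic programming; the cost is simplified, not the paper's."""
    n_s, n_c = modulus.shape
    scales = np.arange(n_s)
    cost = np.full((n_s, n_c), np.inf)
    back = np.zeros((n_s, n_c), dtype=int)
    cost[:, 0] = -modulus[:, 0]
    for c in range(1, n_c):
        # trans[i, j]: cost of moving from scale j (col c-1) to scale i (col c).
        trans = cost[:, c - 1][None, :] + smooth * np.abs(
            scales[:, None] - scales[None, :])
        back[:, c] = np.argmin(trans, axis=1)
        cost[:, c] = trans[np.arange(n_s), back[:, c]] - modulus[:, c]
    ridge = np.empty(n_c, dtype=int)
    ridge[-1] = np.argmin(cost[:, -1])
    for c in range(n_c - 1, 0, -1):          # backtrack the optimal path
        ridge[c - 1] = back[ridge[c], c]
    return ridge
```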
Khan, Sumaira; Kazi, Tasneem G; Baig, Jameel A; Kolachi, Nida F; Afridi, Hassan I; Wadhwa, Sham Kumar; Shah, Abdul Q; Kandhro, Ghulam A; Shah, Faheem
2010-10-15
A cloud point extraction (CPE) method has been developed for the determination of trace quantities of vanadium ions in pharmaceutical formulations (PF), dialysate (DS) and parenteral solutions (PS). The CPE of vanadium (V) using 8-hydroxyquinoline (oxine) as the complexing reagent, mediated by the nonionic surfactant Triton X-114, was investigated. The parameters that affect the extraction efficiency of CPE, such as the pH of the sample solution, the concentrations of oxine and Triton X-114, the equilibration temperature and the shaking time, were investigated in detail. The validity of the CPE of V was checked by the standard addition method in real samples. The extracted surfactant-rich phase was diluted with nitric acid in ethanol prior to electrothermal atomic absorption spectrometry. Under these conditions, the preconcentration of 50 mL sample solutions allowed an enrichment factor of 125-fold. The lower limit of detection obtained under the optimal conditions was 42 ng/L. The proposed method has been successfully applied to the determination of trace quantities of V in various pharmaceutical preparations with satisfactory results. The concentrations of V in the PF, DS and PS samples were found to be in the ranges of 10.5-15.2, 0.65-1.32 and 1.76-6.93 µg/L, respectively.
Multi-Scale Voxel Segmentation for Terrestrial Lidar Data within Marshes
NASA Astrophysics Data System (ADS)
Nguyen, C. T.; Starek, M. J.; Tissot, P.; Gibeaut, J. C.
2016-12-01
The resilience of marshes to a rising sea is dependent on their elevation response. Terrestrial laser scanning (TLS) is a detailed topographic approach for accurate, dense surface measurement with high potential for monitoring marsh surface elevation response. The dense point cloud provides a 3D representation of the surface, which includes both terrain and non-terrain objects. Extraction of topographic information requires filtering of the data into like groups or classes; therefore, methods must be incorporated to identify structure in the data prior to creation of an end product. A voxel representation of three-dimensional space provides quantitative visualization and analysis for pattern recognition. The objectives of this study are threefold: 1) apply a multi-scale voxel approach to effectively extract geometric features from the TLS point cloud data, 2) investigate the utility of K-means and Self-Organizing Map (SOM) clustering algorithms for segmentation, and 3) utilize a variety of validity indices to measure the quality of the result. TLS data were collected at a marsh site along the central Texas Gulf Coast using a Riegl VZ-400 TLS. The site consists of both exposed and vegetated surface regions. To characterize the structure of the point cloud, octree segmentation is applied to create a tree data structure of voxels containing the points. The flexibility of voxels in size and point density makes this algorithm a promising candidate to locally extract statistical and geometric features of the terrain, including surface normal and curvature. The characteristics of the voxel itself, such as volume and point density, are also computed and assigned to each point, as are laser pulse characteristics. The features extracted from the voxelization are then used as input for clustering of the points using the K-means and SOM clustering algorithms. The optimal number of clusters is then determined based on the evaluation of cluster separability criteria. Results for different combinations of the feature space vector and differences between K-means and SOM clustering will be presented. The developed method provides a novel approach for compressing TLS scene complexity in marshes, e.g., for vegetation biomass studies or erosion monitoring.
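A single-scale sketch of the voxel-feature-plus-clustering pipeline, assuming axis-aligned voxels instead of a full octree and using per-voxel density and covariance eigenvalues as the feature vector:

```python
import numpy as np
from sklearn.cluster import KMeans

def voxel_features(points, voxel_size=0.25):
    """Assign each point to a voxel and compute simple geometric features
    (point count and covariance eigenvalues) per voxel."""
    keys = np.floor(points / voxel_size).astype(int)
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True,
                                   return_counts=True)
    feats = np.zeros((len(counts), 4))
    for v in range(len(counts)):
        pts = points[inverse == v]
        feats[v, 0] = counts[v]                        # point density proxy
        if len(pts) >= 3:
            feats[v, 1:] = np.linalg.eigvalsh(np.cov(pts.T))  # shape cues
    return feats, inverse

points = np.random.rand(5000, 3) * 10      # stand-in for a TLS point cloud
feats, inverse = voxel_features(points)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(feats)
point_labels = labels[inverse]             # back-project labels to points
```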
Roads Data Conflation Using Update High Resolution Satellite Images
NASA Astrophysics Data System (ADS)
Abdollahi, A.; Riyahi Bakhtiari, H. R.
2017-11-01
Urbanization, industrialization and modernization are growing rapidly in developing countries. New industrial cities, with all the problems brought on by rapid population growth, need infrastructure to support the growth. This has led to the expansion and development of the road network. A great deal of road network data has been produced using traditional methods in past years. Over time, a large amount of descriptive information has been assigned to these map data, but their geometric accuracy and precision are not appropriate to today's needs. In this regard, it is necessary to improve the geometric accuracy of road network data while preserving the descriptive data attributed to them and to update the existing geodatabases. Due to the size and extent of the country, updating the road network maps using traditional methods is time consuming and costly. Conversely, using remote sensing technology and geographic information systems can reduce costs, save time and increase accuracy and speed. With the increasing availability of high resolution satellite imagery and geospatial datasets, there is an urgent need to combine geographic information from overlapping sources to retain accurate data, minimize redundancy, and reconcile data conflicts. In this research, an innovative method for vector-to-imagery conflation, integrating several image-based and vector-based algorithms, is presented. The SVM method was used for image classification, the Level Set method was used to extract the roads, and the different types of road intersections were extracted from the imagery using morphological operators. For matching the extracted points and finding the corresponding points, a matching function based on the nearest neighbor method was applied. Finally, after identifying the matching points, a rubber-sheeting method was used to align the two datasets. Residual and RMSE criteria were used to evaluate accuracy. The results demonstrated excellent performance: the average root-mean-square error decreased from 11.8 to 4.1 m.
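The point-matching step might look like the following sketch, where max_dist is an assumed gating threshold in map units:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_points(extracted_xy, vector_xy, max_dist=15.0):
    """Pair intersection points extracted from imagery with vector-map
    nodes by nearest-neighbour search, keeping pairs within max_dist."""
    tree = cKDTree(vector_xy)
    dist, idx = tree.query(extracted_xy)
    keep = dist <= max_dist
    return np.flatnonzero(keep), idx[keep]   # image-point ids, map-node ids
```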
A portable foot-parameter-extracting system
NASA Astrophysics Data System (ADS)
Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan
2016-03-01
In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry and heterodyne multiple-frequency phase shift technology is proposed and implemented. The key technologies applied in the system are studied, including calibration of the projector, alignment of point clouds, and foot measurement. First, a new projector calibration algorithm based on a plane model is put forward to obtain the initial calibration parameters, and a feature point detection scheme for the calibration board image is developed. Then, an almost perfect match of the two point clouds is achieved by performing a first alignment using the Sample Consensus Initial Alignment algorithm (SAC-IA) and refining the alignment using the Iterative Closest Point algorithm (ICP). Finally, the approaches used for foot parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot parameter extraction experiment shows the feasibility of the extraction algorithm. Compared with the traditional measurement method, the system is more portable, accurate and robust.
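A minimal point-to-point ICP refinement (assuming the SAC-IA coarse alignment has already been applied) can be written with a KD-tree and the Kabsch SVD solution:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Minimal point-to-point ICP; returns the transformed source cloud."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)               # closest-point correspondences
        tgt = target[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        # Rigid transform via SVD of the cross-covariance (Kabsch method).
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (tgt - mu_t))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t
    return src
```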
Chen, Miao; Xia, Qinghai; Liu, Mousheng; Yang, Yaling
2011-01-01
A cloud-point extraction (CPE) method using the nonionic surfactant Triton X-114 (TX-114) was developed for the extraction and preconcentration of propyl gallate (PG), tertiary butyl hydroquinone (TBHQ), butylated hydroxyanisole (BHA), and butylated hydroxytoluene (BHT) from edible oils. The optimum conditions for CPE were 2.5% (v/v) TX-114, 0.5% (w/v) NaCl and a 40 min equilibration time at 50 °C. The surfactant-rich phase was then analyzed by reversed-phase high-performance liquid chromatography with ultraviolet detection at 280 nm, using a gradient mobile phase consisting of methanol and 1.5% (v/v) acetic acid. Under the studied conditions, the 4 synthetic phenolic antioxidants (SPAs) were successfully separated within 24 min. The limits of detection (LOD) were 1.9 ng/mL for PG, 11 ng/mL for TBHQ, 2.3 ng/mL for BHA, and 5.9 ng/mL for BHT. Recoveries of the SPAs spiked into edible oil were in the range of 81% to 88%. The CPE method was shown to be potentially useful for the preconcentration of the target analytes, with a preconcentration factor of 14. Moreover, the method is simple, has high sensitivity, consumes much less solvent than traditional methods, and is environmentally friendly. Practical Application: The method established in this article uses less organic solvent to extract SPAs from edible oils; it is simple, highly sensitive and causes no pollution to the environment.
A fast and automatic mosaic method for high-resolution satellite images
NASA Astrophysics Data System (ADS)
Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing
2015-12-01
We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapping rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from both images by the scale-invariant feature transform (SIFT) algorithm, applied only to the overlapping region. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by a simple linear weighted fusion method or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on WorldView-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
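The matching stage can be sketched in Python/OpenCV (the paper's implementation is in C++); the file names are placeholders, and the Lowe ratio and RANSAC threshold are conventional choices rather than the paper's values:

```python
import cv2
import numpy as np

ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)   # placeholder paths
mos = cv2.imread("mosaic.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)  # restrict to overlap in practice
kp2, des2 = sift.detectAndCompute(mos, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(dst, src, cv2.RANSAC, 3.0)   # robust matching
warped = cv2.warpPerspective(mos, H, (ref.shape[1], ref.shape[0]))
```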
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Liu, Yuan; Liang, Fuxun; Wang, Yongjun
2017-04-01
In recent years, updating the inventory of road infrastructure based on field work has been labor intensive, time consuming, and costly. Fortunately, vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. However, robust recognition of road facilities from huge volumes of 3D point clouds is still a challenging issue because of complicated and incomplete structures, occlusions and varied point densities. Most existing methods utilize point- or object-based features to recognize object candidates, and can only extract limited types of objects with a relatively low recognition rate, especially for incomplete and small objects. To overcome these drawbacks, this paper proposes a semantic labeling framework that combines multiple aggregation levels (point-segment-object) of features and contextual features to recognize road facilities, such as road surfaces, road boundaries, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, and cars, for highway infrastructure inventory. The proposed method first identifies ground and non-ground points, and extracts road surface facilities from the ground points. Non-ground points are segmented into individual candidate objects based on the proposed multi-rule region growing method. Then, the multiple aggregation levels of features and the contextual features (relative positions, relative directions, and spatial patterns) associated with each candidate object are calculated and fed into an SVM classifier to label the corresponding candidate object. The recognition performance of combining multiple aggregation levels and contextual features was compared with single-level (point, segment, or object) features using large-scale highway scene point clouds. Comparative studies demonstrated that the proposed semantic labeling framework significantly improves road facility recognition precision (90.6%) and recall (91.2%), particularly for incomplete and small objects.
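Once the feature vectors are computed, the labeling stage reduces to a standard multi-class SVM; the sketch below uses synthetic stand-in data, with class names and dimensions that are purely illustrative:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Stand-in data: one row per candidate object, concatenating point-, segment-
# and object-level features plus contextual features (dimensions illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))          # 200 candidate objects, 12 features
y = rng.integers(0, 4, size=200)        # e.g. 0=lamp, 1=sign, 2=tree, 3=car

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
labels = clf.predict(X)                 # would label new candidates in practice
```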
NASA Astrophysics Data System (ADS)
Hooshyar, Milad; Wang, Dingbao; Kim, Seoyoung; Medeiros, Stephen C.; Hagen, Scott C.
2016-10-01
A method for automatic extraction of valley and channel networks from high-resolution digital elevation models (DEMs) is presented. This method utilizes both positive (i.e., convergent topography) and negative (i.e., divergent topography) curvature to delineate the valley network. The valley and ridge skeletons are extracted using the pixels' curvature and the local terrain conditions. The valley network is generated by checking the terrain for the existence of at least one ridge between two intersecting valleys. The transition from unchannelized to channelized sections (i.e., channel head) in each first-order valley tributary is identified independently by categorizing the corresponding contours using an unsupervised approach based on k-means clustering. The method does not require a spatially constant channel initiation threshold (e.g., curvature or contributing area). Moreover, instead of a point attribute (e.g., curvature), the proposed clustering method utilizes the shape of contours, which reflects the entire cross-sectional profile including possible banks. The method was applied to three catchments: Indian Creek and Mid Bailey Run in Ohio and Feather River in California. The accuracy of channel head extraction from the proposed method is comparable to state-of-the-art channel extraction methods.
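A rough proxy for the curvature-based valley and ridge skeletons, using the sign of the smoothed Laplacian of the DEM (the paper's criteria also incorporate local terrain conditions and contour shapes):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def valley_ridge_masks(dem, sigma=2.0):
    """Smoothed-Laplacian sign as a skeleton proxy: positive curvature marks
    convergent (valley-like) cells, negative marks divergent (ridge-like)."""
    z = gaussian_filter(dem.astype(float), sigma)
    zy, zx = np.gradient(z)
    zyy, _ = np.gradient(zy)            # second derivative along rows
    _, zxx = np.gradient(zx)            # second derivative along columns
    lap = zxx + zyy
    return lap > 0, lap < 0             # (valley-like, ridge-like)
```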
NASA Astrophysics Data System (ADS)
Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki
This study deals with a method to realize automatic contour extraction of facial features, such as the eyebrows, eyes and mouth, from time-wise frontal face images with various facial expressions. Because Snakes, one of the best-known contour extraction methods, has several disadvantages, we propose a new method to overcome these issues. We define an elastic contour model in order to hold the contour shape and then determine the elastic energy from the amount of deformation of the elastic contour model. We also utilize the image energy obtained from brightness differences at the control points on the elastic contour model. Applying the dynamic programming method, we determine the contour position where the total of the elastic energy and the image energy becomes minimum. Employing 1/30 s time-wise frontal face images changing from neutral to one of six typical facial expressions, obtained from 20 subjects, we have evaluated our method and found that it enables highly accurate automatic contour extraction of facial features.
NASA Astrophysics Data System (ADS)
Li, Jun; Qin, Qiming; Xie, Chao; Zhao, Yue
2012-10-01
The update frequency of digital road maps influences the quality of road-dependent services. However, digital road maps surveyed by probe vehicles or extracted from remotely sensed images still have a long updating cycle, and their cost remains high. With GPS and wireless communication technology maturing and their costs decreasing, floating car technology has been used in traffic monitoring and management, and the dynamic positioning data from floating cars have become a new data source for updating road maps. In this paper, we aim to update digital road maps using the floating car data from China's National Commercial Vehicle Monitoring Platform, and present an incremental road network extraction method suitable for the platform's GPS data, whose sampling frequency is low and which cover a large area. Based on both the spatial and semantic relationships between a trajectory point and its associated road segment, the method classifies each trajectory point and then merges every trajectory point into the candidate road network through an adding or modifying process according to its type. The road network is gradually updated until all trajectories have been processed. Finally, this method is applied to the updating of major roads in North China, and the experimental results reveal that it can accurately derive the geometric information of roads in various scenes. This paper provides a highly efficient, low-cost approach to updating digital road maps.
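The per-point classification might be sketched as below, combining point-to-segment distance with heading agreement; the thresholds are illustrative, not the platform's calibrated values:

```python
import numpy as np

def classify_point(pt, heading, seg_a, seg_b, d_max=30.0, ang_max=np.pi / 6):
    """Match a trajectory point (position pt, travel heading in radians)
    against the road segment (seg_a, seg_b); returns the merge action."""
    ab = seg_b - seg_a
    t = np.clip(np.dot(pt - seg_a, ab) / np.dot(ab, ab), 0.0, 1.0)
    dist = np.linalg.norm(pt - (seg_a + t * ab))        # point-to-segment
    dh = np.abs(np.angle(np.exp(1j * (heading - np.arctan2(ab[1], ab[0])))))
    if dist <= d_max and min(dh, np.pi - dh) <= ang_max:
        return "modify"     # point refines the existing segment's geometry
    return "add"            # point indicates an unmapped road
```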
The Detection of Transport Land-Use Data Using Crowdsourcing Taxi Trajectory
NASA Astrophysics Data System (ADS)
Ai, T.; Yang, W.
2016-06-01
This study explores transport land-use change detection from large volumes of vehicle trajectory data, presenting a method based on Delaunay triangulation. The whole method includes three steps. The first is to pre-process the vehicle trajectory data, including removing anomalous points and converting trajectory points to track lines. Second, a Delaunay triangulation is constructed within the vehicle trajectory lines to detect neighborhood relations. Considering that some trajectory segments are too long, we use an interpolation step to add more points for an improved triangulation. Third, the transport land is extracted by cutting short triangle edges and organizing the polygon topology. We have conducted an experiment in transport land-use change discovery using taxi track data from Beijing. We extract not only the transport land-use area but also semantic information such as the movement speed, the traffic jam distribution, the main vehicle movement direction and others. Compared with existing transport network data, such as OpenStreetMap, our method proves to be quick and accurate.
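Reading "cutting short triangle edges" as keeping short edges and discarding long ones, the triangulation-and-filtering core can be sketched with SciPy; max_edge is an assumed threshold:

```python
import numpy as np
from scipy.spatial import Delaunay

def transport_region_edges(track_points, max_edge=50.0):
    """Triangulate (densified) trajectory points and drop edges longer than
    max_edge (metres, illustrative); the remaining short-edge graph outlines
    the transport land-use region."""
    tri = Delaunay(track_points)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            edges.add(tuple(sorted((simplex[i], simplex[(i + 1) % 3]))))
    keep = [e for e in edges
            if np.linalg.norm(track_points[e[0]] - track_points[e[1]])
            <= max_edge]
    return keep   # polygonize these edges for the final footprint
```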
NASA Astrophysics Data System (ADS)
Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai
2008-04-01
Surface reconstruction is an important task in the fields of 3D GIS, computer aided design and computer graphics (CAD & CG), virtual simulation and so on. Building on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud using rules based on curvature extremes and a minimum spanning tree. By projecting local sample points onto the fitted tangent planes and using the extracted features to guide and constrain the process of local triangulation and surface propagation, the topological relationships among sample points are established. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that a correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods while avoiding the improper propagation of normals across sharp edges, which greatly improves the applicability of incremental surface reconstruction. Moreover, an appropriate k-neighborhood helps to recognize insufficiently sampled areas and boundary parts, so the presented approach can be used to reconstruct both open and closed surfaces without additional interference.
Madej, Katarzyna; Persona, Karolina; Wandas, Monika; Gomółka, Ewa
2013-10-18
A complex extraction system using the cloud-point extraction technique (CPE) was developed for the sequential isolation of basic and acidic/neutral medicaments from human plasma/serum, screened by an HPLC/DAD method. Eight model drugs (paracetamol, promazine, chlorpromazine, amitriptyline, salicylic acid, opipramol, alprazolam and carbamazepine) were chosen for the study of optimal CPE conditions. The CPE technique consists of partitioning an aqueous sample, with an added surfactant, into two phases: a micelle-rich phase with the isolated compounds and a water phase containing the surfactant below the critical micellar concentration, mainly under the influence of temperature change. The proposed extraction system consists of two main steps: isolation of basic compounds (from pH 12) and then isolation of acidic/neutral compounds (from pH 6), using the surfactant Triton X-114 as the extraction medium. Extraction recovery varied from 25.2 to 107.9%, with intra-day and inter-day precision (RSD %) ranging over 0.88-10.87 and 5.32-17.96, respectively. The limits of detection for the studied medicaments at λ = 254 nm corresponded to therapeutic or low toxic plasma concentration levels. The usefulness of the proposed CPE-HPLC/DAD method for toxicological drug screening was tested via its application to the analysis of two serum samples taken from patients suspected of drug overdose.
Duan, Zhugeng; Zhao, Dan; Zeng, Yuan; Zhao, Yujin; Wu, Bingfang; Zhu, Jianjun
2015-01-01
Topography strongly affects forest canopy height retrieval based on airborne Light Detection and Ranging (LiDAR) data. This paper proposes a method for correcting deviations caused by topography based on individual tree crown segmentation. The point cloud of an individual tree was extracted according to the crown boundaries of isolated individual trees from digital orthophoto maps (DOMs). Normalized canopy height was calculated by subtracting the elevation of the centres of gravity from the elevation of the point cloud. First, individual tree crown boundaries are obtained by carrying out segmentation on the DOM. Second, point clouds of the individual trees are extracted based on the boundaries. Third, a precise DEM is derived from the point cloud, which is classified by a multi-scale curvature classification algorithm. Finally, a height-weighted correction method is applied to correct the topographic effects. The method is applied to LiDAR data acquired in South China, and its effectiveness is tested using 41 field survey plots. The results show that the terrain impacts the canopy height of individual trees in that the downslope side of the tree trunk is elevated and the upslope side is depressed. This further affects the extraction of the location and crown of individual trees. A strong correlation was detected between the slope gradient and the proportions of returns with height differences of more than 0.3, 0.5 and 0.8 m in the total returns, with coefficients of determination R2 of 0.83, 0.76, and 0.60 (n = 41), respectively. PMID:26016907
Improving Visibility of Stereo-Radiographic Spine Reconstruction with Geometric Inferences.
Kumar, Sampath; Nayak, K Prabhakar; Hareesha, K S
2016-04-01
Complex deformities of the spine, like scoliosis, are evaluated more precisely using stereo-radiographic 3D reconstruction techniques. Primarily, these use six stereo-corresponding points available on the vertebral body for the 3D reconstruction of each vertebra. The wireframe structure obtained in this process has poor visualization and is hence difficult to use for diagnosis. In this paper, a novel method is proposed to improve the visibility of this wireframe structure using a deformation of a generic spine model in accordance with the 3D-reconstructed corresponding points. Then, geometric inferences such as vertebral orientations are automatically extracted from the radiographs to improve the visibility of the 3D model. Biplanar radiographs were acquired from five scoliotic subjects on a specifically designed calibration bench. The stereo-corresponding point reconstruction method is used to build six-point wireframe vertebral structures and thus the entire spine model. Using the 3D spine midline and automatically extracted vertebral orientation features, a more realistic 3D spine model is generated. To validate the method, the 3D spine model is back-projected onto the biplanar radiographs and the error difference is computed. While this difference is within the error limits reported in the literature, the proposed work is simple and economical. The proposed method does not require additional corresponding points or image features to improve the visibility of the model, hence reducing the computational complexity. Expensive 3D digitizers and vertebral CT scan models are also excluded from this study. Thus, the visibility of stereo-corresponding point reconstruction is improved to obtain a low-cost spine model for a better diagnosis of spinal deformities.
Pinto, Edgar; Almeida, Agostinho A; Ferreira, Isabel M P L V O
2015-03-01
The influence of soil properties on the phytoavailability of metal(loid)s in a soil-plant system was evaluated. The contents of extractable metal(loid)s obtained using different extraction methods were also compared. To perform this study, a test plant (Lactuca sativa) and rhizosphere soil were sampled at 5 different time points (2, 4, 6, 8 and 10 weeks of plant growth). Four extraction methods (Mehlich 3, DTPA, NH4NO3 and CaCl2) were used. Significant positive correlations between the soil extractable content and the lettuce shoot content were obtained for several metal(loid)s. Extraction with NH4NO3 showed the highest number of strong positive correlations, indicating the suitability of this method for estimating metal(loid) phytoavailability. The soil CEC, OM, pH, texture and oxide contents significantly influenced the distribution of metal(loid)s between the phytoavailable and non-phytoavailable fractions. A reliable prediction model for Cr, V, Ni, As, Pb, Co, Cd, and Sb phytoavailability was obtained considering the amount of metal(loid) extracted by the NH4NO3 method and the main soil properties. This work shows that the analysis of rhizosphere soil by single extraction methods is a reliable approach to estimating metal(loid) phytoavailability.
Giebułtowicz, Joanna; Kojro, Grzegorz; Piotrowski, Roman; Kułakowski, Piotr; Wroczyński, Piotr
2016-09-05
Cloud-point extraction (CPE) is attracting increasing interest in a number of analytical fields, including bioanalysis, as it provides a simple, safe and environmentally friendly sample preparation technique. However, there are only a few reports on the application of this extraction technique in liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS) analysis. In this study, CPE was used for the isolation of antazoline from human plasma. To date, only one other method of antazoline isolation from plasma exists: liquid-liquid extraction (LLE). The aim of this study was to prove the compatibility of CPE and LC-ESI-MS/MS and the applicability of CPE to the determination of antazoline in spiked human plasma and clinical samples. Antazoline was isolated from human plasma using Triton X-114 as a surfactant. Xylometazoline was used as an internal standard. The NaOH concentration, temperature and Triton X-114 concentration were optimized. The absolute matrix effect was carefully investigated. All validation experiments met international acceptance criteria and no significant relative matrix effect was observed. The compatibility of CPE and LC-ESI-MS/MS was confirmed using clinical plasma samples. The determination of antazoline concentrations in human plasma in the range 10-2500 ng/mL by the CPE method led to results equivalent to those obtained by the widely used liquid-liquid extraction method.
Morphological Feature Extraction for Automatic Registration of Multispectral Images
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2007-01-01
The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.
Highway extraction from high resolution aerial photography using a geometric active contour model
NASA Astrophysics Data System (ADS)
Niu, Xutong
Highway extraction and vehicle detection are two of the most important steps in traffic-flow analysis from multi-frame aerial photographs. The traditional method of deriving traffic flow trajectories relies on manual vehicle counting from a sequence of aerial photographs, which is tedious and time-consuming. This research presents a new framework for semi-automatic highway extraction. The basis of the new framework is an improved geometric active contour (GAC) model. This novel model seeks to minimize an objective function that transforms the problem of propagating regular curves into an optimization problem. The implementation of curve propagation is based on level set theory. By using an implicit representation of a two-dimensional curve, a level set approach can deal with topological changes naturally, and the output is unaffected by different initial positions of the curve. However, the original GAC model, on which the new model is based, only incorporates boundary information into the curve propagation process. An error-producing phenomenon called leakage is inevitable wherever there is an uncertain weak edge. In this research, region-based information is added as a constraint to the original GAC model, thereby giving the proposed method the ability to integrate both boundary and region-based information during curve propagation. Adding the region-based constraint eliminates the leakage problem. This dissertation applies the proposed augmented GAC model to the problem of highway extraction from high-resolution aerial photography. First, an optimized stopping criterion is designed and used in the implementation of the GAC model, effectively saving processing time and computation. Second, a seed point propagation framework is designed and implemented. This framework incorporates highway extraction, tracking, and linking into one procedure. A seed point is usually placed at an end node of a highway segment close to the boundary of the image or at a position where blocking may occur, such as at an overpass bridge or near vehicle crowds. These seed points can be automatically propagated throughout the entire highway network. During the process, road center points are also extracted, which introduces a search direction for solving possible blocking problems. This new framework has been successfully applied to highway network extraction from a large orthophoto mosaic, in which vehicles on the extracted highways were detected with an 83% success rate.
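The core curvature-driven level set update can be sketched as follows; this is the generic geodesic-active-contour step with a balloon force, without the dissertation's region-based constraint term, and the step size is illustrative:

```python
import numpy as np

def gac_step(phi, g, dt=0.2, c=0.5):
    """One geodesic-active-contour update: phi_t = g * |grad phi| * (kappa + c).
    The balloon force c drives the contour; dt is an illustrative step size."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
    div_y, _ = np.gradient(gy / norm)      # d/dy of unit-gradient y-component
    _, div_x = np.gradient(gx / norm)      # d/dx of unit-gradient x-component
    kappa = div_x + div_y                  # curvature of the level sets
    return phi + dt * g * norm * (kappa + c)

# Toy run: a circular contour on a uniform edge indicator g = 1; with phi
# negative inside, c > 0 shrinks the zero level set.
yy, xx = np.mgrid[0:100, 0:100]
phi = np.sqrt((xx - 50.0) ** 2 + (yy - 50.0) ** 2) - 30.0   # signed distance
for _ in range(50):
    phi = gac_step(phi, g=1.0)
```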
Method for separating water soluble organics from a process stream by aqueous biphasic extraction
Chaiko, David J.; Mego, William A.
1999-01-01
A method for separating water-miscible organic species from a process stream by aqueous biphasic extraction is provided. An aqueous biphase system is generated by contacting a process stream comprised of water, salt, and organic species with an aqueous polymer solution. The organic species transfer from the salt-rich phase to the polymer-rich phase, and the phases are separated. Next, the polymer is recovered from the loaded polymer phase by selectively extracting the polymer into an organic phase at an elevated temperature, while the organic species remain in a substantially salt-free aqueous solution. Alternatively, the polymer is recovered from the loaded polymer phase by a temperature-induced phase separation (cloud point extraction), whereby the polymer and the organic species separate into two distinct solutions. The method for separating water-miscible organic species is applicable to the treatment of industrial wastewater streams, including the extraction and recovery of complexed metal ions from salt solutions, organic contaminants from mineral processing streams, and colorants from spent dye baths.
Han, Quan; Huo, Yanyan; Wu, Jiangyan; He, Yaping; Yang, Xiaohui; Yang, Longhu
2017-03-24
A highly sensitive method based on cloud point extraction (CPE) separation/preconcentration and graphite furnace atomic absorption spectrometry (GFAAS) detection has been developed for the determination of ultra-trace amounts of rhodium in water samples. A new reagent, 2-(5-iodo-2-pyridylazo)-5-dimethylaminoaniline (5-I-PADMA), was used as the chelating agent, and the nonionic surfactant Triton X-114 was chosen as the extractant. In a HAc-NaAc buffer solution at pH 5.5, Rh(III) reacts with 5-I-PADMA to form a stable chelate on heating in a boiling water bath for 10 min. Subsequently, the chelate is extracted into the surfactant phase and separated from the bulk water. The factors affecting CPE were investigated. Under the optimized conditions, the calibration graph was linear in the range of 0.1-6.0 ng/mL, the detection limit was 0.023 ng/mL for rhodium, and the relative standard deviation was 3.67% (c = 1.0 ng/mL, n = 11). The method has been applied to the determination of trace rhodium in water samples with satisfactory results.
Fast Localization in Large-Scale Environments Using Supervised Indexing of Binary Features.
Youji Feng; Lixin Fan; Yihong Wu
2016-01-01
The essence of image-based localization lies in matching 2D key points in the query image to 3D points in the database. State-of-the-art methods mostly employ sophisticated key point detectors and feature descriptors, e.g., Difference of Gaussians (DoG) and the Scale Invariant Feature Transform (SIFT), to ensure robust matching. While a high registration rate is attained, the registration speed is impeded by the expensive key point detection and descriptor extraction. In this paper, we propose to use efficient key point detectors along with binary feature descriptors, since the extraction of such binary features is extremely fast. The naive usage of binary features, however, does not lend itself to significant speedup of localization, since existing indexing approaches, such as hierarchical clustering trees and locality sensitive hashing, are not efficient enough at indexing binary features, and matching binary features turns out to be much slower than matching SIFT features. To overcome this, we propose a much more efficient indexing approach for approximate nearest neighbor search of binary features. This approach resorts to randomized trees that are constructed in a supervised training process by exploiting the label information derived from the fact that multiple features correspond to a common 3D point. In the tree construction process, node tests are selected in a way such that the trees have uniform leaf sizes and low error rates, which are two desired properties for efficient approximate nearest neighbor search. To further improve the search efficiency, a probabilistic priority search strategy is adopted. Apart from the label information, this strategy also uses the non-binary pixel intensity differences available from descriptor extraction. By using the proposed indexing approach, matching binary features is no longer much slower but in fact slightly faster than matching SIFT features. Consequently, the overall localization speed is significantly improved due to the much faster key point detection and descriptor extraction. It is empirically demonstrated that the localization speed is improved by an order of magnitude compared with state-of-the-art methods, while a comparable registration rate and localization accuracy are still maintained.
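For context, brute-force matching of binary descriptors reduces to Hamming distance over packed bits, as in the sketch below; the paper's contribution is the supervised tree index that avoids scanning the whole database:

```python
import numpy as np

def hamming_nn(query, db):
    """query: (B,) uint8 packed descriptor; db: (n, B) uint8 array. Returns
    the index of the nearest database descriptor in Hamming distance."""
    x = np.bitwise_xor(query[None, :], db)          # per-byte XOR
    d = np.unpackbits(x, axis=1).sum(axis=1)        # popcount per row
    return int(np.argmin(d))

db = np.random.randint(0, 256, size=(10000, 32), dtype=np.uint8)  # 256-bit
q = db[123]                                         # a known match
assert hamming_nn(q, db) == 123
```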
Strategies for efficient resolution analysis in full-waveform inversion
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Leeuwen, T.; Trampert, J.
2016-12-01
Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for a few selected wavenumbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as an indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
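A toy version of the random probing idea, using Rademacher test models to estimate the diagonal of the Hessian from Hessian-vector products (the full method autocorrelates the probes to characterize entire point-spread functions):

```python
import numpy as np

def probe_hessian_diagonal(apply_hessian, n_params, n_probes=20, seed=0):
    """Estimate diag(H) via E[v * (H v)] = diag(H) for Rademacher probes v.
    apply_hessian wraps the expensive adjoint machinery as a black box."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(n_params)
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n_params)   # uncorrelated test model
        acc += v * apply_hessian(v)
    return acc / n_probes

# Toy check with an explicit matrix standing in for the FWI Hessian:
H = np.diag([4.0, 1.0, 9.0]) + 0.1 * np.ones((3, 3))
est = probe_hessian_diagonal(lambda v: H @ v, 3, n_probes=500)
```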
Development of PAOT tool kit for work improvements in clinical nursing.
Jung, Moon-Hee
2014-01-01
The aim of this study was to develop an action checklist for the educational training of clinical nurses. The study used qualitative and quantitative methods. Questionnaire items were extracted through in-depth interviews and a questionnaire survey. PASW version 19 and AMOS version 19 were used for data analyses. Reliability and validity were tested with both exploratory and confirmatory factor analysis. The levels of the indicators related to goodness-of-fit were acceptable. Thus, a model kit for work improvements in clinical nursing was developed. It comprises 5 domains (16 action points): health promotion (5 action points), work management (3 action points), ergonomic work methods (3 action points), managerial policies and mutual support among staff members (3 action points), and welfare in the work area (2 action points).
NASA Astrophysics Data System (ADS)
Mansor, Che Nurul Ain Nadirah Che; Latip, Jalifah; Markom, Masturah
2016-11-01
Orthosiphon stamineus is one of the important herbal plants used in folk medicine to cure a variety of diseases. Three compounds, namely rosmarinic acid (RA), sinensetin (SEN) and eupatorin (EUP), have been identified as its bioactive markers. However, a standardized extraction method for the preparation of O. stamineus extract enriched with the bioactive compounds had yet to be established. Thus, this study aims to establish the optimal extraction method for preparing the enriched extract with anti-oxidant properties. Maceration, reflux and Soxhlet extraction were employed, with ethanol, 50% (v/v) aqueous ethanol and water chosen as the solvents. Each extract was evaluated for its biomarker content (RA, SEN and EUP) and anti-oxidant capacity using thin layer chromatography (TLC) and the 2,2-diphenyl-1-picrylhydrazyl (DPPH) scavenging assay, respectively. Among the three extraction methods employed, the highest total extraction yield was obtained from reflux (72.73%), followed by Soxhlet (62.51%) and maceration (37.78%). Although all extracts were found to contain the three biomarkers by TLC visualization analysis, there was variation in the extracts' anti-oxidant capacity, ranging from 6.17% to 72.97%. This variation was expected to be due to differences in the quantity of the biomarkers in each extract. Furthermore, the anti-oxidative potency of RA was found to be comparable to the natural anti-oxidant vitamin C and higher than the synthetic anti-oxidant butylated hydroxytoluene (BHT). These preliminary results may serve as a starting point towards the preparation of a standardized bioactive O. stamineus extract.
NASA Astrophysics Data System (ADS)
Li, Lin; Li, Dalin; Zhu, Haihong; Li, You
2016-10-01
Street trees interlaced with other objects in cluttered point clouds of urban scenes inhibit the automatic extraction of individual trees. This paper proposes a method for the automatic extraction of individual trees from mobile laser scanning data, according to the general constitution of trees. The two components of each individual tree - a trunk and a crown - are extracted by a dual growing method. This method consists of coarse classification, through which most artifacts are removed; the automatic selection of appropriate seeds for individual trees, by which the common manual initial setting is avoided; a dual growing process that separates one tree from others by circumscribing a trunk within an adaptive growing radius and segmenting a crown within constrained growing regions; and a refining process that extracts a single trunk from the interlaced objects. The method is verified on two datasets with over 98% completeness and over 96% correctness. The low mean absolute percentage errors in capturing the morphological parameters of individual trees indicate that this method can output individual trees with high precision.
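The growing processes rest on neighborhood queries; a fixed-radius seeded growth (the paper adapts the radius per trunk) can be sketched with a KD-tree:

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_region(points, seed_idx, radius=0.3, max_pts=100000):
    """Seeded region growing with a fixed radius; returns indices of the
    connected component reached from the seed point."""
    tree = cKDTree(points)
    visited = np.zeros(len(points), dtype=bool)
    frontier = [seed_idx]
    visited[seed_idx] = True
    while frontier and visited.sum() < max_pts:
        i = frontier.pop()
        for j in tree.query_ball_point(points[i], radius):
            if not visited[j]:
                visited[j] = True
                frontier.append(j)
    return np.flatnonzero(visited)
```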
An Overview on Perception and Its Principles from Avicenna's Point of View
ERIC Educational Resources Information Center
Soltani, Ali Reza
2015-01-01
The main purpose of this paper is to identify the principles of perception and its dimensions and types from Avicenna's point of view. This is a qualitative study conducted using descriptive-analytical methods. Resources are first reviewed, and the principles of perception, along with its process, are extracted from his perspective.…
40 CFR 435.11 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Extraction Point Source Category,” EPA-821-R-11-004. See paragraph (uu) of this section. (e) Biodegradation... Bottle Biodegradation Test System: Modified ISO 11734:1995,” EPA Method 1647, supplemented with...
40 CFR 435.11 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Extraction Point Source Category,” EPA-821-R-11-004. See paragraph (uu) of this section. (e) Biodegradation... Bottle Biodegradation Test System: Modified ISO 11734:1995,” EPA Method 1647, supplemented with...
40 CFR 435.11 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Extraction Point Source Category,” EPA-821-R-11-004. See paragraph (uu) of this section. (e) Biodegradation... Bottle Biodegradation Test System: Modified ISO 11734:1995,” EPA Method 1647, supplemented with...
Arain, Salma Aslam; Kazi, Tasneem Gul; Afridi, Hassan Imran; Arain, Mariam Shahzadi; Panhwar, Abdul Haleem; Khan, Naeemullah; Baig, Jameel Ahmed; Shah, Faheem
2016-04-01
A simple and rapid dispersive liquid-liquid microextraction procedure based on an ionic liquid assisted microemulsion (IL-µE-DLLME), combined with cloud point extraction, has been developed for the preconcentration of copper (Cu(2+)) in drinking water and in serum samples of adolescent female hepatitis C (HCV) patients. In this method, a ternary system was developed to form a microemulsion (µE) by the phase inversion method (PIM), using the ionic liquid 1-butyl-3-methylimidazolium hexafluorophosphate ([C4mim][PF6]) and the nonionic surfactant TX-100 (as a stabilizer in aqueous media). The ionic liquid microemulsion (IL-µE) was evaluated through visual assessment, optical light microscopy and spectrophotometry. The Cu(2+) in real water and acid-digested serum samples was complexed with 8-hydroxyquinoline (oxine) and extracted into the IL-µE medium. The phase separation of the stable IL-µE was carried out by the micellar cloud point extraction approach. The influence of different parameters, such as pH, oxine concentration, and centrifugation time and rate, was investigated. At optimized experimental conditions, the limit of detection and enhancement factor were found to be 0.132 µg/L and 70, respectively, with a relative standard deviation <5%. In order to validate the developed method, certified reference materials (SLRS-4 Riverine Water) and human serum (Sero-M10181) were analyzed. The resulting data indicated a non-significant difference between the obtained and certified values of Cu(2+). The developed procedure was successfully applied to the preconcentration and determination of trace levels of Cu(2+) in environmental and biological samples.
Yang, Xiupei; Jia, Zhihui; Yang, Xiaocui; Li, Gu; Liao, Xiangjun
2017-03-01
A cloud point extraction (CPE) method was used as a pre-concentration strategy prior to the determination of trace levels of silver in water by flame atomic absorption spectrometry (FAAS). The pre-concentration is based on the clouding phenomena of the non-ionic surfactant Triton X-114 with Ag(I)/diethyldithiocarbamate (DDTC) complexes, in which the latter are soluble in a micellar phase composed of the former. When the temperature increases above its cloud point, the Ag(I)/DDTC complexes are extracted into the surfactant-rich phase. The factors affecting the extraction efficiency, including the pH of the aqueous solution, concentration of the DDTC, amount of the surfactant, and incubation temperature and time, were investigated and optimized. Under the optimal experimental conditions, no interference was observed for the determination of 100 ng·mL(-1) Ag(+) in the presence of various cations below their maximum concentrations allowed in this method, for instance, 50 μg·mL(-1) for both Zn(2+) and Cu(2+), 80 μg·mL(-1) for Pb(2+), 1000 μg·mL(-1) for Mn(2+), and 100 μg·mL(-1) for both Cd(2+) and Ni(2+). The calibration curve was linear in the range of 1-500 ng·mL(-1) with a limit of detection (LOD) of 0.3 ng·mL(-1). The developed method was successfully applied for the determination of trace levels of silver in water samples such as river water and tap water.
Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model
NASA Astrophysics Data System (ADS)
Zhu, Ningning; Jia, Yonghong; Luo, Lun
2016-06-01
The large number of bolts and screws attached to the subway shield ring plates, along with the many metal supports and electrical equipment accessories mounted on the tunnel walls, causes the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), thereby affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a search algorithm is used to extract the edge points of both sides, which are further used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally, and then fitted to a smooth elliptic cylindrical surface by means of iteration. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic, all-around deformation monitoring of tunnel sections in routine subway operation and maintenance.
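As an illustration of the cross-section fitting step, the following minimal Python sketch fits a general conic to projected 2D section points by linear least squares; the conic parameterization and the tolerance-based filtering mentioned in the comments are assumptions for illustration, not the paper's exact algorithm.

import numpy as np

def fit_conic(xy):
    """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to 2D points."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(A, np.ones(len(xy)), rcond=None)
    return coef  # (a, b, c, d, e) with the conic normalized so that f = -1

# Points farther than a tolerance from the fitted surface would be flagged as
# non-points and removed, and the fit iterated on the survivors.
theta = np.linspace(0, 2 * np.pi, 200)
ring = np.column_stack([2.7 * np.cos(theta), 2.5 * np.sin(theta)])
ring += 0.01 * np.random.randn(*ring.shape)   # synthetic measurement noise
print(fit_conic(ring))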
Castor, José Martín Rosas; Portugal, Lindomar; Ferrer, Laura; Hinojosa-Reyes, Laura; Guzmán-Mar, Jorge Luis; Hernández-Ramírez, Aracely; Cerdà, Víctor
2016-08-01
A simple, inexpensive and rapid method was proposed for the determination of bioaccessible arsenic in corn and rice samples using an in vitro bioaccessibility assay. The method was based on the preconcentration of arsenic by cloud point extraction (CPE) using o,o-diethyldithiophosphate (DDTP) complex, which was generated from an in vitro extract using polyethylene glycol tert-octylphenyl ether (Triton X-114) as a surfactant prior to its detection by atomic fluorescence spectrometry with a hydride generation system (HG-AFS). The CPE method was optimized by a multivariate approach (two-level full factorial and Doehlert designs). A photo-oxidation step of the organic species prior to HG-AFS detection was included for the accurate quantification of the total As. The limit of detection was 1.34μgkg(-1) and 1.90μgkg(-1) for rice and corn samples, respectively. The accuracy of the method was confirmed by analyzing certified reference material ERM BC-211 (rice powder). The corn and rice samples that were analyzed showed a high bioaccessible arsenic content (72-88% and 54-96%, respectively), indicating a potential human health risk. Copyright © 2016 Elsevier Ltd. All rights reserved.
Continuous Deformation Monitoring of Subway Tunnel Based on Terrestrial Point Clouds
NASA Astrophysics Data System (ADS)
Kang, Z.; Tuo, L.; Zlatanova, S.
2012-07-01
The deformation monitoring of subway tunnels is critically necessary. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent-station registration is replaced by section-controlled registration, so that common control points can be used by every station and error accumulation within a section is avoided. Afterwards, the central axis of the subway tunnel is determined through the RANSAC (Random Sample Consensus) algorithm and curve fitting. Although the resolution is very high, laser points are still discrete; the vertical section is therefore computed via quadric fitting of the vicinity of interest, instead of fitting the whole model of the subway tunnel, and is determined by the intersection line rotated about the central axis of the tunnel within a vertical plane. The extraction of the vertical section is then optimized using RANSAC for the purpose of filtering out noise. Based on the extracted vertical sections, the volume of tunnel deformation is estimated by comparing vertical sections extracted at the same position from different epochs of point clouds. Furthermore, the continuously extracted vertical sections are used to evaluate the convergent tendency of the tunnel. The proposed algorithms are verified using real datasets in terms of accuracy and computational efficiency. The fitting accuracy analysis shows that the maximum deviation between an interpolated point and a real point is 1.5 mm and the minimum is 0.1 mm; the convergent tendency of the tunnel was detected by comparing adjacent fitting radii. The maximum error is 6 mm, while the minimum is 1 mm. The computation cost of vertical section extraction is within 3 seconds per section, which demonstrates high efficiency.
Continuous Human Action Recognition Using Depth-MHI-HOG and a Spotter Model
Eum, Hyukmin; Yoon, Changyong; Lee, Heejin; Park, Mignon
2015-01-01
In this paper, we propose a new method for spotting and recognizing continuous human actions using a vision sensor. The method comprises depth-MHI-HOG (DMH), action modeling, action spotting, and recognition. First, to effectively separate the foreground from the background, we propose a method called DMH. It provides a standard structure for segmenting images and extracting features by using depth information, MHI, and HOG. Second, action modeling is performed to model various actions using the extracted features. Actions are modeled by creating sequences of actions through k-means clustering; these sequences constitute the HMM input. Third, a method of action spotting is proposed to filter meaningless actions from continuous actions and to identify the precise start and end points of actions. By employing the spotter model, the proposed method improves action recognition performance. Finally, the proposed method recognizes actions based on start and end points. We evaluate recognition performance by using the proposed method to obtain and compare the probabilities of input sequences under the action models and the spotter model. Through various experiments, we demonstrate that the proposed method is efficient for recognizing continuous human actions in real environments. PMID:25742172
Ye, Xin; Xu, Jin; Lu, Lijuan; Li, Xinxin; Fang, Xueen; Kong, Jilie
2018-08-14
The use of paper-based methods for clinical diagnostics is a rapidly expanding research topic attracting a great deal of interest. Some groups have attempted to realize an integrated nucleic acid test on a single microfluidic paper chip, including extraction, amplification, and readout functions. However, these studies were not able to overcome complex modification and fabrication requirements, long turn-around times, or the need for sophisticated equipment like pumps, thermal cyclers, or centrifuges. Here, we report an extremely simple paper-based test for the point-of-care diagnosis of rotavirus A, one of the most common pathogens that causes pediatric gastroenteritis. This paper-based test could perform nucleic acid extraction within 5 min and took 25 min to amplify the target sequence; the result was visible to the naked eye immediately afterward or could be quantified by UV-Vis absorbance. This low-cost method does not require extra equipment and is easy to use either in a lab or at the point-of-care. The detection limit for rotavirus A was found to be 1 × 10(3) copies/mL. In addition, 100% sensitivity and specificity were achieved when testing 48 clinical stool samples. In conclusion, the present paper-based test fulfills the main requirements for a point-of-care diagnostic tool, and has the potential to be applied to disease prevention, control, and precision diagnosis. Copyright © 2018 Elsevier B.V. All rights reserved.
Filik, Hayati; Sener, Izzet; Cekiç, Sema Demirci; Kiliç, Emine; Apak, Reşat
2006-06-01
In the present paper, conventional spectrophotometry in conjunction with cloud point extraction-preconcentration was investigated as an alternative method for paracetamol (PCT) assay in urine samples. Cloud point extraction (CPE) was employed for the preconcentration of p-aminophenol (PAP) prior to spectrophotometric determination using the non-ionic surfactant Triton X-114 (TX-114) as an extractant. The developed methods were based on acidic hydrolysis of PCT to PAP, which reacted at room temperature with 25,26,27,28-tetrahydroxycalix[4]arene (CAL4) in the presence of an oxidant (KIO(4)) to form a blue-colored product. The PAP-CAL4 blue dye formed was subsequently entrapped in the surfactant micelles of Triton X-114. Cloud point phase separation with the aid of Triton X-114, induced by the addition of Na(2)SO(4) solution, was performed at room temperature as an advantage over other CPE assays requiring elevated temperatures. The 580 nm absorbance maximum of the formed product was shifted bathochromically to 590 nm with CPE. The working range of 1.5-12 microg ml(-1) achieved by conventional spectrophotometry was reduced down to 0.14-1.5 microg ml(-1) with cloud point extraction, which was lower than those of most literature flow-through assays that also suffer from nonspecific absorption in the UV region. By preconcentrating a 10 ml sample solution, a detection limit as low as 40.0 ng ml(-1) was obtained after a single-step extraction, achieving a preconcentration factor of 10. The stoichiometric composition of the dye was found to be 1 : 4 (PAP : CAL4). The impact of a number of parameters, such as the concentrations of CAL4, KIO(4), Triton X-100 (TX-100), and TX-114, extraction temperature, time periods for incubation and centrifugation, and sample volume, was investigated in detail. The determination of PAP in the presence of paracetamol in micellar systems under these conditions is limited. The established procedures were successfully adopted for the determination of PCT in urine samples. Since the drug is rapidly absorbed and excreted largely in urine, and its high doses have been associated with lethal hepatic necrosis and renal failure, the development of a rapid, sensitive and selective assay of PCT is of vital importance for fast urinary screening and antidote administration before applying more sophisticated, but costly and laborious, hyphenated instrumental techniques such as HPLC-SPE-NMR-MS.
Elastic dipoles of point defects from atomistic simulations
NASA Astrophysics Data System (ADS)
Varvenne, Céline; Clouet, Emmanuel
2017-12-01
The interaction of point defects with an external stress field or with other structural defects is usually well described within continuum elasticity by the elastic dipole approximation. Extraction of the elastic dipoles from atomistic simulations is therefore a fundamental step to connect an atomistic description of the defect with continuum models. This can be done either by a fitting of the point-defect displacement field, by a summation of the Kanzaki forces, or by a linking equation to the residual stress. We perform here a detailed comparison of these different available methods to extract elastic dipoles, and show that they all lead to the same values when the supercell of the atomistic simulations is large enough and when the anharmonic region around the point defect is correctly handled. But, for small simulation cells compatible with ab initio calculations, only the definition through the residual stress appears tractable. The approach is illustrated by considering various point defects (vacancy, self-interstitial, and hydrogen solute atom) in zirconium, using both empirical potentials and ab initio calculations.
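For reference, the residual-stress route mentioned above is commonly written as follows (a standard relation from the elastic dipole literature, stated under the assumption of a defective supercell relaxed with fixed periodic vectors; it is not reproduced from this abstract):

P_{ij} = V \, \bar{\sigma}_{ij}

where V is the supercell volume and \bar{\sigma}_{ij} is the homogeneous residual stress of the relaxed defective cell; within the dipole approximation, the interaction energy with an applied strain \varepsilon_{ij} is then E_{\mathrm{int}} = -P_{ij}\,\varepsilon_{ij}.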
You, Xiangwei; Xing, Zhuokan; Liu, Fengmao; Zhang, Xu
2015-05-22
A novel air-assisted liquid-liquid microextraction method based on the solidification of a floating organic droplet (AALLME-SFO) was developed for the rapid and simple determination of seven fungicide residues in juice samples, using gas chromatography with electron capture detection (GC-ECD). This method combines the advantages of AALLME and dispersive liquid-liquid microextraction based on the solidification of floating organic droplets (DLLME-SFO) for the first time. In this method, a low-density solvent with a melting point near room temperature was used as the extraction solvent, and the emulsion was rapidly formed by pulling in and pushing out the mixture of aqueous sample solution and extraction solvent ten times with a 10-mL glass syringe. After centrifugation, the extractant droplet could be easily collected from the top of the aqueous sample by solidifying it at a temperature lower than its melting point. Under the optimized conditions, good linearity with correlation coefficients (γ) higher than 0.9959 was obtained, and the limits of detection (LOD) varied between 0.02 and 0.25 μg L(-1). The proposed method was applied to determine the target fungicides in juice samples, and acceptable recoveries ranging from 72.6% to 114.0% with relative standard deviations (RSDs) of 2.3-13.0% were achieved. Compared with the conventional DLLME method, the newly proposed method requires neither a highly toxic chlorinated solvent for extraction nor an organic dispersive solvent; hence, it is more environmentally friendly. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Roldán, J. B.; Miranda, E.; González-Cordero, G.; García-Fernández, P.; Romero-Zaliz, R.; González-Rodelas, P.; Aguilera, A. M.; González, M. B.; Jiménez-Molinos, F.
2018-01-01
A multivariate analysis of the parameters that characterize the reset process in Resistive Random Access Memory (RRAM) has been performed. The different correlations obtained can help to shed light on the current components that contribute to conduction in the Low Resistance State (LRS) of the technology considered. In addition, a screening method for the Quantum Point Contact (QPC) current component is presented. For this purpose, the second derivative of the current has been obtained using a novel numerical method which allows the QPC model parameters to be determined. Once the procedure is completed, a whole Resistive Switching (RS) series of thousands of curves is studied by means of a genetic algorithm. The extracted QPC parameter distributions are characterized in depth to obtain information about the filamentary pathways associated with the LRS in the low-voltage conduction regime.
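As a small illustration of obtaining a second derivative from noisy current data, the sketch below uses a Savitzky-Golay filter; this is a generic stand-in for the novel numerical method used in the paper, and the parameter values and synthetic I-V curve are assumptions.

import numpy as np
from scipy.signal import savgol_filter

v = np.linspace(0.0, 1.0, 501)
i = 1e-4 * np.sinh(3 * v) + 1e-6 * np.random.randn(v.size)  # synthetic I-V data

# Smoothed second derivative d2I/dV2, from which a QPC-like conduction
# signature could be screened.
dv = v[1] - v[0]
d2i = savgol_filter(i, window_length=51, polyorder=3, deriv=2, delta=dv)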
Farthing, William Earl [Pinson, AL; Felix, Larry Gordon [Pelham, AL; Snyder, Todd Robert [Birmingham, AL
2008-02-12
An apparatus and method for diluting and cooling gas that is extracted from high-temperature and/or high-pressure industrial processes. Through a feedback process, a specialized, CFD-modeled dilution cooler is employed along with real-time estimations of the point at which condensation will occur within the dilution cooler to define a level of dilution and a diluted gas temperature that result in a gas that can be conveyed to standard gas analyzers and contains no condensed hydrocarbon compounds or condensed moisture.
Farthing, William Earl; Felix, Larry Gordon; Snyder, Todd Robert
2009-12-15
An apparatus and method for diluting and cooling gas that is extracted from high-temperature and/or high-pressure industrial processes. Through a feedback process, a specialized, CFD-modeled dilution cooler is employed along with real-time estimations of the point at which condensation will occur within the dilution cooler to define a level of dilution and a diluted gas temperature that result in a gas that can be conveyed to standard gas analyzers and contains no condensed hydrocarbon compounds or condensed moisture.
How to Assess Your Training Needs.
ERIC Educational Resources Information Center
Ceramics, Glass, and Mineral Products Industry Training Board, Harrow (England).
In discussing a method for assessing training needs, this paper deals with various phases of training and points out the importance of outside specialists, the recording of information, and the use of alternative methods. Then five case studies are presented, illustrating each of the industrial groups within the Board's scope: extractives, cement…
3D Modeling of Components of a Garden by Using Point Cloud Data
NASA Astrophysics Data System (ADS)
Kumazaki, R.; Kunii, Y.
2016-06-01
Laser measurement is currently applied to several tasks such as plumbing management, road investigation through mobile mapping systems, and elevation model utilization through airborne LiDAR. Effective laser measurement methods have been well-documented in civil engineering, but few attempts have been made to establish equally effective methods in landscape engineering. By using point cloud data acquired through laser measurement, the aesthetic landscaping of Japanese gardens can be enhanced. This study focuses on simple landscape simulations for pruning and rearranging trees as well as rearranging rocks, lanterns, and other garden features by using point cloud data. However, such simulations lack concreteness. Therefore, this study considers the construction of a library of garden features extracted from point cloud data. The library would serve as a resource for creating new gardens and simulating gardens prior to conducting repairs. Extracted garden features are imported as 3ds Max objects, and realistic 3D models are generated by using a material editor system. As further work toward the publication of a 3D model library, file formats for tree crowns and trunks should be adjusted. Moreover, reducing the size of created models is necessary. Models created using point cloud data are informative because simply shaped garden features such as trees are often seen in the 3D industry.
Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features
Zhu, Ningning; Jia, Yonghong; Ji, Shunping
2018-01-01
We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431
A two-stage extraction procedure for insensitive munition (IM) explosive compounds in soils.
Felt, Deborah; Gurtowski, Luke; Nestler, Catherine C; Johnson, Jared; Larson, Steven
2016-12-01
The Department of Defense (DoD) is developing a new category of insensitive munitions (IMs) that are more resistant to detonation or propagation from external stimuli than traditional munition formulations. The new explosive constituent compounds are 2,4-dinitroanisole (DNAN), nitroguanidine (NQ), and nitrotriazolone (NTO). The production and use of IM formulations may result in the interaction of IM component compounds with soil. The chemical properties of these IM compounds present unique challenges for extraction from environmental matrices such as soil. A two-stage extraction procedure was developed and tested using several soil types amended with known concentrations of IM compounds. This procedure incorporates both an acidified phase and an organic phase to account for the chemical properties of the IM compounds. The method detection limits (MDLs) for all IM compounds in all soil types were <5 mg/kg and met the non-regulatory risk-based Regional Screening Level (RSL) criteria for soil proposed by the U.S. Army Public Health Center. At defined environmentally relevant concentrations, the average recovery of each IM compound in each soil type was consistent and greater than 85%. The two-stage extraction method decreased the influence of soil composition on IM compound recovery. UV analysis of NTO established an isosbestic point, based on varied pH, at a detection wavelength of 341 nm. The two-stage soil extraction method is equally effective for traditional munition compounds, a potentially important point when examining soils exposed to both traditional and insensitive munitions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Tiwari, Swapnil; Deb, Manas Kanti; Sen, Bhupendra K
2017-04-15
A new cloud point extraction (CPE) method for the determination of hexavalent chromium, i.e., Cr(VI), in food samples is established with subsequent diffuse reflectance Fourier transform infrared (DRS-FTIR) analysis. The method demonstrates enrichment of Cr(VI) after its complexation with 1,5-diphenylcarbazide. The reddish-violet complex formed showed its λmax at 540 nm. Micellar phase separation occurred at the cloud point temperature of the non-ionic surfactant Triton X-100, and the complex entrapped in the surfactant was analyzed using DRS-FTIR. Under optimized conditions, the limits of detection (LOD) and quantification (LOQ) were 1.22 and 4.02 μg mL(-1), respectively. Excellent linearity with a correlation coefficient of 0.94 was found for the concentration range of 1-100 μg mL(-1). At 10 μg mL(-1), the standard deviation for 7 replicate measurements was 0.11 μg mL(-1). The method was successfully applied to commercially marketed foodstuffs, and good recoveries (81-112%) were obtained by spiking the real samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Galbeiro, Rafaela; Garcia, Samara; Gaubeur, Ivanise
2014-04-01
Cloud point extraction (CPE) was used to simultaneously preconcentrate trace-level cadmium, nickel and zinc for determination by flame atomic absorption spectrometry (FAAS). 1-(2-pyridylazo)-2-naphthol (PAN) was used as the complexing agent, and the metal complexes were extracted from the aqueous phase by the surfactant Triton X-114 ((1,1,3,3-tetramethylbutyl)phenyl-polyethylene glycol). Under optimized complexation and extraction conditions, the limits of detection were 0.37 μg L(-1) (Cd), 2.6 μg L(-1) (Ni) and 2.3 μg L(-1) (Zn). The extraction was quantitative, with a preconcentration factor of 30 and enrichment factors estimated to be 42, 40 and 43, respectively. The method was applied to different complex samples, and its accuracy was evaluated by analyzing a water standard reference material (NIST SRM 1643e), yielding results in agreement with the certified values. Copyright © 2013 Elsevier GmbH. All rights reserved.
Automated Mounting Bias Calibration for Airborne LIDAR System
NASA Astrophysics Data System (ADS)
Zhang, J.; Jiang, W.; Jiang, S.
2012-07-01
Mounting bias is the major error source of an airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points rarely exist across different strips, so the traditional corresponding-point methodology does not apply to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints. Two rules are defined to calculate tie point coordinates from the real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic workflow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.
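The idea of a virtual corresponding point can be illustrated with a barycentric combination of three real footprints; the equal weights in this Python sketch are an assumption for illustration, not the paper's two rules.

import numpy as np

def virtual_point(p1, p2, p3, w=(1 / 3, 1 / 3, 1 / 3)):
    """Barycentric combination of three real footprints (each a length-3 array)."""
    return w[0] * np.asarray(p1) + w[1] * np.asarray(p2) + w[2] * np.asarray(p3)

# A tie point interpolated inside a triangle of three real laser footprints.
tie = virtual_point([1.0, 0.0, 10.1], [2.0, 1.0, 10.0], [1.5, 2.0, 10.2])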
Error-based Extraction of States and Energy Landscapes from Experimental Single-Molecule Time-Series
NASA Astrophysics Data System (ADS)
Taylor, J. Nicholas; Li, Chun-Biu; Cooper, David R.; Landes, Christy F.; Komatsuzaki, Tamiki
2015-03-01
Characterization of states, the essential components of the underlying energy landscapes, is one of the most intriguing subjects in single-molecule (SM) experiments due to the existence of noise inherent to the measurements. Here we present a method to extract the underlying state sequences from experimental SM time-series. Taking into account empirical error and the finite sampling of the time-series, the method extracts a steady-state network which provides an approximation of the underlying effective free energy landscape. The core of the method is the application of rate-distortion theory from information theory, allowing the individual data points to be assigned to multiple states simultaneously. We demonstrate the method's proficiency in its application to simulated trajectories as well as to experimental SM fluorescence resonance energy transfer (FRET) trajectories obtained from isolated agonist binding domains of the AMPA receptor, an ionotropic glutamate receptor that is prevalent in the central nervous system.
A quality score for coronary artery tree extraction results
NASA Astrophysics Data System (ADS)
Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke
2018-02-01
Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions, which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of the clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method measures the quality of the manually refined CATs with higher scores than the automatically extracted CATs. On a 100-point scale, the average scores for automatically extracted and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4), respectively. The proposed quality score will assist the automatic processing of CAT extractions for large cohorts containing both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT has been presented.
Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data
NASA Astrophysics Data System (ADS)
Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.
2017-12-01
The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
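The core of the method, a least-cost-path solver on a cost grid, can be sketched as a plain Dijkstra search between two control points; the 8-connected neighborhood and the averaged-cell cost function here are illustrative assumptions, not the specially tailored cost functions of the paper.

import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Dijkstra search on a 2D cost raster; start/goal are (row, col) tuples."""
    rows, cols = cost.shape
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, np.inf):
            continue                       # stale heap entry
        r, c = node
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + 0.5 * (cost[r, c] + cost[nr, nc])
                    if nd < dist.get((nr, nc), np.inf):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = node
                        heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal              # walk back from goal to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

path = least_cost_path(np.random.rand(50, 50), (0, 0), (49, 49))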
NASA Astrophysics Data System (ADS)
Delgado, Carlos; Cátedra, Manuel Felipe
2018-05-01
This work presents a technique that allows a considerable relaxation of the computational requirements for full-wave electromagnetic simulations based on the Method of Moments. A ray-tracing analysis of the geometry is performed in order to extract the critical points with significant contributions. These points are then used to generate a reduced mesh, considering the regions of the geometry that surround each critical point and taking into account the electrical path followed from the source. The electromagnetic analysis of the reduced mesh produces very accurate results while requiring a fraction of the resources that a conventional analysis would use.
Reconstruction of Building Outlines in Dense Urban Areas Based on LIDAR Data and Address Points
NASA Astrophysics Data System (ADS)
Jarzabek-Rychard, M.
2012-07-01
The paper presents a comprehensive method for the automated extraction and delineation of building outlines in densely built-up areas. A novel aspect of the outline reconstruction is the use of geocoded building address points, which provide information about building location and thus greatly reduce task complexity. The reconstruction process is executed on 3D point clouds acquired by an airborne laser scanner. The method consists of three steps: building detection, delineation and contour refinement. The algorithm is tested on a data set covering an old market town and its surroundings. The results are discussed and evaluated by comparison to reference cadastral data.
Klein-Júnior, Luiz C; Viaene, Johan; Salton, Juliana; Koetz, Mariana; Gasper, André L; Henriques, Amélia T; Vander Heyden, Yvan
2016-09-09
The evaluation of extraction methods for accessing plant metabolomes is usually performed visually, lacking a reliable method of data handling. In the present study, the major aim was to develop reliable time- and solvent-saving extraction and fractionation methods to access the alkaloid profile of Psychotria nemorosa leaves. Ultrasound-assisted extraction was selected as the extraction method. As determined from a Fractional Factorial Design (FFD) approach, the yield, sum of peak areas, and number of peaks were rather uninformative responses. However, Euclidean distances calculated between the UPLC-DAD metabolic profiles and the blank injection showed that the extracts are highly diverse. Coupled with the calculation and plotting of effects per time point, it was possible to identify thermolabile peaks. After screening, time and temperature were selected for optimization, while the plant:solvent ratio was set at 1:50 (m/v), the number of extractions at one, and the particle size at ≤180 μm. From Central Composite Design (CCD) results modeling the heights of important peaks, previously indicated by the FFD metabolic profile analysis, the time was set at 65 min and the temperature at 45 °C, thus avoiding degradation. For the fractionation step, a solid phase extraction method was optimized by a Box-Behnken Design (BBD) approach using the sum of peak areas as the response. Sample concentration was consequently set at 150 mg/mL, the percentage of acetonitrile in dichloromethane at 40% as the eluting solvent, and the eluting volume at 30 mL. In summary, the Euclidean distance and the metabolite profiles provided significant responses for accessing P. nemorosa alkaloids, allowing the development of reliable extraction and fractionation methods while avoiding degradation and decreasing the required time and solvent volume. Copyright © 2016 Elsevier B.V. All rights reserved.
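The distance-based response used in the screening step can be illustrated in a few lines; the synthetic profiles below stand in for aligned UPLC-DAD traces and are not the study's data.

import numpy as np

blank = np.random.rand(1500) * 0.01            # baseline-only blank profile
extract = blank + np.random.rand(1500) * 0.5   # extract with real signal

# Euclidean distance between profile and blank: larger values indicate a
# richer (more diverse) extract.
distance = np.linalg.norm(extract - blank)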
Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations
NASA Astrophysics Data System (ADS)
Mirloo, Mahsa; Ebrahimnezhad, Hosein
2018-03-01
In this paper, a novel method is proposed to detect salient points of a 3D object that are robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points from object protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, according to the previous salient points, a new point is added to this set in each iteration. With every added salient point, the decision function is updated. Hence, a condition is created so that the next point is not extracted from the same protrusion part, guaranteeing that a representative point is drawn from every protrusion part. The method is stable against model variations under isometric transformations, scaling, and noise of different strengths, because it uses a feature robust to isometric variations and considers the relation between the salient points. In addition, the number of points used in the averaging process is decreased, which leads to lower computational complexity in comparison with other salient point detection algorithms.
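The selection of the first salient point can be sketched as follows, approximating geodesic distances on a k-nearest-neighbor graph of the vertices; the graph construction, neighbor count, and sample size are illustrative assumptions, and the sketch assumes the graph is connected.

import numpy as np
from scipy.sparse.csgraph import dijkstra
from sklearn.neighbors import kneighbors_graph

verts = np.random.rand(500, 3)                       # stand-in mesh vertices
graph = kneighbors_graph(verts, n_neighbors=8, mode='distance')

# Approximate geodesic distances from a few random sample points, then pick
# the vertex with the largest average distance as the first salient point.
samples = np.random.choice(len(verts), size=10, replace=False)
geo = dijkstra(graph, directed=False, indices=samples)   # shape (10, 500)
first_salient = int(geo.mean(axis=0).argmax())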
A New DEM Generalization Method Based on Watershed and Tree Structure
Chen, Yonggang; Ma, Tianwu; Chen, Xiaoyin; Chen, Zhende; Yang, Chunju; Lin, Chenzhi; Shan, Ligang
2016-01-01
DEM generalization is the basis of multi-scale terrain observation, expression and analysis, and the core of building a multi-scale geographic database. Thus, many researchers have studied both the theory and the methods of DEM generalization. This paper proposes a new terrain generalization method that extracts feature points based on a tree model constructed to account for the nested relationship of watershed characteristics. The paper used the 5 m resolution DEM of the Jiuyuan gully watersheds in the Loess Plateau as the original data and extracted the feature points in every single watershed to reconstruct the DEM. The paper achieved generalization from a 1:10000 DEM to a 1:50000 DEM by computing the best threshold, which is 0.06. In the last part of the paper, the height accuracy of the generalized DEM is analyzed by comparison with some other classic methods, such as aggregation, resampling, and VIP, based on the original 1:50000 DEM. The outcome shows that the method performed well. The method can choose the best threshold according to the target generalization scale to decide the density of the feature points in the watershed. Meanwhile, this method can preserve the skeleton of the terrain, which can meet the needs of different levels of generalization. Additionally, through overlaid contour comparison, elevation statistical parameters, and slope and aspect analysis, we found that the W8D algorithm performed well and effectively in terrain representation. PMID:27517296
NASA Astrophysics Data System (ADS)
Caceres, Jhon
Three-dimensional (3D) models of urban infrastructure comprise critical data for planners working on problems in wireless communications, environmental monitoring, civil engineering, and urban planning, among other tasks. Photogrammetric methods have been the most common approach to date for extracting building models. However, Airborne Laser Swath Mapping (ALSM) observations offer a competitive alternative because they overcome some of the ambiguities that arise when trying to extract 3D information from 2D images. Regardless of the source data, the building extraction process requires segmentation and classification of the data and building identification. In this work, approaches for classifying ALSM data, separating building and tree points, and delineating building footprints from the classified data are described. Digital aerial photographs are used in some cases to verify results, but the objective of this work is to develop methods that can work on ALSM data alone. A robust approach for separating tree and building points in ALSM data is presented. The method is based on supervised learning of the classes (tree vs. building) in a high-dimensional feature space that yields good class separability. The features used for classification are based on the generation of local mappings, from three-dimensional space to two-dimensional space, known as "spin images", for each ALSM point to be classified. The method discriminates ALSM returns in compact spaces even where the classes are very close together or overlapping spatially. A modified Hough Transform algorithm is used to orient the spin images, and the spin image parameters are specified such that the mutual information between the spin image pixel values and the class labels is maximized. This new approach to ALSM classification allows us to fully exploit the 3D point information in the ALSM data while still achieving good class separability, which has been a difficult trade-off in the past. Supported by the spin image analysis for obtaining an initial classification, an automatic approach for delineating accurate building footprints is presented. The physical fact that laser pulses striking building edges can produce very different first and last return elevations has long been recognized. However, in older generation ALSM systems (<50 kHz pulse rates) such points were too few and far between to delineate building footprints precisely. Furthermore, without a robust separation of nearby trees and vegetation from the buildings, simply extracting ALSM shots where the elevation of the first return was much higher than the elevation of the last return was not a reliable means of identifying building footprints. However, with the advent of ALSM systems with pulse rates in excess of 100 kHz, and by using spin-image-based segmentation, it is now possible to extract building edges from the point cloud. A refined classification resulting from incorporating "on-edge" information is developed for obtaining quadrangular footprints. The footprint fitting process involves line generalization, least-squares-based clustering and dominant point finding for segmenting individual building edges. In addition, an algorithm for fitting complex footprints using the segmented edges and the data inside footprints is also proposed.
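A spin image for a single oriented ALSM point can be computed as in the following sketch, which follows the standard (alpha, beta) mapping; the bin count and image extent are illustrative choices rather than the values used in this work.

import numpy as np

def spin_image(p, n, neighbors, size=16, extent=5.0):
    """Spin image for point p with normal n: beta is the signed height along n,
    alpha the radial distance from the axis through p."""
    n = n / np.linalg.norm(n)
    rel = neighbors - p
    beta = rel @ n
    alpha = np.sqrt(np.maximum(np.sum(rel**2, axis=1) - beta**2, 0.0))
    img, _, _ = np.histogram2d(alpha, beta, bins=size,
                               range=[[0, extent], [-extent, extent]])
    return img

img = spin_image(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                 np.random.randn(200, 3) * 2.0)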
Kim, Seung-Cheol; Kim, Eun-Soo
2009-02-20
In this paper we propose a new approach for the fast generation of computer-generated holograms (CGHs) of a 3D object by using the run-length encoding (RLE) and novel look-up table (N-LUT) methods. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into an N-point redundancy map according to the number of adjacent object points having the same 3D value. Based on this redundancy map, N-point principal fringe patterns (PFPs) are newly calculated by using the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, the number of object points involved in the calculation of the CGH pattern can be dramatically reduced and, as a result, an increase in computational speed can be obtained. Some experiments with a test 3D object are carried out and the results are compared to those of the conventional methods.
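The run-length grouping of adjacent object points can be illustrated with a minimal encoder; the integer values below stand in for the quantized 3D values of adjacent object points.

def run_length_encode(values):
    """Group runs of adjacent equal values into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

print(run_length_encode([5, 5, 5, 2, 2, 7]))   # [(5, 3), (2, 2), (7, 1)]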
Enhancing biomedical text summarization using semantic relation extraction.
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
2011-01-01
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) we extract semantic relations in each sentence using the semantic knowledge representation tool SemRep; 2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; 3) for relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate the text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization.
Text-in-context: a method for extracting findings in mixed-methods mixed research synthesis studies.
Sandelowski, Margarete; Leeman, Jennifer; Knafl, Kathleen; Crandell, Jamie L
2013-06-01
Our purpose in this paper is to propose a new method for extracting findings from research reports included in mixed-methods mixed research synthesis studies. International initiatives in the domains of systematic review and evidence synthesis have been focused on broadening the conceptualization of evidence, increased methodological inclusiveness and the production of evidence syntheses that will be accessible to and usable by a wider range of consumers. Initiatives in the general mixed-methods research field have been focused on developing truly integrative approaches to data analysis and interpretation. The data extraction challenges described here were encountered, and the method proposed for addressing these challenges was developed, in the first year of the ongoing (2011-2016) study: Mixed-Methods Synthesis of Research on Childhood Chronic Conditions and Family. To preserve the text-in-context of findings in research reports, we describe a method whereby findings are transformed into portable statements that anchor results to relevant information about sample, source of information, time, comparative reference point, magnitude and significance and study-specific conceptions of phenomena. The data extraction method featured here was developed specifically to accommodate mixed-methods mixed research synthesis studies conducted in nursing and other health sciences, but reviewers might find it useful in other kinds of research synthesis studies. This data extraction method itself constitutes a type of integration to preserve the methodological context of findings when statements are read individually and in comparison to each other. © 2012 Blackwell Publishing Ltd.
Drawing for Traffic Marking Using Bidirectional Gradient-Based Detection with MMS LIDAR Intensity
NASA Astrophysics Data System (ADS)
Takahashi, G.; Takeda, H.; Nakamura, K.
2016-06-01
Recently, the development of autonomous cars has been accelerating through the integration of highly advanced artificial intelligence, which increases the demand for highly accurate digital maps. In particular, traffic markings must be precisely digitized, since automatic driving uses them for position detection. To draw traffic markings, we benefit from Mobile Mapping Systems (MMS) equipped with high-density Laser imaging Detection and Ranging (LiDAR) scanners, which produce large amounts of data efficiently, providing XYZ coordinates along with reflectance intensity. Digitizing these data, on the other hand, has conventionally depended on human operation, and thus suffers from human error, subjectivity, and low reproducibility. We have tackled this problem by means of automatic extraction of traffic markings, which partially succeeded in drawing several traffic markings (G. Takahashi et al., 2014). The key idea of that method was extracting lines using the Hough transform, strategically focusing on changes in local reflection intensity along scan lines. However, it failed to extract traffic markings properly in densely marked areas, especially when local changing points are close to each other. In this paper, we propose a bidirectional gradient-based detection method in which local changing points are labelled as plus or minus groups. Given that each label corresponds to a boundary between traffic markings and background, we can identify traffic markings explicitly, meaning traffic lines are differentiated correctly by the proposed method. As such, our automated method, a highly accurate and non-human-operator-dependent method using a bidirectional gradient-based algorithm, can successfully extract traffic lines composed of complex shapes such as crosswalks, minimizing cost and yielding highly accurate results.
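The plus/minus labelling of local changing points can be sketched on a single synthetic scan line as follows; the gradient threshold is an assumed parameter, not the paper's calibrated value.

import numpy as np

def label_edges(intensity, thresh=10.0):
    """Rising edges (background -> bright marking) are plus, falling edges minus."""
    grad = np.diff(intensity.astype(float))
    plus = np.where(grad > thresh)[0]     # entering a bright marking
    minus = np.where(grad < -thresh)[0]   # leaving it
    return plus, minus

# One marking lies between the first plus label and the first minus label.
line = np.concatenate([np.full(40, 20.0), np.full(15, 90.0), np.full(45, 22.0)])
plus, minus = label_edges(line)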
Diethylstilbestrol in fish tissue determined through subcritical fluid extraction and with GC-MS
NASA Astrophysics Data System (ADS)
Qiao, Qinghui; Shi, Nianrong; Feng, Xiaomei; Lu, Jie; Han, Yuqian; Xue, Changhu
2016-06-01
As the key point in sex hormone analysis, sample pre-treatment technology has attracted scientists' attention all over the world, and sample preparation has trended toward faster and more efficient technologies. Taking economic and environmental concerns into account, subcritical fluid extraction has stood out as a faster and more efficient sample pre-treatment technology. This extraction technology can overcome the shortcomings of supercritical fluid extraction and achieve higher extraction efficiency at relatively low pressures and temperatures. In this experiment, a simple, sensitive and efficient method was developed for the determination of diethylstilbestrol (DES) in fish tissue using subcritical 1,1,1,2-tetrafluoroethane (R134a) extraction in combination with gas chromatography-mass spectrometry (GC-MS). After extraction, freezing-lipid filtration was used to remove fatty co-extracts. Further purification steps were performed with C18 and NH2 solid phase extraction (SPE). Finally, the analyte was derivatized with heptafluorobutyric anhydride (HFBA), followed by GC-MS analysis. Response surface methodology (RSM) was employed to optimize the extraction conditions, which were as follows: extraction pressure, 4.3 MPa; extraction temperature, 26°C; co-solvent volume, 4.7 mL. Under these conditions, at spiked levels of 1, 5, and 10 μg kg-1, the mean recovery of DES was more than 90% with relative standard deviations (RSDs) less than 10%. Finally, the developed method was successfully used to analyze real samples.
Time-frequency approach to underdetermined blind source separation.
Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong
2012-02-01
This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, in which the negative values of the auto WVDs of the sources are fully considered. Then, after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be determined exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M-1. The extraction of auto-term TF points is discussed further, and numerical simulation results are presented to show the superiority of the proposed algorithm over existing ones.
Turnipseed, Sherri B; Storey, Joseph M; Lohne, Jack J; Andersen, Wendy C; Burger, Robert; Johnson, Aaron S; Madson, Mark R
2017-08-30
A screening method for veterinary drug residues in fish, shrimp, and eel using LC with a high-resolution MS instrument has been developed and validated. The method was optimized for over 70 test compounds representing a variety of veterinary drug classes. Tissues were extracted by vortex mixing with acetonitrile acidified with 2% acetic acid and 0.2% p-toluenesulfonic acid. A centrifuged portion of the extract was passed through a novel solid phase extraction cartridge designed to remove interfering matrix components from tissue extracts. The eluent was then evaporated and reconstituted for analysis. Data were collected with a quadrupole-Orbitrap high-resolution mass spectrometer using both nontargeted and targeted acquisition methods. Residues were detected on the basis of the exact mass of the precursor and a product ion along with isotope pattern and retention time matching. Semiquantitative data analysis compared the MS1 signal to a one-point extracted matrix standard at a target testing level. The test compounds were detected and identified in salmon, tilapia, catfish, shrimp, and eel extracts fortified at the target testing levels. Fish dosed with selected analytes and aquaculture samples previously found to contain residues were also analyzed. The screening method can be expanded to monitor for an additional >260 veterinary drugs on the basis of exact mass measurements and retention times.
NASA Technical Reports Server (NTRS)
Newman, M. B.; Pipano, A.
1973-01-01
A new eigensolution routine, FEER (Fast Eigensolution Extraction Routine), used in conjunction with NASTRAN at Israel Aircraft Industries is described. The FEER program is based on an automatic matrix reduction scheme whereby the lower modes of structures with many degrees of freedom can be accurately extracted from a tridiagonal eigenvalue problem whose size is of the same order of magnitude as the number of required modes. The process is effected without arbitrary lumping of masses at selected node points or selection of nodes to be retained in the analysis set. The results of computational efficiency studies are presented, showing major arithmetic operation counts and actual computer run times of FEER as compared to other methods of eigenvalue extraction, including those available in the NASTRAN READ module. It is concluded that the tridiagonal reduction method used in FEER would serve as a valuable addition to NASTRAN for highly increased efficiency in obtaining structural vibration modes.
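The tridiagonal reduction underlying FEER can be illustrated with a plain Lanczos iteration on a symmetric matrix: m steps produce an m x m tridiagonal eigenproblem whose extreme eigenvalues approximate those of the full system. FEER itself addresses the generalized structural eigenproblem; this standard-problem sketch is an analogue, not the routine itself.

import numpy as np

def lanczos(A, m, rng=np.random.default_rng(0)):
    """m steps of Lanczos on symmetric A; returns the tridiagonal coefficients."""
    n = A.shape[0]
    alphas, betas = [], []
    q_prev = np.zeros(n)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    beta = 0.0
    for _ in range(m):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / beta
    return np.array(alphas), np.array(betas[:-1])

A = np.random.rand(200, 200); A = A + A.T          # synthetic symmetric system
a, b = lanczos(A, m=30)
# Eigenvalues of the small tridiagonal problem approximate extreme modes of A.
approx = np.linalg.eigvalsh(np.diag(a) + np.diag(b, 1) + np.diag(b, -1))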
Detecting Inspection Objects of Power Line from Cable Inspection Robot LiDAR Data
Qin, Xinyan; Wu, Gongping; Fan, Fei
2018-01-01
Power lines are extending to complex environments (e.g., lakes and forests), and the distribution of power lines in a tower is becoming complicated (e.g., multi-loop and multi-bundle). Additionally, power line inspection is becoming heavier and more difficult. Advanced LiDAR technology is increasingly being used to solve these difficulties. Based on precise cable inspection robot (CIR) LiDAR data and the distinctive position and orientation system (POS) data, we propose a novel methodology to detect inspection objects surrounding power lines. The proposed method mainly includes four steps: firstly, the original point cloud is divided into single-span data as a processing unit; secondly, the optimal elevation threshold is constructed to remove ground points without the existing filtering algorithm, improving data processing efficiency and extraction accuracy; thirdly, a single power line and its surrounding data can be respectively extracted by a structured partition based on a POS data (SPPD) algorithm from “layer” to “block” according to power line distribution; finally, a partition recognition method is proposed based on the distribution characteristics of inspection objects, highlighting the feature information and improving the recognition effect. The local neighborhood statistics and the 3D region growing method are used to recognize different inspection objects surrounding power lines in a partition. Three datasets were collected by two CIR LIDAR systems in our study. The experimental results demonstrate that an average 90.6% accuracy and average 98.2% precision at the point cloud level can be achieved. The successful extraction indicates that the proposed method is feasible and promising. Our study can be used to obtain precise dimensions of fittings for modeling, as well as automatic detection and location of security risks, so as to improve the intelligence level of power line inspection. PMID:29690560
Detecting Inspection Objects of Power Line from Cable Inspection Robot LiDAR Data.
Qin, Xinyan; Wu, Gongping; Lei, Jin; Fan, Fei; Ye, Xuhui
2018-04-22
Power lines are extending to complex environments (e.g., lakes and forests), and the distribution of power lines in a tower is becoming complicated (e.g., multi-loop and multi-bundle). Additionally, power line inspection is becoming heavier and more difficult. Advanced LiDAR technology is increasingly being used to solve these difficulties. Based on precise cable inspection robot (CIR) LiDAR data and the distinctive position and orientation system (POS) data, we propose a novel methodology to detect inspection objects surrounding power lines. The proposed method mainly includes four steps: firstly, the original point cloud is divided into single-span data as a processing unit; secondly, the optimal elevation threshold is constructed to remove ground points without the existing filtering algorithm, improving data processing efficiency and extraction accuracy; thirdly, a single power line and its surrounding data can be respectively extracted by a structured partition based on a POS data (SPPD) algorithm from "layer" to "block" according to power line distribution; finally, a partition recognition method is proposed based on the distribution characteristics of inspection objects, highlighting the feature information and improving the recognition effect. The local neighborhood statistics and the 3D region growing method are used to recognize different inspection objects surrounding power lines in a partition. Three datasets were collected by two CIR LIDAR systems in our study. The experimental results demonstrate that an average 90.6% accuracy and average 98.2% precision at the point cloud level can be achieved. The successful extraction indicates that the proposed method is feasible and promising. Our study can be used to obtain precise dimensions of fittings for modeling, as well as automatic detection and location of security risks, so as to improve the intelligence level of power line inspection.
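The first step, removing ground points with an elevation threshold, can be sketched as follows; the fixed offset above the lowest return is an illustrative stand-in for the optimal threshold construction described in the papers above.

import numpy as np

def remove_ground(points, offset=5.0):
    """points: (N, 3) array; keep returns more than `offset` above the lowest z."""
    z0 = points[:, 2].min()
    return points[points[:, 2] > z0 + offset]

# One single-span processing unit with synthetic coordinates (x, y, z).
span = np.random.rand(10000, 3) * [100.0, 30.0, 40.0]
elevated = remove_ground(span)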
NASA Astrophysics Data System (ADS)
Yu, P.; Wu, H.; Liu, C.; Xu, Z.
2018-04-01
Diagnosis of water leakage in metro tunnels is of great significance to metro tunnel construction and the safety of metro operation. A method that integrates laser scanning and infrared thermal imaging is proposed for the diagnosis of water leakage. The diagnosis of water leakage in this paper is mainly divided into two parts: extraction of water leakage geometry information and extraction of water leakage attribute information. Firstly, suspected water leakage regions are obtained by threshold segmentation of the tunnel point cloud, and real water leakage is confirmed by the auxiliary interpretation of infrared thermal images. Then, the characteristics of the isotherm outline are expressed by computing a centroid distance function to determine the type of water leakage. Similarly, the location of leakage silt and the direction of cracks are calculated by finding the coordinates of feature points on the centroid distance function. Finally, part of a metro tunnel in Shanghai was selected as the case area for experiments, and the results show that the proposed method can diagnose water leakage completely and accurately.
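The centroid distance function used to characterize isotherm outlines can be sketched in a few lines; the elongated synthetic contour stands in for a crack-like leakage outline.

import numpy as np

def centroid_distance(contour):
    """contour: (N, 2) array of ordered boundary points."""
    centroid = contour.mean(axis=0)
    return np.linalg.norm(contour - centroid, axis=1)

# An elongated outline: the two strong peaks in the profile mark the long axis,
# hinting at a crack rather than a compact seep.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
crack_like = np.column_stack([5.0 * np.cos(t), 0.5 * np.sin(t)])
profile = centroid_distance(crack_like)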
Zhang, Mingyuan; Fiol, Guilherme Del; Grout, Randall W.; Jonnalagadda, Siddhartha; Medlin, Richard; Mishra, Rashmi; Weir, Charlene; Liu, Hongfang; Mostafa, Javed; Fiszman, Marcelo
2014-01-01
Online knowledge resources such as Medline can address most clinicians' patient care information needs. Yet, significant barriers, notably lack of time, limit the use of these sources at the point of care. The most common information needs raised by clinicians are treatment-related. Comparative effectiveness studies allow clinicians to consider multiple treatment alternatives for a particular problem. Still, solutions are needed to enable efficient and effective consumption of comparative effectiveness research at the point of care. Objective: Design and assess an algorithm for automatically identifying comparative effectiveness studies and extracting the interventions investigated in these studies. Methods: The algorithm combines semantic natural language processing, Medline citation metadata, and machine learning techniques. We assessed the algorithm in a case study of treatment alternatives for depression. Results: Both precision and recall for identifying comparative studies were 0.83. A total of 86% of the interventions extracted perfectly or partially matched the gold standard. Conclusion: Overall, the algorithm achieved reasonable performance. The method provides building blocks for the automatic summarization of comparative effectiveness research to inform point-of-care decision-making. PMID:23920677
NASA Astrophysics Data System (ADS)
Gu, Wen; Zhu, Zhiwei; Zhu, Wu-Le; Lu, Leyao; To, Suet; Xiao, Gaobo
2018-05-01
An automatic identification method for obtaining the critical depth-of-cut (DoC) of brittle materials with nanometric accuracy and sub-nanometric uncertainty is proposed in this paper. With this method, a two-dimensional (2D) microscopic image of the taper cutting region is captured and further processed by image analysis to extract the margin of generated micro-cracks in the imaging plane. Meanwhile, an analytical model is formulated to describe the theoretical curve of the projected cutting points on the imaging plane with respect to a specified DoC during the whole cutting process. By adopting differential evolution algorithm-based minimization, the critical DoC can be identified by minimizing the deviation between the extracted margin and the theoretical curve. The proposed method is demonstrated through both numerical simulation and experimental analysis. Compared with conventional 2D- and 3D-microscopic-image-based methods, determination of the critical DoC in this study uses the envelope profile rather than the onset point of the generated cracks, providing a more objective approach with smaller uncertainty.
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
A PROCESS FOR SEPARATING AZEOTROPIC MIXTURES BY EXTRACTIVE AND CONVECTIVE DISTILLATION
Frazer, J.W.
1961-12-19
A method is described for separating an azeotrope of carbon tetrachloride and 1,1,2,2-tetrafluorodinitroethane boiling at 60 deg C. The method comprises, specifically, feeding azeotrope vapors admixed with a non-reactive gas into an extractive distillation column heated to a temperature preferably somewhat above the boiling point of the constant boiling mixture. A solvent, di-n-butylphthalate, is metered into the column above the gas inlet and permitted to flow downward, carrying with it the higher boiling fraction, while the constituent having the lower boiling point passes out of the top of the column with the non-reactive gas and is collected in a nitrogen cold trap. Other solvents which alter the vapor pressure relationship may be substituted. The method is generally applicable to azeotropic mixtures. A number of specific mixtures which may be separated are disclosed. (AEC)
A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.
Yang, Wei; Ai, Tinghua; Lu, Wei
2018-04-19
Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourcing vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively to ensure there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors from the area of each Voronoi cell and the length of each triangle edge, and a road boundary detection model is established by integrating these boundary descriptors with trajectory movement features (e.g., direction). Third, the detection model is applied to the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality.
A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories
Yang, Wei
2018-01-01
Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourcing vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively to ensure there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors from the area of each Voronoi cell and the length of each triangle edge, and a road boundary detection model is established by integrating these boundary descriptors with trajectory movement features (e.g., direction). Third, the detection model is applied to the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality. PMID:29671792
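For readers who want to experiment with the DT/Voronoi descriptors described in the two records above, the following sketch computes per-point longest-Delaunay-edge and Voronoi-cell-area descriptors with SciPy on synthetic points. The helper name, threshold, and synthetic data are assumptions, not the paper's code.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi, ConvexHull

def boundary_descriptors(points):
    """Per-point descriptors in the spirit of the DT/Voronoi approach:
    large Voronoi cells and long incident triangle edges both suggest the
    sparse fringe of a trajectory bundle, i.e. candidate boundary points."""
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    vor = Voronoi(points)

    # Longest incident Delaunay edge per point.
    longest_edge = np.zeros(len(points))
    for simplex in tri.simplices:
        for i in range(3):
            a, b = simplex[i], simplex[(i + 1) % 3]
            d = np.linalg.norm(points[a] - points[b])
            longest_edge[a] = max(longest_edge[a], d)
            longest_edge[b] = max(longest_edge[b], d)

    # Area of each bounded Voronoi cell (unbounded cells get +inf).
    cell_area = np.full(len(points), np.inf)
    for p, r in enumerate(vor.point_region):
        region = vor.regions[r]
        if region and -1 not in region:
            cell_area[p] = ConvexHull(vor.vertices[region]).volume  # 2D: area
    return longest_edge, cell_area

pts = np.random.rand(500, 2) * [100.0, 10.0]  # synthetic GPS points on a "road"
edge_len, area = boundary_descriptors(pts)
threshold = np.percentile(area[np.isfinite(area)], 90)
print((area > threshold).sum(), "candidate boundary points")
```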
Paraskevopoulou, Sivylla E; Barsakcioglu, Deren Y; Saberi, Mohammed R; Eftekhar, Amir; Constandinou, Timothy G
2013-04-30
Next generation neural interfaces aspire to achieve real-time multi-channel systems by integrating spike sorting on chip to overcome limitations in communication channel capacity. The feasibility of this approach relies on developing highly efficient algorithms for feature extraction and clustering with the potential of low-power hardware implementation. We are proposing a feature extraction method, not requiring any calibration, based on first and second derivative features of the spike waveform. The accuracy and computational complexity of the proposed method are quantified and compared against commonly used feature extraction methods, through simulation across four datasets (with different single units) at multiple noise levels (ranging from 5 to 20% of the signal amplitude). The average classification error is shown to be below 7% with a computational complexity of 2N-3, where N is the number of sample points of each spike. Overall, this method presents a good trade-off between accuracy and computational complexity and is thus particularly well-suited for hardware-efficient implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
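The derivative features and the 2N-3 operation count translate directly into code. Below is a minimal sketch, assuming a simple min/max summarization of the two difference signals; the paper's exact feature definition may differ.

```python
import numpy as np

def derivative_features(spike):
    """First/second discrete-derivative features of a spike waveform, in the
    spirit of the calibration-free method above (a sketch; the exact feature
    set in the paper may differ).

    Cost: the first difference needs N-1 subtractions and the second
    difference N-2, i.e. 2N-3 operations for an N-sample spike."""
    d1 = np.diff(spike)   # first derivative, length N-1
    d2 = np.diff(d1)      # second derivative, length N-2
    # Compact per-spike feature vector: extrema of both derivatives.
    return np.array([d1.min(), d1.max(), d2.min(), d2.max()])

spike = np.sin(np.linspace(0, np.pi, 32)) ** 3  # toy spike, N = 32 samples
print(derivative_features(spike))
```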
Heidarizadi, Elham; Tabaraki, Reza
2016-01-01
A sensitive cloud point extraction method for the simultaneous determination of trace amounts of sunset yellow (SY), allura red (AR) and brilliant blue (BB) by spectrophotometry was developed. Experimental parameters such as Triton X-100 concentration, KCl concentration and initial pH affecting the extraction efficiency of the dyes were optimized using response surface methodology (RSM) with a Doehlert design. Experimental data were evaluated by applying RSM integrated with a desirability function approach. The optimum conditions for the simultaneous extraction of SY, AR and BB were: Triton X-100 concentration 0.0635 mol L(-1), KCl concentration 0.11 mol L(-1) and pH 4, with a maximum overall desirability D of 0.95. Correspondingly, the maximum extraction efficiencies of SY, AR and BB were 100%, 92.23% and 95.69%, respectively. At optimal conditions, extraction efficiencies were 99.8%, 92.48% and 95.96% for SY, AR and BB, respectively. These values were only 0.2%, 0.25% and 0.27% different from the predicted values, suggesting that the desirability function approach with RSM was a useful technique for simultaneous dye extraction. Linear calibration curves were obtained in the ranges of 0.02-4 μg mL(-1) for SY, 0.025-2.5 μg mL(-1) for AR and 0.02-4 μg mL(-1) for BB under optimum conditions. Detection limits based on three times the standard deviation of the blank (3Sb) were 0.009, 0.01 and 0.007 μg mL(-1) (n=10) for SY, AR and BB, respectively. The method was successfully used for the simultaneous determination of the dyes in different food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Kori, Shivpoojan; Parmar, Ankush; Goyal, Jony; Sharma, Shweta
2018-02-01
A procedure for the determination of Eszopiclone (ESZ) in complex matrices, i.e., in vitro (spiked matrices) as well as in vivo (mice model), was developed using cloud point extraction coupled with microwave-assisted back-extraction (CPE-MABE). Analytical measurements were carried out using UV-Visible, HPLC and MS techniques. The proposed method has been validated according to ICH guidelines, and the reproducibility and reliability of the protocol are supported by intra-day and inter-day precision of <3.61% and <4.70%, respectively. Limits of detection were obtained as 0.083 μg/mL and 0.472 μg/mL for the HPLC and UV-Visible techniques, respectively, over the assessed linearity range. The coacervate phase in CPE was back-extracted under microwave exposure with isooctane at a pre-concentration factor of ~50 when 5 mL of sample solution was pre-concentrated to 0.1 mL. Under optimized conditions, i.e., aqueous Triton X-114 4% (w/v), pH 4.0, NaCl 4% (w/v) and an equilibrium temperature of 45 °C for 20 min, average extraction recoveries were between 89.8 and 99.2% and 84.0 and 99.2% for UV-Visible and HPLC analysis, respectively. The method has been successfully applied to the pharmacokinetic estimation (post intraperitoneal administration) of ESZ in mice. MS analysis precisely depicted the presence of active N‑desmethyl zopiclone in the samples as well as in mice plasma. Copyright © 2018 Elsevier B.V. All rights reserved.
A contour-based shape descriptor for biomedical image classification and retrieval
NASA Astrophysics Data System (ADS)
You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.
2013-12-01
Contours, object blobs, and specific feature points are utilized to represent object shapes and extract shape descriptors that can then be used for object detection or image classification. In this research we develop a shape descriptor for biomedical image type (or, modality) classification. We adapt a feature extraction method used in optical character recognition (OCR) for character shape representation, and apply various image preprocessing methods to successfully adapt the method to our application. The proposed shape descriptor is applied to radiology images (e.g., MRI, CT, ultrasound, X-ray, etc.) to assess its usefulness for modality classification. In our experiment we compare our method with other visual descriptors such as CEDD, CLD, Tamura, and PHOG that extract color, texture, or shape information from images. The proposed method achieved the highest classification accuracy of 74.1% among all other individual descriptors in the test, and when combined with CSD (color structure descriptor) showed better performance (78.9%) than using the shape descriptor alone.
The Researches on Damage Detection Method for Truss Structures
NASA Astrophysics Data System (ADS)
Wang, Meng Hong; Cao, Xiao Nan
2018-06-01
This paper presents an effective method to detect damage in truss structures. Numerical simulation and experimental analysis were carried out on a damaged truss structure under instantaneous excitation. The ideal excitation point and appropriate hammering method were determined to extract time domain signals under two working conditions. The frequency response function and principal component analysis were used for data processing, and the angle between the frequency response function vectors was selected as a damage index to ascertain the location of a damaged bar in the truss structure. In the numerical simulation, the time domain signal of all nodes was extracted to determine the location of the damaged bar. In the experimental analysis, the time domain signal of a portion of the nodes was extracted on the basis of an optimal sensor placement method based on the node strain energy coefficient. The results of the numerical simulation and experimental analysis showed that the damage detection method based on the frequency response function and principal component analysis could locate the damaged bar accurately.
Semantic data association for planar features in outdoor 6D-SLAM using lidar
NASA Astrophysics Data System (ADS)
Ulas, C.; Temeltas, H.
2013-05-01
Simultaneous Localization and Mapping (SLAM) is a fundamental problem for autonomous systems in GPS (Global Positioning System) denied environments. Traditional probabilistic SLAM methods use point features as landmarks and hold all the feature positions in their state vector in addition to the robot pose. The bottleneck of point-feature based SLAM methods is the data association problem, which is mostly solved with a statistical measure. Data association performance is critical for a robust SLAM method, since all the filtering strategies are applied under an assumed known correspondence. With point features, two different but very close landmarks in the same scene may be confused when making the correspondence decision if only their positions and error covariance matrices are taken into account. Instead of point features, planar features can be considered as an alternative landmark model in the SLAM problem to provide a more consistent data association. Planes contain rich information for the solution of the data association problem and can be distinguished easily compared to point features. In addition, planar maps are very compact, since an environment has only a very limited number of planar structures. The planar features do not have to be large structures like building walls or roofs; small plane segments can also be used as landmarks, such as billboards, traffic posts and some parts of bridges in urban areas. In this paper, a probabilistic plane-feature extraction method from 3D LiDAR data and a data association method based on the extracted semantic information of the planar features are introduced. The experimental results show that the semantic data association provides very satisfactory results in outdoor 6D-SLAM.
Altunay, Nail; Gürkan, Ramazan; Kır, Ufuk
2016-01-01
A new, low-cost, micellar-sensitive and selective spectrophotometric method was developed for the determination of inorganic arsenic (As) species in beverage samples. Vortex-assisted cloud-point extraction (VA-CPE) was used for the efficient pre-concentration of As(V) in the selected samples. The method is based on selective and sensitive ion-pairing of As(V) with acridine red (ARH(+)) in the presence of pyrogallol and sequential extraction into the micellar phase of Triton X-45 at pH 6.0. Under the optimised conditions, the calibration curve was highly linear in the range of 0.8-280 µg l(-1) for As(V). The limits of detection and quantification of the method were 0.25 and 0.83 µg l(-1), respectively. The method was successfully applied to the determination of trace As in the pre-treated and digested samples under microwave and ultrasonic power. As(V) and total As levels in the samples were spectrophotometrically determined after pre-concentration with VA-CPE at 494 nm before and after oxidation with acidic KMnO4. The As(III) levels were calculated from the difference between As(V) and total As levels. The accuracy of the method was demonstrated by analysis of two certified reference materials (CRMs) where the measured values for As were statistically within the 95% confidence limit for the certified values.
NASA Astrophysics Data System (ADS)
Gong, Y.; Yang, Y.; Yang, X.
2018-04-01
To effectively extract the production rules of specific branching plants and realize their 3D reconstruction, terrestrial LiDAR data were used as the source for production extraction, and a 3D reconstruction method based on terrestrial LiDAR technology combined with the L-system is proposed in this article. The topological structure of the plant architecture was extracted from the point cloud data of the target plant with a space level segmentation mechanism. Subsequently, L-system productions were obtained, and the structural parameters and production rules of the branches that fit the given plant were generated. Finally, a three-dimensional simulation model of the target plant was established in combination with a computer visualization algorithm. The results suggest that the method can effectively extract the topology of a given branching plant and describe its productions, realizing the extraction of the topological structure by a computer algorithm and simplifying the extraction of branching plant productions, which would otherwise be complex and time-consuming with the L-system alone. It improves the degree of automation in the L-system extraction of productions of specific branching plants, providing a new way to extract branching plant production rules.
NASA Astrophysics Data System (ADS)
Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas
2016-10-01
In the past two decades Object-Based Image Analysis (OBIA) has established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image based sources such as Airborne Laser Scanner (ALS) point clouds. ALS data is represented in the form of a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data in order to identify and extract buildings as 2D polygons representing roof outlines in a top down mapping approach. We rasterized the ALS data into a height raster to generate a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Further objects were generated in conjunction with point statistics from the linked point cloud. Using class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). In order to demonstrate its adaptation-free transferability to another data set, the algorithm was applied "as is" to the ISPRS Benchmarking data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy of above 80%). The very high performance within the ISPRS Benchmark without any modification of the algorithm and without any adaptation of parameters is particularly noteworthy.
Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen
2016-06-01
High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, vehicles and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision and recall of 90.6% and 91.2%, respectively, in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for the extraction of road features for HADMs.
2001-06-01
(Figure residue: schematic of an SVE system with pneumatic/hydraulic fracturing points, an exposed capillary fringe, and increased advective flow.) …propagate further from the extraction well, increasing the advective flow zone around the well. Pneumatic and hydraulic fracturing are the primary methods…enhancing existing fractures and increasing the secondary fracture network. Hydraulic fracturing involves the injection of water or slurry into the
Extraction of linear features on SAR imagery
NASA Astrophysics Data System (ADS)
Liu, Junyi; Li, Deren; Mei, Xin
2006-10-01
Linear features are usually extracted from SAR imagery by a few edge detectors derived from the contrast ratio edge detector with a constant probability of false alarm. On the other hand, the Hough Transform (HT) is an elegant way of extracting global features like curve segments from binary edge images. The Randomized Hough Transform can drastically reduce the computation time and memory usage of the HT, but it also invalidates a great number of accumulator cells during random sampling. In this paper, we propose a new approach to extract linear features from SAR imagery: an almost automatic algorithm based on edge detection and the Randomized Hough Transform. The presented improved method makes full use of the directional information of each candidate edge point so as to solve the invalid accumulation problem. The applied results are in good agreement with the theoretical study, and the main linear features in the SAR imagery were extracted automatically. The method saves storage space and computational time, which shows its effectiveness and applicability.
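The way per-point direction information removes the need to sample point pairs can be shown in a few lines. The sketch below (hypothetical helper and synthetic test line, not the authors' algorithm) casts one (rho, theta) vote per sampled edge point, since a point plus its edge-normal angle already fixes a candidate line.

```python
import numpy as np
from collections import Counter

def directional_rht(points, directions, n_iter=2000,
                    rho_res=1.0, theta_res=np.deg2rad(1.0)):
    """Randomized Hough Transform sketch exploiting per-point edge direction.

    points: (N, 2) edge pixel coordinates; directions: (N,) edge-normal
    angles. One sampled point defines one line, so far fewer invalid
    accumulator cells are created than with random point pairs."""
    rng = np.random.default_rng(0)
    acc = Counter()
    for _ in range(n_iter):
        i = rng.integers(len(points))
        theta = directions[i] % np.pi                 # line normal angle
        rho = points[i] @ np.array([np.cos(theta), np.sin(theta)])
        acc[(round(rho / rho_res), round(theta / theta_res))] += 1
    (rho_bin, theta_bin), votes = acc.most_common(1)[0]
    return rho_bin * rho_res, theta_bin * theta_res, votes

# Toy test: points on the line x*cos(30 deg) + y*sin(30 deg) = 50.
t = np.linspace(-40, 40, 200)
theta0 = np.deg2rad(30)
line = 50 * np.array([np.cos(theta0), np.sin(theta0)]) \
       + np.outer(t, [-np.sin(theta0), np.cos(theta0)])
print(directional_rht(line, np.full(len(line), theta0)))  # rho close to 50
```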
Jeon, Sangil; Han, Seunghoon; Lee, Jongtae; Hong, Taegon; Yim, Dong-Seok
2012-08-01
We analyzed the pharmacokinetics of C3G in data from twelve subjects after 2-week multiple dosing of black bean (Phaseolus vulgaris, Cheongjakong-3-ho) seed coat extract, using the mixed effect analysis method (NONMEM, Ver. 6.2) as well as the conventional non-compartmental method. We also examined safety and tolerability. The PK analysis used plasma concentrations of C3G on days 1 and 14. There was no observed accumulation of C3G after 2-week multiple dosing of black bean seed coat extract. The typical point estimates of the PK parameters were CL (clearance)=3,420 L/h, V (volume)=7,280 L, Ka (absorption constant)=9.94 h(-1), ALAG (lag time)=0.217 h. The black bean seed coat extract was well tolerated and there were no serious adverse events. In this study, we confirmed that a significant amount of C3G was absorbed in humans after administration of the black bean seed coat extract.
[The progress in speciation analysis of trace elements by atomic spectrometry].
Wang, Zeng-Huan; Wang, Xu-Nuo; Ke, Chang-Liang; Lin, Qin
2013-12-01
The main purpose of the present work is to review the different non-chromatographic methods for the speciation analysis of trace elements in geological, environmental, biological and medical areas. In this paper, the sample processing methods in speciation analysis are summarized, and the main strategies for non-chromatographic techniques are evaluated. The basic principles of the liquid extractions proposed in recently published literature, together with their advantages and disadvantages, are discussed, including conventional solvent extraction, cloud point extraction, single droplet microextraction, and dispersive liquid-liquid microextraction. Solid phase extraction, as a non-chromatographic technique for speciation analysis, can be used in batch or in flow detection, and is especially suitable for online connection to an atomic spectrometric detector. The developments and applications of sorbent materials filled in solid phase extraction columns are reviewed. The sorbents include chelating resins, nanometer materials, molecular and ion imprinted materials, and bio-sorbents. Other techniques, e.g., the hydride generation technique and coprecipitation, are also reviewed together with their main applications.
2.5D multi-view gait recognition based on point cloud registration.
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-03-28
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principle component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y., E-mail: thuzhangyu@foxmail.com; Huang, S. L., E-mail: huangsling@tsinghua.edu.cn; Wang, S.
The time-of-flight of the Lamb wave provides an important basis for defect evaluation in metal plates and is the input signal for Lamb wave tomographic imaging. However, the time-of-flight can be difficult to acquire because of the Lamb wave dispersion characteristics. This work proposes a time-frequency energy density precipitation method to accurately extract the time-of-flight of narrowband Lamb wave detection signals in metal plates. In the proposed method, a discrete short-time Fourier transform is performed on the narrowband Lamb wave detection signals to obtain the corresponding discrete time-frequency energy density distribution. The energy density values at the center frequency for all discrete time points are then calculated by linear interpolation. Next, the time-domain energy density curve focused on that center frequency is precipitated by least squares fitting of the calculated energy density values. Finally, the peak times of the energy density curve obtained relative to the initial pulse signal are extracted as the time-of-flight for the narrowband Lamb wave detection signals. An experimental platform is established for time-of-flight extraction of narrowband Lamb wave detection signals, and sensitivity analysis of the proposed time-frequency energy density precipitation method is performed in terms of propagation distance, dispersion characteristics, center frequency, and plate thickness. For comparison, the widely used Hilbert-Huang transform method is also implemented for time-of-flight extraction. The results show that the time-frequency energy density precipitation method can accurately extract the time-of-flight with relative error of <1% and thus can act as a universal time-of-flight extraction method for narrowband Lamb wave detection signals.
Zhang, Y; Huang, S L; Wang, S; Zhao, W
2016-05-01
The time-of-flight of the Lamb wave provides an important basis for defect evaluation in metal plates and is the input signal for Lamb wave tomographic imaging. However, the time-of-flight can be difficult to acquire because of the Lamb wave dispersion characteristics. This work proposes a time-frequency energy density precipitation method to accurately extract the time-of-flight of narrowband Lamb wave detection signals in metal plates. In the proposed method, a discrete short-time Fourier transform is performed on the narrowband Lamb wave detection signals to obtain the corresponding discrete time-frequency energy density distribution. The energy density values at the center frequency for all discrete time points are then calculated by linear interpolation. Next, the time-domain energy density curve focused on that center frequency is precipitated by least squares fitting of the calculated energy density values. Finally, the peak times of the energy density curve obtained relative to the initial pulse signal are extracted as the time-of-flight for the narrowband Lamb wave detection signals. An experimental platform is established for time-of-flight extraction of narrowband Lamb wave detection signals, and sensitivity analysis of the proposed time-frequency energy density precipitation method is performed in terms of propagation distance, dispersion characteristics, center frequency, and plate thickness. For comparison, the widely used Hilbert-Huang transform method is also implemented for time-of-flight extraction. The results show that the time-frequency energy density precipitation method can accurately extract the time-of-flight with relative error of <1% and thus can act as a universal time-of-flight extraction method for narrowband Lamb wave detection signals.
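A compact illustration of this pipeline, using SciPy's STFT and a nearest-bin lookup in place of the paper's linear interpolation and least-squares fitting, is given below; the burst parameters are made-up test values, not experimental settings.

```python
import numpy as np
from scipy.signal import stft

def tof_at_center_frequency(signal, fs, f_c, nperseg=128):
    """Time-of-flight sketch: short-time Fourier transform of the detection
    signal, the energy-density curve at the excitation center frequency, and
    the peak time of that curve. (Nearest-bin lookup here; the paper
    interpolates and least-squares-fits the energy density values.)"""
    f, t, Z = stft(signal, fs=fs, nperseg=nperseg)
    k = np.argmin(np.abs(f - f_c))    # frequency bin closest to f_c
    energy = np.abs(Z[k]) ** 2        # time-domain energy density at f_c
    return t[energy.argmax()]

fs, f_c = 1e6, 100e3                  # 1 MHz sampling, 100 kHz tone burst
t = np.arange(0, 2e-3, 1 / fs)
burst = np.exp(-((t - 0.8e-3) / 50e-6) ** 2) * np.sin(2 * np.pi * f_c * t)
print(tof_at_center_frequency(burst, fs, f_c))  # close to the 0.8 ms arrival
```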
Recognition and defect detection of dot-matrix text via variation-model based learning
NASA Astrophysics Data System (ADS)
Ohyama, Wataru; Suzuki, Koushi; Wakabayashi, Tetsushi
2017-03-01
An algorithm for the recognition and defect detection of dot-matrix text printed on products is proposed. Extraction and recognition of dot-matrix text involves several difficulties not present in standard camera-based OCR: the appearance of dot-matrix characters is corrupted and broken by illumination, complex texture in the background, and other standard characters printed on product packages. We propose a dot-matrix text extraction and recognition method that does not require any user interaction. The method employs the detected locations of corner points and a classification score. The result of an evaluation experiment using 250 images shows that the recall and precision of extraction are 78.60% and 76.03%, respectively. The recognition accuracy for correctly extracted characters is 94.43%. Detecting printing defects in dot-matrix text is also important in the production setting to avoid illegal products. We also propose a detection method for printing defects in dot-matrix characters. The method constructs a feature vector whose elements are the classification scores of each character class and employs a support vector machine to classify four types of printing defect. The detection accuracy of the proposed method is 96.68%.
NASA Astrophysics Data System (ADS)
Lee, Seon Jeng; Kim, Chaewon; Jung, Seok-Heon; Di Pietro, Riccardo; Lee, Jin-Kyun; Kim, Jiyoung; Kim, Miso; Lee, Mi Jung
2018-01-01
Ambipolar organic field-effect transistors (OFETs) exhibit both hole and electron transport. The characteristics of conjugated diketopyrrolopyrrole ambipolar OFETs depend on the metal-contact surface treatment for charge injection. To investigate the charge-injection characteristics of ambipolar transistors, these devices are processed via various types of self-assembled monolayer treatments and annealing. We conclude that treatment with the self-assembled monolayer 1-decanethiol gives the best enhancement of electron injection at both 100 and 300 °C annealing temperatures. In addition, the contact resistance is calculated using two methods: one is the gated four-point probe (gFPP) method, which gives the voltage drop between channels, and the other is the simultaneous contact-resistance extraction method, which extracts the contact resistance from the general transfer curve. We confirm that the gFPP method and the simultaneous extraction method give similar contact resistances, which means that we can extract the contact resistance from the general transfer curve without any special contact pattern. Based on these characteristics of ambipolar p- and n-type transistors, we fabricate inverter devices with only one active layer.
Hajian, Reza; Mousavi, Esmat; Shams, Nafiseh
2013-06-01
Net analyte signal standard addition method has been used for the simultaneous determination of sulphadiazine and trimethoprim by spectrophotometry in some bovine milk and veterinary medicines. The method combines the advantages of standard addition method with the net analyte signal concept which enables the extraction of information concerning a certain analyte from spectra of multi-component mixtures. This method has some advantages such as the use of a full spectrum realisation, therefore it does not require calibration and prediction step and only a few measurements require for the determination. Cloud point extraction based on the phenomenon of solubilisation used for extraction of sulphadiazine and trimethoprim in bovine milk. It is based on the induction of micellar organised media by using Triton X-100 as an extraction solvent. At the optimum conditions, the norm of NAS vectors increased linearly with concentrations in the range of 1.0-150.0 μmolL(-1) for both sulphadiazine and trimethoprim. The limits of detection (LOD) for sulphadiazine and trimethoprim were 0.86 and 0.92 μmolL(-1), respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard
2016-04-01
Mobile Laser Scanning (MLS) is an evolving operational measurement technique for the urban environment, providing large amounts of high-resolution information about trees, street features and pole-like objects on street sides or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contained a large diversity of tree species. The MLS data consisted of high-density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from the streets. The robust segmentation method contains the following steps. The ground points are determined first. As a second step, cylinders are fitted in a vertical slice at 1-1.5 m relative height above ground, which is used to determine the potential location of each single tree trunk and cylinder-like object. Finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder; these residual values are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted objects are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented individual tree, such as crown base, crown width, crown length, trunk diameter and volume of the individual tree. In the case of incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists, city planners, and planting and mapping purposes. Furthermore, the established database will be the starting point for classifying trees into single species. The MLS data used in this project had been measured in the framework of the KARESZ project for the whole of Budapest. BSz contributed as an Alexander von Humboldt Research Fellow.
Dai, Liping; Cheng, Jing; Matsadiq, Guzalnur; Liu, Lu; Li, Jun-Kai
2010-08-03
In the proposed method, an extraction solvent with a lower toxicity and density than the solvents typically used in dispersive liquid-liquid microextraction was used to extract seven polychlorinated biphenyls (PCBs) from aqueous samples. Owing to the density and melting point of the extraction solvent, the extract, which forms a layer on top of the aqueous sample, can be collected by solidifying it at low temperature, and the solidified phase can then be easily removed from the aqueous phase. Based on preliminary studies, 1-undecanol was selected as the extraction solvent, and a series of parameters that affect the extraction efficiency were systematically investigated. Under the optimized conditions, enrichment factors for the PCBs ranged between 494 and 606. Based on a signal-to-noise ratio of 3, the limits of detection for the method ranged between 3.3 and 5.4 ng L(-1). Good linearity, reproducibility and recovery were also obtained. Copyright © 2010 Elsevier B.V. All rights reserved.
Alternative Methods for Estimating Plane Parameters Based on a Point Cloud
NASA Astrophysics Data System (ADS)
Stryczek, Roman
2017-12-01
Non-contact measurement techniques using triangulation optical sensors are increasingly popular in measurements performed with industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points characterized by considerable measurement noise, the presence of a number of points that do not belong to the reference model, and excessive errors that must be eliminated from the analysis. To obtain vector information about the points in the cloud that describe the reference model, the data obtained during a measurement must be subjected to appropriate processing operations. The present paper analyzes the suitability of the methods known as RANdom SAmple Consensus (RANSAC), the Monte Carlo Method (MCM), and Particle Swarm Optimization (PSO) for the extraction of the reference model. The effectiveness of the tested methods is illustrated by examples of measuring the height of an object and the angle of a plane, based on experiments carried out under workshop conditions.
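Of the three estimators compared, RANSAC is the easiest to sketch. The following minimal plane-fitting example (assumed tolerance and iteration count, not the paper's implementation) shows the sample-score-refine loop on a synthetic noisy plane.

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.01):
    """Minimal RANSAC plane estimator for a noisy point cloud (a sketch of
    the RANSAC approach discussed above, not the authors' code).
    Returns (unit normal n, offset d) with the plane n.x + d = 0."""
    rng = np.random.default_rng(0)
    best_inliers = None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                  # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine on the consensus set by least squares (SVD of centered inliers).
    P = points[best_inliers]
    c = P.mean(axis=0)
    n = np.linalg.svd(P - c)[2][-1]
    return n, -n @ c

pts = np.random.rand(1000, 3)
pts[:, 2] = 0.5 + 0.005 * np.random.randn(1000)  # plane z = 0.5 plus noise
n, d = ransac_plane(pts)
print(n, d)  # n close to (0, 0, +-1), d close to -+0.5
```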
Yet another method for triangulation and contouring for automated cartography
NASA Technical Reports Server (NTRS)
De Floriani, L.; Falcidieno, B.; Nagy, G.; Pienovi, C.
1982-01-01
An algorithm is presented for hierarchical subdivision of a set of three-dimensional surface observations. The data structure used for obtaining the desired triangulation is also singularly appropriate for extracting contours. Some examples are presented, and the results obtained are compared with those given by Delaunay triangulation. The data points selected by the algorithm provide a better approximation to the desired surface than do randomly selected points.
Code of Federal Regulations, 2012 CFR
2012-07-01
40 CFR Protection of Environment, Vol. 31 (2012-07-01): Procedure for Mixing Base Fluids With Sediments (EPA Method 1646), Appendix 3 to Subpart A of Part 435; Environmental Protection Agency (Continued); Effluent Guidelines and Standards (Continued); Oil and Gas Extraction Point...
NASA Astrophysics Data System (ADS)
Tariba, N.; Bouknadel, A.; Haddou, A.; Ikken, N.; Omari, Hafsa El; Omari, Hamid El
2017-01-01
The photovoltaic generator (PVG) has a nonlinear characteristic relating current to voltage, I = f(U), which depends on the variation of solar irradiation and temperature; in addition, its operating point depends directly on the load that it supplies. To fix this drawback and extract the maximum power available at the terminals of the generator, an adaptation stage is introduced between the generator and the load to couple the two elements as perfectly as possible. The adaptation stage is driven by a command called MPPT (Maximum Power Point Tracker), which forces the PVG to operate at the MPP (Maximum Power Point) under variations of climatic conditions and load. This paper presents a comparative study between adaptive controllers for PV systems using the MIT rule and the Lyapunov method to regulate the PV voltage. The Incremental Conductance (IC) algorithm is used to extract the maximum power from the PVG by calculating the reference voltage Vref, and the adaptive controller is used to regulate and quickly track the PV voltage. The two adaptive controller methods are compared to prove their performance using PSIM tools and experimental tests, and the mathematical model of the step-up converter with the PVG model is presented.
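The Incremental Conductance rule referenced above follows from dP/dV = I + V·dI/dV = 0 at the MPP, i.e. dI/dV = -I/V. A minimal sketch of one IC update, with an assumed fixed step size and variable names, is shown below; it is illustrative, not the paper's controller.

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, dv_ref=0.5):
    """One Incremental Conductance (IC) iteration computing the reference
    voltage Vref (sketch; step size dv_ref and names are assumptions).

    At the MPP dI/dV = -I/V; the sign of the mismatch tells the tracker
    which way to move the reference voltage."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:          # irradiance rose at constant voltage
            v_ref += dv_ref
        elif di < 0:
            v_ref -= dv_ref
    else:
        g_inc, g = di / dv, -i / v
        if g_inc > g:       # left of the MPP: increase voltage
            v_ref += dv_ref
        elif g_inc < g:     # right of the MPP: decrease voltage
            v_ref -= dv_ref
    return v_ref            # unchanged when dP/dV is ~0 (at the MPP)

# Example: operating left of the MPP, so the reference voltage is raised.
print(inc_cond_step(v=25.0, i=5.1, v_prev=24.5, i_prev=5.12, v_ref=25.0))
```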
Text-in-Context: A Method for Extracting Findings in Mixed-Methods Mixed Research Synthesis Studies
Leeman, Jennifer; Knafl, Kathleen; Crandell, Jamie L.
2012-01-01
Aim: Our purpose in this paper is to propose a new method for extracting findings from research reports included in mixed-methods mixed research synthesis studies. Background: International initiatives in the domains of systematic review and evidence synthesis have been focused on broadening the conceptualization of evidence, increased methodological inclusiveness and the production of evidence syntheses that will be accessible to and usable by a wider range of consumers. Initiatives in the general mixed-methods research field have been focused on developing truly integrative approaches to data analysis and interpretation. Data source: The data extraction challenges described here were encountered, and the method proposed for addressing these challenges was developed, in the first year of the ongoing (2011-2016) study: Mixed-Methods Synthesis of Research on Childhood Chronic Conditions and Family. Discussion: To preserve the text-in-context of findings in research reports, we describe a method whereby findings are transformed into portable statements that anchor results to relevant information about sample, source of information, time, comparative reference point, magnitude and significance and study-specific conceptions of phenomena. Implications for nursing: The data extraction method featured here was developed specifically to accommodate mixed-methods mixed research synthesis studies conducted in nursing and other health sciences, but reviewers might find it useful in other kinds of research synthesis studies. Conclusion: This data extraction method itself constitutes a type of integration to preserve the methodological context of findings when statements are read individually and in comparison to each other. PMID:22924808
NASA Astrophysics Data System (ADS)
Wu, Peng; Zhang, Yunchang; Lv, Yi; Hou, Xiandeng
2006-12-01
A simple, low cost and highly sensitive method based on cloud point extraction (CPE) for separation/preconcentration and thermospray flame quartz furnace atomic absorption spectrometry was proposed for the determination of ultratrace cadmium in water and urine samples. The analytical procedure involved the formation of analyte-entrapped surfactant micelles by mixing the analyte solution with an ammonium pyrrolidinedithiocarbamate (APDC) solution and a Triton X-114 solution. When the temperature of the system was higher than the cloud point of Triton X-114, the complex of cadmium-PDC entered the surfactant-rich phase and thus separation of the analyte from the matrix was achieved. Under optimal chemical and instrumental conditions, the limit of detection was 0.04 μg/L for cadmium with a sample volume of 10 mL. The analytical results of cadmium in water and urine samples agreed well with those by ICP-MS.
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision with broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction and stereo matching. In the binocular stereo camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard calibration method. In the image feature extraction and stereo matching modules, the SURF operator among local feature operators and the SGBM algorithm among global matching algorithms are adopted respectively, and their performance is compared. After the feature point matching is completed, the correspondence between matching points and 3D object points can be established using the calibrated camera parameters, which yields the 3D information.
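As a rough illustration of the matching and reconstruction stages, the snippet below runs OpenCV's SGBM on a synthetic rectified pair and reprojects the disparity to 3D; the SGBM parameters and the reprojection matrix Q are placeholder values, not calibration results from the paper.

```python
import cv2
import numpy as np

# Synthetic rectified pair: the right image is the left shifted by 16 px,
# i.e. a constant ground-truth disparity of 16.
rng = np.random.default_rng(0)
left = (rng.random((240, 320)) * 255).astype(np.uint8)
right = np.roll(left, -16, axis=1)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,        # smoothness penalties (placeholder values)
    P2=32 * 5 * 5,
)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

# Q would normally come from cv2.stereoRectify during calibration; here a
# dummy reprojection matrix for focal length f, baseline b, principal point.
f, b = 700.0, 0.12
cx, cy = left.shape[1] / 2.0, left.shape[0] / 2.0
Q = np.float32([[1, 0, 0, -cx],
                [0, 1, 0, -cy],
                [0, 0, 0,  f],
                [0, 0, -1.0 / b, 0]])
points_3d = cv2.reprojectImageTo3D(disp, Q)  # HxWx3 metric coordinates
print(np.median(disp[disp > 0]))             # close to the true 16 px
```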
2015-01-01
Potato (Solanum tuberosum L.) is a worldwide food staple, but substantial waste accompanies the cultivation of this crop due to wounding of the outer skin and subsequent unfavorable healing conditions. Motivated by both economic and nutritional considerations, this metabolite profiling study aims to improve understanding of closing layer and wound periderm formation and guide the development of new methods to ensure faster and more complete healing after skin breakage. The polar metabolites of wound-healing tissues from four potato cultivars with differing patterns of tuber skin russeting (Norkotah Russet, Atlantic, Chipeta, and Yukon Gold) were analyzed at three and seven days after wounding, during suberized closing layer formation and nascent wound periderm development, respectively. The polar extracts were assessed using LC-MS and NMR spectroscopic methods, including multivariate analysis and tentative identification of 22 of the 24 biomarkers that discriminate among the cultivars at a given wound-healing time point or between developmental stages. Differences among the metabolites that could be identified from NMR- and MS-derived biomarkers highlight the strengths and limitations of each method, also demonstrating the complementarity of these approaches in terms of assembling a complete molecular picture of the tissue extracts. Both methods revealed that differences among the cultivar metabolite profiles diminish as healing proceeds during the period following wounding. The biomarkers included polyphenolic amines, flavonoid glycosides, phenolic acids and glycoalkaloids. Because wound healing is associated with oxidative stress, the free radical scavenging activities of the extracts from different cultivars were measured at each wounding time point, revealing significantly higher scavenging activity of the Yukon Gold periderm especially after 7 days of wounding. PMID:24998264
Wang, Tao; He, Fuhong; Zhang, Anding; Gu, Lijuan; Wen, Yangmao; Jiang, Weiguo; Shao, Hongbo
2014-01-01
This paper took a subregion of a small watershed gully system at the Beiyanzikou catchment of Qixia, China, as a study area and, using object-oriented image analysis (OBIA), extracted the shoulder lines of gullies from high spatial resolution digital orthophoto map (DOM) aerial photographs. Next, it proposed an accuracy assessment method based on the adjacent distance between the boundary classified by remote sensing and points measured by RTK-GPS along the shoulder line of the gullies. Finally, the original surface was fitted using linear regression in accordance with the elevation of the two extracted edges of the experimental gullies, named Gully 1 and Gully 2, and the erosion volume was calculated. The results indicate that OBIA can effectively extract gully information; the average distance between the field-measured points along the edges of the gullies and the classified boundary is 0.3166 m, with a variance of 0.2116 m. The erosion areas and volumes of the two gullies are 2141.6250 m(2), 5074.1790 m(3) and 1316.1250 m(2), 1591.5784 m(3), respectively. The results of the study provide a new method for the quantitative study of small gully erosion.
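The adjacent-distance accuracy measure lends itself to a short sketch: nearest-neighbor distances from the RTK-GPS check points to the classified boundary, summarized by mean and variance. The helper below and its toy data are illustrative assumptions, not the study's code.

```python
import numpy as np
from scipy.spatial import cKDTree

def boundary_accuracy(classified_pts, gps_pts):
    """Accuracy measure in the spirit of the paper: for every RTK-GPS check
    point measured along the gully shoulder line, take the distance to the
    nearest point of the classified boundary, then report mean and variance."""
    tree = cKDTree(np.asarray(classified_pts, dtype=float))
    d, _ = tree.query(np.asarray(gps_pts, dtype=float))
    return d.mean(), d.var()

# Toy check: a densely sampled boundary and GPS points offset by ~0.3 m.
x = np.linspace(0, 50, 500)
boundary = np.c_[x, np.sin(x / 5.0)]
gps = boundary[::25] + np.random.default_rng(1).normal(0, 0.3, (20, 2))
print(boundary_accuracy(boundary, gps))
```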
Wen, Yingying; Li, Jinhua; Liu, Junshen; Lu, Wenhui; Ma, Jiping; Chen, Lingxin
2013-07-01
A dual cloud point extraction (dCPE) off-line enrichment procedure coupled with a hydrodynamic-electrokinetic two-step injection online enrichment technique was successfully developed for simultaneous preconcentration of trace phenolic estrogens (hexestrol, dienestrol, and diethylstilbestrol) in water samples followed by micellar electrokinetic chromatography (MEKC) analysis. Several parameters affecting the extraction and online injection conditions were optimized. Under optimal dCPE-two-step injection-MEKC conditions, detection limits of 7.9-8.9 ng/mL and good linearity in the range from 0.05 to 5 μg/mL with correlation coefficients R(2) ≥ 0.9990 were achieved. Satisfactory recoveries ranging from 83 to 108% were obtained with lake and tap water spiked at 0.1 and 0.5 μg/mL, respectively, with relative standard deviations (n = 6) of 1.3-3.1%. This method was demonstrated to be convenient, rapid, cost-effective, and environmentally benign, and could be used as an alternative to existing methods for analyzing trace residues of phenolic estrogens in water samples.
Critical Point Cancellation in 3D Vector Fields: Robustness and Discussion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skraba, Primoz; Rosen, Paul; Wang, Bei
Vector field topology has been successfully applied to represent the structure of steady vector fields. Critical points, one of the essential components of vector field topology, play an important role in describing the complexity of the extracted structure. Simplifying vector fields via critical point cancellation has practical merit for interpreting the behaviors of complex vector fields such as turbulence. However, there is no effective technique that allows direct cancellation of critical points in 3D. This work fills this gap and introduces the first framework to directly cancel pairs or groups of 3D critical points in a hierarchical manner with a guaranteed minimum amount of perturbation based on their robustness, a quantitative measure of their stability. In addition, our framework does not require the extraction of the entire 3D topology, which contains non-trivial separation structures, and thus is computationally effective. Furthermore, our algorithm can remove critical points in any subregion of the domain whose degree is zero and handle complex boundary configurations, making it capable of addressing challenging scenarios that may not be resolved otherwise. Here, we apply our method to synthetic and simulation datasets to demonstrate its effectiveness.
Critical Point Cancellation in 3D Vector Fields: Robustness and Discussion.
Skraba, Primoz; Rosen, Paul; Wang, Bei; Chen, Guoning; Bhatia, Harsh; Pascucci, Valerio
2016-02-29
Vector field topology has been successfully applied to represent the structure of steady vector fields. Critical points, one of the essential components of vector field topology, play an important role in describing the complexity of the extracted structure. Simplifying vector fields via critical point cancellation has practical merit for interpreting the behaviors of complex vector fields such as turbulence. However, there is no effective technique that allows direct cancellation of critical points in 3D. This work fills this gap and introduces the first framework to directly cancel pairs or groups of 3D critical points in a hierarchical manner with a guaranteed minimum amount of perturbation based on their robustness, a quantitative measure of their stability. In addition, our framework does not require the extraction of the entire 3D topology, which contains non-trivial separation structures, and thus is computationally effective. Furthermore, our algorithm can remove critical points in any subregion of the domain whose degree is zero and handle complex boundary configurations, making it capable of addressing challenging scenarios that may not be resolved otherwise. We apply our method to synthetic and simulation datasets to demonstrate its effectiveness.
Automatic extraction of the mid-sagittal plane using an ICP variant
NASA Astrophysics Data System (ADS)
Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus
2008-03-01
Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used for the definition of standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach for mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a tedious, time-consuming and error-prone task, which requires great care. In order to overcome this drawback, previously it was suggested to use the iterative closest point (ICP) algorithm: After an initial mirroring of the data points on a default mirror plane, the mirrored data points should be registered iteratively to the model points using rigid transforms. Finally, a reflection transform approximating the cumulative transform could be extracted. In this work, we present an ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets our method showed a better ability to match homologous areas.
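The closed-form step can be written as a Procrustes problem with the determinant constrained to -1. The sketch below (hypothetical helper; a single closed-form fit rather than the full iterative ICP variant described above) recovers a mirror plane from a mirrored point set.

```python
import numpy as np

def fit_reflection(data, model):
    """Closed-form least-squares reflection matching data points to model
    points: the Kabsch/Procrustes construction with the determinant of the
    orthogonal matrix forced to -1 instead of +1 (a sketch of the building
    block, not the authors' code).

    Returns the mirror plane as a unit normal n and offset c with n.x = c."""
    X, Y = np.asarray(data, float), np.asarray(model, float)
    xm, ym = X.mean(axis=0), Y.mean(axis=0)
    U, _, Vt = np.linalg.svd((X - xm).T @ (Y - ym))
    d = np.ones(3)
    d[-1] = -np.sign(np.linalg.det(Vt.T @ U.T))  # force det(R) = -1
    R = Vt.T @ np.diag(d) @ U.T
    t = ym - R @ xm
    # A reflection about n.x = c maps x to (I - 2nn^T)x + 2cn, so n is the
    # eigenvector of R for eigenvalue -1 and c = (n.t)/2.
    w, V = np.linalg.eig(R)
    n = np.real(V[:, np.argmin(np.abs(w + 1.0))])
    n /= np.linalg.norm(n)
    return n, float(n @ t) / 2.0

# Toy check: mirror a random point set about the plane x = 1 and recover it.
pts = np.random.default_rng(0).random((100, 3))
mirrored = pts.copy()
mirrored[:, 0] = 2.0 - mirrored[:, 0]
print(fit_reflection(pts, mirrored))  # n close to (+-1, 0, 0), c close to +-1
```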
Critical Point Cancellation in 3D Vector Fields: Robustness and Discussion
Skraba, Primoz; Rosen, Paul; Wang, Bei; ...
2016-02-29
Vector field topology has been successfully applied to represent the structure of steady vector fields. Critical points, one of the essential components of vector field topology, play an important role in describing the complexity of the extracted structure. Simplifying vector fields via critical point cancellation has practical merit for interpreting the behaviors of complex vector fields such as turbulence. However, there is no effective technique that allows direct cancellation of critical points in 3D. This work fills this gap and introduces the first framework to directly cancel pairs or groups of 3D critical points in a hierarchical manner with a guaranteed minimum amount of perturbation based on their robustness, a quantitative measure of their stability. In addition, our framework does not require the extraction of the entire 3D topology, which contains non-trivial separation structures, and thus is computationally effective. Furthermore, our algorithm can remove critical points in any subregion of the domain whose degree is zero and handle complex boundary configurations, making it capable of addressing challenging scenarios that may not be resolved otherwise. Here, we apply our method to synthetic and simulation datasets to demonstrate its effectiveness.
Wang, Jiaming; Gambetta, Joanna M; Jeffery, David W
2016-05-18
Two rosé wines, representing a tropical and a fruity/floral style, were chosen from a previous study for further exploration by aroma extract dilution analysis (AEDA) and quantitative analysis. Volatiles were extracted using either liquid-liquid extraction (LLE) followed by solvent-assisted flavor evaporation (SAFE) or a recently developed dynamic headspace (HS) sampling method utilizing solid-phase extraction (SPE) cartridges. AEDA was conducted using gas chromatography-mass spectrometry/olfactometry (GC-MS/O) and a total of 51 aroma compounds with a flavor dilution (FD) factor ≥3 were detected. Quantitative analysis of 92 volatiles was undertaken in both wines for calculation of odor activity values. The fruity and floral wine style was mostly driven by 2-phenylethanol, β-damascenone, and a range of esters, whereas 3-SHA and several volatile acids were seen as essential for the tropical style. When extraction methods were compared, HS-SPE was as efficient as SAFE for extracting most esters and higher alcohols, which were associated with fruity and floral characters, but it was difficult to capture volatiles with greater polarity or higher boiling point that may still be important to perceived wine aroma.
Techno-economical evaluation of protein extraction for microalgae biorefinery
NASA Astrophysics Data System (ADS)
Sari, Y. W.; Sanders, J. P. M.; Bruins, M. E.
2016-01-01
Due to the scarcity of fossil feedstocks, there is an increasing demand for biobased fuels. Microalgae are considered promising biobased feedstocks. However, microalgae-based fuels are not yet produced at large scale. Applying biorefinery, not only for oil but also for other components such as carbohydrates and protein, may lead to sustainable and economical microalgae-based fuels. This paper discusses two relatively mild conditions for microalgal protein extraction, based on alkali and on enzymes. Green microalgae (Chlorella fusca) with and without prior lipid removal were used as feedstocks. Under mild conditions, more protein could be extracted using proteases, with the highest yields for microalgae meal (without lipids). The data on protein extraction yields were used to calculate the costs of producing 1 ton of microalgal protein. The processing cost of the alkaline method was €2448/ton protein. The enzymatic method performed better from an economic point of view, with €1367/ton protein in processing costs. However, this is still far from industrially feasible. For both extraction methods, the biomass cost per ton of produced product was high. A higher protein extraction yield can partially solve this problem, lowering processing costs to €620 and €1180/ton protein product using alkali and enzymes, respectively. Although the alkaline method has lower processing costs, optimization appears to be more achievable with enzymes: if the enzymatic method is optimized by lowering the amount of alkali added, the processing cost drops to €633/ton protein product. Higher revenue can be generated if the residue after protein extraction can be sold as fuel, or better, as a highly digestible feed for cattle.
Ingenious Snake: An Adaptive Multi-Class Contours Extraction
NASA Astrophysics Data System (ADS)
Li, Baolin; Zhou, Shoujun
2018-04-01
Active contour models (ACMs) play an important role in computer vision and medical image applications. Traditional ACMs were used to extract a single class of object contours, while the simultaneous extraction of multiple classes of contours of interest (i.e., various contours that are closed or open-ended) has not been solved so far. Therefore, a novel ACM named "Ingenious Snake" is proposed to adaptively extract these contours of interest. In the first place, ridge points are extracted based on the local phase measurement of the gradient vector flow field, and the subsequent ridgeline initialization is automated and fast. Secondly, the contours' deformation and evolution are implemented with the ingenious snake. In the experiments, the results of initialization, deformation and evolution are compared with those of existing methods. The quantitative evaluation of the structure extraction is satisfactory with respect to effectiveness and accuracy.
Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data
NASA Astrophysics Data System (ADS)
Du, L.; Zhong, R.; Sun, H.; Wu, Q.
2017-09-01
An automated method for tunnel deformation monitoring using high-density point cloud data is presented. Firstly, the 3D point cloud data are converted to a two-dimensional surface by projection on the XOY plane; the projection point set of the central axis on the XOY plane, named Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sample Consensus) algorithm, and the projection point set of the central axis on the YOZ plane, named Uyoz, is then obtained from the highest and lowest points, which are extracted by intersecting straight lines that pass through each point of Uxoy and are perpendicular to the two-dimensional surface with the tunnel point cloud; Uxoy and Uyoz together form the 3D central axis. Secondly, the buffer of each cross section is calculated by the K-nearest neighbor algorithm, and the initial cross-sectional point set is quickly constructed by the projection method. Finally, the cross sections are denoised and the section lines are fitted using iterative ellipse fitting. In order to improve the accuracy of the cross sections, a fine adjustment method is proposed that rotates the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is applied to a Shanghai subway tunnel, and the deformation of each section in the direction of 0 to 360 degrees is calculated. The results show that the cross sections become flattened circles rather than regular circles due to the great pressure at the top of the tunnel.
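The iterative ellipse fitting of the denoised sections can be illustrated with a simple algebraic least-squares conic fit wrapped in an outlier-rejection loop. This is a sketch of the general technique under our own assumptions, not the paper's exact procedure.

```python
import numpy as np

def fit_conic(xy):
    """Algebraic least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    to 2D cross-section points `xy` of shape (N, 2)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones(len(xy)), rcond=None)
    return coeffs

def iterative_ellipse_fit(xy, n_iter=5, k=2.0):
    """Refit after discarding points with large algebraic residuals,
    mimicking an iterative denoise-and-fit loop."""
    pts = xy
    for _ in range(n_iter):
        c = fit_conic(pts)
        x, y = pts[:, 0], pts[:, 1]
        r = np.abs(np.column_stack([x * x, x * y, y * y, x, y]) @ c - 1.0)
        pts = pts[r < k * r.std()]        # keep only low-residual points
    return fit_conic(pts)
```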
Hartmann, Georg; Baumgartner, Tanja; Schuster, Michael
2014-01-07
For the quantification of silver nanoparticles (Ag-NPs) in environmental samples using cloud point extraction (CPE) for selective enrichment, surface modification of the Ag-NPs and matrix effects can play a key role. In this work we validate CPE with respect to the influence of different coatings and naturally occurring matrix components. The Ag-NPs tested were functionalized with inorganic and organic compounds as well as with biomolecules. Commercially available NPs and NPs synthesized according to methods published in the literature were used. We found that CPE can extract almost all Ag-NPs tested with very good efficiencies (82-105%). Only Ag-NPs functionalized with BSA (bovine serum albumin), a protein whose function is to keep colloids in solution, cannot be extracted. No or little effect of environmentally relevant salts, organic matter, and inorganic colloids on the CPE of Ag-NPs was found. Additionally, we used CPE to observe the in situ formation of Ag-NPs produced by the reduction of Ag⁺ with natural organic matter (NOM).
Branavan, Manoharanehru; Mackay, Ruth E; Craw, Pascal; Naveenathayalan, Angel; Ahern, Jeremy C; Sivanesan, Tulasi; Hudson, Chris; Stead, Thomas; Kremer, Jessica; Garg, Neha; Baker, Mark; Sadiq, Syed T; Balachandran, Wamadeva
2016-08-01
This paper presents the design of a modular point-of-care test platform that integrates a proprietary sample collection device directly with a microfluidic cartridge. Cell lysis within the cartridge is conducted using a chemical method, and nucleic acid purification is done on an activated cellulose membrane. The microfluidic device incorporates passive mixing of the lysis-binding buffers and sample using a serpentine channel. Results have shown extraction efficiencies for this new membrane of 69% and 57%, compared to 85% and 59.4% for the commercial Qiagen extraction method, for 0.1 ng/µL and 100 ng/µL salmon sperm DNA respectively spiked in phosphate buffered solution. Extraction experiments using the serpentine passive mixer cartridges incorporating lysis and nucleic acid purification showed extraction efficiency around 80% of the commercial Qiagen kit. Isothermal amplification was conducted using thermophilic helicase-dependent amplification and recombinase polymerase amplification. A low-cost benchtop real-time isothermal amplification platform has been developed, capable of running six amplifications simultaneously. Results show that the platform is capable of detecting 1.32×10⁶ copies of sample DNA through thermophilic helicase-dependent amplification and 1×10⁵ copies of Chlamydia trachomatis genomic DNA within 10 min through recombinase polymerase nucleic acid amplification tests.
Gillespie, Peter J.; Gambus, Agnieszka; Blow, J. Julian
2012-01-01
The use of cell-free extracts prepared from eggs of the South African clawed toad, Xenopus laevis, has led to many important discoveries in cell cycle research. These egg extracts recapitulate the key nuclear transitions of the eukaryotic cell cycle in vitro under apparently the same controls that exist in vivo. DNA added to the extract is first assembled into a nucleus and is then efficiently replicated. Progression of the extract into mitosis then allows the separation of paired sister chromatids. The Xenopus cell-free system is therefore uniquely suited to the study of the mechanisms, dynamics and integration of cell cycle regulated processes at a biochemical level. In this article we describe methods currently in use in our laboratory for the preparation of Xenopus egg extracts and demembranated sperm nuclei for the study of DNA replication in vitro. We also detail how DNA replication can be quantified in this system. In addition, we describe methods for isolating chromatin and chromatin-bound protein complexes from egg extracts. These recently developed and revised techniques provide a practical starting point for investigating the function of proteins involved in DNA replication. PMID:22521908
Building Facade Modeling Under Line Feature Constraint Based on Close-Range Images
NASA Astrophysics Data System (ADS)
Liang, Y.; Sheng, Y. H.
2018-04-01
To solve existing problems in modeling building facades merely with point features based on close-range images, a new method for modeling building facades under a line feature constraint is proposed in this paper. Firstly, camera parameters and a sparse spatial point cloud were recovered using SfM, and a 3D dense point cloud was generated with MVS. Secondly, line features were detected based on the gradient direction, the detected line features were fitted considering their directions and lengths, and the line features were then matched under multiple types of constraints and extracted from the multi-image sequence. Finally, the facade mesh of the building was triangulated from the point cloud and line features. The experiment shows that this method can effectively reconstruct the geometric facade of buildings by exploiting the advantages of combining point and line features of the close-range image sequence, especially in restoring the contour information of building facades.
Enhancing Biomedical Text Summarization Using Semantic Relation Extraction
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
2011-01-01
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) we extract semantic relations in each sentence using the semantic knowledge representation tool SemRep; 2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; 3) for relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization. PMID:21887336
Graph-based geometric-iconic guide-wire tracking.
Honnorat, Nicolas; Vaillant, Régis; Paragios, Nikos
2011-01-01
In this paper we introduce a novel hybrid graph-based approach for guide-wire tracking. The image support is captured by steerable filters and improved through tensor voting. Then, a graphical model is considered that represents guide-wire extraction/tracking through a B-spline control-point model. Points with strong geometric interest (landmarks) are automatically determined and anchored to such a representation. Tracking is then performed through discrete MRFs that optimize the spatio-temporal positions of the control points while establishing landmark temporal correspondences. Promising results demonstrate the potential of our method.
Automated real-time search and analysis algorithms for a non-contact 3D profiling system
NASA Astrophysics Data System (ADS)
Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.
2013-04-01
The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical to the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex search and geometrical feature templates. By performing downhill simplex search through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time provides significant cost-saving opportunities in both equipment protection and waste minimization.
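The downhill simplex search over a geometric feature template maps naturally onto SciPy's Nelder-Mead optimizer. The residual model and names below are illustrative assumptions, not the production algorithms described above.

```python
import numpy as np
from scipy.optimize import minimize

def template_residual(params, profile_x, profile_z, template):
    """Sum of squared differences between a measured surface profile and a
    geometric template shifted/scaled by params = (x0, z0, scale)."""
    x0, z0, scale = params
    model = z0 + scale * np.interp(profile_x - x0,
                                   template[:, 0], template[:, 1])
    return np.sum((profile_z - model) ** 2)

def locate_feature(profile_x, profile_z, template, guess=(0.0, 0.0, 1.0)):
    """Downhill simplex (Nelder-Mead) search for the template placement
    that best matches the extracted profile."""
    res = minimize(template_residual, guess,
                   args=(profile_x, profile_z, template),
                   method="Nelder-Mead")
    return res.x                                 # best-fit (x0, z0, scale)
```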
[Preemptive analgesia with loxoprofen sodium orally in extraction of impacted teeth].
Meng, T; Zhang, Z Y; Zhang, X; Chen, Y H; Li, J Q; Chen, Q; Liu, W S; Gao, W
2018-02-18
To investigate the effectiveness of preemptive analgesia with orally administered loxoprofen sodium, a non-steroidal anti-inflammatory drug, in extractions of mandibular impacted third molars. Questionnaires about postoperative pain were given to patients whose mandibular impacted third molars were extracted from July 2017 to August 2017 in the First Clinical Division of Peking University School and Hospital of Stomatology. All the patients underwent routine clinical and imaging examinations. After their mandibular impacted third molars were extracted, the questionnaires were sent to them, filled in by the patients on their own, and returned one week later. Of the 120 questionnaires sent, 105 were returned, of which 98 were filled in completely. According to the inclusion and exclusion criteria, 66 questionnaires were selected for this study. According to the time when the patients first took loxoprofen sodium orally, the patients were divided into 3 groups. The first group comprised patients who did not take loxoprofen sodium during their extractions (non-medicine group). The second group comprised patients who took 60 mg loxoprofen sodium 30 min before their extractions (preoperative group). The third group comprised patients who took 60 mg loxoprofen sodium 30 min after their extractions (postoperative group). The operation time among the 3 groups was analyzed by the Kruskal-Wallis method. The postoperative time points were 2, 4, 12, 24 and 48 h after operation. The visual analogue scale (VAS) scores for postoperative pain in each group at the different postoperative time points were analyzed by the Friedman method. At each postoperative time point, the VAS scores of the different groups were analyzed by the Kruskal-Wallis method. The numbers of patients taking loxoprofen sodium home and drug adverse reactions were also analyzed. The operation time of the 3 groups was 15.0 (5.0, 30.0) min and showed no significant differences (P=0.848). The VAS scores of the non-medicine group 2, 4, 12, 24 and 48 h after operation were 1.75 (0.1, 10.0), 6.25 (1.5, 10.0), 2.00 (0.1, 8.0), 2.00 (0.1, 6.0) and 0.5 (0.1, 5.5) respectively and differed significantly (P<0.001). The VAS score at 4 h after operation was higher than the VAS scores at the other postoperative time points (P<0.005). Four hours after the operations, the VAS scores of the preoperative group [2.0 (0.1, 10.0)] and the postoperative group [2.0 (0.1, 5.0)] were significantly lower than those of the non-medicine group [6.25 (1.5, 10.0)] (P<0.001). The numbers of patients taking loxoprofen sodium home were 9 (40.9%) in the non-medicine group, 5 (21.8%) in the preoperative group and 7 (33.3%) in the postoperative group. The numbers of patients who had drug adverse reactions in the preoperative group (n=3, 13.0%) and in the postoperative group (n=4, 19.0%) were lower than in the non-medicine group (n=8, 36.4%). There were two protocols of preemptive analgesia with loxoprofen sodium orally in extractions of mandibular impacted third molars: taking 60 mg loxoprofen sodium orally 30 min before the extractions, and taking 60 mg orally 30 min after the extractions. Both preemptive analgesia protocols decreased postoperative pain significantly.
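The statistical comparisons described above (Kruskal-Wallis across independent groups, Friedman across repeated time points) map directly onto SciPy; the sketch below uses made-up VAS arrays purely to show the calls, not the study's data.

```python
from scipy.stats import kruskal, friedmanchisquare

# Hypothetical VAS scores at 4 h post-operation for the three groups.
vas_none = [6.5, 7.0, 5.5, 8.0, 6.0]
vas_pre  = [2.0, 3.0, 1.5, 2.5, 2.0]
vas_post = [2.0, 2.5, 1.0, 3.0, 2.0]

# Kruskal-Wallis: do the three independent groups differ at this time point?
H, p = kruskal(vas_none, vas_pre, vas_post)
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")

# Friedman: do repeated measurements within one group differ across the
# post-operative time points (2, 4, 12, 24, 48 h)? One list per time
# point, aligned by patient.
t2, t4, t12, t24, t48 = ([1.5, 2.0, 1.0], [6.0, 6.5, 5.5],
                         [2.0, 2.5, 1.5], [2.0, 1.5, 2.5], [0.5, 1.0, 0.0])
chi2, p2 = friedmanchisquare(t2, t4, t12, t24, t48)
print(f"Friedman chi2 = {chi2:.2f}, p = {p2:.4f}")
```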
Xie, Wei-Qi; Chai, Xin-Sheng
2016-04-22
This paper describes a new method for the rapid determination of the moisture content in paper materials. The method is based on multiple headspace extraction gas chromatography (MHE-GC) at a temperature above the boiling point of water, from which an integrated water loss from the tested sample due to evaporation can be measured and the moisture content in the sample determined. The results show that the new method has good precision (relative standard deviation <0.96%), high sensitivity (limit of quantitation = 0.005%) and good accuracy (relative differences <1.4%). The method is therefore well suited to many research and industrial applications.
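In MHE-GC the peak areas of successive headspace extractions decay geometrically, so the integrated (total) water loss can be extrapolated from a few extractions. A worked sketch under that standard MHE assumption; the calibration from total area to moisture content is omitted.

```python
import numpy as np

def mhe_total_area(areas):
    """Total analyte peak area from multiple headspace extraction, assuming
    the standard geometric decay A_i = A_1 * q**(i-1), whose infinite sum
    is A_1 / (1 - q). `areas` holds the successive measured peak areas."""
    areas = np.asarray(areas, dtype=float)
    i = np.arange(len(areas))
    slope, intercept = np.polyfit(i, np.log(areas), 1)  # log-linear fit
    q, A1 = np.exp(slope), np.exp(intercept)
    return A1 / (1.0 - q)

# Example: simulated peak areas decaying by 30% per extraction step.
print(mhe_total_area([1000.0, 700.0, 490.0, 343.0]))    # ~3333.3
```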
2.5D Multi-View Gait Recognition Based on Point Cloud Registration
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-01-01
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space, enabling data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727
Tele-autonomous control involving contact: object localization (Final Report Thesis)
NASA Technical Reports Server (NTRS)
Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.
1990-01-01
Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features: feature points (point-to-point matching) and feature unit direction vectors (vector-to-vector matching) can also be used as inputs, and there is no upper limit on the number of features input. The algorithm allows the use of redundant features to find a better solution. It uses dual-number quaternions to represent the position and orientation of an object and uses the least-squares optimization method to find an optimal solution for the object's location. The advantage of this representation is that the method solves the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better on the estimation, both in accuracy and speed, than other similar algorithms. The difficulties when an operator controls a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. It is then discussed how object localization techniques can be used together with other techniques, such as predictor displays and time desynchronization, to help overcome these difficulties.
Single shot laser speckle based 3D acquisition system for medical applications
NASA Astrophysics Data System (ADS)
Khan, Danish; Shirazi, Muhammad Ayaz; Kim, Min Young
2018-06-01
The state-of-the-art techniques used by medical practitioners to extract the three-dimensional (3D) geometry of different body parts require a series of images/frames, as in laser line profiling or structured light scanning. Movement of the patient during the scanning process often leads to inaccurate measurements due to the sequential image acquisition. Single-shot structured-light techniques are robust to motion, but their prevalent challenges are low point density and algorithmic complexity. In this research, a single-shot 3D measurement system is presented that extracts the 3D point cloud of human skin by projecting a laser speckle pattern and using a single pair of images captured by two synchronized cameras. In contrast to conventional laser speckle 3D measurement systems that establish stereo correspondence by digital correlation of the projected speckle patterns, the proposed system employs the KLT tracking method to locate the corresponding points. The 3D point cloud contains no outliers, and sufficient quality of 3D reconstruction is achieved. The 3D shape acquisition of human body parts validates the potential application of the proposed system in the medical industry.
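Locating stereo correspondences with KLT tracking and triangulating them can be sketched with OpenCV. The projection matrices P1 and P2 are assumed to come from a prior stereo calibration; all names are illustrative, and this is not the authors' implementation.

```python
import cv2
import numpy as np

def speckle_point_cloud(img_left, img_right, P1, P2, max_pts=5000):
    """Detect speckle features in the left image, find their matches in the
    right image with KLT tracking, and triangulate to a 3D point cloud.
    P1, P2 are the 3x4 camera projection matrices from calibration."""
    pts_l = cv2.goodFeaturesToTrack(img_left, maxCorners=max_pts,
                                    qualityLevel=0.01, minDistance=3)
    pts_r, status, _err = cv2.calcOpticalFlowPyrLK(img_left, img_right,
                                                   pts_l, None)
    ok = status.ravel() == 1
    pl = pts_l[ok].reshape(-1, 2).T               # 2xN left image points
    pr = pts_r[ok].reshape(-1, 2).T               # 2xN right image points
    X = cv2.triangulatePoints(P1, P2, pl, pr)     # 4xN homogeneous points
    return (X[:3] / X[3]).T                       # Nx3 point cloud
```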
NASA Astrophysics Data System (ADS)
Li, Jianping; Yang, Bisheng; Chen, Chi; Huang, Ronggang; Dong, Zhen; Xiao, Wen
2018-02-01
Inaccurate exterior orientation parameters (EoPs) between sensors obtained by pre-calibration lead to failure of the registration between panoramic image sequences and mobile laser scanning data. To address this challenge, this paper proposes an automatic registration method based on semantic features extracted from panoramic images and point clouds. Firstly, accurate rotation parameters between the panoramic camera and the laser scanner are estimated using GPS- and IMU-aided structure from motion (SfM); the initial EoPs of the panoramic images are obtained at the same time. Secondly, vehicles in the panoramic images are extracted by Faster R-CNN as candidate primitives to be matched with potential corresponding primitives in the point clouds according to the initial EoPs. Finally, the translation between the panoramic camera and the laser scanner is refined by maximizing the overlapping area of corresponding primitive pairs based on Particle Swarm Optimization (PSO), resulting in a finer registration between the panoramic image sequences and the point clouds. Experiments on two challenging urban scenes were used to assess the proposed method; the final registration errors of the two scenes were both less than three pixels, which demonstrates a high level of automation, robustness and accuracy.
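The PSO refinement of the camera-to-scanner translation can be sketched generically. The objective overlap(t), the overlapping area of matched vehicle primitives as a function of the translation t, is assumed to be supplied by the rest of the pipeline.

```python
import numpy as np

def pso_maximize(overlap, dim=3, n_particles=30, n_iter=100,
                 bounds=(-1.0, 1.0), w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimization maximizing overlap(t)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # per-particle best
    pval = np.array([overlap(p) for p in x])
    g = pbest[pval.argmax()].copy()               # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([overlap(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmax()].copy()
    return g
```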
NASA Astrophysics Data System (ADS)
Li, Jiekang; Li, Guirong; Han, Qian
2016-12-01
In this paper, two kinds of salophens (Sal) with different solubilities, Sal1 and Sal2, were synthesized; both can combine with uranyl to form stable complexes: [UO₂²⁺-Sal1] and [UO₂²⁺-Sal2]. Among them, [UO₂²⁺-Sal1] was used as a ligand to extract uranium from complex samples by dual cloud point extraction (dCPE), and [UO₂²⁺-Sal2] was used as a catalyst for the determination of uranium by a photocatalytic resonance fluorescence (RF) method. The photocatalytic effect of [UO₂²⁺-Sal2] on the oxidation of pyronine Y (PRY) by potassium bromate, which leads to a decrease in the RF intensity of PRY, was studied. The decrease in the RF intensity of the reaction system (ΔF) is proportional to the concentration of uranium (c), and a novel photocatalytic RF method was developed for the determination of trace uranium(VI) after dCPE. The combination of the photocatalytic RF technique and the dCPE procedure endows the presented method with enhanced sensitivity and selectivity. Under optimal conditions, the calibration curve was linear in the range of 0.067 to 6.57 ng mL⁻¹; the linear regression equation was ΔF = 438.0c (ng mL⁻¹) + 175.6 with a correlation coefficient r = 0.9981. The limit of detection was 0.066 ng mL⁻¹. The proposed method was successfully applied to the separation and determination of uranium in real samples with recoveries of 95.0-103.5%. The mechanisms of the indicator reaction and dCPE are discussed.
Mohd, N I; Zain, N N M; Raoov, M; Mohamad, S
2018-04-01
A new cloud point methodology was successfully used for the extraction of carcinogenic pesticides from milk samples as a prior step to their determination by spectrophotometry. In this work, a non-ionic silicone surfactant, 3-(3-hydroxypropyl)-heptamethyltrisiloxane, was chosen as a green extraction solvent because of its structure and properties. The effect of different parameters, such as the type of surfactant, concentration and volume of surfactant, pH, salt, temperature, incubation time and water content, on the cloud point extraction of carcinogenic pesticides such as atrazine and propazine was studied in detail, and a set of optimum conditions was established. A good correlation coefficient (R²) in the range of 0.991-0.997 was obtained for all calibration curves. The limit of detection was 1.06 µg l⁻¹ (atrazine) and 1.22 µg l⁻¹ (propazine), and the limit of quantitation was 3.54 µg l⁻¹ (atrazine) and 4.07 µg l⁻¹ (propazine). Satisfactory recoveries in the range of 81-108% were determined in milk samples spiked at 5 and 1000 µg l⁻¹, with low relative standard deviations (n = 3) of 0.301-7.45% in milk matrices. The proposed method is very convenient, rapid, cost-effective and environmentally friendly for food analysis.
Influence of crisp values on the object-based data extraction procedure from LiDAR data
NASA Astrophysics Data System (ADS)
Tomljenovic, Ivan; Rousell, Adam
2014-05-01
Nowadays a plethora of approaches attempt to automate the process of object extraction from LiDAR data. However, the majority of these methods require the fusion of the LiDAR dataset with other information such as photogrammetric imagery. The approach used as the basis for this paper is a novel method that makes use of human knowledge and the CNL modelling language to automatically extract buildings solely from LiDAR point cloud data in a transferable manner. A number of rules are implemented to generate an artificial intelligence algorithm which is used for the object extraction. Although the single-dataset method has been found to successfully extract building footprints from the point cloud dataset, at this initial stage it has one restriction that may limit its effectiveness: a number of the rules that are used are based on crisp boundary values. If, for example, the slope of the ground surface is used as a rule for determining objects, then the slope value of a pixel would be assessed to determine whether it is suitable for a building structure. This check would be performed by identifying whether the slope value is less than or greater than a threshold value. In reality, however, such a crisp classification process is likely not a true reflection of real-world scenarios. For example, using the crisp methods, a difference of 1° in slope could result in one region in a dataset being deemed suitable and its neighboring region being seen as not suitable, even though there is in reality likely little difference in the actual suitability of the two regions. A more suitable classification process may be the use of fuzzy set theory, whereby each region is seen as having a degree of membership in a number of sets (or classifications). In the above example, the two regions would likely have very similar membership values in the different sets, although this obviously depends on factors such as the extent of each region. The purpose of this study is to identify what effect the use of explicit boundary values has on the extracted building footprint dataset. By performing the analysis multiple times using differing threshold values for the rules, it is possible to compare the resultant datasets and thus identify the impact of using such classification procedures. If a significant difference is found between the resultant datasets, this would highlight that the use of such crisp methods in the extraction process may not be optimal and that a future enhancement to the method would be to consider the use of fuzzy classification methods.
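The contrast between the crisp rule and its fuzzy alternative is easy to make concrete; the slope bounds below are illustrative assumptions, not values from the study.

```python
def crisp_suitable(slope_deg, threshold=10.0):
    """Crisp rule: a region is suitable iff its slope is below the
    threshold. A 1-degree difference can flip the decision entirely."""
    return slope_deg < threshold

def fuzzy_suitability(slope_deg, full=5.0, none=15.0):
    """Fuzzy alternative: degree of membership in the 'suitable' set,
    falling linearly from 1 (slope <= 5 deg) to 0 (slope >= 15 deg)."""
    if slope_deg <= full:
        return 1.0
    if slope_deg >= none:
        return 0.0
    return (none - slope_deg) / (none - full)

print(crisp_suitable(9.5), crisp_suitable(10.5))        # True False
print(fuzzy_suitability(9.5), fuzzy_suitability(10.5))  # 0.55 0.45
```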
Pairwise contact energy statistical potentials can help to find probability of point mutations.
Saravanan, K M; Suvaithenamudhan, S; Parthasarathy, S; Selvaraj, S
2017-01-01
To adopt a particular fold, a protein requires several interactions between its amino acid residues. The energetic contribution of these residue-residue interactions can be approximated by extracting statistical potentials from known high-resolution structures. Several methods based on statistical potentials extracted from unrelated proteins are found to make better predictions of the probability of point mutations. We postulate that statistical potentials extracted from known structures of similar folds with varying sequence identity can be a powerful tool to examine the probability of point mutation. With this in mind, we have derived pairwise residue and atomic contact energy potentials for the different functional families that adopt the (α/β)₈ TIM-barrel fold. We carried out computational point mutations at various conserved residue positions in the yeast triose phosphate isomerase enzyme, for which experimental results have already been reported. We also performed molecular dynamics simulations on a subset of point mutants for a comparative study. The difference in pairwise residue and atomic contact energy between the wild type and various point mutations reveals the probability of mutation at a particular position. Interestingly, we found that our computational prediction agrees with the experimental studies of Silverman et al. (Proc Natl Acad Sci 2001;98:3092-3097) and performs better than I-Mutant and the Cologne University Protein Stability Analysis Tool. The present work thus suggests that deriving pairwise contact energy potentials and molecular dynamics simulations of functionally important folds could help us to predict the probability of point mutations, which may ultimately reduce the time and cost of mutation experiments. Proteins 2016; 85:54-64.
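Pairwise contact potentials of this kind are commonly derived as inverse-Boltzmann log-odds of observed versus expected contact frequencies. The sketch below illustrates that generic recipe under our own assumptions; it is not the authors' exact derivation.

```python
import numpy as np

def contact_potential(contact_counts):
    """Knowledge-based pairwise contact energies from a 20x20 matrix of
    residue-residue contact counts observed in a set of structures:
    E(i, j) = -ln(f_obs(i, j) / f_exp(i, j)), with f_exp built from the
    marginal contact frequency of each residue type. Units of kT."""
    C = np.asarray(contact_counts, dtype=float)
    f_obs = C / C.sum()
    marg = C.sum(axis=1) / C.sum()          # per-residue contact fraction
    f_exp = np.outer(marg, marg)
    return -np.log(f_obs / f_exp)

def mutation_delta(E, contact_types, wild, mutant):
    """Score a point mutation at one position: change in summed contact
    energy over the residue types `contact_types` in contact with it."""
    return sum(E[mutant, j] - E[wild, j] for j in contact_types)
```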
The Topology of Three-Dimensional Symmetric Tensor Fields
NASA Technical Reports Server (NTRS)
Lavin, Yingmei; Levy, Yuval; Hesselink, Lambertus
1994-01-01
We study the topology of 3-D symmetric tensor fields. The goal is to represent their complex structure by a simple set of carefully chosen points and lines analogous to vector field topology. The basic constituents of tensor topology are the degenerate points, or points where eigenvalues are equal to each other. First, we introduce a new method for locating 3-D degenerate points. We then extract the topological skeletons of the eigenvector fields and use them for a compact, comprehensive description of the tensor field. Finally, we demonstrate the use of tensor field topology for the interpretation of the two-force Boussinesq problem.
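Degenerate points are where at least two eigenvalues coincide, so a direct numerical check on a sampled field reduces to an eigenvalue-gap test. A sketch (not the authors' locator, which works on the tensor field itself):

```python
import numpy as np

def eigenvalue_gap(T):
    """Smallest gap between the sorted eigenvalues of a 3x3 symmetric
    tensor; near-zero values flag candidate degenerate points."""
    w = np.linalg.eigvalsh(T)               # ascending eigenvalues
    return min(w[1] - w[0], w[2] - w[1])

def find_degenerate_points(field, tol=1e-3):
    """`field` is an (nx, ny, nz, 3, 3) sampled symmetric tensor field.
    Returns the grid indices where two eigenvalues (nearly) coincide."""
    nx, ny, nz = field.shape[:3]
    return [idx for idx in np.ndindex(nx, ny, nz)
            if eigenvalue_gap(field[idx]) < tol]
```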
NASA Astrophysics Data System (ADS)
Kang, Zhizhong
2013-10-01
This paper presents a new approach to the automatic registration of terrestrial laser scanning (TLS) point clouds that utilizes a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from the 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which the transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probability of each data point using a simplified Bayes rule, for the purpose of improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC leads to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm can achieve high registration accuracy on all experimental datasets.
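The conditional-sampling loop that distinguishes BaySAC from RANSAC can be sketched as follows. The helpers fit_model and residuals are assumed to come from the transformation-estimation step, and the likelihood model is a deliberately simplified stand-in for the paper's Bayes update.

```python
import numpy as np

def baysac(points, fit_model, residuals, n=3, n_iter=200,
           inlier_tol=0.05, p0=0.5):
    """BaySAC sketch: instead of sampling at random, always pick the n
    points with the highest current inlier probabilities as the hypothesis
    set, then update all probabilities with a simplified Bayes rule."""
    prob = np.full(len(points), p0)          # prior inlier probabilities
    best_model, best_support = None, -1
    for _ in range(n_iter):
        hyp = np.argsort(prob)[-n:]          # most probable inliers
        model = fit_model(points[hyp])
        inlier = residuals(model, points) < inlier_tol
        if inlier.sum() > best_support:
            best_model, best_support = model, int(inlier.sum())
        # Bayes update: likelihood of each point's residual under the
        # inlier hypothesis vs. a flat outlier likelihood.
        like_in = np.where(inlier, 0.9, 0.1)
        like_out = 0.5
        prob = like_in * prob / (like_in * prob + like_out * (1 - prob))
    return best_model
```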
Hodek, Ondřej; Křížek, Tomáš; Coufal, Pavel; Ryšlavá, Helena
2017-03-01
In this study, we optimized a method for the determination of free amino acids in Nicotiana tabacum leaves. Capillary electrophoresis with a contactless conductivity detector was used for the separation of 20 proteinogenic amino acids in an acidic background electrolyte. Subsequently, the conditions of extraction with HCl were optimized for the highest extraction yield of the amino acids, because the sample treatment of plant materials brings some specific challenges. A central composite face-centered design with a fractional factorial design was used to evaluate the significance of selected factors (HCl volume, HCl concentration, sonication, shaking) on the extraction process. In addition, the composite design helped us to find the optimal values for each factor using the response surface method. The limits of detection and limits of quantification for the 20 proteinogenic amino acids were found to be on the order of 10⁻⁵ and 10⁻⁴ mol l⁻¹, respectively. Addition of acetonitrile to the sample was tested as a method commonly used to decrease limits of detection. The ambiguous results of this experiment pointed out some features of plant extract samples, which often require specific approaches. The suitability of the method for metabolomic studies was tested by analysis of a real sample, in which all amino acids, except for L-methionine and L-cysteine, were successfully detected. The optimized extraction process together with the capillary electrophoresis method can be used for the determination of proteinogenic amino acids in plant materials. The resulting inexpensive, simple, and robust method is well suited for various metabolomic studies in plants. As such, the method represents a valuable tool for research and practical application in the fields of biology, biochemistry, and agriculture.
NASA Astrophysics Data System (ADS)
Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.
2017-11-01
The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.
HerMES: point source catalogues from Herschel-SPIRE observations II
NASA Astrophysics Data System (ADS)
Wang, L.; Viero, M.; Clarke, C.; Bock, J.; Buat, V.; Conley, A.; Farrah, D.; Guo, K.; Heinis, S.; Magdis, G.; Marchetti, L.; Marsden, G.; Norberg, P.; Oliver, S. J.; Page, M. J.; Roehlly, Y.; Roseboom, I. G.; Schulz, B.; Smith, A. J.; Vaccari, M.; Zemcov, M.
2014-11-01
The Herschel Multi-tiered Extragalactic Survey (HerMES) is the largest Guaranteed Time Key Programme on the Herschel Space Observatory. With a wedding-cake survey strategy, it consists of nested fields with varying depth and area totalling ~380 deg². In this paper, we present deep point source catalogues extracted from Herschel-Spectral and Photometric Imaging Receiver (SPIRE) observations of all HerMES fields, except for the later addition of the 270 deg² HerMES Large-Mode Survey (HeLMS) field. These catalogues constitute the second Data Release (DR2) made in 2013 October. A sub-set of these catalogues, consisting of bright sources extracted from Herschel-SPIRE observations completed by 2010 May 1 (covering ~74 deg²), was released earlier in the first extensive data release in 2012 March. Two different methods are used to generate the point source catalogues: the SUSSEXTRACTOR point source extractor used in two earlier data releases (EDR and EDR2), and a new source detection and photometry method. The latter combines an iterative source detection algorithm, STARFINDER, and a De-blended SPIRE Photometry algorithm. We use end-to-end Herschel-SPIRE simulations with realistic number counts and clustering properties to characterize basic properties of the point source catalogues, such as completeness, reliability, and photometric and positional accuracy. Over 500 000 catalogue entries in HerMES fields (except HeLMS) are released to the public through the HeDAM (Herschel Database in Marseille) website (http://hedam.lam.fr/HerMES).
Quantitative 3D reconstruction of airway and pulmonary vascular trees using HRCT
NASA Astrophysics Data System (ADS)
Wood, Susan A.; Hoford, John D.; Hoffman, Eric A.; Zerhouni, Elias A.; Mitzner, Wayne A.
1993-07-01
Accurate quantitative measurements of airway and vascular dimensions are essential to evaluate function in the normal and diseased lung. In this report, a novel method is described for the three-dimensional extraction and analysis of pulmonary tree structures using data from High Resolution Computed Tomography (HRCT). Serially scanned two-dimensional slices of the lower left lobe of isolated dog lungs were stacked to create a volume of data. Airway and vascular trees were extracted three-dimensionally using a three-dimensional seeded region growing algorithm based on the difference in CT number between wall and lumen. To obtain quantitative data, we reduced each tree to its central axis. From the central axis, branch length is measured as the distance between two successive branch points, branch angle is measured as the angle produced by two daughter branches, and cross-sectional area is measured from a plane perpendicular to the central axis point. Data derived from these methods can be used to localize and quantify structural differences both during changing physiologic conditions and in pathologic lungs.
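A 3D seeded region growing over CT numbers is essentially a flood fill; the sketch below uses illustrative thresholds for an air-filled lumen against a denser wall, not the paper's values.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, lo=-1024, hi=-500):
    """Flood-fill all voxels 6-connected to `seed` whose CT number lies in
    [lo, hi]; returns a boolean mask of the grown region."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x] or not (lo <= volume[z, y, x] <= hi):
            continue
        mask[z, y, x] = True
        for dz, dy, dx in neighbors:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return mask
```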
Stege, Patricia W; Sombra, Lorena L; Messina, Germán A; Martinez, Luis D; Silva, María F
2009-05-01
Many aromatic compounds can be found in the environment as a result of anthropogenic activities, and some of them are highly toxic. The need to determine low concentrations of pollutants requires analytical methods with high sensitivity, selectivity, and resolution for application to soil, sediment, water, and other environmental samples. Complex sample preparation involving analyte isolation and enrichment is generally necessary before the final analysis. The present paper outlines a novel, simple, low-cost, and environmentally friendly method for the simultaneous determination of p-nitrophenol (PNP), p-aminophenol (PAP), and hydroquinone (HQ) by micellar electrokinetic capillary chromatography after preconcentration by cloud point extraction. Enrichment factors of 180 to 200 were achieved. The limits of detection of the analytes for the preconcentration of a 50-ml sample volume were 0.10 µg L⁻¹ for PNP, 0.20 µg L⁻¹ for PAP, and 0.16 µg L⁻¹ for HQ. The optimized procedure was applied to the determination of phenolic pollutants in natural waters from San Luis, Argentina.
NASA Technical Reports Server (NTRS)
Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.
2017-01-01
Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in the surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that are then integrated in an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e., the craters), and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.
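The modified Hausdorff distance (in the sense of Dubuisson and Jain) replaces the max over points with a mean, making the fitness function robust to outliers; it is a few lines with a k-d tree.

```python
import numpy as np
from scipy.spatial import cKDTree

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between point sets A (N, d) and B (M, d):
    MHD(A, B) = max( mean_a min_b ||a - b||, mean_b min_a ||a - b|| )."""
    d_ab = cKDTree(B).query(A)[0].mean()   # mean distance A -> nearest in B
    d_ba = cKDTree(A).query(B)[0].mean()   # mean distance B -> nearest in A
    return max(d_ab, d_ba)
```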
Robust Point Set Matching for Partial Face Recognition.
Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng
2016-03-01
Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios, especially in unconstrained environments, human faces might be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a pair consisting of a gallery image and a probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match the two extracted local feature sets, where both the textural information and the geometrical information of the local features are explicitly used for matching simultaneously. Finally, the similarity of two faces is computed as the distance between the two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.
The Design of Case Products’ Shape Form Information Database Based on NURBS Surface
NASA Astrophysics Data System (ADS)
Liu, Xing; Liu, Guo-zhong; Xu, Nuo-qi; Zhang, Wei-she
2017-07-01
In order to improve the computer-aided design of product shapes, applying Non-Uniform Rational B-Spline (NURBS) curves and surfaces to the representation of the product shape helps designers to design products effectively. On the basis of typical product image contour extraction, and using Pro/Engineer (Pro/E) to extract the geometric features of a scanned mold, an information database of value points, control points and knot vector parameters is constructed; this paper puts forward a unified method of using NURBS curves and surfaces to describe products' geometric shapes and using MATLAB to simulate them when products have the same or similar function. A case study of an electric vehicle's front cover illustrates how the geometric shape information of a case product is accessed. This method can not only greatly reduce the volume of the stored shape information, but also improve the effectiveness of computer-aided geometric innovation modeling.
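The underlying NURBS evaluation, which combines value points, control points, weights and the knot vector, can be sketched directly from the definition (Cox-de Boor recursion for the basis, then the rational combination). A minimal illustration, not the paper's database code:

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, ctrl, weights, knots, p=3):
    """Evaluate C(u) = sum(N_{i,p} w_i P_i) / sum(N_{i,p} w_i) for a NURBS
    curve with control points `ctrl` (n, dim), weights (n,), knot vector."""
    N = np.array([bspline_basis(i, p, u, knots) for i in range(len(ctrl))])
    wN = N * weights
    return (wN[:, None] * ctrl).sum(axis=0) / wN.sum()
```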
A hierarchical methodology for urban facade parsing from TLS point clouds
NASA Astrophysics Data System (ADS)
Li, Zhuqiang; Zhang, Liqiang; Mathiopoulos, P. Takis; Liu, Fangyu; Zhang, Liang; Li, Shuaipeng; Liu, Hao
2017-01-01
The effective and automated parsing of building facades from terrestrial laser scanning (TLS) point clouds of urban environments is an important research topic in the GIS and remote sensing fields. It is also challenging because of the complexity and great variety of the available 3D building facade layouts, as well as the noise and missing data in the input TLS point clouds. In this paper, we introduce a novel methodology for the accurate and computationally efficient parsing of urban building facades from TLS point clouds. The main novelty of the proposed methodology is that it is a systematic and hierarchical approach that considers, in an adaptive way, the semantic and underlying structures of the urban facades for segmentation and subsequent accurate modeling. Firstly, the available input point cloud is decomposed into depth planes based on a data-driven method; such layer decomposition enables similarity detection in each depth plane layer. Secondly, the labeling of the facade elements is performed using the SVM classifier in combination with our proposed BieS-ScSPM algorithm. The labeling outcome is then augmented with weak architectural knowledge. Thirdly, least-squares-fitted normalized gray accumulative curves are applied to detect regular structures, and a binarization dilation extraction algorithm is used to partition facade elements. A dynamic line-by-line division is further applied to extract the boundaries of the elements. The 3D geometrical facade models are then reconstructed by optimizing facade elements across depth plane layers. We have evaluated the performance of the proposed method using several TLS facade datasets. Qualitative and quantitative performance comparisons with several other state-of-the-art methods dealing with the same facade parsing problem have demonstrated its superiority in performance and its effectiveness in improving segmentation accuracy.
NASA Astrophysics Data System (ADS)
Wei, Qiangding; Shi, Fei; Zhu, Weifang; Xiang, Dehui; Chen, Haoyu; Chen, Xinjian
2017-02-01
In this paper, we propose a 3D registration method for retinal optical coherence tomography (OCT) volumes. The proposed method consists of five main steps: First, a projection image of the 3D OCT scan is created. Second, the vessel enhancement filter is applied on the projection image to detect vessel shadow. Third, landmark points are extracted based on both vessel positions and layer information. Fourth, the coherent point drift method is used to align retinal OCT volumes. Finally, a nonrigid B-spline-based registration method is applied to find the optimal transform to match the data. We applied this registration method on 15 3D OCT scans of patients with Choroidal Neovascularization (CNV). The Dice coefficients (DSC) between layers are greatly improved after applying the nonrigid registration.
Yokoi, Michinori; Shimoda, Mitsuya
2017-03-01
A low-density polyethylene (LDPE) membrane pouch method was developed to extract volatile flavor compounds from tobacco leaf. Tobacco leaf suspended in water was enclosed in a pouch prepared from an LDPE membrane of specific gravity 0.92 g/cm³ and 0.03 mm thickness and then extracted with diethyl ether. In comparison with direct solvent extraction, the LDPE membrane excluded larger and higher-boiling-point compounds, which could contaminate a gas chromatograph inlet and damage a column. While being more convenient than reduced-pressure steam distillation, it could extract volatile flavor compounds over a wide range of molecular weight and polarity. Repeatabilities of the extracted amounts ranged from 0.38% for 2,3'-bipyridyl to 26% for β-ionone, and the average value over 39 compounds was 5.9%.
Noninvasive extraction of fetal electrocardiogram based on Support Vector Machine
NASA Astrophysics Data System (ADS)
Fu, Yumei; Xiang, Shihan; Chen, Tianyi; Zhou, Ping; Huang, Weiyan
2015-10-01
The fetal electrocardiogram (FECG) signal has important clinical value for diagnosing fetal heart diseases and choosing suitable therapeutic schemes. The noninvasive extraction of the FECG from electrocardiogram (ECG) signals has therefore become a hot research topic. A new method based on the Support Vector Machine (SVM) is utilized for the extraction of the FECG with a limited amount of data. Firstly, the theory of the SVM and the principle of the extraction based on the SVM are studied. Secondly, the transformation of the maternal electrocardiogram (MECG) component in the abdominal composite signal is verified to be nonlinear and is fitted with the SVM. Then, the SVM is trained, and the training results are compared with the real data to verify the quality of the training. Meanwhile, the parameters of the SVM are optimized to achieve the best performance, so that the learning machine can be utilized to fit unknown samples. Finally, the FECG is extracted by removing the optimal estimate of the MECG component from the abdominal composite signal. To evaluate the performance of the FECG extraction based on the SVM, the Signal-to-Noise Ratio (SNR) and a visual test are used. The experimental results show that an FECG of good quality can be extracted: its SNR increases to 9.2349 dB, and the time cost is as short as 0.802 seconds. Compared with the traditional method, the noninvasive extraction method based on the SVM has a simple realization, shorter processing time and better extraction quality under the same conditions.
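The extraction step, fitting the nonlinear MECG component of the abdominal signal from a maternal reference channel and subtracting the estimate, can be sketched with scikit-learn's SVR. The windowed-regression setup and all parameters are our own illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVR

def extract_fecg(thoracic, abdominal, order=10):
    """Fit the nonlinear mapping from a maternal thoracic ECG to the MECG
    component of the abdominal signal with an RBF-kernel SVR, then subtract
    the estimate; the residual approximates the FECG."""
    # Each abdominal sample is regressed on a short window of the
    # thoracic signal (a nonlinear FIR-style model).
    X = np.array([thoracic[i - order:i]
                  for i in range(order, len(thoracic))])
    y = abdominal[order:]
    svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
    mecg_estimate = svr.predict(X)
    return y - mecg_estimate                     # FECG residual
```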
Airway Tree Segmentation in Serial Block-Face Cryomicrotome Images of Rat Lungs
Bauer, Christian; Krueger, Melissa A.; Lamm, Wayne J.; Smith, Brian J.; Glenny, Robb W.; Beichel, Reinhard R.
2014-01-01
A highly automated method for the segmentation of airways in serial block-face cryomicrotome images of rat lungs is presented. First, a point inside the trachea is manually specified. Then, a set of candidate airway centerline points is automatically identified. By utilizing a novel path extraction method, a centerline path between the root of the airway tree and each point in the set of candidate centerline points is obtained. Local disturbances are robustly handled by the novel path extraction approach, which avoids the shortcut problem of standard minimum-cost path algorithms. The union of all centerline paths is utilized to generate an initial airway tree structure, and a pruning algorithm is applied to automatically remove erroneous subtrees or branches. Finally, a surface segmentation method is used to obtain the airway lumen. The method was validated on five image volumes of Sprague-Dawley rats. Based on an expert-generated independent standard, an assessment of airway identification and lumen segmentation performance was conducted. The average airway detection sensitivity was 87.4% with a 95% confidence interval (CI) of (84.9, 88.6)%. A plot of sensitivity as a function of airway radius is provided. The combined estimate of airway detection specificity was 100% with a 95% CI of (99.4, 100)%. The average number and diameter of terminal airway branches were 1179 and 159 μm, respectively. Segmentation results include airways up to 31 generations. The regression intercept and slope of airway radius measurements derived from the final segmentations were estimated to be 7.22 μm and 1.005, respectively. The developed approach enables quantitative studies of physiology and lung diseases in rats requiring detailed geometric airway models. PMID:23955692
Resin purification from Dragon's Blood using a subcritical solvent extraction method
NASA Astrophysics Data System (ADS)
Saifuddin; Nahar
2018-04-01
Jernang resin (dragon's blood) is the world's most expensive sap. The resin is obtained from jernang, which grows only on the islands of Sumatra and Borneo. Jernang resin is in demand in China, Hong Kong, and Singapore, since it contains compounds such as dracorhodin that have potential as medicinal ingredients, with biological and pharmacological activities such as antimicrobial, antiviral, antitumor and cytotoxic activity. The resin has conventionally been extracted by maceration, as practiced by processors in Bireuen, Aceh. However, there are still significant obstacles, namely the low yield and quality of the obtained jernang resin. Maceration using methanol produces a higher yield than the maceration process carried out in Bireuen. Nevertheless, the use of methanol as a solvent raises production costs, since it is relatively expensive and not environmentally friendly. To overcome this problem, this research proposes a subcritical solvent extraction process, which is cheap, abundant and environmentally friendly. The results show that the quality of the jernang resin is better than that obtained by the processing group in Bireuen. The jernang obtained by the maceration method is of class-A quality based on the jernang quality specification requirements (SNI 1671:2010), with resin content (w/w) of 73%, water (w/w) of 6.8%, ash (w/w) of 7%, impurities (w/w) of 32%, a melting point of 88 °C and a red colour. The two-stage treatment obtained a class between class-A and super quality, with resin content (w/w) of 0.86%, water (w/w) of 6.5%, ash (w/w) of 2.8%, impurities (w/w) of 9%, a melting point of 88 °C and a dark-red colour.
Automatic lung nodule matching for the follow-up in temporal chest CT scans
NASA Astrophysics Data System (ADS)
Hong, Helen; Lee, Jeongjin; Shin, Yeong Gil
2006-03-01
We propose a fast and robust registration method for matching lung nodules in temporal chest CT scans. Our method is composed of four stages. First, the lungs are extracted from the chest CT scans by an automatic segmentation method. Second, the gross translational mismatch is corrected by optimal cube registration; this initial registration does not require extracting any anatomical landmarks. Third, the initial alignment is refined step by step by iterative surface registration. To evaluate the distance measure between surface boundary points, a 3D distance map is generated by narrow-band distance propagation, which drives fast and robust convergence to the optimal location. Fourth, nodule correspondences are established by selecting the pairs with the smallest Euclidean distances. The results of pulmonary nodule alignment for twenty patients are reported on a per-center-of-mass-point basis using the average Euclidean distance (AED) error between corresponding nodules of the initial and follow-up scans. The average AED error over the twenty patients is significantly reduced from 30.0 mm to 4.7 mm by our registration. Experimental results show that our registration method aligns the lung nodules much faster than conventional methods using a distance measure. The accurate and fast results of our method would be useful for radiologists' evaluation of pulmonary nodules on chest CT scans.
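Establishing correspondences by smallest Euclidean distance and scoring them with the AED is a few lines with a k-d tree; a sketch over (N, 3) arrays of nodule centers, with illustrative names:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_and_aed(centers_initial, centers_followup):
    """Pair each registered initial-scan nodule center with its nearest
    follow-up center and return the pairs plus the average Euclidean
    distance (AED) in the input units (e.g., mm)."""
    dist, idx = cKDTree(centers_followup).query(centers_initial)
    pairs = list(zip(range(len(centers_initial)), idx))
    return pairs, dist.mean()
```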
Sun, Chenglu; Li, Wei; Chen, Wei
2017-01-01
To extract the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat that utilizes a flexible pressure sensor array, printed electrodes and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signals of all the pressure sensors embedded in the smart mat. To reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. With the CS-based method, the sampling time can be reduced by 40% by acquiring only about one-third of the original sampling points. Several experiments were then carried out to validate the performance of the CS-based method. While fewer than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrated that the novel method fits the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array. PMID:28796188
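As a concrete illustration of the CS idea (not the authors' algorithm), the sketch below recovers a waveform that is sparse in a DCT-like dictionary from roughly one-third of its samples using orthogonal matching pursuit; the dictionary, sparsity level and signal are all toy assumptions.

```python
# Compressed-sensing recovery from ~1/3 of the samples (toy sketch).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

n = 300
t = np.arange(n)
# DCT-like dictionary: columns are cosine atoms, signal = Psi @ x
Psi = np.cos(np.pi * (2 * t[:, None] + 1) * t[None, :] / (2 * n))
x_true = np.zeros(n)
x_true[[3, 12, 25, 40, 70]] = [1.0, 0.8, -0.6, 0.5, 0.3]   # sparse spectrum
signal = Psi @ x_true                                      # toy waveform

rng = np.random.default_rng(0)
rows = np.sort(rng.choice(n, n // 3, replace=False))       # keep ~1/3 samples
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(Psi[rows], signal[rows])
recovered = Psi @ omp.coef_ + omp.intercept_

print(f"correlation with original: {np.corrcoef(signal, recovered)[0, 1]:.4f}")
```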
Glacier Frontal Line Extraction from SENTINEL-1 SAR Imagery in Prydz Area
NASA Astrophysics Data System (ADS)
Li, F.; Wang, Z.; Zhang, S.; Zhang, Y.
2018-04-01
Synthetic Aperture Radar (SAR) provides day-and-night, all-weather observation of the Earth at high resolution, and it is widely used in polar research on sea ice and ice shelves, as well as glaciers. For glacier monitoring, the frontal position of a calving glacier at different moments in time is of great importance, as it underpins estimates of the calving rate and flux of the glacier. Here, an automatic algorithm for glacier front extraction using time-series Sentinel-1 SAR imagery is proposed. The technique transforms the Sentinel-1 SAR amplitude imagery into a binary map using the SO-CFAR method; frontal points are then extracted using a profile method that reduces the 2D binary map to 1D binary profiles, and the final frontal position of the calving glacier is the optimal profile selected from the different average segmented profiles. The experiment shows that the detection algorithm can automatically extract glacier frontal positions from SAR data with high efficiency.
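The profile reduction can be pictured with a minimal sketch: each image column of the ice/ocean binary map becomes a 1D profile, and the frontal point is taken as the first ice pixel along it. This is a simplified stand-in for the paper's profile method and omits the SO-CFAR binarization entirely.

```python
# Reduce a 2D binary map (1 = ice, 0 = ocean) to 1D column profiles and
# take the first ice pixel per profile as a frontal point (toy sketch).
import numpy as np

def frontal_points(binary_map):
    points = []
    for col in range(binary_map.shape[1]):
        profile = binary_map[:, col]           # one 1D binary profile
        if profile.any():
            points.append((int(np.argmax(profile)), col))
    return points

ice = np.zeros((6, 5), dtype=int)
ice[3:, :3] = 1                                # toy glacier tongue
ice[4:, 3:] = 1
print(frontal_points(ice))                     # front one row lower at cols 3-4
```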
NASA Astrophysics Data System (ADS)
Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.
2017-01-01
Multispectral and hyperspectral data acquired from satellite sensors can detect various objects on the Earth, from low-scale to high-scale modelling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the best-suited model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving the important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as promising techniques for further analysis in recent developments in feature extraction and classification.
Comparison of the characteristics of Moringa oleifera seed oil produced chemically and mechanically
NASA Astrophysics Data System (ADS)
Eman, N. A.; Muhamad, K. N. S.
2016-06-01
It is established that virtually every part of the Moringa oleifera tree (leaves, stem, bark, root, flowers, seeds, and seed oil) is beneficial in some way, with great benefits to human beings. The tree is rich in proteins, vitamins and minerals. All Moringa oleifera food products have a very high nutritional value; they are eaten directly as food, as supplements, and as seasonings, and also serve as fodder for animals. The purpose of this research is to investigate the effect of seed particle size on oil extraction using a chemical method (solvent extraction), and to compare the properties of Moringa oleifera seed oil produced chemically (solvent extraction) and mechanically (mechanical press). The Moringa oleifera seeds were ground and sieved, and the oil was extracted by the Soxhlet technique with n-hexane using three different sample sizes (2 mm, 1 mm, and 500 μm). The average oil yield was 36.1%, 40.8%, and 41.5% for the 2 mm, 1 mm, and 500 μm particle sizes, respectively. The properties of the Moringa oleifera seed oil were: density of 873 kg/m3 and 880 kg/m3, and kinematic viscosity of 42.2 mm2/s and 9.12 mm2/s, for the mechanical and chemical methods, respectively. pH, cloud point and pour point were the same for oil produced by both methods: 6, 18 °C and 12 °C, respectively. Among the fatty acids, oleic acid is present at a high percentage: 75.39% and 73.60% for the chemical and mechanical methods, respectively. Other fatty acids (gadoleic acid, behenic acid, palmitic acid) are present at lower percentages in both samples: 2.54%, 5.83%, and 5.73%, respectively, in the chemically extracted oil, and 2.40%, 6.73%, and 6.04%, respectively, in the mechanically extracted oil. In conclusion, the results showed that both methods can produce oil of high quality. Moringa oleifera seed oil appears to be an acceptable source of oil rich in oleic acid, comparable in quality to olive oil, that can be consumed in Malaysia, where olive oil is imported at high prices. At the same time, cultivation of the Moringa oleifera tree is considered a new source of income for the country, offering more job opportunities.
Canola Proteins for Human Consumption: Extraction, Profile, and Functional Properties
Tan, Siong H; Mailer, Rodney J; Blanchard, Christopher L; Agboola, Samson O
2011-01-01
Canola protein isolate has been suggested as an alternative to other proteins for human food use due to a balanced amino acid profile and potential functional properties such as emulsifying, foaming, and gelling abilities. This is, therefore, a review of the studies on the utilization of canola protein in human food, comprising the extraction processes for protein isolates and fractions, the molecular character of the extracted proteins, as well as their food functional properties. A majority of studies were based on proteins extracted from the meal using alkaline solution, presumably due to its high nitrogen yield, followed by those utilizing salt extraction combined with ultrafiltration. Characteristics of canola and its predecessor rapeseed protein fractions such as nitrogen yield, molecular weight profile, isoelectric point, solubility, and thermal properties have been reported and were found to be largely related to the extraction methods. However, very little research has been carried out on the hydrophobicity and structure profiles of the protein extracts that are highly relevant to a proper understanding of food functional properties. Alkaline extracts were generally not very suitable as functional ingredients and contradictory results about many of the measured properties of canola proteins, especially their emulsification tendencies, have also been documented. Further research into improved extraction methods is recommended, as is a more systematic approach to the measurement of desired food functional properties for valid comparison between studies. PMID:21535703
Saikia, Sangeeta; Mahnot, Nikhil Kumar; Mahanta, Charu Lata
2015-03-15
Optimisation of the extraction of polyphenols from star fruit (Averrhoa carambola) pomace using response surface methodology was carried out. Two variables, temperature (°C) and ethanol concentration (%), each at 5 levels (-1.414, -1, 0, +1 and +1.414), were used to build the optimisation model using a central composite rotatable design, where -1.414 and +1.414 are axial points, -1 and +1 are factorial points and 0 is the centre point of the design. A temperature of 40 °C and an ethanol concentration of 65% were the optimised conditions for the response variables of total phenolic content, ferric reducing antioxidant capacity and 2,2-diphenyl-1-picrylhydrazyl scavenging activity. The reverse-phase high-pressure liquid chromatography chromatogram of the polyphenol extract showed eight phenolic acids and ascorbic acid. The extract was then encapsulated with maltodextrin (⩽ DE 20) by spray- and freeze-drying methods at three different concentrations. The highest encapsulation efficiency was obtained for the freeze-dried encapsulates (78-97%). The optimised model can be used for polyphenol extraction from star fruit pomace, and the microencapsulates can be incorporated in different food systems to enhance their antioxidant properties. Copyright © 2014 Elsevier Ltd. All rights reserved.
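For readers unfamiliar with the design, the sketch below generates the coded points of a two-factor central composite rotatable design (axial points at ±1.414 ≈ √2, factorial points at ±1, plus centre replicates) and decodes them into real units; the centre-replicate count and decoding ranges are assumptions for illustration.

```python
# Coded points of a two-factor central composite rotatable design (sketch).
import itertools
import numpy as np

def ccrd_two_factors(n_center=5, alpha=np.sqrt(2)):
    factorial = list(itertools.product([-1.0, 1.0], repeat=2))   # 4 points
    axial = [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]
    center = [(0.0, 0.0)] * n_center
    return np.array(factorial + axial + center)

def decode(coded, center, half_range):
    """Map coded levels back to real units (e.g. temperature, ethanol %)."""
    return center + coded * half_range

design = ccrd_two_factors()
temperature = decode(design[:, 0], center=40.0, half_range=10.0)  # assumed range
ethanol = decode(design[:, 1], center=65.0, half_range=15.0)      # assumed range
for T, E in zip(temperature, ethanol):
    print(f"run: {T:6.2f} degC, {E:6.2f} % ethanol")
```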
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyai, K.; Oura, T.; Kawashima, M.
1978-11-01
A simple and reliable method of paired TSH assay was developed and used in screening for neonatal primary hypothyroidism. In this method, a paired assay is done first: equal parts of the extracts of dried blood spots on filter paper (9 mm diameter) from two infants 4 to 7 days old are combined and assayed for TSH by double-antibody RIA. If the value obtained is over the cut-off point, the extracts are assayed separately for TSH in a second assay to identify the abnormal sample. Two systems, A and B, with different cut-off points were tested. On the basis of reference blood samples (serum TSH levels of 80 μU/ml in system A and 40 μU/ml in system B), the cut-off point was selected as follows: the upper 5th (A) or 4th (B) percentile in the paired assay, and the values of the reference blood samples in the second, individual assay. Four cases (2 in A and 2 in B) of neonatal primary hypothyroidism were found among 25 infants (23 in A and 2 in B) who were recalled from a general population of 41,400 infants (24,200 in A and 17,200 in B) by 22,700 assays. This paired TSH assay system saves labor and expense in screening for neonatal hypothyroidism.
Terrestrial laser scanning for geometry extraction and change monitoring of rubble mound breakwaters
NASA Astrophysics Data System (ADS)
Puente, I.; Lindenbergh, R.; González-Jorge, H.; Arias, P.
2014-05-01
Rubble mound breakwaters are coastal defense structures that protect harbors and beaches from the impacts of both littoral drift and storm waves. They occasionally break, leading to catastrophic damage to surrounding human populations and resulting in huge economic and environmental losses. Ensuring their stability is considered of vital importance and is the major reason for setting up breakwater monitoring systems. Terrestrial laser scanning has been recognized as a monitoring technique for existing infrastructure, and its capability for measuring large numbers of accurate points in a short period of time is well proven. In this paper we first introduce a method for the automatic extraction of the face geometry of concrete cubic blocks, as typically used in breakwaters; point clouds are segmented based on their orientation and location. We then compare corresponding cuboids of three co-registered point clouds to estimate their transformation parameters over time. The extraction method is demonstrated on scan data from the Baiona breakwater (Spain), while the change detection is demonstrated on repeated scan data of concrete bricks for which a changing scenario was simulated. The application of the presented methodology has verified its effectiveness for outlining the 3D breakwater units and analyzing their changes at the millimeter level. Breakwater management activities could benefit from this initial version of the method in order to improve their productivity.
Design and control of active vision based mechanisms for intelligent robots
NASA Technical Reports Server (NTRS)
Wu, Liwei; Marefat, Michael M.
1994-01-01
In this paper, we propose the design of an active vision system for intelligent robot applications. The system has degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets, using an evaluation function that represents human visual behavior in response to outside stimuli, is suggested. We also characterize the different visual tasks of the two cameras for vergence control purposes, and a phase-based method using binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence control is discussed.
Slicing Method for curved façade and window extraction from point clouds
NASA Astrophysics Data System (ADS)
Iman Zolanvari, S. M.; Laefer, Debra F.
2016-09-01
Laser scanning technology is a fast and reliable method to survey structures. However, the automatic conversion of such data into solid models for computation remains a major challenge, especially where non-rectilinear features are present. Since openings and the overall dimensions of a building are the most critical elements in computational models for structural analysis, this article introduces the Slicing Method as a new, computationally-efficient method for extracting overall façade and window boundary points and reconstructing a façade into a geometry compatible with computational modelling. After finding a principal plane, the technique slices a façade into limited portions, with each slice representing a unique, imaginary section passing through the building. This is done along the façade's principal axes to segregate window and door openings from the structural portions of the load-bearing masonry walls. The method detects each opening area's boundaries, as well as the overall boundary of the façade, in part by using a one-dimensional projection to accelerate processing. Slice counts were optimised as 14.3 slices per vertical metre of building and 25 slices per horizontal metre, irrespective of building configuration or complexity. The proposed procedure was validated by application to three highly decorative, historic brick buildings. Accuracy in excess of 93% was achieved with no manual intervention on highly complex buildings, and nearly 100% on simple ones. Furthermore, computational times were under 3 s for data sets of up to 2.6 million points, while similar existing approaches required more than 16 h for such datasets.
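The core slicing idea can be sketched as follows: bin façade points along one principal axis, project each slice onto the other axis, and read openings off as large gaps in the sorted coordinates. This toy version, with assumed slice counts and gap thresholds, illustrates only the 1D-projection step, not the published algorithm.

```python
# Slice façade points and find openings as gaps in 1D projections (sketch).
import numpy as np

def opening_spans(points_xz, axis=0, n_slices=25, min_gap=0.5):
    """points_xz: (N, 2) façade coordinates on the principal plane.
    Returns (slice_center, gap_start, gap_end) tuples along the other axis."""
    other = 1 - axis
    edges = np.linspace(points_xz[:, axis].min(),
                        points_xz[:, axis].max(), n_slices + 1)
    gaps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = points_xz[(points_xz[:, axis] >= lo) & (points_xz[:, axis] < hi)]
        if len(sl) < 2:
            continue
        coords = np.sort(sl[:, other])            # 1D projection of the slice
        jumps = np.diff(coords)                   # large jumps = openings
        for i in np.flatnonzero(jumps > min_gap):
            gaps.append(((lo + hi) / 2, coords[i], coords[i + 1]))
    return gaps

rng = np.random.default_rng(1)
wall = rng.uniform([0, 0], [10, 6], size=(4000, 2))
# carve a "window": remove points with x in (4, 6) and z in (2, 4)
keep = ~((wall[:, 0] > 4) & (wall[:, 0] < 6) & (wall[:, 1] > 2) & (wall[:, 1] < 4))
print(opening_spans(wall[keep])[:3])              # gap intervals near z ~ 2-4
```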
Extraction of Coastlines with Fuzzy Approach Using SENTINEL-1 SAR Image
NASA Astrophysics Data System (ADS)
Demir, N.; Kaynarca, M.; Oy, S.
2016-06-01
Coastlines are important features for water resources, sea products, energy resources, etc. Coastlines change dynamically, so automated methods are necessary for analysing and detecting changes along them. In this study, a Sentinel-1 C-band SAR image was used to extract the coastline with a fuzzy logic approach. The SAR image has VH polarisation and 10 × 10 m spatial resolution, and covers a 57 km2 area in the south-east of Puerto Rico. Radiometric calibration was applied to reduce atmospheric and orbit errors, and a speckle filter was used to reduce noise. The image was then terrain-corrected using the SRTM digital surface model. Classification of SAR images is a challenging task, since SAR and optical sensors have very different properties; even between different bands of SAR sensors the images look very different, so classifying a SAR image with traditional unsupervised methods is difficult. In this study, a fuzzy approach was applied to distinguish coastal pixels from land surface pixels. The standard deviation and the mean and median values were calculated for use as parameters in the fuzzy approach. The Mean-Standard-Deviation (MS) Large membership function was used because large numbers of land and ocean pixels, with large mean and standard deviation values, dominate the SAR image. The pixel values were multiplied by 1000 to simplify the calculations; the mean was calculated as 23 and the standard deviation as 12 for the whole image. The multiplier parameters were selected as a: 0.58 and b: 0.05 to maximize the land surface membership. The result was evaluated, first, using airborne LIDAR data for the areas where a LIDAR dataset is available and, second, against a manually digitized coastline. Laser points below 0.5 m were classified as ocean points, and the 3D alpha-shapes algorithm was used to detect coastline points from the LIDAR data. Minimum distances were calculated between the LIDAR coastline points and the extracted coastline; the mean is 5.82 m, the standard deviation 5.83 m and the median 4.08 m. For the comparison with the manually created line on the SAR image, both lines were converted to dense points at 1 m intervals and the closest distances were calculated between the points of the extracted and manually created coastlines; the mean is 5.23 m, the standard deviation 4.52 m and the median 4.13 m. For both quality assessment approaches, the evaluation values are within the accuracy of the SAR data used.
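A plausible reading of the MS Large membership used here is sketched below; the exact formula is not given in the abstract, so the form (as found in common GIS fuzzy-overlay toolboxes, with mean multiplier a and standard-deviation multiplier b) is an assumption.

```python
# A mean-standard-deviation "Large" fuzzy membership function (assumed form).
import numpy as np

def ms_large(x, mean, std, a=0.58, b=0.05):
    """Membership rises toward 1 for values well above a*mean."""
    x = np.asarray(x, dtype=float)
    denom = x - a * mean + b * std
    mu = np.zeros_like(x)
    pos = denom > 0
    mu[pos] = 1.0 - (b * std) / denom[pos]
    return np.clip(mu, 0.0, 1.0)

# Pixel values scaled by 1000 as in the study; image mean 23, std 12
pixels = np.array([5.0, 13.0, 23.0, 40.0, 80.0])
print(ms_large(pixels, mean=23.0, std=12.0))   # increases with pixel value
```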
ERIC Educational Resources Information Center
Johnson, Eric R.
1988-01-01
Describes a laboratory experiment that measures the amount of ascorbic acid destroyed by food preparation methods (boiling and steaming). Points out that aqueous extracts of cooked green pepper samples can be analyzed for ascorbic acid by a relatively simple redox titration. Lists experimental procedure for four methods of preparation. (MVL)
Togola, Anne; Coureau, Charlotte; Guezennec, Anne-Gwenaëlle; Touzé, Solène
2015-05-01
The presence of acrylamide in natural systems is of concern from both environmental and health points of view. We developed an accurate and robust analytical procedure (offline solid-phase extraction combined with UPLC/MS/MS) with a limit of quantification (20 ng L(-1)) compatible with toxicity threshold values. The solid-phase extraction (SPE), optimized with respect to the nature of the extraction phases, sampling volumes, and elution solvent, was validated according to ISO Standard ISO/IEC 17025 on groundwater, surface water, and industrial process water samples. Acrylamide is highly polar, which induces high variability during the SPE step, therefore requiring the use of 13C-labeled acrylamide as an internal standard to guarantee the accuracy and robustness of the method (uncertainty about 25% (k = 2) at the limit of quantification). The specificity of the method and the stability of acrylamide were studied for these environmental media, and it was shown that the method is suitable for measuring acrylamide in environmental studies.
NASA Astrophysics Data System (ADS)
Yang, Honggang; Lin, Huibin; Ding, Kang
2018-05-01
The performance of sparse feature extraction by the commonly used K-Singular Value Decomposition (K-SVD) method depends largely on the signal segment selected in rolling bearing diagnosis; furthermore, the computation is relatively slow and the dictionary becomes highly redundant when the fault signal is long. A new sliding window denoising K-SVD (SWD-KSVD) method is proposed, which uses only one small segment of the time-domain signal containing impacts to perform sliding window dictionary learning, and selects an optimal pattern carrying the oscillating information of the rolling bearing fault according to a maximum variance principle. An inner product operation between the optimal pattern and the whole fault signal is performed to enhance the signature of the impacts' occurrence moments. Lastly, the signal is reconstructed at the peak points of the inner product to realize the extraction of the rolling bearing fault features. Both simulation and experiments verify that the method extracts the fault features effectively.
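The enhancement step lends itself to a short sketch: slide the extracted pattern over the whole signal (an inner product at every lag) and pick peaks as impact occurrence moments. Everything below is synthetic, and the pattern is a stand-in for the dictionary atom learned by SWD-KSVD.

```python
# Enhance impact moments via pattern/signal inner products (toy sketch).
import numpy as np
from scipy.signal import find_peaks

fs = 12_000                                  # sampling rate, Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
signal = 0.3 * np.random.default_rng(2).normal(size=t.size)
impact = np.exp(-t[:120] * 800) * np.sin(2 * np.pi * 3000 * t[:120])
for start in range(0, t.size - 120, 1200):   # periodic fault impacts
    signal[start:start + 120] += impact

pattern = impact / np.linalg.norm(impact)    # stand-in for the learned pattern
score = np.correlate(signal, pattern, mode="valid")   # inner product per lag
peaks, _ = find_peaks(score, height=0.5 * score.max(), distance=600)
print("estimated impact interval (samples):", np.diff(peaks))
```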
MICROWAVE-ASSISTED EXTRACTION OF PHENOLIC COMPOUNDS FROM POLYGONUM MULTIFLORUM THUNB. ROOTS.
Quoc, Le Pham Tan; Muoi, Nguyen Van
2016-01-01
The aim of this study was to determine the best extraction conditions for the total phenolic content (TPC) and antioxidant capacity (AC) of Polygonum multiflorum Thunb. root using microwave-assisted extraction (MAE). The raw material used was Polygonum multiflorum Thunb. root powder. Five factors were studied: solvent type, solvent concentration, solvent/material ratio, extraction time and microwave power. TPC and AC values were determined by the Folin-Ciocalteu method and by DPPH free radical scavenging activity measurement, respectively. The studies also included HPLC assays of the extracts and SEM imaging of the samples. The optimal conditions were acetone as the solvent at a concentration of 60%, a solvent/material ratio of 40/1 (v/w), an extraction time of 5 min and a microwave power of 127 W. The TPC and AC obtained were approximately 44.3 ±0.13 mg GAE/g DW and 341.26 ±1.54 μmol TE/g DW, respectively. The effect of microwaving on the cell destruction of Polygonum multiflorum Thunb. root was observed by scanning electron microscopy (SEM). Some phenolic compounds, for instance gallic acid, catechin and resveratrol, were determined by the HPLC method. These factors significantly affected TPC and AC. Acetone can thus be used as a solvent with microwave-assisted extraction to achieve the best result.
Wang, Tao; He, Fuhong; Zhang, Anding; Gu, Lijuan; Wen, Yangmao; Jiang, Weiguo; Shao, Hongbo
2014-01-01
This paper took a subregion of a small watershed gully system at the Beiyanzikou catchment of Qixia, China, as a study area and, using object-oriented image analysis (OBIA), extracted the shoulder lines of gullies from high-spatial-resolution digital orthophoto map (DOM) aerial photographs. Next, it proposed an accuracy assessment method based on the adjacent distance between the boundary classified by remote sensing and points measured by RTK-GPS along the shoulder lines of the gullies. Finally, the original surface was fitted using linear regression in accordance with the elevations of the two extracted edges of the experimental gullies, named Gully 1 and Gully 2, and the erosion volume was calculated. The results indicate that OBIA can effectively extract gully information; the average distance between points field-measured along the gully edges and the classified boundary is 0.3166 m, with a variance of 0.2116 m. The erosion areas and volumes of the two gullies are 2141.6250 m2 and 5074.1790 m3, and 1316.1250 m2 and 1591.5784 m3, respectively. The results of the study provide a new method for the quantitative study of small gully erosion. PMID:24616626
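The distance-based accuracy assessment reduces, in its simplest form, to nearest-neighbour distances from GPS check points to the densified classified boundary, as in this generic sketch (all coordinates invented):

```python
# Nearest distance from GPS check points to a classified boundary (sketch).
import numpy as np
from scipy.spatial import cKDTree

def boundary_distances(gps_points, boundary_points):
    """Distance from each GPS point to the nearest boundary point."""
    tree = cKDTree(boundary_points)
    dists, _ = tree.query(gps_points)
    return dists

# Densified classified boundary (toy line y = 0) and GPS points near it
boundary = np.column_stack([np.linspace(0, 100, 1001), np.zeros(1001)])
rng = np.random.default_rng(3)
gps = np.column_stack([rng.uniform(0, 100, 50), rng.normal(0.3, 0.2, 50)])

d = boundary_distances(gps, boundary)
print(f"mean {d.mean():.4f} m, variance {d.var():.4f} m^2")
```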
NASA Astrophysics Data System (ADS)
Kassem, Mohammed A.; Amin, Alaa S.
2015-02-01
A new method for estimating rhodium at trace levels in different samples has been developed. Rhodium was complexed with 5-(4′-nitro-2′,6′-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) as a complexing agent in an aqueous medium and concentrated by using Triton X-114 as a surfactant. The rhodium complex was preconcentrated by a cloud point extraction process using the nonionic surfactant Triton X-114 to extract the complex from aqueous solutions at pH 4.75. After phase separation at 50 °C, the surfactant-rich phase was heated again at 100 °C to remove water after decantation, and the remaining phase was dissolved in 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear over the concentration range of 0.5-75 ng mL(-1) and the detection limit was 0.15 ng mL(-1) of the original solution. An enhancement factor of 500 was achieved for 250 mL samples containing the analyte, and relative standard deviations were ⩽1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical, and was safely applied to rhodium determination in complex materials such as synthetic alloy mixtures and environmental water samples.
Small target detection using objectness and saliency
NASA Astrophysics Data System (ADS)
Zhang, Naiwen; Xiao, Yang; Fang, Zhiwen; Yang, Jian; Wang, Li; Li, Tao
2017-10-01
We are motivated by the need for a generic object detection algorithm that achieves high recall for small targets in complex scenes with acceptable computational efficiency. We propose a novel object detection algorithm with high localization quality and acceptable computational cost. First, we obtain the objectness map as in BING[1] and use NMS to get the top N points. Then, the k-means algorithm is used to cluster them into K classes according to their location, and we set the center points of the K classes as seed points. For each seed point, an object potential region is extracted. Finally, a fast salient object detection algorithm[2] is applied to the object potential regions to highlight object-like pixels, and a series of efficient post-processing operations is proposed to locate the targets. Our method runs at 5 FPS on 1000×1000 images and significantly outperforms previous methods on small targets in cluttered backgrounds.
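The seed-point step is easy to make concrete: cluster the top-N objectness points with k-means and take the K cluster centers as seeds. The point coordinates below are random stand-ins for the BING + NMS output.

```python
# Cluster top-N objectness points into K seed points (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
top_n_points = rng.uniform(0, 1000, size=(200, 2))    # (x, y) of top-N points

K = 8
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(top_n_points)
seed_points = km.cluster_centers_                     # K seed locations
print(np.round(seed_points, 1))
```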
Uddin, Md Salim; Sarker, Md Zaidul Islam; Ferdosh, Sahena; Akanda, Md Jahurul Haque; Easmin, Mst Sabina; Bt Shamsudin, Siti Hadijah; Bin Yunus, Kamaruzzaman
2015-05-01
Phytosterols provide important health benefits: in particular, the lowering of cholesterol. From environmental and commercial points of view, the most appropriate technique for extracting phytosterols from plant matrices has been sought. As a green technology, supercritical fluid extraction (SFE) using carbon dioxide (CO2) is widely used to extract bioactive compounds from different plant matrices. Several studies have been performed to extract phytosterols using supercritical CO2 (SC-CO2), and this technology has clearly offered potential advantages over conventional extraction methods. However, the efficiency of SFE technology relies entirely on the processing parameters, the chemistry of the compounds of interest, the nature of the plant matrices and the expertise of handling. This review covers SFE technology with particular reference to phytosterol extraction using SC-CO2. Moreover, the chemistry of phytosterols, the properties of supercritical fluids (SFs) and the applied experimental designs are discussed for a better understanding of phytosterol solubility in SC-CO2. © 2014 Society of Chemical Industry.
NASA Astrophysics Data System (ADS)
Liu, J. H.; Hu, J.; Li, Z. W.
2018-04-01
Three-dimensional (3-D) deformation fields for the October 2016 Central Tottori earthquake are extracted in this paper from ALOS-2 Interferometric Synthetic Aperture Radar (InSAR) observations with four different viewing geometries, i.e., ascending/descending and left-/right-looking. In particular, the Strain Model and Variance Component Estimation (SM-VCE) method is developed to integrate the heterogeneous InSAR observations without being affected by the coverage inconformity of the SAR images associated with the earthquake focal area. Compared with the classical weighted least squares (WLS) method, the SM-VCE method retrieves a more accurate and complete deformation field for the Central Tottori earthquake, as indicated by comparison with GNSS observations. In addition, the accuracies of the heterogeneous InSAR observations and of the 3-D deformations at each point are quantitatively provided by the SM-VCE method.
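For contrast with SM-VCE, the classical per-point WLS baseline can be sketched directly: stack the LOS unit vectors of the four viewing geometries into a design matrix and solve for the east/north/up displacement. The geometry angles, sign conventions and noise levels below are illustrative assumptions, not values from the paper.

```python
# Per-point WLS inversion of 3-D displacement from LOS observations (sketch).
import numpy as np

def los_unit_vector(incidence_deg, heading_deg, right_looking=True):
    """Unit vector mapping (east, north, up) displacement to LOS change.
    Sign conventions vary between processors; this one is an assumption."""
    inc, head = np.radians(incidence_deg), np.radians(heading_deg)
    look = 1.0 if right_looking else -1.0
    return np.array([-look * np.sin(inc) * np.cos(head),
                      look * np.sin(inc) * np.sin(head),
                      np.cos(inc)])

# Four viewing geometries: asc/desc x right-/left-looking (angles assumed)
geoms = [(39, -10, True), (39, -170, True), (39, -10, False), (39, -170, False)]
A = np.array([los_unit_vector(*g) for g in geoms])

d_true = np.array([0.10, -0.05, 0.02])        # east, north, up (m)
sigma = np.array([0.005, 0.005, 0.007, 0.007])
rng = np.random.default_rng(5)
d_los = A @ d_true + rng.normal(0, sigma)     # noisy LOS observations

W = np.diag(1 / sigma**2)                     # weights from variances
d_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ d_los)
print(np.round(d_hat, 3))                     # close to d_true
```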
Voxel- and Graph-Based Point Cloud Segmentation of 3D Scenes Using Perceptual Grouping Laws
NASA Astrophysics Data System (ADS)
Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U.
2017-05-01
Segmentation is the fundamental step for recognizing and extracting objects from the point cloud of a 3D scene. In this paper, we present a strategy for point cloud segmentation using a voxel structure and graph-based clustering with perceptual grouping laws, which allows a learning-free and completely automatic, yet parametric, solution for segmenting 3D point clouds. More precisely, two segmentation methods utilizing voxel and supervoxel structures are reported and tested. The voxel-based data structure increases the efficiency and robustness of the segmentation process, suppressing the negative effects of noise, outliers, and uneven point densities. The clustering of voxels and supervoxels is carried out using graph theory on the basis of local contextual information, whereas conventional clustering algorithms commonly use merely pairwise information. Through the use of perceptual laws, our method conducts the segmentation in a purely geometric way, avoiding the use of RGB color and intensity information, so that it can be applied in more general settings. Experiments using different datasets have demonstrated that our proposed methods achieve good results, especially for complex scenes and nonplanar object surfaces. Quantitative comparisons between our methods and other representative segmentation methods also confirm the effectiveness and efficiency of our proposals.
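The voxel structure itself is straightforward to build, as in this sketch, which shows only the data structure underlying voxel-based segmentation (integer-divide coordinates by the voxel size and group point indices per occupied voxel), not the perceptual-grouping clustering:

```python
# Build a voxel structure over a point cloud (illustrative sketch).
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size=0.1):
    """points: (N, 3) array. Returns {voxel_key: [point indices]}."""
    keys = np.floor(points / voxel_size).astype(int)
    voxels = defaultdict(list)
    for i, key in enumerate(map(tuple, keys)):
        voxels[key].append(i)
    return voxels

rng = np.random.default_rng(6)
cloud = rng.uniform(0, 1, size=(5000, 3))
voxels = voxelize(cloud, voxel_size=0.2)
print(len(voxels), "occupied voxels; first voxel holds",
      len(next(iter(voxels.values()))), "points")
```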
Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging
NASA Astrophysics Data System (ADS)
Lin, Bingxiong; Sun, Yu; Qian, Xiaoning
2013-03-01
Robust feature point matching for images with large view angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to the low texture and specular reflections in these images. This paper presents a new approach that improves feature matching performance by exploiting the inherent geometric properties of the organ surfaces. Recently, intensity-based template image tracking using a Thin Plate Spline (TPS) model has been extended to 3D surface tracking with stereo cameras. Intensity-based tracking is also used here for 3D reconstruction of internal organ surfaces. First, to overcome the small-displacement requirement of intensity-based tracking, feature point correspondences are used for proper initialization of the nonlinear optimization in the intensity-based method. Second, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image, and the descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicone phantoms and in vivo images. The experimental results show that our method is much more robust with respect to view angle changes than other state-of-the-art methods.
Pippi, Roberto
2013-01-01
Aim: The primary aim of the present study was to validate the effectiveness of a personalized device able to guide periodontal probing in the evaluation of second molar periodontal healing after adjacent third molar surgical extraction. Secondarily, the study analyzed whether any patient- and tooth-related factors affected second molar periodontal healing, and whether they affected the periodontal probing depth measured with or without the personalized device. Materials and methods: Thirty-five lower second molars were evaluated after extraction of the adjacent third molar. Pre-operative as well as 3- and 12-month post-operative probing depths of the distal surface of the second molar were evaluated. All measurements were taken by two different methods: standard two-point probing, and four-point probing using a personalized onlay-type guide. Periapical radiographs were also evaluated. The Pearson product moment and the general linear model with backward stepwise procedure were used for inferential statistics. Results: The ratio of mean 12-month post-operative probing depth to mean pre-operative probing depth obtained with the guided probing method showed a highly significant effect on the ratio of 12-month post-operative to pre-operative radiographic measures. None of the examined patient- or tooth-related factors showed a significant effect on the pre-operative/12-month post-operative radiographic measure ratio. Conclusions: The use of the proposed personalized device seems to provide a more reliable estimate of second molar periodontal healing after adjacent third molar surgical extraction. No patient- or tooth-related factors seem able to affect either second molar periodontal healing or probing depth measures obtained with or without the personalized device in individuals younger than 25 years old. It can therefore be recommended that lower third molar surgical extraction be performed in young adults. PMID:24611086
Mechanical properties of sol–gel derived SiO2 nanotubes
Antsov, Mikk; Vlassov, Sergei; Dorogin, Leonid M; Vahtrus, Mikk; Zabels, Roberts; Lange, Sven; Lõhmus, Rünno
2014-01-01
The mechanical properties of thick-walled SiO2 nanotubes (NTs), prepared by a sol–gel method using Ag nanowires (NWs) as templates, were measured using different methods. In situ scanning electron microscopy (SEM) cantilever beam bending tests were carried out using a nanomanipulator equipped with a force sensor in order to investigate the plasticity and flexural response of the NTs. Nanoindentation and three-point bending tests of the NTs were performed by atomic force microscopy (AFM) under ambient conditions. The half-suspended and three-point bending tests were processed in the framework of linear elasticity theory, and finite element method simulations were used to extract Young's modulus values from the nanoindentation data. Finally, the Young's moduli of the SiO2 NTs measured by the different methods were compared and discussed. PMID:25383292
A Method for Automatic Extraction of the Intracranial Region in MR Brain Images
NASA Astrophysics Data System (ADS)
Kurokawa, Keiji; Miura, Shin; Nishida, Makoto; Kageyama, Yoichi; Namura, Ikuro
It is well known that the temporal lobe in MR brain images is used for estimating the grade of Alzheimer-type dementia, but it is difficult to estimate the grade from the temporal lobe region alone. From the standpoint of supporting medical specialists, this paper proposes a data processing approach for the automatic extraction of the intracranial region from MR brain images. The method eliminates the cranium region with the Laplacian histogram method, and the brainstem with feature points that are related to observations given by a medical specialist. In order to examine the usefulness of the proposed approach, the percentage of the temporal lobe in the intracranial region was calculated. As a result, the percentage of the temporal lobe in the intracranial region as the grade progressed was in agreement with the visual standards of temporal lobe atrophy given by the medical specialist. It became clear that the intracranial region extracted by the proposed method is suitable for estimating the grade of Alzheimer-type dementia.
Marking Importance in Lectures: Interactive and Textual Orientation
ERIC Educational Resources Information Center
Deroey, Katrien L. B.
2015-01-01
This paper provides a comprehensive overview of lexicogrammatical markers of important lecture points and proposes a classification in terms of their interactive and textual orientation. The importance markers were extracted from the British Academic Spoken English corpus using corpus-driven and corpus-based methods. The classification is based on…
Facile silicification of plastic surface for bioassays
Hong, Seonki; Park, Ki Soo; Weissleder, Ralph; Castro, Cesar M.; Lee, Hakho
2017-01-01
We herein report a biomimetic technique to modify plastic substrates for bioassays. The method first deposits a polydopamine adhesion layer on the plastic surface, and then grows a conformal silica coating. As proof of principle, we coated plastic microbeads to construct a disposable filter for point-of-care nucleic acid extraction. PMID:28134385
Řezanka, Tomáš; Matoulková, Dagmar; Kolouchová, Irena; Masák, Jan; Viden, Ivan; Sigler, Karel
2015-05-01
Methods for preparing fatty acids from brewer's yeast and their use in the production of biofuels and in different branches of industry are described. Isolation of fatty acids from cell lipids includes cell disintegration (e.g., with liquid nitrogen, KOH, NaOH, petroleum ether, nitrogenous basic compounds, etc.) and subsequent processing of the extracted lipids, including analysis of the fatty acids and computation of biodiesel properties such as viscosity, density, cloud point, and cetane number. Methyl esters obtained from brewer's waste yeast are well suited to the production of biodiesel. All 49 samples (7 breweries and 7 methods) meet the requirements for biodiesel quality, in both the composition of the fatty acids and the properties of the biofuel, required by the US and EU standards.
Halfon, Philippe; Ouzan, Denis; Khiri, Hacène; Pénaranda, Guillaume; Castellani, Paul; Oulès, Valerie; Kahloun, Asma; Amrani, Nolwenn; Fanteria, Lise; Martineau, Agnès; Naldi, Lou; Bourlière, Marc
2012-01-01
Background & Aims: Point mutations in the coding region of the interleukin 28 gene (rs12979860) have recently been identified as predicting the outcome of treatment of hepatitis C virus infection. This polymorphism detection was based on whole blood DNA extraction. Alternatively, DNA for genetic diagnosis has been derived from buccal epithelial cells (BEC), dried blood spots (DBS), and genomic DNA from serum. The aim of the study was to investigate the reliability and accuracy of alternative routes of testing for the single nucleotide polymorphism allele rs12979860CC. Methods: Blood, plasma, and serum samples from 200 patients were extracted (400 µL). Buccal smears were tested using an FTA card. To simulate postal delay, we tested the influence of storage at ambient temperature on the different sources of DNA at five time points (baseline, 48 h, 6 days, 9 days, and 12 days). Results: There was 100% concordance between blood, plasma, sera, and BEC, validating the use of DNA extracted from BEC collected on cytology brushes for genetic testing. Genetic variations in the HPRT1 gene were detected using the smear technique in blood smears (3620 copies) as well as in buccal smears (5870 copies); these results are similar to those for whole blood diluted 1/10. A minimum of 0.04 µL, 4 µL, and 40 µL was necessary to obtain usable results for whole blood, sera, and plasma, respectively. No significant variation between time points was observed for the different sources of DNA. IL28B SNP analysis at these time points gave the same results with all four sources of DNA. Conclusion: We demonstrated that genomic DNA extraction from buccal cells, small amounts of serum, and dried blood spots is an alternative to DNA extracted from peripheral blood cells, and is helpful in retrospective and prospective studies of multiple genetic markers, specifically in hard-to-reach individuals. PMID:22412970
Semantic Segmentation of Building Elements Using Point Cloud Hashing
NASA Astrophysics Data System (ADS)
Chizhova, M.; Gurianov, A.; Hess, M.; Luhmann, T.; Brunn, A.; Stilla, U.
2018-05-01
For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be quite well and simply classified by a human operator (e.g., an architect) into different building types and structural elements (dome, nave, transept, etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, in which point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is suitable for other buildings and objects characterized by a particular constructional typology (e.g., industrial objects in standardized environments with strict component design allowing clear semantic modelling).
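The hashing idea can be illustrated minimally: rasterize a point cloud projection into a binary occupancy image and hash its bytes, so that structurally identical projections map to the same key. The grid size and hash function below are assumptions.

```python
# Hash a point cloud projection via a binary pixel image (toy sketch).
import hashlib
import numpy as np

def projection_hash(points, axis=2, grid=32):
    """Project (N, 3) points along `axis`, rasterize occupancy on a
    grid x grid image, and return a hex digest of the binary image."""
    proj = np.delete(points, axis, axis=1)            # drop projected axis
    mins, maxs = proj.min(0), proj.max(0)
    cells = ((proj - mins) / (maxs - mins + 1e-12) * (grid - 1)).astype(int)
    image = np.zeros((grid, grid), dtype=np.uint8)
    image[cells[:, 0], cells[:, 1]] = 1               # binary occupancy
    return hashlib.sha1(image.tobytes()).hexdigest()

rng = np.random.default_rng(7)
cloud = rng.normal(0, 1, size=(2000, 3))              # toy building element
print(projection_hash(cloud)[:16])
```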
A new method for automated discontinuity trace mapping on rock mass 3D surface model
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Chen, Jianqin; Zhu, Hehua
2016-04-01
This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
Li, Jiekang; Li, Guirong; Han, Qian
2016-12-05
In this paper, two kinds of salophens (Sal) with different solubilities, Sal1 and Sal2, were synthesized; both can combine with uranyl to form stable complexes, [UO2(2+)-Sal1] and [UO2(2+)-Sal2]. Of these, [UO2(2+)-Sal1] was used as a ligand to extract uranium from complex samples by dual cloud point extraction (dCPE), and [UO2(2+)-Sal2] was used as a catalyst for the determination of uranium by a photocatalytic resonance fluorescence (RF) method. The photocatalytic effect of [UO2(2+)-Sal2] on the oxidation of pyronine Y (PRY) by potassium bromate, which decreases the RF intensity of PRY, was studied. The decrease in RF intensity of the reaction system (ΔF) is proportional to the concentration of uranium (c), and a novel photocatalytic RF method was developed for the determination of trace uranium(VI) after dCPE. The combination of the photocatalytic RF technique with the dCPE procedure endows the presented method with enhanced sensitivity and selectivity. Under optimal conditions, the calibration curve was linear over the range 0.067 to 6.57 ng mL(-1); the linear regression equation was ΔF = 438.0c (ng mL(-1)) + 175.6, with a correlation coefficient r = 0.9981. The limit of detection was 0.066 ng mL(-1). The proposed method was successfully applied to the separation and determination of uranium in real samples, with recoveries of 95.0-103.5%. The mechanisms of the indicator reaction and dCPE are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Favre-Réguillon, Alain; Draye, Micheline; Lebuzit, Gérard; Thomas, Sylvie; Foos, Jacques; Cote, Gérard; Guy, Alain
2004-06-17
Cloud point extraction (CPE) was used to extract and separate lanthanum(III) and gadolinium(III) nitrate from an aqueous solution. The methodology used is based on the formation of lanthanide(III)-8-hydroxyquinoline (8-HQ) complexes soluble in a micellar phase of non-ionic surfactant. The lanthanide(III) complexes are then extracted into the surfactant-rich phase at a temperature above the cloud point temperature (CPT). The structure of the non-ionic surfactant, and the chelating agent-metal molar ratio are identified as factors determining the extraction efficiency and selectivity. In an aqueous solution containing equimolar concentrations of La(III) and Gd(III), extraction efficiency for Gd(III) can reach 96% with a Gd(III)/La(III) selectivity higher than 30 using Triton X-114. Under those conditions, a Gd(III) decontamination factor of 50 is obtained.
NASA Technical Reports Server (NTRS)
Page, Lance; Shen, C. N.
1991-01-01
This paper describes skyline-based terrain matching, a new method for locating the vantage point of laser range-finding measurements on a global map previously prepared by satellite or aerial mapping. Skylines can be extracted from the range-finding measurements and modelled from the global map, and are represented in parametric, cylindrical form with azimuth angle as the independent variable. The three translational parameters of the vantage point are determined with a three-dimensional matching of these two sets of skylines.
He, Feng; Zeng, An-Ping
2006-01-01
Background: The increasing availability of time-series expression data opens up new possibilities to study functional linkages of genes. Present methods used to infer functional linkages between genes from expression data are mainly based on a point-to-point comparison; change trends between consecutive time points in time-series data have so far not been well explored. Results: In this work we present a new method based on extracting the main features of the change trend and change level of gene expression between consecutive time points. The method, termed trend correlation (TC), includes two major steps: (1) calculating a maximal local alignment of change-trend score by dynamic programming, together with a change-trend correlation coefficient between the maximal matched change levels of each gene pair; and (2) inferring relationships of gene pairs based on two statistical extraction procedures. The new method considers time shifts and inverted relationships in a similar way to the local clustering (LC) method, but the latter is based merely on a point-to-point comparison. The TC method is demonstrated with data from the yeast cell cycle and compared with the LC method and the widely used Pearson correlation coefficient (PCC) based clustering method. The biological significance of the gene pairs is examined with several large-scale yeast databases. Although the TC method predicts an overall lower number of gene pairs than the other two methods at the same p-value threshold, the additional number of gene pairs inferred by the TC method is considerable: e.g., 20.5% compared with the LC method and 49.6% compared with the PCC method at a p-value threshold of 2.7E-3. Moreover, the percentage of inferred gene pairs consistent with the databases is generally higher for our method than for the LC method, and similar to the PCC method. A significant number of the gene pairs inferred only by the TC method are process-identity or function-similarity pairs or have well-documented biological interactions, including 443 known protein interactions and some known cell-cycle-related regulatory interactions. It should be emphasized that the overlap of gene pairs detected by the three methods is normally not very high, indicating the need to combine the different methods when searching for functional associations of genes in time-series data. At a p-value threshold of 1E-5, the percentage of process-identity and function-similarity gene pairs among the shared part of the three methods reaches 60.2% and 55.6%, respectively, building a good basis for further experimental and functional study. Furthermore, the combined use of methods is important for inferring more complete regulatory circuits and networks, as exemplified in this study. Conclusion: The TC method can significantly augment the current major methods for inferring functional linkages and biological networks, and is well suited to exploring temporal relationships of gene expression in time-series data. PMID:16478547
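The trend idea is simple to state in code: compare profiles by the direction of change between consecutive time points rather than by raw values. The sketch below uses a plain trend-match fraction in place of the paper's dynamic-programming alignment and statistical extraction procedures.

```python
# Compare expression profiles by change trends between time points (sketch).
import numpy as np

def change_trends(profile):
    """Return +1/0/-1 per step: up, unchanged, or down between time points."""
    return np.sign(np.diff(profile))

def trend_match_score(p1, p2):
    """Fraction of steps where the two profiles change in the same direction."""
    t1, t2 = change_trends(p1), change_trends(p2)
    return float(np.mean(t1 == t2))

g1 = np.array([1.0, 2.1, 3.0, 2.2, 1.1, 2.0])
g2 = np.array([0.5, 1.6, 2.4, 1.8, 0.9, 1.7])   # co-regulated-looking profile
g3 = g2[::-1].copy()                            # reversed profile
print(trend_match_score(g1, g2))                # high: same trend pattern
print(trend_match_score(g1, g3))                # lower for the reversed profile
```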
Romarís-Hortas, Vanessa; Moreda-Piñeiro, Antonio; Bermejo-Barrera, Pilar
2009-08-15
The feasibility of using microwave energy to assist the solubilisation of edible seaweed samples with tetramethylammonium hydroxide (TMAH) has been investigated for extracting iodine and bromine. Inductively coupled plasma-mass spectrometry (ICP-MS) was used as a multi-element detector. The variables affecting the microwave-assisted extraction/solubilisation (temperature, TMAH volume, ramp time and hold time) were first screened by applying a fractional factorial design (2^(5-1) + 2) of resolution V with 2 centre points. For the extraction of both halogens, the results showed statistical significance (95% confidence interval) for TMAH volume and temperature, and also for the second-order interaction between the two variables. These two variables were therefore optimized by a 2^2 + star orthogonal central composite design with 5 centre points and 2 replicates, and optimum values of 200 °C and 10 mL were found for temperature and TMAH volume, respectively. The extraction time (ramp and hold times) was found to be statistically non-significant, and values of 10 and 5 min were chosen for the ramp time and the hold time, respectively; this means a fast microwave heating cycle. The repeatability of the overall procedure was found to be 6% for both elements, while limits of detection of 24.6 and 19.9 ng g(-1) were established for iodine and bromine, respectively. The accuracy of the method was assessed by analyzing the NIES-09 (Sargasso, Sargassum fulvellum) certified reference material (CRM), and the iodine and bromine concentrations found were in good agreement with the indicative values for this CRM. Finally, the method was applied to several edible dried and canned seaweed samples.
Size analysis of nanoparticles extracted from W/O emulsions.
Nagelreiter, C; Kotisch, H; Heuser, T; Valenta, C
2015-07-05
Nanosized particles are frequently used in many different applications, especially TiO2 nanoparticles as physical filters in sunscreens to protect the skin from UV radiation. However, concerns have arisen about possible health issues caused by nanoparticles, and therefore the assessment of the occurrence of nanoparticles in pharmaceutical and cosmetic formulations is important. In previous work by our group, a method was presented to extract nanoparticles from O/W emulsions. However, to meet the needs of dry and sensitive skin, sunscreens of the water-in-oil emulsion type are also available, and the assessment of nanoparticles present in these is an equally important issue; the present study therefore offers a method for extracting nanoparticles from W/O emulsions. Both methods start from the same point, which minimizes both effort and cost before the beginning of the assessment. By addition of NaOH pellets and centrifugation, particles were extracted from W/O emulsions and measured for size and surface area by laser diffraction. With the simple equation Q = A/S, a distinction between nanoparticles and microparticles was achieved in W/O emulsions, even in commercially available samples. The present method is quick and easy to implement, which makes it cost-effective. Copyright © 2015 Elsevier B.V. All rights reserved.
Alpmann, Alexander; Morlock, Gertrud
2009-01-01
A new method has been developed for the determination of acrylamide in ground coffee by planar chromatography using prechromatographic in situ derivatization with dansulfinic acid. After pressurized fluid extraction of acrylamide from the coffee samples, the extracts were passed through activated carbon and concentrated. These extracts were applied onto a silica gel 60 HPTLC plate and oversprayed with dansulfinic acid. By heating the plate, acrylamide was derivatized into the fluorescent product dansylpropanamide. Chromatographic separation with an ethyl acetate-tert-butyl methyl ether (8 + 2, v/v) mobile phase was followed by densitometric quantification at 254/>400 nm using a 4-point calibration via the standard addition method over the whole system, for which acrylamide was added at different concentrations at the beginning of the extraction process. The method was validated for commercial coffee. The linearity over the whole procedure showed determination coefficients between 0.9995 and 0.9825 (n = 6). The limit of quantitation, at a signal-to-noise ratio of 10, was determined to be 48 μg/kg. The within-run precision (relative standard deviation, n = 6) of the chromatographic method was 3%. The commercial coffee samples analyzed showed acrylamide contents between 52 and 191 μg/kg, in line with amounts reported in previous publications.
McFall, Sally M; Wagner, Robin L; Jangam, Sujit R; Yamada, Douglas H; Hardie, Diana; Kelso, David M
2015-03-01
Early diagnosis of, and access to treatment for, infants with human immunodeficiency virus-1 (HIV-1) is critical to reduce infant mortality. The lack of simple point-of-care tests impedes the timely initiation of antiretroviral therapy. The development of FINA (filtration isolation of nucleic acids), a novel DNA extraction method that can be performed by clinic personnel in less than 2 min, has been reported previously. In this report, significant improvements in the DNA extraction and amplification methods are detailed that allow sensitive quantitation of as few as 10 copies of HIV-1 proviral DNA and detection of 3 copies extracted from 100 μL of whole blood. An internal control to detect PCR inhibition was also incorporated. In a preliminary field evaluation of 61 South African infants, the FINA test demonstrated 100% sensitivity and specificity. The proviral copy number of the infant specimens was quantified, and it was established that 100 μL of whole blood is required for sensitive diagnosis of infants. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
A New Method for Calculating Counts in Cells
NASA Astrophysics Data System (ADS)
Szapudi, István
1998-04-01
In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and the Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm which in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
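For concreteness, the conventional finite-sampling estimator that the paper improves upon looks like the following sketch: throw a finite number of random cells and histogram the counts, yielding an estimate of P(N) whose measurement error shrinks only as the number of sampling cells grows. All data here are toy values.

```python
# Conventional counts-in-cells estimate with finitely many cells (sketch).
import numpy as np

rng = np.random.default_rng(8)
galaxies = rng.uniform(0, 100, size=(2_000, 2))     # toy catalog, 100x100 box

def counts_in_cells(gal, cell_size=5.0, n_cells=2000, box=100.0):
    """Estimate P(N) by throwing n_cells random square cells."""
    lows = rng.uniform(0, box - cell_size, size=(n_cells, 2))
    counts = np.empty(n_cells, dtype=int)
    for i, lo in enumerate(lows):
        inside = np.all((gal >= lo) & (gal < lo + cell_size), axis=1)
        counts[i] = inside.sum()
    return np.bincount(counts) / n_cells            # estimate of P(N)

pn = counts_in_cells(galaxies)                      # mean count ~5 per cell
print("P(N) estimate for N = 0..7:", np.round(pn[:8], 4))
```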
Gürkan, Ramazan; Kır, Ufuk; Altunay, Nail
2015-08-01
The determination of inorganic arsenic species in water, beverages and foods has become crucial in recent years, because arsenic species are considered carcinogenic and can occur at high concentrations in these samples. This communication describes a new cloud-point extraction (CPE) method for the determination of trace quantities of arsenic species, by UV-Visible spectrophotometry (UV-Vis), in samples purchased from the local market. The method is based on a selective ternary complex of As(V) with acridine orange (AOH(+)), a versatile cationic fluorescent dye, in the presence of tartaric acid and polyethylene glycol tert-octylphenyl ether (Triton X-114) at pH 5.0. Under the optimized conditions, a preconcentration factor of 65 and a detection limit (3S(blank)/m) of 1.14 μg L(-1) were obtained from the calibration curve constructed in the range of 4-450 μg L(-1) with a correlation coefficient of 0.9932 for As(V). The method was validated by the analysis of certified reference materials (CRMs). Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Koide, Kaoru; Koike, Katsuaki
2012-10-01
This study developed a geobotanical remote sensing method for detecting high water table zones using differences in the condition of forest trees induced by groundwater supply in a humid warm-temperate region. A new vegetation index (VI) termed the added green band NDVI (AgbNDVI) was proposed to discriminate these differences. The AgbNDVI proved to be more sensitive to water stress on green vegetation than existing VIs, such as SAVI and EVI2, and possessed a strong linear correlation with the vegetation fraction. To validate the proposed method, a 23 km2 study area was selected in the Tono region of Gifu prefecture, central Japan. The AgbNDVI values were calculated from atmospherically corrected SPOT HRV data. To correctly extract high-VI points, the factors influencing forest tree growth were identified using the AgbNDVI values, DEM and forest type data; the study area was then divided into 555 domains defined by combinations of the influencing factors and forest types. Thresholds for extracting high-VI points were defined for each domain based on histograms of AgbNDVI values. When the high-VI points are superimposed on topographic and geologic maps, most are clearly located on either concave or convex slopes and are proximal to geologic boundaries, particularly the boundary between the Pliocene gravel layer and the Cretaceous granite, which should act as a groundwater flow path. In addition, field investigations support the correctness of the high-VI points, because they are located around groundwater seeps and in high water table zones where the growth increments and biomass of trees are greater than at low-VI points.
Biologically active extracts with kidney affections applications
NASA Astrophysics Data System (ADS)
Pascu (Neagu), Mihaela; Pascu, Daniela-Elena; Cozea, Andreea; Bunaciu, Andrei A.; Miron, Alexandra Raluca; Nechifor, Cristina Aurelia
2015-12-01
This paper aims to select plant materials rich in bioflavonoid compounds, derived from herbs known for their performance in the prevention and therapy of renal diseases, namely kidney stones and urinary infections (renal lithiasis, nephritis, urethritis, cystitis, etc.). It presents a comparative study of the composition of medicinal plant extracts belonging to the Ericaceae family: Cranberry (fruit and leaves) - Vaccinium vitis-idaea L. and Bilberry (fruit) - Vaccinium myrtillus L. The concentrated extracts obtained from the medicinal plants used in this work were analyzed from structural, morphological and compositional points of view using different techniques: chromatographic methods (HPLC), scanning electron microscopy, infrared and UV spectrophotometry, as well as a kinetic model. Liquid chromatography identified arbutoside, a compound characteristic of the Ericaceae family present in all three extracts, as well as components specific to each species, mostly from the class of polyphenols. The identification and quantitative determination of the active ingredients in these extracts can provide information related to their therapeutic effects.
Komaty, Sarah; Letertre, Marine; Dang, Huyen Duong; Jungnickel, Harald; Laux, Peter; Luch, Andreas; Carrié, Daniel; Merdrignac-Conanec, Odile; Bazureau, Jean-Pierre; Gauffre, Fabienne; Tomasi, Sophie; Paquin, Ludovic
2016-04-01
Lichens are symbiotic organisms known for producing unique secondary metabolites with attractive cosmetic and pharmacological properties. In this paper, we investigated three standard methods of preparing Pseudevernia furfuracea (blender grinding, ball milling, pestle and mortar). The materials obtained were characterized by electron microscopy and nitrogen adsorption and compared from the point of view of extraction; their microscopic structure is related to extraction efficiency. In addition, it is shown using thalline reactions and mass spectrometry mapping (TOF-SIMS) that these metabolites are not evenly distributed throughout the organism. In particular, atranorin (a secondary metabolite of interest) is mainly present in the cortex of P. furfuracea. Finally, using microwave assisted extraction (MAE) we obtained evidence that an appropriate preparation can increase the extraction efficiency of atranorin by a factor of five. Copyright © 2016 Elsevier B.V. All rights reserved.
ECG fiducial point extraction using switching Kalman filter.
Akhbari, Mahsa; Ghahjaverestan, Nasim Montazeri; Shamsollahi, Mohammad B; Jutten, Christian
2018-04-01
In this paper, we propose a novel method for extracting fiducial points (FPs) of the beats in electrocardiogram (ECG) signals using a switching Kalman filter (SKF). In this method, following McSharry's model, the ECG waveforms (P-wave, QRS complex and T-wave) are modeled with Gaussian functions and the ECG baselines are modeled with first-order autoregressive models. A discrete state variable called the "switch" is introduced that affects only the observation equations: each mode corresponds to a specific observation equation, and the switch moves among seven modes corresponding to different segments of an ECG beat. At each time instant, the probability of each mode is calculated and compared between two consecutive modes, and a path is estimated that relates each part of the ECG signal to the mode with the maximum probability. The ECG FPs are read off from the estimated path. For performance evaluation, the Physionet QT database is used and the proposed method is compared with methods based on the wavelet transform, the partially collapsed Gibbs sampler (PCGS) and the extended Kalman filter. For our proposed method, the mean error and the root mean square error across all FPs are 2 ms (i.e. less than one sample) and 14 ms, respectively. These errors are significantly smaller than those obtained using the other methods, and the proposed method achieves lower RMSE and smaller variability than the others. Copyright © 2018 Elsevier B.V. All rights reserved.
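A toy sketch of the switching idea, not the authors' full SKF: each sample is emitted by one of several Gaussian "modes", and a Viterbi pass recovers the most probable mode path, from which fiducial points are read off at mode changes; the mode means, variances and transition matrix are invented for illustration:

```python
import numpy as np

def viterbi_modes(x, means, variances, log_trans):
    """Most probable mode sequence for a 1D signal under per-mode Gaussian
    emissions and a Markov transition model between modes."""
    n, k = len(x), len(means)
    log_emit = -0.5 * (np.log(2 * np.pi * variances)
                       + (x[:, None] - means) ** 2 / variances)
    score = np.empty((n, k))
    back = np.zeros((n, k), dtype=int)
    score[0] = log_emit[0]
    for t in range(1, n):
        cand = score[t - 1][:, None] + log_trans      # (from-mode, to-mode)
        back[t] = np.argmax(cand, axis=0)
        score[t] = cand[back[t], np.arange(k)] + log_emit[t]
    path = np.empty(n, dtype=int)
    path[-1] = np.argmax(score[-1])
    for t in range(n - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Illustrative beat segment: baseline / wave / baseline.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.1, 40),
                    rng.normal(1, 0.2, 20),
                    rng.normal(0, 0.1, 40)])
means = np.array([0.0, 1.0, 0.0])
variances = np.array([0.01, 0.04, 0.01])
trans = np.array([[0.98, 0.02, 0.0],
                  [0.0, 0.98, 0.02],
                  [0.0, 0.0, 1.0]])
path = viterbi_modes(x, means, variances, np.log(trans + 1e-12))
print("fiducial points at mode changes:", np.flatnonzero(np.diff(path)) + 1)
```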
User-assisted video segmentation system for visual communication
NASA Astrophysics Data System (ADS)
Wu, Zhengping; Chen, Chun
2002-01-01
Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we decompose the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility in the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, together with a point insertion process that provides the feature points for the next frame's tracking.
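A minimal sketch of the eigenvalue-based point selection and frame-to-frame tracking stages using OpenCV (Shi-Tomasi corners and pyramidal Lucas-Kanade flow); the video path is a placeholder, and the user-assistance and contour-formation stages of the system are omitted:

```python
import cv2

cap = cv2.VideoCapture("input.avi")             # placeholder video path
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Eigenvalue-based feature selection (Shi-Tomasi minimum-eigenvalue test).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                              qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Motion estimation: pyramidal Lucas-Kanade optical flow.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)   # keep tracked points
    prev_gray = gray
```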
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.
2012-04-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
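A minimal sketch of sequential change-point detection in this spirit, assuming Gaussian features with a known pre-damage distribution; the unknown post-damage mean is replaced by a crude running estimate (a stand-in for the maximum-likelihood/Bayesian updating described above), and all numbers are illustrative:

```python
import numpy as np

def cusum_unknown_post(x, mu0, sigma, threshold=10.0, min_shift=0.5):
    """One-sided CUSUM where the post-change mean is estimated online."""
    stat, mu1 = 0.0, mu0 + min_shift       # initial post-change mean guess
    for t, xt in enumerate(x):
        # Crude running estimate of the post-change mean from recent data.
        mu1 = 0.99 * mu1 + 0.01 * max(xt, mu0 + min_shift)
        # Log-likelihood ratio of N(mu1, sigma^2) vs N(mu0, sigma^2).
        llr = ((xt - mu0) ** 2 - (xt - mu1) ** 2) / (2 * sigma ** 2)
        stat = max(0.0, stat + llr)        # CUSUM recursion
        if stat > threshold:
            return t                       # declare damage at sample t
    return None

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(1.5, 1, 100)])
print("alarm at sample:", cusum_unknown_post(x, mu0=0.0, sigma=1.0))
```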
Gehrmann, Sebastian; Dernoncourt, Franck; Li, Yeran; Carlson, Eric T; Wu, Joy T; Welt, Jonathan; Foote, John; Moseley, Edward T; Grant, David W; Tyler, Patrick D; Celi, Leo A
2018-01-01
In secondary analysis of electronic health records, a crucial task consists in correctly identifying the patient cohort under investigation. In many cases, the most valuable and relevant information for an accurate classification of medical conditions exists only in clinical narratives. Therefore, it is necessary to use natural language processing (NLP) techniques to extract and evaluate these narratives. The most commonly used approach to this problem relies on extracting a number of clinician-defined medical concepts from text and using machine learning techniques to identify whether a particular patient has a certain condition. However, recent advances in deep learning and NLP enable models to learn a rich representation of (medical) language. Convolutional neural networks (CNNs) for text classification can augment the existing techniques by leveraging the representation of language to learn which phrases in a text are relevant for a given medical condition. In this work, we compare concept-extraction-based methods with CNNs and other commonly used models in NLP on ten phenotyping tasks using 1,610 discharge summaries from the MIMIC-III database. We show that CNNs outperform concept-extraction-based methods in almost all of the tasks, with improvements of up to 26 percentage points in F1-score and up to 7 percentage points in area under the ROC curve (AUC). We additionally assess the interpretability of both approaches by presenting and evaluating methods that calculate and extract the most salient phrases for a prediction. The results indicate that CNNs are a valid alternative to existing approaches in patient phenotyping and cohort identification, and should be further investigated. Moreover, the deep learning approach presented in this paper can be used to assist clinicians during chart review or support the extraction of billing codes from text by identifying and highlighting relevant phrases for various medical conditions.
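A minimal text-CNN phenotyping classifier of the kind compared above, sketched in Keras; the vocabulary size, sequence length and layer sizes are illustrative, not the paper's configuration:

```python
import numpy as np
import tensorflow as tf

VOCAB = 20000   # assumed vocabulary size (placeholder)

# Conv1D learns phrase-level (n-gram) detectors over word embeddings;
# global max-pooling keeps each filter's strongest response in the note.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 128),
    tf.keras.layers.Conv1D(128, 5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(patient has condition)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Dummy batch: 8 "discharge summaries" of 1000 word ids each.
x = np.random.randint(1, VOCAB, size=(8, 1000))
print(model.predict(x).shape)    # (8, 1)
```

Tracing each max-pooled filter activation back to its input position gives one simple way to highlight salient phrases, in the spirit of the interpretability analysis the abstract mentions.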
Ramkumar, Abilasha; Ponnusamy, Vinoth Kumar; Jen, Jen-Fon
2012-08-15
The present study demonstrates a simple, rapid and efficient method for the determination of chlorinated anilines (CAs) in environmental water samples using an ultrasonication-assisted emulsification microextraction technique based on solidification of a floating organic droplet (USAEME-SFO) coupled with high performance liquid chromatography-ultraviolet (HPLC-UV) detection. In this extraction method, 1-dodecanol was used as the extraction solvent because of its lower density than water, low toxicity, low volatility, and low melting point (24 °C). After the USAEME, the extraction solvent could be collected easily by keeping the extraction tube in an ice bath for 2 min; the solidified organic droplet was scooped out using a spatula, transferred to another glass vial and allowed to thaw. Then, 10 μL of the extraction solvent was diluted with mobile phase (1:1) and taken for HPLC-UV analysis. Parameters influencing the extraction efficiency, such as the kind and volume of extraction solvent, volume of sample, ultrasonication time, pH and salt concentration, were thoroughly examined and optimized. Under the optimal conditions, the method showed good linearity in the concentration range of 0.05-500 ng mL(-1) with correlation coefficients ranging from 0.9948 to 0.9957 for the three target CAs. The limit of detection, based on a signal-to-noise ratio of 3, ranged from 0.01 to 0.1 ng mL(-1). The relative standard deviations (RSDs) varied from 2.1 to 6.1% (n=3) and the enrichment factors ranged from 44 to 124. The proposed method has also been successfully applied to analyze real water samples, and the relative recoveries for environmental water samples ranged from 81.1 to 116.9%. Copyright © 2012 Elsevier B.V. All rights reserved.
Registration of laser point clouds of non-cooperative moving targets from different viewpoints
NASA Astrophysics Data System (ADS)
Wang, Shuai; Sun, Huayan; Guo, Huichao
2018-01-01
Multi-view point cloud registration for non-cooperative moving targets is a key technology in 3D reconstruction by laser three-dimensional imaging. The main problem is that point cloud density changes greatly and noise is present under different acquisition conditions. In this paper, a feature descriptor is first used to find the most similar point cloud. Then, in a registration algorithm based on region segmentation, the geometric structure of each point is extracted from point-to-point geometric similarity and the point cloud is divided into regions by spectral clustering. Feature descriptors are created for each region and used to search for the most similar regions in the most similar viewpoint's cloud, and the pair of point clouds is then aligned by aligning their minimum bounding boxes. These steps are repeated until the registration of all point clouds is completed. Experiments show that this method is insensitive to point cloud density and performs well under the noise of laser three-dimensional imaging.
You, Xiangwei; Wang, Suli; Liu, Fengmao; Shi, Kaiwei
2013-07-26
A novel ultrasound-assisted surfactant-enhanced emulsification microextraction technique based on the solidification of a floating organic droplet, followed by high performance liquid chromatography with diode array detection, was developed for the simultaneous determination of six fungicide residues in juices and red wine samples. The low-toxicity solvent 1-dodecanol was used as the extraction solvent. Owing to its low density and melting point near room temperature, the extractant droplet could be collected easily by solidifying it at a low temperature. The surfactant Tween 80 was used as an emulsifier to enhance the dispersion of the water-immiscible extraction solvent into the aqueous phase, which hastened the mass transfer of the analytes. The organic dispersive solvent typically required in common dispersive liquid-liquid microextraction methods was not used in the proposed method. Parameters that affect the extraction efficiency (e.g., the type and volume of extraction solvent, the type and concentration of surfactant, ultrasound extraction time, salt addition, and sample volume) were optimized. The proposed method showed good linearity within the range of 5-1000 μg L(-1), with correlation coefficients (γ) higher than 0.9969. The limits of detection for the method ranged from 0.4 μg L(-1) to 1.4 μg L(-1). Further, this simple, practical, sensitive, and environmentally friendly method was successfully applied to determine the target fungicides in juice and red wine samples. The recoveries of the target fungicides in red wine and fruit juice samples were 79.5%-113.4%, with relative standard deviations ranging from 0.4% to 12.3%. Copyright © 2013 Elsevier B.V. All rights reserved.
Plant extract: a promising biomatrix for ecofriendly, controlled synthesis of silver nanoparticles.
Borase, Hemant P; Salunke, Bipinchandra K; Salunkhe, Rahul B; Patil, Chandrashekhar D; Hallsworth, John E; Kim, Beom S; Patil, Satish V
2014-05-01
The use of plant extracts is more advantageous than chemical, physical and microbial (bacterial, fungal, algal) methods for silver nanoparticle (AgNP) synthesis. In phytonanosynthesis, the biochemical diversity of the plant extract, non-pathogenicity, low cost and flexibility in reaction parameters account for the high rate of AgNP production with different shapes, sizes and applications. At the same time, care has to be taken to select a suitable phytofactory for AgNP synthesis based on parameters such as easy availability, large-scale nanosynthesis potential and the non-toxic nature of the plant extract. This review focuses on the synthesis of AgNPs with particular emphasis on biological synthesis using plant extracts. Guidance is given on the selection of plant extracts for AgNP synthesis, together with case studies of AgNP synthesis using different plant extracts. Reaction parameters contributing to higher nanoparticle yields are presented. Synthesis mechanisms and an overview of present and future applications of plant-extract-synthesized AgNPs are also discussed, and the limitations associated with the use of AgNPs are summarised.
New DTM Extraction Approach from Airborne Images Derived DSM
NASA Astrophysics Data System (ADS)
Mousa, Y. A.; Helmholz, P.; Belton, D.
2017-05-01
In this work, a new filtering approach is proposed for fully automatic Digital Terrain Model (DTM) extraction from Digital Surface Models (DSMs) derived from very high resolution airborne images. Our approach enhances the existing Multi-directional and Slope Dependent (MSD) DTM extraction algorithm by proposing more reliable parameters for the selection of ground pixels and for the pixelwise classification. To achieve this, four main steps are implemented. Firstly, 8 well-distributed scanlines are used to search for minima as ground points within a pre-defined filtering window size. These selected ground points are stored with their positions on a 2D surface to create a network of ground points. Then, an initial DTM is created using an interpolation method to fill the gaps in the 2D surface. Afterwards, a pixel-by-pixel comparison between the initial DTM and the original DSM is performed, classifying ground and non-ground pixels by applying a vertical height threshold. Finally, the pixels classified as non-ground are removed and the remaining holes are filled. The approach is evaluated using the Vaihingen benchmark dataset provided by the ISPRS working group III/4. The evaluation includes the comparison of our approach, denoted the Network of Ground Points (NGPs) algorithm, with the DTM created based on MSD as well as with a reference DTM generated from LiDAR data. The results show that our proposed approach outperforms the MSD approach.
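A condensed sketch of this pipeline, assuming the DSM is a 2D numpy array; for brevity only horizontal and vertical scanlines are searched (the paper uses 8 directions), and the window size and height threshold are illustrative:

```python
import numpy as np
from scipy.interpolate import griddata

def dtm_from_dsm(dsm, win=50, height_thr=2.0):
    """Toy DTM filter: scanline minima -> interpolated initial DTM ->
    height-threshold classification of ground vs non-ground pixels."""
    rows, cols = dsm.shape
    ground = []                                # (row, col, height) minima
    for r in range(rows):                      # search along rows
        for c0 in range(0, cols, win):
            c = c0 + int(np.argmin(dsm[r, c0:c0 + win]))
            ground.append((r, c, dsm[r, c]))
    for c in range(cols):                      # search along columns
        for r0 in range(0, rows, win):
            r = r0 + int(np.argmin(dsm[r0:r0 + win, c]))
            ground.append((r, c, dsm[r, c]))
    pts = np.array(ground)

    # Initial DTM: interpolate the network of ground points over the gaps.
    grid_r, grid_c = np.mgrid[0:rows, 0:cols]
    dtm0 = griddata(pts[:, :2], pts[:, 2], (grid_r, grid_c), method="linear")

    # Pixels rising above the initial DTM by more than the vertical
    # threshold are non-ground and replaced by the interpolated height
    # (NaNs outside the interpolation hull simply keep their DSM value).
    nonground = (dsm - dtm0) > height_thr
    return np.where(nonground, dtm0, dsm)
```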
Light extraction block with curved surface
Levermore, Peter; Krall, Emory; Silvernail, Jeffrey; Rajan, Kamala; Brown, Julia J.
2016-03-22
Light extraction blocks, and OLED lighting panels using light extraction blocks, are described in which the light extraction blocks include various curved shapes that provide improved light extraction properties compared to a parallel emissive surface, and a thinner form factor and better light extraction than a hemisphere. Lighting systems described herein may include a light source with an OLED panel. A light extraction block with a three-dimensional light emitting surface may be optically coupled to the light source. The three-dimensional light emitting surface of the block may include a substantially curved surface, with further characteristics related to the curvature of the surface at given points. A first radius of curvature corresponding to a maximum principal curvature k1 at a point p on the substantially curved surface may be greater than the maximum height of the light extraction block. The maximum height of the light extraction block may be less than 50% of its maximum width. Surfaces with cross sections made up of line segments and inflection points may also be fit to approximating curves for calculating the radius of curvature.
Study of in vitro antimicrobial and antiproliferative activities of selected Saharan plants.
Palici, Ionut F; Liktor-Busa, Erika; Zupkó, István; Touzard, Blaise; Chaieb, Mohamed; Urbán, Edit; Hohmann, Judit
2015-12-01
The aim of the present study was the evaluation of the antimicrobial and antiproliferative activities of selected Saharan species, which are used in traditional medicine but have not been studied thoroughly from a chemical and pharmacological point of view. The studied plants, namely Anthyllis henoniana, Centropodia forskalii, Cornulaca monacantha, Ephedra alata var. alenda, Euphorbia guyoniana, Helianthemum confertum, Henophyton deserti, Moltkiopsis ciliata and Spartidium saharae, were collected from remote areas of North Africa, especially from the Tunisian region of the Sahara. After drying and applying the appropriate extraction methods, the plant extracts were tested in an antimicrobial screening assay performed on 19 Gram-positive and -negative strains of microbes. The inhibition zones produced by the plant extracts were determined by the disc-diffusion method. Remarkable antibacterial activities were exhibited by extracts of Ephedra alata var. alenda and Helianthemum confertum against B. subtilis, M. catarrhalis and methicillin-resistant and non-resistant S. aureus. Minimum inhibitory concentrations of these two species were also determined. Antiproliferative effects of the extracts were evaluated against 4 human adherent cell lines (HeLa, A431, A2780 and MCF7). Notable cell growth inhibition was found for the extracts of Helianthemum confertum and Euphorbia guyoniana. Our results provide data for selecting plant species for further detailed pharmacological and phytochemical examination.
Quispe-Fuentes, Issis; Vega-Gálvez, Antonio; Campos-Requena, Víctor H.
2017-01-01
The optimum conditions for antioxidant extraction from maqui berry were determined using response surface methodology. A three-level D-optimal design was used to investigate the effects of three independent variables, namely solvent type (methanol, acetone and ethanol), solvent concentration and extraction time, on total antioxidant capacity measured by the oxygen radical absorbance capacity (ORAC) method. The D-optimal design comprised 42 experiments including 10 central-point replicates. A second-order polynomial model explained more than 89% of the variation with satisfactory prediction (78%). ORAC values were higher when acetone was used as the solvent at lower concentrations, and extraction time showed no significant influence on ORAC values within the range studied. The optimal conditions for antioxidant extraction were 29% acetone for 159 min under agitation. From the results obtained it can be concluded that the predictive model describes the antioxidant extraction process from maqui berry.
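A minimal sketch of fitting a second-order response-surface model of this kind with scikit-learn; the design matrix and ORAC responses below are placeholders, not the study's data (solvent type would enter as coded dummy variables):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Placeholder design: columns = solvent concentration (%) and time (min).
X = np.array([[20, 60], [20, 160], [40, 60], [40, 160], [30, 110],
              [30, 110], [30, 60], [30, 160], [20, 110], [40, 110]], float)
y = np.array([310, 330, 290, 295, 350, 345, 332, 338, 341, 305], float)

# Second-order polynomial model: linear, interaction and quadratic terms.
quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)
print("R^2 =", model.score(quad.transform(X), y))

# Locate the predicted optimum on a grid over the experimental region.
g = np.array([[c, t] for c in np.linspace(20, 40, 41)
              for t in np.linspace(60, 160, 101)])
best = g[np.argmax(model.predict(quad.transform(g)))]
print("predicted optimum (conc %, time min):", best)
```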
Unsupervised Detection of Planetary Craters by a Marked Point Process
NASA Technical Reports Server (NTRS)
Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.
2011-01-01
With the launch of several planetary missions in the last decade, a large amount of planetary imagery is being acquired. Because of the huge volume of acquired data, automatic and robust processing techniques are preferred for data analysis. Here, the aim is to achieve a robust and general methodology for crater detection, and a novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses; i.e., the contour image is considered a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications; one such application area is image registration by matching the extracted features.
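A minimal sketch of the first stage described above, extracting contours and candidate ellipses with OpenCV; the full marked-point-process energy minimization via RJMCMC is beyond a short sketch, and the image path is a placeholder:

```python
import cv2

img = cv2.imread("planet_surface.png", cv2.IMREAD_GRAYSCALE)  # placeholder
edges = cv2.Canny(img, 50, 150)

# Contours become candidate object boundaries; each sufficiently large
# contour is summarized by a fitted ellipse (the "marks" of the process).
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
candidates = [cv2.fitEllipse(c) for c in contours if len(c) >= 5]

for (cx, cy), (w, h), angle in candidates:
    print(f"ellipse at ({cx:.0f},{cy:.0f}), axes {w:.0f}x{h:.0f}, {angle:.0f} deg")
```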
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torcellini, Paul A.; Bonnema, Eric; Goldwasser, David
Building energy consumption can only be measured at the site or at the point of utility interconnection with a building. Often, to evaluate the total energy impact, this site-based energy consumption is translated into source energy, that is, the energy at the point of fuel extraction. Consistent with this approach, the U.S. Department of Energy's (DOE) definition of zero energy buildings uses source energy as the metric to account for energy losses from the extraction, transformation, and delivery of energy. Other organizations, as well, use source energy to characterize the energy impacts. Four methods of making the conversion from site energy to source energy were investigated in the context of the DOE definition of zero energy buildings. These methods were evaluated based on three guiding principles: improve energy efficiency, reduce and stabilize power demand, and use power from nonrenewable energy sources as efficiently as possible. This study examines relative trends between strategies as they are implemented on very low-energy buildings to achieve zero energy. A typical office building was modeled and variations to this model performed. The photovoltaic output that was required to create a zero energy building was calculated. Trends were examined with these variations to study the impacts of the calculation method on the building's ability to achieve zero energy status. The paper will highlight the different methods and give conclusions on the advantages and disadvantages of the methods studied.
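A minimal illustration of one site-to-source conversion approach, assuming fixed fuel-specific source-energy factors; the factor values below are typical published national averages used purely as placeholders:

```python
# Assumed site-to-source conversion factors (illustrative national averages;
# actual factors depend on the method and region under study).
SOURCE_FACTORS = {"electricity": 3.15, "natural_gas": 1.09}

def source_energy(site_use_kbtu: dict) -> float:
    """Total source energy: each fuel's site energy scaled to include
    losses from extraction, transformation and delivery."""
    return sum(SOURCE_FACTORS[fuel] * kbtu for fuel, kbtu in site_use_kbtu.items())

# Office example: net electricity after on-site PV, plus natural gas use.
site = {"electricity": 1200.0, "natural_gas": 400.0}   # kBtu/yr, illustrative
print(f"source energy: {source_energy(site):.0f} kBtu/yr")
```

Under such a metric, a zero energy building needs enough exported PV generation to offset its fuel uses after each is weighted by its source factor.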
NASA Astrophysics Data System (ADS)
Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps in robotic motion estimation and largely influences precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For greater robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current synchronous left and right images and a Brute Force (BF) matcher is used to find the correspondences between the two images for space intersection. Then the EDC and RANSAC algorithms are applied to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when feature points in the left image at the next time step are matched with those in the current left image, EDC and RANSAC are iteratively performed. In some cases a few mismatched points still remain, so RANSAC is applied a third time to eliminate the effect of those outliers in the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset and the results confirm its high robustness.
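A minimal OpenCV sketch of this matching-and-outlier-rejection chain: ORB features, brute-force Hamming matching, a Euclidean-distance cut standing in for the paper's EDC, then RANSAC via fundamental-matrix estimation; the file paths and thresholds are placeholders:

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(left, None)
kp2, des2 = orb.detectAndCompute(right, None)

# Brute-force matching on binary descriptors with cross-checking.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)

p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
p2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Euclidean distance constraint: drop correspondences whose pixel
# displacement exceeds a threshold (a stand-in for the paper's EDC).
disp = np.linalg.norm(p1 - p2, axis=1)
keep = disp < 100.0
p1, p2 = p1[keep], p2[keep]

# RANSAC: keep only matches consistent with the epipolar geometry.
F, inliers = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.999)
p1, p2 = p1[inliers.ravel() == 1], p2[inliers.ravel() == 1]
print(f"{len(p1)} inlier correspondences")
```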
Jelicic Kadic, Antonia; Vucic, Katarina; Dosenovic, Svjetlana; Sapunar, Damir; Puljak, Livia
2016-06-01
To compare the speed and accuracy of graphical data extraction using manual estimation and open source software. Data points from eligible graphs/figures published in randomized controlled trials (RCTs) from 2009 to 2014 were extracted by two authors independently, both by manual estimation and with Plot Digitizer, an open source software tool. Corresponding authors of each RCT were contacted up to four times via e-mail to obtain the exact numbers that were used to create the graphs. The accuracy of each method was compared against the source data from which the original graphs were produced. Software data extraction was significantly faster, reducing extraction time by 47%. Percent agreement between the two raters was 51% for manual and 53.5% for software data extraction. Percent agreement between the raters and the original data was 66% vs. 75% for the first rater and 69% vs. 73% for the second rater, for manual and software extraction, respectively. Data extraction from figures should be conducted using software, whereas manual estimation should be avoided. Using software to extract data presented only in figures is faster and enables higher interrater reliability. Copyright © 2016 Elsevier Inc. All rights reserved.
Gupta, Shikha; Shanker, Karuna; Srivastava, Santosh K
2012-07-01
A new validated high-performance thin-layer chromatographic (HPTLC) method has been developed for the simultaneous quantitation of four antipsychotic indole alkaloids (IAs), reserpiline (RP, 1), α-yohimbine (YH, 2), isoreserpiline (IRP, 3) and 10-methoxy tetrahydroalstonine (MTHA, 4), as markers in the leaves of Rauwolfia tetraphylla. The extraction efficiency of the targeted IAs from the leaf matrix with organic and ecofriendly (green) solvents using percolation, ultrasonication and microwave techniques was studied. Non-ionic surfactants, viz. Triton X-100, Triton X-114 and Genapol X-80, were used for extraction, and no back-extraction or liquid chromatographic steps were needed to remove the targeted IAs from the surfactant-rich extractant phase. The optimized cloud point extraction was found to be a potentially useful methodology for the preconcentration of the targeted IAs. The separation was achieved on silica gel 60F(254) HPTLC plates using hexane-ethylacetate-methanol (5:4:1, v/v/v) as the mobile phase. The quantitation of IAs (1-4) was carried out using the densitometric reflection/absorption mode at 520 nm after post-chromatographic derivatization using Dragendorff's reagent. The method was validated for peak purity, precision, accuracy, robustness, limit of detection (LOD) and limit of quantitation (LOQ). Method specificity was confirmed using the retention factor (R(f)) and visible spectral (post-chromatographic scan) correlation of the marker compounds in the sample and standard tracks. Copyright © 2012 Elsevier B.V. All rights reserved.
Taheri, Salman; Jalali, Fahimeh; Fattahi, Nazir; Jalili, Ronak; Bahrami, Gholamreza
2015-10-01
Dispersive liquid-liquid microextraction based on solidification of a floating organic droplet was developed for the extraction of methadone and its determination by high-performance liquid chromatography with UV detection. In this method, no microsyringe or fiber is required to support the organic microdrop, owing to the use of an organic solvent with a low density and an appropriate melting point; furthermore, the extractant droplet can be collected easily by solidifying it at low temperature. 1-Undecanol and methanol were chosen as the extraction and disperser solvents, respectively. Parameters that influence extraction efficiency, i.e. the volumes of the extracting and dispersing solvents, pH, and salt effect, were optimized using response surface methodology. Under optimal conditions, the enrichment factor for methadone was 134 and 160 in serum and urine samples, respectively. The limit of detection was 3.34 ng/mL in serum and 1.67 ng/mL in urine samples. Compared with traditional dispersive liquid-liquid microextraction, the proposed method achieved a lower limit of detection. Moreover, the solidification of the floating organic solvent facilitated the phase transfer and, most importantly, avoided the high-density, toxic solvents used in the traditional dispersive liquid-liquid microextraction method. The proposed method was successfully applied to the determination of methadone in serum and urine samples from an addicted individual under methadone therapy. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Topic Transition in Educational Videos Using Visually Salient Words
ERIC Educational Resources Information Center
Gandhi, Ankit; Biswas, Arijit; Deshmukh, Om
2015-01-01
In this paper, we propose a visual saliency algorithm for automatically finding the topic transition points in an educational video. First, we propose a method for assigning a saliency score to each word extracted from an educational video. We design several mid-level features that are indicative of visual saliency. The optimal feature combination…
Xu, Lijun; Chen, Lulu; Li, Xiaolu; He, Tao
2014-10-01
In this paper, we propose a projective rectification method, using projection profile features and cross-ratio invariability, for infrared images obtained from the measurement of the temperature distribution on an air-cooled condenser (ACC) surface. The infrared (IR) images acquired by the four IR cameras used are distorted to different degrees. To rectify the distorted IR images, the acquired images are first enlarged by means of bicubic interpolation. Then, uniformly distributed control points are extracted in the enlarged images by constructing quadrangles with detected vertical lines and detected or constructed horizontal lines. The corresponding control points in the anticipated undistorted IR images are extracted by using projection profile features and cross-ratio invariability. Finally, a third-order polynomial rectification model is established and the coefficients of the model are computed from the mapping relationship between the control points in the distorted and anticipated undistorted images. Experimental results obtained from an industrial ACC unit show that the proposed method performs much better than any previous method we have adopted. Furthermore, all rectified images are stitched together to obtain a complete image of the whole ACC surface with a much higher spatial resolution than that obtained by using a single camera, which is not only useful but also necessary for more accurate and comprehensive analysis of ACC performance and more reliable optimization of ACC operations.
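A minimal sketch of fitting a third-order polynomial rectification model from control-point pairs with least squares; the control points here are synthetic placeholders:

```python
import numpy as np

def poly3_terms(x, y):
    """All monomials x^i * y^j with i + j <= 3 (10 terms)."""
    return np.stack([x**i * y**j
                     for i in range(4) for j in range(4 - i)], axis=1)

def fit_rectification(src, dst):
    """Least-squares coefficients mapping distorted (src) control points
    to undistorted (dst) coordinates, one cubic polynomial per axis."""
    A = poly3_terms(src[:, 0], src[:, 1])
    cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return cx, cy

def apply_rectification(cx, cy, pts):
    A = poly3_terms(pts[:, 0], pts[:, 1])
    return np.stack([A @ cx, A @ cy], axis=1)

# Synthetic control points: a mildly distorted grid (illustrative only).
gx, gy = np.meshgrid(np.linspace(0, 100, 6), np.linspace(0, 100, 6))
dst = np.stack([gx.ravel(), gy.ravel()], axis=1)
src = dst + 0.002 * dst**2                     # toy distortion
cx, cy = fit_rectification(src, dst)
print(np.abs(apply_rectification(cx, cy, src) - dst).max())  # ~0
```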
NASA Astrophysics Data System (ADS)
Altunay, Nail
2018-01-01
The current study reports, for the first time, the development of a new analytical method employing ultrasound assisted-cloud point extraction (UA-CPE) for the extraction of CH3Hg+ and Hg2+ species from fish samples. Detection and quantification of the mercury species were performed at 550 nm by spectrophotometry. The analytical variables affecting complex formation and extraction efficiency were extensively evaluated and optimized by a univariate method. Because thiophene-2,5-dicarboxylic acid (H2TDC) behaves 14-fold more sensitively and selectively toward Hg2+ ions than toward CH3Hg+ in the presence of the mixed surfactants Tween 20 and SDS at pH 5.0, the amounts of free Hg2+ and total Hg were established spectrophotometrically at 550 nm by monitoring Hg2+ in fish samples pretreated and extracted in an ultrasonic bath (to speed up extraction) using a diluted acid mixture (1:1:1, v/v, 4 mol L-1 HNO3, 4 mol L-1 HCl, and 0.5 mol L-1 H2O2), before and after pre-oxidation with permanganate in acidic media. The amount of CH3Hg+ was calculated from the difference between the total Hg and Hg2+ amounts. The UA-CPE method proved suitable for the extraction and determination of mercury species in certified reference materials: the results were in good agreement (Student's t-test at the 95% confidence limit) with the certified values, and the relative standard deviation was lower than 3.2%. The limits of detection were 0.27 and 1.20 μg L-1 for Hg2+ from aqueous calibration solutions and from matrix-matched calibration solutions spiked before digestion, respectively, and 2.43 μg L-1 for CH3Hg+ from matrix-matched calibration solutions. No significant matrix effect was observed on comparing the slopes of the two calibration curves representing the sample matrix. The method was applied to fish samples for speciation analysis of Hg2+ and CH3Hg+: total Hg was detected in the range of 2.42-32.08 μg kg-1, with 0.7-11.06 μg kg-1 present as CH3Hg+ and 1.72-24.56 μg kg-1 as Hg2+.
Determination of total selenium in food samples by d-CPE and HG-AFS.
Wang, Mei; Zhong, Yizhou; Qin, Jinpeng; Zhang, Zehua; Li, Shan; Yang, Bingyi
2017-07-15
A dual-cloud point extraction (d-CPE) procedure was developed for the simultaneous preconcentration and determination of trace-level Se in food samples by hydride generation-atomic fluorescence spectrometry (HG-AFS). The Se(IV) was complexed with ammonium pyrrolidinedithiocarbamate (APDC) in a Triton X-114 surfactant-rich phase, which was then treated with a mixture of 16% (v/v) HCl and 20% (v/v) H2O2. This converted the Se(IV)-APDC into free Se(IV), which was back-extracted into an aqueous phase at the second cloud point extraction stage. This aqueous phase was analyzed directly by HG-AFS. Optimization of the experimental conditions gave a limit of detection of 0.023 μg L-1 with an enhancement factor of 11.8 when 50 mL of sample solution was preconcentrated to 3 mL. The relative standard deviation was 4.04% (c = 6.0 μg L-1, n = 10). The proposed method was applied to determine the Se contents in twelve food samples with satisfactory recoveries of 95.6-105.2%. Copyright © 2016 Elsevier Ltd. All rights reserved.
Method for accurate growth of vertical-cavity surface-emitting lasers
Chalmers, Scott A.; Killeen, Kevin P.; Lear, Kevin L.
1995-01-01
We report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, we can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%.
Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area
Im, Jun-Hyuck; Im, Sung-Hyuck; Jee, Gyu-In
2016-01-01
Tall buildings are concentrated in urban areas. The outer walls of buildings rise vertically from the ground and are almost flat, so the vertical corners where vertical planes meet are present everywhere in urban areas. These corners act as convenient landmarks, which can be extracted using a light detection and ranging (LIDAR) sensor. A vertical corner feature based precise vehicle localization method is proposed in this paper and implemented using 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increments output from the iterative closest point (ICP) algorithm, based on the geometric relations between the scan data of the 3D LIDAR. Vertical corners are extracted using the proposed corner extraction method, and the vehicle position is then corrected by matching a prebuilt corner map with the extracted corners. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m. PMID:27517936
n-SIFT: n-dimensional scale invariant feature transform.
Cheung, Warren; Hamarneh, Ghassan
2009-09-01
We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.
Automated feature detection and identification in digital point-ordered signals
Oppenlander, Jane E.; Loomis, Kent C.; Brudnoy, David M.; Levy, Arthur J.
1998-01-01
A computer-based automated method to detect and identify features in digital point-ordered signals. The method is used for processing non-destructive test signals, such as eddy current signals obtained from calibration standards. The signals are first automatically processed to remove noise and to determine a baseline. Next, features are detected in the signals using mathematical morphology filters. Finally, the features are verified using an expert system of pattern recognition methods and geometric criteria. The method has the advantage that standard features can be located without prior knowledge of the number or sequence of the features. Further advantages are that standard features can be differentiated from irrelevant signal features such as noise, and detected features are automatically verified by parameters extracted from the signals. The method proceeds fully automatically, without initial operator set-up and without subjective operator feature judgement.
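A minimal sketch of detecting peak-like features in a point-ordered signal with a morphological (top-hat) filter after noise and baseline removal, in the spirit of the pipeline above; the signal, window sizes and threshold are illustrative:

```python
import numpy as np
from scipy.ndimage import grey_opening, median_filter

rng = np.random.default_rng(1)
t = np.arange(1000)
signal = (0.002 * t                                   # slow baseline drift
          + np.exp(-0.5 * ((t - 300) / 8.0) ** 2)     # feature 1
          + np.exp(-0.5 * ((t - 700) / 8.0) ** 2)     # feature 2
          + rng.normal(0, 0.03, t.size))              # noise

smoothed = median_filter(signal, size=5)              # noise removal
# White top-hat: subtracting the morphological opening removes the slowly
# varying baseline and leaves the narrow positive features.
tophat = smoothed - grey_opening(smoothed, size=51)

hits = np.flatnonzero(tophat > 0.5)
runs = np.split(hits, np.where(np.diff(hits) > 1)[0] + 1)
print("feature centers near:", [int(r.mean()) for r in runs if r.size])
```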
Liu, Bao; Fan, Xiaoming; Huo, Shengnan; Zhou, Lili; Wang, Jun; Zhang, Hui; Hu, Mei; Zhu, Jianhua
2011-12-01
A method was established to analyse overlapped chromatographic peaks based on the chromatographic-spectral data detected by a diode-array ultraviolet detector. In the method, the three-dimensional data are first de-noised and normalized; secondly, the differences and clustering of the spectra at different time points are calculated; then the purity of the whole chromatographic peak is analysed and the region is sought in which the spectra at different time points are stable. The feature spectra are extracted from this spectrum-stable region as the basic foundation. The nonnegative least-squares method is used to separate the overlapped peaks and obtain the flow curves based on the feature spectra. The separated three-dimensional chromatographic-spectral peaks can then be obtained by matrix operations combining the feature spectra with the flow curves. The results showed that this method could separate the overlapped peaks.
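A minimal sketch of the nonnegative least-squares separation step: given feature spectra S for k co-eluting components, solve D(t) ≈ C(t)·S row by row for nonnegative concentrations; the data here are synthetic Gaussians, not real chromatograms:

```python
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(200, 400, 120)                       # wavelengths (nm)
t = np.arange(100)                                    # time points

# Feature spectra of two components (rows of S), synthetic Gaussians.
S = np.stack([np.exp(-0.5 * ((wl - 260) / 15.0) ** 2),
              np.exp(-0.5 * ((wl - 310) / 20.0) ** 2)])

# True elution (flow) profiles and the overlapped data matrix D = C @ S.
C_true = np.stack([np.exp(-0.5 * ((t - 40) / 8.0) ** 2),
                   np.exp(-0.5 * ((t - 55) / 8.0) ** 2)], axis=1)
D = C_true @ S + np.random.default_rng(0).normal(0, 0.01, (t.size, wl.size))

# Recover the flow curves: one small NNLS problem per time point.
C = np.array([nnls(S.T, d)[0] for d in D])
print("recovered peak maxima at t =", C.argmax(axis=0))   # ~40 and ~55
```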
Threshold-adaptive canny operator based on cross-zero points
NASA Astrophysics Data System (ADS)
Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu
2018-03-01
Canny edge detection[1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed, and it has been widely applied in various computer vision systems. Two thresholds have to be set before edges are separated from the background; usually, two static values are chosen based on developer experience[2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
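For comparison, a widely used adaptive-threshold heuristic (not the cross-zero interpolation the paper proposes) derives both Canny thresholds from the image median; a minimal sketch, with a placeholder image path:

```python
import cv2
import numpy as np

def auto_canny(img_gray, sigma=0.33):
    """Canny with thresholds derived from the image median - a common
    heuristic baseline, not the cross-zero method proposed in the paper."""
    m = np.median(img_gray)
    lower = int(max(0, (1.0 - sigma) * m))
    upper = int(min(255, (1.0 + sigma) * m))
    return cv2.Canny(img_gray, lower, upper)

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
edges = auto_canny(img)
```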
SafePort Proposal - Henry Laboratory 2010
2013-11-14
with the high bonding temperature of glass and the melting point of the metals we used. As a result, we focused more on studies comparing PDMS, PMMA...strength samples. A method was developed for removing the majority of the interfering high-concentration species using solid phase extraction. Using the...These polymers were chosen because they are the most common polymers used in microfluidics and can be manufactured via a wide range of methods
Hanjabam, Mandakini Devi; Kannaiyan, Sathish Kumar; Kamei, Gaihiamngam; Jakhar, Jitender Kumar; Chouksey, Mithlesh Kumar; Gudipati, Venkateshwarlu
2015-02-01
Physical properties of gelatin extracted from the skin of Unicorn leatherjacket (Aluterus monoceros), which is generated as waste by fish processing industries, were optimised using Response Surface Methodology (RSM). A Box-Behnken design was used to study the combined effects of three independent variables, namely phosphoric acid (H3PO4) concentration (0.15-0.25 M), extraction temperature (40-50 °C) and extraction time (4-12 h), on different responses such as yield, gel strength and melting point of the gelatin. The optimum conditions derived by RSM for the yield (10.58%) were 0.2 M H3PO4, 9.01 h of extraction time and hot-water extraction at 45.83 °C. The maximum achieved gel strength and melting point were 138.54 g and 22.61 °C, respectively. Extraction time was found to be the most influential variable, with a positive coefficient for yield and negative coefficients for gel strength and melting point. The results indicate that Unicorn leatherjacket skins can be a source of gelatin with moderate gel strength and melting point.
Population Estimation in Singapore Based on Remote Sensing and Open Data
NASA Astrophysics Data System (ADS)
Guo, H.; Cao, K.; Wang, P.
2017-09-01
Population estimation statistics are widely used in government, commercial and educational sectors for a variety of purposes. With growing emphasis on real-time and detailed population information, data users nowadays have switched from traditional census data to more technology-based data sources such as LiDAR point clouds and high-resolution satellite imagery. Nevertheless, such data are costly and periodically unavailable. In this paper, the authors use West Coast District, Singapore as a case study to investigate the applicability and effectiveness of using satellite imagery from Google Earth for the extraction of building footprints and population estimation. At the same time, volunteered geographic information (VGI) is utilized as ancillary data for building footprint extraction; open data such as OpenStreetMap (OSM) can be employed to enhance the extraction process. In view of the challenges in building shadow extraction, this paper discusses several methods, including buffer, mask and shape index, to improve accuracy. It also illustrates population estimation methods based on building height and number-of-floor estimates. The results show that the accuracy of the housing unit method for population estimation can reach 92.5%, which is remarkably accurate. This paper thus provides insights into techniques for building extraction and fine-scale population estimation, which will benefit users such as urban planners in the policymaking and urban planning of Singapore.
Self-position estimation using terrain shadows for precise planetary landing
NASA Astrophysics Data System (ADS)
Kuga, Tomoki; Kojima, Hirohisa
2018-07-01
In recent years, the investigation of moons and planets has attracted increasing attention in several countries. Furthermore, recently developed landing systems are now expected to reach more scientifically interesting areas close to hazardous terrain, requiring precise landing capabilities within 100 m of the target point. To achieve this, terrain-relative navigation, capable of estimating the position of a lander relative to the target point on the ground surface, is actively being studied as an effective method for achieving highly accurate landings. This paper proposes a self-position estimation method using shadows on the terrain, based on edge extraction with image processing algorithms. The effectiveness of the proposed method is validated through numerical simulations using images generated from a digital elevation model of simulated terrain.
Kassem, Mohammed A; Amin, Alaa S
2015-02-05
A new method to determine rhodium in different samples at trace levels has been developed. Rhodium was complexed with 5-(4'-nitro-2',6'-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) as a complexing agent in an aqueous medium and concentrated using Triton X-114 as a surfactant. The rhodium complex was preconcentrated by a cloud point extraction process using the nonionic surfactant Triton X-114 to extract the complex from aqueous solutions at pH 4.75. After phase separation at 50 °C, the surfactant-rich phase was heated again at 100 °C to remove water after decantation, and the remaining phase was dissolved in 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear over the concentration range of 0.5-75 ng mL(-1) and the detection limit was 0.15 ng mL(-1) of the original solution. An enhancement factor of 500 was achieved for 250 mL samples containing the analyte, and relative standard deviations were ⩽1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical, and was safely applied to rhodium determination in different complex materials such as synthetic alloy mixtures and environmental water samples. Copyright © 2014 Elsevier B.V. All rights reserved.
A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection
NASA Astrophysics Data System (ADS)
Tomono, Akira; Iida, Muneo; Kobayashi, Yukio
1990-04-01
This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image and dot-marks pasted on a human face in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing the pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths; one light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism, forming one image that includes the regularly reflected component (through a polarizing filter in front of CCD-1) and another image that does not (no polarizing filter in front of CCD-2). Thus, three images with different reflection characteristics are obtained by the three CCDs. Experiments show that two kinds of subtraction operations between the three images output from the CCDs accentuate the three kinds of feature points: the pupil and corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding and gravity position calculation of the feature points is possible.
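A minimal sketch of the accentuate-by-subtraction idea: subtract two co-registered images with different reflection characteristics, threshold the difference, and take the centroid (gravity position) of each blob as a feature point; the image arrays below are synthetic placeholders:

```python
import numpy as np
from scipy import ndimage

def feature_points(img_a, img_b, thresh=40):
    """Subtract two co-registered images so only the feature (e.g., the
    pupil or a corneal reflection) remains, then return blob centroids."""
    diff = img_a.astype(np.int16) - img_b.astype(np.int16)
    mask = diff > thresh                          # simple global threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

# Placeholder frames: a bright 'pupil' present in one image only.
a = np.zeros((120, 160), np.uint8)
a[50:60, 70:82] = 200
b = np.zeros_like(a)
print(feature_points(a, b))    # ~[(54.5, 75.5)]
```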
Hartmann, Georg; Schuster, Michael
2013-01-25
The determination of metallic nanoparticles in environmental samples requires sample pretreatment that ideally combines pre-concentration and species selectivity. With cloud point extraction (CPE) using the surfactant Triton X-114 we present a simple and cost effective separation technique that meets both criteria. Effective separation of ionic gold species and Au nanoparticles (Au-NPs) is achieved by using sodium thiosulphate as a complexing agent. The extraction efficiency for Au-NP ranged from 1.01 ± 0.06 (particle size 2 nm) to 0.52 ± 0.16 (particle size 150 nm). An enrichment factor of 80 and a low limit of detection of 5 ng L(-1) is achieved using electrothermal atomic absorption spectrometry (ET-AAS) for quantification. TEM measurements showed that the particle size is not affected by the CPE process. Natural organic matter (NOM) is tolerated up to a concentration of 10 mg L(-1). The precision of the method expressed as the standard deviation of 12 replicates at an Au-NP concentration of 100 ng L(-1) is 9.5%. A relation between particle concentration and the extraction efficiency was not observed. Spiking experiments showed a recovery higher than 91% for environmental water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Dağdeviren, Semahat; Altunay, Nail; Sayman, Yasin; Gürkan, Ramazan
2018-07-30
The study developed a new method for proline detection in honey, wine and fruit juice using ultrasound assisted-cloud point extraction (UA-CPE) and spectrophotometry. Initially, a quaternary complex containing proline, histamine, Cu(II), and fluorescein was formed at pH 5.5. Samples were treated with an ethanol-water mixture before extraction and preconcentration, using an ultrasonic bath for 10 min at 40 °C (40 kHz, 300 W). After optimization of the variables affecting extraction efficiency, good linearity was obtained between 15 and 600 µg L-1 with a sensitivity enhancement factor of 105. The limits of detection and quantification were 5.7 and 19.0 µg L-1, respectively. The recovery percentages and relative standard deviations (RSD %) were between 95.3 and 103.3%, and 2.5 and 4.2%, respectively. The accuracy of the method was verified by the analysis of a standard reference material (SRM 2389a). Copyright © 2018 Elsevier Ltd. All rights reserved.
Zhao, Lingling; Zhong, Shuxian; Fang, Keming; Qian, Zhaosheng; Chen, Jianrong
2012-11-15
A dual-cloud point extraction (d-CPE) procedure has been developed for the simultaneous pre-concentration and separation of heavy metal ions (Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+) in water samples by inductively coupled plasma optical emission spectrometry (ICP-OES). The procedure is based on forming complexes of the metal ions with 8-hydroxyquinoline (8-HQ) in the as-formed Triton X-114 surfactant-rich phase. Instead of direct injection or analysis, the surfactant-rich phase containing the complexes was treated with nitric acid, and the ions were back-extracted into an aqueous phase at the second cloud point extraction stage, and finally determined by ICP-OES. Under the optimum conditions (pH = 7.0, Triton X-114 = 0.05% (w/v), 8-HQ = 2.0×10(-4) mol L(-1), HNO3 = 0.8 mol L(-1)), the detection limits for Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ were 0.01, 0.04, 0.01, 0.34, 0.05, and 0.04 μg L(-1), respectively. Relative standard deviation (RSD) values for 10 replicates at 100 μg L(-1) were lower than 6.0%. The proposed method was successfully applied to the determination of Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ in water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Computerized breast parenchymal analysis on DCE-MRI
NASA Astrophysics Data System (ADS)
Li, Hui; Giger, Maryellen L.; Yuan, Yading; Jansen, Sanaz A.; Lan, Li; Bhooshan, Neha; Newstead, Gillian M.
2009-02-01
Breast density has been shown to be associated with the risk of developing breast cancer, and MRI has been recommended for screening high-risk women; however, it is still unknown how breast parenchymal enhancement on DCE-MRI is associated with breast density and breast cancer risk. Ninety-two DCE-MRI exams of asymptomatic women with normal MR findings were included in this study. The 3D breast volume was automatically segmented using a volume-growing-based algorithm. The extracted breast volume was classified into fibroglandular and fatty regions based on a discriminant analysis method. The parenchymal kinetic curves within the breast fibroglandular region were extracted and categorized by use of fuzzy c-means clustering, and various parenchymal kinetic characteristics were extracted from the most-enhancing voxels. Correlation analysis between the computer-extracted percent-dense measures and radiologist-noted BIRADS density ratings yielded a correlation coefficient of 0.76 (p<0.0001). From the kinetic analyses, 70% (64/92) of the most-enhancing curves showed a persistent curve type and reached peak parenchymal intensity at the last post-contrast time point, with 89% (82/92) of the most-enhancing curves reaching peak intensity at either the 4th or 5th post-contrast time point. Women with dense breasts (BIRADS 3 and 4) were found to have more parenchymal enhancement at their peak time point (Ep), with an average Ep of 116.5%, while women with fatty breasts (BIRADS 1 and 2) demonstrated an average Ep of 62.0%. In conclusion, breast parenchymal enhancement may be associated with breast density and may be potentially useful as an additional characteristic for assessing breast cancer risk.
Martin G. De Kauwe; Serbin, Shawn P.; Lin, Yan -Shih; ...
2015-12-31
Simulations of photosynthesis by terrestrial biosphere models typically need a specification of the maximum carboxylation rate (Vcmax). Estimating this parameter using A–Ci curves (net photosynthesis, A, vs intercellular CO2 concentration, Ci) is laborious, which limits the availability of Vcmax data. However, many multispecies field datasets include measurements of net photosynthetic rate at saturating irradiance and at ambient atmospheric CO2 concentration (Asat), from which Vcmax can be extracted using a 'one-point method'.
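A minimal sketch of the one-point idea (not necessarily the authors' exact formulation): assume photosynthesis at saturating light and ambient CO2 is Rubisco-limited and that day respiration Rday ≈ 0.015·Vcmax, then invert the Farquhar expression at the measured Asat and Ci. The kinetic constants below are illustrative 25 °C values and are assumptions here.

```python
def vcmax_one_point(Asat, Ci, gamma_star=42.75, Kc=404.9, Ko=278.4, O=210.0):
    """One-point estimate of Vcmax (umol m-2 s-1).

    Assumes Rubisco-limited photosynthesis and Rday ~ 0.015 * Vcmax.
    gamma_star and Kc in umol mol-1; Ko and O in mmol mol-1 (illustrative
    25 C values, not from the paper).
    """
    Km = Kc * (1.0 + O / Ko)  # effective Michaelis-Menten constant for CO2
    return Asat / ((Ci - gamma_star) / (Ci + Km) - 0.015)

# Hypothetical leaf measurement: Asat = 18 umol m-2 s-1 at Ci = 275 umol mol-1.
print(f"Vcmax ~ {vcmax_one_point(18.0, 275.0):.1f} umol m-2 s-1")
```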
Extension of the tridiagonal reduction (FEER) method for complex eigenvalue problems in NASTRAN
NASA Technical Reports Server (NTRS)
Newman, M.; Mann, F. I.
1978-01-01
As in the case of real eigenvalue analysis, the eigensolutions closest to a selected point in the eigenspectrum were extracted from a reduced, symmetric, tridiagonal eigenmatrix whose order was much lower than that of the full size problem. The reduction process was effected automatically, and thus avoided the arbitrary lumping of masses and other physical quantities at selected grid points. The statement of the algebraic eigenvalue problem admitted mass, damping, and stiffness matrices which were unrestricted in character, i.e., they might be real, symmetric or nonsymmetric, singular or nonsingular.
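Not NASTRAN code — a small SciPy sketch of the same underlying idea: extract the eigensolutions closest to a selected point of a large symmetric problem via a Lanczos (tridiagonal reduction) iteration with shift-invert. The matrix and shift are toy stand-ins.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# A large, sparse, symmetric stand-in for a structural stiffness-type matrix.
n = 2000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

shift = 0.5  # the "selected point" in the eigenspectrum
# eigsh runs a Lanczos (tridiagonal reduction) iteration; sigma enables
# shift-invert so the returned eigenvalues are those nearest the shift.
vals, vecs = eigsh(A, k=4, sigma=shift, which="LM")
print(np.sort(vals))
```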
Rapid matching of stereo vision based on fringe projection profilometry
NASA Astrophysics Data System (ADS)
Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei
2016-09-01
As the most important core part of stereo vision, stereo matching still presents many problems to solve. For smooth surfaces on which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system, based on fringe projection techniques: corresponding points are matched by requiring that the phases extracted from the left and right camera images be equal, realizing rapid stereo matching. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method not only broadens the application fields of optical 3D measurement technology and enriches knowledge in the field, but also opens the possibility of commercialized measurement systems for practical projects, which has scientific research significance and economic value.
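A toy numpy sketch of equal-phase correspondence (my illustration, not the paper's implementation): on a rectified image row the unwrapped fringe phase increases monotonically, so the matching right-image column for each left pixel can be found by inverting the right row's phase profile with interpolation. The phase rows below are synthetic.

```python
import numpy as np

def match_by_phase(phase_left_row, phase_right_row):
    """For each pixel in a rectified left-image row, find the subpixel
    column in the right-image row with the same unwrapped phase.
    Assumes phase increases monotonically along the row."""
    cols_right = np.arange(phase_right_row.size)
    # Invert the right row's phase profile: phase -> column, by interpolation.
    return np.interp(phase_left_row, phase_right_row, cols_right)

# Hypothetical monotonic phase rows (e.g., from phase shifting + unwrapping).
x = np.linspace(0, 1, 640)
phase_left = 40 * np.pi * x
phase_right = 40 * np.pi * (x + 0.01)   # shifted pattern: ~6.4 px disparity
disparity = np.arange(640) - match_by_phase(phase_left, phase_right)
print(disparity[320])                   # ~6.4 in this toy example
```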
Personal authentication using hand vein triangulation and knuckle shape.
Kumar, Ajay; Prathyusha, K Venkata
2009-09-01
This paper presents a new approach to authenticating individuals using triangulation of hand vein images and simultaneous extraction of knuckle shape information. The proposed method is fully automated and employs palm dorsal hand vein images acquired from low-cost, near-infrared, contactless imaging. The knuckle tips are used as key points for image normalization and extraction of the region of interest. The matching scores are generated in two parallel stages: (i) a hierarchical matching score from the four topologies of triangulation in the binarized vein structures and (ii) a score from the geometrical features consisting of knuckle point perimeter distances in the acquired images. The weighted score-level combination of these two matching scores is used to authenticate individuals. The experimental results achieved with the proposed system using contactless palm dorsal hand vein images are promising (equal error rate of 1.14%) and suggest a more user-friendly alternative for user identification.
Capturing Revolute Motion and Revolute Joint Parameters with Optical Tracking
NASA Astrophysics Data System (ADS)
Antonya, C.
2017-12-01
Optical tracking of users and of various technical systems is becoming more and more popular. It consists of analysing sequences of recorded images using video capturing devices and image processing algorithms. The returned data mainly contain point clouds, coordinates of markers, or coordinates of points of interest. These data can be used to retrieve information related to the geometry of the objects, but also to extract parameters for an analytical model of the system, useful in a variety of computer aided engineering simulations. The parameter identification of joints deals with the extraction of physical parameters (mainly geometric parameters) for the purpose of constructing accurate kinematic and dynamic models. The input data are the time series of the markers' positions. The least squares method was used for fitting the data to different geometrical shapes (ellipse, circle, plane) and for obtaining the position and orientation of revolute joints.
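A minimal sketch of the least-squares idea for a revolute joint (my illustration): a marker on a body rotating about a fixed axis traces a circle, so fitting a circle to the tracked positions recovers the joint position (center) and arm length (radius). The algebraic Kasa fit below solves this with one linear least-squares call; the marker track is synthetic.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to 2D marker positions.
    Solves x^2 + y^2 = 2ax + 2by + c linearly; returns (center, radius)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.array([a, b]), np.sqrt(c + a**2 + b**2)

# Hypothetical noisy marker track around a revolute joint at (0.3, -0.1).
t = np.linspace(0, 2 * np.pi, 200)
pts = np.column_stack([0.3 + 0.25 * np.cos(t), -0.1 + 0.25 * np.sin(t)])
pts += np.random.default_rng(0).normal(0, 0.002, pts.shape)
print(fit_circle(pts))  # center ~ (0.3, -0.1), radius ~ 0.25
```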
Kartal Temel, Nuket; Gürkan, Ramazan
2018-03-01
A novel ultrasound-assisted cloud point extraction method was developed for the preconcentration and determination of V(V) in beverage samples. After complexation with pyrogallol in the presence of safranin T at pH 6.0, V(V) ions are extracted as a ternary complex into the micellar phase of Triton X-114. The complex was monitored at 533 nm by spectrophotometry. The matrix effect on the recovery of V(V) from samples spiked at 50 μg L-1 was evaluated. Under optimized conditions, the limits of detection and quantification of the method were 0.58 and 1.93 μg L-1, respectively, in a linear range of 2-500 μg L-1, with sensitivity enhancement and preconcentration factors of 47.7 and 40 for preconcentration from 15 mL of sample solution. The recoveries from spiked samples were in the range of 93.8-103.2% with relative standard deviations ranging from 2.6% to 4.1% (25, 100 and 250 μg L-1, n: 5). The accuracy was verified by analysis of two certified samples, and the results were in good agreement with the certified values. The intra-day and inter-day precision were tested by reproducibility (3.3-3.4%) and repeatability (3.4-4.1%) analyses for five replicate measurements of V(V) in quality control samples spiked with 5, 10 and 15 μg L-1. Trace V(V) contents of the selected beverage samples were successfully determined by the developed method.
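As an aside, detection and quantification limits of this kind commonly follow the ICH-style convention LOD = 3.3σ/slope and LOQ = 10σ/slope, with σ the residual standard deviation of the calibration fit. A sketch with hypothetical calibration data (not the study's numbers):

```python
import numpy as np

# Hypothetical calibration: absorbance at 533 nm vs V(V) concentration (ug L-1).
conc = np.array([2, 10, 50, 100, 250, 500], dtype=float)
absorb = np.array([0.004, 0.021, 0.104, 0.209, 0.523, 1.041])

slope, intercept = np.polyfit(conc, absorb, 1)
resid = absorb - (slope * conc + intercept)
sigma = resid.std(ddof=2)          # residual standard deviation of the fit

lod = 3.3 * sigma / slope          # ICH-style limit of detection
loq = 10.0 * sigma / slope         # limit of quantification
print(f"LOD ~ {lod:.2f} ug/L, LOQ ~ {loq:.2f} ug/L")
```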
Delefortrie, Quentin; Schatt, Patricia; Grimmelprez, Alexandre; Gohy, Patrick; Deltour, Didier; Collard, Geneviève; Vankerkhoven, Patrick
2016-02-01
Although colonoscopy associated with histopathological sampling remains the gold standard in the diagnosis and follow-up of inflammatory bowel disease (IBD), calprotectin is becoming an essential biomarker in gastroenterology. The aim of this work is to compare a newly developed kit (Liaison® Calprotectin - Diasorin®) and its two distinct extraction protocols (weighing and extraction device protocols) with a well-established point-of-care test (Quantum Blue® - Bühlmann-Alere®) in terms of analytical performance and ability to detect relapses amongst a Crohn's population in follow-up. Stool specimens were collected over a six-month period from control and Crohn's patients. Amongst the Crohn's population, disease activity (active vs quiescent) was evaluated by gastroenterologists. A significant difference was found between all three procedures in terms of calprotectin measurements (weighing protocol=30.3μg/g (median); stool extraction device protocol=36.9μg/g (median); Quantum Blue® (median)=63; Friedman test, P value=0.05). However, a good correlation was found between both extraction methods coupled with the Liaison® analyzer and with the Quantum Blue® (weighing protocol/extraction device protocol Rs=0.844, P=0.01; Quantum Blue®/extraction device protocol Rs=0.708, P=0.01; Quantum Blue®/weighing protocol, Rs=0.808, P=0.01). Finally, optimal cut-offs (and associated negative predictive values - NPV) for detecting relapses were in accordance with the above results (Quantum Blue® 183.5μg/g and NPV of 100% > extraction device protocol+Liaison® analyzer 124.5μg/g and NPV of 93.5% > weighing protocol+Liaison® analyzer 106.5μg/g and NPV of 95%). Although all three methods correlated well and had relatively good NPVs for detecting relapses amongst a Crohn's population in follow-up, the lack of any international standard is the origin of the different optimal cut-offs between the three procedures. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
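The paired, nonparametric comparisons reported above map directly onto standard SciPy calls; a sketch with hypothetical paired calprotectin values (not the study's data):

```python
import numpy as np
from scipy.stats import friedmanchisquare, spearmanr

# Hypothetical paired calprotectin results (ug/g) for 8 stool samples
# measured by the three procedures compared in the study.
weighing = np.array([25, 31, 48, 90, 140, 210, 380, 620], dtype=float)
device   = np.array([30, 37, 55, 99, 152, 230, 410, 660], dtype=float)
quantum  = np.array([55, 70, 92, 150, 240, 330, 560, 900], dtype=float)

stat, p = friedmanchisquare(weighing, device, quantum)  # paired, nonparametric
rho, p_rho = spearmanr(weighing, quantum)               # rank correlation (Rs)
print(f"Friedman p={p:.3f}; Spearman rho={rho:.2f}")
```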
Pu, Jinji; Guo, Jianrong; Fan, Zaifeng
2014-01-01
Small RNAs, including microRNAs (miRNAs) and small interfering RNAs (siRNAs), are important regulators of plant development and gene expression. The acquisition of high-quality small RNAs is the first step in studying their expression and function, yet small RNA extraction methods for recalcitrant plant tissues with various secondary metabolites are not well established, especially for tropical and subtropical plant species rich in polysaccharides and polyphenols. Here, we developed a simple and efficient method for high-quality small RNA extraction from recalcitrant plant species. Prior to RNA isolation, a precursory step with a CTAB-PVPP buffer system could efficiently remove compounds and secondary metabolites interfering with RNAs from homogenized lysates. Then, total RNAs were extracted with Trizol reagent, followed by differential precipitation of high-molecular-weight (HMW) RNAs using polyethylene glycol (PEG) 8000. Finally, small RNAs could be easily recovered from the supernatant by ethanol precipitation without extra elimination steps. The small RNAs isolated from papaya showed high quality, with a clean background on gels and a distinct northern blotting signal with the miR159a probe, compared with other published protocols. Additionally, the small RNAs extracted from papaya were successfully used for validation of both predicted miRNAs and the putative conserved tasiARFs. Furthermore, the extraction method described here was also tested with several other subtropical and tropical plant tissues. The purity of the isolated small RNAs was sufficient for applications such as end-point stem-loop RT-PCR and northern blotting analysis. The simple and feasible extraction method reported here is expected to have excellent potential for isolation of small RNAs from recalcitrant plant tissues rich in polyphenols and polysaccharides. PMID:24787387
Extraction and analysis of cortisol from human and monkey hair.
Meyer, Jerrold; Novak, Melinda; Hamel, Amanda; Rosenberg, Kendra
2014-01-24
The stress hormone cortisol (CORT) is slowly incorporated into the growing hair shaft of humans, nonhuman primates, and other mammals. We developed and validated a method for CORT extraction and analysis from rhesus monkey hair and subsequently adapted this method for use with human scalp hair. In contrast to CORT "point samples" obtained from plasma or saliva, hair CORT provides an integrated measure of hypothalamic-pituitary-adrenocortical (HPA) system activity, and thus physiological stress, during the period of hormone incorporation. Because human scalp hair grows at an average rate of 1 cm/month, CORT levels obtained from hair segments several cm in length can potentially serve as a biomarker of stress experienced over a number of months. In our method, each hair sample is first washed twice in isopropanol to remove any CORT from the outside of the hair shaft that has been deposited from sweat or sebum. After drying, the sample is ground to a fine powder to break up the hair's protein matrix and increase the surface area for extraction. CORT from the interior of the hair shaft is extracted into methanol, the methanol is evaporated, and the extract is reconstituted in assay buffer. Extracted CORT, along with standards and quality controls, is then analyzed by means of a sensitive and specific commercially available enzyme immunoassay (EIA) kit. Readout from the EIA is converted to pg CORT per mg powdered hair weight. This method has been used in our laboratory to analyze hair CORT in humans, several species of macaque monkeys, marmosets, dogs, and polar bears. Many studies both from our lab and from other research groups have demonstrated the broad applicability of hair CORT for assessing chronic stress exposure in natural as well as laboratory settings.
Fu, Yili; Gao, Wenpeng; Chen, Xiaoguang; Zhu, Minwei; Shen, Weigao; Wang, Shuguo
2010-01-01
The reference system based on the fourth ventricular landmarks (including the fastigial point and the ventricular floor plane) is used in medical image analysis of the brain stem. The objective of this study was to develop a rapid, robust, and accurate method for the automatic identification of this reference system on T1-weighted magnetic resonance images. The fully automated method developed in this study consisted of four stages: preprocessing of the data set, expectation-maximization algorithm-based extraction of the fourth ventricle in the region of interest, a coarse-to-fine strategy for identifying the fastigial point, and localization of the base point. The method was evaluated qualitatively on 27 BrainWeb data sets and quantitatively on 18 Internet Brain Segmentation Repository data sets and 30 clinical scans. The results of the qualitative evaluation indicated that the method was robust to rotation, landmark variation, noise, and inhomogeneity. The results of the quantitative evaluation indicated that the method was able to identify the reference system with an accuracy of 0.7 +/- 0.2 mm for the fastigial point and 1.1 +/- 0.3 mm for the base point. It took <6 seconds for the method to identify the related landmarks on a personal computer with an Intel Core 2 6300 processor and 2 GB of random-access memory. The proposed method for the automatic identification of the reference system based on the fourth ventricular landmarks was shown to be rapid, robust, and accurate. The method has potential utility in image registration and computer-aided surgery.
3D local feature BKD to extract road information from mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang
2017-08-01
Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously degraded by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components to encode the shape and intensity information of the 3D point clouds, which are fed to a random forest classifier to extract curbs and markings on the road. These are then used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, showing that the BKD is accurate and robust against varying point density and noise.
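A loose sketch of the descriptor-plus-classifier pipeline (my simplification, not the authors' BKD): per-neighborhood height and intensity distributions are kernel-smoothed and binarized into a bit string, and a random forest is trained on the resulting descriptors. The data, bin counts, and binarization rule are all toy assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def binary_descriptor(z_vals, i_vals, bins=16, sigma=0.5):
    """Loose BKD-style descriptor for one neighborhood: kernel-smoothed
    histograms of height and intensity, binarized against their means."""
    bits = []
    for values in (z_vals, i_vals):
        hist, edges = np.histogram(values, bins=bins, range=(-4, 8), density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        w = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / sigma) ** 2)
        smooth = (w @ hist) / w.sum(axis=1)        # crude kernel density estimate
        bits.append((smooth > smooth.mean()).astype(np.uint8))  # binarization
    return np.concatenate(bits)

rng = np.random.default_rng(1)
X, y = [], []
for _ in range(300):
    label = int(rng.integers(0, 2))                # toy classes: 0=road, 1=curb
    c = 3.0 * label                                # class shifts both distributions
    X.append(binary_descriptor(rng.normal(c, 1, 200), rng.normal(c, 2, 200)))
    y.append(label)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(np.array(X), y)
print(f"training accuracy: {clf.score(np.array(X), y):.2f}")
```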
Determination of Prometryne in water and soil by HPLC-UV using cloud-point extraction.
Zhou, Jihai; Chen, Jiandong; Cheng, Yanhong; Li, Daming; Hu, Feng; Li, Huixin
2009-07-15
A CPE-HPLC (UV) method has been developed for the determination of Prometryne. In this method, the non-ionic surfactant Triton X-114 was first used to extract and pre-concentrate Prometryne from water and soil samples. The separation and determination of Prometryne were then carried out in an HPLC-UV system with isocratic elution using a detector set at 254 nm wavelength. The parameters and variables that affected the extraction were also investigated, and the optimal conditions were found to be 0.5% Triton X-114 (w/v), 3% NaCl (w/v) and heating at 50 degrees C for 30 min. Under these conditions, the recovery rates of Prometryne ranged from 92.84% to 99.23% in water and from 85.48% to 93.67% in soil, with all relative standard deviations less than 3.05%. The limit of detection (LOD) and limit of quantification (LOQ) were 3.5 microg L(-1) and 11.0 microg L(-1) in water and 4.0 microg kg(-1) and 13.0 microg kg(-1) in soil, respectively. Thus, we developed a method that has proven to be an efficient, green, rapid and inexpensive approach for the extraction and determination of Prometryne from soil samples.
Zhang, Dan; Park, Jin-A; Kim, Seong-Kwan; Cho, Sang-Hyun; Cho, Soo-Min; Shim, Jae-Han; Kim, Jin-Suk; Abd El-Aty, A M; Shin, Ho-Chul
2017-06-01
In this study, an analytical method was developed for the quantification of residues of the anthelmintic drug phenothiazine (PTZ) in pork muscle using liquid chromatography-tandem mass spectrometry. Muscles were extracted using 0.2% formic acid and 10 mM ammonium formate in acetonitrile, then defatted and purified using n-hexane. The drug was well separated on a Waters XBridge™ C18 analytical column using a binary solvent system consisting of 0.2% formic acid and 10 mM ammonium formate in ultrapure water (A) and acetonitrile (B). Good linearity was achieved over a six-point concentration range in matrix-matched calibration with a coefficient of determination of 0.9846. Fortified pork muscle at concentrations equivalent to and double the limit of quantification (1 ng/g) yielded recoveries between 100.82 and 104.03% and relative standard deviations <12%. Samples (n = 5) collected from large markets located in Seoul City tested negative for PTZ residue. In conclusion, 0.2% formic acid and ammonium formate in acetonitrile can effectively extract PTZ from pork muscle without solid-phase extraction, a cleanup step normally required before analysis, and the validated method can be used for routine analysis to ensure the quality of animal products. Copyright © 2016 John Wiley & Sons, Ltd.
Photogrammetric Method and Software for Stream Planform Identification
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.
2013-12-01
Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high-resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35 m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners and label control points. The GUI also extracts the marked pixel coordinates from the images. We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites. [Figure: three sample photographs of the square, with the created planform and control points.]
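A minimal sketch of the radial-distortion step (my illustration of a two-term Brown model, not the authors' MATLAB code): pixel coordinates are scaled away from the distortion center by 1 + k1·r² + k2·r⁴. The coefficients, center, and corner coordinates below are hypothetical.

```python
import numpy as np

def undistort_points(xy, k1, k2, center):
    """Correct pixel coordinates for radial lens distortion using a
    two-term Brown model: p_u = c + (p_d - c) * (1 + k1*r^2 + k2*r^4).
    Applied directly to distorted coordinates, a common first-order
    approximation."""
    d = xy - center
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return center + d * (1.0 + k1 * r2 + k2 * r2 ** 2)

# Hypothetical marked-up pixel coordinates of the square's four corners.
corners = np.array([[510.0, 380.0], [1490.0, 372.0],
                    [1502.0, 1350.0], [498.0, 1344.0]])
print(undistort_points(corners, k1=-1.2e-7, k2=3.0e-14,
                       center=np.array([1000.0, 700.0])))
```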
Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data
NASA Astrophysics Data System (ADS)
Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun
2014-11-01
Ground-based LiDAR is one of the most effective city modeling tools at present and has been widely used for three-dimensional reconstruction of outdoor objects. However, for indoor objects there are technical bottlenecks due to the lack of a GPS signal. In this paper, based on high-precision indoor point cloud data obtained by LiDAR with advanced indoor mobile measuring equipment, high-precision models were built for all indoor ancillary facilities. The point cloud data we employed also contain color features, extracted by fusion with CCD images. Thus, they carry both spatial geometric features and spectral information, which can be used for constructing the objects' surfaces and restoring the color and texture of the geometric model. Based on the Autodesk CAD platform and with the help of the PointSense plug-in, three-dimensional reconstruction of indoor whole elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud; then different types of indoor point cloud data were processed, including data format conversion, outline extraction and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world indoor scene was completed. Experimental results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for 3D reconstruction of indoor whole elements, and that the methods proposed in this paper can efficiently realize such reconstruction. Moreover, the modeling precision could be controlled within 5 cm, which is a satisfactory result.
NASA Astrophysics Data System (ADS)
Huynh, Benjamin Q.; Antropova, Natasha; Giger, Maryellen L.
2017-03-01
DCE-MRI datasets have a temporal aspect to them, resulting in multiple regions of interest (ROIs) per subject, based on contrast time points. It is unclear how the different contrast time points vary in terms of usefulness for computer-aided diagnosis tasks in conjunction with deep learning methods. We thus sought to compare the different DCE-MRI contrast time points with regard to how well their extracted features predict response to neoadjuvant chemotherapy within a deep convolutional neural network. Our dataset consisted of 561 ROIs from 64 subjects. Each subject was categorized as a non-responder or responder, determined by recurrence-free survival. First, features were extracted from each ROI using a convolutional neural network (CNN) pre-trained on non-medical images. Linear discriminant analysis classifiers were then trained on varying subsets of these features, based on their contrast time points of origin. Leave-one-out cross validation (by subject) was used to assess performance in the task of estimating probability of response to therapy, with area under the ROC curve (AUC) as the metric. The classifier trained on features from strictly the pre-contrast time point performed the best, with an AUC of 0.85 (SD = 0.033). The remaining classifiers resulted in AUCs ranging from 0.71 (SD = 0.028) to 0.82 (SD = 0.027). Overall, we found the pre-contrast time point to be the most effective at predicting response to therapy and that including additional contrast time points moderately reduces variance.
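A compact sketch of the evaluation loop described above (my stand-in: random synthetic features replace the pre-trained CNN features, and one ROI per subject keeps the leave-one-out split simple):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_subjects, n_features = 64, 20             # toy stand-ins for CNN features
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 2, n_subjects)          # responder / non-responder (toy)
X[y == 1] += 0.6                            # inject a weak class signal

scores = np.empty(n_subjects)
for train, test in LeaveOneOut().split(X):  # leave-one-subject-out
    clf = LinearDiscriminantAnalysis().fit(X[train], y[train])
    scores[test] = clf.predict_proba(X[test])[:, 1]
print(f"AUC = {roc_auc_score(y, scores):.2f}")
```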
Yu, Hui; Qi, Dan; Li, Heng-da; Xu, Ke-xin; Yuan, Wei-jie
2012-03-01
Weak signals, a low instrument signal-to-noise ratio, continuous variation of the human physiological environment and interference from other blood components make it difficult to extract blood glucose information from the near-infrared spectrum in noninvasive blood glucose measurement. The floating-reference method, which analyses the effect of glucose concentration variation on the absorption and scattering coefficients, acquires spectra at a reference point, where the light intensity variations from absorption and scattering cancel each other, and at a measurement point, where they are largest. By using the spectrum from the reference point as a reference, the floating-reference method can reduce the interference from variations in the physiological environment and experimental circumstances. In the present paper, the effectiveness of the floating-reference method in improving prediction precision and stability was assessed through application experiments. A comparison was made between models whose data were processed with and without the floating-reference method. The results showed that the root mean square error of prediction (RMSEP) decreased by 34.7% maximally. The floating-reference method could reduce the influence of changes in the samples' state, instrument noise and drift, and effectively improve the models' prediction precision and stability.
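For reference, the figure of merit used above is straightforward to compute; a sketch with hypothetical predictions (all values invented):

```python
import numpy as np

def rmsep(y_pred, y_ref):
    """Root mean square error of prediction."""
    return np.sqrt(np.mean((np.asarray(y_pred) - np.asarray(y_ref)) ** 2))

# Hypothetical glucose predictions (mmol/L) with and without the
# floating-reference correction, against reference measurements.
ref = np.array([4.8, 5.6, 6.9, 8.4, 10.1])
plain = np.array([5.6, 4.7, 7.9, 7.3, 11.4])
corrected = np.array([5.3, 5.1, 7.5, 7.8, 10.9])
print(rmsep(plain, ref), rmsep(corrected, ref))  # corrected should be lower
```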
Sturdivant, Emily; Lentz, Erika; Thieler, E. Robert; Farris, Amy; Weber, Kathryn; Remsen, David P.; Miner, Simon; Henderson, Rachel
2017-01-01
The vulnerability of coastal systems to hazards such as storms and sea-level rise is typically characterized using a combination of ground and manned airborne systems that have limited spatial or temporal scales. Structure-from-motion (SfM) photogrammetry applied to imagery acquired by unmanned aerial systems (UAS) offers a rapid and inexpensive means to produce high-resolution topographic and visual reflectance datasets that rival existing lidar and imagery standards. Here, we use SfM to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM) from data collected by UAS at a beach and wetland site in Massachusetts, USA. We apply existing methods to (a) determine the position of shorelines and foredunes using a feature extraction routine developed for lidar point clouds and (b) map land cover from the rasterized surfaces using a supervised classification routine. In both analyses, we experimentally vary the input datasets to understand the benefits and limitations of UAS-SfM for coastal vulnerability assessment. We find that (a) geomorphic features are extracted from the SfM point cloud with near-continuous coverage and sub-meter precision, better than was possible from a recent lidar dataset covering the same area; and (b) land cover classification is greatly improved by including topographic data with visual reflectance, but changes to resolution (when <50 cm) have little influence on the classification accuracy.
LESTO: an Open Source GIS-based toolbox for LiDAR analysis
NASA Astrophysics Data System (ADS)
Franceschi, Silvia; Antonello, Andrea; Tonon, Giustino
2015-04-01
During the last five years different research institutes and private companies started to implement new algorithms to analyze and extract features from LiDAR data, but only a few of them also created publicly available software. In the field of forestry there are different examples of software that can be used to extract vegetation parameters from LiDAR data; unfortunately most of them are closed source (even if free of charge), which means that the source code is not shared with the public for anyone to look at or make changes to. In 2014 we started the development of the library LESTO (LiDAR Empowered Sciences Toolbox Opensource): a set of modules for the analysis of LiDAR point clouds with an Open Source approach, with the aim of improving the extraction of biomass volume and other vegetation parameters over large areas with mixed forest structures. LESTO contains a set of modules for data handling and analysis implemented within the JGrassTools spatial processing library. The main subsections are dedicated to: 1) preprocessing of LiDAR raw data mainly in LAS format (utilities and filtering); 2) creation of raster derived products; 3) flight-line identification and normalization of intensity values; 4) tools for extraction of vegetation and buildings. The core of the LESTO library is the extraction of vegetation parameters. We decided to follow the single-tree based approach, starting with the implementation of some of the most used algorithms in the literature. These have been tweaked and applied on LiDAR-derived raster datasets (DTM, DSM) as well as point clouds of raw data. The methods range from the simple extraction of tops and crowns from local maxima, through the region growing and watershed methods, to individual tree segmentation on point clouds; a sketch of the simplest of these follows below. The validation procedure consists in matching field and LiDAR-derived measurements at the individual tree and plot level. An automatic validation procedure has been developed, based on a Particle Swarm (PS) optimizer and a matching procedure which takes the position and height of the extracted trees with respect to the measured ones and iteratively tries to improve the candidate solution by changing the models' parameters. Examples of the application of the LESTO tools will be presented on test sites. The test area consists of a series of circular sampling plots randomly selected from a 50x50 m regular grid within a buffer zone of 150 m from the forest road. Other studies on the same sites provide reference measurements of position, diameter, species and height and proposed allometric relationships. These allometric relationships were obtained for each species by deriving the stem volume of single trees based on height and diameter at breast height. LESTO is integrated in the JGrassTools project and available for download at www.jgrasstools.org. A simple and easy-to-use graphical interface to run the models is available at https://github.com/moovida/STAGE/releases.
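A minimal sketch of local-maxima treetop extraction on a canopy height model (my illustration, not LESTO code; the window size and height threshold are assumptions):

```python
import numpy as np
from scipy import ndimage

def tree_tops(chm, window=5, min_height=2.0):
    """Local-maxima treetop detection on a canopy height model raster:
    a cell is a top if it equals the maximum in its window and exceeds
    a minimum vegetation height."""
    local_max = ndimage.maximum_filter(chm, size=window)
    return np.argwhere((chm == local_max) & (chm > min_height))

# Toy CHM with two Gaussian 'trees'.
rr, cc = np.mgrid[0:40, 0:40]
chm = np.zeros((40, 40))
for (r, c, h) in [(10, 12, 18.0), (28, 30, 24.0)]:
    chm = np.maximum(chm, h * np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / 12.0))
print(tree_tops(chm))  # ~[[10, 12], [28, 30]]
```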
A Bayesian framework for extracting human gait using strong prior knowledge.
Zhou, Ziheng; Prügel-Bennett, Adam; Damper, Robert I
2006-11-01
Extracting full-body motion of walking people from monocular video sequences in complex, real-world environments is an important and difficult problem, going beyond simple tracking, whose satisfactory solution demands an appropriate balance between use of prior knowledge and learning from data. We propose a consistent Bayesian framework for introducing strong prior knowledge into a system for extracting human gait. In this work, the strong prior is built from a simple articulated model having both time-invariant (static) and time-variant (dynamic) parameters. The model is easily modified to cater to situations such as walkers wearing clothing that obscures the limbs. The statistics of the parameters are learned from high-quality (indoor laboratory) data and the Bayesian framework then allows us to "bootstrap" to accurate gait extraction on the noisy images typical of cluttered, outdoor scenes. To achieve automatic fitting, we use a hidden Markov model to detect the phases of images in a walking cycle. We demonstrate our approach on silhouettes extracted from fronto-parallel ("sideways on") sequences of walkers under both high-quality indoor and noisy outdoor conditions. As well as high-quality data with synthetic noise and occlusions added, we also test walkers with rucksacks, skirts, and trench coats. Results are quantified in terms of chamfer distance and average pixel error between automatically extracted body points and corresponding hand-labeled points. No one part of the system is novel in itself, but the overall framework makes it feasible to extract gait from very much poorer quality image sequences than hitherto. This is confirmed by comparing person identification by gait using our method and a well-established baseline recognition algorithm.
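The chamfer-distance metric used for quantification above has a compact SciPy formulation; a sketch with invented point sets:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer(points_a, points_b):
    """Symmetric chamfer distance between two 2D point sets
    (e.g., extracted vs hand-labeled body points)."""
    da, _ = cKDTree(points_b).query(points_a)  # nearest-neighbor distances A->B
    db, _ = cKDTree(points_a).query(points_b)  # and B->A
    return 0.5 * (da.mean() + db.mean())

a = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 20.0]])
b = a + np.array([1.5, -0.5])                  # hypothetical extraction offset
print(chamfer(a, b))                           # ~1.58 px
```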
A Novel Method for Reconstructing Broken Contour Lines Extracted from Scanned Topographic Maps
NASA Astrophysics Data System (ADS)
Wang, Feng; Liu, Pingzhi; Yang, Yun; Wei, Haiping; An, Xiaoya
2018-05-01
It is known that after segmentation and morphological operations on scanned topographic maps, gaps occur in contour lines. It is also well known that filling these gaps and reconstructing the contour lines with high accuracy and completeness is not an easy problem. In this paper, a novel method is proposed for automatic or semiautomatic gap filling and reconstruction of broken contour lines in binary images. The key part, the auto-matching and reconnection of end points, is discussed in depth after introducing the reconstruction procedure, in which some key algorithms and mechanisms are presented and realized, including multiple incremental back-tracing to obtain the weighted average direction angle of end points, a maximum constraint angle control mechanism based on multiple gradient ranks, the combination of weighted Euclidean distance and deviation angle to determine the optimum matching end point, bidirectional parabola control, etc. Lastly, experimental comparisons on typical samples are made between the proposed method and another representative method; the results indicate that the former achieves higher accuracy and completeness, and better stability and applicability.
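A toy sketch of the matching idea (my reading of the cost, not the paper's exact formula): a candidate pair of end points is scored by the weighted Euclidean distance between them plus the deviation of each end point's direction from the connecting segment; the second direction is flipped, since the two ends should point toward each other. The weights are arbitrary here.

```python
import numpy as np

def ang_diff(a, b):
    """Smallest absolute difference between two angles (radians)."""
    return abs(np.arctan2(np.sin(a - b), np.cos(a - b)))

def match_cost(p, q, dir_p, dir_q, w_dist=1.0, w_angle=20.0):
    """Cost for pairing two contour end points: weighted Euclidean
    distance plus the deviation angles between each end's direction
    and the connecting segment (dir_q is flipped, since the ends
    should point toward each other)."""
    v = q - p
    seg = np.arctan2(v[1], v[0])
    dev = ang_diff(dir_p, seg) + ang_diff(dir_q + np.pi, seg)
    return w_dist * np.linalg.norm(v) + w_angle * dev

p, q = np.array([100.0, 50.0]), np.array([112.0, 53.0])
# Ends pointing toward each other -> zero angular penalty, cost = distance.
print(match_cost(p, q, dir_p=np.arctan2(3, 12), dir_q=np.arctan2(-3, -12)))
```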
An integrand reconstruction method for three-loop amplitudes
NASA Astrophysics Data System (ADS)
Badger, Simon; Frellesvig, Hjalte; Zhang, Yang
2012-08-01
We consider the maximal cut of a three-loop four point function with massless kinematics. By applying Gröbner bases and primary decomposition we develop a method which extracts all ten propagator master integral coefficients for an arbitrary triple-box configuration via generalized unitarity cuts. As an example we present analytic results for the three loop triple-box contribution to gluon-gluon scattering in Yang-Mills with adjoint fermions and scalars in terms of three master integrals.
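For readers unfamiliar with the algebra, a toy SymPy example of the Gröbner-basis step (the polynomial system below is invented and is not the three-loop cut system): a Gröbner basis turns a polynomial system into a canonical form on which primary decomposition and coefficient extraction can proceed.

```python
from sympy import groebner, symbols

# Invented stand-in for a system of on-shell cut constraints.
x, y, z = symbols('x y z')
cut_equations = [x**2 + y*z - 1, x*y - z, y**2 - x*z]
print(groebner(cut_equations, x, y, z, order='lex'))
```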
Automated Image Registration Using Morphological Region of Interest Feature Extraction
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2005-01-01
With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time-consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration; multi-temporal imagery; mathematical morphology; robust feature matching.
Duplicate document detection in DocBrowse
NASA Astrophysics Data System (ADS)
Chalana, Vikram; Bruce, Andrew G.; Nguyen, Thien
1998-04-01
Duplicate documents are frequently found in large databases of digital documents, such as those found in digital libraries or in the government declassification effort. Efficient duplicate document detection is important not only to allow querying for similar documents, but also to filter out redundant information in large document databases. We have designed three different algorithms to identify duplicate documents. The first algorithm is based on features extracted from the textual content of a document, the second algorithm is based on wavelet features extracted from the document image itself, and the third algorithm is a combination of the first two. These algorithms are integrated within the DocBrowse system for information retrieval from document images, currently under development at MathSoft. DocBrowse supports duplicate document detection by allowing (1) automatic filtering to hide duplicate documents, and (2) ad hoc querying for similar or duplicate documents. We have tested the duplicate document detection algorithms on 171 documents and found that the text-based method has an average 11-point precision of 97.7 percent while the image-based method has an average 11-point precision of 98.9 percent. However, in general, the text-based method performs better when the document contains enough high-quality machine-printed text, while the image-based method performs better when the document contains little or no quality machine-readable text.
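The 11-point precision figure quoted above is the standard interpolated average over the recall levels 0.0, 0.1, ..., 1.0; a sketch with invented precision/recall pairs:

```python
import numpy as np

def eleven_point_precision(recall, precision):
    """11-point interpolated average precision: at each recall level
    r in {0.0, 0.1, ..., 1.0}, take the maximum precision achieved at
    any recall >= r, then average the eleven values."""
    recall, precision = np.asarray(recall), np.asarray(precision)
    levels = np.linspace(0.0, 1.0, 11)
    interp = [precision[recall >= r].max() if np.any(recall >= r) else 0.0
              for r in levels]
    return float(np.mean(interp))

# Hypothetical precision/recall pairs down a ranked list of retrieved documents.
recall = [0.2, 0.4, 0.6, 0.8, 1.0]
precision = [1.0, 1.0, 0.95, 0.90, 0.82]
print(eleven_point_precision(recall, precision))  # ~0.94
```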
Oberg, Tomas
2004-01-01
Halogenated aliphatic compounds have many technical uses, but substances within this group are also ubiquitous environmental pollutants that can affect the ozone layer and contribute to global warming. The establishment of quantitative structure-property relationships is of interest not only to fill in gaps in the available database but also to validate experimental data already acquired. The three-dimensional structures of 240 compounds were modeled with molecular mechanics prior to the generation of empirical descriptors. Two bilinear projection methods, principal component analysis (PCA) and partial-least-squares regression (PLSR), were used to identify outliers. PLSR was subsequently used to build a multivariate calibration model by extracting the latent variables that describe most of the covariation between the molecular structure and the boiling point. Boiling points were also estimated with an extension of the group contribution method of Stein and Brown.
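A compact sketch of the modeling step (my stand-in: random synthetic descriptors replace the molecular-mechanics-derived ones, and the component count is arbitrary):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 240, 30                            # compounds x molecular descriptors (toy)
X = rng.normal(size=(n, p))
true_w = rng.normal(size=p)
bp = X @ true_w + rng.normal(0, 0.5, n)   # synthetic 'boiling points'

# PLSR extracts latent variables describing the X/y covariation.
pls = PLSRegression(n_components=5)
print(cross_val_score(pls, X, bp, cv=5, scoring='r2').mean())
```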
Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo
NASA Astrophysics Data System (ADS)
Daily, David; Kiser, Jillian; McQueen, Sarah
2016-11-01
Understanding the movement characteristics of how various species of fish swim is an important step to uncovering how they propel themselves through the water. Previous methods have focused on profile capture methods or sparse 3D manual feature point tracking. This research uses an array of 30 cameras to automatically track hundreds of points on a fish as they swim in 3D using multi-view stereo. Blacktip sharks, sting rays, puffer fish, turtles and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland using the multi-view stereo technique. The processes for data collection, camera synchronization, feature point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.
Tandan, Monika; Hegde, Mithra N; Hegde, Priyadarshini
2014-01-01
The aim was to determine the effect of four different intracanal medicaments on the apical seal of the root canal system in vitro. Fifty freshly extracted intact human permanent maxillary central incisors were collected, stored and disinfected. The root canals were prepared to a master apical size of number 50 using the step-back technique. Depending upon the intracanal medicament used, the teeth were divided randomly into five groups of 10 teeth each, including one control group and four experimental groups. Group A: no intracanal medicament. Group B: calcium hydroxide powder mixed with distilled water. Group C: calcium hydroxide gutta percha points (calcium hydroxide points). Group D: 1% chlorhexidine gel (hexigel). Group E: chlorhexidine gutta percha points (Roeko Activ Points). The medication was left in the canals for 14 days. Following removal of the intracanal medicament, all the groups were obturated with the lateral compaction technique. The apical leakage was then evaluated using the dye extraction method with the help of a spectrophotometer. Results were statistically analyzed using the Kruskal-Wallis and Mann-Whitney U-tests, which showed a statistically significant difference among the five groups tested. The control group showed the least leakage, whereas the 1% chlorhexidine gel group showed the most; apical leakage was observed in all the experimental groups, with small variations between them. Under the parameters of this study, it can be concluded that the use of intracanal medicaments during endodontic treatment has a definite impact on the apical seal of the root canal system.
Camera calibration based on the back projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui
2015-12-01
Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results only guarantee the minimum of 2D projection errors on the image plane, not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically meaningful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
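A deliberately minimal sketch of the back-projection idea (my toy construction, not the paper's model): image points of a planar target are back-projected onto the target plane and the 3D residuals drive the refinement. Here only the focal length is refined, the camera looks straight down from a known height, and all numbers are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Known checkerboard corners on the plane z=0, camera at height h looking down.
h = 1.0
board = np.array([[x, y] for x in (0.1, 0.2, 0.3) for y in (0.1, 0.2, 0.3)])
f_true, c = 800.0, np.array([320.0, 240.0])
pixels = c + f_true * board / h          # simulated forward projection

def residual_3d(params):
    f = params[0]
    back = (pixels - c) * h / f          # back-project each pixel onto z=0
    return (back - board).ravel()        # in-plane 3D reconstruction error

fit = least_squares(residual_3d, x0=[600.0])
print(fit.x)  # converges to ~800, the true focal length
```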
Montoro, Paola; Maldini, Mariateresa; Luciani, Leonilda; Tuberoso, Carlo I G; Congiu, Francesca; Pizza, Cosimo
2012-08-01
Radical scavenging activities of Crocus sativus petals, stamens and entire flowers, which are waste products in the production of the spice saffron, by employing ABTS radical scavenging method, were determined. At the same time, the metabolic profiles of different extract (obtained by petals, stamens and flowers) were obtained by LC-ESI-IT MS (liquid chromatography coupled with electrospray mass spectrometry equipped with Ion Trap analyser). LC-ESI-MS is a techniques largely used nowadays for qualitative fingerprint of herbal extracts and particularly for phenolic compounds. To compare the different extracts under an analytical point of view a specific method for qualitative LC-MS analysis was developed. The high variety of glycosylated flavonoids found in the metabolic profiles could give value to C. sativus petals, stamens and entire flowers. Waste products obtained during saffron production, could represent an interesting source of phenolic compounds, with respect to the high variety of compounds and their free radical scavenging activity. © 2012 Institute of Food Technologists®
Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui
2016-06-01
Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's, sleep problems and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, the simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the features extracted and selected by the SRS and SFS are forwarded to a least squares support vector machine (LS_SVM) classifier to classify the EEG signals. The experimental results show that the method achieves 99.90, 99.80 and 100% for classification accuracy, sensitivity and specificity, respectively.
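A sketch of the selection-plus-classification stage with scikit-learn (an assumption-laden stand-in: LS-SVM is not in scikit-learn, so a standard SVC plays its role, and synthetic features replace the SRS output):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy stand-in for SRS-derived time-domain EEG features.
X, y = make_classification(n_samples=200, n_features=30, n_informative=6,
                           random_state=0)

svm = SVC(kernel='rbf', gamma='scale')   # stand-in for the LS-SVM
sfs = SequentialFeatureSelector(svm, n_features_to_select=6,
                                direction='forward')   # sequential selection
X_sel = sfs.fit_transform(X, y)
print(cross_val_score(svm, X_sel, y, cv=5).mean())
```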
Anderson, Chastain A; Bokota, Rachael E; Majeste, Andrew E; Murfee, Walter L; Wang, Shusheng
2018-01-18
Electronic cigarettes are the most popular tobacco product among middle and high schoolers and are the most popular alternative tobacco product among adults. High quality, reproducible research on the consequences of electronic cigarette use is essential for understanding emerging public health concerns and crafting evidence based regulatory policy. While a growing number of papers discuss electronic cigarettes, there is little consistency in methods across groups and very little consensus on results. Here, we describe a programmable laboratory device that can be used to create extracts of conventional cigarette smoke and electronic cigarette aerosol. This protocol details instructions for the assembly and operation of said device, and demonstrates the use of the generated extract in two sample applications: an in vitro cell viability assay and gas-chromatography mass-spectrometry. This method provides a tool for making direct comparisons between conventional cigarettes and electronic cigarettes, and is an accessible entry point into electronic cigarette research.
NASA Astrophysics Data System (ADS)
Dayuti, S.
2018-04-01
Red algae are widely used in several fields, including food, feed, pharmacy and industry. Chemical analysis showed that red algae contain terpenoid, acetogenic, and aromatic compounds, which have a wide range of biological activities, such as anti-microbial, anti-inflammatory and anti-viral. The objectives of this research were to evaluate the effect of extraction solvent and time on the antibacterial activity of the red alga Gracilaria verrucosa, and to explore the bioactive compounds contained within it. The study used a descriptive research method. The findings revealed that the highest inhibition activity among all extracts was obtained with a methanol:aquades ratio of 75:25 and an extraction time of around 72 hours against Escherichia coli and Salmonella typhimurium. The bioactive compounds of Gracilaria verrucosa, tested by phytochemical analysis, consisted of flavonoids, alkaloids, and saponins. These secondary metabolites may act as antibacterial substances.
NASA Astrophysics Data System (ADS)
Riera, Enrique; Blanco, Alfonso; García, José; Benedito, José; Mulet, Antonio; Gallego-Juárez, Juan A.; Blasco, Miguel
2010-01-01
Oil is an important component of almonds and other vegetable substrates and can influence human health. In this work, the development and validation of an innovative, robust, stable, reliable and efficient pilot-scale ultrasonic system to assist supercritical CO2 extraction of oils from different substrates is presented. In the extraction procedure, ultrasonic energy represents an efficient way of producing deep agitation that enhances mass transfer through several mechanisms (radiation pressure, streaming, agitation, high-amplitude vibrations, etc.). Work prior to this research pointed out the feasibility of integrating an ultrasonic field inside a supercritical extractor without losing a significant volume fraction. This pioneering method made it possible to accelerate mass transfer and thereby improve supercritical extraction times. To develop the new procedure commercially, fulfilling industrial requirements, a new configuration device has been designed, implemented, tested and successfully validated for supercritical fluid extraction of oil from different vegetable substrates.
Hand biometric recognition based on fused hand geometry and vascular patterns.
Park, GiTae; Kim, Soowon
2013-02-28
A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, the direction-based vascular-pattern extraction method was used, and thus a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points, so it can be configured for low-cost devices. Our multimodal approach fuses hand geometry (the side view of the hand and the back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.
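The equal error rate quoted above is the operating point where false accepts equal false rejects; a sketch over hypothetical genuine and impostor score distributions:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Find the threshold where the false accept rate (impostors passing)
    equals the false reject rate (genuines failing); higher score = better match."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return 0.5 * (far[i] + frr[i]), thresholds[i]

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.08, 1000)    # hypothetical match scores
impostor = rng.normal(0.4, 0.10, 1000)
eer, thr = equal_error_rate(genuine, impostor)
print(f"EER = {eer:.2%} at threshold {thr:.2f}")
```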
Dolcet, Marta M; Torres, Mercè; Canela, Ramon
2016-06-25
The use of mycelia as biocatalysts has technical and economic advantages. However, there are several difficulties in obtaining accurate results in mycelium-catalysed reactions. Firstly, sample extraction, indispensable because of the presence of mycelia, can bring into the extract components with a structure similar to that of the analyte of interest; secondly, mycelia can influence the recovery of the analyte. We prepared calibration standards of 3-phenoxy-1,2-propanediol (PPD) in the pure solvent and in the presence of mycelia (spiked before or after extraction) from five fungi (Aspergillus niger, Aspergillus tubingensis, Penicillium aurantiogriseum, Penicillium sp. and Aspergillus terreus). The quantification of PPD was carried out by HPLC-UV and UV-vis spectrophotometry. The manuscript shows that the latter method is as accurate as the HPLC method; moreover, the colorimetric method led to a higher data throughput, which allowed the study of more samples in a shorter time. Matrix effects were evaluated visually from the plotted calibration data and statistically by simultaneously comparing the intercepts and slopes of calibration curves performed with solvent, post-extraction spiked standards and pre-extraction spiked standards. Significant differences were found between the post- and pre-extraction spiked matrix-matched functions. Pre-extraction spiked matrix-matched functions based on A. tubingensis mycelia, selected as the reference, were validated and used to compensate for low recoveries. These validated functions were successfully applied to the quantification of PPD during the hydrolysis of glycidyl phenyl ether by mycelium-bound epoxide hydrolases, and equivalent hydrolysis yields were determined by HPLC-UV and UV-vis spectrophotometry. This study may serve as a starting point for implementing matrix-effect evaluation when mycelium-bound epoxide hydrolases are studied. Copyright © 2016 Elsevier B.V. All rights reserved.
Multifactorial biogeochemical monitoring of linden alley in Moscow
NASA Astrophysics Data System (ADS)
Ermakov, Vadim; Khushvakhtova, Sabsbakhor; Tyutikov, Sergey; Danilova, Valentina; Roca, Núria; Bech, Jaume
2015-04-01
The ecological and biogeochemical assessment of the linden alley along Kosygin Street was conducted by means of an integrated comparative study of soils, their chemical composition, and morphological parameters of linden leaves. For this purpose, 5 points were tested within the linden alley and 5 other points away from the highway. In soils, water extracts of soil, and linden leaves, the contents of Cu, Pb, Mn, Fe, Cd, Zn, As, Ni, Co, Mo, Cr and Se were determined by AAS and a spectrofluorimetric method [1]. Macrocomponents (Ca, Mg, K, Na, P, sulphates, chlorides), pH and the total mineralization of the water soil extract were measured by generally accepted methods. Thio-containing compounds in the leaves were determined by HPLC-NAM spectrofluorometry [2]. In trace element content, the soils of the "contaminated" points differed from background soils by higher concentrations of lead, manganese, iron, selenium and strontium and a lower level of zinc. Linden leaves from contaminated sites were characterized by increased lead, copper, iron, zinc, arsenic and chromium, and by a sharp decrease in the levels of manganese and strontium. Analysis of the aqueous soil extracts showed a slight decrease in the pH value at the "control" points and lower contents of calcium, magnesium, potassium and sodium, and lower total mineralization of the water soil extract. The phytochelatin test in the linden leaves was only weakly informative, as was the degree of asymmetry of the leaf lamina. The largest differences between the variants were marked by the degree of leaf pathology (chlorosis and necrosis) and the content of pigments (chlorophyll and carotene). The data obtained reflect the impact of the application of de-icing salts and automobile emissions. References: 1. Ermakov V.V., Danilova V.N., Khyshvakhtova S.D. Application of HPLC-NAM spectrofluorimetry to the determination of sulfur-containing compounds in environmental objects // Science of the Biosphere: Innovation. Moscow State University by M.V. Lomonosov, 2014. P. 10-12. 2. Ermakov V.V., Tyutikov S.F., Khushvakhtova S.D., Danilova V.N., Boev V.N., Barabanschikova R.N., Chudinova E.A. Peculiarities of quantitative determination of selenium in biological materials // Bulletin of the Tyumen State University Press, 2010, 3, 206-214. Supported by the Russian Foundation for Basic Research, grant number 15-05-00279a.
Grid point extraction and coding for structured light system
NASA Astrophysics Data System (ADS)
Song, Zhan; Chung, Ronald
2011-09-01
A structured light system simplifies three-dimensional reconstruction by illuminating a specially designed pattern to the target object, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data upon illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extracted in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points plus the pseudorandomness of the projected pattern can equip each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need of segmenting the pattern elements, and that localizes the grid points in subpixel accuracy. Extensive experiments are presented to illustrate that, with the proposed pattern feature definition and feature detector, more features points in higher accuracy can be reconstructed in comparison with the existing pseudorandomly encoded structured light systems.
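A toy illustration of the pseudorandomness property the labeling relies on (my sketch, not the paper's pattern): if every local window of grid-point types occurs at most once in the projected array, observing a grid point together with its neighbors identifies it uniquely. A randomly drawn array may contain repeated windows and would then need regeneration, which the check below detects.

```python
import numpy as np

rng = np.random.default_rng(7)
grid = rng.integers(0, 4, size=(20, 20))   # 4 grid-point "types" (toy)

# Verify that every 3x3 window of types is unique across the array.
seen = {}
unique = True
for r in range(grid.shape[0] - 2):
    for c in range(grid.shape[1] - 2):
        key = grid[r:r + 3, c:c + 3].tobytes()
        if key in seen:
            unique = False                 # collision: regenerate the pattern
        seen[key] = (r, c)
print("all 3x3 windows unique:", unique)
```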
Delora, Adam; Gonzales, Aaron; Medina, Christopher S; Mitchell, Adam; Mohed, Abdul Faheem; Jacobs, Russell E; Bearer, Elaine L
2016-01-15
Magnetic resonance imaging (MRI) is a well-developed technique in neuroscience. Limitations in applying MRI to rodent models of neuropsychiatric disorders include the large number of animals required to achieve statistical significance and the paucity of automation tools for the critical early step in processing, brain extraction, which prepares brain images for alignment and voxel-wise statistics. This novel, timesaving automation of template-based brain extraction ("skull-stripping") is capable of quickly and reliably extracting the brain from large numbers of whole-head images in a single step. The method is simple to install, requires minimal user interaction, and is equally applicable to different types of MR images. Results were evaluated with Dice and Jaccard similarity indices and compared in 3D surface projections with other stripping approaches. Statistical comparisons demonstrate that individual variations in brain volume are preserved. A downloadable software package not otherwise available for extraction of brains from whole-head images is included here. This software tool increases speed, can be used with an atlas or a template from within the dataset, and produces masks that need little further refinement. Our new automation can be applied to any MR dataset, since the starting point is a template mask generated specifically for that dataset. The method reliably and rapidly extracts brain images from whole-head images, rendering them usable for subsequent analytical processing. This software tool will accelerate the exploitation of mouse models for the investigation of human brain disorders by MRI. Copyright © 2015 Elsevier B.V. All rights reserved.
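For reference, the Dice and Jaccard similarity indices used in this evaluation reduce to a few lines of NumPy over two binary masks; a minimal sketch (array names are illustrative):

```python
import numpy as np

def dice_jaccard(mask_a, mask_b):
    """Overlap scores between two binary brain masks of equal shape."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())   # 2|A∩B| / (|A| + |B|)
    jaccard = inter / union                    # |A∩B| / |A∪B|
    return dice, jaccard
```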
Phytomedical investigation of Najas minor All. in the view of the chemical constituents
Topuzovic, Marina D.; Radojevic, Ivana D.; Dekic, Milan S.; Radulovic, Niko S.; Vasic, Sava M.; Comic, Ljiljana R.; Licina, Braho Z.
2015-01-01
Plants are an abundant natural source of effective antibiotic compounds, yet phytomedical investigations of certain plants have still not been conducted. One of them is Najas minor (N. minor), an aquatic plant with confirmed allelopathy. The research conducted in this study examined the influence of water and ethyl acetate extracts of N. minor on microorganisms, together with chemical profiling of the volatile constituents and the concentrations of total phenolics, flavonoids and tannins. Antimicrobial activity was defined by determining minimum inhibitory and minimum microbicidal concentrations using the microdilution method. The influence on bacterial biofilm formation was assessed by the tissue culture plate method. The total phenolics, flavonoids and condensed tannins were determined by the Folin-Ciocalteu, aluminum chloride and butanol-HCl colorimetric methods. Chemical profiling of the volatile constituents was performed by GC and GC-MS. The water extract showed no antimicrobial activity below 5000 µg/mL. The ethyl acetate extract showed strong antimicrobial activity against G+ bacteria - Staphylococcus aureus PMFKGB12 and Bacillus subtilis (MIC < 78.13 µg/mL). The best antibiofilm activity was obtained against Escherichia coli ATCC25922 (BIC50 at 719 µg/mL). The water extract had a higher yield, while the ethyl acetate extract had a significantly greater amount of total phenolics, flavonoids and tannins. Hexahydrofarnesyl acetone was identified as the major constituent. Although the ethyl acetate extract affected only G+ bacteria, the biofilm formation of G− bacteria was suppressed, and there was a connection between these in vivo and in vitro effects against pathogenic bacterial biofilm formation. All of this points to a so far unexplored potential of N. minor. PMID:26535038
Optimization and determination of polycyclic aromatic hydrocarbons in biochar-based fertilizers.
Chen, Ping; Zhou, Hui; Gan, Jay; Sun, Mingxing; Shang, Guofeng; Liu, Liang; Shen, Guoqing
2015-03-01
The agronomic benefit of biochar has attracted widespread attention to biochar-based fertilizers. However, the inevitable presence of polycyclic aromatic hydrocarbons in biochar is a matter of concern because of the health and ecological risks of these compounds. The strong adsorption of polycyclic aromatic hydrocarbons to biochar complicates their analysis and extraction from biochar-based fertilizers. In this study, we optimized and validated a method for determining the 16 priority polycyclic aromatic hydrocarbons in biochar-based fertilizers. Results showed that accelerated solvent extraction exhibited high extraction efficiency. Based on a Box-Behnken design with a triplicate central point, accelerated solvent extraction was used under the following optimal operational conditions: extraction temperature of 78°C, extraction time of 17 min, and two static cycles. The optimized method was validated by assessing the linearity of analysis, limit of detection, limit of quantification, recovery, and application to real samples. The results showed that the 16 polycyclic aromatic hydrocarbons exhibited good linearity, with a correlation coefficient of 0.996. The limits of detection varied between 0.001 (phenanthrene) and 0.021 mg/g (benzo[ghi]perylene), and the limits of quantification varied between 0.004 (phenanthrene) and 0.069 mg/g (benzo[ghi]perylene). The relative recoveries of the 16 polycyclic aromatic hydrocarbons were 70.26-102.99%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solimo, H.N.; Martinez, H.E.; Riggio, R.
1989-04-01
Experimental mutual solubility and tie-line data were determined for three ternary liquid-liquid systems containing water, ethanol, and one of amyl acetate, benzyl alcohol, or methyl isobutyl ketone at 298.15 K, in order to obtain their complete phase diagrams and to determine which is the most suitable solvent for the extraction of ethanol from aqueous solutions. Tie lines were determined by correlating the density of the binodal curve as a function of composition, and the plait points were located using the Othmer and Tobias method. The experimental data were also correlated with the UNIFAC group contribution method, and a qualitative agreement was obtained. Experimental results show that amyl acetate is a better solvent than methyl isobutyl ketone and benzyl alcohol.
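The Othmer-Tobias method referred to here correlates the tie-line compositions linearly; the goodness of the linear fit supports tie-line consistency, and its extrapolation can be used to locate the plait point. A NumPy sketch of the fit, with placeholder mass fractions w11 (water in the aqueous phase) and w33 (solvent in the organic phase); the data values are purely illustrative:

```python
import numpy as np

# Othmer-Tobias: log((1 - w33)/w33) = a + b * log((1 - w11)/w11),
# where w11 is the water mass fraction in the water-rich phase and
# w33 the solvent mass fraction in the solvent-rich phase (illustrative data).
w11 = np.array([0.90, 0.85, 0.80, 0.75])
w33 = np.array([0.70, 0.62, 0.55, 0.48])

x = np.log10((1.0 - w11) / w11)
y = np.log10((1.0 - w33) / w33)
b, a = np.polyfit(x, y, 1)       # slope and intercept of the linear fit
r = np.corrcoef(x, y)[0, 1]      # near-linearity supports tie-line consistency
print(f"a = {a:.3f}, b = {b:.3f}, r = {r:.3f}")
```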
NASA Astrophysics Data System (ADS)
Spaans, K.; Hooper, A. J.
2017-12-01
The short revisit time and high data acquisition rates of current satellites have resulted in increased interest in the development of deformation monitoring and rapid disaster response capability using InSAR. Fast, efficient data processing methodologies are required to deliver the timely results necessary for this, and also to limit the computing resources required to process the large quantities of data being acquired. Contrary to volcano or earthquake applications, urban monitoring requires high resolution processing in order to differentiate movements between buildings, or between buildings and the surrounding land. Here we present Rapid time series InSAR (RapidSAR), a method that can efficiently update high resolution time series of interferograms, and demonstrate its effectiveness over urban areas. The RapidSAR method estimates the coherence of pixels on an interferogram-by-interferogram basis. This allows for rapid ingestion of newly acquired images without the need to reprocess the earlier acquired part of the time series. The coherence estimate is based on ensembles of neighbouring pixels with similar amplitude behaviour through time, which are identified on an initial set of interferograms and need be re-evaluated only occasionally. By taking into account the scattering properties of points during coherence estimation, a high quality coherence estimate is achieved, allowing point selection at full resolution. The individual point selection maximizes the amount of information that can be extracted from each interferogram, as no selection compromise has to be reached between high and low coherence interferograms. In other words, points do not have to be coherent throughout the time series to contribute to the deformation time series. We demonstrate the effectiveness of our method over urban areas in the UK. We show how the algorithm successfully extracts high density time series from full resolution Sentinel-1 interferograms and distinguishes clearly between buildings and surrounding vegetation or streets. The fact that new interferograms can be processed separately from the remainder of the time series helps manage the high data volumes, both in space and time, generated by current missions.
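The ensemble coherence estimate described here has the standard form of complex interferometric coherence averaged over each pixel's group of sibling pixels. A minimal sketch, assuming the sibling selection has already been done; names and data layout are illustrative, not the RapidSAR code:

```python
import numpy as np

def ensemble_coherence(s1, s2, siblings):
    """Estimate |coherence| per pixel from two co-registered SLC images.

    s1, s2   : complex 2-D arrays (single-look complex acquisitions)
    siblings : dict mapping a pixel (r, c) to a list of (r, c) pixels with
               similar amplitude behaviour through time (assumed given).
    """
    gamma = {}
    for px, group in siblings.items():
        z1 = np.array([s1[r, c] for r, c in group])
        z2 = np.array([s2[r, c] for r, c in group])
        num = np.abs(np.sum(z1 * np.conj(z2)))
        den = np.sqrt(np.sum(np.abs(z1) ** 2) * np.sum(np.abs(z2) ** 2))
        gamma[px] = num / den if den > 0 else 0.0
    return gamma
```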
NASA Astrophysics Data System (ADS)
Marreiros, Filipe M. M.; Wang, Chunliang; Rossitti, Sandro; Smedby, Örjan
2016-03-01
In this study we present a non-rigid point set registration method for 3D curves (composed of 3D sets of points). The method was evaluated in the task of registering 3D superficial vessels of the brain, where it was used to match vessel centerline points. It consists of a combination of Coherent Point Drift (CPD) and Thin-Plate Spline (TPS) semilandmarks: CPD is used to perform the initial matching of the centerline 3D points, while the semilandmark method iteratively relaxes/slides the points. For the evaluation, a Magnetic Resonance Angiography (MRA) dataset was used. Deformations were applied to the extracted vessel centerlines to simulate brain bulging and sinking, using a TPS deformation in which a few control points were manipulated to obtain the desired transformation (T1). Once the correspondences are known, the corresponding points are used to define a new TPS deformation (T2). The errors are measured in the deformed space by transforming the original points using T1 and T2 and measuring the distance between them. To simulate cases where the deformed vessel data is incomplete, parts of the reference vessels were cut and then deformed. Furthermore, anisotropic normally distributed noise was added. The results show that the error estimates (root mean square error and mean error) are below 1 mm, even in the presence of noise and incomplete data.
Threshold-free method for three-dimensional segmentation of organelles
NASA Astrophysics Data System (ADS)
Chan, Yee-Hung M.; Marshall, Wallace F.
2012-03-01
An ongoing challenge in the field of cell biology is how to quantify the size and shape of organelles within cells. Automated image analysis methods often utilize thresholding for segmentation, but the calculated surface of objects depends sensitively on the exact threshold value chosen, and this problem is generally worse at the upper and lower z boundaries because of the anisotropy of the point spread function. We present here a threshold-independent method for extracting the three-dimensional surface of vacuoles in budding yeast whose limiting membranes are labeled with a fluorescent fusion protein. These organelles typically exist as a clustered set of 1-10 sphere-like compartments. Vacuole compartments and center points are identified manually within z-stacks taken using a spinning disk confocal microscope. A set of rays is defined originating from each center point and radiating outwards in random directions. Intensity profiles are calculated at coordinates along these rays, and intensity maxima are taken as the points where the rays cross the limiting membrane of the vacuole. These points are then fit with a weighted sum of basis functions to define the surface of the vacuole, from which parameters such as volume and surface area are calculated. This method is able to determine the volume and surface area of spherical beads (0.96 to 2 micron diameter) with less than 10% error, and validation using model convolution methods produces similar results. Thus, this method provides an accurate, automated means of measuring the size and morphology of organelles and can be generalized to measure cells and other objects on biologically relevant length scales.
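The ray-based surface extraction step lends itself to a compact implementation: sample the intensity along rays from the centre point and take the location of the maximum as the membrane crossing. A sketch using SciPy, where the interpolation order and the ray parameters are assumptions rather than the authors' choices:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def membrane_points(volume, center, n_rays=200, max_radius=20.0,
                    n_samples=100, seed=0):
    """Cast random rays from `center` (z, y, x) through a 3-D image and
    return one candidate surface point per ray, at the intensity maximum
    along that ray (ray count and radius are illustrative)."""
    rng = np.random.default_rng(seed)
    c = np.asarray(center, dtype=float)
    dirs = rng.normal(size=(n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
    radii = np.linspace(0.0, max_radius, n_samples)
    points = []
    for d in dirs:
        coords = c[:, None] + d[:, None] * radii[None, :]  # (3, n_samples)
        profile = map_coordinates(volume, coords, order=1, mode='nearest')
        points.append(c + d * radii[np.argmax(profile)])   # membrane crossing
    return np.asarray(points)
```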
Ground settlement monitoring based on temporarily coherent points between two SAR acquisitions
Zhang, L.; Ding, X.; Lu, Z.
2011-01-01
An InSAR analysis approach for identifying and extracting the temporarily coherent points (TCP) that exist between two SAR acquisitions and for determining motions of the TCP is presented for applications such as ground settlement monitoring. TCP are identified based on the spatial characteristics of the range and azimuth offsets of coherent radar scatterers. A method for coregistering TCP based on the offsets of TCP is given to reduce the coregistration errors at TCP. An improved phase unwrapping method based on the minimum cost flow (MCF) algorithm and local Delaunay triangulation is also proposed for sparse TCP data. The proposed algorithms are validated using a test site in Hong Kong. The test results show that the algorithms work satisfactorily for various ground features.
Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping.
Jaakkola, Anttoni; Hyyppä, Juha; Hyyppä, Hannu; Kukko, Antero
2008-09-01
Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the scanning geometry and point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying the road marking and kerbstone points and for modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.
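Road marking points are typically much brighter in LiDAR intensity than the surrounding asphalt, so a first-cut marking classification can threshold the intensity of already-classified road points. The sketch below is a simplified stand-in, not this paper's classifier; the threshold rule and the factor k are illustrative:

```python
import numpy as np

def marking_candidates(road_points, k=2.0):
    """road_points: (N, 4) array of x, y, z, intensity for points already
    classified as road surface. Returns the points whose intensity exceeds
    the road mean by k standard deviations (k is an illustrative choice)."""
    intensity = road_points[:, 3]
    thr = intensity.mean() + k * intensity.std()
    return road_points[intensity > thr]
```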
NASA Astrophysics Data System (ADS)
Peter, Simon; Leine, Remco I.
2017-11-01
Phase resonance testing is one method for the experimental extraction of nonlinear normal modes. This paper proposes a novel method for nonlinear phase resonance testing. Firstly, the issue of appropriate excitation is approached on the basis of excitation power considerations: power quantities known from nonlinear systems theory in electrical engineering are transferred to nonlinear structural dynamics applications, and a new power-based nonlinear mode indicator function is derived, which is generally applicable, reliable and easy to implement in experiments. Secondly, the tuning of the excitation phase is automated by the use of a Phase-Locked-Loop controller. This provides a very user-friendly and fast way of obtaining the backbone curve. Furthermore, the method allows specific advantages of phase control to be exploited, such as robustness for lightly damped systems and the stabilization of unstable branches of the frequency response. The reduced tuning time for the excitation makes the commonly used free-decay measurements for the extraction of backbone curves unnecessary. Instead, steady-state measurements are obtained for every point of the curve. In conjunction with the new mode indicator function, the correlation of every measured point with the associated nonlinear normal mode of the underlying conservative system can be evaluated. Moreover, it is shown that the analysis of the excitation power helps to locate sources of inaccuracies in the force appropriation process. The method is illustrated by a numerical example and its functionality in experiments is demonstrated on a benchmark beam structure.
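A Phase-Locked-Loop of this kind needs two ingredients: a phase detector that measures the lag of the response behind the excitation, and a controller that drives that lag to 90° (phase resonance) by adjusting the excitation frequency. A schematic sketch under these assumptions; the proportional gain and detector are illustrative, not the authors' controller:

```python
import numpy as np

def phase_lag(excitation, response, freq, fs):
    """Phase of the response relative to the excitation at `freq` (Hz),
    estimated by quadrature demodulation of signals sampled at `fs` (Hz)."""
    t = np.arange(len(excitation)) / fs
    ref = np.exp(-2j * np.pi * freq * t)
    return (np.angle(np.mean(response * ref))
            - np.angle(np.mean(excitation * ref)))

def pll_update(freq, lag, target=-np.pi / 2, gain=0.5):
    """One proportional step: move the excitation frequency so the measured
    lag approaches -90 degrees, i.e. phase resonance (gain is illustrative)."""
    return freq + gain * (lag - target)
```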
Automatic comic page image understanding based on edge segment analysis
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai
2013-12-01
Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method was evaluated on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms existing methods.
Robertson, J.F.; Aelion, C.M.; Vroblesky, D.A.
1993-01-01
Two passive soil-vapor sampling techniques were used in the vicinity of a defense fuel supply point in Hanahan, South Carolina, to identify areas of potential contamination of the shallow water table aquifer by volatile organic compounds (VOCs). Both techniques involved the burial of samplers in the vadose zone and the saturated bottom sediments of nearby streams. One method, the empty-tube technique, allowed vapors to pass through a permeable membrane and accumulate inside an inverted empty test tube. A sample was extracted and analyzed on site using a portable gas chromatograph. As a comparison to this method, an activated-carbon technique was also used in certain areas. This method uses a vapor collector consisting of a test tube containing activated carbon as a sorbent for VOCs.
Three-dimensional face model reproduction method using multiview images
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio
1991-11-01
This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images and to synthesize images of the model virtually viewed from different angles, with natural shadows to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image, and the personal face model, representing the individual character, is reproduced. Next, an oblique view image is taken by a TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined using the face direction, namely the rotation angle, which is detected from the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.
Method for accurate growth of vertical-cavity surface-emitting lasers
Chalmers, S.A.; Killeen, K.P.; Lear, K.L.
1995-03-14
The authors report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, they can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%.
NASA Astrophysics Data System (ADS)
Maalek, R.; Lichti, D. D.; Ruwanpura, J.
2015-08-01
The application of terrestrial laser scanners (TLSs) on construction sites for automating construction progress monitoring and controlling structural dimension compliance is growing markedly. However, current research in construction management relies on the planned building information model (BIM) to assign the accumulated point clouds to their corresponding structural elements, which may not be reliable in cases where the dimensions of the as-built structure differ from those of the planned model and/or the planned model is not available with sufficient detail. In addition, outliers exist in construction site datasets due to data artefacts caused by moving objects, occlusions and dust. In order to overcome the aforementioned limitations, a novel method for robust classification and segmentation of planar and linear features is proposed to reduce the effects of outliers present in the LiDAR data collected from construction sites. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a robust clustering method. A method is also proposed to robustly extract the points belonging to the flat-slab floors and/or ceilings without performing the aforementioned stages, in order to preserve computational efficiency. The applicability of the proposed method is investigated in two scenarios, namely, a laboratory with 30 million points and an actual construction site with over 150 million points. The results obtained from the two experiments validate the suitability of the proposed method for robust segmentation of planar and linear features in contaminated datasets, such as those collected from construction sites.
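The dimensionality classification underlying this approach can be sketched with an ordinary (non-robust) PCA: the eigenvalues of a neighbourhood's covariance matrix indicate whether it is linear, planar or volumetric. A plain-PCA sketch, noting that the paper's robust variant additionally downweights outliers; the thresholds are illustrative:

```python
import numpy as np

def classify_neighbourhood(points, lin_thr=0.6, plan_thr=0.6):
    """points: (N, 3) local neighbourhood. Returns 'linear', 'planar' or
    'volumetric' from covariance eigenvalues (thresholds are illustrative)."""
    centered = points - points.mean(axis=0)
    l1, l2, l3 = np.linalg.eigvalsh(np.cov(centered.T))  # ascending order
    linearity = (l3 - l2) / l3    # one dominant direction -> collinear
    planarity = (l2 - l1) / l3    # two dominant directions -> coplanar
    if linearity > lin_thr:
        return 'linear'
    if planarity > plan_thr:
        return 'planar'
    return 'volumetric'
```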
Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications
Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman
2017-01-01
Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847
Gai, Qingqing; Qu, Feng; Zhang, Tao; Zhang, Yukui
2011-07-15
Both magnetic particle adsorption and aqueous two-phase extraction (ATPE) are simple, fast and low-cost methods for protein separation. Selective protein adsorption by carboxyl-modified magnetic particles was investigated according to protein isoelectric point, solution pH and ionic strength. An aqueous two-phase system of PEG/sulphate exhibited selective separation and extraction of proteins before and after magnetic adsorption. The two combinations, magnetic adsorption followed by ATPE and ATPE followed by magnetic adsorption, were discussed and compared for the separation of a protein mixture of lysozyme, bovine serum albumin, trypsin, cytochrome c and myoglobin. Magnetic adsorption followed by ATPE was also applied to human serum separation. Copyright © 2011 Elsevier B.V. All rights reserved.
The VLITE Post-Processing Pipeline
NASA Astrophysics Data System (ADS)
Richards, Emily E.; Clarke, Tracy; Peters, Wendy; Polisensky, Emil; Kassim, Namir E.
2018-01-01
A post-processing pipeline to adaptively extract and catalog point sources is being developed to enhance the scientific value and accessibility of data products generated by the VLA Low-band Ionosphere and Transient Experiment (VLITE;
Section-Based Tree Species Identification Using Airborne LIDAR Point Cloud
NASA Astrophysics Data System (ADS)
Yao, C.; Zhang, X.; Liu, H.
2017-09-01
The application of LiDAR data in forestry initially focused on mapping forest communities, primarily for large-scale forest management and planning. With smaller-footprint, higher-sampling-density LiDAR data now available, detecting individual overstory trees, estimating crown parameters and identifying tree species have been demonstrated to be practicable. This paper proposes a section-based protocol for tree species identification, taking palm trees as an example. The section-based method detects objects through profiles along different directions, basically along the X-axis or Y-axis, and improves the utilization of spatial information to generate accurate results. Firstly, tree points are separated from man-made-object points by decision-tree-based rules, and a Crown Height Model (CHM) is created by subtracting the Digital Terrain Model (DTM) from the Digital Surface Model (DSM). Then key points are calculated and extracted to locate individual trees, and specific tree parameters related to species information, such as crown height, crown radius and cross point, are estimated. Finally, with these parameters we are able to identify certain tree species. Compared with species information measured on the ground, the proportion of correctly identified trees on all plots reached up to 90.65%. The identification results in this research demonstrate the ability to distinguish palm trees using LiDAR point clouds. Furthermore, with more prior knowledge, the section-based method enables the process to classify trees into different classes.
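The CHM construction and tree-location steps translate directly into array operations: subtract the DTM from the DSM and take local maxima of the result. A minimal sketch with SciPy, where the window size and height cutoff are assumed values:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def tree_tops(dsm, dtm, min_height=2.0, window=5):
    """CHM = DSM - DTM; candidate tree tops are local CHM maxima above
    `min_height` metres (window size and height cutoff are illustrative)."""
    chm = dsm - dtm
    is_peak = (chm == maximum_filter(chm, size=window)) & (chm > min_height)
    rows, cols = np.nonzero(is_peak)
    return chm, list(zip(rows.tolist(), cols.tolist()))
```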
A Data Cleaning Method for Big Trace Data Using Movement Consistency
Tang, Luliang; Zhang, Xia; Li, Qingquan
2018-01-01
Given the popularization of GPS technologies, the massive amounts of spatiotemporal GPS traces collected by vehicles are becoming a new kind of big data source for urban geographic information extraction. The growing volume of the dataset, however, creates processing and management difficulties, while its low quality generates uncertainties when investigating human activities. Based on the error distribution law and position accuracy of GPS data, we propose in this paper a data cleaning method for this kind of spatial big data using movement consistency. First, a trajectory is partitioned into a set of sub-trajectories using movement characteristic points; GPS points indicating that the motion status of the vehicle has transformed from one state into another are regarded as the movement characteristic points. Then, GPS data are cleaned based on the similarities of GPS points and the movement consistency model of the sub-trajectory. The movement consistency model is built using the random sample consensus algorithm, exploiting the high spatial consistency of high-quality GPS data. The proposed method is evaluated through extensive experiments using GPS trajectories generated by a sample of vehicles over a 7-day period in Wuhan city, China. The results show the effectiveness and efficiency of the proposed method. PMID:29522456
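The movement-consistency model is built with random sample consensus; the self-contained 2-D RANSAC line-fit sketch below shows how outlying GPS points within a sub-trajectory would be flagged. The iteration count and inlier tolerance are illustrative, not the paper's values:

```python
import numpy as np

def ransac_line(points, n_iter=500, tol=5.0, seed=0):
    """Fit a 2-D line to GPS positions (N, 2) with RANSAC and return the
    inlier mask; `tol` is the inlier distance in metres (illustrative)."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(*d)
        if norm == 0:
            continue
        n = np.array([-d[1], d[0]]) / norm      # unit normal to the line
        dist = np.abs((points - p) @ n)         # point-to-line distances
        mask = dist < tol
        if mask.sum() > best_mask.sum():        # keep the largest consensus set
            best_mask = mask
    return best_mask                            # ~best_mask marks suspect points
```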
3-Dimensional Reconstruction of the ROSETTA Targets - Application to Asteroid 2867 Steins
NASA Astrophysics Data System (ADS)
Besse, Sebastien; Groussin, O.; Jorda, L.; Lamy, P.; OSIRIS Team
2008-09-01
The OSIRIS imaging experiment aboard the Rosetta spacecraft will image asteroid Steins in September 2008, asteroid Lutetia in 2010, and comet 67P/Churyumov-Gerasimenko in 2014. An accurate determination of the shape is a key point for the success of the mission operations and scientific objectives. Based on the experience of previous space missions (Deep Impact, NEAR, Galileo, Hayabusa), we are developing our own procedure for the shape reconstruction of small bodies. We use two different techniques: (i) limb and terminator constraints and (ii) ground control point (GCP) constraints. The first method allows the determination of a rough shape of the body when it is poorly resolved and no features are visible on the surface, while the second method provides an accurate shape model using high resolution images. We are currently testing both methods on simulated data, using and developing different algorithms for limb and terminator extraction (e.g., wavelet), detection of points of interest (Harris, SUSAN, Fast Corner Detection), point pairing using correlation techniques (geometric model) and 3-dimensional reconstruction using line-of-sight information (photogrammetry). Both methods will be fully automated. We will hopefully present the 3D reconstruction of asteroid Steins from images obtained during its flyby. Acknowledgment: Sébastien Besse acknowledges CNES and Thales for funding.
Contour-Based Corner Detection and Classification by Using Mean Projection Transform
Kahaki, Seyed Mostafa Mousavi; Nordin, Md Jan; Ashtari, Amir Hossein
2014-01-01
Image corner detection is a fundamental task in computer vision. Many applications require reliable detectors to accurately detect corner points, commonly achieved by using image contour information. The curvature definition is sensitive to local variation and edge aliasing, and available smoothing methods are not sufficient to address these problems properly. Hence, we propose Mean Projection Transform (MPT) as a corner classifier and parabolic fit approximation to form a robust detector. The first step is to extract corner candidates using MPT based on the integral properties of the local contours in both the horizontal and vertical directions. Then, an approximation of the parabolic fit is calculated to localize the candidate corner points. The proposed method presents fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques, especially in comparison with curvature scale space (CSS) methods. Moreover, a new evaluation metric, called accuracy of repeatability (AR), is introduced. AR combines repeatability and the localization error (Le) for finding the probability of correct detection in the target image. The output results exhibit better repeatability, localization, and AR for the detected points compared with the criteria in original and transformed images. PMID:24590354
NASA Astrophysics Data System (ADS)
Korzeniowska, Karolina; Mandlburger, Gottfried; Klimczyk, Agata
2013-04-01
The paper presents an evaluation of different terrain point extraction algorithms for Airborne Laser Scanning (ALS) point clouds. The research area covers eight test sites in the Małopolska Province (Poland) with point densities varying between 3-15 points/m² and varying surface and land cover characteristics. In this paper existing implementations of algorithms were considered; approaches based on mathematical morphology, progressive densification, robust surface interpolation and segmentation were compared. From the group of morphological filters, the Progressive Morphological Filter (PMF) proposed by Zhang et al. (2003), as implemented in LIS software, was evaluated. From the progressive densification methods developed by Axelsson (2000), Martin Isenburg's implementation in LAStools software (LAStools, 2012) was chosen. The third group of methods are surface-based filters; here we used the hierarchic robust interpolation approach of Kraus and Pfeifer (1998) as implemented in SCOP++ (Trimble, 2012). The fourth group of methods works on segmentation; from this filtering concept the segmentation algorithm available in LIS was tested (Wichmann, 2012). The main aim was to run the automatic ground classification in default mode or with the default parameters selected by the developers of the algorithms, on the assumption that the default settings are the parameters with which the best results can be achieved. Where it was not possible to apply an algorithm in default mode, a combination of the available parameters most crucial for ground extraction was selected. As a result of these analyses, several output LAS files with different ground classifications were obtained. The results were described on the basis of qualitative and quantitative analyses, both in a formalized description, and the classification differences were verified on the point cloud data. Qualitative verification of ground extraction was made by visual inspection of the results (Sithole and Vosselman, 2004; Meng et al., 2010), and the results of these analyses were presented as a weighted assessment graph. The quantitative analyses were evaluated on the basis of Type I, Type II and Total errors (Sithole and Vosselman, 2003). The achieved results show that the analysed algorithms yield different classification accuracies depending on the landscape and land cover. The simplest terrain for ground extraction was flat rural area with sparse vegetation; the most difficult were mountainous areas with very dense vegetation where only a few ground points were available. Generally the LAStools algorithm gives good results in every type of terrain, but the ground surface is too smooth. The LIS Progressive Morphological Filter gives good results in forested flat and low-slope areas. The surface-based algorithm from SCOP++ gives good results in mountainous areas, both forested and built-up, because it better preserves steep slopes, sharp ridges and breaklines, but it sometimes fails to remove off-terrain objects from the ground class. The segmentation-based algorithm in LIS gives quite good results in built-up flat areas, but does not work well in forested areas.
Bibliography:
Axelsson, P., 2000. DEM generation from laser scanner data using adaptive TIN models. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXIII (Pt. B4/1), 110-117.
Kraus, K., Pfeifer, N., 1998. Determination of terrain models in wooded areas with airborne laser scanner data. ISPRS Journal of Photogrammetry & Remote Sensing 53 (4), 193-203.
LAStools website: http://www.cs.unc.edu/~isenburg/lastools/ (verified in September 2012).
Meng, X., Currit, N., Zhao, K., 2010. Ground Filtering Algorithms for Airborne LiDAR Data: A Review of Critical Issues. Remote Sensing 2, 833-860.
Sithole, G., Vosselman, G., 2003. Report: ISPRS Comparison of Filters. Commission III, Working Group 3. Department of Geodesy, Faculty of Civil Engineering and Geosciences, Delft University of Technology, The Netherlands.
Sithole, G., Vosselman, G., 2004. Experimental comparison of filter algorithms for bare-Earth extraction from airborne laser scanning point clouds. ISPRS Journal of Photogrammetry & Remote Sensing 59, 85-101.
Trimble, 2012. http://www.trimble.com/geospatial/aerial-software.aspx (verified in November 2012).
Wichmann, V., 2012. LIS Command Reference, LASERDATA GmbH, 1-231.
Zhang, K., Chen, S.-C., Whitman, D., Shyu, M.-L., Yan, J., Zhang, C., 2003. A progressive morphological filter for removing non-ground measurements from airborne LIDAR data. IEEE Transactions on Geoscience and Remote Sensing 41(4), 872-882.
NASA Astrophysics Data System (ADS)
Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui
2018-02-01
An interferometer technique based on the temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement, and pulse-to-pulse alignment is analyzed for measurement over large delay distances. Firstly, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringe analysis and the envelope extraction process are discussed, and optimization methods of pulse-to-pulse alignment for practical long distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in the absolute distance measurement of 20 μm with a path length difference of 9 m. The improvement of the standard deviation in the experimental results shows that these approaches have potential for practical distance measurement.
Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories
NASA Astrophysics Data System (ADS)
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.
Schultz, M.M.; Furlong, E.T.
2008-01-01
Treated wastewater effluent is a potential environmental point source for antidepressant pharmaceuticals. A quantitative method was developed for the determination of trace levels of antidepressants in environmental aquatic matrixes using solid-phase extraction coupled with liquid chromatography-electrospray ionization tandem mass spectrometry. Recoveries of parent antidepressants from matrix spiking experiments for the individual antidepressants ranged from 72 to 118% at low concentrations (0.5 ng/L) and 70 to 118% at high concentrations (100 ng/L) for the solid-phase extraction method. Method detection limits for the individual antidepressant compounds ranged from 0.19 to 0.45 ng/L. The method was applied to wastewater effluent and samples collected from a wastewater-dominated stream. Venlafaxine was the predominant antidepressant observed in wastewater and river water samples. Individual antidepressant concentrations found in the wastewater effluent ranged from 3 (duloxetine) to 2190 ng/L (venlafaxine), whereas individual concentrations in the wastewater-dominated stream ranged from 0.72 (norfluoxetine) to 1310 ng/L (venlafaxine). © 2008 American Chemical Society.
An improved algorithm of laser spot center detection in strong noise background
NASA Astrophysics Data System (ADS)
Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong
2018-01-01
Laser spot center detection is demanded in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, median filtering was first used to remove noise while preserving the edge details of the image. Secondly, binarization of the laser spot image was carried out to extract the target image from the background. Then morphological filtering was performed to eliminate noise points inside and outside the spot. At last, the edge of the pretreated spot image was extracted and the laser spot center was obtained using a circle fitting method. On the foundation of the circle fitting algorithm, the improved algorithm adds median filtering, morphological filtering and other processing steps. Theoretical analysis and experimental verification show that this method effectively filters background noise, which enhances the anti-interference ability of laser spot center detection and also improves the detection accuracy.
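The processing chain described above maps naturally onto OpenCV primitives; the following sketch reproduces it under assumed parameter choices (kernel sizes, Otsu thresholding) and uses a minimum enclosing circle as a simple stand-in for the circle fit:

```python
import cv2

def spot_center(gray):
    """Median filter -> binarize -> morphological open/close -> edge points
    -> circle estimate. `gray` is an 8-bit single-channel image."""
    smooth = cv2.medianBlur(gray, 5)                          # suppress impulse noise
    _, binary = cv2.threshold(smooth, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)  # remove outside specks
    clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)  # fill holes in the spot
    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    edge = max(contours, key=cv2.contourArea)                 # largest blob boundary
    # Minimum enclosing circle as a simple estimate; a least-squares circle
    # fit over the edge points could be substituted here.
    (cx, cy), radius = cv2.minEnclosingCircle(edge)
    return (cx, cy), radius
```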
Hoff, Rodrigo Barcellos; Rübensam, Gabriel; Jank, Louise; Barreto, Fabiano; Peralba, Maria do Carmo Ruaro; Pizzolato, Tânia Mara; Silvia Díaz-Cruz, M; Barceló, Damià
2015-01-01
In residue analysis of veterinary drugs in foodstuffs, matrix effects are one of the most critical points. This work presents a discussion of approaches used to estimate, minimize and monitor matrix effects in bioanalytical methods. Qualitative and quantitative methods for the estimation of matrix effects, such as post-column infusion, slope-ratio analysis, calibration curves (mathematical and statistical analysis) and control chart monitoring, are discussed using real data. Matrix effects vary over a wide range depending on the analyte and the sample preparation method: pressurized liquid extraction for liver samples showed matrix effects from 15.5 to 59.2%, while ultrasound-assisted extraction gave values from 21.7 to 64.3%. The influence of the matrix itself was also evaluated: for sulfamethazine analysis, losses of signal varied from -37 to -96% for fish and eggs, respectively. Advantages and drawbacks are also discussed in the context of a proposed workflow for matrix effect assessment, applied to real data from sulfonamide residue analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
Modifications in SIFT-based 3D reconstruction from image sequence
NASA Astrophysics Data System (ADS)
Wei, Zhenzhong; Ding, Boshen; Wang, Wei
2014-11-01
In this paper, we aim to reconstruct 3D points of a scene from related images. The Scale Invariant Feature Transform (SIFT), as a feature extraction and matching algorithm, has been proposed and improved for years and widely used in image alignment and stitching, image recognition and 3D reconstruction. Because of the robustness and reliability of SIFT's feature extraction and matching, we use it to find correspondences between images, and we describe a SIFT-based method to reconstruct sparse 3D points from ordered images. In the matching process, we modify the search for correct correspondences and obtain a satisfying matching result: rejecting the "questioned" points before the initial matching makes the final matching more reliable. Given SIFT's invariance to image scale, rotation, and changes in imaging conditions, we propose a way to delete the duplicate reconstructed points that occur in the sequential reconstruction procedure, which improves the accuracy of the reconstruction. By removing the duplicated points, we avoid the collapse that can be caused by inexact initialization or error accumulation; the restriction found in some approaches, that all reprojected points must be visible at all times, also does not apply in our situation. Small imprecisions can make a big difference as the number of images increases, and the paper contrasts the results obtained with and without the modifications. Moreover, we present an approach to evaluate the reconstruction by comparing the reconstructed angles and length ratios with actual values using a calibration target in the scene. The proposed evaluation method is easy to carry out and of great practical value; even without Internet image datasets, we can evaluate our own results. The whole algorithm has been tested on several image sequences, both from the internet and from our own shots.
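The correspondence step can be reproduced with OpenCV's SIFT together with Lowe's ratio test, one standard way of rejecting ambiguous ("questioned") matches; the 0.75 ratio below is a conventional value, not necessarily this paper's:

```python
import cv2

def sift_matches(img1, img2, ratio=0.75):
    """Detect SIFT keypoints in two grayscale images and keep matches that
    pass Lowe's ratio test (ratio value is a conventional choice)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        if m.distance < ratio * n.distance:   # unambiguous nearest neighbour
            good.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return good
```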
Barry, Michael J; Meleth, Sreelatha; Lee, Jeannette Y; Kreder, Karl J; Avins, Andrew L; Nickel, J Curtis; Roehrborn, Claus G; Crawford, E David; Foster, Harris E; Kaplan, Steven A; McCullough, Andrew; Andriole, Gerald L; Naslund, Michael J; Williams, O Dale; Kusek, John W; Meyers, Catherine M; Betz, Joseph M; Cantor, Alan; McVary, Kevin T
2011-09-28
Saw palmetto fruit extracts are widely used for treating lower urinary tract symptoms attributed to benign prostatic hyperplasia (BPH); however, recent clinical trials have questioned their efficacy, at least at standard doses (320 mg/d). To determine the effect of saw palmetto extract (Serenoa repens, from saw palmetto berries) at up to 3 times the standard dose on lower urinary tract symptoms attributed to BPH. A double-blind, multicenter, placebo-controlled randomized trial at 11 North American clinical sites conducted between June 5, 2008, and October 10, 2010, of 369 men aged 45 years or older, with a peak urinary flow rate of at least 4 mL/s, an American Urological Association Symptom Index (AUASI) score of between 8 and 24 at 2 screening visits, and no exclusions. One, 2, and then 3 doses (320 mg/d) of saw palmetto extract or placebo, with dose increases at 24 and 48 weeks. Difference in AUASI score between baseline and 72 weeks. Secondary outcomes included measures of urinary bother, nocturia, peak uroflow, postvoid residual volume, prostate-specific antigen level, participants' global assessments, and indices of sexual function, continence, sleep quality, and prostatitis symptoms. Between baseline and 72 weeks, mean AUASI scores decreased from 14.42 to 12.22 points (-2.20 points; 95% CI, -3.04 to -1.36) with saw palmetto extract and from 14.69 to 11.70 points (-2.99 points; 95% CI, -3.81 to -2.17) with placebo. The group mean difference in AUASI score change from baseline to 72 weeks between the saw palmetto extract and placebo groups was 0.79 points favoring placebo (upper bound of the 1-sided 95% CI most favorable to saw palmetto extract was 1.77 points, 1-sided P = .91). Saw palmetto extract was no more effective than placebo for any secondary outcome. No clearly attributable adverse effects were identified. Increasing doses of a saw palmetto fruit extract did not reduce lower urinary tract symptoms more than placebo. clinicaltrials.gov Identifier: NCT00603304.
How Are They Now? Longer Term Effects of eCoaching through Online Bug-in-Ear Technology
ERIC Educational Resources Information Center
Rock, Marcia L.; Schumacker, Randall E.; Gregg, Madeleine; Howard, Pamela W.; Gable, Robert A.; Zigmond, Naomi
2014-01-01
In this study, using mixed methods, we investigated the longer term effects of eCoaching through advanced online bug-in-ear (BIE) technology. Quantitative data on five dependent variables were extracted from 14 participants' electronically archived video files at three points in time--Spring 1 (i.e., baseline, which was the first semester of…
NASA Astrophysics Data System (ADS)
Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang
2014-11-01
The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can accurately predict the quality of a distorted image in agreement with human opinions, in which feature extraction is an important issue. However, the features used in state-of-the-art "general purpose" NR-IQA algorithms are usually either natural scene statistics (NSS) based or perceptually relevant; therefore, the performance of these models is limited. To further improve the performance of NR-IQA, we propose a general purpose NR-IQA algorithm which combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract the point-wise statistics of single pixel values, which are characterized by a generalized Gaussian distribution model to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. Then a mapping is learned to predict quality scores using support vector regression. The experimental results on benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and leads to significant performance improvements over state-of-the-art methods.
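Fitting a generalized Gaussian distribution (GGD) to pixel statistics is commonly done by moment matching: the ratio (E|x|)²/E[x²] depends only on the shape parameter and can be inverted numerically. A sketch under that assumption; the search grid resolution is arbitrary:

```python
import numpy as np
from scipy.special import gamma

def ggd_shape(x, alphas=np.arange(0.2, 10.0, 0.01)):
    """Estimate the GGD shape parameter of zero-mean samples `x` by matching
    r = E[|x|]^2 / E[x^2] against its theoretical value
    rho(alpha) = Gamma(2/alpha)^2 / (Gamma(1/alpha) * Gamma(3/alpha))."""
    r = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    rho = gamma(2.0 / alphas) ** 2 / (gamma(1.0 / alphas) * gamma(3.0 / alphas))
    return alphas[np.argmin(np.abs(rho - r))]   # closest point on the grid
```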
NASA Astrophysics Data System (ADS)
Hild, Kenneth E.; Alleva, Giovanna; Nagarajan, Srikantan; Comani, Silvia
2007-01-01
In this study we compare the performance of six independent components analysis (ICA) algorithms on 16 real fetal magnetocardiographic (fMCG) datasets for the application of extracting the fetal cardiac signal. We also compare the extraction results for real data with the results previously obtained for synthetic data. The six ICA algorithms are FastICA, CubICA, JADE, Infomax, MRMI-SIG and TDSEP. The results obtained using real fMCG data indicate that the FastICA method consistently outperforms the others in regard to separation quality and that the performance of an ICA method that uses temporal information suffers in the presence of noise. These two results confirm the previous results obtained using synthetic fMCG data. There were also two notable differences between the studies based on real and synthetic data. The differences are that all six ICA algorithms are independent of gestational age and sensor dimensionality for synthetic data, but depend on gestational age and sensor dimensionality for real data. It is possible to explain these differences by assuming that the number of point sources needed to completely explain the data is larger than the dimensionality used in the ICA extraction.
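As a point of reference, FastICA, the best performer in this comparison, is available off the shelf in scikit-learn; a minimal usage sketch with random placeholder data standing in for the multichannel recording:

```python
import numpy as np
from sklearn.decomposition import FastICA

# X stands in for a multichannel fMCG recording, shape (n_samples, n_channels);
# random data is used here only as a placeholder.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 16))

ica = FastICA(n_components=16, random_state=0, max_iter=1000)
sources = ica.fit_transform(X)   # (n_samples, n_components) estimated sources
mixing = ica.mixing_             # (n_channels, n_components) mixing matrix
# The fetal cardiac component would then be selected, e.g. by its beat rate.
```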
Altunay, Nail; Gürkan, Ramazan
2015-05-15
A new cloud-point extraction (CPE) method for the determination of antimony species in biological and beverage samples has been established with flame atomic absorption spectrometry (FAAS). The method is based on the formation of competitive ion-pairing complexes of Sb(III) and Sb(V) with Victoria Pure Blue BO (VPB+) at pH 10. The antimony species were individually detected by FAAS. Under the optimized conditions, the calibration range for Sb(V) is 1-250 μg L(-1) with a detection limit of 0.25 μg L(-1) and a sensitivity enhancement factor of 76.3, while the calibration range for Sb(III) is 10-400 μg L(-1) with a detection limit of 5.15 μg L(-1) and a sensitivity enhancement factor of 48.3. The precision, as a relative standard deviation, is in the range of 0.24-2.35%. The method was successfully applied to the speciation analysis of antimony in the samples, and the validation was verified by analysis of certified reference materials (CRMs). Copyright © 2014 Elsevier Ltd. All rights reserved.
Extracting the information of coastline shape and its multiple representations
NASA Astrophysics Data System (ADS)
Liu, Ying; Li, Shujun; Tian, Zhen; Chen, Huirong
2007-06-01
Based on a study of coastlines, a new approach to multiple representation is put forward in this paper: simulating the way humans think when generalizing, building an appropriate mathematical model, describing the coastline graphically, and extracting all kinds of coastline shape information. The coastline is then automatically generalized based on knowledge rules and arithmetic operators. Representing the coastline shape information by building a Douglas binary tree over the curve reveals the shape character of the coastline both microscopically and macroscopically. The extracted coastline information includes the local characteristic points and their orientation, the curve structure and the topological traits; the curve structure can be divided into single curves and curve clusters. By determining the knowledge rules for coastline generalization, the generalization scale and the shape parameters, the coastline automatic generalization model is finally established. The multiple-scale representation method presented in this paper has some strong points: it follows the human mode of thinking and preserves the natural character of the curve prototype, and the binary tree structure can control the coastline similarity, avoid self-intersection phenomena and maintain consistent topological relationships.
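The Douglas binary tree mentioned here is built on the classic Douglas-Peucker split: recursively find the point farthest from the chord and split there, so the split points form the tree's nodes. A recursive sketch with an illustrative tolerance:

```python
import numpy as np

def douglas_peucker(points, tol=1.0):
    """Simplify a 2-D polyline (N, 2). The recursive split points are exactly
    the nodes that would populate a Douglas binary tree."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    p, q = points[0], points[-1]
    d = q - p
    norm = np.hypot(*d)
    if norm == 0:                                    # degenerate chord
        dist = np.hypot(*(points - p).T)
    else:
        n = np.array([-d[1], d[0]]) / norm           # unit normal to the chord
        dist = np.abs((points - p) @ n)              # point-to-chord distances
    k = int(np.argmax(dist))
    if dist[k] <= tol:
        return np.array([p, q])                      # chord is close enough
    left = douglas_peucker(points[:k + 1], tol)      # split at farthest point
    right = douglas_peucker(points[k:], tol)
    return np.vstack([left[:-1], right])             # drop the shared point
```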
Method and system for data clustering for very large databases
NASA Technical Reports Server (NTRS)
Livny, Miron (Inventor); Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor)
1998-01-01
Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
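A hedged sketch of the clustering feature triple described above, CF = (N, LS, SS): merging clusters and deriving the centroid and radius require only these three sums, which is what keeps the CF tree memory-bounded.

```python
# Sketch of a clustering feature (CF): the number of points, the linear
# sum of the points, and the square sum of the points, as in the abstract.
import numpy as np

class ClusteringFeature:
    def __init__(self, point):
        p = np.asarray(point, dtype=float)
        self.n = 1            # number of data points in the cluster
        self.ls = p.copy()    # linear sum of the data points
        self.ss = p @ p       # square sum (sum of squared norms)

    def merge(self, other):
        """CFs are additive: merging just adds the three components."""
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        """RMS distance of member points from the centroid."""
        c = self.centroid()
        return np.sqrt(max(self.ss / self.n - c @ c, 0.0))

cf = ClusteringFeature([1.0, 2.0])
cf.merge(ClusteringFeature([3.0, 4.0]))
print(cf.centroid(), cf.radius())   # [2. 3.] 1.414...
```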
Ohashi, Akira; Tsuguchi, Akira; Imura, Hisanori; Ohashi, Kousaburo
2004-07-01
The cloud point extraction behavior of aluminum(III) with 8-quinolinol (HQ) or 2-methyl-8-quinolinol (HMQ) and Triton X-100 was investigated in the absence and presence of 3,5-dichlorophenol (Hdcp). Aluminum(III) was almost completely extracted with HQ and 4% (v/v) Triton X-100 above pH 5.0, but was not extracted with HMQ-Triton X-100. In the presence of Hdcp, however, it was almost quantitatively extracted with HMQ-Triton X-100. The synergistic effect of Hdcp on the extraction of aluminum(III) with HMQ and Triton X-100 may be caused by the formation of a mixed-ligand complex, Al(dcp)(MQ)2.
Topical herbal therapies for treating osteoarthritis
Cameron, Melainie; Chrubasik, Sigrun
2014-01-01
Background Before extraction and synthetic chemistry were invented, musculoskeletal complaints were treated with preparations from medicinal plants. They were either administered orally or topically. In contrast to the oral medicinal plant products, topicals act in part as counterirritants or are toxic when given orally. Objectives To update the previous Cochrane review of herbal therapy for osteoarthritis from 2000 by evaluating the evidence on effectiveness for topical medicinal plant products. Search methods Databases for mainstream and complementary medicine were searched using terms to include all forms of arthritis combined with medicinal plant products. We searched electronic databases (Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, AMED, CINAHL, ISI Web of Science, World Health Organization Clinical Trials Registry Platform) to February 2013, unrestricted by language. We also searched the reference lists from retrieved trials. Selection criteria Randomised controlled trials of herbal interventions used topically, compared with inert (placebo) or active controls, in people with osteoarthritis were included. Data collection and analysis Two review authors independently selected trials for inclusion, assessed the risk of bias of included studies and extracted data. Main results Seven studies (seven different medicinal plant interventions; 785 participants) were included. Single studies (five studies, six interventions) and non-comparable studies (two studies, one intervention) precluded pooling of results. Moderate evidence from a single study of 174 people with hand osteoarthritis indicated that treatment with Arnica extract gel probably results in similar benefits as treatment with ibuprofen (a non-steroidal anti-inflammatory drug) with a similar number of adverse events. Mean pain in the ibuprofen group was 44.2 points on a 100 point scale; treatment with Arnica gel reduced the pain by 4 points after three weeks: mean difference (MD) −3.8 points (95% confidence interval (CI) −10.1 to 2.5), absolute reduction 4% (10% reduction to 3% increase). Hand function was 7.5 points on a 30 point scale in the ibuprofen-treated group; treatment with Arnica gel reduced function by 0.4 points (MD −0.4, 95% CI −1.75 to 0.95), absolute improvement 1% (6% improvement to 3% decline). Total adverse events were higher in the Arnica gel group (13% compared to 8% in the ibuprofen group): relative risk (RR) 1.65 (95% CI 0.72 to 3.76). Moderate quality evidence from a single trial of 99 people with knee osteoarthritis indicated that compared with placebo, Capsicum extract gel probably does not improve pain or knee function, and is commonly associated with treatment-related adverse events including skin irritation and a burning sensation. At four weeks follow-up, mean pain in the placebo group was 46 points on a 100 point scale; treatment with Capsicum extract reduced pain by 1 point (MD −1, 95% CI −6.8 to 4.8), absolute reduction of 1% (7% reduction to 5% increase). Mean knee function in the placebo group was 34.8 points on a 96 point scale at four weeks; treatment with Capsicum extract improved function by a mean of 2.6 points (MD −2.6, 95% CI −9.5 to 4.2), an absolute improvement of 3% (10% improvement to 4% decline). Adverse event rates were greater in the Capsicum extract group (80% compared with 20% in the placebo group, rate ratio 4.12, 95% CI 3.30 to 5.17). The number needed to treat to result in adverse events was 2 (95% CI 1 to 2).
Moderate evidence from a single trial of 220 people with knee osteoarthritis suggested that comfrey extract gel probably improves pain without increasing adverse events. At three weeks, the mean pain in the placebo group was 83.5 points on a 100 point scale. Treatment with comfrey reduced pain by a mean of 41.5 points (MD −41.5, 95% CI −48 to −34), an absolute reduction of 42% (34% to 48% reduction). Function was not reported. Adverse events were similar: 6% (7/110) reported adverse events in the comfrey group compared with 14% (15/110) in the placebo group (RR 0.47, 95% CI 0.20 to 1.10). Although evidence from a single trial indicated that adhesive patches containing the Chinese herbal mixtures FNZG and SJG may improve pain and function, the clinical applicability of these findings is uncertain because participants were only treated and followed up for seven days. We are also uncertain whether other topical herbal products (Marhame-Mafasel compress, stinging nettle leaf) improve osteoarthritis symptoms, owing to the very low quality evidence from single trials. No serious side effects were reported. Authors' conclusions Although the mechanism of action of the topical medicinal plant products provides a rational basis for their use in the treatment of osteoarthritis, the quality and quantity of current research studies of effectiveness are insufficient. Arnica gel probably improves symptoms as effectively as a gel containing a non-steroidal anti-inflammatory drug, but with no better (and possibly worse) adverse event profile. Comfrey extract gel probably improves pain, and Capsicum extract gel probably will not improve pain or function at the doses examined in this review. Further high quality, fully powered studies are required to confirm the trends of effectiveness identified in studies so far. PMID:23728701
IN VIVO STUDIES AND STABILITY STUDY OF CLADOPHORA GLOMERATA EXTRACT AS A COSMETIC ACTIVE INGREDIENT.
Fabrowska, Joanna; Kapuscinska, Alicja; Leska, Boguslawa; Feliksik-Skrobich, Katarzyna; Nowak, Izabela
2017-03-01
Marine algae are widely used as raw materials in cosmetics. Likewise, the freshwater alga Cladophora glomerata may be a good source of fatty acids and other bioactive agents. The aims of this study were to find out whether the addition of an extract from freshwater C. glomerata affects the stability of prepared cosmetic emulsions, and to investigate the in vivo effects of the extract in cosmetic formulations on the hydration and elasticity of human skin. The extract from freshwater C. glomerata was obtained using supercritical fluid extraction (SFE). Two forms of O/W emulsions were prepared: a placebo and an emulsion containing 0.5% of the Cladophora SFE extract. The stability of the obtained emulsions was investigated using a Turbiscan Lab Expert. The emulsions were applied by volunteers daily. A corneometer was used to evaluate skin hydration and a cutometer to examine skin elasticity. Measurements were conducted at the reference point (week 0) and after the 1st, 2nd, 3rd and 4th week of application. The addition of the Cladophora extract insignificantly affected the stability of the emulsion. The extract from C. glomerata in the emulsion improved both skin hydration and elasticity. Thus, the freshwater C. glomerata extract prepared via the SFE method may be considered an effective cosmetic raw material for use as a moisturizing and firming agent.
Eskandari, Meghdad; Samavati, Vahid
2015-01-01
A Box-Behnken design (BBD) was used to evaluate the effects of ultrasonic power, extraction time, extraction temperature, and water-to-raw-material ratio on the extraction yield of the alcohol-insoluble polysaccharide of Althaea rosea leaf (ARLP). Purification was carried out by dialysis. Chemical analysis revealed that ARLP contained 12.69 ± 0.48% moisture, 79.33 ± 0.51% total sugar, 3.82 ± 0.21% protein, 11.25 ± 0.37% uronic acid and 3.77 ± 0.15% ash. Response surface methodology (RSM) showed that a significant quadratic regression equation with high R(2) (= 0.9997) was successfully fitted for the extraction yield of ARLP as a function of the independent variables; a sketch of such a fit is given below. The overall optimum was found at the combined level of ultrasonic power 91.85 W, extraction time 29.94 min, extraction temperature 89.78 °C, and water-to-raw-material ratio 28.77 mL/g. At this optimum point, the extraction yield of ARLP was 19.47 ± 0.41%. No significant (p > 0.05) difference was found between the actual and predicted (19.30 ± 0.075%) values. The results demonstrated that ARLP had strong scavenging activities on DPPH and hydroxyl radicals. Copyright © 2014 Elsevier B.V. All rights reserved.
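For readers unfamiliar with how the quadratic response surface behind a BBD is fitted, the following is a minimal sketch with synthetic numbers (the factor values, response and coefficients are invented, not the paper's data):

```python
# Fitting the quadratic RSM model y = b0 + sum(bi*xi) + sum(bii*xi^2)
# + sum(bij*xi*xj) by ordinary least squares on synthetic BBD-style data.
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]            # linear terms
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]       # squared terms
    cols += [X[:, i] * X[:, j]                              # interactions
             for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(15, 3))       # 3 coded factors, 15 runs
y = 19 - 2 * X[:, 0] ** 2 - X[:, 1] ** 2 + 0.5 * X[:, 2] + \
    0.1 * rng.standard_normal(15)          # synthetic yield response

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - np.sum((y - A @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.4f}")   # the optimum sits where the fitted gradient is zero
```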
Li, Zhongke; Yang, Huifang; Lü, Peijun; Wang, Yong; Sun, Yuchun
2015-01-01
Background and Objective To develop a real-time recording system based on computer binocular vision and two-dimensional image feature extraction to accurately record mandibular movement in three dimensions. Methods A computer-based binocular vision device with two digital cameras was used in conjunction with a fixed head retention bracket to track occlusal movement. Software was developed for extracting target spatial coordinates in real time based on two-dimensional image feature recognition. Plaster models of a subject's upper and lower dentition were made using conventional methods. A mandibular occlusal splint was made on the plaster model, and the occlusal surface was then removed. Temporary denture base resin was used to make a 3-cm handle, extending outside the mouth, that connected the anterior labial surface of the occlusal splint to a detection target with intersecting lines designed for spatial coordinate extraction. The subject's head was firmly fixed in place, and the occlusal splint was fully seated on the mandibular dentition. The subject was then asked to make various mouth movements while the mandibular movement target locus point set was recorded. Differences between the recorded coordinate values and the actual values of the 30 intersections on the detection target were analyzed using paired t-tests. Results The three-dimensional trajectory curve shapes of the mandibular movements were consistent with the respective subject movements. Mean coordinate differences and paired t-test results were as follows: X axis: -0.0037 ± 0.02953, P = 0.502; Y axis: 0.0037 ± 0.05242, P = 0.704; and Z axis: 0.0007 ± 0.06040, P = 0.952. The t-test results showed that the differences in the coordinate values of the 30 cross points were not statistically significant (P > 0.05). Conclusions The real-time recording system for three-dimensional mandibular movement, based on computer binocular vision and two-dimensional image feature recognition, achieved a recording accuracy of approximately ± 0.1 mm and is therefore suitable for clinical application. Further research is nevertheless necessary to confirm the clinical applications of the method. PMID:26375800
Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval
NASA Astrophysics Data System (ADS)
Chen, Yi-Chen; Lin, Chao-Hung
2016-06-01
With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query; the basic idea is to reuse existing 3D building models instead of reconstructing them from point clouds. To enable efficient retrieval, the models in a database are generally encoded compactly by a shape descriptor; however, most geometric descriptors in related works apply only to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by a Light Detection and Ranging (LiDAR) system, chosen for its efficient scene scanning and spatial information collection. Using sparse, noisy, and incompletely sampled point clouds as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in the database, the main goal being that both can be encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the heights, edges, and planes of the building. Finally, descriptors are derived from spatial histograms and used in the 3D model retrieval system. For retrieval, models are matched by the encoding coefficients of the point clouds and building models. In experiments, a database of about 900,000 3D models collected from the Internet is used to evaluate data retrieval. The results show a clear superiority of the proposed method over related methods.
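The first encoding step, rasterizing a roof point cloud into a top-view depth image, can be sketched as follows; the grid resolution and the keep-the-highest-return rule are assumptions made for illustration:

```python
# Sketch: rasterize an airborne LiDAR point cloud into a top-view depth
# image by keeping the maximum height per grid cell (roof surface).
import numpy as np

def topview_depth_image(points, cell=0.5):
    """points: (N, 3) array of x, y, z; returns a 2D height grid."""
    xy = points[:, :2]
    mn = xy.min(axis=0)
    ij = np.floor((xy - mn) / cell).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    img = np.full((h, w), np.nan)       # NaN marks empty cells
    for (i, j), z in zip(ij, points[:, 2]):
        r, c = j, i
        if np.isnan(img[r, c]) or z > img[r, c]:
            img[r, c] = z               # highest return wins
    return img

pts = np.random.default_rng(2).uniform(0, 10, (1000, 3))
print(topview_depth_image(pts, cell=1.0).shape)
```

Height, edge and plane features would then be computed on this grid exactly as on any ordinary image.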
Vessel Enhancement and Segmentation of 4D CT Lung Image Using Stick Tensor Voting
NASA Astrophysics Data System (ADS)
Cong, Tan; Hao, Yang; Jingli, Shi; Xuan, Yang
2016-12-01
Vessel enhancement and segmentation play a significant role in medical image analysis. This paper proposes a novel vessel enhancement and segmentation method for 4D CT lung images using a stick tensor voting algorithm, which addresses the vessel distortion issue of the vessel enhancement diffusion (VED) method; the enhanced results are then easily segmented using level-set segmentation. In our method, vessels are first filtered using Frangi's filter to reduce intrapulmonary noise and extract rough blood vessels. Secondly, the stick tensor voting algorithm is employed to estimate the correct direction along each vessel. This estimated direction is then used as the anisotropic diffusion direction in the VED algorithm, which makes the intensity diffusion of points located on the vessel wall consistent with the vessel directions and enhances the tubular features of the vessels. Finally, vessels are extracted from the enhanced image by applying a level-set segmentation method. Experimental results show that our method outperforms the traditional VED method in vessel enhancement and yields satisfactory vessel segmentations.
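Only the first stage, Frangi's vesselness filter, is compact enough to sketch here; the stick tensor voting and VED stages are not reproduced. The example below uses scikit-image's frangi on a synthetic slice standing in for CT data, and the threshold is an illustrative assumption:

```python
# Sketch of the first stage only: Frangi vesselness filtering of a
# synthetic 2D slice containing one bright tubular structure.
import numpy as np
from skimage.filters import frangi

rng = np.random.default_rng(3)
slice_2d = rng.normal(0.1, 0.02, (128, 128))
slice_2d[60:64, :] += 0.5            # synthetic bright "vessel"

# black_ridges=False because vessels are brighter than the background here
vesselness = frangi(slice_2d, sigmas=range(1, 5), black_ridges=False)
mask = vesselness > 0.5 * vesselness.max()   # crude pre-segmentation
print(mask.sum(), "candidate vessel pixels")
```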
NASA Astrophysics Data System (ADS)
Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.
2017-10-01
The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners or keypoints in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images detected by the NIR sensor, and was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, with the keypoints serving as ground control points. Keypoints are matched based on their descriptor vectors: nearest-neighbor matching is employed using a metric distance between descriptors, such as the Euclidean or city block distance. Rough matching outputs not only correct matches but also faulty ones. A previous work in automatic georeferencing incorporated a geometric restriction; in this work, we applied a simplified version of that method. Random sample consensus (RANSAC) was used to eliminate false matches and ensure the accuracy of the feature points from which the transformation parameters were derived; it identifies whether a point fits the transformation function and returns the inlier matches. The transformation matrix was solved with affine, projective, and polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of randomly selected interest points between the master image and the transformed slave image.
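A hedged OpenCV sketch of the detector-descriptor-RANSAC chain described above. The image file names are placeholders, and the parameter values (FAST threshold, ratio-test constant, reprojection threshold) are illustrative assumptions, not values from the paper:

```python
# Sketch: FAST corners + SIFT descriptors, ratio-test matching, and a
# RANSAC-estimated affine model mapping a slave image onto the master.
import cv2
import numpy as np

master = cv2.imread("master.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
slave = cv2.imread("slave.png", cv2.IMREAD_GRAYSCALE)

fast = cv2.FastFeatureDetector_create(threshold=25)
sift = cv2.SIFT_create()

kp1 = fast.detect(master, None)
kp1, des1 = sift.compute(master, kp1)     # SIFT descriptors at FAST corners
kp2 = fast.detect(slave, None)
kp2, des2 = sift.compute(slave, kp2)

matcher = cv2.BFMatcher(cv2.NORM_L2)      # Euclidean descriptor distance
pairs = matcher.knnMatch(des2, des1, k=2)
good = []
for pair in pairs:                        # nearest/second-nearest ratio test
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

src = np.float32([kp2[m.queryIdx].pt for m in good])
dst = np.float32([kp1[m.trainIdx].pt for m in good])
M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                  ransacReprojThreshold=3.0)
print("affine transform:\n", M)           # slave-to-master model
```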
Teasing Apart Complex Motions using VideoPoint
NASA Astrophysics Data System (ADS)
Fischer, Mark
2002-10-01
Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane, will be discussed. Methods for extracting the desired object motion will be given, as well as suggestions for shooting more easily analyzable video clips.
Distance Measurements In X-Ray Pictures
NASA Astrophysics Data System (ADS)
Forsgren, Per-Ola
1987-10-01
In this paper, a measurement method for the distance between binary objects is presented. It was developed for a specific purpose, the evaluation of rheumatic disease, but should also be useful in other applications. It is based on a distance map of the area between the binary objects. A skeleton is extracted from the distance map by searching for local maxima, and the distance measure is based on the average of the skeleton points in a defined measurement area. An objective criterion for selecting measurement points on the skeleton is proposed. Preliminary results indicate that good repeatability is attained.
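The core construction, a distance map in the gap between two binary objects with a skeleton taken at its local maxima, can be sketched as follows. The object shapes are invented; real use would operate on segmented radiographs:

```python
# Sketch: Euclidean distance map between two binary objects, with the
# mid-gap skeleton approximated by local maxima of the distance map.
import numpy as np
from scipy import ndimage

img = np.zeros((40, 40), dtype=bool)
img[5:15, 5:35] = True      # object 1 (e.g., one bone contour)
img[25:35, 5:35] = True     # object 2

gap = ~img
dist = ndimage.distance_transform_edt(gap)   # distance to nearest object

# local maxima of the distance map approximate the mid-gap skeleton
local_max = (dist == ndimage.maximum_filter(dist, size=3)) & (dist > 0)

mid = dist[15:25, 5:35]     # region between the two objects
print("half-gap along the mid skeleton:", mid.max())  # 5.0 px here
```

The measured joint-space width is then roughly twice the averaged skeleton value inside the chosen measurement area.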
Natural food colourants derived from onion wastes: application in a yoghurt product.
Mourtzinos, Ioannis; Prodromidis, Prodromos; Grigorakis, Spyros; Makris, Dimitris P; Biliaderis, Costas G; Moschakis, Thomas
2018-06-10
The valorization of onion (Allium cepa) solid wastes, a 450,000 tonnes/year waste in Europe, by a green extraction method is presented. Polyphenols of onion solid wastes were extracted using eco-friendly solvents, such as water and glycerol. The 2-hydroxypropyl-β-cyclodextrin was also used as a co-solvent for the augmentation of the extraction yield. The process has been optimized by implementing a central composite face centered design of experiments, with two replicates in the central point, taking into consideration the following independent variables: glycerol concentration, cyclodextrin concentration and temperature. The assessment of the extraction model was based on two responses: the total pigment yield and the antiradical capacity. LC-MS analysis was also employed in order to identify polyphenols and colourants of the obtained extracts. The main polyphenols found were quercetin and quercetin derivatives and the main colourant was cyanidin 3-O-glucoside. The extract was also tested as a food colourant in a yoghurt matrix. The onion leaf extract was found to be a stable natural colourant and could be utilized as an alternative ingredient to synthetic coloring agents. This article is protected by copyright. All rights reserved.
Cui, Qi; Wang, Li-Tao; Liu, Ju-Zhao; Wang, Hui-Mei; Guo, Na; Gu, Cheng-Bo; Fu, Yu-Jie
2017-09-01
A simple, green and efficient extraction method, modified solvent-free microwave extraction (M-SFME), was employed for the extraction of essential oils (EOs) from Amomum tsao-ko. Under the optimal parameters, the M-SFME process gave a higher extraction yield (1.13%) than solvent-free microwave extraction (SFME, 0.91%) or hydrodistillation (HD, 0.84%). Thirty-four volatile substances, representing 95.4% of the total composition, were identified. The IC50 values of the EOs determined by the DPPH radical scavenging activity and β-carotene/linoleic acid bleaching assays were 5.27 and 0.63 mg/ml, respectively. Furthermore, the EOs exhibited moderate to potent broad-spectrum antimicrobial activity against all tested strains, including five gram-positive and two gram-negative bacteria (MIC: 2.94-5.86 mg/ml). In general, M-SFME is a potential and desirable alternative for the extraction of EOs from aromatic herbs, and the EOs obtained from A. tsao-ko can be explored as a potent natural antimicrobial and antioxidant preservative ingredient in the food industry from the technological and economic points of view. Copyright © 2017 Elsevier B.V. All rights reserved.
Sustained Subconjunctival Protein Delivery Using a Thermosetting Gel Delivery System
2010-01-01
Purpose: An effective treatment modality for posterior eye diseases would provide prolonged delivery of therapeutic agents, including macromolecules, to eye tissues using a safe and minimally invasive method. The goal of this study was to assess the ability of a thermosetting gel to deliver a fluorescently labeled protein, Alexa 647 ovalbumin, to the choroid and retina of rats following a single subconjunctival injection of the gel. Additional experiments were performed to compare in vitro to in vivo ovalbumin release rates from the gel. Methods: The ovalbumin content of the eye tissues was monitored by spectrophotometric assays of tissue extracts of Alexa 647 ovalbumin from dissected sclera, choroid, and retina at time points ranging from 2 h to 14 days. At the same time points, fluorescence microscopy images of tissue samples were also obtained. Measurement of intact ovalbumin was verified by LDS-PAGE analysis of the tissue extract solutions. In vitro release of Alexa 488 ovalbumin into 37°C PBS solutions from ovalbumin-loaded gel pellets was also monitored over time by spectrophotometric assay. In vivo ovalbumin release rates were determined by measurement of residual ovalbumin extracted from gel pellets removed from rat eyes at various time intervals. Results: Our results indicate that ovalbumin concentrations can be maintained at measurable levels in the sclera, choroid, and retina of rats for up to 14 days using the thermosetting gel delivery system. The concentration of ovalbumin exhibited a gradient that decreased from sclera to choroid and to retina. The in vitro release rate profiles were similar to the in vivo release profiles. Conclusions: Our findings suggest that the thermosetting gel system may be a feasible method for safe and convenient sustained delivery of proteins to choroidal and retinal tissue in the posterior segments of the eye. PMID:20148655
Preparing silica aerogel monoliths via a rapid supercritical extraction method.
Carroll, Mary K; Anderson, Ann M; Gorka, Caroline A
2014-02-28
A procedure for the fabrication of monolithic silica aerogels in eight hours or less via a rapid supercritical extraction process is described. The procedure requires 15-20 min of preparation time, during which a liquid precursor mixture is prepared and poured into wells of a metal mold that is placed between the platens of a hydraulic hot press, followed by several hours of processing within the hot press. The precursor solution consists of a 1.0:12.0:3.6:3.5 x 10(-3) molar ratio of tetramethylorthosilicate (TMOS):methanol:water:ammonia. In each well of the mold, a porous silica sol-gel matrix forms. As the temperature of the mold and its contents is increased, the pressure within the mold rises. After the temperature/pressure conditions surpass the supercritical point for the solvent within the pores of the matrix (in this case, a methanol/water mixture), the supercritical fluid is released, and monolithic aerogel remains within the wells of the mold. With the mold used in this procedure, cylindrical monoliths of 2.2 cm diameter and 1.9 cm height are produced. Aerogels formed by this rapid method have comparable properties (low bulk and skeletal density, high surface area, mesoporous morphology) to those prepared by other methods that involve either additional reaction steps or solvent extractions (lengthier processes that generate more chemical waste). The rapid supercritical extraction method can also be applied to the fabrication of aerogels based on other precursor recipes.
Preparing Silica Aerogel Monoliths via a Rapid Supercritical Extraction Method
Gorka, Caroline A.
2014-01-01
A procedure for the fabrication of monolithic silica aerogels in eight hours or less via a rapid supercritical extraction process is described. The procedure requires 15-20 min of preparation time, during which a liquid precursor mixture is prepared and poured into wells of a metal mold that is placed between the platens of a hydraulic hot press, followed by several hours of processing within the hot press. The precursor solution consists of a 1.0:12.0:3.6:3.5 x 10-3 molar ratio of tetramethylorthosilicate (TMOS):methanol:water:ammonia. In each well of the mold, a porous silica sol-gel matrix forms. As the temperature of the mold and its contents is increased, the pressure within the mold rises. After the temperature/pressure conditions surpass the supercritical point for the solvent within the pores of the matrix (in this case, a methanol/water mixture), the supercritical fluid is released, and monolithic aerogel remains within the wells of the mold. With the mold used in this procedure, cylindrical monoliths of 2.2 cm diameter and 1.9 cm height are produced. Aerogels formed by this rapid method have comparable properties (low bulk and skeletal density, high surface area, mesoporous morphology) to those prepared by other methods that involve either additional reaction steps or solvent extractions (lengthier processes that generate more chemical waste). The rapid supercritical extraction method can also be applied to the fabrication of aerogels based on other precursor recipes. PMID:24637334
NASA Astrophysics Data System (ADS)
Baillard, C.; Dissard, O.; Jamet, O.; Maître, H.
Above-ground analysis is a key step in the reconstruction of urban scenes, but it is a difficult task because of the diversity of the objects involved. We propose a new method for above-ground extraction from an aerial stereo pair which does not require any assumption about object shape or nature. A Digital Surface Model is first produced by a stereoscopic matching stage that preserves discontinuities, and is then processed by a region-based Markovian classification algorithm. The detected above-ground areas are finally characterized as man-made or natural according to the grey-level information. The quality of the results is assessed and discussed.
A fast image matching algorithm based on key points
NASA Astrophysics Data System (ADS)
Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng
2014-05-01
Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of a matching algorithm for craft navigation, such as speed, accuracy and adaptability, a fast key point image matching method is investigated and developed. The main research tasks include: (1) developing an improved fast key point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating the self-adapting threshold was introduced for images with different contrast, and a Hessian matrix was adopted to eliminate insecure edge points in order to obtain key points with higher stability. This approach to detecting key points requires little computation and offers high positioning accuracy and strong noise resistance; (2) using PCA-SIFT to describe the key points. A 128-dimensional vector is formed by the SIFT method for each extracted key point. A low-dimensional feature space was established from the eigenvectors of all the key points, and each descriptor was projected onto this space to form a low-dimensional eigenvector, so that the key points are re-described by dimension-reduced eigenvectors. After the PCA reduction, the descriptor shrinks from the original 128 dimensions to 20, which reduces the dimensionality of the approximate nearest-neighbour search and thereby increases overall speed (see the sketch after this abstract); (3) using the distance ratio between the nearest and second-nearest neighbours as the criterion for initial matching points, from which the original matched point pairs are obtained. Based on an analysis of the common methods for eliminating false matching point pairs (e.g., RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to further discard false matches; and (4) introducing an affine transformation model to correct the coordinate difference between the real-time image and the reference image, which results in the matching of the two images. SPOT5 remote sensing images captured at different dates and airborne images captured with different flight attitudes were used to test the performance of the method in terms of matching accuracy, operation time and robustness to rotation. The results show the effectiveness of the approach.
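Task (2), the PCA reduction of SIFT descriptors from 128 to 20 dimensions, can be sketched as follows; the random vectors stand in for real SIFT output:

```python
# Sketch of the PCA-SIFT step: project 128-D SIFT descriptors onto a
# 20-D eigenspace learned from all detected keypoints.
import numpy as np

rng = np.random.default_rng(5)
descriptors = rng.normal(size=(500, 128))        # stand-in SIFT descriptors

mean = descriptors.mean(axis=0)
centered = descriptors - mean
# eigenvectors of the covariance matrix, sorted largest-eigenvalue first
cov = centered.T @ centered / (len(centered) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
basis = eigvecs[:, ::-1][:, :20]                 # top-20 principal axes

reduced = centered @ basis                       # 500 x 20 descriptors
print(reduced.shape)   # nearest-neighbour search now runs in 20-D, not 128-D
```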
A dimension-wise analysis method for the structural-acoustic system with interval parameters
NASA Astrophysics Data System (ADS)
Xu, Menghui; Du, Jianke; Wang, Chong; Li, Yunlong
2017-04-01
Interval structural-acoustic analysis is mainly accomplished by interval and subinterval perturbation methods. Potential limitations of these intrusive methods include overestimation or the interval translation effect for the former and prohibitive computational cost for the latter. In this paper, a dimension-wise analysis method is proposed to overcome these limitations. In this method, a sectional curve of the system response surface along each input dimension is first extracted, and its minimal and maximal points are identified from its Legendre polynomial approximation. Two input vectors, the minimal and the maximal, are then assembled dimension-wise from the minimal and maximal points of all sectional curves. Finally, the lower and upper bounds of the system response are computed by deterministic finite element analysis at these two input vectors. Two numerical examples demonstrate the effectiveness of the proposed method and show that, compared to the interval and subinterval perturbation methods, better accuracy is achieved without much compromise on efficiency, especially for nonlinear problems with large interval parameters.
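One dimension-wise step, approximating a sectional curve with Legendre polynomials and locating its extrema, might look like the following sketch; the response function, degree and sample counts are assumptions for illustration:

```python
# Sketch: fit a 1-D section of the response surface over an interval
# parameter with Legendre polynomials, then locate its min/max points.
import numpy as np
from numpy.polynomial import legendre as L

def sectional_extrema(f, lo, hi, degree=6, n_samples=64):
    """Return (argmin, argmax) of f on [lo, hi] via a Legendre fit."""
    x = np.linspace(lo, hi, n_samples)
    t = 2 * (x - lo) / (hi - lo) - 1          # map interval to [-1, 1]
    coef = L.legfit(t, f(x), degree)
    tt = np.linspace(-1, 1, 2001)             # dense evaluation grid
    vals = L.legval(tt, coef)
    back = lambda s: lo + (s + 1) * (hi - lo) / 2
    return back(tt[np.argmin(vals)]), back(tt[np.argmax(vals)])

# assumed 1-D section of a response surface along one interval parameter
f = lambda x: np.sin(3 * x) + 0.3 * x ** 2
print(sectional_extrema(f, -1.0, 2.0))
```

Repeating this per input dimension yields the two assembled input vectors at which the deterministic FE solves are run.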
NASA Astrophysics Data System (ADS)
Othman, Zetty Shafiqa; Hassan, Nur Hasyareeda; Zubairi, Saiful Irwan
2015-09-01
Deep eutectic solvents (DESs) are essentially molten salts formed by hydrogen bonding between two components combined at a ratio where the eutectic point gives a melting point lower than that of either individual component. Their notable physicochemical properties (similar to those of ionic liquids), together with their green character, low cost and easy handling, have made them of growing interest in many fields of research. The objective of this study was therefore to analyze the potential of an alcohol-based DES as an extraction medium for rotenone extraction from Derris elliptica roots. The DES was prepared by combining choline chloride (ChCl) and 1,4-butanediol at a ratio of 1/5. The structural elucidation of the DES was carried out using FTIR, 1H-NMR and 13C-NMR. Normal soaking extraction (NSE) was performed for 14 hours using seven different solvent systems: (1) acetone; (2) methanol; (3) acetonitrile; (4) DES; (5) DES + methanol; (6) DES + acetonitrile; and (7) [BMIM]OTf + acetone. The yield of rotenone, % (w/w), and its concentration (mg/ml) in the dried roots were then quantitatively determined by RP-HPLC. The results showed that the binary solvent systems [BMIM]OTf + acetone and DES + acetonitrile were the best combinations compared with the other solvent systems, giving the highest rotenone contents of 0.84 ± 0.05% (w/w) (1.09 ± 0.06 mg/ml) and 0.84 ± 0.02% (w/w) (1.03 ± 0.01 mg/ml), respectively, after 14 hours of exhaustive extraction. In conclusion, combining a DES with a selective organic solvent has proven to have a potential and efficiency similar to those of ILs in extracting bioactive constituents in phytochemical extraction processes.
Randomized Hough transform filter for echo extraction in DLR
NASA Astrophysics Data System (ADS)
Liu, Tong; Chen, Hao; Shen, Ming; Gao, Pengqi; Zhao, You
2016-11-01
The signal-to-noise ratio (SNR) of debris laser ranging (DLR) data is extremely low, and the valid returns in the DLR range residuals are distributed along a curve over a long observation time, which makes it hard to extract the signals from noise in the Observed-minus-Calculated (O-C) residuals. In order to extract the valid returns autonomously, we propose a new algorithm based on the randomized Hough transform (RHT). We first pre-process the data using a histogram method to find the zonal area that contains all possible signals, removing a large amount of noise. The data are then processed with the RHT algorithm to find the curve along which the signal points are distributed; a new parameter update strategy is introduced into the RHT to obtain the best parameters, and the values of the algorithm's parameters are analyzed. We tested our algorithm on 10 Hz repetition rate DLR data from Yunnan Observatory and 100 Hz repetition rate DLR data from the Graz SLR station. For the 10 Hz DLR data, with a relatively large and similar range gate, we can process the data in real time and extract all the signals autonomously with only a few false readings. For the 100 Hz DLR data, with longer observation times, we autonomously post-processed DLR data with 0.9%, 2.7%, 8% and 33% return rates with high reliability; the extracted points contain almost all the signals and a low percentage of noise. Additional noise was added to the 10 Hz DLR data to obtain lower return rates, and the valid returns could still be well extracted for data with 0.18% and 0.1% return rates.
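The curve-finding step can be illustrated with a RANSAC-style simplification of the randomized Hough idea: rather than maintaining an explicit parameter-space accumulator, each randomly sampled quadratic is scored directly by its inlier count. All data below are synthetic, and the inlier tolerance is an assumed value:

```python
# Simplified RHT-style extraction: sample 3 residual points, fit a
# quadratic "signal curve", and keep the best-supported candidate.
import numpy as np

rng = np.random.default_rng(6)
t = rng.uniform(0, 100, 400)
noise = rng.uniform(-50, 50, 400)                    # noise-only O-C residuals
signal_t = rng.uniform(0, 100, 60)
signal = 0.004 * signal_t**2 - 0.5 * signal_t + 3    # curved valid returns
times = np.r_[t, signal_t]
resid = np.r_[noise, signal + rng.normal(0, 0.3, 60)]

best_inliers, best_coef = 0, None
for _ in range(2000):                                # randomized sampling
    idx = rng.choice(len(times), 3, replace=False)
    coef = np.polyfit(times[idx], resid[idx], 2)     # quadratic through 3 pts
    inliers = np.abs(np.polyval(coef, times) - resid) < 1.0
    if inliers.sum() > best_inliers:
        best_inliers, best_coef = inliers.sum(), coef

print(best_inliers, "points on the extracted curve")
```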
Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors
Ramon Soria, Pablo; Bevec, Robert; Arrue, Begoña C.; Ude, Aleš; Ollero, Aníbal
2016-01-01
Giving unmanned aerial vehicles (UAVs) the possibility to manipulate objects vastly extends the range of possible applications. This applies to rotary wing UAVs in particular, where their capability of hovering enables a suitable position for in-flight manipulation. Their manipulation skills must be suitable for primarily natural, partially known environments, where UAVs mostly operate. We have developed an on-board object extraction method that calculates information necessary for autonomous grasping of objects, without the need to provide the model of the object’s shape. A local map of the work-zone is generated using depth information, where object candidates are extracted by detecting areas different to our floor model. Their image projections are then evaluated using support vector machine (SVM) classification to recognize specific objects or reject bad candidates. Our method builds a sparse cloud representation of each object and calculates the object’s centroid and the dominant axis. This information is then passed to a grasping module. Our method works under the assumption that objects are static and not clustered, have visual features and the floor shape of the work-zone area is known. We used low cost cameras for creating depth information that cause noisy point clouds, but our method has proved robust enough to process this data and return accurate results. PMID:27187413
Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors.
Ramon Soria, Pablo; Bevec, Robert; Arrue, Begoña C; Ude, Aleš; Ollero, Aníbal
2016-05-14
Giving unmanned aerial vehicles (UAVs) the possibility to manipulate objects vastly extends the range of possible applications. This applies to rotary wing UAVs in particular, where their capability of hovering enables a suitable position for in-flight manipulation. Their manipulation skills must be suitable for primarily natural, partially known environments, where UAVs mostly operate. We have developed an on-board object extraction method that calculates information necessary for autonomous grasping of objects, without the need to provide the model of the object's shape. A local map of the work-zone is generated using depth information, where object candidates are extracted by detecting areas different to our floor model. Their image projections are then evaluated using support vector machine (SVM) classification to recognize specific objects or reject bad candidates. Our method builds a sparse cloud representation of each object and calculates the object's centroid and the dominant axis. This information is then passed to a grasping module. Our method works under the assumption that objects are static and not clustered, have visual features and the floor shape of the work-zone area is known. We used low cost cameras for creating depth information that cause noisy point clouds, but our method has proved robust enough to process this data and return accurate results.
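The grasp-information step described in the two records above, computing the centroid and dominant axis of a sparse object cloud, reduces to an eigendecomposition of the cloud's covariance matrix. A minimal sketch with a synthetic cloud standing in for the stereo-derived points:

```python
# Sketch: centroid and dominant axis of a sparse object point cloud,
# with the axis taken as the principal eigenvector of the covariance.
import numpy as np

rng = np.random.default_rng(7)
# synthetic elongated object cloud (stand-in for stereo-derived points)
cloud = rng.normal(size=(300, 3)) * np.array([0.30, 0.05, 0.05]) + [1, 2, 0.5]

centroid = cloud.mean(axis=0)
cov = np.cov((cloud - centroid).T)
eigvals, eigvecs = np.linalg.eigh(cov)
dominant_axis = eigvecs[:, np.argmax(eigvals)]   # direction of max variance

print("centroid:", centroid)
print("dominant axis:", dominant_axis)           # handed to the grasp planner
```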
Unda-Calvo, Jessica; Martínez-Santos, Miren; Ruiz-Romera, Estilita
2017-04-01
In the present study, the physiologically based extraction test PBET (gastric and intestinal phases) and two chemical extraction methods, the toxicity characteristic leaching procedure (TCLP) and the sequential extraction procedure BCR 701 (Community Bureau of Reference of the European Commission), were used to estimate and evaluate the bioaccessibility of metals (Fe, Mn, Zn, Cu, Ni, Cr and Pb) in sediments from the Deba River urban catchment. The statistical analysis of the data and the comparison between physiological and chemical methods highlight the relevance of simulating the gastrointestinal tract environment, since metal bioaccessibility seems to depend on water and sediment properties such as pH, redox potential and organic matter content, and, primarily, on the form in which the metals are present in the sediment. Indeed, metals distributed among all fractions (Mn, Ni, Zn) were the most bioaccessible, followed by those predominantly bound to the oxidizable fraction (Cu, Cr and Pb), especially near major urban areas. Finally, a toxicological risk assessment was performed by determining the hazard quotient (HQ), which demonstrated that, although sediments from mid- and downstream sampling points presented the highest metal bioaccessibilities, these were not high enough to cause adverse effects on human health, Cr being the most potentially toxic element. Copyright © 2017 Elsevier Inc. All rights reserved.